The CTO of OpenAI said that government regulatory agencies should have a "significant role" in controlling AI.


Mira Murati, chief technology officer of OpenAI, believes that government oversight agencies should have "a significant role" in developing safety standards for advanced artificial intelligence models such as ChatGPT.


She also believes that the proposed six-month halt in development is not the right way to build safer systems, and that the industry is still far from achieving AI with human-level ability, knowledge, and understanding. Her comments came in an interview with the Associated Press published on April 24.


When asked about the safety concerns OpenAI addressed before the launch of GPT-4, Murati explained that the company takes a slow, deliberate approach to training, not only to curb unwanted behavior but also to watch for downstream effects of those interventions:


"You have to be very careful because you might create other imbalances, and you have to constantly monitor [...] and be careful every time you intervene."


After the launch of GPT-4, experts uneasy about the future of AI called for intervention, from new government regulations to a worldwide six-month pause on AI development.


Murati supports increased government involvement, stating that "these systems should be controlled" and that "at OpenAI, we are talking to governments, oversight agencies, and other organizations that are developing regulations to ensure the safe use of AI."

