The executive recently sounded off about government regulators, the “pause” letter, and how close the company is to reaching artificial general intelligence.
Mira Murati, the chief technology officer at OpenAI, believes government regulators should be “very involved” in developing safety standards for the deployment of advanced artificial intelligence models such as ChatGPT.
She also believes a proposed six-month pause on development isn’t the right way to build safer systems and that the industry isn’t currently close to achieving artificial general intelligence (AGI) — a hypothetical intellectual threshold where an artificial agent is capable of performing any task requiring intelligence, including human-level cognition. Her comments stem from an interview with the Associated Press published on April 24.
When asked about the safety precautions OpenAI took before launching GPT-4, Murati explained that the company trained the model slowly, both to curb its tendency toward unwanted behavior and to identify any downstream issues those interventions might introduce:
“You have to be very careful because you might create some other imbalance. You have to constantly audit […] So then you have to adjust it again and be very careful about every time you make an intervention, seeing what else is being disrupted.”
In the wake of GPT-4’s launch, experts fearing the unknown-unknowns surrounding the future of AI have called for interventions ranging from increased government regulation to a six-month pause on global AI development.
The latter suggestion garnered attention and support from luminaries in the field of AI such as Elon Musk, Gary Marcus and Eliezer Yudkowsky, while many notable figures, including Bill Gates, Yann LeCun and Andrew Ng, have come out in opposition.
a big deal: @elonmusk, Y. Bengio, S. Russell, ??@tegmark?, V. Kraknova, P. Maes, ?@Grady_Booch, ?@AndrewYang?, ?@tristanharris? & over 1,000 others, including me, have called for a temporary pause on training systems exceeding GPT-4 https://t.co/PJ5YFu0xm9
— Gary Marcus (@GaryMarcus) March 29, 2023
For her part, Murati expressed support for the idea of increased government involvement, stating, “these systems should be regulated.” She continued: “At OpenAI, we’re constantly talking with governments and regulators and other organizations that are developing these systems to, at least at the company level, agree on some level of standards.”
But, on the subject of a developmental pause, Murati’s tone was more critical:
“Some of the statements in the letter were just plain untrue about development of GPT-4 or GPT-5. We’re not training GPT-5. We don’t have any plans to do so in the next six months. And we did not rush out GPT-4. We took six months, in fact, to just focus entirely on the safe development and deployment of GPT-4.”
In response to whether there was currently “a path between products like GPT-4 and AGI,” Murati told the Associated Press that “We’re far from the point of having a safe, reliable, aligned AGI system.”
This may be sour news for those who believe GPT-4 is bordering on AGI. The company's current focus on safety, and the fact that, per Murati, it isn't even training GPT-5 yet, strongly indicate that the coveted general-intelligence breakthrough remains out of reach for the time being.
The company's increased focus on regulation comes amid a broader trend toward government scrutiny. OpenAI recently had its GPT products banned in Italy and faces an April 30 deadline to comply with local and EU regulations there — a deadline experts say it will be hard-pressed to meet.
Such bans could have a serious impact on the European cryptocurrency scene, as momentum has been building toward the adoption of advanced crypto trading bots built on the GPT API. If OpenAI and companies building similar products cannot legally operate in Europe, traders using the technology could be forced to move elsewhere.