Sam Altman has squashed rumors that OpenAI is already working on ChatGPT-5, just a month after the company released GPT-4. There is currently no GPT-5 in training, Altman said while speaking virtually at an event at the Massachusetts Institute of Technology.
Interviewer Lex Fridman, an AI researcher at MIT, asked Altman for his thoughts on the recently released and widely circulated open letter demanding an AI pause. In response, the OpenAI founder shared some of his critiques. “An earlier version of the letter claimed OpenAI is training GPT-5 right now. We are not, and won’t for some time,” Altman noted. “So in that sense, [the letter] was sort of silly.”
But, GPT-5 or not, Altman’s statement isn’t likely to be particularly reassuring to AI’s critics, as first pointed out in a report from The Verge. The tech founder followed up his “no GPT-5” announcement by immediately clarifying that upgrades and updates are in the works for GPT-4. There are ways to increase a technology’s capacity beyond releasing an official, higher-numbered version of it.
“We are doing other things on top of GPT-4 that I think have all sorts of safety issues that are important to address and were totally left out of the letter,” Altman said—in essence, admitting that OpenAI is shipping product tweaks that may not be totally optimized for the good of humanity or user safety. For instance, in late March, the company released a plug-in for GPT-4 that lets its large language model browse the internet, which could lead to even more data privacy and user manipulation concerns.
Altman did make a meager attempt at assuaging AI fears. He said that OpenAI spent over six months training GPT-4 before its public release. He also noted that “taking the time to really study the safety of the model…that’s important.”
“As capabilities get more and more serious, the safety bar has got to increase,” Altman added. “I think moving with caution and an increasing rigor for safety issues is really important. The letter, I don’t think is the optimal way to address it.”
Yet you should probably take the OpenAI CEO’s claims of prioritizing safety with some skepticism. Even in Thursday’s MIT interview, not everything the controversial entrepreneur said rang true.
Asked if OpenAI will continue to be transparent going forward, Altman said “we certainly plan to continue doing that.” Except the question itself is a misleading softball. OpenAI, once a truly open-source, non-profit organization, has become an increasingly closed-off, for-profit corporation. GPT-4, especially, is a black box. The company has not released any information on the training data its most recent chatbot was fine-tuned on. Nor has it shared any information on GPT-4’s architecture, construction, or other inner workings.
“Given both the competitive landscape and the safety implications of large-scale models like GPT-4, this report contains no further details about the architecture (including model size), hardware, training compute, dataset construction, training method, or similar,” OpenAI wrote in the technical report that the company published alongside the GPT-4 release.
Altman was also sure to give himself an out: the things he says, well, he might not be positive they’re correct:
I think a lot of other companies don’t want to say something until they’re sure it’s right. But I think this technology is going to so impact all of us, that we believe that engaging everyone in the discussion, putting these systems out into the world—deeply imperfect though they are in their current state—so that people get to experience them, think about them, understand the upsides and the downsides. It’s worth the trade-off, even though we do tend to embarrass ourselves in public and have to change our minds with new data frequently.
You might believe AI chatbots are the beginning of the end of the human race. Or you may think that all of this so-called “artificial intelligence” stuff is overhyped. Regardless of where you stand on the call for a six-month AI moratorium, though, Altman’s answer to the open letter is, ultimately, something of a non-answer.