• ChatGPT’s o3 model scored a 136 on the Mensa IQ test and a 116 on a custom offline test, outperforming most humans
  • A new survey found 25% of Gen Z believe AI is already conscious, and over half think it will be soon
  • Both the jump in AI IQ scores and the belief in AI consciousness have emerged extremely quickly

OpenAI’s new ChatGPT model, dubbed o3, just scored an IQ of 136 on the Norway Mensa test – higher than 98% of humanity, not bad for a glorified autocomplete. In less than a year, AI models have become enormously more complex, flexible, and, in some ways, intelligent.

The jump is so steep that it may be causing some to think that AI has become Skynet. According to a new EduBirdie survey, 25% of Gen Z now believe AI is already self-aware, and more than half think it’s just a matter of time before their chatbot becomes sentient and possibly demands voting rights.

There’s some context to consider when it comes to the IQ test. The Norway Mensa test is public, which means it’s technically possible that the model encountered the questions or answers in its training data. So, researchers at MaximumTruth.org created a new IQ test that is entirely offline and out of reach of training data.

On that test, which was designed to be equivalent in difficulty to the Mensa version, the o3 model scored a 116. That’s still high.

It puts o3 in the top 15% of human intelligence, hovering somewhere between “sharp grad student” and “annoyingly clever trivia night regular.” No feelings. No consciousness. But logic? It’s got that in spades.

Compare that to last year, when no AI tested above 90 on the same scale. In May of last year, the best AI struggled with rotating triangles. Now, o3 is parked comfortably to the right of the bell curve among the brightest of humans.

And that curve is crowded now. Claude has inched up. Gemini’s scored in the 90s. Even GPT-4o, the baseline default model for ChatGPT, is only a few IQ points below o3.

But it’s not just that these AIs are getting smarter; it’s how fast they’re improving. They advance like software does, not like humans do. And for a generation raised on software, that’s an unsettling kind of growth.

I do not think consciousness means what you think it means

For those raised in a world navigated by Google, with a Siri in their pocket and an Alexa on the shelf, AI means something different from its strictest definition.

If you came of age during a pandemic when most conversations were mediated through screens, an AI companion probably doesn’t feel very different from a Zoom class. So it’s maybe not a shock that, according to EduBirdie, nearly 70% of Gen Zers say “please” and “thank you” when talking to AI.

Two-thirds of them use AI regularly for work communication, and 40% use it to write emails. A quarter use it to finesse awkward Slack replies, with nearly 20% sharing sensitive workplace information, such as contracts and colleagues’ personal details.

Many of those surveyed rely on AI for various social situations, ranging from asking for days off to simply saying no. One in eight already talk to AI about workplace drama, and one in six have used AI as a therapist.

If you trust AI that much, or find it engaging enough to treat as a friend (26%) or even a romantic partner (6%), then the idea that the AI is conscious seems less extreme. The more time you spend treating something like a person, the more it starts to feel like one. It answers questions, remembers things, and even mimics empathy. And now that it’s getting demonstrably smarter, philosophical questions naturally follow.

But intelligence is not the same thing as consciousness. IQ scores don’t mean self-awareness. You can score a perfect 160 on a logic test and still be a toaster, if your circuits are wired that way. AI can only think in the sense that it can solve problems using programmed reasoning. You might say that I’m no different, just with meat, not circuits. But that would hurt my feelings, something you don’t have to worry about with any current AI product.

Maybe that will change someday, even someday soon. I doubt it, but I’m open to being proven wrong. I get the willingness to suspend disbelief with AI. It might be easier to believe that your AI assistant really understands you when you’re pouring your heart out at 3 a.m. and getting supportive, helpful responses rather than dwelling on its origin as a predictive language model trained on the internet’s collective oversharing.

Maybe we’re on the brink of genuine self-aware artificial intelligence, but maybe we’re just anthropomorphizing really good calculators. Either way, don’t tell secrets to an AI that you don’t want used to train a more advanced model.
