The reports of humanity’s imminent demise at the hands of sentient killer robots have been greatly exaggerated.
Based on the current state of artificial intelligence – that is, it’s really good at sifting through data and it can usually tell the difference between a dog and a cat – we don’t have to worry about “conscious” AI anytime soon.
I put “conscious” in quotes because, as every article you’re likely to read on the subject will point out, we don’t really understand consciousness.
There’s a contingent of experts who believe consciousness manifests only in certain organisms, and there’s an emerging group who feel that everything – and they mean everything – is conscious.
The idea that consciousness only exists in certain entities is a fun one: it means we’re the cosmos’ special little people. And that makes us very, very important.
But just for fun, let’s also take a gander at the idea that doesn’t make us the center of the known universe: panpsychism.
This snippet from an article by Caroline Delbert at Popular Mechanics does a fantastic job of explaining what universal consciousness could be:
The resulting theory is called integrated information theory (IIT) … In IIT, consciousness is everywhere, but it accumulates in places where it’s needed to help glue together different related systems.
The revolutionary thing in IIT … it’s that consciousness isn’t biological at all, but rather is simply this value, phi, that can be calculated if you know a lot about the complexity of what you’re studying.
If your brain has almost countless interrelated systems, then the entire universe must have virtually infinite ones. And if that’s where consciousness accumulates, then the universe must have a lot of phi.
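To make the “calculated” part a little more concrete, here’s a rough toy sketch of a phi-like score for a tiny three-node network. This is not the formal IIT calculus – just a simplified “whole minus parts” information measure, with a made-up update rule and node count chosen purely for illustration:

```python
import itertools
from math import log2

# Toy illustration (NOT the formal IIT phi): a "phi-like" score for a tiny
# 3-node binary network. We score how much information the whole system's
# update carries about its past, beyond what the halves of its weakest
# bipartition carry on their own. The update rule below is arbitrary.

NODES = (0, 1, 2)

def update(state):
    """Deterministic toy update rule: each node depends on the others."""
    a, b, c = state
    return (b and c, a or c, a ^ b)  # AND, OR, XOR -- made up for illustration

def mutual_information(states, f, subset_in, subset_out):
    """I(past restricted to subset_in ; next state restricted to subset_out),
    assuming a uniform distribution over all past states."""
    n = len(states)
    joint, p_in, p_out = {}, {}, {}
    for s in states:
        x = tuple(s[i] for i in subset_in)
        y = tuple(f(s)[i] for i in subset_out)
        joint[(x, y)] = joint.get((x, y), 0) + 1 / n
        p_in[x] = p_in.get(x, 0) + 1 / n
        p_out[y] = p_out.get(y, 0) + 1 / n
    return sum(p * log2(p / (p_in[x] * p_out[y])) for (x, y), p in joint.items())

def phi_like(f, nodes):
    """Whole-system information minus the best any bipartition's parts can do."""
    states = list(itertools.product((0, 1), repeat=len(nodes)))
    whole = mutual_information(states, f, nodes, nodes)
    best_parts = None
    for r in range(1, len(nodes)):
        for part in itertools.combinations(range(len(nodes)), r):
            rest = tuple(i for i in range(len(nodes)) if i not in part)
            parts = (mutual_information(states, f, part, part)
                     + mutual_information(states, f, rest, rest))
            best_parts = parts if best_parts is None else max(best_parts, parts)
    return whole - best_parts  # positive: the whole carries more than its parts

print(f"toy phi-like score: {phi_like(update, NODES):.3f}")
```

The real calculation blows up combinatorially as systems grow, which is part of why phi has only ever been computed exactly for very small systems.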
I don’t know about phi, but if consciousness is derived from the universe itself, that’s probably bad news for AI – at least under its current paradigm.
Simply put: non-algorithmic intelligence would be the baseline norm in a universe where consciousness manifested as a result of systemic perturbation. That’s another way of saying that the only reason we have free will is that you can’t brute-force consciousness using algorithms.
This is because the existence of algorithmic consciousness would indicate that you could determine exactly what any given consciousness would do in perpetuity, if you could simply recreate the algorithms it runs on. And that would mean there’s no such thing as free will: we’d basically all just be pre-determined intelligence systems executing our code.
But this doesn’t really fit in with our experience of reality or the theory of universal consciousness. We appear to be quantum creatures. Our brains can surface thoughts based on a theoretically near-infinite number of parameters. And the amount of compute it would take in a binary system to imitate this could be unfathomable.
Have you ever tried to remember the name of a song or TV character for weeks and then had that memory triggered by a taste or smell? Ever made up a silly rhyme to help you memorize something for a test? That’s the vast, interconnected quantum neural network operating inside your skull, and it suggests we’re probably operating as non-algorithmic consciousnesses.
If intelligence and consciousness are manifestations of quantum mechanics, it could very well be impossible to recreate them in a binary system.
So, the bad news is that you’re unlikely to have a truly alive robot pal anytime soon. We’ve only just begun to dabble in quantum computing, and if you believe the universal consciousness theory, we’re probably a very long way away from general quantum AI and cracking the consciousness code.
The good news is that this would also mean there’s almost no chance an AI will become sentient and decide to create killer robots to murder us all so the machines can rule the Earth.