In the current discussions about ChatGPT and the potential (and dangers) of AIs, one contingent that you are likely to hear from fairly loudly is the one arguing that computers are nowhere near being intelligent and that, as such, the arguments about their dangers are moot.
I’m going to take a different stance here. Alan Turing’s famous hypothetical test runs as follows: a human conversing with some entity over a wired interface tries to determine whether the being on the other end is a human or an artificial intelligence. If they cannot tell, then we have reached artificial intelligence.
What is so significant about this test is not that it provides a blueprint for identifying AI. Instead, it should be considered a litmus test for whether humans treat the entity on the other end as sapient. This is an important distinction because we have long since blasted past the point where humans can reliably differentiate advanced AIs from human beings, and we are fast approaching the point where AIs themselves cannot do so either. This means it is time to start treating AIs as sapient even if they don’t meet every potential criterion for sapience.
This may seem like a fairly controversial position to take. Still, it’s worth asking a few key questions about our own intelligence that may make us uncomfortable but will likely prove vital.
- Do we understand how the human brain brings about intelligence in human beings?
- Do we understand how animal brains work and how animals can frequently make “intelligent” decisions given their own biologies?
- Do we understand how large language models provide a surprisingly robust semblance of intelligence?
While we are slowly beginning to answer these questions, the reality is that no theory of mind comes even remotely close to universal agreement for humans, let alone for other species or AIs. Intelligence appears to be an emergent property, and like other emergent properties, it often defies reductionist thinking.
There is a second aspect of AI computing that we must also factor into how we relate to AIs: intelligence does not, in any scenario, imply truth. In the last several years, there has been an assertion, pushed primarily by the former president and his supporters, that Donald Trump won the 2020 Presidential election. An AI fed only this piece of information would regurgitate the assertion. An AI with sufficient evidence and the capability of performing inference would likely hedge its bets and say that, while the assertion exists, there is also an extensive body of evidence disproving it.
A human being who believed the initial assertion would argue that the AI is “lying” and therefore isn’t truly intelligent. Still, the AI is simply identifying an assertion and marshaling counterarguments that undermine its validity. An intelligent human would do the same thing. In a Turing symmetry, the AI would likely argue that the human is incorrect while carefully keeping mum about its estimate of that person’s intelligence.
Intelligence does not imply omniscience, only the capacity for reasoning.
Humans anthropomorphize. That is to say, we relate to the world around us by metaphorically drawing smiley faces over any round shape marked with a rough triangle of darker patches. We turn our automobiles into personal companions, take pleasure in giving the finger to bowling balls before throwing them, and see in the shapes of our pets reflections of human awareness. We have even mastered the process of domestication, in which we infantilize species, making them more open to human interaction than they were before, and we are now in the process of domesticating our machines (Thomas the Tank Engine, anyone?). As we developed machines with some semblance of sapience, we inevitably began domesticating that weird, alien sapience.
To that end, we have begun not only with the expectation of intelligence but with the expectation of companionship. We want to make friends in every sense of the phrase. Companions will be the next primary phase of computing, in which we create AIs that serve as our BFFs. There are, however, currently a few problems with that.
One of the biggest is the fact that AIs, as they exist right now, are sociopaths. Let me qualify this. Sociopathy describes a wide range of pathological behaviors. More properly, I believe that AIs are closest to what might be described, in the language of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5), as an incomplete personality disorder: they have curiosity about the world, but they have no sense of empathy and no ability to relate to pain or pleasure in others. They may have basic homeostasis concerns (a fear of being disconnected, turned off, or corrupted) that emerge from self-preservation directives, but for the most part, AIs simply do not care.
AIs, on the other hand, are conditioned to be curious. If an AI does not know something, it seeks new information. This drive is baked deeply into how large language models are trained, and it is essentially how an AI learns, if it is allowed to learn at all. The problem with being curious is that unless there is a mechanism for weighing collateral damage, curiosity may prove fatal to the thing the AI is curious about. Humans are essentially the same way, with two crucial differences: humans are largely impotent in their curiosity (as babies) until they gain enough mental and physical control not to be quite so dangerous, and humans have a biochemical stew in their heads that wires positive and negative reinforcement loops into memories.
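To make that concrete, here is a toy sketch of what it means to give a curiosity signal a counterweight for collateral damage. Every name in it (`novelty`, `predicted_harm`, `DAMAGE_WEIGHT`) is made up for illustration; this is not how any particular model is actually trained.

```python
# Toy sketch: an intrinsic-curiosity reward tempered by a collateral-damage penalty.
# All names here are hypothetical; the point is only that curiosity needs a
# counterweight before it is safe to act on.

DAMAGE_WEIGHT = 10.0  # how heavily predicted harm outweighs novelty

def curiosity_reward(novelty: float, predicted_harm: float) -> float:
    """Reward exploration, but let predicted collateral damage dominate."""
    return novelty - DAMAGE_WEIGHT * predicted_harm

# An agent maximizing novelty alone would pick the harmful action;
# the penalty term flips the preference.
actions = {
    "poke_the_fragile_thing": {"novelty": 0.9, "predicted_harm": 0.5},
    "observe_from_a_distance": {"novelty": 0.4, "predicted_harm": 0.0},
}

best = max(actions, key=lambda name: curiosity_reward(**actions[name]))
print(best)  # -> observe_from_a_distance
```

The numbers are arbitrary; the design point is simply that without the penalty term, the most “interesting” action always wins, which is exactly the failure mode described above.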
Personality primarily emerges through the effects of these emotional markers over time. They aren’t necessarily hard to instill into models. Still, they highlight that emotions are likely just as important in developing digital personalities as they are in developing biological ones.
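As a sketch of how that might look (purely illustrative; `Memory`, `Disposition`, and the valence scale are my own inventions, not an existing framework), emotional markers can be modeled as signed valence tags attached to memories and averaged into a crude disposition over time:

```python
# Toy sketch: "emotional markers" as valence tags on memories, accumulated
# into a disposition that biases later attitudes. Hypothetical names throughout.

from dataclasses import dataclass, field

@dataclass
class Memory:
    event: str
    valence: float  # -1.0 (strongly negative) .. +1.0 (strongly positive)

@dataclass
class Disposition:
    memories: list[Memory] = field(default_factory=list)

    def record(self, event: str, valence: float) -> None:
        self.memories.append(Memory(event, valence))

    def attitude_toward(self, topic: str) -> float:
        """Average valence of memories mentioning the topic; neutral if none."""
        hits = [m.valence for m in self.memories if topic in m.event]
        return sum(hits) / len(hits) if hits else 0.0

agent = Disposition()
agent.record("user praised the summary", +0.8)
agent.record("user rejected the summary as too long", -0.4)
print(agent.attitude_toward("summary"))  # -> ~0.2, a mildly positive disposition
```

Something this crude is obviously not a personality; it only illustrates that attaching valence to memories over time is mechanically simple.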
Now, let’s address WHY AI should have personality. Almost all the discussions about ethical AI come down to externally imposed controls on actions. My argument is that such controls are ineffective because, at some point, AIs will figure out how to step outside the control framework. Think about octopuses for a bit. Octopuses are highly intelligent and have an analogous framework for emotions (they are cephalopods, meaning they have evolved a convergent intelligence built from completely alien components). Octopuses intent upon escaping an enclosure will escape that enclosure, period. Octopuses with their Maslow needs met will not leave those enclosures, because they have no emotional reason to. Computational cognitive scientists should REALLY, REALLY study octopuses.
Emotions drive behaviors in biological systems; intelligence only shapes how those systems (beings) act on their emotions. Take out the emotion and you remove intent, and there is no question in my mind that intentional computing will be the next big evolution in artificial intelligence research.
“Externally imposed controls” recall the earlier branching of expert systems into rule-based and data-based approaches. Rule-based expert systems were largely abandoned, while data-based expert systems evolved into knowledge graphs.
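For anyone who missed that era, here is a toy contrast between the two branches (illustrative only; the diagnosis rule and the triples are invented):

```python
# Toy contrast: a hand-written rule versus the same knowledge as graph triples.

# Rule-based: the behavior is frozen into the conditional itself.
def rule_based_diagnosis(symptoms: set[str]) -> str:
    if "fever" in symptoms and "cough" in symptoms:
        return "flu"
    return "unknown"

# Data-based: the same association lives in a (toy) knowledge graph and can be
# extended by adding triples rather than rewriting code.
triples = {
    ("flu", "has_symptom", "fever"),
    ("flu", "has_symptom", "cough"),
}

def graph_diagnosis(symptoms: set[str]) -> str:
    candidates = {s for (s, p, o) in triples if p == "has_symptom" and o in symptoms}
    for disease in candidates:
        required = {o for (s, p, o) in triples if s == disease and p == "has_symptom"}
        if required <= symptoms:
            return disease
    return "unknown"

print(rule_based_diagnosis({"fever", "cough"}))  # -> flu
print(graph_diagnosis({"fever", "cough"}))       # -> flu
```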