The story of a Google engineer (and Christian mystic) who saw signs of personhood in Google’s latest artificially intelligent chatbot software, and who was later fired, has reignited public debate over whether any of today’s AI systems are sentient. The consensus among experts is that no, they are not. We reached the same conclusion via a different path, using a little mathematical formalism to burn off the fog of confusion. A chatbot is a function. Functions are not sentient. But functions can be powerful. They can fool people. The important question is who will control them and whether they are used transparently.
Math is powerful because it encourages two extremes—abstraction and specificity. If you are tempted by the idea that functions can be sentient, start with some specifics. Among these four, which function is the most sentient?
- f(x) = 5x + 7
- f(x) = sin(x)
- f(x) = log(x²)
- f(x) = 3cos(x⁴ – 7x²) + 4
Counting symbols is one way to measure complexity. It takes nine symbols to specify the first function versus 19 for the fourth (and the three-letter symbol “cos” is itself shorthand for a more complex function, the cosine). So some functions are more complex than others. But more sentient? Obviously not.
In case your high school math is rusty, these are arbitrary functions we made up that, when graphed, produce various lines and curves.
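To make the point concrete, here is a minimal sketch of the four functions in Python (the function names f1 through f4 are our own labels). Each one does nothing more than turn a number into another number:

```python
import math

# The four functions from the list above, written as ordinary code.
def f1(x): return 5 * x + 7
def f2(x): return math.sin(x)
def f3(x): return math.log(x ** 2)  # undefined at x = 0
def f4(x): return 3 * math.cos(x ** 4 - 7 * x ** 2) + 4

for f in (f1, f2, f3, f4):
    print(f.__name__, f(1.5))  # each just maps a number to a number
```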
Now, switch from specifics to the overarching abstraction. The abstract notion of a mathematical function is so powerful precisely because it lets us use our understanding of simple instances to reason about relatives of arbitrary complexity. A function is a rule for turning one number (or a list of numbers) into another number (or list of numbers). By this definition, all the AI systems in use today, including the LaMDA chatbot that triggered the recent controversy, are functions. They are vastly more complex than the four functions listed above: it would take hundreds of billions of symbols to write down the formula for the function that is LaMDA. So LaMDA is a very complex function, but a function nevertheless. And no function is sentient.
Abstractions help us reason by letting us look past irrelevant detail. One such detail is that the software behind a chatbot includes a rule for converting a message spelled out in letters into a number that the software uses as an input x, and another rule for converting the output f(x) back into a message in letters. Every computer you use, including the one you carry around in your pocket, routinely performs this kind of translation. From the abstract perspective, every input or output is a number. Every program is a function.
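As a toy illustration of that translation (our own sketch, not the encoding any real chatbot uses), a message of letters becomes a list of numbers and back again with no information lost:

```python
# Letters -> numbers -> letters, a round trip through the world of functions.
message = "my dog likes to"
numbers = [ord(ch) for ch in message]        # each letter becomes a code point
restored = "".join(chr(n) for n in numbers)  # and the numbers become letters again
assert restored == message
print(numbers)
```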
Placing LaMDA alongside its fellow functions takes the wind out of the sails of those who, dazed and confused by complexity, reach for the familiar territory of moral judgments. In high school we learned how to tell if a function is increasing or decreasing. Should we have also learned how to tell if a function is sentient? Should our teachers have scolded us if we plotted a sentient function on a handheld calculator because of the suffering we might accidentally inflict upon this innocent lifeform?
Instead of talking in circles about how to use the word “sentience” (which no one seems to be able to define), we could be asking how humans should learn to exploit the amazing power of complex functions. For centuries, mathematicians and engineers have been codifying knowledge in new functions. On one line of investigation, they have discovered “pseudorandom” functions—functions that yield a perfectly predictable output for anyone who knows the underlying formula, but that simulate with incredible accuracy the random behavior of a coin flip. If a real person were flipping a real coin behind one curtain and a computer ran a modern pseudorandom function behind another, no one would be able to tell them apart, even after collecting millions, billions, or billions of billions of reports of heads versus tails.
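Here is a minimal sketch of one classic pseudorandom recipe, a linear congruential generator. The generators used in modern cryptography are far stronger, and this simple one would not fool a statistician for long, but the principle is identical: the output looks haphazard, yet it is perfectly predictable to anyone who knows the formula and the starting seed.

```python
def lcg(seed, a=1664525, c=1013904223, m=2**32):
    """Linear congruential generator: a simple pseudorandom function.
    Anyone who knows a, c, m, and the seed can reproduce every output."""
    state = seed
    while True:
        state = (a * state + c) % m
        yield state

coin = lcg(seed=42)
# Use the high bit of each 32-bit state; the low bits of an LCG are poor quality.
flips = ["heads" if (next(coin) >> 31) & 1 else "tails" for _ in range(10)]
print(flips)  # looks like coin flips, but is completely determined by the seed
```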
During its “training” phase, the software for a chatbot examines vast quantities of text. Imagine that it finds that half the time the phrase “my dog likes to” is followed by “play fetch” and otherwise by “chew furniture.” When the trained chatbot is put to work chatting with someone, its inputs may signal that it is time to say something about a dog. Software running on a computer can’t literally flip a coin to decide between “play fetch” and “chew furniture.” Instead, it uses a pseudorandom function.
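A toy sketch of that final step (our invention, not LaMDA’s actual code): given the frequencies learned in training, the program makes a seeded, hence pseudorandom, choice rather than flipping a real coin.

```python
import random

# Hypothetical statistics learned in training: what followed "my dog likes to".
continuations = ["play fetch", "chew furniture"]
weights = [0.5, 0.5]

rng = random.Random(2022)  # seeded, so every simulated "coin flip" is reproducible

for _ in range(5):
    print("my dog likes to", rng.choices(continuations, weights=weights)[0])
```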
Traditionally, any mathematician or engineer who discovered a better formula for simulating randomness published it so everyone could critique it and copy it. As a result, simulated randomness is now used in many applications, including keeping internet transactions secure. Everyone working on these practical applications was free to use the best among all known functions. This is how science works. Without any prospect of becoming multibillionaires, people discovered new functions and shared them. Humans made progress. It was difficult to use secret knowledge to amass vast social, economic, or political power.
It was a good system. It worked for centuries. But it is under siege. In artificial intelligence, progress on the frontier of knowledge is now dominated by a few private companies. They have access to enough data and enough computing power to discover and exploit remarkably powerful functions that no one watching from the outside can understand.
The output produced by LaMDA is impressive. It shows that artificial intelligence could give people amazing new ways to access all human knowledge. We expect that Google will find ways to tweak LaMDA so that it opaquely nudges the decisions of the billions of people who use it, and that it will collect billions of dollars from firms that want those decisions to benefit them. But this is not the only path forward. The same tools could be developed by some combination of academics and the folks who contribute to Wikipedia: people who really do want to make all human knowledge available to everyone; people who do not try to trick you into clicking on a link that returns an answer you did not want.
Don’t be fooled by functions, no matter how complex they may be. The engineers at Google are not modern-day Dr. Frankensteins. They are not bringing wires and transistors to life. They are smart people doing the kind of work that scientists and mathematicians have always done, work that frequently amounts to finding an explicit formula for some useful new function.
Instead of debating the sentience of chatbots, we should consider the long-term consequences of a shift away from the system of science, which produces this knowledge in a way that benefits everyone, toward a system dominated by the technology giants, in which secret knowledge conveys power and profit to the few.
Source: barrons.com