The Endless Thread team is presenting stories about the rise of bots over the coming weeks. Bots can be found on every social network, in chat rooms, in mobile apps, and beyond. In what sly, unexpected ways are these pieces of software, designed to mimic human behaviour and language, influencing our daily lives?
Our co-hosts begin by exploring the history of ELIZA, the first chatbot therapist. Why did the machine's creator have conflicted feelings about the advancement of AI? We also consider a broader question: can AI help us overcome mental health problems?
Support the show:
We love making Endless Thread, and we hope to keep making it for a very long time. If you want that too, we would be incredibly grateful for any contribution you can make to our work. Everyone who donates on a recurring basis gets access to exclusive bonus material. Click here to reach the donation page. Thank you.
Full transcript
This content was originally meant for audio. The transcript has been edited from our original script for clarity. Be aware that some elements, such as music, sound effects, and tone, are harder to convey in writing.
Ben Brock Johnson: I'm back in school for the autumn semester. Or at least for today. I'm trying to find my way to a professor's office hours.
Is this the brand-new computer science building, do you know?
Friendly campus pedestrian: Yes, this is the building. The one you're looking at right now.
Amory Sivertson: I have to admit, I wouldn't have pegged Ben as an office-hours student.
Ben: Fair, but I enjoy hanging out with everyone. Even academics. And especially professors who work in immaculate new buildings.
Ben: Beautiful building.
Friendly campus pedestrian: Do you live nearby?
Ben: No.
Amory: Well, that makes sense. I didn't much enjoy school.
Ben: Did you tend to drool in school?
Amory: Constantly. So much saliva. So what were you talking about with Soroush Vosoughi, an assistant professor at Dartmouth College?
Ben: Let's start with how amazing the atrium of his building is.
Ben: What does it look like to you?
Soroush Vosoughi: Huh. Like a beehive. (Laughs.)
Ben: I suppose it does. Yeah.
Soroush: Right.
Ben: A lot of little cells making up a whole, or something.
Soroush: That's right. Yeah. A whole that is greater than the sum of its parts. Yeah.
Amory: That's a lovely observation, but judging by his answer, I'm guessing Soroush is not an architecture professor.
Ben: Right. Though he does, in a sense, deal with certain kinds of architecture. The methodical assembly of things.
Soroush: I specialise in machine learning, natural language processing, and social media data analysis.
Ben: I came here to ask Soroush about a project he recently worked on with a few graduate students, one that sits at the cutting edge of academic research into what computers can do with social media data: a system built to foresee the onset of serious mental health problems.
He leads me to the building's top floor via a floating staircase. Past floors of software and hardware labs. Past a luxurious-looking remote-controlled sailboat.
Soroush: It's actually an autonomous sailboat.
Ben: Is it really?
Soroush: Yeah. It teaches itself how to sail.
Ben: The air conditioning is running when we enter Soroush's office. Never good for an audio interview.
Ben: Is there a way to turn that air off?
Soroush: I dunno.
Ben: Software controls the air. Centrally. So, ironically, this computer science professor can't fix it right now. He says the computer science department's listserv is currently hosting an irate email thread about this exact problem.
Like any good student, I start by angling for extra credit with the professor. Compliments and questions about the books on his shelves. Seminal books on algorithms and machine learning, and Isaac Asimov's Foundation novels, which any sci-fi geek knows. He also has something 3-D printed up there.
Soroush: This is a model of Hodor holding the door. It actually works as a doorstop.
Ben: A good Game of Thrones joke.
Soroush: Yes, exactly.
Ben: There are homemade radars built from coffee cans, brain teasers, even Stirling engines, which turn heat into motion by exploiting temperature differentials.
Soroush also has this exquisitely crafted, hand-sized box with a simple mechanical switch on it. No explanation. When you flip the switch on, a robotic finger immediately pops out and flips it back off.
Ben: I really enjoy those.
Soroush: There are a lot of metaphors wrapped up in this one, maybe the futility of technology. You're solving a made-up problem, right? And it just flips itself back.
Ben: It serves as a cautionary tale.
Soroush: It's a useless box. It's exactly what you should not build. You know, that's the point of it.
Amory: Alright, I think Ben has earned his extra credit.
Ben: Absolutely true. Time to get to work. We start with the 100-level material.
Ben: What is a robot?
Soroush ponders.
Ben: Amory, would you mind answering before Soroush does?
Amory: Hmm. I would say a robot completes a task mechanically, automatically, and maybe sometimes more effectively than we can?
Ben: Not bad at all.
Soroush: When most people think of a robot, they probably picture a mechanical creature, but the term actually has a broader definition. I would classify anything that performs a human task in an automated way as a robot.
Ben: Soroush began his career at MIT studying mechanical creatures. Robots that did physical feats and heavy lifting. Now, though, he is focused on a different part of robotics. Call it the brain. And on a tonne of programmes that often go by a slightly shorter name: bots.
Earlier this year, Soroush and a few of his students began pulling data from Reddit. A huge number of comments from tens of thousands of real Reddit users, which they scanned for signs of mental illness. They were doing this online because of something Soroush had noticed offline.
Soroush: As a professor at Dartmouth, I’ve had a lot of conversations with students, both graduate and undergraduate, who have said that the culture from which they come still makes them uncomfortable talking about mental health issues and makes them feel stigmatised to even say, “Hey, I might, you know, be feeling anxious or maybe slightly, slightly depressed.”
Ben: It's up to you whether you want to say more about which cultures, or keep it general.
Soroush: Well, broadly, several Asian cultures, both East and West Asian. People in the Middle East, East Asia, and South Asia.
Amory: So, let me get this straight: Soroush and other researchers built a bot to help people from Asian cultures recognise that they were facing a mental health challenge? By combing their posting history for signs of mental distress? That is wild. And also problematic?
Ben: It is. And that's just one of the intriguing things here. Millions of people spend their days online, largely assuming, I think, that they are connecting with other people. Everyone has heard Elon Musk, for example, complaining that there may be too many bots on Twitter. But a growing number of us are now part of this intricate, enormous, teeming ecosystem of real and virtual agents interacting with one another. Both overtly and subtly. Both positively and negatively. And that's what we want to talk about.
Amory: I'm Amory Sivertson, and I swear I am not a robot.
Ben: I'm Ben Johnson, and you are listening to Endless Thread.
Amory: We're coming to you from WBUR, Boston's NPR station. And we have a brand-new series on the rise of the machines coming your way. Good bot.
Ben: Bad bot. Today's episode: Bot Therapy.
Amory: OK, Ben. If a bot lives online, is it really a robot?
Ben: By Soroush's definition, I believe so. A robot does a human task in an automated way. But Soroush, a researcher at the university where the term "artificial intelligence" was coined, might not even call his creation a bot. He might call it a model.
Soroush: The model is the core of the bot. The second component is the input and output, which is simply plugging the model into a platform and having it run in real time.
Ben: OK, go on. So those are the three components of this kind of bot: input, model, output?
Soroush: That's right.
Ben: And is the model like a map, or an instruction manual, or something like that? How else would you explain the model?
Soroush: That is a really good question. The simplest way to think of a model is as a mathematical function that converts input into output. The input is raw data gathered from the outside world. Through some kind of transformation, which is what we call the model, that input is mapped to a meaningful output.
Amory: I don't know, man. I'm not sure. Model? Output? Input? Blah, blah.
Ben: OK, picture a really complicated flowchart. The input is the very start of the flowchart, and the output is the very end. The model is everything in the middle.
Amory: Bleep bloop boop.
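For curious readers, here is what that flowchart looks like as code. This toy example is ours, not Soroush's system: the "model" is just a function that transforms raw text input into a crude numerical output.

```python
# A minimal sketch of the input -> model -> output idea.
# The word list and scoring rule are invented for illustration.
NEGATIVE_WORDS = {"sad", "anxious", "tired", "hopeless", "lazy"}

def model(text: str) -> float:
    """Transform raw input text into an output between 0 and 1."""
    words = text.lower().split()
    if not words:
        return 0.0
    hits = sum(1 for w in words if w in NEGATIVE_WORDS)
    return hits / len(words)

# Input -> model -> output, end to end.
print(model("I am feeling anxious and tired today"))  # ~0.29
```

A real model replaces that word-counting rule with millions of learned parameters, but the shape, input in, transformation, output out, is the same.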
Ben: Now let's try a real-world example. And a relevant one. It's called ELIZA. ELIZA was a natural language processing computer programme developed by Joseph Weizenbaum, a renowned MIT professor.
Amory: Natural language processing is…
Soroush: Natural language processing, basically building models that deal with naturally spoken languages, has made great strides in the past ten years.
Amory: ELIZA was a computer programme you could talk to. And not just any programme: a therapist programme. And it wasn't built in the last ten years. It was built in 1964.
Ben: To test it, we'd need a time machine. Just kidding.
Amory: I wish, I mean. But I would absolutely travel through time. In a time machine, which, for the record, is female.
Ben: Fair enough, but so was ELIZA. Supposedly. And her code is still running online. Amory, will you take me to the therapist?
Amory: Let's go to the therapist.
You sent me a link to ELIZA. OK, it says to talk to ELIZA by typing your questions and answers in the input box. "Hello, I'm Eliza. I'll be your therapist today," she says.
Ben: What should we say?
Amory: (Typing.) Hello, Eliza. I'm feeling rather lazy today. She asks, "Do you suppose it is normal to be feeling rather lazy today?" What do you want to answer?
Ben: No.
Amory: (Typing.) No, not at all. She asks, "Are you just saying no to be negative?"
Amory and Ben: (Laugh.)
Ben: Let's say, "I'm not sure."
Amory: (Typing.) I dunno, maybe. That's honestly how I feel.
She asks, "Do you wish to be able to know that's just how you feel?" OK, I'm onto you, ELIZA. You just keep asking everything back as a question. (Typing.) Yes, I want to find out why I feel unmotivated and incapable of finishing things.
Wow, rapid-fire response. She asks, "How do you know you can't finish things?"
Ben: Just ask my editor.
Amory: Just ask my boss. (Laughs.) She asks, "Would you say that you have psychiatric issues?"
Ben: Whoa, OK. Would you say that you suffer from psychological issues?
Amory: (Typing.)
Ben: What did you get?
Amory: "Good. Oh, dot dot dot. Let's go on to something else for a little bit," she says.
Ben: That was after you typed, "I have psychological issues? Question mark."
Ben: How about, "Eliza, you're depressing me."
Amory: (Typing.) Oh my God. She asks, "What makes you think I'm making you depressed?" I think the bot is going to self-destruct.
Ben: (Laughs.)
Amory: She doesn't even know her own name. She's like, I know that I am. But who are you? What is happening?
Ben: That seems like a bad therapy session. Kind of a bad vibe, wouldn't you say?
Amory: Agreed. The most useful thing in that exchange was her asking, "How do you know you can't finish things?" Maybe I'll just repeat that to myself all day. I mean, we weren't exactly giving her the best material to work with.
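Part of why the session feels canned: ELIZA did not understand anything. She matched keyword patterns and "reflected" the user's own words back as questions. Here is a tiny sketch of that trick, our illustration of the general technique rather than Weizenbaum's original 1960s code:

```python
import re

# ELIZA-style responder: keyword patterns plus pronoun "reflection".
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "you": "I"}

RULES = [
    (r"i feel (.*)", "Do you suppose it is normal to be feeling {0}?"),
    (r"no(.*)", "Are you just saying no to be negative?"),
    (r"(.*)", "Please tell me more."),
]

def reflect(fragment: str) -> str:
    """Swap first- and second-person words so the echo reads naturally."""
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(user_input: str) -> str:
    """Return the template for the first matching keyword pattern."""
    for pattern, template in RULES:
        match = re.match(pattern, user_input.lower())
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
    return "Please go on."

print(respond("I feel rather lazy today"))
# Do you suppose it is normal to be feeling rather lazy today?
```

Every response is a canned template wrapped around the user's own words, which is exactly why the conversation keeps circling back on itself.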
Ben: We'll come back to ELIZA and why she gave us such a rough session. But think of Soroush's work as an evolution of this decades-old idea: that interacting with chatbots can be good for humans. Because a bot might be able to help us see things we otherwise wouldn't.
Amory: If ELIZA was created roughly 60 years ago, bots should be outstanding professionals in this field by now! Right? But no, definitely not. They're actually terrible at it. Because we are extremely complicated creatures. Bots have been analysing human language for a long time, but actually understanding what is being said is much harder for them.
Soroush: So it's simple, let's put "simple" in quotation marks, relatively simple to analyse what people say in terms of what they say directly. But using what people say to infer their internal mental state is a considerably more difficult scientific problem. Bots cannot deduce people's emotions or states from their speech, facial expressions, or affect. So the ability for bots to learn to understand people's internal states is crucial.
Ben: That's extremely fascinating. So you're addressing, in a sense, a fundamental gap in what bots can do: interpreting and comprehending people's underlying emotional states.
Soroush: In cognitive science, this is what's known as theory of mind. Humans, naturally, evolved to do that. So did monkeys, for instance. And other primates.
Ben: Over a very, very, very, very, very, very long period of time.
Soroush: That’s right.
Ben: Soroush points to the rock polisher on the bookshelf in his office. A tumbler that somewhat unnaturally speeds up a natural process.
Soroush: What we are doing is very similar. Evolution took hundreds of millions of years to get us here. We're doing it in, essentially, a few years.
Amory: Some people might think this sounds like playing God. Accelerating a piece of software's comprehension of human thought. It's kind of... yikes?
But ever since we first imagined the future, or imagined people imagining the future, we've been reaching for the stars on this stuff.
Soroush: Because I love science fiction so much, pretty much all of my research has been influenced by it in some way. But this particular line of work, on mental states and, more importantly, on predicting people's behaviour, was actually inspired mainly by reading Asimov's Foundation trilogy. The series' central premise is that a mathematician named Hari Seldon creates a model, a field of study, called psychohistory.
Soroush: Which uses historical data to forecast how societies will develop in the future.
Amory: So let's talk about how Soroush is channelling Hari Seldon, using the personal commenting histories of Redditors to bring psychohistory to life.
Ben: This is really important to note: because of the potential repercussions of building a tool that could identify mental health problems or mental illness in specific people, Soroush and his graduate students decided against creating an actual bot.
Amory: I like this. We are evolving. Don't build Skynet. Maybe just run a study that asks what might happen if we actually built Skynet.
[Terminator movie clip: If we upload now, Skynet will be in control of your military. But you'll be in control of Skynet, right?]
Ben: Instead, Soroush drew up a diagram outlining how the bot would be built. Consider the input, run the model, look at the output, and figure out how to tune that output.
Amory: Tune?
Ben: We'll get there. For now, know this: over time, the Dartmouth team examined a large amount of publicly available data from Reddit users.
Soroush: Thousands of them.
Ben: OK. Thousands, or tens of thousands?
Soroush: Tens of thousands.
But their goal was not to have a computer model, or bot, figure out whether a large group of people were collectively experiencing mental health issues. Rather, to do it at the individual level. Which, again, is hard. Because, well, we're all different. In this field of computer science, natural language processing, the model has to account for how communication styles vary from person to person. Consider sarcasm.
Ben: Sarcasm is really hard. Which is why Soroush's team applied natural language models in a very particular way.
Soroush: So the model learns how each person uses language in their own unique way.
Amory: Which is actually quite similar to what a therapist working with a particular patient does over time: become familiar with the nuances of how they communicate. But that, let's be honest, takes a huge, huge amount of time. Enter that computational acceleration of evolution again.
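As a rough, hypothetical illustration of that idea (ours, not the team's actual method): instead of comparing a post to a global norm, you first build a baseline of how this particular user writes, then measure how far a new post drifts from their own norm.

```python
from collections import Counter

def baseline(posts: list[str]) -> Counter:
    """Word-frequency profile of one user's historical posts."""
    profile = Counter()
    for post in posts:
        profile.update(post.lower().split())
    return profile

def drift(profile: Counter, new_post: str) -> float:
    """Fraction of words in a new post this user has never used before."""
    words = new_post.lower().split()
    if not words:
        return 0.0
    unseen = sum(1 for w in words if profile[w] == 0)
    return unseen / len(words)

history = ["great game last night", "loved that game", "what a great match"]
profile = baseline(history)
print(drift(profile, "great game again"))         # 0.33: mostly familiar words
print(drift(profile, "cannot sleep feel empty"))  # 1.0: entirely new vocabulary
```

The real models are vastly more sophisticated, but the principle is the same: the yardstick is personal.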
Ben: The team's algorithm, or bot, starts with these enormous datasets, a user's complete Reddit posting history, and strips out certain kinds of information, like references to specific events and people.
Amory: Like, say, a pandemic.
Soroush: Because we don't want to capture feelings that are directed at specific events. We want to capture the subject's internal feelings.
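The episode doesn't detail how the team stripped that information out, but one plausible approach is named-entity masking, shown here with the open-source spaCy library (our assumption, not necessarily their tooling):

```python
import spacy

# Requires: pip install spacy && python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

def mask_entities(text: str) -> str:
    """Replace named people, places, organisations, and dates with generic tags."""
    doc = nlp(text)
    masked = text
    # Replace from the end of the string so character offsets stay valid.
    for ent in reversed(doc.ents):
        masked = masked[: ent.start_char] + f"[{ent.label_}]" + masked[ent.end_char :]
    return masked

print(mask_entities("Ever since March 2020 and the lockdown in Boston, I feel stuck."))
# e.g. "Ever since [DATE] and the lockdown in [GPE], I feel stuck."
```

What survives the masking ("I feel stuck") is closer to the internal state the model is after.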
Amory: After that, the model uses some really sophisticated natural language processing to extract the signal, the meaning, from the posts.
Ben: Natural language processing has advanced significantly in computer science over the past ten or so years. And Soroush's team is using the most up-to-date software to help the bot comprehend what a user is actually saying.
Amory: Modern programmes are much more sophisticated than the older ones, which could only recognise keywords and phrases.
Soroush: Words and phrases are informative, of course, but we can also look at, for instance, the grammatical structure of a post, and at long-range dependencies between words: what it implies when you say a certain word at the beginning of a sentence. Language is complex that way.
Ben: Right. There's a big difference between saying, "I'm contemplating suicide," and, "Wow, this, uh, high-quality gif-maker is really killing it. Reminds me of myself."
Soroush: That’s right.
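Here is a small sketch of why keywords alone fail on exactly that pair of sentences. The off-the-shelf sentiment pipeline below is our stand-in for a modern contextual model; the Dartmouth team's actual models are more specialised:

```python
from transformers import pipeline

def keyword_flag(text: str) -> bool:
    """Naive keyword matching: fires on any occurrence of an alarming word."""
    return "suicide" in text.lower() or "killing" in text.lower()

# Downloads a small default model on first run.
clf = pipeline("sentiment-analysis")

posts = [
    "I'm contemplating suicide.",
    "Wow, this gif-maker is really killing it. Reminds me of myself.",
]

for post in posts:
    print(post)
    print("  keyword flag:", keyword_flag(post))  # True for both
    print("  contextual:  ", clf(post)[0])        # reads the whole sentence
```

The keyword rule flags both posts; a contextual model can use the surrounding words, the long-range dependencies Soroush mentions, to tell a cry for help from slang.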
Ben: But here's the big question.
How can you tell if the bot you created is effective?
Soroush: Yes, that is a really good question. Evaluation is arguably the most difficult part of these kinds of projects.
Ben: The team tested and fine-tuned the model extensively before even thinking about how to measure success. In case the model's outputs were inaccurate, they added another layer of computation after the model to get more reliable results. They fed the test bot inputs and waited for the model to produce outputs. Then they looked at two indicators of success: whether a user joined a subreddit devoted to mental health after the bot predicted they would, and users' self-reports of mental health issues.
Soroush: It's surprising how many people eventually self-report, saying, "Hey, I went to these forums, I went to these subreddits, and I got diagnosed with, with bipolar, for example." Right.
Ben: So joining a subreddit devoted to mental health, and self-reporting that you either have a mental health issue or are currently facing a mental health challenge. Those are the two criteria for success.
Soroush: That's right. And if we predicted it before the user actually reported it, our model was successful. If we only detect it afterward, it is obviously useless. So it all comes down to how early you can spot it.
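In code, that scoring idea reads something like the sketch below. The usernames, dates, and exact rule are hypothetical stand-ins for the team's real metric:

```python
from datetime import date

# A prediction "counts" only if it lands strictly before the user's own
# signal: joining a mental-health subreddit or self-reporting a diagnosis.
predictions = {  # user -> date the model flagged them
    "user_a": date(2022, 3, 1),
    "user_b": date(2022, 6, 10),
}
ground_truth = {  # user -> date of subreddit join or self-report
    "user_a": date(2022, 5, 20),  # flagged 80 days early -> success
    "user_b": date(2022, 6, 1),   # flagged 9 days late -> failure
}

def early_hit_rate(preds: dict, truth: dict) -> float:
    """Fraction of users flagged before their ground-truth date."""
    hits = sum(1 for user, d in preds.items() if user in truth and d < truth[user])
    return hits / len(preds)

print(early_hit_rate(predictions, ground_truth))  # 0.5
```

The earlier the flag lands before the ground-truth date, the more useful the model is as an early warning system.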
Amory: The team's work, of course, anonymises this data. Soroush and his team had to get approval from an ethics board just to begin the work, so we didn't look at any particular users or ask to interview any of them. The team chose Reddit in part because, unlike Meta's Facebook, Reddit makes user post histories easily accessible to researchers, without restrictions. Still, you have to wonder how subjects might feel about being part of this study.
Ben: To be clear, Soroush isn't actually trying to replace therapists. Or to build the newest, best ELIZA. He's trying to address the difficulties he described earlier in some cultures, and to build a bot that counteracts what he and some of his students see as harmful cultural norms around discussing or accepting mental illness. It could serve as a better early warning system.
Soroush: I came to the conclusion that people from those cultures would benefit greatly from having a way to not have to voluntarily say, hey, I feel depressed.
Ben: Amory, how would you feel if a bot read all of your social media posts and suggested that you might be depressed?
Amory: To be quite honest, I'm not as leery of the Big Brother thing as most people are, and maybe that's a terrible thing. But we just don't have that perspective on ourselves, you know? So I don't think it would hurt to have the spotlight shined on my posting habits, and to simply take another look back and say, "Oh yeah, I did post some things or say some things."
Ben: So let's actually go back to ELIZA for a moment. And to ELIZA's maker.
Amory: Joseph Weizenbaum.
Dag Spicer: Joseph Weizenbaum immigrated to America with his family in the 1930s. They were aware of what Hitler and the Nazi Party would bring.
Ben: That is Dag Spicer, who we spent some time hanging out with. He's not at Dartmouth in New Hampshire at all. He's on the other side of the country from Soroush.
Dag: I'm Dag Spicer, senior curator at the Computer History Museum. And we are currently in Mountain View, California.
Amory: Dag is a unique man with a unique name. He has worked at the Computer History Museum for almost 30 years. He is an expert on computers. And he knows a great deal about ELIZA and about Joseph Weizenbaum, who, before developing ELIZA, worked on a number of computers that profoundly changed how we live and interact with technology.
Dag: Weizenbaum and others worked on the ERMA computer, a check-processing machine. So, how did it do that? The really cool invention they came up with is the magnetic ink character recognition (MICR) font, which is still visible on the bottom of checks. You can recognise it by the odd, squarish little numbers at the bottom of your check. They are from ERMA, around 1953.
Ben: According to Dag, ERMA's impact went beyond the odd numbers at the bottom of checks. It also put tens of thousands of human check processors out of work.
Amory: And according to Dag, that weighed on Weizenbaum.
Dag: He was a techie who genuinely cared about how his work, and the field he belonged to, were used.
Amory: Weizenbaum, a pioneer in artificial intelligence and human-computer interaction, worried about which problems we try to solve, and what we try to build, with technology.
Ben: Now for the funny part. ELIZA, the so-called first chatbot, wasn't actually a serious undertaking. ELIZA was designed as a parody. Meant to show people an example of how chatterbots, as they were originally called, might misbehave.
Amory: Mind. Blown.
Ben: Which, Amory, is why our therapy session wasn't very successful.
Amory: We were tricked!
Ben: Joseph Weizenbaum passed away in 2008, a year after the iPhone was introduced. But according to Dag, Weizenbaum's distrust of technology persisted his entire life.
Dag: It really began with Robert Oppenheimer, who declared that after developing the atomic bomb he spent the rest of his life regretting what he had done. Right? And he added that technologists must watch out for what he called technically sweet problems, because they truly draw you in with their difficulty. But viewed from a more humane angle, they can be quite harmful.
Amory: We asked Dag what Weizenbaum might have made of Soroush Vosoughi's study, which examined Reddit post histories to determine whether people were experiencing mental health problems. He didn't want to speak out of turn on Weizenbaum's behalf. So we asked him what he himself thought.
Dag: My initial gut reaction is that it's a little frightening, because they are essentially mood watchers. And, as you may know, there are already AIs that can read people's faces and do the same thing. They say, oh, you're in a foul mood today. They just scrutinise your face, you know. And from there, you know, it's just such a slippery slope toward government or private action. So, you understand, there is always a tradeoff. Is it worth it if it saves just one life? But, you know, I don't think it's a good idea in this case.
Ben: Soroush built the model that scrapes Reddit and looks for signs of mental illness in individual users' posts. He is, naturally, less sceptical. But he does attach a significant caveat.
Soroush: These things shouldn't be managed by the platforms, the government, or any other external body, telling people to go see a therapist or something similar. People should choose to use these tools privately, and any communication between a person and the tool should be kept private.
Ben: Whether you agree with Soroush's team's vision of a society in which an opt-in programme could help people acknowledge their own mental health needs and issues, or you share Dag Spicer's, and perhaps Joseph Weizenbaum's, scepticism about the potential applications of such a programme, much of this is already happening.
Amory: Bots already diligently engage with users, capture enormous datasets that are made available to the public, and much more. Sometimes we don't even realise that the internet is no longer just people talking to each other. It is increasingly mediated by little pieces of software, trained with the newest and best programmes, to perform all kinds of tasks. Today, predicting your mental health condition. Tomorrow, running for office?
Ben: Next week…
[Clip: You can naturally retain a record of everything you say and do, since everything is digital. So it creates a level of accountability that the current politicians just don't, don't have.]
Ben: Good bot.
Amory: Bad bot.
Ben: Endless Thread is a production of WBUR in Boston.
Amory: This episode was written and produced by my co-host Ben Brock Johnson and Dean Russell. And co-hosted by me. Mix and sound design by Paul Vaitkus.
Ben: Megan Cattel is our web producer. The rest of our team is Nora Saks, Quincy Walters, Grace Tatter, Matt Reed, and Emily Jankowski.
Amory: Endless Thread is a show about the blurred lines between online communities and useless boxes. If you have an untold history, an unsolved mystery, or a wild internet story you'd like us to tell, get in touch. Email endlessthread@wbur.org.