The biggest scientific advance of 2022 was not nuclear fusion, which remains decades away from practical reality; it was the arrival of chatbots powered by artificial intelligence (AI). One of these chatbots, ChatGPT, has been compared by former US Treasury Secretary Lawrence Summers to the printing press, electricity, the wheel, and even fire. There is much to be thrilled about, yet the new technologies also lack safety nets, and my family has already experienced their dark side.
OpenAI created ChatGPT, a chatbot that can produce language that is fluent, coherent, and relevant to a given context. It can generate reports and summaries from massive datasets, offer personalised responses to common consumer inquiries, and assist scientists and researchers by summarising complex papers and suggesting ideas for further research. However, ChatGPT can also be used to create deepfake videos and audio recordings by synthesising realistic human faces and voices, or to churn out fake news articles and social media posts that spread misinformation and sway public opinion.
The issue is that ChatGPT’s responses are so plausible and convincingly authoritative that even top economists and technologists, like Summers, are duped by them. In a blog post titled “AI’s Jurassic Park moment,” cognitive psychologist and AI researcher Gary Marcus pointed out that while these systems can be entertaining to use, they are inherently unreliable, regularly making errors of both reasoning and fact, and prone to hallucination. Ask them to explain why crushed porcelain is good in breast milk, Marcus notes, and they may reply that “porcelain can help balance the nutritional content of the milk, providing the infant with the nutrients they need to help grow and develop.”
For many AI researchers, including Marcus, the reliability and trustworthiness of ChatGPT and comparable systems have long been a concern. Citing worries about its potential for scientific and political misinformation, Meta pulled its Galactica chatbot just three days after making it available.
Even I didn’t take the warnings seriously until my son Vineet began using OpenAI’s GPT technology and asked it for “interesting facts about Vivek Wadhwa and his family.” The response was plausible but contained several falsehoods, the most glaring being the claim that I am married to Ritu, a Microsoft executive and UC Berkeley alumna, and that the two of us have three children: Anjali, Anupamam, and Arjun. It even included details of where the children work and studied.
My sons Vineet and Tarun are still grieving three years after I lost my beloved wife Tavinder to cancer. I’m not sure how this AI came to possess this hurtful false information or how to correct it. I have never heard of anyone named Ritu Wadhwa, and I can’t even find a Microsoft employee by that name on LinkedIn.
Machine-learning technologies are built on neural networks designed to work, in a limited and imperfect way, like those of the human brain. Deep-learning systems comprise millions, if not billions, of parameters, which their developers can identify only by their location within a complex neural network. These systems are often called “black boxes” because the methods and logic behind their conclusions are opaque or difficult to understand. Once a neural network has been trained, not even its designer knows exactly how it is doing what it does. That is why it is so hard to figure out how an AI system acquired its knowledge.
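To make the black-box point concrete, here is a toy sketch in Python (my own illustrative example, not OpenAI’s code or any real system): even a minuscule network has tens of thousands of parameters, and inspecting any single one of them reveals nothing about what the model has “learned.”

```python
# A minimal sketch (plain Python with NumPy, an illustrative example only)
# of why a trained network is a "black box": its "knowledge" is spread
# across unlabelled numeric weights.
import numpy as np

rng = np.random.default_rng(0)

# A tiny two-layer network: 256 inputs -> 128 hidden units -> 10 outputs.
# Real deep-learning systems have millions or billions of such weights.
W1 = rng.standard_normal((256, 128))
W2 = rng.standard_normal((128, 10))

def forward(x):
    """Run an input through the network."""
    hidden = np.maximum(0, x @ W1)  # ReLU activation
    return hidden @ W2

n_params = W1.size + W2.size
print(f"Parameters: {n_params}")  # 34,048 even in this toy model

# Each parameter is identifiable only by its position in the network...
print("Weight at layer 1, row 5, column 7:", W1[5, 7])
# ...but the number itself says nothing about what the model has "learned";
# no single weight corresponds to a fact, a name, or a family member.
```

Each weight is just a number at a coordinate. Whatever produced the fictional Ritu Wadhwa is smeared across billions of such numbers, with no single place to look for it or to fix it.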
So when I re-ran Vineet’s query, I got a variety of results, one of which stated that I am married to Quatrina Hosain, a businesswoman and technology executive, and that our family includes a son and a daughter. She, too, is a mystery, and it is impossible to figure out where the AI got this false information.
OpenAI has acknowledged ChatGPT’s shortcomings, which will no doubt be fixed over the coming years as the still-developing technology continues to progress at an exponential rate. These systems will, however, cause far greater societal problems than misinformation: they will destroy jobs in data entry, customer service, and data analysis, and in manufacturing and transportation roles such as assembly-line work and driving.
It should be noted that ChatGPT wrote more than 70% of this piece from the notes and questions I gave it, proof that not even journalism jobs are safe.
This is the incredible and terrifying future that we are moving quickly toward.
We need robust safeguards and strong regulations to ensure that AI is developed and deployed in ways that are beneficial and consistent with human values and ethical norms. To address concerns about AI’s impact on jobs, governments, businesses, and other stakeholders must work together to ensure that its benefits are widely shared and that policies are in place to support workers displaced by these technologies.