The artificial intelligence (AI)-powered chatbot ChatGPT became the fastest-growing consumer application in history, reaching 100 million monthly active users within two months of its November 2022 launch.
Large Language Models (LLMs), also referred to as “generative AI,” are the basis for chatbots like ChatGPT. Generative AI refers to algorithms that can produce new text, audio, image, or video outputs after being trained on vast amounts of input data. Applications like Midjourney and DALL-E 2 that create artificial digital visuals, including “deepfakes,” are powered by the same technology.
Contract Risks
A variety of contractual considerations may be raised by the use of chatbots or other AI systems.
A company may need to limit its use of chatbots, or of AI more generally, if a contract requires it to carry out tasks itself or through specific personnel rather than through AI. To the extent a chatbot service produces contract work product, as opposed to merely providing a platform for producing work product (as traditional information technology does), the service could be considered a subcontractor, potentially requiring prior approval from the end client.
Threats to Cybersecurity
Businesses face cybersecurity dangers from chatbots on two key dimensions. First, unscrupulous persons with only basic programming knowledge can use chatbots to build malware for cyber attacks. Second, because chatbots can convincingly imitate fluent, conversational English, they can be used to craft human-like dialogue for social engineering, phishing, and malicious advertising schemes, even by bad actors with limited English proficiency. Chatbots like ChatGPT typically disallow malicious uses through their usage policies and implement system rules to prevent the bot from responding to queries that explicitly request malicious code. However, cybersecurity researchers have discovered work-arounds that threat actors on the dark web and special-access sources have already exploited. In response, companies should redouble their efforts to strengthen their cybersecurity and train staff to be alert for phishing and social engineering scams.
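The staff-training point above can be made concrete with a sketch. The following is an illustrative, naive keyword heuristic for flagging messages that warrant closer scrutiny during phishing-awareness exercises; the phrase list is an assumption for demonstration, and real phishing detection requires far more robust tooling.

```python
# Illustrative only: flag messages containing phrases commonly seen in
# phishing attempts. The phrase list is a hypothetical example, not a
# vetted detection ruleset.
SUSPICIOUS_PHRASES = [
    "verify your account",
    "urgent action required",
    "click the link below",
    "confirm your password",
]

def flag_message(body: str) -> list[str]:
    """Return the suspicious phrases found in a message body."""
    lowered = body.lower()
    return [p for p in SUSPICIOUS_PHRASES if p in lowered]

msg = "Urgent action required: confirm your password at the link."
print(flag_message(msg))  # ['urgent action required', 'confirm your password']
```

A heuristic like this is useful only as a teaching aid; as the paragraph above notes, chatbot-generated phishing text is fluent enough that keyword matching alone will miss it.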
Threats to Data Privacy
Chatbots can routinely collect personal data. According to ChatGPT’s Privacy Policy, for instance, the company gathers users’ IP addresses, browser types and settings, and information about their interactions with the website and their browsing patterns over time and across different websites, all of which it may share “with third parties.” A chatbot’s services may become unusable if a user declines to disclose such personal information. The most popular chatbots do not currently appear to offer consumers the ability to erase the personal data amassed by their AI models.
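One practical mitigation, given that users cannot erase data once collected, is to scrub obvious personal identifiers from text before it is ever submitted to a third-party chatbot. The sketch below is a minimal, assumed pre-processing step using simple regular expressions; the patterns and labels are illustrative and would need hardening for real use.

```python
import re

# Hypothetical pre-submission scrubber: replace common personal
# identifiers with placeholder labels before text leaves the company.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "IP_ADDRESS": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each match of each pattern with a [LABEL] placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Contact jane.doe@example.com from 192.168.1.10 about SSN 123-45-6789."
print(redact(prompt))
# Contact [EMAIL] from [IP_ADDRESS] about SSN [SSN].
```

Regex-based redaction catches only well-structured identifiers; names, addresses, and free-form personal details require more sophisticated tooling, which is why limiting what employees paste into chatbots remains the primary control.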
Hazards of Deceptive Trade Practices
Federal and state laws against unfair and deceptive trade practices may be broken if an employee outsources work to an AI program or chatbot when the consumer believes they are speaking with a human, or if an AI-generated product is advertised as having been made by humans. The Federal Trade Commission (FTC) has issued guidance declaring that its authority under Section 5 of the FTC Act, which forbids “unfair or deceptive” acts, extends both to the use of data and algorithms to make decisions about consumers and to chatbots that impersonate humans.
Hazards of Discrimination
When companies deploy AI systems, discrimination-related issues may appear in a variety of ways. First, bias may arise from the skewed character of the data used to train AI algorithms. Because AI models are developed by humans and learn by consuming the data that humans produce, human bias may be ingrained in the design, development, implementation, and use of AI systems. For instance, Amazon reportedly discontinued an AI-based hiring program in 2018 after discovering that the system was biased against women. The model was designed to screen applicants by identifying trends in resumes submitted to the company over a ten-year period, but because most applicants in the training set had been male, the AI quickly learned to favor male applicants over female applicants.
Hazards of Misinformation
Chatbots can assist bad actors in swiftly and cheaply fabricating misleading information that has the appearance of authority. Chatbots can write news stories, essays, and scripts that promote conspiracy theories, “smoothing out human flaws like poor syntax and mistranslations and going beyond immediately discoverable copy-paste operations,” according to a recent study.
Ethical Dangers
Businesses governed by organizations that uphold professional ethics, such as those that regulate lawyers, surgeons, and accountants, should make sure that their use of AI complies with their ethical commitments.
Hazards in Government Contracts
The US government is the world’s largest buyer of goods and services. US government contracts are routinely awarded through formal competitive processes, and the resulting agreements frequently depart from commercial contracting norms, typically including substantial standardized contract provisions and compliance requirements. These procedural rules and contract specifications will govern the use of AI by private enterprises both to prepare bids and proposals for government contracts and to carry out those contracts once awarded.
Threats to Intellectual Property
AI-related dangers to intellectual property (IP) can occur in a variety of ways.
First, due to the massive volumes of data used to train AI systems, training data is likely to contain third-party intellectual property, such as patents, trademarks, or copyrights, whose use has not been authorized in advance. Hence, the AI systems’ outputs might violate the intellectual property rights of others. This situation has already sparked litigation.
Second, disagreements over who owns the intellectual property produced by an AI system may occur, especially if numerous parties contribute to its development. Under OpenAI’s terms of use, for instance, the user who submitted the prompts is assigned “right, title, and interest” in the LLM’s output, on the condition that the user complied with both the law and those terms. OpenAI nevertheless reserves the right to use both user input and AI-generated output to provide and maintain its services, comply with applicable law, and enforce its policies.
Lastly, there is the question of whether IP created by AI is protectable at all, since in some cases there may not have been a human “author” or “inventor.” Litigants are already contesting how current IP laws apply to these new technologies. As an illustration, in June 2022, Stephen Thaler, a software engineer and CEO of Imagination Engines, Inc., filed a lawsuit asking the courts to overturn the US Copyright Office’s decision to deny a copyright registration for artwork whose author was listed as “Creativity Machine,” an AI program Thaler owns. Because the Copyright Act extends copyright protection only to works created by human authors exhibiting a minimal degree of originality, the US Copyright Office has stated that works generated autonomously by AI technology do not acquire copyright protection. In late February 2023, the Office declared that images in a book generated by the image-generator Midjourney in response to text input from a human were not copyrightable because they were “not the result of human authorship.”
Validation Risks
However impressive they may be, chatbots are capable of making erroneous but convincing-sounding assertions, or “hallucinations.” LLMs lack sentience and “know” nothing about reality; they merely produce the response that is most probable for a given prompt based on their training data. OpenAI itself warns that ChatGPT has limited knowledge of the world and events after 2021 and sometimes produces inaccurate responses. Users have flagged and archived ChatGPT responses that gave incorrect solutions to logic puzzles, historical questions, and mathematical problems.
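The practical upshot is that chatbot output should be independently verified wherever a ground truth exists. The sketch below illustrates the idea for the simplest case, arithmetic claims: recompute the answer and compare it to what the model said. The `model_answer` value is a stand-in for a response returned by an LLM, not a real API call.

```python
# Minimal illustration of independent validation: never trust a chatbot's
# arithmetic claim; recompute it and compare.
def verify_arithmetic(expression: str, model_answer: str) -> bool:
    """Check a claimed numeric answer against direct evaluation of a
    trusted, simple arithmetic expression."""
    expected = eval(expression, {"__builtins__": {}})  # expression is trusted here
    try:
        return float(model_answer) == float(expected)
    except ValueError:
        return False  # non-numeric answer cannot match

print(verify_arithmetic("17 * 23", "391"))  # correct claim -> True
print(verify_arithmetic("17 * 23", "401"))  # hallucinated claim -> False
```

For factual or legal questions there is no such mechanical check, which is why human review of chatbot output remains essential in any professional workflow.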