The tech industry is being swept up by AI-powered chatbots, and Google reportedly declared a "code red" after the release of ChatGPT took centre stage.
According to The New York Times, the popular conversational AI chatbot developed by OpenAI has caused widespread apprehension about the future of Google's search engine.
Google's CEO has taken part in several meetings about the company's AI strategy to address the threat ChatGPT poses to the search business, according to an internal memo and an audio recording reviewed by the Times.
Teams in Google Research, the Trust and Safety division, and other departments have reportedly been directed to shift gears and help build and launch a number of AI prototypes and products. According to the Times, some employees have been asked to create AI tools that generate artwork and images, similar to popular programmes like DALL-E, which is used by millions of people worldwide.
Google has not commented directly, but its push to build out a stronger portfolio of AI products comes as Google employees and outside experts debate ChatGPT, which is run by OpenAI under a former Y Combinator president. The chatbot has the potential to displace search engines, which would threaten Google's business model and its dependence on ad revenue.
Chatbots are a particular concern because they keep users from clicking the ad-laden links in Google's search results. Those ads generated $208 billion in revenue in 2021, around 80% of Alphabet's total.
More than one million people signed up for ChatGPT within about five days of its launch. The bot draws on information from millions of web pages and delivers it in a remarkably human-like conversational tone. Users frequently ask it to help write college-level essays, offer coding guidance, or even act as a stand-in therapist.
But the bot frequently makes serious mistakes. According to AI specialists, it cannot distinguish verified facts from falsehoods and does not fact-check its own output; it can also invent answers outright, a failure researchers refer to as hallucination.
It can also produce responses that some users find insulting or even racist. That considerable margin of error is one reason Google has been reluctant to release LaMDA, its own chatbot, citing its vulnerability to toxicity.
Google's head of AI has said chatbots are not yet reliable enough to be used this way. Because of this, the company is focusing on gradually improving its search engine rather than replacing it altogether.