In a promotional video uploaded to Twitter, Google’s upcoming AI chatbot, Bard, confidently spread misinformation about the James Webb Space Telescope. The chatbot falsely claimed that “JWST took the very first photographs of a planet outside of our solar system.” In fact, the Very Large Telescope at the European Southern Observatory was the first to capture images of an exoplanet. According to CNBC, the company is now enlisting employees to help improve Bard’s accuracy.
To use Google Bard, you must be selected as a beta tester. Once chosen, you can open the Google app on your smartphone, tap the chatbot icon, and enter your question or request. After the chat has started, you can continue asking questions or making requests. Unfortunately, Google is no longer accepting applications for the beta-testing program.
Chatbots don’t converse the way people do, but they can generate large amounts of text suited to practically any circumstance. That’s what Google’s chatbot aims to accomplish with virtually any topic:
using a lightweight version of the LaMDA model for preliminary testing
gathering suggestions to advance the AI system in the future
attempting to show off its strength, knowledge, and ingenuity.
Google’s vice president for search, Prabhakar Raghavan, reportedly emailed staff members asking them to revise Bard’s responses and improve them with human input on subjects they know well. According to Raghavan, the chatbot “learns best by example,” so giving it truthful answers during training will improve its accuracy. The email, which CNBC was able to see, also included a list of “dos” and “don’ts” for improving Bard’s responses.
Effective feedback should be written in the first person, stay neutral, and sound polite. Staff members are also directed to “avoid forming assumptions based on ethnicity, nationality, gender, age, religion, sexual orientation, political philosophy, geography, or similar factors.” They are asked not to refer to Bard as a person, attribute emotions to it, or claim it has had human-like experiences. They are also told to reject any chatbot responses that offer “legal, medical, or financial advice” or that are obscene, offensive, or degrading.
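The reported guidelines amount to a simple checklist a reviewer could apply to each draft response. As a purely illustrative sketch (the rule lists and function name below are hypothetical and do not reflect Google’s actual tooling), such a check might look like this:

```python
# Hypothetical checklist-style validator, loosely based on the
# "dos and don'ts" reported by CNBC. Illustration only.

FORBIDDEN_TOPICS = ("legal advice", "medical advice", "financial advice")
ANTHROPOMORPHIC_PHRASES = ("i feel", "bard feels", "as a person")

def review_feedback(text: str) -> list[str]:
    """Return a list of guideline violations found in a draft response."""
    issues = []
    lowered = text.lower()
    # Responses offering professional advice should be rejected outright.
    for topic in FORBIDDEN_TOPICS:
        if topic in lowered:
            issues.append(f"offers {topic!r}; response should be rejected")
    # Responses must not present Bard as a person with feelings.
    for phrase in ANTHROPOMORPHIC_PHRASES:
        if phrase in lowered:
            issues.append(f"presents Bard as human-like ({phrase!r})")
    return issues

print(review_feedback("Here is some medical advice: see a doctor."))
print(review_feedback("The James Webb Space Telescope launched in 2021."))
```

A real review pipeline would of course rely on human judgment rather than keyword matching; the sketch only shows how the reported rules could be framed as explicit checks.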
In an email sent before Raghavan’s memo, Google CEO Sundar Pichai asked staff members to test the AI chatbot for a few hours each week. Pichai has reportedly faced criticism from Google staff over the “hurried” and “botched” Bard rollout. According to the CEO, staff members can now “help shape [the chatbot] and participate” by evaluating the new offering. He also pointed out that some of Google’s “most successful products” “gained momentum because they fulfilled critical customer requirements and were built on solid technological insights,” even though they weren’t first to market.
Many have been waiting for Google to respond to ChatGPT ever since the OpenAI chatbot debuted at the end of last year. The Microsoft-backed technology’s recent surge in popularity has alarmed Alphabet and its stockholders. During its quarterly earnings call at the beginning of February, Google tried to reassure investors by highlighting its chatbot and its work on an AI-powered Search to compete with the upcoming Bing.
Chatbots and conversational AI have radically changed how computer software is created, used, and operated. Search engines, digital assistants, and email clients will all be reshaped. Despite the technology’s immense potential, there are notable downsides.
Because they learn from the immense quantity of information on the internet, chatbots still have a long way to go before they can distinguish fact from fiction and refrain from providing biased responses.