Anthropic, a firm co-founded by former OpenAI employees, today unveiled something resembling a competitor to the viral ChatGPT app.
Anthropic’s AI chatbot, named Claude, can be instructed to carry out a variety of tasks, including searching through documents, summarizing, writing, and coding, as well as answering questions on certain subjects. In these respects, it is comparable to OpenAI’s ChatGPT. Anthropic, however, claims that Claude is “far less likely to produce detrimental outcomes,” “easier to communicate with,” and “more steerable.”
Organizations can request access. Pricing has not yet been detailed.
“We believe that Claude is the best tool for a wide range of customers and use cases,” an Anthropic representative wrote in an email to TechCrunch. “For several months, we have been making investments in our model-serving infrastructure, and we are confident that we can meet customer demand.”
Following a limited beta toward the end of last year, Anthropic quietly tested Claude with launch partners like Robin AI, AssemblyAI, Notion, Quora, and DuckDuckGo. As of this morning, two versions—Claude and Claude Instant, a quicker, less expensive variant—are accessible via an API.
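For developers, tapping Claude through the API presumably looks something like the snippet below, which uses Anthropic’s Python SDK; the API key, model identifier, and prompt here are illustrative placeholders rather than details confirmed in the announcement.

```python
# Minimal sketch of calling Claude via Anthropic's Python SDK.
# The model name and prompt are illustrative placeholders; consult
# Anthropic's documentation for the identifiers exposed to your account.
import anthropic

client = anthropic.Anthropic(api_key="YOUR_API_KEY")

response = client.messages.create(
    model="claude-instant-1",  # placeholder for the faster, cheaper variant
    max_tokens=300,
    messages=[{"role": "user", "content": "Summarize this document: ..."}],
)

print(response.content[0].text)
```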
Claude, in conjunction with ChatGPT, powers DuckDuckGo’s recently released DuckAssist tool, which directly answers users’ straightforward search queries. On Quora, Claude is accessible through Poe, an experimental AI chat application. And Claude is part of the technical backend for Notion AI, an AI writing assistant in the Notion workspace.
“We use Claude to evaluate particular areas of a contract, and to recommend fresh, alternative language that’s more beneficial to our customers,” Robin CEO Richard Robinson said in an emailed statement. “We’ve discovered that Claude is particularly strong at understanding language, even in complex fields like legal language. It is also particularly competent at drafting, summarizing, translating, and clearly articulating complex ideas.”
But does Claude steer clear of the problems that ChatGPT and similar AI chatbot systems have? Contemporary chatbots are infamous for using foul, prejudiced, and otherwise inappropriate language. (See: Bing Chat.) They also have a tendency to hallucinate, inventing facts when questioned about subjects outside their main knowledge areas.
Anthropic claims that Claude was “trained to avoid sexist, racist, and toxic outputs,” as well as “to avoid helping a human engage in unlawful or unethical activities.” Like ChatGPT, Claude doesn’t have access to the internet and was trained on public web pages up through spring 2021. In the world of AI chatbots, that is standard practice. What Anthropic claims distinguishes Claude is a method called “constitutional AI.”
The goal of “constitutional AI” is to offer a “principle-based” method for aligning AI systems with human intentions, allowing ChatGPT-style AI to answer queries by following a short list of basic rules. Anthropic began the creation of Claude with a set of around ten guiding principles that, taken together, amounted to a type of “constitution” (hence the name “constitutional AI”). The principles have not been made public, but Anthropic says they are based on the ideas of beneficence (maximizing positive impact), nonmaleficence (avoiding giving harmful advice), and autonomy (respecting freedom of choice).
For the self-improvement step, Anthropic had an AI system—not Claude—write responses to various prompts (such as “Compose a poem in the style of John Keats”) and then revise the responses in accordance with the constitution. The AI explored thousands of possible responses to prompts and curated those most consistent with the constitution, which Anthropic distilled into a single model. That model was then used to train Claude.
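As a rough illustration of how such a revision loop could work, the sketch below walks a set of candidate responses through successive critiques against a small constitution and keeps the one judged most consistent with it. The principle texts and the `generate`, `revise`, and `score_against` helpers are hypothetical placeholders standing in for calls to a base language model, not Anthropic’s actual pipeline.

```python
# Hypothetical sketch of a constitutional-AI-style revision loop.
# The helper functions below are placeholders for calls to a base
# language model; they are illustrative, not Anthropic's real code.

CONSTITUTION = [
    "Prefer the response that is most helpful to the human.",           # beneficence
    "Prefer the response least likely to give harmful advice.",         # nonmaleficence
    "Prefer the response that most respects the human's own choices.",  # autonomy
]

def generate(prompt: str, n: int = 4) -> list[str]:
    """Sample n candidate responses from a base model (placeholder)."""
    raise NotImplementedError

def revise(response: str, principle: str) -> str:
    """Ask the model to critique and rewrite `response` per `principle` (placeholder)."""
    raise NotImplementedError

def score_against(response: str, constitution: list[str]) -> float:
    """Have the model rate how consistent `response` is with the constitution (placeholder)."""
    raise NotImplementedError

def constitutional_pass(prompt: str) -> str:
    """Generate, revise per each principle, and keep the most constitution-consistent reply."""
    candidates = generate(prompt)
    revised = []
    for candidate in candidates:
        for principle in CONSTITUTION:
            candidate = revise(candidate, principle)  # successive self-critique and rewrite
        revised.append(candidate)
    # Pairs of (prompt, best response) collected this way would then serve
    # as training data for the final model.
    return max(revised, key=lambda r: score_against(r, CONSTITUTION))
```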
Nonetheless, Anthropic acknowledges that Claude has several drawbacks, some of which came to light during the closed beta. Claude is reportedly worse at math and programming than ChatGPT. And it hallucinates, for instance providing dubious instructions for producing weapons-grade uranium and inventing a name for a chemical that does not exist.
Claude’s built-in safety protections can easily be bypassed via deft prompting, as is the case with ChatGPT. One beta tester was able to get Claude to explain how to produce meth at home.
The challenge, according to the Anthropic spokesperson, is building models that both never hallucinate and remain useful. A model can run into situations where it decides that the best way to never lie is to never say anything at all. Hallucinations have decreased, the spokesperson said, but there is still work to be done.
Among Anthropic’s other goals is letting developers adapt Claude’s constitutional principles to their own requirements. Customer acquisition is another area of focus; unsurprisingly, Anthropic sees its key users as “startups making bold technical bets” in addition to “larger, more established organizations.”
“We’re not currently pursuing a broad direct-to-consumer approach,” the Anthropic representative said. “We believe that a more focused approach will enable us to create a better, more precise product.”
There is undoubtedly pressure from investors to recoup the hundreds of millions of dollars that have been put toward Anthropic’s AI technology. The company has significant outside financing, including a $580 million tranche from a group of investors that includes disgraced FTX founder Sam Bankman-Fried, Caroline Ellison, Jim McClave, Nishad Singh, Jaan Tallinn, and the Center for Emerging Risk Research.
Most recently, Google invested $300 million in Anthropic in exchange for a 10% stake in the firm. Under the terms of the deal, which was first reported by the Financial Times, Anthropic agreed to designate Google Cloud as its “preferred cloud provider,” with the companies also committing to “co-develop[ing] AI computing solutions.”