At OpenAI’s DevDay event in San Francisco today, CEO Sam Altman revealed the most recent version of the company’s language model, GPT-4 Turbo. Six significant improvements are included in this release, all aimed at enhancing user interaction, expanding the model’s functionality, and saving developers money.
Additionally, OpenAI announced customized versions of ChatGPT, called GPTs, which let users create and share AI tailored to particular tasks or hobbies without any coding knowledge. These custom AI models will be open for public creation and sharing, and they can help with daily tasks, learning, or entertainment.
In addition, OpenAI is introducing the GPT Store, where user-created GPTs can be featured and monetized. The company emphasizes that GPTs are designed with user privacy and security in mind, giving users control over their data and enforcing usage policies to prevent the spread of harmful content. OpenAI also noted that GPTs will gradually become more capable and eventually be able to act as "agents" in the real world, while the implications for society are carefully weighed.
According to Altman, the changes include a significant increase in context length, greater developer control through new features such as reproducible outputs and JSON mode, and updated world knowledge extending through April 2023. He also announced a commitment to customization and scalability, with higher rate limits and noticeably lower pricing, as well as new modalities in the API, including vision and text-to-speech capabilities.
Together with Altman, Microsoft CEO Satya Nadella emphasized the critical role Azure plays in providing the cutting-edge infrastructure these large-scale AI models require. The two also emphasized a shared goal of democratizing access to AI while maintaining safety and responsible use. The collaboration opens a new chapter in a quickly changing industry and marks a determined effort to push the limits of what AI can do.
Let’s examine the six GPT-4 Turbo features that Altman emphasized in his talk and see how they may affect customer experience managers and marketers:
GPT-4 Turbo: Innovating AI Interactions with Deep Context
GPT-4 Turbo can handle up to 128,000 tokens of context, or roughly 300 pages of a typical book. This is a significant increase over the previous 32,000-token maximum, enabling longer conversations and deeper engagement. According to Altman, the model is also more accurate over long contexts.
The capacity to handle up to 128,000 tokens makes it possible to sustain longer customer conversations. This is useful for interactive marketing campaigns and customer care bots, where longer sessions and deeper context are advantageous.
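To put the 128,000-token figure in perspective, here is a back-of-the-envelope estimate. The conversion factors are rough rules of thumb, not official figures: about 4 characters per token for English text, and about 1,800 characters per standard manuscript page.

```python
# Rough illustration of what a 128,000-token context window holds.
# Assumptions (rules of thumb, not official figures): ~4 characters per
# token for English text, ~1,800 characters per standard manuscript page.

CONTEXT_TOKENS = 128_000
CHARS_PER_TOKEN = 4       # common heuristic for English text
CHARS_PER_PAGE = 1_800    # standard manuscript page, assumed

approx_chars = CONTEXT_TOKENS * CHARS_PER_TOKEN   # ~512,000 characters
approx_pages = approx_chars / CHARS_PER_PAGE      # ~280-300 pages

print(f"~{approx_chars:,} characters, roughly {approx_pages:.0f} pages")
```

The estimate lands near the "300 pages of a typical book" figure cited at DevDay, though actual token counts vary with the text and tokenizer.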
Accuracy and Organization: Improved Control over AI Responses
Developers now have more control over the model’s responses and outputs. One of the new features is JSON mode, which simplifies API integrations by guaranteeing responses in valid JSON format. The model can call multiple functions at once, has improved function-calling capabilities, and offers reproducible outputs for consistent results with the same parameters. Furthermore, an upcoming feature will make log probabilities visible in the API.
Thanks to features like reproducible outputs and JSON mode, marketing and customer experience applications can deliver more standardized, structured interactions. As a result, chatbots and other AI-powered communication tools can behave more reliably.
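As a concrete sketch, here is how JSON mode and reproducible outputs might be requested with the OpenAI Python SDK (v1.x). The model name, seed value, and prompt are illustrative; actually making the call requires an API key, so the request is shown as a parameter dictionary.

```python
# Sketch of a GPT-4 Turbo request using JSON mode and a fixed seed for
# reproducible outputs. Seed value and prompts are illustrative only.
# Note: JSON mode requires the word "JSON" to appear in the messages.

request = {
    "model": "gpt-4-1106-preview",   # GPT-4 Turbo preview model at DevDay
    "seed": 42,                       # same seed + params -> consistent outputs
    "response_format": {"type": "json_object"},  # JSON mode: valid JSON guaranteed
    "messages": [
        {"role": "system",
         "content": "Reply in JSON with keys 'sentiment' and 'summary'."},
        {"role": "user",
         "content": "The support agent resolved my issue in two minutes!"},
    ],
}

# With a configured client, the dict would be passed as keyword arguments:
# from openai import OpenAI
# client = OpenAI()  # reads OPENAI_API_KEY from the environment
# completion = client.chat.completions.create(**request)
# data = json.loads(completion.choices[0].message.content)
```

For a customer care bot, this combination means the response can be parsed directly into application code without brittle string handling, and regression tests can replay the same seed to check for consistent behavior.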
A World of Information: Updated Knowledge for GPT-4 Turbo
GPT-4 Turbo’s world knowledge now extends through April 2023, and OpenAI intends to keep it updated on an ongoing basis. ChatGPT initially had a September 2021 knowledge cutoff, which was later moved to September 2022. A new retrieval capability lets developers integrate external knowledge from documents or databases, improving the accuracy of the model’s responses.
With world knowledge current through April 2023, customer care bots can deliver more accurate and timely information, which is crucial for maintaining trust and credibility in customer interactions.
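To illustrate the retrieval pattern, here is a toy sketch: pick the most relevant snippet from a small knowledge base and prepend it to the prompt. OpenAI's hosted retrieval tool handles this automatically over uploaded files; this simple keyword scorer (and the sample policy text) exists purely to show the general idea.

```python
# Toy sketch of the retrieval pattern: find the knowledge-base snippet
# most relevant to a question and prepend it to the model prompt.
# The documents and scoring method are illustrative, not OpenAI's.

docs = {
    "returns":  "Items may be returned within 30 days with a receipt.",
    "shipping": "Standard shipping takes 3-5 business days.",
    "warranty": "All products carry a one-year limited warranty.",
}

def retrieve(question: str) -> str:
    """Score each doc by word overlap with the question; return the best match."""
    q_words = set(question.lower().split())
    best = max(docs, key=lambda k: len(q_words & set(docs[k].lower().split())))
    return docs[best]

def build_prompt(question: str) -> str:
    """Augment the question with retrieved context before sending it to a model."""
    context = retrieve(question)
    return f"Answer using this context:\n{context}\n\nQuestion: {question}"

print(build_prompt("How long does standard shipping take?"))
```

In practice, word overlap would be replaced by embedding similarity, but the shape is the same: retrieved context grounds the model's answer in current, company-specific information rather than its training cutoff.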
Going Beyond Text: Including Sound and Vision in AI Conversations
GPT-4 Turbo includes vision capabilities and a text-to-speech model. For instance, GPT-4 Turbo can now accept images through the API for a variety of tasks, powering applications such as assistance for visually impaired users. Images can be generated programmatically with DALL-E 3. The text-to-speech model produces natural-sounding audio in a choice of preset voices, improving accessibility and use cases like language learning.
The addition of vision and text-to-speech makes customer experiences more interactive and accessible. By producing more engaging material, such as customized images or realistic voice responses, marketers can improve the overall customer experience.
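As a sketch of the new modalities, here are the request shapes for the vision and text-to-speech endpoints announced at DevDay, using the OpenAI Python SDK (v1.x). The image URL, prompt text, and voice choice are placeholders; real calls require an API key, so the requests are shown as parameter dictionaries.

```python
# Sketch of request shapes for the vision and text-to-speech endpoints.
# URLs, prompts, and the voice selection are illustrative placeholders.

vision_request = {
    "model": "gpt-4-vision-preview",   # GPT-4 Turbo with image inputs
    "messages": [{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Describe this product photo for a catalog listing."},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/product.jpg"}},
        ],
    }],
    "max_tokens": 300,
}

speech_request = {
    "model": "tts-1",    # text-to-speech model
    "voice": "alloy",    # one of several preset voices
    "input": "Thanks for contacting support! How can I help you today?",
}

# With a configured client:
# client.chat.completions.create(**vision_request)   # image understanding
# client.audio.speech.create(**speech_request)        # returns audio bytes
```

For a marketing team, the vision endpoint could auto-caption product imagery while the speech endpoint turns written support replies into audio, both through the same API surface.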