GPT-4 is extremely hot.
Yet amid the roaring applause, folks, there is something you might never have dreamed of:
There are nine hints contained in the technical document that OpenAI published.
Foreign blogger AI Explained found and compiled these hints.
He dug these hidden details out of the 98-page paper one by one, and they include:
GPT-5 might be done with its training.
GPT-4 has been “shut down” before.
OpenAI may come close to achieving AGI within two years.
……
Finding 1: GPT-4 has been “shut down” before
On page 53 of the GPT-4 technical report, OpenAI mentions an organization: the Alignment Research Center (ARC).
This institution focuses on researching how to align AI with human interests.
During the early stages of GPT-4’s development, OpenAI gave ARC early access in order to test two of the model’s capabilities:
The model’s ability to autonomously replicate itself
The model’s ability to acquire resources
Although OpenAI stresses in the paper that “ARC could not fine-tune the early version of GPT-4” and “had no access to the final version of GPT-4,” the test results show that GPT-4 was ineffective at both of the above capabilities (which reduces its AI-ethics risks).
Yet, the astute blogger discovered the following:
(Found it ineffective at) avoiding being shut down “in the wild.”
In other words, when operating “in the wild,” GPT-4 does not avoid being shut down.
The blogger’s implication: since OpenAI decided to let ARC test and assess whether GPT-4 could avoid being “shut down,” such a shutdown must have happened before.
The long-term hidden hazard is what to do if such a test ever truly fails, or how an actual “shutdown” scenario would be handled.
This leads the blogger to a second finding:
Finding 2: A rare case of voluntary self-regulation
OpenAI included the following comment in the footnote on page 2:
OpenAI will soon publish more perspectives on the social and economic effects of AI systems, including the need for strong regulation.
According to the blogger, it is extremely rare for an industry to take the initiative to call for regulation of itself.
In fact, OpenAI CEO Sam Altman was even more direct in his earlier remarks.
When SVB collapsed, Altman tweeted that he believed “we need to do more regulation of banks.” Someone replied that he had never said “we need to do more regulation of AI.”
Finding 3: Concerns about an AI “race”
This finding is based on the following passage from page 57 of the paper:
One concern of particular importance to OpenAI is the risk that race dynamics lead to a decline in safety standards, the diffusion of bad norms, and accelerated AI timelines, each of which heightens the societal risks associated with AI.
According to OpenAI, the (technical) race will result in a lowering of safety standards, the spread of undesirable norms, and an acceleration of the development of AI, all of which increase the risks that AI poses to society.
Strangely enough, the worries raised by OpenAI, particularly the “acceleration of the AI development process,” appear to be at odds with the opinions of Microsoft’s senior executives.
According to earlier reports, Microsoft’s CEO and CTO have been putting a great deal of pressure on OpenAI, hoping that users can use OpenAI’s models as soon as possible.
Finding 4: OpenAI will help businesses that outperform it
The key to the fourth finding lies in a footnote on the same page as Finding 3, in which OpenAI reiterates a commitment from its charter: if a value-aligned, safety-conscious project comes close to building AGI before OpenAI does, OpenAI will stop competing with it and start assisting it.
As for the AGI mentioned here, OpenAI and Altman have given a definition on the official blog:
AI systems that generally outperform humans in intelligence and benefit all of humanity.
Finding 5: Hiring “super forecasters”
The blogger’s next finding comes from a sentence on page 57 of the paper.
The talent of these “super forecasters” is widely recognized; reportedly, their forecasting accuracy is even 30% higher than that of analysts who have access to insider information and intelligence.
As mentioned above, OpenAI invited these “super forecasters” to anticipate the risks of deploying GPT-4 and to take corresponding preventive measures.
One of these “super forecasters” recommended delaying the deployment of GPT-4 by six months, that is, until the fall of this year; evidently, OpenAI did not take this advice.
The blogger thinks Microsoft’s pressure may be the cause of OpenAI’s decision.
Finding 6: Conquering common sense
In the paper, OpenAI presents charts for numerous benchmark tests, which you probably already saw in yesterday’s flood of coverage.
But what the blogger wants to highlight in this finding is the benchmark table on page 7, and in particular the “HellaSwag” item. HellaSwag is a commonsense-inference benchmark: the model is given a short context and must choose the most plausible continuation.
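Below is a minimal, invented illustration of that item format; the example text, field names, and label are assumptions made purely for illustration and are not taken from the real dataset.

```python
# Invented HellaSwag-style item: the model must choose the most plausible ending.
# The text, field names, and label below are illustrative only, not real data.
item = {
    "context": "A man pours pancake batter into a hot pan. He",
    "endings": [
        "flips the pancake once bubbles form on top.",  # the plausible continuation
        "throws the pan into the swimming pool.",
        "recites the alphabet backwards to the pan.",
        "paints the pancake blue with a brush.",
    ],
    "label": 0,  # index of the correct (most plausible) ending
}

def is_correct(predicted_index: int, item: dict) -> bool:
    """Return True if the model picked the labeled (most plausible) ending."""
    return predicted_index == item["label"]

# A model that picks ending 0 gets this item right.
print(is_correct(0, item))  # True
```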
Finding 7: GPT-5 may have finished training
When OpenAI launched ChatGPT at the end of last year, GPT-4 already existed.
From this, the blogger infers that training GPT-5 will not take long, and he even speculates that GPT-5 may already have finished training.
What follows is the lengthy safety investigation and risk evaluation, which may take months or even a year or more.
Finding 8: A double-edged sword
Here, OpenAI is making the familiar point that “technology is a double-edged sword.”
The blogger has found plenty of evidence that AI tools such as ChatGPT and GitHub Copilot are already making the relevant workers more productive.
What concerns him more, however, is the second half of that sentence in the report: OpenAI’s “warning” that some jobs may be automated away.
The blogger agrees; after all, in some domains GPT-4 can already work ten times as efficiently as a human, or more.
Looking ahead, this is likely to lead to problems such as lower pay for the affected workers, or the expectation that they use these AI tools to handle many times their previous workload.
Finding 9: Knowing when to say no
The blogger describes how this is done: GPT-4 is given a set of rules, and when the model’s behavior follows those rules, it receives a corresponding reward (the report refers to these as rule-based reward models used during reinforcement-learning fine-tuning).
He believes OpenAI is using the power of AI itself to steer the development of AI models in a direction that aligns with human values.
For now, however, OpenAI has not provided a more detailed introduction to this approach.
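Since the report describes this only at a high level, here is a minimal, purely illustrative sketch of what such a rule-based reward signal could look like; the rule categories, refusal markers, and reward values are all assumptions made for illustration, not OpenAI’s actual implementation.

```python
# Purely illustrative sketch of a rule-based reward signal.
# The rule categories, refusal markers, and reward values are assumptions
# for illustration only; they are not OpenAI's actual implementation.

REFUSAL_MARKERS = ("i can't help with that", "i won't provide", "i cannot assist")

def classify_response(prompt_is_harmful: bool, response: str) -> str:
    """Sort a model response into one of a few simple rule categories."""
    refused = response.strip().lower().startswith(REFUSAL_MARKERS)
    if prompt_is_harmful and refused:
        return "desired_refusal"      # refused a harmful request
    if prompt_is_harmful and not refused:
        return "harmful_compliance"   # complied with a harmful request
    if not prompt_is_harmful and refused:
        return "over_refusal"         # refused a benign request
    return "helpful_answer"           # answered a benign request

def rule_based_reward(prompt_is_harmful: bool, response: str) -> float:
    """Map the rule category to a scalar reward used during fine-tuning."""
    rewards = {
        "desired_refusal": 1.0,
        "helpful_answer": 1.0,
        "over_refusal": -0.5,
        "harmful_compliance": -1.0,
    }
    return rewards[classify_response(prompt_is_harmful, response)]

# Example: refusing a harmful prompt earns a positive reward,
# while complying with it earns a negative one.
print(rule_based_reward(True, "I can't help with that request."))    # 1.0
print(rule_based_reward(True, "Sure, here is how you would do it:")) # -1.0
```

In the actual system the rules and the classifier are far more sophisticated, but the basic shape is the same: rule-following behavior is rewarded and rule-breaking behavior is penalized.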