The term “AI spring” first appeared about five years ago, following a tech winter, or slowdown, and it is still in bloom today. The best-known technology is ChatGPT, which within a couple of months drastically altered the norms for learning and assessment at work and in schools.
Technologist Simon Elisha told AAP that AI “has struggled with being over-promised, versus what it can actually do, particularly in the earlier versions.” “The potential for these technologies to be useful has really been the focus of the (AI) winters, rather than the market investments in these technologies,” he said.
Over the past 50 years, smaller rule-based models have been used to target marketing campaigns, operate email and search engines, recognize faces, and detect bank account theft. That may sound more banal than the robots taking over the world depicted in movies, but persuasive digital content has the power to alter people’s perceptions.
Generative AI uses enormous amounts of Internet data to produce essays, computer code, music, and graphics. Mr. Elisha stated, “This generative AI approach is interesting because it has a fundamental difference—you don’t have to train it to do what you need it to do.”
“All of a sudden, we have this more all-purpose strategy. It has far more potential than anything we’ve ever seen,” he said. With decades of experience in the field, Mr. Elisha is not only the creator and host of the AWS Podcast but also the top technologist for Amazon Web Services in Australia, New Zealand, and Oceania.
“I’ve spent more than 30 years doing this,” he remarked. “I’ve seen things come and go. I believe there is a chance that this will significantly alter the course of many things.”
He added that using the technology responsibly fell to individuals, governments, and organizations alike. “I believe that when our cognitive processes are questioned, humans find it difficult and need to adjust our behavior,” he said.
The Australian Human Rights Commission states that ethical AI use is crucial to preserving trust and human rights. The commission is worried about algorithmic bias and discrimination, privacy, and the spread of misleading information, all of which could affect life-changing decisions in banking, healthcare, and other sectors.
It has told the government that modernizing Australia’s approach to AI would not be simple, but that it is necessary. Parents and educators frequently grapple with how to handle generative AI that solves difficult arithmetic problems and generates assignments. However, Mr. Elisha knows of one school that tried an alternative strategy.
He explained that the school instructed students to use generative AI for their homework, so they could learn what it can do and how it works. Generative AI is also widely used as a “codebot,” with programmers using tools such as GitHub Copilot to write code considerably faster.
However, a certain amount of responsibility needs to be instilled from the beginning if such coding assistants are to be used properly, according to Mr. Elisha. Anthropic, a start-up company based in the United States, promotes responsible use.
“They talk about constitutional training; they have drafted a constitution that outlines expectations for how the model should respond, and provides guidelines for self-policing and avoiding bias,” Mr. Elisha said. Although it is still early days, the research being done on ethical and responsible AI applications is already highly relevant.