Brad Smith, president and vice chair of Microsoft (MSFT), said on CBS’ “Face the Nation” Sunday that the government needs to move more quickly to regulate AI, arguing that it has more potential to benefit humanity than any previous invention.
“In medicine, drug discovery, and disease diagnosis, even in marshalling the resources of, say, the Red Cross or others in a disaster to find the people who are most vulnerable where buildings have collapsed, it’s almost ubiquitous,” the Microsoft executive noted.
Smith added that AI is becoming more powerful and isn’t as “mysterious” as some people believe.
“If you have a Roomba at home, it finds its way around your kitchen using artificial intelligence to learn what to bump into and how to get around it,” Smith explained.
Responding to worries about AI’s potential power, Smith said that any technology existing today would appear hazardous to someone from an earlier era.
According to Smith, a safety brake should be in place.
AI-related job losses will develop over years, not months, according to Smith.
Most of us will change how we work, Smith said; frankly, we will need to build and acquire a new set of skills.
Smith suggested using AI’s capacity to “detect when that happens” in order to avert situations like the phoney photo of an explosion near the Pentagon.
“We insert what we refer to as metadata; it’s part of the file, and if it’s removed, we can detect that. If there is a modified version, we essentially generate a hash. Think of it as a fingerprint, and then we can search for that fingerprint across the internet,” Smith said, adding that a new strategy should be developed to strike a balance between regulating deepfakes and deceptive advertising and protecting the right to free speech.
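In rough terms, the scheme Smith describes pairs embedded provenance metadata with a content hash that can be looked up elsewhere. The sketch below illustrates that idea only; the JSON sidecar layout and the function names (attach_provenance, verify) are illustrative assumptions, not Microsoft’s actual system or any published standard.

```python
# Illustrative sketch of the provenance idea described above: attach metadata
# to a media file, fingerprint the content with a cryptographic hash, and
# later check whether the metadata was stripped or the content altered.
# The content-plus-JSON-sidecar layout is a simplification for illustration.
import hashlib
import json
from pathlib import Path


def fingerprint(content: bytes) -> str:
    """Return a SHA-256 digest of the raw media bytes (the 'fingerprint')."""
    return hashlib.sha256(content).hexdigest()


def attach_provenance(media_path: str, source: str) -> None:
    """Write a sidecar file recording who produced the media and its hash."""
    content = Path(media_path).read_bytes()
    record = {"source": source, "sha256": fingerprint(content)}
    Path(media_path + ".provenance.json").write_text(json.dumps(record))


def verify(media_path: str) -> str:
    """Report whether provenance is intact, missing, or mismatched."""
    sidecar = Path(media_path + ".provenance.json")
    if not sidecar.exists():
        return "metadata missing (possibly stripped)"
    record = json.loads(sidecar.read_text())
    if fingerprint(Path(media_path).read_bytes()) != record["sha256"]:
        return "content modified since metadata was attached"
    return f"intact, attributed to {record['source']}"
```

In a real system the fingerprint would be registered with a service that other platforms can query, which is what makes it possible to “search for that fingerprint across the internet” rather than only checking a local file.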
Smith said the tech industry must join forces with governments in an international campaign as the US presidential election year approaches and foreign cyber influence operations remain a threat.
Smith is in favour of creating a new government body to oversee AI technology.
Such a body, Smith said, would ensure not only that these models are built in a secure manner but also that they are used in huge data centres, for example, where they can be protected against national security, physical security, and cybersecurity threats.
A six-month moratorium on AI systems more powerful than GPT-4, as called for by Elon Musk and Apple co-founder Steve Wozniak, is not “the answer,” according to Smith.
Smith stated, “Rather than slowing down the pace of technology, which I think is very difficult, and I don’t think China will hop on that bandwagon, let’s use the six months to move more quickly.”
Smith proposed an executive order under which the federal government would purchase AI services only from businesses that are putting AI safety standards into place.
“The world is progressing,” Smith declared. “Let’s make sure that, at the very least, the United States keeps up with the rest of the world.”