Machine learning algorithms were used to identify targets for more than 85 US airstrikes on February 2, according to Schuyler Moore, chief technology officer of US Central Command, which oversees US military operations in the Middle East. According to the Pentagon, US bombers and fighter aircraft carried out those strikes against seven facilities in Iraq and Syria.
Moore said in an interview with Bloomberg News, “We’ve been using computer vision to identify where there might be threats.” “In the last 60 to 90 days, we’ve definitely had more opportunities to target,” she added, noting that the US is currently searching the region for “an awful lot” of rocket launchers held by hostile forces.
The military has previously acknowledged using computer vision algorithms for intelligence gathering. However, Moore’s remarks are the clearest confirmation yet that the US military is using the technology to identify enemy targets that were subsequently struck.
The US strikes were part of the Biden administration’s response to the killing of three US service members in an attack on a base in Jordan on January 28. The US blamed Iran-backed militias for that attack. According to the Pentagon, the strikes destroyed or damaged rockets, missiles, drone storage facilities, and militia operations centres, among other targets.
According to Moore, artificial intelligence (AI) tools have also helped locate surface vessels in the Red Sea and rocket launchers in Yemen, some of which Central Command, or Centcom, said it destroyed in a series of weapons strikes in February. Iran-backed Houthi forces in Yemen have repeatedly launched rocket attacks on commercial ships in the Red Sea.
Project Maven
The targeting algorithms were developed under Project Maven, a Pentagon programme launched in 2017 to support defence intelligence and speed the adoption of AI and machine learning across the Defence Department. Early prototypes focused on the US military’s campaign against militants affiliated with the Islamic State.
Over the past year, US forces in the Middle East have been testing computer vision algorithms that can locate and identify targets from satellite imagery and other data sources, according to Moore, who is based at Centcom headquarters in Tampa, Florida.
They later began deploying the algorithms in live operations following Hamas’s October 7 attack on Israel and the subsequent military response in Gaza, which heightened regional tensions and prompted attacks by Iran-backed militants. The US and the EU have designated Hamas a terrorist organisation.
“Everything changed on October 7th,” Moore said. After a year of digital exercises, she said, US forces were able to make “a pretty seamless shift” into using Maven. “We immediately shifted into high gear and a much higher operational tempo than we had previously,” she said.
Moore made clear that Maven’s AI capabilities are used only to help identify potential targets, not to verify them or to deploy weapons against them.
She said that Centcom experiments with an AI recommendation engine late last year showed such systems “frequently fell short” of humans when it came to recommending the best course of action or weapon to deploy.
Humans constantly review the AI targeting recommendations, she said. US operators take their responsibilities, and the risk that AI could make mistakes, seriously, she added, and “it tends to be pretty obvious when something is off.”
“An algorithm never just runs, reaches a conclusion, and then moves on to the next step,” she said. “Human verification occurs at the conclusion of each AI-related step.”