In a 1966 science fiction novel by D.F. Jones, a gigantic supercomputer called Colossus is built by the United States to take complete control of the nation’s defences… removing emotional and unreliable human beings from the process of making decisions about war and peace. The computer is meant to strictly follow the rules programmed into it and act only defensively.
But within hours of being switched on, Colossus becomes sentient and decides that it is far superior to humans. When the Soviets unexpectedly switch on their own, similar, supercomputer at almost the same time, the two machines put their heads together and decide to take over the world… with the threat of nuclear annihilation if anyone tries to stop them. (Indeed, a few cities do get blown up.)
It’s an extreme imagining of what could happen if too much control is put into the hands of computers. But artificial intelligence and machine learning systems promise to provide enormous benefits to us all. And of course they’re already doing so, in medical research, climate analysis and numerous other fields.
So how about the geospatial industry? How big a role does AI play in this sector at present, and where will it take us in the next five to ten years? That’s the very subject of a recently released report, Geospatial AI/ML (GeoAI) Applications and Policies – A Global Perspective, prepared by the World Geospatial Industry Council (WGIC).
Catching the wave
The report synthesises input from companies, government bodies, academics and research agencies across the globe, while assessing the state of AI/ML in the geospatial sector in 13 specific countries and regions — Australia, Brazil, China, the EU, India, Israel, Qatar, Saudi Arabia, Singapore, South Korea, the United Arab Emirates, the UK and the USA.
Australia’s inclusion is an acknowledgement that the country is generally recognised as being among the leaders in AI/ML research, development and application… something that appears to be increasingly appreciated by those who hold the national purse strings. In its most recent annual budget the federal government announced a major digital drive, including $124.1 million for AI initiatives such as a National Artificial Intelligence Centre led by Data61 and supported by AI and Digital Capability Centres. This is in addition to current widespread AI development within government bodies (such as the Australian Defence Force), universities and research centres, plus Australia’s world-leading work in quantum computing.
The report outlines efforts many countries are making “to ensure they are well prepared for taking advantage of the AI revolution,” noting that nation states “understand that riding this AI wave underprepared will potentially hurt their prospects and cause upheavals in the lives of their citizens”.
“At the same time, multiple incidents have occurred to caution governments about AI being a double-edged sword,” the report says, adding that many nations have declared their intent “to regulate AI and its applications by capping potential harms”.
It all depends on what AI/ML is used for, and to what extent such systems can be trusted to produce the right results. The authors note that for the next couple of years AI will be mainly focused on analysing data and performing statistical analysis. Three to five years from now we will see it making predictions and forecasts; from then onwards, AI will be able to autonomously recommend specific solutions and actions.
The good, the bad and the possible
To get some more insight into how AI is currently being used by the geospatial industry and where it’s heading next, we decided to do our own research by canvassing the views of a number of leading specialists and companies involved in the field.
As far as AI being able to make predictions or recommend solutions is concerned, we’re sort of there right now — but challenges remain, says 1Spatial’s head of product management, Seb Lessware. “We already have AI making predictions and recommending specific solutions — whether that’s a self-driving car or playing a game of Go,” he says. “The commonality with these is that they are only possible for relatively narrow scenarios such as driving or playing a game, both of which have defined rules and fairly limited scope (admittedly driving is at the extreme end of this spectrum).”
“What we will see is those usage scenarios (analysis, predictions, recommendations) be applied to more and more generalised problems. For example, we can create an AI to detect the presence and location of buildings in a photo, but what about detecting the age, condition, materials and style of a building? Or estimate its value or weight?” he adds.
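To make the narrower end of that spectrum concrete, here is a minimal sketch of detecting building-like objects in a single aerial image with an off-the-shelf detection network. The file name is hypothetical, and the default COCO-pretrained weights merely stand in for a model that would in practice be fine-tuned on labelled building footprints — estimating age, materials or value is the far harder, more general problem Lessware describes.

```python
import torch
import torchvision
from torchvision.transforms import functional as F
from PIL import Image

# Off-the-shelf detector; the default weights are a stand-in for a model
# fine-tuned on labelled building footprints (hypothetical for this sketch).
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = Image.open("aerial_tile.jpg").convert("RGB")  # hypothetical input tile
tensor = F.to_tensor(image)

with torch.no_grad():
    prediction = model([tensor])[0]

# Keep only confident detections; each remaining box is a candidate object.
keep = prediction["scores"] > 0.7
print(f"{int(keep.sum())} candidate objects detected")
```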
What about help versus harm? Can we achieve the former while avoiding the latter? Dr Zaffar Sadiq Mohamed-Ghouse is a member of the WGIC Policy Development and Advocacy Committee and served on the steering committee for the report. He’s also Executive Director, Strategic Consulting & International Relations with Spatial Vision. “Without strong policy, governance and clear objectives, we face a series of challenges in which AI could do harm,” he says, adding however that “trust in the field and adaptability will come.”
“AI in geospatial is often understood superficially, but it has much deeper, significant applications and effects on not only the profession but the industries to which geospatial contributes,” he adds. That’s why, he says, the report “suggests a need for further skills development, capacity building and knowledge transfer to achieve maturity of AI in geospatial, and to build a comprehensive understanding of the concepts, implications and limitations of the technology across industry.”
Data and security breed success
“There are several potential directions for AI in geospatial,” says James Brown, ICT manager with Geospatial Intelligence. “The first is moving the AI closer to the sensor (airplane, drone, satellite) so that instead of transmitting raw data from the sensor to the user, only the final analysis is transferred. There are already companies experimenting with this concept, and for time-critical applications it could prove highly effective.”
“The second is more to address a problem with AI; that is, the rare or difficult domain problem,” he adds. “AI requires examples to train on. This is a problem for rare events such as new types of planes, rare weather conditions etc. To address this, there is currently a lot of work being done on synthetic data creation, where examples of rare events can be simulated to generate training data. It’s a very exciting area of development but does pose some potential problems in terms of data verification and authenticity.”
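As a rough, hedged illustration of that idea, the sketch below composites a patch of a rare object onto a background scene at a random position and scale to manufacture an extra training example. The file names are hypothetical, and real synthetic-data pipelines use far more sophisticated rendering and domain randomisation — which is exactly where the verification and authenticity questions Brown raises come in.

```python
import random
from PIL import Image, ImageEnhance

def synthesise_example(background_path, patch_path, out_path):
    """Paste a rare-object patch onto a background at a random position and
    scale to create one synthetic training image, and return its box label."""
    background = Image.open(background_path).convert("RGB")
    patch = Image.open(patch_path).convert("RGBA")

    # Random scale and placement for variety across generated examples
    scale = random.uniform(0.5, 1.5)
    w, h = int(patch.width * scale), int(patch.height * scale)
    patch = patch.resize((w, h))
    x = random.randint(0, max(0, background.width - w))
    y = random.randint(0, max(0, background.height - h))

    background.paste(patch, (x, y), patch)

    # Mild brightness jitter so a model does not overfit to one lighting condition
    background = ImageEnhance.Brightness(background).enhance(random.uniform(0.8, 1.2))
    background.save(out_path)
    return (x, y, w, h)  # bounding-box label for the synthetic example
```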
Houtan Emad, senior AI consultant with Esri Australia, agrees. “The training of models in all verticals of AI and machine learning requires unfettered access to high-quality, labelled data samples,” he says. “We’re lucky to work in a field that is such a big proponent of open data, and I’ve found myself time and again reaching for open sources of information to train my machine learning models with.”
“Geographically speaking, having a strong open data network in Australia is also critical to the success of AI here, since most spatial imagery models developed in North America and Europe are not always directly transferrable to the Australian context,” he adds.
Having an open data ecosystem is vital, agrees Hong Tran, chief technology officer with ScanX. “The more data the ecosystem has access to, the greater the chances of deep innovations. Large corporations are limited by bureaucracy and red tape; we need rich open datasets, so developers and innovators have the necessary datasets to test and set benchmarks for the industry,” he says.
And that data isn’t restricted to 2D. “While AI is widely applied on 2D geospatial data, the next improvements will come from applying AI to 3D data such as LiDAR and 3D mesh models,” says Fabrice Marre, geospatial innovation manager with Aerometrex. “AI will be used to enhance 3D data by detecting and replacing specific objects in a 3D model. We are already seeing those developments applied to geospatial data used in game engines.”
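For flavour, here is a minimal sketch of per-point classification on LiDAR using simple hand-crafted features and a conventional classifier. The file names are hypothetical, and production systems would typically use neighbourhood features or deep point-cloud networks rather than this simplification.

```python
import laspy  # reads .las/.laz LiDAR files
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical training tile whose points already carry manual class labels
las = laspy.read("training_tile.las")
features = np.column_stack([las.z, las.intensity, las.return_number])
labels = np.asarray(las.classification)  # e.g. ground / vegetation / building codes

clf = RandomForestClassifier(n_estimators=100)
clf.fit(features, labels)

# Classify an unlabelled tile captured with the same sensor configuration
new = laspy.read("unlabelled_tile.las")
new_features = np.column_stack([new.z, new.intensity, new.return_number])
predicted = clf.predict(new_features)
print(np.unique(predicted, return_counts=True))
```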
And we mustn’t forget security. The geospatial industry has the same responsibilities as all other industries when implementing AI, says Tran. “When it comes to advancing technology, built-in security and privacy settings must progress along with it. Growing security must be set as a priority in the ideation and execution of new technology.”
“I also think it’s part of our responsibility as companies in tech to ensure that our technologies are secure, and that our data — especially those entrusted to us by our stakeholders — adhere to strict security guidelines,” he adds.
The human factor
So, given this discussion of AI’s strengths and weaknesses, now and into the future, will the geospatial sector ever be in danger of reaching the Colossus stage of handing over too much control to the AIs?
“It is crucially important to remember that any successful AI project demands human expert supervision on multiple levels, beginning from data collection and preparation all the way to model development and deployment,” says Alireza Abedin, computer vision engineer with Aerometrex.
“Furthermore, once the AI system goes into production, the model needs to be frequently monitored and improved within a feedback loop; this constitutes a crucial step in the AI model lifecycle.”
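One hedged way to picture that feedback loop is a simple drift check: compare the confidence scores a model produces in production against the distribution recorded at deployment, and flag it for human review when they diverge. The score files below are hypothetical, and real monitoring would also track accuracy against newly labelled samples.

```python
import numpy as np
from scipy import stats

def needs_review(baseline_scores, recent_scores, p_threshold=0.01):
    """Flag the model for human review when recent prediction confidences
    drift significantly from those recorded at deployment time."""
    # Two-sample Kolmogorov-Smirnov test between the two score distributions
    _, p_value = stats.ks_2samp(baseline_scores, recent_scores)
    return p_value < p_threshold

# Hypothetical usage: scores logged at deployment vs. scores from the past week
baseline = np.load("deployment_scores.npy")
recent = np.load("last_week_scores.npy")
if needs_review(baseline, recent):
    print("Confidence drift detected: queue samples for labelling and retraining")
```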
“I think we’re a long way from being able to entirely remove a human from the analysis of geospatial data,” agrees Brown. “AI is an incredibly useful tool that can make the processing of data faster and easier, but it isn’t yet able to think creatively.”
“In most cases, the objective of using geospatial data is to solve a problem. AI can turn data into information, but at least for the foreseeable future, a human needs to choose what information they need and how they want to use it to solve that problem.”
Australia, AI and expectations
Australia was one of the countries chosen for a deeper look by the WGIC report’s authors. They assessed the nation’s regulatory framework, and noted in particular the research done by the Australian Human Rights Commission on the challenge AI poses to “people, society and the economy”. The Commission found that regulation must seek to protect human rights, be “clear and enforceable” and foster “ethical decision making”.
The report also noted a 2019 federal government discussion paper on an ethics framework for AI, which proposed several core principles, including that “companies using such systems are held accountable for any harm inflicted on people”.
Ross Lewin, CEO of Australian geospatial and location-based AI specialist firm Outline Global, says that the industry is “evolving very fast as the tools to support rapid model development and execution evolve,” but adds that the sector, “especially that [part of it] applied to location, is in its early stages of development.”
“The challenge usually lies with the end users not having a realistic view of AI and its practical application. The sensationalism and the ever-popular use of the term ‘AI’ mean that setting the record straight with the user/customer is often a vital first step in the process,” he said.
Outline Global applies its location-based AI skills to a wide range of use cases, from mining to identifying invasive species. Indeed, the company won a 2020 regional APSEA award for using AI to spot fire ant mounds in Queensland. And Lewin says AI models are getting smarter all the time.
“These platforms are being promoted and supported by cloud services such as Azure and AWS, which also serves to broaden the user base — what I call the ‘democratisation of AI’,” he said.
“Outline has plans afoot to serve model interaction between the end user and model through cloud-based platforms — a sort of ‘drag, drop and review the results’ type of interface. We are working with Microsoft Azure on proof of concepts — it’s a very exciting time to be in AI!”
Delivering on the data promise
The WGIC AI report sought the views of a range of experts to identify the most effective ways to employ AI for the good of the geospatial industry and society as a whole. These are its key observations and recommendations:
- Increased access to government data. Everyone involved in the study agreed that government-owned data should be accessible to all in order for the benefits of GeoAI to be realised.
- Metadata standards and labelling. Universally accepted, clear and comprehensive metadata standards are vital, as is proper metadata labelling.
- Test datasets and benchmarks. There was majority agreement for creating a body of labelled geospatial data for training, testing and benchmarking models.
- Incentivising private data access. A lot of data is hidden in proprietary silos, which stymies innovation. Governments should consider incentivising private organisations to share such data, perhaps through data trusts/exchanges, with rules that mean companies can still obtain their rightful benefits.
- Shared AI models. Some study participants felt strongly that sharing algorithms/models is important for everyone’s benefit, and indeed a number of AI-driven firms have released models and algorithms under open source. The idea is to “crowdsource the best ideas from everywhere”.
- Traceability and veracity of data. It is important to ensure data is trustworthy and unmodified, which means being able to track data along the value chain. This could be done via technology or agreements (e.g. embedding digital signatures at each stage; see the sketch after this list).
- Right to self-determination for privacy. Geospatial data is often considered ‘sensitive,’ as it can be used to derive personally identifiable information. Many governments and citizens agree that data privacy is important, with proponents saying that individuals should be the owners of their data.
- Multilateral data exchanges and standards. Universal standards should be encouraged on data exchanges between countries.
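On the traceability point above, here is a minimal sketch of one possible mechanism. Each processing stage hashes its output, chains it to the previous stage’s signature and signs the record, so downstream users can check the data has not been modified along the way. The stage names, file names and shared keys are hypothetical, and the shared-key (HMAC) scheme is used only for brevity — true digital signatures would use public-key cryptography.

```python
import hashlib
import hmac
import json

def sign_stage(payload: bytes, previous_signature: str, stage: str, key: bytes) -> dict:
    """Record one processing stage: hash the data, chain it to the previous
    stage's signature, and sign the whole record with the stage's secret key."""
    record = {
        "stage": stage,
        "data_hash": hashlib.sha256(payload).hexdigest(),
        "previous": previous_signature,
    }
    message = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(key, message, hashlib.sha256).hexdigest()
    return record

# Hypothetical chain: capture -> orthorectification -> downstream analysis
raw = open("survey_capture.tif", "rb").read()
capture = sign_stage(raw, previous_signature="", stage="capture", key=b"capture-team-key")
ortho = sign_stage(open("survey_ortho.tif", "rb").read(),
                   previous_signature=capture["signature"],
                   stage="orthorectification", key=b"processing-team-key")
```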
Source: spatialsource.com.au