Adopting artificial intelligence (AI) expands an organization's attack surface and threat vectors. According to a Gartner report, 41% of surveyed firms had already experienced an AI-related security or privacy incident. Unfortunately, many businesses are not prepared to manage the risks that AI introduces.
You cannot mitigate risks you do not know about. According to a recent Gartner poll of chief information security officers (CISOs), most enterprises have not considered the new security and business risks posed by AI, or the additional controls they must put in place to manage those risks. AI demands new types of risk and security management measures and a framework for mitigation.
Gartner recommends that security and risk leaders focus on five key areas to manage AI risk and security within their organizations.
- Measure the level of exposure to AI
Unlike traditional software systems, machine learning (ML) models are opaque to most users, and often even to the most expert specialists. While data scientists and model developers generally understand what their ML models are trying to accomplish, they cannot always explain the internal structure or the algorithmic techniques the models use to process data.
This lack of understanding severely limits an organization's ability to manage AI risk. The first step in AI risk management is to inventory all AI models used in the organization, whether they are components of third-party software, built in-house, or accessed via SaaS applications. This should include identifying interdependencies among models. Then rank the models by operational impact, with the understanding that risk management controls can be applied over time based on those priorities.
With the inventory in place, the next step is to make the AI models as explainable or interpretable as possible. "Explainability" means the capacity to produce details, reasons, or interpretations that clarify a model's functioning for a particular audience. This gives risk and security managers the context they need to manage and mitigate the business, social, liability, and security risks posed by model outputs.
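There is no single standard for an AI model inventory, but even a lightweight registry that records each model's origin, dependencies, and business impact gives risk teams something concrete to prioritize against. The following is a minimal sketch using Python dataclasses; the fields, impact scale, and example entries are illustrative assumptions, not a prescribed schema.

```python
from dataclasses import dataclass, field

@dataclass
class ModelRecord:
    name: str
    origin: str                       # "in-house", "third-party", or "saas"
    owner: str
    business_impact: int              # illustrative scale: 1 (low) to 5 (critical)
    depends_on: list[str] = field(default_factory=list)

# Hypothetical inventory entries covering all three sourcing categories.
inventory = [
    ModelRecord("churn_model", "in-house", "data-science", 4),
    ModelRecord("fraud_scorer", "third-party", "risk-team", 5, depends_on=["churn_model"]),
    ModelRecord("doc_classifier", "saas", "operations", 2),
]

# Rank by operational impact so controls can be rolled out in priority order.
for record in sorted(inventory, key=lambda r: r.business_impact, reverse=True):
    print(f"{record.business_impact} | {record.name} ({record.origin}) -> {record.depends_on}")
```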
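Explainability techniques range from inherently interpretable models to post hoc methods applied to existing ones. One widely available post hoc method is permutation feature importance, sketched below with scikit-learn; the random forest and synthetic data are stand-ins for an inventoried model, not part of any specific recommendation.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Stand-in for an inventoried model: a classifier trained on synthetic data.
X, y = make_classification(n_samples=2_000, n_features=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance asks: how much does shuffling each feature hurt
# held-out accuracy? Large drops identify the features the model relies on,
# giving risk managers a first window into otherwise opaque behavior.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: {importance:.3f}")
```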
- Drive awareness through an AI risk education campaign
Employee awareness is a key component of AI risk management. First, get all participants on board, including the CISO, the chief privacy officer, the chief data officer, and legal and compliance leaders, and reset their expectations for AI. Make clear that AI is not "like any other application": it poses unique risks and requires specific controls to mitigate them. Then engage business stakeholders to build their understanding of the AI risks that must be managed.
Work with these stakeholders to determine the best way to build AI knowledge over time and across teams. For example, see whether a course on AI fundamentals can be added to the company's learning management system. Collaborate with colleagues in application and data security to foster AI awareness among all organizational constituents.
- Eliminate AI data exposure through a privacy program
According to a recent Gartner survey, security and privacy are considered the top barriers to AI implementation. Adopting data protection and privacy measures can effectively eliminate the exposure of internal and shared AI data.
A range of techniques makes it possible to access and share essential data while still meeting privacy and data protection requirements. Determine which data privacy approach, or combination of approaches, makes the most sense for the organization's specific use cases. For example, investigate techniques such as data masking, synthetic data generation, or differential privacy.
Consider data privacy requirements when importing data from, or exporting it to, external organizations. In these scenarios, techniques such as fully homomorphic encryption (FHE) and secure multiparty computation (SMPC) are more useful than when protecting data from internal users and data scientists.
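To make the differential privacy option concrete, one common building block is the Laplace mechanism: calibrated noise is added to an aggregate query so that no individual record can be inferred from the released result. Below is a minimal sketch using NumPy; the salary figures, clamping bounds, and epsilon value are illustrative assumptions, not recommendations.

```python
import numpy as np

def dp_mean(values: np.ndarray, lower: float, upper: float, epsilon: float) -> float:
    """Differentially private mean via the Laplace mechanism.

    Clamping each value to [lower, upper] bounds the sensitivity of the
    mean to (upper - lower) / n, which calibrates the noise scale.
    """
    n = len(values)
    clipped = np.clip(values, lower, upper)
    sensitivity = (upper - lower) / n
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return float(clipped.mean() + noise)

# Illustrative use: release an average salary without exposing any individual's.
salaries = np.array([52_000, 61_000, 58_500, 75_000, 49_000], dtype=float)
print(dp_mean(salaries, lower=30_000, upper=150_000, epsilon=1.0))
```

Smaller epsilon values give stronger privacy at the cost of noisier answers; choosing that trade-off is a policy decision, not a purely technical one.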
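FHE libraries are still maturing, but the core idea, computing on data while it stays encrypted, can be illustrated with additively homomorphic encryption. Here is a minimal sketch using the open-source `phe` (python-paillier) library; note that Paillier supports only addition and scalar multiplication on ciphertexts, so this is a simplification of full FHE, and the revenue figures are invented for illustration.

```python
# pip install phe
from phe import paillier

# The data owner generates a keypair and encrypts sensitive values.
public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)
encrypted_revenues = [public_key.encrypt(v) for v in [120.5, 98.0, 143.25]]

# An external party can sum the ciphertexts without ever seeing the
# plaintext values (Paillier is additively homomorphic).
encrypted_total = sum(encrypted_revenues[1:], encrypted_revenues[0])

# Only the data owner, holding the private key, can decrypt the result.
print(private_key.decrypt(encrypted_total))  # 361.75
```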
- Make risk management an integral part of model operations
Reliable, effective AI requires special-purpose processes as part of model operations (ModelOps). Because environmental factors are constantly changing, AI models must be continuously monitored for business value leakage and unexpected, sometimes harmful, outcomes.
Effective monitoring depends on understanding AI models. Specialized risk management processes must be an integral component of ModelOps to make AI more trustworthy, accurate, fair, and resilient to adversarial attacks or benign mistakes.
Controls should be applied continuously, for example throughout model development, testing, deployment, and ongoing operations. Effective controls will detect malicious actions, benign mistakes, and unexpected changes to AI data or models that can lead to harm, unfairness, inaccuracy, poor model performance and predictions, and other unintended consequences.
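One common monitoring signal for this kind of drift is the gap between training data and live inputs. Below is a minimal sketch, assuming NumPy and a simple Population Stability Index (PSI) check; the 0.2 alert threshold is a widely used rule of thumb rather than a universal standard, and the traffic data is synthetic.

```python
import numpy as np

def population_stability_index(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """PSI between a baseline (training) feature and live traffic.

    Values near 0 mean the distributions match; > 0.2 is commonly
    treated as significant drift worth investigating.
    """
    edges = np.histogram_bin_edges(expected, bins=bins)
    expected_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    actual_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Avoid division by zero / log(0) on empty bins.
    expected_pct = np.clip(expected_pct, 1e-6, None)
    actual_pct = np.clip(actual_pct, 1e-6, None)
    return float(np.sum((actual_pct - expected_pct) * np.log(actual_pct / expected_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 10_000)   # feature distribution at training time
live = rng.normal(0.5, 1.2, 10_000)       # same feature observed in production
psi = population_stability_index(baseline, live)
if psi > 0.2:
    print(f"Drift alert: PSI={psi:.3f}")
```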
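A simple example of such a control is verifying the integrity of deployed model artifacts, so that unexpected modifications are caught before a model serves traffic. The sketch below uses only Python's standard library; the file path and the registry of known-good digests are hypothetical placeholders.

```python
import hashlib
from pathlib import Path

# Hypothetical registry of known-good digests, recorded at deployment time.
APPROVED_DIGESTS = {
    "models/churn_model_v3.pkl": "9f2c...e41a",  # placeholder, truncated for illustration
}

def sha256_of(path: str) -> str:
    """Stream the file so large model artifacts are not loaded into memory."""
    digest = hashlib.sha256()
    with Path(path).open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_model(path: str) -> None:
    """Refuse to load a model whose on-disk bytes no longer match the registry."""
    if sha256_of(path) != APPROVED_DIGESTS.get(path):
        raise RuntimeError(f"Model artifact {path} failed integrity check; refusing to load.")
```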
- Adopt AI security measures against adversarial attacks
Detecting and stopping attacks on AI requires new techniques. Malicious attacks against AI can cause significant organizational harm and loss, including financial and reputational damage as well as losses of intellectual property, sensitive customer data, or proprietary data. Application leaders must work with their security counterparts to add controls to AI applications that detect anomalous data inputs, malicious attacks, and benign input errors.
Implement a full set of conventional enterprise security controls around AI models and data, along with integrity measures specific to AI, such as training models to tolerate adversarial AI. Use fraud, anomaly, and bot detection techniques to prevent AI data poisoning and to detect input errors.
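As one concrete illustration of anomaly detection on model inputs, an unsupervised detector can screen incoming feature vectors before they reach a model. Here is a minimal sketch, assuming scikit-learn and an IsolationForest fitted on known-good traffic; the contamination rate and synthetic data are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Fit the detector on feature vectors from known-good historical traffic.
normal_traffic = rng.normal(loc=0.0, scale=1.0, size=(5_000, 8))
detector = IsolationForest(contamination=0.01, random_state=42).fit(normal_traffic)

def screen_inputs(batch: np.ndarray) -> np.ndarray:
    """Return a boolean mask of inputs safe to forward to the model.

    IsolationForest.predict returns +1 for inliers and -1 for outliers;
    flagged rows can be logged and routed for review instead of scoring.
    """
    return detector.predict(batch) == 1

incoming = np.vstack([rng.normal(0, 1, (3, 8)), rng.normal(8, 1, (1, 8))])  # last row anomalous
print(screen_inputs(incoming))  # e.g. [ True  True  True False]
```

A screen like this does not stop a determined adversary on its own, but it is the kind of input-level control that can sit alongside conventional enterprise security measures.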