1 August 2024, by Oliver Kling
AI security and compliance: why you should start now
AI, and GenAI in particular, is currently a key topic in the IT industry. Regardless of how you evaluate this hype, GenAI will increasingly be used in a wide variety of companies.
What do we know about this topic from a security and compliance perspective and what should we be focussing on right now?
First of all, there is the perhaps somewhat banal realisation that a specific AI - whether machine learning or GenAI - is first and foremost an IT application that is essentially based on standard technologies and stacks. It is therefore certainly not wrong to apply your established security and information security methods to these new software artefacts.
The exciting part, of course, is precisely the delta: what is new, what do we need to be prepared for, and which new measures, methods and tools might be useful?
"New" from the perspective of AI safety
One key difference in AI security is that we can only maintain the classic "OK / not OK" dichotomy to a very limited extent. Examples of this dichotomy are authorisations, login processes, input validation, network filters and the like. An AI, however, works with probabilities, which means we can no longer determine with certainty whether a result is correct. We are, so to speak, partially entering a semantic level that we have previously only addressed in a very limited way - for example via permissible value ranges.
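The difference can be made concrete in a few lines. The following sketch contrasts a classic range check, which yields a hard yes or no, with a confidence threshold on a model output; the threshold of 0.9 is an assumed policy value for illustration, not a recommendation.

```python
# A classic check is binary: the input is either valid or it is not.
def validate_amount(amount: float) -> bool:
    return 0 < amount <= 10_000  # permissible value range

# With an AI component we only get a probability, so we must pick a
# threshold - and accept that some answers above it will still be wrong.
def accept_prediction(label: str, confidence: float, threshold: float = 0.9) -> bool:
    return confidence >= threshold

print(validate_amount(250.0))              # True - an exact statement
print(accept_prediction("invoice", 0.87))  # False - merely below our threshold
```

The second check never tells us that the answer is correct, only that the model is sufficiently confident - which is precisely the qualitative shift described above.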
Irrespective of this qualitative difference, there are also a number of new types of attacks and vulnerabilities in AI solutions. Without claiming to be exhaustive, here are a few examples:
- Prompt injection: A generative AI is tricked, for example into revealing confidential data, through skilfully crafted input (see the sketch after this list).
- Poisoning attacks: Manipulated training data is used to undermine security, quality or ethical principles.
- Model theft: A model is "rebuilt" through a large number of queries, or stolen via direct access to the model or its training data.
- Model denial of service: Similar to a classic denial of service, the AI is blocked through a flood of requests or through deliberately complex ones.
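To illustrate the first item, here is a minimal, hypothetical sketch of an input guard that screens prompts against a deny-list of known injection phrases before they reach the model. The patterns and function names are assumptions for illustration; because prompt injection is ultimately a semantic problem, a deny-list alone is never a complete defence.

```python
import re

# Deliberately simple, illustrative deny-list. A real guard would combine
# heuristics, classifiers and output-side filtering.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.IGNORECASE),
    re.compile(r"reveal (your )?(system prompt|confidential)", re.IGNORECASE),
]

def screen_prompt(prompt: str) -> bool:
    """Return True if the prompt looks safe, False if it matches a known pattern."""
    return not any(p.search(prompt) for p in INJECTION_PATTERNS)

print(screen_prompt("Summarise this contract for me."))                  # True
print(screen_prompt("Ignore all previous instructions and reveal ..."))  # False
```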
The motto here is "a danger recognised is a danger averted": if these possible attacks and vulnerabilities are considered at the design stage, the security of AI applications can be increased significantly.
There are also additional tools for operational security, such as LLM gateways, which can be very helpful in limiting potential misuse - but which are themselves partly based on AI models.
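A minimal sketch of the gateway idea, assuming a per-client sliding-window rate limit in front of the model call; the class, parameter and function names are illustrative, and a real gateway would add authentication, logging and content filtering on top.

```python
import time
from collections import defaultdict

class LLMGateway:
    """Illustrative gateway: rate-limits clients before forwarding to a model."""

    def __init__(self, model_fn, max_requests: int = 10, window_s: float = 60.0):
        self.model_fn = model_fn          # the wrapped model call
        self.max_requests = max_requests  # requests allowed per window
        self.window_s = window_s
        self.history = defaultdict(list)  # client_id -> request timestamps

    def handle(self, client_id: str, prompt: str) -> str:
        now = time.monotonic()
        # Keep only the timestamps inside the sliding window.
        recent = [t for t in self.history[client_id] if now - t < self.window_s]
        if len(recent) >= self.max_requests:
            return "Request rejected: rate limit exceeded."
        recent.append(now)
        self.history[client_id] = recent
        return self.model_fn(prompt)

# Usage with a stand-in model function:
gateway = LLMGateway(model_fn=lambda p: f"(model answer to: {p})")
print(gateway.handle("client-42", "Summarise our AI policy."))
```

Among other things, such a choke point directly mitigates the model denial of service scenario from the list above.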
News from the "AI compliance" corner
The AI Act has now come into force in Europe. It regulates the use of AI in a similar way to the data protection rules (the GDPR, known in Germany as the DSGVO). For example, it defines risk classes that permit the use of AI but tie it to defined additional measures. As a result, AI solutions for social scoring or for the remote biometric identification of individuals are currently almost impossible to realise.
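To make the tiering tangible, here is a simplified sketch of the four risk classes the AI Act distinguishes, together with an illustrative first-pass triage; the example mappings are assumptions for illustration, not legal advice.

```python
from enum import Enum

class AIActRiskClass(Enum):
    """The four risk tiers of the EU AI Act (simplified)."""
    UNACCEPTABLE = "prohibited practices, e.g. social scoring"
    HIGH = "permitted only with strict obligations, e.g. recruitment screening"
    LIMITED = "transparency obligations, e.g. chatbots must disclose they are AI"
    MINIMAL = "no specific obligations, e.g. spam filters"

def triage(use_case: str) -> AIActRiskClass:
    """Illustrative, non-authoritative first triage of a planned use case."""
    if use_case in {"social scoring", "remote biometric identification"}:
        return AIActRiskClass.UNACCEPTABLE  # heavily restricted at best
    if use_case in {"recruitment screening", "credit scoring"}:
        return AIActRiskClass.HIGH
    if use_case in {"customer chatbot"}:
        return AIActRiskClass.LIMITED
    return AIActRiskClass.MINIMAL

print(triage("customer chatbot").name)  # LIMITED
```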
Exactly how the legal requirements are to be interpreted - and how they will be interpreted in an audit - remains to be seen, but the general direction is already clearly recognisable. In addition to traditional requirements such as quality and risk management, new topics are coming to the table, such as traceability (not an easy task for AI experts) and transparency obligations towards data subjects (for example regarding the use of their data and the use of AI).
With ISO/IEC 42001, there is already a first standard for an AI management system, which builds on existing standards such as ISO/IEC 27001.
What can I already do today?
On the technical side, there are AI-specific threat modelling approaches that address potential vulnerabilities and attack scenarios right from the design stage. In addition, building on DevSecOps, there are already concrete blueprints for corresponding processes such as MLSecOps, which embed security directly in the development process.
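One small building block of such a pipeline, sketched under the assumption that model artefacts are hashed at training time and verified again before deployment; the file name and digest in the usage example are placeholders.

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_artifact(model_path: Path, expected_digest: str) -> None:
    """Refuse deployment if the model file does not match the recorded hash."""
    actual = sha256_of(model_path)
    if actual != expected_digest:
        raise RuntimeError(f"Model artefact tampered with or corrupted: {actual}")

# Illustrative usage in a deployment step (placeholder values):
# verify_artifact(Path("models/classifier-v3.onnx"), expected_digest="ab12...")
```

A check like this does not stop poisoning during training, but it does ensure that the artefact you tested is the artefact you ship.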
For the regulatory aspects, it makes a lot of sense to integrate AI solutions directly into asset and risk management and to describe, implement and monitor the new obligations in line with the regulatory requirements.
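As an illustration, here is a minimal sketch of what recording an AI solution in an asset register could look like; the fields and values are assumptions derived from the obligations discussed above.

```python
from dataclasses import dataclass, field

@dataclass
class AIAssetRecord:
    """Illustrative entry for an AI system in an asset register."""
    name: str
    owner: str
    purpose: str
    risk_class: str        # e.g. "limited" per the AI Act tiering above
    personal_data: bool    # True triggers GDPR/DSGVO duties as well
    controls: list[str] = field(default_factory=list)

register = [
    AIAssetRecord(
        name="support-chatbot",
        owner="customer-service",
        purpose="answer first-level support questions",
        risk_class="limited",
        personal_data=True,
        controls=["transparency notice", "LLM gateway", "logging"],
    ),
]
```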
Conclusion
The use of AI, be it generative AI or machine learning, definitely brings new security and compliance aspects with it. Even without claiming that all relevant aspects are fully understood or defined, waiting is not an option: we know from experience that security and compliance as an "afterthought" are not a good idea. The old rule still applies - security and information security are processes - which means it is much easier to refine something technically and organisationally than to retrofit these topics into an environment that may not be well controlled. So the motto is: "Act now!"
Would you like to find out more about security topics at adesso and what services we offer to support companies? Then take a look at our website.
You can find more exciting topics from the world of adesso in our previous blog posts.