
How AI models secure AI models


IBM has been named one of the world’s most ethical companies by Ethisphere for the third year in a row, in part thanks to our work to advocate for and enable the implementation of ethical and trustworthy AI. But what is meant by the term “ethical AI”? At IBM, we believe in explainability, fairness, technical robustness, transparency and privacy. In other words, it is about finding ways to make AI systems transparent and explainable, to detect and reduce bias in data and models that would otherwise lead to discrimination, and to clarify the strengths and weaknesses of AI systems. It is also about preventing security risks such as hacker attacks, so that both the AI models and the data used to create them are protected. To protect AI systems from security breaches, we use security solutions, and those solutions often use AI models of their own to detect security threats.

Additional security risk with AI?

There are known risks with AI systems, risks associated with the five areas we at IBM believe are crucial for ethical and reliable AI. Given this, you might rightfully ask: what does it mean that we protect business-critical AI systems with security solutions that in turn use AI models? Have we protected the AI system, or have we exposed it to additional risk? Confusing? No problem. Let us clarify why security solutions use AI, and also what the risks would be of relying solely on AI in security solutions.

Advantages of AI in security solutions

It has become increasingly common to work from places other than the office. In 2020, after the pandemic made its entrance, the office was perhaps even one of the more unusual places to work from. The transition to the home office has placed new demands on IT security in many organizations. It is no longer enough to monitor a firewall for intrusion attempts. Now that our data and resources are spread out in a completely different way, we need new methods of detecting, for example, IT attacks. This is one reason why AI and machine learning have become a larger and more important component of IT security work.

AI models can be used, among other things, to detect anomalies and behavioral patterns that indicate security breaches. For example, AI models are effective at detecting abnormal traffic patterns in a network, such as when someone scans the network for vulnerable systems or uploads large amounts of data to the Internet. AI can also be used to detect whether a specific user’s login credentials have been hijacked, by analyzing that user’s behavior patterns over time. Furthermore, security solutions can use text analysis to review security news sites in real time and present the most relevant threats and trends to the IT departments that use them. AI thus helps to solve a large part of the complexity problem that IT security departments around the world are currently struggling with, by performing tasks that would take a human a very long time and be costly.
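
To make this concrete, here is a minimal sketch of anomaly detection on network flow statistics. It uses the IsolationForest model from scikit-learn; the features (outbound megabytes, connection count, distinct ports) and all numbers are illustrative assumptions, not the workings of any particular security product.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=0)

# Hypothetical hourly baseline: [outbound_MB, connections, distinct_ports]
normal_traffic = rng.normal(loc=[50, 200, 15], scale=[10, 40, 3], size=(500, 3))

# Train an unsupervised model on traffic assumed to be normal
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_traffic)

new_observations = np.array([
    [55, 210, 14],   # looks like ordinary traffic
    [48, 950, 800],  # port-scan-like: many connections to many distinct ports
    [900, 190, 12],  # exfiltration-like: unusually large outbound volume
])

# predict() returns -1 for anomalies and 1 for inliers
for row, flag in zip(new_observations, model.predict(new_observations)):
    print(row, "ALERT" if flag == -1 else "ok")
```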

How do we manage the risks then?

Undoubtedly, there are many benefits of AI in IT security, but as we noted earlier, there are also risks associated with using AI. These risks apply regardless of whether the AI model is used in a self-driving car, a recommendation engine, a healthcare system or a security solution. Let us investigate what the risks and challenges are when AI models are used in security solutions.

Complement with traditional security

One challenge is to understand how the AI model works and what behavior triggers a security alarm from it. Understanding what triggers an alarm can be crucial to fixing the underlying problem. But if the provider of the security solution has pre-trained the model, they cannot be fully transparent about the types of behavior that give rise to an alarm. Releasing that information would be like publishing a manual for how the system works, and with it a guide to avoiding detection. One solution would be for the supplier not to train the model at all, and instead let each security department that implements the solution train it on its own organization’s network data. Another challenge then arises if the organization has already been exposed to an IT attack without knowing it. The model would then be trained to believe that the communication taking place in the compromised network is normal, and it would therefore not sound the alarm. It is thus important to supplement AI models with another type of security, so that each security department can, for example, explicitly define which communication is approved.
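
As a minimal sketch of such a supplement, consider a human-maintained allowlist, a traditional control, sitting alongside the model’s anomaly score. The destination names and threshold below are hypothetical; the point is that explicitly approved communication is decided by the security department, not by a learned baseline that may already be poisoned.

```python
# Hypothetical allowlist maintained by the security department
APPROVED_DESTINATIONS = {"10.0.0.5", "backup.example.com"}
ANOMALY_THRESHOLD = 0.8  # illustrative cut-off for the model's score

def evaluate_flow(destination: str, anomaly_score: float) -> str:
    """Combine a learned anomaly score with an explicit allowlist."""
    if destination in APPROVED_DESTINATIONS:
        # Even if the model was trained on a compromised network and its
        # baseline is wrong, only approved destinations pass silently.
        return "allow"
    if anomaly_score >= ANOMALY_THRESHOLD:
        return "alert"
    return "log-for-review"  # unknown but not anomalous: keep a record

print(evaluate_flow("backup.example.com", 0.95))  # allow
print(evaluate_flow("203.0.113.7", 0.95))         # alert
print(evaluate_flow("203.0.113.7", 0.30))         # log-for-review
```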

Avoid alarm fatigue

It is not only AI models that alarm too rarely that cause problems; models that sound the alarm too often are a problem as well. We were recently in contact with a municipality whose AI-based network monitoring alerted every time students in the municipality’s schools returned from summer vacation. The alarm was triggered because the AI model perceived a large change in network traffic between the summer holidays and school days. If the IT security department receives too many false alarms, they risk suffering from alarm fatigue, and alarms may end up being ignored entirely. Alarm fatigue can also be a deliberate strategy on the attacker’s side: by gradually introducing noise into a system, a hacker can shift what a security AI model considers normal.
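
The sketch below illustrates that strategy under simple assumptions: a naive detector that continuously relearns what is “normal” from a rolling window of recent traffic, alarming on anything more than three standard deviations from the mean. By slowly widening the spread of the traffic it sends, an attacker never crosses the current alarm threshold, yet stretches the learned tolerance until a burst that the original baseline would have caught slips through. The window size, threshold and traffic numbers are all illustrative assumptions.

```python
import statistics
from collections import deque

class RollingDetector:
    """Naive detector: alarm when a value deviates more than three
    standard deviations from a rolling window of recent traffic."""

    def __init__(self, window: int = 100):
        self.history = deque(maxlen=window)

    def observe(self, value: float) -> bool:
        alarmed = False
        if len(self.history) == self.history.maxlen:
            mean = statistics.mean(self.history)
            stdev = statistics.pstdev(self.history)
            alarmed = abs(value - mean) > 3 * stdev
        self.history.append(value)  # every observation also retrains the baseline
        return alarmed

detector = RollingDetector()

# Normal traffic oscillates around 50 (illustrative units, spread of 5)
for t in range(200):
    detector.observe(50 + (5 if t % 2 == 0 else -5))

# Phase 1: the attacker slowly injects noise, widening the learned tolerance.
# Every value stays inside the current 3-sigma band, so nothing alarms.
alarms = 0
for t in range(1, 201):
    spread = 5 + 0.1 * t  # grows from 5.1 to 25
    alarms += detector.observe(50 + (spread if t % 2 == 0 else -spread))
print("alarms while the baseline is being widened:", alarms)  # 0

# Phase 2: a burst the original baseline (3-sigma band of 15) would have
# caught now passes, because the learned tolerance has been stretched.
print("exfiltration burst detected:", detector.observe(105))  # False
```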

AI model complements security solutions

We asked earlier whether AI in security solutions reduces or increases the risk for our organization. What we can state is that AI in security solutions introduces new types of risks, and that these are similar to the challenges we see in AI generally, such as explainability and technical robustness. These risks can, however, be managed by using AI as a complement to conventional security methods rather than as a replacement for them. By combining traditional security methods with AI, we mitigate the new risks that arise, while AI increases the efficiency of the security solutions that protect our business-critical systems.

Find out more about AI ethics and trust

Watch the webinar: AI you can trust – from experiment to scalability (in Swedish)

How IBM works with AI ethics

IBM’s Principles for Trust and Transparency

Read this blog in Swedish

Data Science & AI Ethics Solution Specialist, IBM Sweden

Victor Grane

IT Security Solution Specialist, IBM Sweden
