Artificial Intelligence and Security

Artificial Intelligence (AI) makes some of our sci-fi dreams come true. Applications that recognize faces despite stylistic changes or help to find cures for critical illnesses are major technological innovations. Beyond making processes and workflows smarter, one of the most useful AI functions is automated provisioning: by detecting which resources are needed and provisioning them automatically, AI accelerates our workday. But every major innovation needs rules that make these ideas practical and safe.

One of the biggest advantages of AI from a Zero Outage perspective is the ability to predict power failures or maintenance requirements so that operations can adapt to them in advance. At the same time, automated provisioning makes communication patterns opaque, and this opacity introduces risks and complexity of its own.
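
As a simple illustration of the predictive idea, consider a component whose sensor readings drift upward over time. The following sketch uses entirely synthetic values and an assumed failure threshold; it fits a linear trend to hourly temperature readings and extrapolates when the component will cross that threshold, so maintenance can be scheduled before the outage occurs:

    import numpy as np

    # Two days of hourly temperature readings with a slow upward drift
    # (synthetic data standing in for real sensor telemetry).
    hours = np.arange(48)
    temps = 60 + 0.4 * hours + np.random.default_rng(2).normal(0, 0.5, 48)

    slope, intercept = np.polyfit(hours, temps, 1)
    FAILURE_THRESHOLD = 95.0  # degrees Celsius, an assumed limit

    if slope > 0:
        current = slope * hours[-1] + intercept
        hours_to_failure = (FAILURE_THRESHOLD - current) / slope
        print(f"schedule maintenance within ~{hours_to_failure:.0f} hours")

Real predictive-maintenance systems use far richer models, but the principle is the same: act on the forecast before the failure occurs.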

The safety of AI plays a particularly important role when the availability of products and production is at stake. An erroneous statement by the AI can cause both time delays and machine defects, for example through overlapping signals. In automated processes in particular, it must be considered carefully how AI decisions influence production: the process needs a defined way to react to a delayed or missing decision so that subsequent actions are not blocked.
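
One common pattern for this, sketched below with hypothetical names, is to put a time bound around the AI decision and fall back to a safe default when the answer does not arrive in time:

    from concurrent.futures import ThreadPoolExecutor, TimeoutError
    import time

    SAFE_DEFAULT = "continue_current_plan"

    def ai_decide():
        """Stand-in for a model call that may be slow or fail."""
        time.sleep(5)  # simulate a slow inference
        return "reroute_line_2"

    with ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(ai_decide)
        try:
            decision = future.result(timeout=1.0)  # bound the wait
        except TimeoutError:
            decision = SAFE_DEFAULT  # do not stall production

    print(decision)  # -> continue_current_plan

The safe default keeps production moving; the slow or erroneous AI answer can then be reviewed without blocking progressive actions.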

What are the challenges in this new environment?

IT security looks at the behaviour of communication and tries to judge whether an intervention in a system is authorized or unauthorized. If an application makes changes to IT operations itself, it is difficult for security to distinguish whether the changes originate from the system or were brought about from outside, i.e. a cyber attack.
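
One way to make that distinction tractable, sketched here as an assumption rather than an established practice of any particular product, is to have the AI cryptographically sign every change it makes, so that unsigned or wrongly signed changes stand out as potentially external:

    import hashlib
    import hmac

    # Shared secret between the AI platform and the security tooling;
    # in practice this would live in a key-management system.
    SECRET_KEY = b"rotate-me-in-a-real-deployment"

    def sign_change(change: str) -> str:
        return hmac.new(SECRET_KEY, change.encode(), hashlib.sha256).hexdigest()

    def verify_change(change: str, signature: str) -> bool:
        return hmac.compare_digest(sign_change(change), signature)

    change = "scale web-tier to 12 instances"
    tag = sign_change(change)

    print(verify_change(change, tag))                   # True: made by the AI
    print(verify_change("open firewall port 22", tag))  # False: unexplained change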

Especially as the use of Artificial Intelligence grows, security needs to take a new perspective. A new approach is required, one that can keep pace with, and secure, the innovative ideas emerging every day.

What effects can unauthorized interventions have?

Through unauthorized interventions, AI applications can be manipulated and made unusable. We can already observe such attacks on AI from the outside. One example is so-called “poisoning”: in general terms, the attacker teaches the AI false or unwanted information. These attacks degrade the quality of the outcome and can even render everything the model has learned unusable. For example, several projects have shown that the output of a cognitive chatbot can be manipulated by feeding it personal opinions, since it continually learns from its environment. This can turn the character of the AI immoral, untrustworthy and negative.
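
A minimal sketch of the idea, assuming a scikit-learn style training workflow on purely synthetic data: an attacker who can flip a fraction of the training labels typically degrades the resulting model in a measurable way.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)

    # The attacker flips 30% of the training labels before training runs.
    rng = np.random.default_rng(0)
    n_poison = int(0.3 * len(y_train))
    idx = rng.choice(len(y_train), size=n_poison, replace=False)
    y_poisoned = y_train.copy()
    y_poisoned[idx] = 1 - y_poisoned[idx]

    poisoned = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

    print("clean model accuracy:   ", clean.score(X_test, y_test))
    print("poisoned model accuracy:", poisoned.score(X_test, y_test))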

Another example of a security gap in AI can be seen in image recognition. Attackers place adversarial noise on images to manipulate the machine-learning process. Because the perturbations are not recognized as such, the images are accepted as learning material, and the AI ends up recognizing something in the picture other than what is actually shown.
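
A classic construction behind such noise is the fast gradient sign method (FGSM). The sketch below demonstrates it on a toy linear classifier rather than a real image model; the weights and the input are random stand-ins:

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    rng = np.random.default_rng(1)
    w = rng.normal(size=64)  # weights of an already-trained linear model
    x = rng.normal(size=64)  # an input to the model (a "flattened image")
    y = 1.0                  # its true label

    # Gradient of the cross-entropy loss with respect to the input x.
    grad_x = (sigmoid(w @ x) - y) * w

    # FGSM: take a small step in the direction of the gradient's sign.
    # The change is tiny per pixel, yet often enough to flip the output.
    epsilon = 0.25
    x_adv = x + epsilon * np.sign(grad_x)

    print("model score on clean input:      ", sigmoid(w @ x))
    print("model score on adversarial input:", sigmoid(w @ x_adv))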

How do we distinguish threats from regular interventions? How can we create a safe AI environment?

When talking about the risks of automated provisioning, a solution is not easy to find. There is currently a lot of room for improving, adapting and developing security solutions for the AI environment. Even though AI is of great help within cybersecurity, we need to create awareness that the reverse also holds: AI itself needs securing. We need solutions that overcome the opacity of its communication and let us clearly account for the actions of AI. Two perspectives are possible in the interaction between AI and security. On the one hand, security can query the AI to find out what it does and why, for example by checking whether the requested resources are actually needed to solve a given task.

On the other hand, you can look at security from the AI's perspective. An old-fashioned approach of “asking for permission” may be a practical solution to the problem.
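
Both perspectives can be combined in a simple gate, sketched below with hypothetical names and policy values: the AI must state what it wants and why, and a policy check either grants the request or escalates it.

    from dataclasses import dataclass

    @dataclass
    class ProvisioningRequest:
        resource: str       # e.g. "vm.large"
        quantity: int
        justification: str  # the AI states why it needs the resource

    # Policy a security team might maintain; values are illustrative.
    ALLOWED_RESOURCES = {"vm.small", "vm.large", "storage.block"}
    MAX_QUANTITY = 10

    def authorize(req: ProvisioningRequest) -> bool:
        """Grant permission only if the request fits the declared policy."""
        if req.resource not in ALLOWED_RESOURCES:
            return False
        if req.quantity > MAX_QUANTITY:
            return False
        if not req.justification.strip():
            return False  # no explanation, no resources
        return True

    req = ProvisioningRequest("vm.large", 3, "forecasted load spike at 18:00")
    print("approved" if authorize(req) else "escalate to a human operator")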

How can we prevent failure and outage?

Since SIEM systems are normally used to identify potential risks by looking for anomalies, clear rules are needed that specify how to act on such findings. The challenge is to set the rules so that they do not prevent the system from making progressive decisions, yet remain strict enough that we can regard them as secure. A possible approach is to have dedicated systems learn the AI's behaviour and simulate its impact on the existing rules before they are enforced. In any case, this requires industry-specific as well as production-specific skills to design and operate.
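
To give a flavour of what such a rule can look like, the following sketch (with an illustrative event format and thresholds, not tied to any particular SIEM product) flags an actor that provisions resources in an unusually rapid burst:

    from datetime import datetime, timedelta

    def flag_bursts(events, window_minutes=10, threshold=5):
        """Flag actors that provision faster than the rule allows.

        events: list of (timestamp, actor) tuples, in any order.
        Returns the set of actors exceeding the threshold inside
        any sliding window of the given length.
        """
        flagged = set()
        window = timedelta(minutes=window_minutes)
        events = sorted(events)
        for i, (start, actor) in enumerate(events):
            count = sum(1 for t, a in events[i:]
                        if a == actor and t - start <= window)
            if count > threshold:
                flagged.add(actor)
        return flagged

    now = datetime(2024, 1, 1, 12, 0)
    burst = [(now + timedelta(minutes=m), "ai-provisioner") for m in range(8)]
    print(flag_bursts(burst))  # -> {'ai-provisioner'}

Whether such a burst is an attack or legitimate automated provisioning is exactly the judgment the rule set has to encode, which is why industry- and production-specific knowledge is needed.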

What is the conclusion?

To conclude, AI can be a game changer in predicting outcomes, foreseeing events and increasing overall efficiency in every industry. However, like any other innovative technology, AI presents significant potential vulnerabilities for attackers to exploit. The need to secure the technology against manipulation of any kind has never been more important, and it must be considered before implementation. Think about how to effectively secure your AI-operated systems over time.


Disclaimer

The information contained in this document is contributed and shared as thought leadership in order to evolve the Zero Outage Best Practices. It represents the personal view of the author and not the view of the Zero Outage Industry Standard Association.
