AI ethics, combined with regulation, secures the infrastructure

A review of the CIS Compliance Summit 2024 for decision-makers

Photo, from left: Harald Erkinger (CIS), Andreas Tomek (KPMG) © Anna Rauchenberger

On 10 October, the CIS Compliance Summit was all about how AI ethics, in line with guidelines and certification standards, can protect Europe from cyberattacks. More than 250 representatives of Austria’s leading companies accepted the invitation from Harald Erkinger, Managing Director of CIS – Certification & Information Security Services GmbH. Experts such as Prof. Sarah Spiekermann and football legend Toni Polster illustrated the complex topic with practical examples.

In a networked world, the threat of cybercrime is also growing. Against the backdrop of the Russian war of aggression in Ukraine, the Middle East conflict and possible political changes in the USA, Europe must question its defence strategy. After all, cyberattacks are no longer just about fraud; they jeopardise our infrastructure through acts of terrorism. “The circle of cyberattack targets has expanded massively in recent years, not least due to the rapid development of new AI technologies. Attacks no longer affect only individual companies, but jeopardise the entire system in a networked world. This is why regulations and reporting obligations must apply to more and more companies,” said Harald Erkinger. In his opening speech, he emphasised the importance of joint action by all stakeholders: targeted attacks could compromise food safety, security of supply or the healthcare system, with massive consequences for the entire infrastructure.

With the AI Act and the NIS-2 Directive, the European Union has put good templates in place, but it must constantly orient itself towards further developments and react quickly. “Both the European AI Act and ISO 42001 create a framework that aims to ensure that AI systems proactively identify and mitigate cyber threats. While the AI Act sets requirements at EU level, ISO 42001 offers companies a global approach, independent of jurisdiction and technology, for developing and operating secure and trustworthy AI solutions,” said Erkinger.

Ethics is crucial in all attempts to regulate the use of AI. “The growing importance of AI ethics stems from the simple fact that artificial intelligence and robotics will significantly shape the development of humanity in the near future,” said Prof. Sarah Spiekermann, Head of the Institute for Information Systems & Society at WU Vienna.

AI values are more important than compliance exercises

The values of an AI, that is, the ethics we build into it, confront us with the question of what we actually expect from these systems and what risks we are prepared to accept. Aspects such as truth, transparency, data protection, fairness, security, control and, above all, power consumption should become the focus of development in order to earn the justified trust of users. “If we want to use AI sensibly, we need to be able to trust its ethical design. From the perspective of customers and society, AI ethics therefore plays an even greater role than purely legal compliance on paper,” said Spiekermann.

According to Spiekermann, one step towards trust is practical application: “AI can create added value for society and organisations if it is designed and embedded ethically. However, this is only possible if we turn away from the current feature mania and off-the-shelf American solutions and instead ask: what value do I actually want from this powerful technology, and at what price?”

This question can be answered with development processes such as “Value-based Engineering with ISO/IEC/IEEE 24748-7000” (VbE), the first globally standardised method for value-based system design. Human and social values are integrated into IT design on the basis of core values and value qualities. “Value qualities are, for example, the way we address people, the consideration of social status or the non-acceptance of rude behaviour,” explained Spiekermann. VbE can be used, for example, to build a value-monitoring system that shows organisations how socially acceptable their AI’s behaviour is, and thus whether it promotes or undermines user trust.

Success is always a question of team spirit

For AI ethics, and with it effective cybersecurity, to be established and developed successfully, all players are needed. Football legend Toni Polster drew an analogy to sport: “You don’t win a match by leaving the defence out in the rain. Victory is always a joint effort and only possible if the team plays together. Everyone has to know their role and at the same time keep an eye on the game as a whole, be aware of the movements of their teammates and be able to react to them.” Just as on the football pitch, the different players in economic systems also have to work together. Technology companies and all those who operate technologies must align themselves with the needs of users while keeping an eye on potential dangers.

Legislators are creating the necessary framework with regulations such as the AI Act and the NIS-2 Directive, while users should use new technologies responsibly. Chief Information Security Officers (CISOs) play a special role in this context. “Those responsible in companies often achieve remarkable things with great personal commitment under the most difficult conditions. CISOs and their teams not only secure internal data, but above all work for the good of a more secure society,” said Andreas Tomek, Partner IT Advisory, KPMG. To protect the infrastructure sustainably, there must be mutual understanding and, above all, cooperation between all stakeholders.

Save the date

Next year, the CIS Compliance Summit will take place again in Vienna on 16 September 2025.