Artificial intelligence threats: A cyber security perspective
AI and (cyber) security
AI plays an increasingly important role in cyber security. It enables large amounts of data (big data) to be analysed and patterns to be recognised, so that threats can be identified and responded to at an early stage. AI-based systems recognise unusual activities and anomalies in real time, which significantly increases the detection rate of cyberattacks.
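A minimal sketch of what such behaviour-based detection can look like is shown below, using scikit-learn's IsolationForest. The traffic features, thresholds and simulated data are illustrative assumptions, not a recommended configuration.

```python
# Minimal sketch: behaviour-based anomaly detection on network events.
# The feature choice (bytes sent, request rate, failed logins) is an
# illustrative assumption, not a prescribed feature set.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated "normal" traffic: [bytes_sent_kb, requests_per_min, failed_logins]
normal = rng.normal(loc=[50, 20, 0.2], scale=[10, 5, 0.5], size=(1000, 3))

# Train on historical behaviour considered normal.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal)

# Score new events: -1 flags an anomaly, 1 means normal.
new_events = np.array([
    [52, 19, 0],      # typical traffic
    [900, 400, 30],   # exfiltration-like burst with many failed logins
])
for event, label in zip(new_events, model.predict(new_events)):
    status = "ANOMALY" if label == -1 else "ok"
    print(status, event)
```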
A major advantage of AI in cyber security is that it reduces the workload of human analysts. With thousands of security-related events occurring every day, manual review is becoming almost impossible. Intelligent systems help to prioritise and automatically process these events. Only the most critical incidents are forwarded to human experts.
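As an illustration of such prioritisation, the following sketch scores alerts by severity and asset criticality and escalates only the highest-scoring ones to a human analyst. The weights and the escalation threshold are assumptions made for the example.

```python
# Minimal sketch: automated alert triage. Severity weights and the
# escalation threshold are illustrative assumptions.
from dataclasses import dataclass

SEVERITY_WEIGHT = {"low": 1, "medium": 3, "high": 7, "critical": 10}

@dataclass
class Alert:
    source: str
    severity: str
    asset_criticality: int  # 1 (unimportant) .. 5 (business critical)

def triage_score(alert: Alert) -> int:
    """Combine alert severity with how critical the affected asset is."""
    return SEVERITY_WEIGHT[alert.severity] * alert.asset_criticality

alerts = [
    Alert("ids", "low", 1),
    Alert("edr", "critical", 5),
    Alert("waf", "medium", 3),
]

# Only the highest-scoring incidents reach a human analyst.
ESCALATION_THRESHOLD = 20
for alert in sorted(alerts, key=triage_score, reverse=True):
    action = "escalate to analyst" if triage_score(alert) >= ESCALATION_THRESHOLD else "auto-handle"
    print(f"{alert.source}: score={triage_score(alert)} -> {action}")
```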
In addition, AI enables a (partially) autonomous response to threats. Automatic adjustments to firewall and email rules minimise the attack surface and protect ongoing operations. These automations increase cyber resilience and make a company's security infrastructure more robust.
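The sketch below illustrates one way such a (partially) autonomous response could look on a Linux host. The detection hook and confidence threshold are hypothetical; a production system would typically go through the firewall vendor's management API and an approval workflow rather than editing rules directly.

```python
# Minimal sketch: (partially) autonomous response. The detection hook is
# hypothetical; the iptables rule is one concrete way to drop traffic on
# a Linux host.
import subprocess

def block_ip(ip: str, dry_run: bool = True) -> None:
    """Append a DROP rule for a suspicious source address."""
    cmd = ["iptables", "-A", "INPUT", "-s", ip, "-j", "DROP"]
    if dry_run:
        print("would run:", " ".join(cmd))
    else:
        subprocess.run(cmd, check=True)  # requires root privileges

def on_anomaly(event: dict) -> None:
    # Only act autonomously on high-confidence detections; everything
    # else is queued for human review (the threshold is an assumption).
    if event["confidence"] >= 0.95:
        block_ip(event["source_ip"])
    else:
        print("queued for analyst review:", event)

on_anomaly({"source_ip": "203.0.113.7", "confidence": 0.98})
on_anomaly({"source_ip": "198.51.100.23", "confidence": 0.60})
```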
Advanced methods such as anomaly detection and predictive analytics are used to optimise AI-based cyber security solutions. These behaviour-based methods identify threats more precisely than traditional, signature-based approaches. Identity and access management systems also benefit: the automatic analysis of user data helps to prevent unauthorised access and keep IT systems secure.
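As one example of behaviour-based identity and access management, the following sketch flags "impossible travel" between two logins. The coordinates, timestamps and speed limit are illustrative assumptions.

```python
# Minimal sketch: a behaviour-based IAM check ("impossible travel").
from math import radians, sin, cos, asin, sqrt
from datetime import datetime

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

MAX_PLAUSIBLE_KMH = 900  # roughly airliner speed; an assumption

def is_impossible_travel(prev_login, new_login) -> bool:
    distance = haversine_km(prev_login["lat"], prev_login["lon"],
                            new_login["lat"], new_login["lon"])
    hours = (new_login["time"] - prev_login["time"]).total_seconds() / 3600
    return hours > 0 and distance / hours > MAX_PLAUSIBLE_KMH

prev = {"lat": 52.52, "lon": 13.40, "time": datetime(2024, 5, 1, 9, 0)}   # Berlin
new = {"lat": 40.71, "lon": -74.01, "time": datetime(2024, 5, 1, 10, 0)}  # New York, one hour later
print("block and challenge:", is_impossible_travel(prev, new))
```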
Potential threats of artificial intelligence
The increasing use of intelligent systems in a variety of private and professional applications also entails considerable risks. Inappropriate or excessive use can lead to ineffective investments and misapplications, which emphasises the need for targeted control and regulation.
Social and ethical concerns
A central problem is the possible reinforcement of prejudice and discrimination by AI systems. When AI is trained on biased data, the resulting systems can discriminate against already disadvantaged groups. Nor can the possibility of AI being used for surveillance and oppression be ruled out.
Automation through AI also carries the risk of significant job losses. Replacing human work with machines can lead to social inequality and economic insecurity. Another challenge is the threat to privacy and data security: AI systems analyse large amounts of data, which can result in misuse and unauthorised surveillance.
Purely data-driven decision-making, without human moral judgement, can lead to ethically questionable decisions. This raises the question of liability when AI causes damage, for example in accidents involving self-driving vehicles. Clear responsibilities are essential to create trust in AI without hindering innovation.
Political effects
Democratic processes, especially elections, are not immune to the effects of new technologies. According to the World Economic Forum's Global Risks Report 2024, deliberately disseminated AI-generated disinformation can undermine the legitimacy of democratic governments. Generative AI is already being used to manipulate elections by creating fake photos, texts, voice recordings and videos.
Microtargeting, another AI-supported technique, uses large amounts of data to deliver personalised political messages. The Cambridge Analytica scandal and the use of microtargeting in the Brexit campaign are prominent examples.
Impact on cyber security
AI technologies open up new and effective attack opportunities for cyber criminals. Attackers use machine learning to analyse large data sets and identify vulnerabilities in computer systems. On this basis, automated cyberattacks can be developed that target large numbers of victims quickly and precisely. AI-supported, personalised attacks bypass the defence mechanisms of their targets and thus increase the chances of success.
One example is phishing emails that, with the help of AI, appear convincingly real. Cyber criminals use tools such as ChatGPT to create flawless, convincing messages in order to steal personal data or spread malware. A single click on a malicious link in a phishing email can infect an entire corporate network.
The automated collection of data such as names, email addresses and telephone numbers, which are crucial for targeted attacks, is facilitated by AI. Deepfakes make it possible to create realistic yet fake images, videos and audio. These are used to spread false information or deceive unsuspecting victims.
Even hackers without programming skills can use AI tools to generate malware and ransomware, which can then be used to paralyse IT systems and extort ransoms. In view of these threats, comprehensive security measures and rapid responses to attacks are essential.
Prevent threats, minimise risks and exploit opportunities
Companies can proactively prevent dangers and minimise risks by taking a variety of measures.
‘Closed API’ models
One effective method is the use of closed API models. This prevents submitted data from being used to further train the AI, but requires strict monitoring and secure user authentication.
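A minimal sketch of this pattern is shown below: an internal gateway authenticates users and redacts obvious personal data before a prompt leaves the company network. The gateway URL, token handling, response schema and opt-out flag are hypothetical, not a real vendor API.

```python
# Minimal sketch of the "closed API" idea: all prompts go through an
# internal gateway that authenticates users and strips sensitive data.
# The gateway URL, token names and redaction pattern are hypothetical.
import re
import requests

GATEWAY_URL = "https://ai-gateway.internal.example.com/v1/chat"  # hypothetical
API_TOKEN = "replace-with-per-user-token"                        # hypothetical

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def redact(prompt: str) -> str:
    """Remove obvious personal data before it leaves the company network."""
    return EMAIL_RE.sub("[redacted email]", prompt)

def ask_model(prompt: str) -> str:
    response = requests.post(
        GATEWAY_URL,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        json={"prompt": redact(prompt), "opt_out_of_training": True},  # flag is an assumption
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["answer"]  # response schema is an assumption
```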
Developing own AI solutions
Another approach is to develop a company's own AI models. Smaller models such as Meta's LLaMA or Stanford University's Alpaca are cost-effective and can be tailored to a company's specific needs. This reduces the risk of data leaks and gives the company better control over AI development.
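The following sketch shows the basic idea of local inference with the Hugging Face transformers library: prompts and data stay on the company's own infrastructure. The model path is a placeholder for a locally stored, instruction-tuned checkpoint.

```python
# Minimal sketch: running a small open model locally so prompts and data
# never leave the company's infrastructure. The model path below is a
# placeholder, not a specific recommended checkpoint.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="path/to/local-llama-checkpoint",  # placeholder: locally stored model
)

result = generator(
    "Summarise the key risks of phishing for our employees.",
    max_new_tokens=120,
)
print(result[0]["generated_text"])
```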
Robust review processes
Robust review processes in software development are also crucial. Pair programming and rigorous code reviews help to identify and eliminate vulnerabilities. Continuous Integration and Continuous Delivery (CI/CD) are best practices to minimise security vulnerabilities from AI-generated code. These measures are a starting point for the secure integration of AI applications in companies.
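As an example of such a review gate, the following sketch scans Python files for constructs that should always trigger human review and exits non-zero so that a CI step fails. The list of risky calls is an illustrative assumption, not a complete security policy.

```python
# Minimal sketch of an automated review gate for AI-generated code:
# a CI step can run this over changed files and fail the build on findings.
import ast
import sys

RISKY_CALLS = {"eval", "exec", "compile"}  # assumption: constructs needing human review

def find_risky_calls(source: str, filename: str) -> list[str]:
    findings = []
    for node in ast.walk(ast.parse(source, filename)):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in RISKY_CALLS:
                findings.append(f"{filename}:{node.lineno}: call to {node.func.id}()")
    return findings

if __name__ == "__main__":
    all_findings = []
    for path in sys.argv[1:]:
        with open(path, encoding="utf-8") as handle:
            all_findings += find_risky_calls(handle.read(), path)
    for finding in all_findings:
        print(finding)
    sys.exit(1 if all_findings else 0)  # non-zero exit fails the CI step
```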
Opportunities for companies
According to a study by the German Federal Ministry for Economic Affairs and Energy, expenditure on AI tools rose to 4.8 billion euros in 2019, while the turnover generated by new AI technologies amounted to 60 billion euros. AI systems offer companies a wide range of benefits for optimising operational processes, in particular increasing efficiency and reducing costs:
- Reduced production costs through automated processes and maintenance
- Improved quality through comprehensive quality assurance and precise market analyses of customer requirements
- Fewer errors, as AI systems take over repetitive tasks
- Increased workplace safety, with AI systems taking over high-risk tasks
- Reduced workload for employees: AI systems in customer service relieve staff and increase customer satisfaction through faster response times
- More efficient recruiting through AI-supported pre-selection of applicants
- Quicker reactions to market changes and stock levels through real-time analyses
- Less downtime through predictive, efficient machine maintenance (predictive maintenance)
AI regulations and legislation
Both Europe and the USA have taken significant steps to regulate the use of AI technologies through legal frameworks. Both approaches aim to maximise the opportunities of AI while minimising its potential risks, protecting the rights and safety of the population and strengthening trust in these new technologies.
USA
In the USA, federal authorities will only be allowed to use AI applications that can be proven to protect the rights and safety of the population. The new rules cover applications such as facial recognition at airports and AI software for controlling the electricity grid or determining mortgages and insurance policies. Department heads must implement these requirements by December. In addition, authorities must appoint an AI officer and publish an annual risk analysis of the AI systems they use. These measures are part of an executive order by President Joe Biden that also affects the private sector.
Europe
In Europe, the AI Act is the first comprehensive legal framework that addresses the risks of AI and secures a leading role for Europe. The AI Act sets out clear requirements and obligations for developers and users and aims to reduce administrative and financial burdens for companies, especially small and medium-sized enterprises. The law also ensures that AI systems respect ethical principles and minimise the risks of high-performance AI models. It includes measures for risk analysis, the prohibition of unacceptable practices, clear requirements for high-risk applications and a governance structure at European and national level.
Conclusion
Artificial intelligence offers immense benefits in areas such as cyber security, economics and business processes. At the same time, it comes with significant risks, including data breaches, job losses and ethical challenges. To optimally exploit the opportunities of AI and minimise the risks, responsible application and strict regulations are essential. Only through targeted control, transparency and ethical standards can trust in this technology be strengthened and its full potential safely realised.