EFS Consulting

AI and compliance: Implementing legal requirements securely.

AI Compliance

The use of artificial intelligence can give companies a decisive competitive advantage. However, this only succeeds if employees’ AI skills are developed not only from a technical perspective, but also from a compliance perspective. Employees can only build these skills if the organization provides clear guidelines and support based on the legal requirements.

The Most Important EU Requirements for AI

The most relevant laws within the EU regarding artificial intelligence are the General Data Protection Regulation (GDPR) and the national data protection laws that supplement this European regulation, as well as the AI Act.

While the GDPR contains general provisions on the protection of personal data, the AI Act relates to the placing on the market and operation of AI systems. Other regulations that also govern the handling of data within the EU are the Data Act and the Data Governance Act. However, these two regulations do not focus on artificial intelligence.

Risks Associated with the Use of AI

The provisions of the AI Act aim to reduce the risks associated with the use of artificial intelligence. However, not all risks can be minimized by legal regulations, which is why it is recommended to identify and assess further risks, and to define measures for them, beyond the requirements of the AI Act. Risks associated with the use of AI include:

  • Data protection violations: The processing of personal data always requires a legal basis (e.g. consent or a contract). AI systems may also collect personal data without users being aware of it.
  • Lack of transparency: In most cases, decisions and results produced by AI systems are difficult to understand. Data subject rights under the GDPR (e.g. the right to erasure) may also not be fully implemented by all AI systems.
  • Discrimination and bias: If AI systems are trained with biased data, they can reinforce prejudices and discrimination.
  • Malfunctions and errors: AI systems are not error-free, so results should not be accepted without verification.
  • Security risks: AI systems can both be used for cyberattacks and be affected by cyberattacks, so data and infrastructure security may be at risk.
  • Ethics and morality: Decision-making in critical situations in particular raises ethical and moral questions.

The AI Act: A Risk-Based Approach to the Use of AI

The AI Act, which came into force on August 1, 2024, applies both to developers and providers of AI systems in the European Union and to deployers of AI systems within the EU. The term “deployer” refers to any natural or legal person who uses an AI system under their own responsibility. This does not include the use of AI systems as part of a personal and non-professional activity.

The requirements and specifications of the AI Act take a risk-based approach by classifying AI systems into four risk categories.

Article 5 of the AI Act explicitly lists prohibited practices in the AI sector, such as AI systems that exploit a natural person’s vulnerability or need for protection due to their age, disability, or a specific social or economic situation.

High-risk AI systems are described in Article 6 of the AI Act and detailed in Annex III. An example of a high-risk AI system that could become relevant to many organizations is one used to make decisions on terms of employment, promotions, or terminations. For this type of AI system, the AI Act imposes specific requirements on risk monitoring, documentation, and human oversight.

The third level covers AI systems subject to specific transparency obligations, mainly those that interact directly with natural persons (e.g. chatbots) or generate audio, image, video, or text content (Art. 50 AI Act).

The lowest level, general-purpose AI models, poses a lower risk, which is why the AI Act imposes no special obligations on the deployers (users) of these AI models. However, there are obligations for the providers of these models, which are set out in Article 53 of the AI Act.

AI Compliance with EFS Consulting: Your Partner for Legally Compliant AI

EFS Consulting ensures the efficient and legally compliant use of AI systems in companies and works with clients to establish AI governance. This involves defining an organization-specific AI strategy, deriving compliance requirements, and defining processes and responsibilities. In addition, EFS Consulting helps to raise awareness within the company while always considering the latest technological and legal developments. Let’s build a tailored AI governance framework together!

People

Wolfgang Walter, Engagement Manager at EFS Consulting

Wolfgang Walter