AI and data protection: what do companies need to know?
SMEs are currently facing one of the biggest technological transformations of our time: artificial intelligence (AI). While this technology offers immense opportunities, it also comes with considerable challenges and uncertainties, particularly with regard to data protection. As a partner to SMEs, we want to help you overcome these challenges and fully exploit the potential of AI without jeopardizing the security and protection of sensitive data. The first step is to take a detailed look at the requirements of the General Data Protection Regulation and how compliance with the GDPR can be ensured.
Legal framework for the use of AI
Companies that want to use AI must deal intensively with data protection law in order to both comply with legal requirements and minimize risks. This includes a comprehensive risk assessment and the implementation of protective measures to ensure confidentiality, data security and data integrity.
Introduction to the AI Regulation, GDPR and the Supply Chain Due Diligence Act
The AI Regulation (also known as the AI Act) and the General Data Protection Regulation (GDPR) are key legal acts of the European Union. They regulate different but interlinked aspects of the use of artificial intelligence (AI). While the GDPR focuses on the protection of personal data and the privacy of individuals within the EU, the AI Regulation creates a legal framework for the safe and ethical use of AI systems. Companies that fall under the Supply Chain Due Diligence Act (LkSG) have had to comply with their due diligence obligations since January 1, 2023, including monitoring risk management and setting up a complaints mechanism. Small and medium-sized enterprises (SMEs) may also be indirectly affected in their role as suppliers to larger companies covered by the LkSG.
Objectives and scope of application: GDPR, AI Regulation and LkSG
The GDPR focuses on personal data and its protection. It regulates how data may be handled: in plain language, its collection, storage and processing. The regulation applies worldwide to all organizations that process the data of EU citizens. The AI Regulation, on the other hand, takes a risk-based approach, intended to ensure the safe and ethical use of AI. It divides AI systems into four main risk categories: unacceptable risk (prohibited AI applications), high risk, limited risk and minimal risk. Depending on the risk assessment and the established AI guidelines, companies must ensure that their AI systems meet the corresponding requirements.
When negotiating contracts with larger companies affected by the LkSG, SMEs should ensure that the obligations assigned to them are realistic and feasible. It is important to negotiate implementation cooperatively.
The risk categories of the AI Regulation
The EU AI Regulation divides AI systems into different risk categories to ensure that the regulatory burden is appropriate to the potential level of risk. These categories are presented as a pyramid ranging from minimal risk to unacceptable risk.
Unacceptable risk (prohibited AI applications)
This top category includes AI systems that are considered a threat to people’s safety or fundamental rights and are therefore banned in the EU. Examples include:
- Social scoring by governments, in which citizens’ behavior is assessed and monitored in order to control access to services or rights
- Certain forms of biometric facial recognition, in particular methods that could significantly affect human rights or privacy
High risk (high-risk AI systems)
This category includes AI systems that are used in safety-critical or fundamental rights-sensitive areas. These systems are subject to strict regulatory requirements to ensure the security and protection of fundamental rights. Examples include:
- Law enforcement
- Education
- Medical care
- Critical infrastructure management
Regulatory requirements include comprehensive risk assessments, strict data governance, detailed technical documentation and continuous human oversight.
Low risk (limited risk)
AI systems in this category are subject to less stringent regulatory requirements. As long as they are safe and transparent, these systems may be used. Companies must ensure that:
- users are informed about how the AI works,
- measures are taken to avoid undesirable distortions.
Minimal risk
This category includes AI systems that pose a very low risk. They are subject to the lowest regulatory requirements. This category includes, for example, many everyday AI applications that have no significant impact on safety or fundamental rights.
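To keep track of where your own systems fall, it can help to maintain a simple internal inventory along these four tiers. The following sketch shows one possible structure in Python; the `AISystem` record and the example entries are purely illustrative, and the actual classification always requires a legal assessment.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers of the EU AI Regulation."""
    UNACCEPTABLE = "prohibited AI practice"
    HIGH = "high-risk AI system"
    LIMITED = "limited risk (transparency obligations)"
    MINIMAL = "minimal risk"

@dataclass
class AISystem:
    name: str
    purpose: str
    tier: RiskTier

# Illustrative inventory entries; real classification requires a legal assessment.
inventory = [
    AISystem("CV screening assistant", "pre-selects job applicants", RiskTier.HIGH),
    AISystem("Support chatbot", "answers customer questions", RiskTier.LIMITED),
    AISystem("Spam filter", "sorts inbound e-mail", RiskTier.MINIMAL),
]

for system in inventory:
    print(f"{system.name}: {system.tier.value}")
```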
Supervision and sanctions: GDPR, AI Regulation and LkSG in comparison
Compliance with the GDPR is monitored by supervisory authorities in every EU member state. Violations can be punished with fines of up to €20 million or 4% of a company’s global annual turnover, whichever is higher. The AI Regulation likewise provides for heavy fines: up to €35 million or 7% of global annual turnover for prohibited AI practices, and up to €15 million or 3% for violations of other obligations, such as the requirements for high-risk AI systems. Using suitable data protection tools and performing regular data protection audits are further measures companies should take to avoid sanctions.
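Because both regimes cap fines at whichever is higher, a fixed amount or a share of turnover, the exposure grows with company size. A quick illustration in Python (the turnover figure is invented):

```python
def fine_cap(turnover_eur: float, fixed_cap_eur: float, turnover_share: float) -> float:
    """Maximum fine: the higher of a fixed amount and a share of global annual turnover."""
    return max(fixed_cap_eur, turnover_share * turnover_eur)

turnover = 2_000_000_000  # hypothetical global annual turnover of EUR 2 billion

print(f"GDPR cap:                    EUR {fine_cap(turnover, 20_000_000, 0.04):,.0f}")  # EUR 80,000,000
print(f"AI Reg. cap (prohibited AI): EUR {fine_cap(turnover, 35_000_000, 0.07):,.0f}")  # EUR 140,000,000
```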
The LkSG also provides for sanctions if companies do not fulfill their due diligence obligations. In addition to fines, these sanctions can also include exclusion from public contracts. SMEs should clearly communicate their limits here. They should only take on obligations that they can realistically fulfill.
Timetable and entry into force: Important dates for companies
The AI Regulation will enter into force on August 1, 2024. Most of its provisions, however, will only become fully applicable after a two-year transition period, from August 2, 2026. Some provisions, such as the ban on AI systems posing an unacceptable risk, will apply as early as February 2, 2025.
The LkSG has been in force for larger companies since January 1, 2023. Companies with 1,000 to 3,000 employees in Germany must comply with their due diligence obligations from January 1, 2024.
AI and data protection: risks and challenges
One of the biggest concerns of many SMEs is the risk of data breaches. AI systems require large amounts of data, which increases the risk of data misuse. It is therefore essential that this data is adequately protected. There is also a risk of AI systems exhibiting biases that can lead to discriminatory results. This can have serious consequences for the company and its reputation. This makes it all the more important to pay attention to the quality of the data and avoid potential distortions.
Another key point is the transparency of AI systems. Companies must ensure that the functioning of AI, especially when processing personal data, is transparent and traceable. This requires clear documentation of data processing procedures and the regular performance of data protection audits to ensure data integrity.
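As one practical illustration of data minimization, personal identifiers can be stripped or pseudonymized before text is handed to an AI service. The sketch below is a minimal example; the regex patterns are simplistic placeholders, and production-grade PII detection requires dedicated tooling.

```python
import re

# Illustrative patterns only; real PII detection needs far more robust tooling.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d /-]{7,}\d"),
    "IBAN": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
}

def pseudonymize(text: str) -> str:
    """Replace recognizable identifiers with placeholders before AI processing."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"<{label}>", text)
    return text

ticket = "Customer jane.doe@example.com (+49 170 1234567) asks about her invoice."
print(pseudonymize(ticket))
# -> "Customer <EMAIL> (<PHONE>) asks about her invoice."
```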
Practical measures for the safe use of AI
To successfully master these challenges, you can rely on proven procedures and our expertise. With our data protection-friendly AI solutions, we integrate the principle of “privacy by design” into your projects right from the start, so that data protection tools and default settings are seamlessly embedded in AI development and implementation.
Careful documentation of data processing activities is also essential. We help you maintain an overview and check together whether, for example, data processing agreements are required in accordance with Art. 28 GDPR. Through targeted training, we enable your employees to use AI systems securely and in compliance with data protection regulations.
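One way to keep this overview is a structured record of processing activities in the spirit of Art. 30 GDPR. The sketch below is illustrative: the fields loosely mirror what such a record contains, and the example entry is invented.

```python
from dataclasses import dataclass, field

@dataclass
class ProcessingActivity:
    """Minimal record of a processing activity, loosely following Art. 30 GDPR."""
    name: str
    purpose: str
    data_categories: list[str]
    recipients: list[str]
    uses_processor: bool          # if True, a data processing agreement (Art. 28) is needed
    retention: str
    safeguards: list[str] = field(default_factory=list)

activity = ProcessingActivity(
    name="AI-assisted support ticket triage",
    purpose="Routing customer inquiries to the right team",
    data_categories=["name", "e-mail address", "ticket text"],
    recipients=["external AI provider (EU hosting)"],
    uses_processor=True,
    retention="90 days after ticket closure",
    safeguards=["pseudonymization before AI processing", "TLS in transit"],
)

if activity.uses_processor:
    print(f"'{activity.name}': check for a data processing agreement under Art. 28 GDPR.")
```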
What does that mean in detail?
Uniform requirements for GDPR and AI Regulation compliance
To ensure that your AI systems comply with the requirements of both the General Data Protection Regulation (GDPR) and the AI Regulation, a number of proven measures are essential. These include strict compliance with legal regulations, in particular those protecting personal data under the GDPR. In addition, your AI systems should meet ethical standards such as fairness, transparency, accountability and inclusivity.
An important part of compliance is carrying out a data protection impact assessment before implementing high-risk AI systems. This helps to identify and minimize potential risks at an early stage. In addition, the data used for AI systems should be of high quality and free from bias to ensure fair and non-discriminatory results. Transparency is crucial here: the functioning of your AI systems must be comprehensible in order to avoid the so-called black box effect. Comprehensive documentation of the algorithms and data used is essential to ensure the necessary traceability.
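Whether data is truly free from bias can at least be screened with simple statistics before a more thorough audit. The sketch below computes the disparate-impact ratio, a common rule-of-thumb fairness indicator, on hypothetical outcome data; it is a first check, not a substitute for a proper fairness assessment.

```python
def disparate_impact(outcomes: dict[str, tuple[int, int]]) -> float:
    """Ratio of the lowest to the highest positive-outcome rate across groups.

    outcomes maps group name -> (positive outcomes, total cases).
    Values below ~0.8 are a common warning sign for disparate impact.
    """
    rates = {group: pos / total for group, (pos, total) in outcomes.items()}
    return min(rates.values()) / max(rates.values())

# Hypothetical outcomes of an AI-assisted decision, broken down by group.
sample = {"group_a": (45, 100), "group_b": (30, 100)}

ratio = disparate_impact(sample)
print(f"Disparate-impact ratio: {ratio:.2f}")  # 0.67 -> worth investigating
```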
By implementing these requirements, companies can not only minimize legal and financial risks, but also strengthen stakeholder confidence in the data integrity, cybersecurity and confidentiality of their AI systems. This ultimately contributes to the responsible and sustainable use of artificial intelligence.
AI and data protection: our recommendation
Here’s the kicker: far too many companies are still relying on generic Large Language Models (LLMs) from providers such as OpenAI. While these models are powerful, they are often unable to fully meet the specific requirements and high technology standards required in many industries. Standard LLMs often offer limited access to proprietary data, show weaknesses in avoiding hallucinations and lack adaptability to industry-specific processes. These limitations can not only lead to inaccurate results, but also pose significant compliance and data security risks.
Further details on the limits of generic LLMs are described here.
Our solutions, on the other hand, offer a customized alternative that closes precisely these gaps. By accessing company-specific knowledge and individual data, they can meet your specific requirements without the typical weaknesses of generic models. This not only ensures greater accuracy and reliability, but also significantly improves integration into existing business processes. With our solutions, companies can ensure that their AI systems are up to date both technically and in regulatory terms, which ultimately makes them more competitive and better prepared for the challenges of the future.
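"Accessing company-specific knowledge and individual data" typically means retrieval-augmented generation (RAG): the model's answer is grounded in documents retrieved from the company's own knowledge base. The following is a minimal sketch of that pattern; the `embed` and `generate` functions are placeholders for whatever embedding model and LLM are actually used.

```python
# Minimal retrieval-augmented generation (RAG) sketch. The embedding and
# generation functions are placeholders; any embedding model and LLM will do.

def embed(text: str) -> list[float]:
    """Placeholder: map text to a vector (e.g. via an embedding model)."""
    raise NotImplementedError

def generate(prompt: str) -> str:
    """Placeholder: call the language model of choice."""
    raise NotImplementedError

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
    return dot / norm if norm else 0.0

def answer(question: str, documents: list[str], top_k: int = 3) -> str:
    """Ground the model's answer in the most relevant company documents."""
    q_vec = embed(question)
    ranked = sorted(documents, key=lambda d: cosine(q_vec, embed(d)), reverse=True)
    context = "\n\n".join(ranked[:top_k])
    prompt = (
        "Answer ONLY from the context below; say so if the answer is not in it.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return generate(prompt)
```

Restricting the prompt to retrieved company documents is what reduces hallucinations and keeps proprietary knowledge in the loop, which is precisely where generic, unaugmented LLMs fall short.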
An overview of the concrete comparison between our solution and ChatGPT, for example, can be found here.
Your partner for safe and successful AI use
As your partner, we are happy to support you in the development and implementation of AI solutions that are both data protection compliant and efficient. Together, we can securely and successfully exploit the opportunities offered by AI while overcoming legal challenges.