Why AI Ethics, AI & Data Literacy, and the EU AI Act must work together for Responsible AI
By Dr. Hanan ElNaghy
Artificial Intelligence is no longer a distant technological concept. It is already embedded in our daily lives, helping with everyday tasks such as writing emails and with more advanced processes such as screening job applicants, detecting fraud, promoting products, and even assisting medical professionals in diagnosis. However, as AI becomes further integrated into our social and professional systems, an important question emerges, one we must consciously reflect on:
How do we ensure that AI is used responsibly?
Frankly, there is no single answer or simple solution to this question. Responsible AI requires us to combine AI literacy, data literacy, ethical awareness, and regulatory frameworks such as the EU AI Act. Amidst this rapid AI integration, these elements must work together to ensure AI systems continue to support and aid our society rather than harm it.
In this article, I will share key insights from our recent webinar at Amsterdam Data Academy on how organizations and individuals can move toward responsible AI adoption: using AI tools ethically to improve efficiency and simplify complex tasks, all while preserving the unique quality of their work.
AI Literacy is more than just using ChatGPT
When people first hear the term AI literacy, many immediately think about tools like ChatGPT, assuming this phrase simply refers to knowing how to effectively prompt AI.
However, AI literacy goes much deeper than that. True AI literacy includes a fundamental understanding of how AI models function: the difference between automation and learning systems, and the way AI systems learn patterns from historical data.
Moreover, a critical component of AI literacy is the ability to actively evaluate these models and their use: understanding the risks of bias, hallucinations, and flawed inputs, and recognizing the limitations of AI models in general.
Many people assume AI simply automates tasks. On the contrary, AI systems are fundamentally different from traditional automation: they learn from past data to predict future outcomes. Once we understand this principle, it becomes easier to explain phenomena such as bias or hallucinations.
For example, if an AI system is trained on biased or incomplete data, the output will also be biased. This is why the famous principle “Garbage In, Garbage Out” still applies strongly in AI systems.
This is why AI literacy must be built on top of data literacy: before people can responsibly use AI, they must first be able to answer the following questions (a small illustrative check follows the list):
- What does high-quality data look like?
- What does fair representation in data mean?
- How do we identify bias in datasets?
- How does outdated or incomplete data lead to unreliable predictions?
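To make the bias question concrete, here is a minimal sketch in Python (standard library only; the records, column names, and numbers are invented for illustration) of one simple check: comparing hire rates across groups in a historical dataset and applying the "four-fifths rule", a common screening heuristic from employment-discrimination practice.

```python
from collections import Counter

# Toy historical hiring records; in practice these would come from your
# organization's own data. All values here are invented.
records = [
    {"gender": "F", "hired": True},  {"gender": "F", "hired": False},
    {"gender": "F", "hired": False}, {"gender": "F", "hired": False},
    {"gender": "M", "hired": True},  {"gender": "M", "hired": True},
    {"gender": "M", "hired": True},  {"gender": "M", "hired": False},
]

def selection_rates(rows, group_key, outcome_key):
    """Share of positive outcomes per group, e.g. the hire rate per gender."""
    totals, positives = Counter(), Counter()
    for row in rows:
        totals[row[group_key]] += 1
        positives[row[group_key]] += row[outcome_key]
    return {g: positives[g] / totals[g] for g in totals}

rates = selection_rates(records, "gender", "hired")
print(rates)  # {'F': 0.25, 'M': 0.75}

# Four-fifths rule: flag the data if the lowest selection rate is less
# than 80% of the highest.
if min(rates.values()) < 0.8 * max(rates.values()):
    print("Warning: possible bias in the historical data.")
```

A model trained on records like these would learn the imbalance as if it were a valid pattern, which is exactly the "Garbage In, Garbage Out" problem described above.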
In short: without data literacy, AI literacy remains shallow and incomplete.
The EU AI Act: A Risk-based Approach to AI
To address the growing societal and technological impact of AI systems, the European Union introduced the EU AI Act, which builds on the foundation of earlier regulations such as the GDPR. The Act introduces a risk-based regulatory framework, categorizing AI systems according to their potential risk and impact.
The main categories are the following (a small illustrative sketch follows the list):
- Unacceptable Risk: These systems are banned outright because they pose a clear threat to people's safety, rights, or autonomy. Examples include social scoring systems as well as certain forms of emotion recognition used for manipulation or surveillance.
- High Risk: These systems are allowed but strictly regulated, because failures can have serious societal consequences. Examples include AI used in candidate recruitment, education, medical diagnosis, and other critical societal infrastructure.
- Limited Risk: These systems carry transparency obligations, meaning users must be informed that AI is being used, respecting their right to informed consent. Examples include AI chatbots and AI-powered customer service systems.
- Minimal Risk: These systems require little to no regulation, as they pose no direct ethical threat. Examples include common spam filters and casual AI use in media such as advertisements and video games.
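As a purely illustrative sketch (not a legal tool; actual classification under the Act depends on detailed legal criteria, and the keywords below are my own assumptions), here is how an organization might provisionally triage the AI systems in its internal inventory into these four tiers:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned outright"
    HIGH = "allowed but strictly regulated"
    LIMITED = "transparency obligations"
    MINIMAL = "little to no regulation"

# Keyword-based triage rules for a first-pass internal inventory.
# A real assessment must follow the Act's actual legal criteria.
TRIAGE_RULES = [
    ({"social scoring", "emotion recognition"}, RiskTier.UNACCEPTABLE),
    ({"recruitment", "education", "diagnosis", "infrastructure"}, RiskTier.HIGH),
    ({"chatbot", "customer service"}, RiskTier.LIMITED),
]

def triage(use_case: str) -> RiskTier:
    """Map a described use case to a provisional risk tier."""
    for keywords, tier in TRIAGE_RULES:
        if any(k in use_case.lower() for k in keywords):
            return tier
    return RiskTier.MINIMAL  # e.g. spam filters, video games

print(triage("CV screening for recruitment"))  # RiskTier.HIGH
print(triage("Spam filter for a mail inbox"))  # RiskTier.MINIMAL
```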
An extremely important and common misconception about the EU AI Act is that its rules only apply to companies located in Europe. This is completely false.
Any company placing AI systems on the European market must comply with these regulations, regardless of where it is based: Europe, the US, Asia, or the Middle East.
The Act is being implemented through a phased timeline. The first major milestones came into effect in February 2025: the ban on unacceptable-risk AI systems and the AI literacy obligation, which means companies developing or deploying AI systems must ensure their employees receive adequate AI literacy training. Regulations for most high-risk AI systems are planned to take effect in August 2026, while other obligations, such as those covering general-purpose AI models, follow their own separate timelines.
To summarise: AI literacy is no longer optional. It has become a legal obligation, one that all companies covered by the Act must abide by.
To help professionals and organizations prepare for these new obligations, Amsterdam Data Academy offers an AI Literacy & Ethics course focused on responsible AI use, governance, and EU AI Act compliance.
AI Governance: Moving beyond “tick-the-box” compliance
Many organizations interpret the aforementioned AI literacy requirements as a simple training exercise:
“Let’s send all employees to any course to get this over with and consider ourselves legally compliant.”
However, responsible AI requires more than a single training session. This is where AI governance becomes essential: the rules, policies, processes, and responsibilities that guide how AI systems are developed, deployed, and monitored within an organization.
Key governance components include:
- Fairness & Ethics
- Transparency & Accountability
- Data Privacy
- Risk Monitoring
- Redress Mechanisms
In other words, organizations must not only deploy AI systems, but also define who is responsible for monitoring and managing the risks throughout the entire AI lifecycle.
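To illustrate what assigning that responsibility can look like in practice, here is a minimal sketch of a per-system governance record; the fields are my own assumptions mirroring the components above, not something prescribed by the Act:

```python
from dataclasses import dataclass, field

@dataclass
class AIGovernanceRecord:
    """One entry in an organization's AI system inventory.
    Fields mirror the governance components listed above."""
    system_name: str
    risk_tier: str                 # e.g. "high", per the EU AI Act tiers
    owner: str                     # accountable person or team
    fairness_review_due: str       # next scheduled bias/fairness audit
    transparency_notice: bool      # are users informed AI is in use?
    privacy_assessment_done: bool  # e.g. a DPIA under the GDPR
    redress_contact: str           # where affected people can appeal
    incidents: list[str] = field(default_factory=list)

record = AIGovernanceRecord(
    system_name="CV screening assistant",
    risk_tier="high",
    owner="HR Analytics team",
    fairness_review_due="2026-08-01",
    transparency_notice=True,
    privacy_assessment_done=True,
    redress_contact="ai-appeals@example.org",
)
```

Even a simple inventory like this makes the difference between "we sent people to a course" and genuine governance: every system has a named owner, a review schedule, and a route for people to challenge its decisions.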
The challenge: Regulation vs Innovation
One of the biggest challenges in AI regulation is speed. Legislation is inherently slow, while AI innovation moves faster than ever: new developments appear almost weekly, sometimes daily. Meanwhile, laws can take years to draft and implement internationally.
This creates a difficult balance. If regulation is too slow, it becomes obsolete; if it is too strict, companies may simply avoid regulated markets altogether. We have already seen this with the GDPR: some companies decided not to enter the European market because they considered compliance too complex or costly.
This is why regulators must find the “regulatory sweet spot”: a framework that protects society while still encouraging innovation. The same logic applies within organizations: internal AI governance should be flexible, regularly reviewed, and responsive to both ethical and technological developments.
Keeping Humans in the Loop
Another important principle in responsible AI is Human-in-the-Loop (HITL). AI systems should not operate entirely without human oversight, especially when decisions affect people’s lives.
A useful distinction here is between two modes of AI use:
- Generative AI: Using AI to create complete outputs for you, for example asking ChatGPT to write an entire report.
- Assistive AI: Using AI to support human thinking rather than replace it, for instance asking AI to help you brainstorm ideas, improve the clarity of a text you wrote, or spot mistakes in a document.
Assistive AI keeps human thinking active, while purely generative use reduces critical thinking and engagement, slowly eroding the very human capabilities these systems were modelled on in the first place. Students and professionals should take this seriously: over-reliance on generative AI can lead to a gradual decay of fundamental cognitive skills, as analytical thinking atrophies when it is underused.
With this in mind, human involvement should ideally exist throughout the entire AI lifecycle: during system design, model development, decision-making, and especially after AI outputs are produced.
All in all, humans should still be able to review, challenge, and override AI decisions when necessary.
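A minimal sketch of the human-in-the-loop pattern (the threshold and field names are invented for illustration): the system applies AI output automatically only in low-stakes, high-confidence cases and routes everything else to a person who can accept, change, or reject it.

```python
from dataclasses import dataclass

@dataclass
class AIDecision:
    recommendation: str   # what the model suggests
    confidence: float     # model's confidence, 0.0 - 1.0
    high_stakes: bool     # does this decision affect a person's life?

def route(decision: AIDecision) -> str:
    """Decide whether the AI output may be applied automatically
    or must first be reviewed by a human."""
    if decision.high_stakes or decision.confidence < 0.9:
        return "human_review"   # a person can accept, change, or reject
    return "auto_apply"         # low-stakes and high-confidence only

# A hiring decision always goes to a human, however confident the model:
print(route(AIDecision("reject candidate", 0.97, high_stakes=True)))
# -> human_review

# Routine, low-stakes output may pass through automatically:
print(route(AIDecision("flag email as spam", 0.98, high_stakes=False)))
# -> auto_apply
```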
AI Literacy Must Be Context-Specific
Another misconception is that AI literacy can be taught as a single universal course. In reality, AI literacy must be customized for different fields.
For example:
- Doctors need to understand AI in medical diagnosis
- HR professionals need to understand AI in professional recruitment
- Engineers require deeper technical knowledge
- Managers need strategic and ethical understanding
Even concepts like Explainable AI (XAI) must be taught differently depending on the audience. A developer might need technical explanations involving neural networks, while a patient or student would require a much simpler account of how a decision was made.
Effective AI literacy training therefore requires contextualized learning and industry-specific examples.
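As a toy illustration (the “model” below is just a dictionary of coefficients, and all names and wording are invented), the same underlying signal can be surfaced very differently for a developer and for the person affected by the decision:

```python
# The same model output, explained for two audiences. The "model" here
# is a set of linear-model coefficients, purely illustrative.
coefficients = {"blood_pressure": 0.42, "age": 0.31, "cholesterol": 0.08}

def explain_for_developer(coefs: dict[str, float]) -> str:
    """Technical view: all feature weights, ranked by magnitude."""
    ranked = sorted(coefs.items(), key=lambda kv: -abs(kv[1]))
    return "Feature weights: " + ", ".join(f"{k}={v:+.2f}" for k, v in ranked)

def explain_for_patient(coefs: dict[str, float]) -> str:
    """Plain-language view: only the single strongest factor."""
    top = max(coefs, key=lambda k: abs(coefs[k]))
    return f"The main factor in this assessment was your {top.replace('_', ' ')}."

print(explain_for_developer(coefficients))
# Feature weights: blood_pressure=+0.42, age=+0.31, cholesterol=+0.08
print(explain_for_patient(coefficients))
# The main factor in this assessment was your blood pressure.
```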
Responsible AI in Talent Acquisition: A Practical Example
Recruitment and talent management are strong candidates for AI support because they involve large volumes of data and time-consuming processes, such as CV screening, interview scheduling, candidate assessments, and performance predictions. However, this also makes recruitment a high-risk application.
If AI models are trained on biased historical data, they may reproduce those biases. For example, if past hiring managers unfairly favored certain groups, the AI system may learn and replicate those discriminatory patterns.
Responsible AI use in recruitment should therefore include:
- Bias detection in historical data
- Transparency with candidates about AI use
- Data minimization (collecting only necessary information)
- Consent from applicants
- Human oversight in final hiring decisions
- Continuous monitoring of model performance (a small sketch follows below)
While AI can assist in filtering or analyzing candidates, final decisions should remain under human responsibility.
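Continuous monitoring, the last item on the list above, can start very simply: periodically recompute the model's selection rates per group and alert when they drift apart. Here is a sketch reusing the four-fifths heuristic from earlier; the threshold and records are assumptions:

```python
def monitor_selection_rates(decisions, group_key="gender", threshold=0.8):
    """Alert when the model's advance rate for any group falls below
    `threshold` times the highest group's rate."""
    totals, advanced = {}, {}
    for d in decisions:
        g = d[group_key]
        totals[g] = totals.get(g, 0) + 1
        advanced[g] = advanced.get(g, 0) + d["advanced"]
    rates = {g: advanced[g] / totals[g] for g in totals}
    if min(rates.values()) < threshold * max(rates.values()):
        return f"ALERT: selection rates diverging: {rates}"
    return f"OK: {rates}"

# This month's model decisions (invented): did each candidate advance?
batch = [
    {"gender": "F", "advanced": True},  {"gender": "F", "advanced": False},
    {"gender": "M", "advanced": True},  {"gender": "M", "advanced": True},
]
print(monitor_selection_rates(batch))
# ALERT: selection rates diverging: {'F': 0.5, 'M': 1.0}
```

An alert like this does not prove discrimination, but it tells the human owner of the system exactly when to step in and investigate.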
Responsible AI Is a Shared Responsibility
Responsible AI is not only the responsibility of AI developers or regulators. It is a shared responsibility between organizations, developers, policymakers, educators, and end users alike. Even users interacting with AI systems should provide constructive feedback when outputs are incorrect.
Responsibility goes both ways.
The Future of Responsible AI
To conclude, the question is no longer whether AI will reshape our world; it already has. The real question is whether we are prepared to shape it responsibly.
As AI continues to evolve rapidly, regulations will evolve in turn, and organizations will need to adapt quickly. One thing, however, is already clear: responsible AI cannot exist without the active integration of AI literacy, data literacy, ethical awareness, and governance frameworks.
By educating people, designing responsible systems, and building adaptive regulations, we can ensure that AI remains a tool that supports human decision-making rather than fully replacing it.
Most importantly, to protect the unique thought and nuance behind human capabilities while still keeping up with and incorporating modern technological advancement in our favour, we as a society must always remember:
AI should augment human intelligence, not replace it.