CertAI Trusted Responsible AI Mark
Use of Responsible AI – Summary & Quiz
Responsible AI is the foundation for safe, trustworthy, and human-centric technology. The EU defines seven key requirements that guide the development and use of trustworthy AI systems. They come from the Assessment List for Trustworthy Artificial Intelligence (ALTAI), published by the EU's High-Level Expert Group on AI.
Human Agency and Oversight
AI must respect human autonomy and allow for human control. Users should know when they are interacting with AI and not be misled or overruled by it.
Examples: A medical diagnosis support system that lets doctors confirm or override AI suggestions; A recruitment tool that allows HR staff to review and adjust automated candidate rankings.
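The "confirm or override" pattern in these examples can be sketched in code. This is a minimal illustration, not a real clinical system; the `Suggestion` type and the decision rule are assumptions made for the example:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Suggestion:
    """An AI recommendation that is never applied without human sign-off."""
    label: str
    confidence: float

def final_decision(suggestion: Suggestion, reviewer_decision: Optional[str]) -> str:
    """Return the outcome: the human reviewer can confirm or override.

    If the reviewer supplies a decision, it always wins; the AI
    suggestion is only a default that must be explicitly confirmed.
    """
    if reviewer_decision is not None:
        return reviewer_decision          # human override takes precedence
    return suggestion.label               # reviewer confirmed the AI default

ai = Suggestion(label="benign", confidence=0.87)
print(final_decision(ai, None))           # reviewer confirms the default
print(final_decision(ai, "malignant"))    # reviewer overrides the AI
```

The key design choice is that the AI output is a suggestion object, not an action: nothing happens until a human decision path is taken.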
Technical Robustness and Safety
AI systems must be safe, resilient, and reliable under normal and unexpected conditions. They must handle errors, attacks, or environmental changes.
Examples: An autonomous drone that safely aborts a mission if it loses GPS signal; A banking fraud detection system that continues to function during cyberattacks.
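The drone example describes a fail-safe policy: when a critical input degrades, the system falls back to a predefined safe behaviour instead of failing unpredictably. A minimal sketch, with assumed modes and a hypothetical 20% battery threshold:

```python
from enum import Enum, auto

class Mode(Enum):
    CONTINUE = auto()      # mission proceeds normally
    RETURN_HOME = auto()   # abort mission, navigate back
    LAND_NOW = auto()      # safest available action without navigation

def next_mode(gps_ok: bool, battery_pct: float) -> Mode:
    """Fail-safe policy: degrade to the safest available action.

    Loss of GPS or a low battery never raises an error mid-flight;
    it deterministically selects a safe fallback behaviour.
    """
    if not gps_ok:
        return Mode.LAND_NOW       # cannot navigate reliably: land immediately
    if battery_pct < 20.0:
        return Mode.RETURN_HOME    # still navigable, but abort the mission
    return Mode.CONTINUE
```

Ordering matters: the most severe condition is checked first, so the system always chooses the most conservative applicable response.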
Privacy and Data Governance
AI must comply with GDPR and other data protection laws. It should collect only necessary data, use secure processing, and maintain user privacy.
Examples: A health app that uses anonymized data for AI training; A chatbot that does not store sensitive user inputs without consent.
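Data minimisation like the health-app example can be sketched as a pseudonymisation step before data leaves the app. The field names are assumed for illustration; note that pseudonymised data is still personal data under the GDPR, so this reduces, but does not remove, re-identification risk:

```python
import hashlib

# Assumed schema: fields treated as direct identifiers in this example
DIRECT_IDENTIFIERS = {"name", "email", "phone"}

def pseudonymize(record: dict, salt: str) -> dict:
    """Drop direct identifiers and replace the user id with a salted hash."""
    cleaned = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    cleaned["user_id"] = hashlib.sha256(
        (salt + str(record["user_id"])).encode()
    ).hexdigest()[:16]                     # stable pseudonym, not reversible
    return cleaned

row = {"user_id": 42, "name": "Ada", "email": "ada@example.com", "steps": 9000}
print(pseudonymize(row, salt="rotate-me-regularly"))
```

Only the non-identifying fields (here, the step count) survive in the training export, and the pseudonym is stable enough to link records without exposing the identity.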
Transparency
AI systems must be understandable and traceable. People should know how decisions are made, and organizations must document how the AI works.
Examples: A loan application system that shows users the reason for rejection; A smart thermostat that explains energy usage predictions.
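The loan example can be made concrete with a decision function that returns reasons alongside the outcome, so the applicant sees exactly which criteria failed. The thresholds below are hypothetical, chosen only to illustrate the pattern:

```python
def assess_loan(income: float, debt_ratio: float, missed_payments: int):
    """Rule-based check that returns both a decision and human-readable reasons."""
    reasons = []
    if income < 25_000:
        reasons.append("Annual income below the 25,000 minimum")
    if debt_ratio > 0.4:
        reasons.append("Debt-to-income ratio above 40%")
    if missed_payments > 2:
        reasons.append("More than two missed payments in the last year")
    approved = not reasons          # approve only if no criterion failed
    return approved, reasons

ok, why = assess_loan(income=22_000, debt_ratio=0.5, missed_payments=0)
# ok is False; `why` lists the specific criteria that were not met
```

Because reasons are produced by the same code path as the decision, the explanation cannot drift from the actual logic, which also makes the system easier to document and audit.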
Diversity, Non-discrimination, and Fairness
AI should avoid unfair bias and be inclusive. This includes using diverse datasets and evaluating the impact of algorithms on different groups.
Examples: An AI hiring tool designed to reduce gender and ethnicity bias; A voice assistant that works accurately across accents and dialects.
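Evaluating impact on different groups is often done by comparing selection rates per group (a demographic-parity style check). A minimal sketch with made-up data; a large gap between groups would flag the tool for human review, not automatically prove discrimination:

```python
from collections import defaultdict

def selection_rates(outcomes):
    """Compute the positive-outcome rate per group.

    `outcomes` is a list of (group, selected) pairs.
    """
    totals, positives = defaultdict(int), defaultdict(int)
    for group, selected in outcomes:
        totals[group] += 1
        positives[group] += int(selected)
    return {g: positives[g] / totals[g] for g in totals}

data = [("A", True), ("A", True), ("A", False),
        ("B", True), ("B", False), ("B", False)]
rates = selection_rates(data)
gap = max(rates.values()) - min(rates.values())
# Group A is selected at 2/3, group B at 1/3; a gap this large
# would warrant investigation of the model and its training data.
```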
Societal and Environmental Well-being
AI should benefit society and avoid harm to democracy, labor, and the environment. The broader impact of AI must be considered.
Examples: A climate prediction model used to prepare for extreme weather; An AI-based content moderation system that protects against hate speech without suppressing free expression.
Accountability
Clear responsibility must be assigned for AI outcomes. AI systems must be auditable, with mechanisms for redress, complaint handling, and legal traceability.
Examples: A public sector algorithm that includes appeal procedures for citizens; An AI-powered pricing system subject to regular ethical audits.
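Auditability can be supported technically with a tamper-evident decision log: each entry includes a hash of its predecessor, so any later alteration breaks the chain and is detectable. This is a simplified sketch of the idea, not a production audit system:

```python
import hashlib
import json
import time

def audit_entry(decision: dict, prev_hash: str) -> dict:
    """Create an audit record chained to the previous one by hash."""
    body = {"ts": time.time(), "decision": decision, "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "hash": digest}

log = []
h = "0" * 64                                   # genesis value for the chain
for d in [{"id": 1, "outcome": "approved"}, {"id": 2, "outcome": "denied"}]:
    entry = audit_entry(d, h)
    log.append(entry)
    h = entry["hash"]
# An auditor can recompute each hash from the stored fields to verify
# that no entry was altered or removed after the fact.
```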