Code Institute Responsible AI Use Policy

1. Purpose and Scope

This policy outlines Code Institute’s commitment to the ethical, responsible, and lawful use of Artificial Intelligence (AI) technologies in our educational platforms, business operations, and engagements with students, staff, and clients. It applies to all AI systems developed, deployed, or used by or on behalf of Code Institute.

2. Definitions

  • AI System – For the purposes of this policy, an “AI system” refers to any software or hardware solution that performs tasks typically requiring human intelligence, such as learning, reasoning, problem-solving, perception, or language understanding. This includes, but is not limited to, machine learning models, natural language processing tools, automated decision-making systems, recommendation engines, and generative AI technologies used, developed, or deployed by Code Institute.
  • High-Risk AI System – In this policy, a high-risk AI system is any AI application that, if it malfunctions or is misused, could significantly impact the rights, safety, or well-being of individuals. This includes systems used for student admissions, student care, personal data processing, or any function that could affect access to education, employment, or legal rights, as defined by the EU AI Act.
  • Human-in-the-Loop – A process where human judgment is integrated at key stages in the operation of an AI system, allowing people to review, approve, or override automated outputs before any significant action is taken.
  • Bias – Systematic and unfair favouritism or prejudice in AI outputs or processes, often resulting from unrepresentative data or flawed algorithms, that leads to unjust outcomes for individuals or groups.
  • Automated Decision-Making – The process where an AI system makes decisions or recommendations with minimal or no human intervention, especially in cases that can affect individuals’ rights or opportunities.

3. Guiding Principles

3.1 Transparency

  • All AI systems used by Code Institute must be explainable and interpretable to end-users and stakeholders.
  • We commit to providing clear documentation and, where appropriate, interfaces that describe how decisions are made, especially in cases affecting students or clients.
  • We ensure that users are notified when they are interacting with AI systems, in line with the EU AI Act’s transparency obligations.
  • Documentation and AI disclosures will be made available in accessible, plain-language formats to support student and staff understanding.

3.2 Accountability

  • Code Institute assigns clear roles and responsibilities for the oversight of each AI system.
  • Mechanisms will be in place to monitor outcomes, report issues, and take corrective action if harms or unintended consequences arise.
  • A designated AI Ethics Lead will oversee implementation, supported by relevant stakeholders as needed.
  • The AI Ethics Lead or Committee will report to the Senior Leadership Team and include cross-functional representation from product, engineering, data, and education departments.

3.3 Fairness and Non-Discrimination

  • We assess all AI systems for bias and discriminatory impacts, including via fairness audits and inclusive testing procedures; a simplified illustration of such an audit follows this list.
  • Special attention is paid to protected characteristics (e.g., gender, race, age) to ensure that AI systems do not disadvantage any individual or group.
  • We follow the EU AI Act’s requirements on high-risk systems to ensure fairness is demonstrably embedded in their design and function.
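
As a simplified illustration of the fairness audits referred to above, the sketch below compares positive-outcome rates across groups and flags large gaps for human review. The field names (“group”, “admitted”) and the 5% threshold are assumptions chosen for the example, not details of any Code Institute system.

```python
# Simplified fairness audit: compare positive-outcome rates across groups.
# Field names and the 5% threshold are illustrative assumptions only.
from collections import defaultdict

def demographic_parity_gap(records, group_key="group", outcome_key="admitted"):
    """Return the largest gap in positive-outcome rate between any two groups."""
    totals, positives = defaultdict(int), defaultdict(int)
    for record in records:
        totals[record[group_key]] += 1
        positives[record[group_key]] += record[outcome_key]
    rates = {group: positives[group] / totals[group] for group in totals}
    return max(rates.values()) - min(rates.values()), rates

sample = [
    {"group": "A", "admitted": 1}, {"group": "A", "admitted": 0},
    {"group": "B", "admitted": 1}, {"group": "B", "admitted": 1},
]
gap, rates = demographic_parity_gap(sample)
if gap > 0.05:  # threshold would be set by the audit procedure, not fixed here
    print(f"Fairness flag: outcome rates differ by {gap:.0%} across groups {rates}")
```

A real audit would use the institution’s own metrics and review procedures; the sketch only shows that “demonstrably embedded” fairness implies measurable, repeatable checks.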

3.4 Privacy and Data Protection

  • Code Institute complies fully with the General Data Protection Regulation (GDPR) and local data protection laws.
  • All AI systems are designed in line with the principles of data minimisation, purpose limitation, and privacy by design.
  • Data subjects retain their rights of access, rectification, erasure, and objection, including in relation to automated decision-making.
  • Where automated decision-making is used, Code Institute will ensure data subjects are provided with meaningful information about the logic involved and the significance and consequences of such processing, per GDPR Article 22.

3.5 Safety and Security

  • We will take all reasonable steps to ensure our AI systems are robust, resilient, and secure against adversarial attacks, data breaches, and misuse.
  • AI development includes risk assessments and threat modelling, particularly for systems handling sensitive student or operational data.
  • In accordance with the EU AI Act, Code Institute implements post-deployment monitoring for safety-critical or high-risk applications; a simplified sketch of such a monitor follows this list.
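
As an illustration of what post-deployment monitoring can look like in practice, the minimal sketch below compares a live quality metric against its pre-deployment baseline and raises a flag when it drifts beyond a tolerance. The metric, baseline, and tolerance values are hypothetical.

```python
# Minimal post-deployment monitoring sketch: flag drift in a quality metric.
# The metric name, baseline, and tolerance are hypothetical values.
def has_drifted(baseline: float, live: float, tolerance: float = 0.10) -> bool:
    """Return True when the live metric falls outside the tolerance band."""
    return abs(live - baseline) > tolerance

baseline_accuracy = 0.92  # measured during pre-deployment validation
live_accuracy = 0.78      # measured on a recent window of production data

if has_drifted(baseline_accuracy, live_accuracy):
    # In practice this would open an incident for the system owner and the
    # AI Ethics Lead, triggering the corrective action described in 3.2.
    print("ALERT: model quality has drifted; trigger review and corrective action")
```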

3.6 Human Oversight (Human-in-the-Loop)

  • We strive to design AI systems with a ‘human-in-the-loop’ approach, ensuring they support rather than replace human decision-making in educational and support contexts; a simplified sketch of such a review gate follows this list.
  • Human supervisors will be trained to understand the AI's role, interpret its outputs, and intervene or override where necessary.
  • Final decisions that significantly affect individuals (e.g., admissions or grading) will include human judgment and review.
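
The ‘human-in-the-loop’ pattern above can be made concrete with a small sketch: an automated recommendation is held until a named reviewer approves it or overrides it with their own decision. The data shapes and action names below are assumptions for illustration, not a description of Code Institute’s systems.

```python
# Human-in-the-loop sketch: no significant action is taken until a named
# reviewer approves or overrides the AI system's recommendation.
# Data shapes and action names are illustrative assumptions.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Recommendation:
    subject: str        # e.g. an admissions application reference
    ai_decision: str    # the system's suggested outcome
    rationale: str      # explanation shown to the human reviewer

def finalise(rec: Recommendation, reviewer: str, action: str,
             human_decision: Optional[str] = None) -> str:
    """Apply human judgment before the decision takes effect."""
    if action == "approve":
        decision = rec.ai_decision
    elif action == "override":
        if human_decision is None:
            raise ValueError("an override must supply the reviewer's decision")
        decision = human_decision
    else:
        raise ValueError(f"unknown action: {action!r}")
    print(f"{rec.subject}: {decision!r} confirmed by {reviewer} ({action})")
    return decision

rec = Recommendation("application-042", "offer place", "meets entry criteria")
finalise(rec, reviewer="admissions officer", action="approve")
```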

3.7 Beneficence and Purpose-Driven Use

  • We aim to use AI to enhance the educational experience, empower learners, and optimise institutional performance without compromising ethical values.
  • Code Institute prohibits the use of AI in ways that deceive, manipulate, exploit, or harm users, especially vulnerable populations.
  • All AI projects must demonstrate a clear and positive benefit to students, staff, or society.

3.8 Inclusiveness and Participation

  • Where feasible, we seek input from relevant stakeholders, such as educators and students, to inform the design and evaluation of AI systems.
  • Feedback from these groups is used to ensure representative, relevant, and inclusive AI tools.
  • We may summarise internal feedback themes where they influence key AI system changes.

4. Compliance and Review

  • This policy is aligned with the EU AI Act, GDPR, and relevant international ethical AI frameworks (e.g., the OECD AI Principles and UNESCO’s Recommendation on the Ethics of Artificial Intelligence).
  • Regular compliance reviews and impact assessments will be conducted for all AI systems, especially those identified as high-risk.
  • Training will be provided to relevant staff to ensure awareness and competence in ethical AI use.

5. Reporting and Redress

  • Any user, staff member, or third party can report concerns or complaints about an AI system via a designated contact point (e.g., aiethics@codeinstitute.net).
  • We maintain a transparent grievance mechanism and will investigate and resolve issues related to harm, bias, or improper use of AI systems.
  • We aim to acknowledge reports promptly and resolve issues in a reasonable timeframe, typically within 30 days.

6. Updates and Amendments

This policy will be reviewed annually and updated in response to:

  • Changes in law or regulation (e.g., updates to the EU AI Act)
  • Technological advancements
  • Stakeholder feedback
  • Internal audit findings

We recognise that AI governance is evolving. This policy will adapt as new best practices and regulatory guidance emerge.