AI Student Manifesto: Championing Agency and Ethics in Educational AI

Published on April 20, 2025 by AIxponential Research Team & Student Advisory Council



I. Preamble: Reclaiming Agency in the Age of AI

The Imperative for Student Agency

Education, at its core, aims to empower individuals not only with knowledge but also with the capacity to navigate the complexities of life and contribute meaningfully to society. Central to this empowerment is the concept of student agency – the autonomy, influence, and power students possess over their own learning processes and educational experiences.1 Agency is far more than a desirable attribute; it is a cornerstone of effective learning.3 It encompasses the ability to make purposeful choices, shape one's educational journey, and take ownership of learning.1

The benefits of fostering student agency are well-documented and profound. It enhances educational engagement, directly contributing to improved academic achievements and bolstering both student and educator well-being.1 Furthermore, agency nurtures critical qualities such as self-efficacy (the belief in one's own capabilities), adaptability, intrinsic motivation, critical thinking, and creativity.1 These attributes are not merely advantageous for academic success; they are essential for personal growth, lifelong learning, navigating work-life changes, and developing meaningful careers, particularly in demanding fields like healthcare.1 Agency is not an inherent trait possessed solely by the student; rather, it is cultivated or constrained by the educational environment, its structures, and the relationships within it.2 It thrives where students have a voice and can influence their learning pathways.2

AI's Disruptive Entry

Into this landscape enters Artificial Intelligence (AI), a force rapidly reshaping nearly every sector of society, including education.5 Generative AI tools, in particular, have seen a dramatic surge in adoption, becoming deeply embedded in the daily lives and academic practices of students.7 Almost all students now report using AI in some form, a significant increase from previous years, with a large majority utilizing generative AI for assessment-related tasks.7 This technology holds transformative potential, promising to personalize learning, enhance content delivery, improve teaching methods, and provide scalable support.3

However, AI's integration into education presents a complex duality. While offering unprecedented opportunities, it simultaneously introduces significant challenges that intersect directly with student agency and ethical considerations.7 Concerns surrounding academic integrity, data privacy, algorithmic bias, the potential deskilling of critical faculties, and the erosion of student autonomy demand careful navigation.7 The very tools designed to empower students could inadvertently diminish their control and critical engagement if not implemented thoughtfully.

This manifesto seeks to address this critical juncture. Its purpose is to articulate a vision and chart a course for integrating AI into education in a manner that deliberately amplifies student agency, upholds fundamental ethical principles, and fosters a collaborative approach to evolving the educational experience. It advocates for a human-centered approach, ensuring that technology serves to enhance human capabilities and promote core values like social justice, equity, and human dignity, rather than undermining them.12

II. Understanding Student Agency and AI's Dual Role

Deepening the Definition of Student Agency

Student agency is a multifaceted construct that extends beyond simple choice-making.2 It represents an individual's capacity to act purposefully and effect change within their learning environment.4 Drawing from social cognitive theory, agency involves intentionality, forethought (planning and goal setting), self-reactiveness (monitoring and regulating actions), and self-reflectiveness (evaluating experiences and outcomes).2 It is fundamentally about personal influence – recognizing one's role in affecting outcomes and engaging in self-defined, meaningful actions, even within contextual constraints.2

Key dimensions operationalize this concept. Goal Setting involves students identifying achievable academic objectives.2 Initiative refers to taking proactive steps towards these goals.2 Self-Regulation encompasses monitoring progress and making necessary adjustments.2 Crucially, Self-Efficacy, the belief in one's capability to succeed, underpins the willingness to exercise agency.2 Agency flourishes in environments where meaningful participation is expected, allowing learners to influence classroom dynamics and even curricular directions.2 It is distinct from, though related to, self-regulated learning; agency places a stronger emphasis on the student's active role in enhancing the learning environment itself, contributing to knowledge development, or engaging in innovation.4 Ultimately, fostering agency involves cultivating a strong sense of ownership over the learning experience, empowering students to voice opinions and make informed choices.2

How AI Can Enhance Agency

When implemented thoughtfully, AI possesses the potential to significantly enhance student agency in several ways:

  1. Personalized Learning Pathways: AI-driven adaptive learning platforms can tailor educational content, pace, and style to individual student needs and interests.6 By analyzing performance data, AI can help create differentiated instruction, addressing diverse learning needs effectively.5 This personalization can bridge learning gaps 3 and give students greater control over the how, what, and when of their learning, aligning with the core tenets of agency.
  2. Access to Information and Support: AI tools, such as chatbots and generative models, can provide students with instant support, explain complex concepts, summarize lengthy texts, and suggest research avenues.3 This support is often available outside traditional study hours, empowering students to seek assistance independently and overcome obstacles without solely relying on instructor availability.7
  3. Development of New Skills: Engaging with AI tools necessitates and fosters the development of AI literacy – understanding how these tools work, their capabilities, and limitations.11 These skills, along with the critical thinking required to evaluate AI outputs, are increasingly seen as essential for future careers and societal participation.7 Learning with AI, therefore, can itself be an exercise in developing agency for navigating a technologically advanced world.17
  4. Tools for Self-Directed Learning: AI can power various tools that promote independent learning and cognitive organization. For instance, AI-assisted mind mapping can help students visualize and organize ideas, breaking down complex topics and making informed decisions about study focus.2 Similarly, AI can aid in creating flow charts for step-by-step problem-solving or fishbone diagrams for analyzing cause and effect, fostering autonomy and critical thinking.2 Tools like the Diamond 9 template, potentially enhanced by AI, encourage students to prioritize concepts, practice decision-making, and build confidence in setting their own learning goals.2

How AI Can Diminish Agency

Despite its potential benefits, AI integration also carries risks that could undermine student agency:

  1. Over-Reliance and Deskilling: A significant concern is that students might become overly dependent on AI for tasks requiring critical thought, analysis, and original creation.7 If AI is used merely to generate answers or complete assignments without deep engagement, it could hinder the development of essential cognitive skills and lead to a passive learning stance.11 The ease of obtaining AI-generated content might discourage the intellectual effort necessary for true learning and mastery.11
  2. "Black Box" Algorithms: Many AI systems operate opaquely, making it difficult for users to understand the reasoning behind their outputs or recommendations.15 This lack of transparency can erode trust and prevent students from critically evaluating AI suggestions, questioning potential biases, or understanding why a certain learning path is proposed. This opacity fundamentally limits a student's ability to exercise informed control over their AI-mediated learning experiences.26
  3. Bias and Inequity: AI systems learn from data, and if that data reflects existing societal biases (related to race, gender, socioeconomic status, language, etc.), the AI can perpetuate or even amplify these inequities.10 Biased algorithms in educational tools could lead to unfair assessments, disadvantage certain student groups in personalized learning pathways, or provide skewed information, thereby limiting opportunities and undermining the agency of affected students.18 Even AI detection tools have shown potential bias against non-native English speakers.18
  4. Surveillance and Reduced Privacy: The deployment of AI in education often involves collecting substantial amounts of student data, leading to increased monitoring and surveillance.3 When students feel their actions, explorations, and interests are constantly tracked, it can create a chilling effect, discouraging intellectual risk-taking, creative exploration, and the development of identity – all crucial aspects of exercising agency.3 The fear of judgment or negative consequences can inhibit autonomy and experimentation.3
  5. Homogenization of Thought: If students rely heavily on AI for brainstorming, outlining, or drafting without sufficient critical input or original development, their work may become less diverse and more reflective of the patterns in the AI's training data.25 This could stifle individual creativity, unique perspectives, and the development of a distinct intellectual voice, thereby diminishing a core aspect of personal agency.

The potential for AI to enhance learning through personalization is frequently highlighted.3 This tailoring of educational experiences seems inherently aligned with promoting agency by catering to individual needs. However, this promise carries inherent tension. The very algorithms driving personalization are trained on data that may embed societal biases.10 Consequently, adaptive systems might inadvertently steer students down pathways influenced by these biases, potentially limiting their exposure to diverse viewpoints or unfairly evaluating their potential based on flawed data proxies.18 Compounding this issue is the opacity of many AI systems.15 When students cannot understand why specific content is recommended or how their learning path is being shaped, their ability to exert genuine control is compromised. Thus, the mechanism intended to empower agency—personalization—could paradoxically constrain it if not implemented with rigorous attention to bias mitigation, transparency, and mechanisms for student input and control over the personalization process itself. True agency in this context requires not just receiving tailored content, but also understanding and influencing the tailoring mechanism.
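To make the called-for "rigorous attention to bias mitigation" less abstract, one simple audit check is demographic parity: comparing how often an adaptive system recommends an opportunity (say, an advanced track) across student groups. The sketch below is purely illustrative — the data, group names, and the 0.2 threshold are hypothetical, and real audits use multiple fairness metrics on actual system logs with human review:

```python
# Minimal illustration of one bias-audit check: demographic parity.
# All data and thresholds below are hypothetical; real audits combine
# several metrics, real recommendation logs, and human judgment.

def selection_rates(outcomes):
    """Fraction of positive outcomes (e.g., 'advanced track recommended') per group."""
    return {group: sum(vals) / len(vals) for group, vals in outcomes.items()}

def parity_gap(outcomes):
    """Largest difference in selection rate between any two groups."""
    rates = selection_rates(outcomes)
    return max(rates.values()) - min(rates.values())

# Hypothetical recommendation outcomes (1 = recommended) for two student groups.
outcomes = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 6/8 = 0.75
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],  # 3/8 = 0.375
}

gap = parity_gap(outcomes)
print(f"Demographic parity gap: {gap:.3f}")
if gap > 0.2:  # illustrative threshold only
    print("Flag for human review: recommendation rates differ sharply by group.")
```

A gap this large would not prove discrimination on its own, but it is exactly the kind of signal that should trigger the human oversight and recourse mechanisms discussed above.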

III. Navigating the Ethical Landscape: Student Concerns and Responsibilities

Mapping the Ethical Terrain

The integration of AI into education brings forth a complex web of ethical considerations that demand careful attention from all stakeholders. These challenges are not merely technical; they touch upon fundamental values of fairness, privacy, integrity, and equity. Key ethical areas include:

  • Academic Integrity vs. Misconduct: The capacity of generative AI to produce human-like text, solve problems, and even simulate test answers poses a significant challenge to traditional notions of authorship and academic honesty.7 Plagiarism and cheating are top concerns for educators and administrators 14, and a notable percentage of students admit to incorporating AI-generated text directly into assignments.7 Simultaneously, many students express uncertainty about acceptable AI use and fear wrongful accusations of misconduct.14 Establishing clear institutional guidelines on permissible use, emphasizing critical engagement over replacement of student effort, and mandating proper attribution and citation of AI tools are crucial steps.11
  • Data Privacy and Security: AI systems in education often require access to and analysis of vast quantities of sensitive student data, including academic records, personally identifiable information (PII), communications, and learning behaviors.26 This raises critical concerns about how data is collected, stored, used, and protected from unauthorized access or breaches.3 Ensuring compliance with data protection regulations like FERPA in the US and GDPR in Europe is paramount.10 Institutions must implement robust security measures and hold EdTech vendors accountable through strict contractual agreements that limit data use and ensure institutional control.3
  • Bias and Fairness: AI algorithms are susceptible to inheriting and amplifying biases present in their training data.10 This can lead to discriminatory outcomes in areas like personalized learning recommendations, automated grading, or even AI-driven admissions processes, potentially disadvantaging students based on demographic factors.15 The potential for AI detection tools to unfairly flag the work of non-native English speakers further highlights equity concerns.18 Addressing bias requires careful auditing of algorithms, diverse training data, and ongoing monitoring for unfair impacts.18
  • Transparency and Accountability: The opaque nature of many AI decision-making processes—often referred to as the "black box" problem—hinders trust and makes it difficult to identify errors, biases, or the rationale behind AI outputs.15 This lack of transparency complicates efforts to ensure fairness and assign accountability when AI systems produce flawed or harmful results.10 Promoting transparency involves demanding clearer explanations from developers about how systems work, their data sources, and limitations, and being open with students and educators about how AI is being used institutionally.3
  • Informed Consent: Ethical AI implementation requires that students, parents (especially in K-12), and educators are fully informed about the deployment of AI tools.10 This includes clear communication about what data is collected, how it will be used, the potential benefits and risks, and the purpose of the AI application. Meaningful consent should be obtained before implementing tools that process personal data, respecting individual autonomy and privacy concerns.10
  • Digital Divide and Equity: Disparities in access to reliable internet, personal learning devices, and advanced AI tools can exacerbate existing educational inequalities.3 Research indicates that wealthier students, male students, and those in STEM fields tend to use AI more frequently and enthusiastically.7 Ensuring equitable access to both the technology and the skills needed to use it effectively is a critical ethical imperative to prevent AI from widening achievement gaps.3
  • Accuracy and Misinformation: Generative AI models are known to produce outputs that sound plausible but are factually incorrect, biased, or nonsensical – phenomena often termed "hallucinations".7 Students relying on AI without critical evaluation risk incorporating misinformation into their work or developing flawed understandings. This underscores the need for educational approaches that emphasize critical thinking, source verification, and media literacy skills.11
  • Environmental Impact: The significant computational resources required to train and run large AI models raise concerns about their energy consumption and environmental footprint, an emerging ethical consideration in technology adoption.7
  • Human Interaction and Well-being: Concerns exist that an over-reliance on AI for tutoring or support might diminish the crucial role of human interaction in learning, potentially impacting the development of social skills and the quality of teacher-student relationships.26 However, some evidence suggests students may use AI tools to build confidence 32, indicating a complex relationship between AI use and student well-being.
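One of the privacy mitigations noted above, data minimization, can be made concrete: before student interaction data is sent to an external AI or analytics service, direct identifiers are stripped and only the fields the service actually needs are retained. The sketch below is a minimal illustration — the field names and allow-list are hypothetical, and a real deployment would follow FERPA/GDPR guidance and institutional policy:

```python
# Illustrative data-minimization step: keep only an explicit allow-list of
# non-identifying fields before a record leaves the institution.
# Field names are hypothetical examples, not a real schema.

ALLOWED_FIELDS = {"course_id", "timestamp", "interaction_type"}

def minimize(record):
    """Return a copy of the record containing only allowed, non-identifying fields."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

record = {
    "student_name": "Jane Doe",         # direct identifier: dropped
    "student_email": "jd@example.edu",  # direct identifier: dropped
    "course_id": "BIO-201",
    "timestamp": "2025-04-20T10:15:00",
    "interaction_type": "hint_request",
}

print(minimize(record))
```

The design choice here is an allow-list rather than a block-list: new identifying fields added later are excluded by default instead of leaking until someone notices.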

Student Perspectives: Fears, Expectations, and Needs

Understanding student attitudes and experiences is crucial for navigating the ethical landscape of AI in education. Recent surveys reveal a complex picture:

  • Widespread and Growing Use: AI adoption among students is no longer nascent; it is widespread and accelerating rapidly. A vast majority of students report using AI, often on a weekly basis, with generative AI being particularly prevalent for assessment-related tasks like explaining concepts, summarizing articles, and generating research ideas.7
  • Pragmatic Motivations: The primary reasons students turn to AI are pragmatic: saving time and improving the perceived quality of their work are the most cited benefits.7 Access to instant, personalized support, especially outside standard academic hours, is also a significant driver.7 Developing AI skills for future careers is another motivator, particularly among male students.7 Some students also report using AI to boost their confidence.32
  • Significant Concerns and Fears: Despite high usage, students harbor significant anxieties. The most prominent fear is being accused of cheating or academic misconduct by their institution.7 Concerns about receiving false or biased information ("hallucinations") from AI tools are also widespread.7 Data privacy and security are major worries 7, alongside institutional discouragement or outright bans on AI use.7 Ethical considerations, such as fairness to peers who don't use AI and the use of data for training models without consent, also factor in.7
  • Unmet Needs for Guidance and Skills: There is a strong consensus among students that possessing AI skills is essential for their future.7 However, a large gap exists between this perceived need and the support received. Only a minority of students feel their institution has provided adequate support or training to develop these skills.7 Many do not feel "AI ready" for an AI-enabled workplace.9 Furthermore, there's a disparity between students expecting institutions to provide AI tools and the current provision levels.7
  • Ambivalence and Mixed Feelings: Student attitudes towards AI are not monolithic. While appreciating the benefits, many worry about becoming over-reliant on the technology 9 and express reservations about its ethical implications, with significant proportions viewing AI use as "lazy" or akin to "cheating".23 Opinions on the fairness and desirability of AI-assessed exams are divided.7 This ambivalence reflects the ongoing societal and institutional struggle to define the appropriate role of AI in learning.

An examination of student concerns reveals a noteworthy pattern regarding academic integrity. The fear of being accused of misconduct (cited by 53% of students in one survey 7) appears more prevalent than the intrinsic belief that using AI is necessarily cheating (41% in another study 23) or lazy (50% 23). This discrepancy arises even as a non-trivial number of students (18%) admit to directly incorporating AI-generated text into their work 7, and institutions are indeed penalizing improper use.23 This situation points towards a significant zone of uncertainty. While most students agree their institution has an AI policy 7, the perception of these policies is inconsistent, with many feeling discouraged or banned rather than encouraged 8, and very few feeling fully aware of comprehensive guidelines.9 Students seem to be leveraging AI for its perceived advantages in efficiency and quality 7 but remain anxious about navigating poorly defined boundaries or falling foul of detection systems, which they largely believe are effective.7 This suggests the fear may stem less from deliberate dishonesty and more from ambiguity surrounding acceptable use and the perceived risks associated with detection and judgment. This underscores an urgent need for institutions to move beyond simply having policies towards developing clearer, more nuanced guidelines, ideally through collaborative processes involving students themselves 3, to delineate ethical boundaries effectively.

Furthermore, the data reveals a concerning equity dimension. Usage patterns and enthusiasm for AI are not uniform across student populations; disparities exist based on socioeconomic background, gender, and field of study, with wealthier students, males, and those in STEM fields generally showing higher engagement.7 International students also report higher usage rates.32 When juxtaposed with the widespread belief that AI skills are crucial for future success 7 and the commonly reported lack of institutional support for developing these competencies 7, a potential pathway for widening inequality emerges. If access to AI tools and, critically, the literacy to use them effectively and ethically, varies significantly across demographic groups, and if institutions fail to provide universal, equitable training, then AI integration risks creating a two-tiered system. Some students will gain vital future-ready skills, while others, potentially those already facing disadvantages, may be left behind.3 Addressing this requires proactive institutional strategies that encompass not only equitable access to technology but also inclusive AI literacy education for all students and staff.3

Table: Ethical Considerations in Educational AI

Academic Integrity
  • Description & Key Risks: AI enables easy generation of text/solutions, risking plagiarism, cheating, undermining learning & assessment validity.10
  • Student Concerns (Evidence): Fear of accusation (53% 7); belief AI use is lazy (50% 23) or cheating (41% 23); uncertainty about rules 14; 18% admit direct inclusion.7
  • Mitigation Strategies: Clear guidelines & policies on acceptable use/attribution 11; redesign assessments to focus on critical thinking/process 8; AI literacy education 11; transparency in use 11; use AI detection tools cautiously.18
  • Key Stakeholder Responsibilities: Student: uphold honesty, cite properly.14 Educator: set clear expectations, design robust assessments, teach ethical use.11 Institution: develop/communicate clear policies, support faculty.8 Developer: provide guidance on appropriate use.

Data Privacy & Security
  • Description & Key Risks: AI tools collect vast student data (PII, academic, behavioral) 26; risks of breaches, misuse, unauthorized access, non-compliance (FERPA/GDPR).3
  • Student Concerns (Evidence): Concern about data privacy (23% 7, high concern 9); fear of surveillance.3
  • Mitigation Strategies: Privacy-by-design 3; data minimization 3; strong encryption 3; clear data use policies & vendor contracts (FERPA/GDPR clauses, no training use, data control, deletion rights) 3; security audits (SOC 2 Type II) 28; transparency 10; user controls.28
  • Key Stakeholder Responsibilities: Student: be aware of data rights, use privacy settings. Educator: choose tools vetted for privacy, be mindful of data shared. Institution: implement strong data governance, vet/contract vendors rigorously, ensure compliance.3 Developer: adhere to privacy laws, secure data, be transparent.3

Bias & Fairness
  • Description & Key Risks: Algorithms trained on biased data perpetuate/amplify societal inequities (race, gender, SES, language) 10; leads to unfair outcomes in personalization, grading, assessment 15; detection tools may penalize non-native speakers.18
  • Student Concerns (Evidence): Concern about biased results (37% 7); worry about fairness of AI evaluations (60% 9).
  • Mitigation Strategies: Diverse training data; algorithm audits for bias 18; transparency in how decisions are made 10; human oversight of AI decisions 15; critical evaluation of tools before adoption 18; provide recourse mechanisms.17
  • Key Stakeholder Responsibilities: Student: critically evaluate AI outputs for bias. Educator: be aware of potential bias, choose tools carefully, use AI outputs critically.18 Institution: promote equity, vet tools for bias, monitor impacts.18 Developer: actively work to identify/mitigate bias in data/algorithms.10

Transparency & Accountability
  • Description & Key Risks: "Black box" nature of AI hinders understanding of decision-making 15; difficult to identify errors/bias or assign responsibility 10; erodes trust.10
  • Student Concerns (Evidence): Lack of trust in AI-generated content.9
  • Mitigation Strategies: Demand transparency from vendors 3; explainable AI (XAI) methods; clear institutional policies on AI use and oversight 3; open dialogue about AI use 10; documentation of AI use.10
  • Key Stakeholder Responsibilities: Student: ask questions about how AI tools work. Educator: seek understanding of tools used, communicate AI use to students.10 Institution: prioritize transparent systems, establish accountability frameworks.10 Developer: provide clear documentation, work towards explainability.10

Informed Consent
  • Description & Key Risks: Need for students, parents, and educators to understand how AI is used, what data is collected, and the risks involved before implementation.10
  • Student Concerns (Evidence): Parental worry about privacy invasion (80% 10).
  • Mitigation Strategies: Clear communication about AI tools, data practices, goals 10; obtain explicit consent where required by law/policy 10; provide opt-out options where feasible; regularly update policies/consent.10
  • Key Stakeholder Responsibilities: Student: understand terms of use. Educator: ensure students/parents are informed about classroom AI use.10 Institution: develop clear consent protocols, communicate transparently.10 Developer: provide clear information for consent processes.10

Digital Divide & Equity
  • Description & Key Risks: Unequal access to devices, internet, and AI tools exacerbates inequality 3; disparities in usage/comfort based on SES, gender, discipline.7
  • Student Concerns (Evidence): Concern about fairness to non-users (21% 7).
  • Mitigation Strategies: Ensure equitable access to tech/broadband 3; provide access to AI tools institutionally 7; offer universal AI literacy training 18; consider access needs in assignments 18; re-evaluate restrictive filters.3
  • Key Stakeholder Responsibilities: Student: be aware of equity issues. Educator: design inclusive activities, advocate for resources.18 Institution: implement equity-focused tech policies, provide resources/training.3 Policymaker: fund initiatives for equitable access/literacy.3

Accuracy & Misinformation
  • Description & Key Risks: AI can generate false/biased info ("hallucinations").7
  • Student Concerns (Evidence): Concern about getting false results (51% 7).
  • Mitigation Strategies: Teach critical evaluation skills 11; emphasize source verification 14; use AI as a starting point, not final source 14; fact-checking practices.
  • Key Stakeholder Responsibilities: Student: verify AI outputs, think critically.25 Educator: teach verification skills, model critical use.11 Institution: promote information literacy. Developer: improve model factuality, indicate confidence levels.

Environmental Impact
  • Description & Key Risks: Training/running large AI models consumes significant energy.7
  • Student Concerns (Evidence): Concern about environmental impact (15% 7).
  • Mitigation Strategies: Choose energy-efficient models/providers where possible; optimize AI use; raise awareness.18
  • Key Stakeholder Responsibilities: Student: be mindful of usage. Educator: discuss environmental impact.18 Institution: consider sustainability in procurement.18 Developer: research/implement energy-efficient AI.

Human Interaction & Well-being
  • Description & Key Risks: Over-reliance on AI may erode teacher-student relationships and social skills 26; potential impact on student confidence (positive 32 or negative).
  • Student Concerns (Evidence): Some use AI for confidence (25% 32).
  • Mitigation Strategies: Balance AI use with human interaction; use AI to free up teacher time for meaningful engagement 11; foster supportive classroom culture 33; monitor student well-being.
  • Key Stakeholder Responsibilities: Student: seek human connection, manage AI reliance. Educator: prioritize relationships, use AI thoughtfully.12 Institution: promote holistic development, support well-being initiatives.33 Developer: design tools that support, not replace, human connection.

IV. Forging the Educational Alliance: Collaboration as the Cornerstone

The Necessity of Trust and Partnership

Successfully navigating the complexities of AI in education, particularly in ways that enhance rather than hinder student agency, necessitates a fundamental shift towards partnership.1 Affording students genuine agency requires moving away from traditional hierarchical models towards an 'educational alliance' built on trust, mutual respect, and bidirectional communication.1 This involves recognizing students as experts in their own experiences and priorities, while educators contribute their disciplinary expertise.1 Such an alliance is characterized by shared goals, agreement on tasks (including how AI is used), and a strong relational bond between students and educators.1

Trust is the bedrock of this partnership. As educational institutions increasingly adopt AI and other technologies, cultivating trust becomes paramount.3 Protecting student privacy, for instance, is not merely a compliance issue; it is foundational to building the trust necessary for students and their families to embrace technology as a genuine learning tool rather than a source of anxiety or risk.3 Transparency in how AI systems work, how data is used, and how decisions are made is crucial for fostering this trust.10 Without trust, collaborative efforts are unlikely to succeed, and the potential benefits of AI may remain unrealized or be overshadowed by suspicion and resistance.

Models for Collaboration

Building this educational alliance requires concrete structures and practices that embed collaboration into the fabric of institutional operations regarding AI:

  1. Co-designing Policies: A critical starting point is the collaborative development of policies governing AI use and data privacy. Processes should be established where students, educators, parents/families (especially in K-12), and administrators work together to create, review, and adapt these policies.3 This ensures policies are grounded in real-world classroom needs, are clear and understandable, remain adaptable to evolving technology, and reflect shared values.3 Student-led workshops can be an effective way to introduce and discuss these co-created policies.3 Case studies, such as schools forming dedicated student AI task forces, demonstrate the value of incorporating the student voice directly into policy formation.27
  2. Bidirectional Feedback Mechanisms: Collaboration extends into the classroom through normalized bidirectional feedback loops.1 Students should feel empowered to provide feedback on the effectiveness and ethical implications of AI tools used in their learning, and educators should be open to receiving and acting on this input. This requires actively working to make power dynamics transparent and creating a safe environment for open dialogue.1
  3. Shared Governance and Oversight: Institutions should consider establishing formal structures, such as AI ethics committees or technology task forces, that include meaningful student representation.27 These bodies can play a role in overseeing AI integration strategies, reviewing ethical concerns, guiding procurement decisions, and ensuring policies remain relevant. Collaboration should also extend outwards, encouraging EdTech developers to include students and educators on their design teams to create tools that genuinely meet user needs and foster trust.3
  4. Collaborative Learning Environments: The classroom itself can become a site of collaboration involving teachers, students, and AI.12 In this model, teachers shift towards roles as coaches and facilitators, guiding students in using AI tools effectively and ethically, encouraging peer review, and integrating reflection on the learning process.12 AI can provide data insights to help teachers offer tailored support, fostering a more responsive and collaborative learning dynamic.12

Ensuring Equity and Inclusion in Collaboration

For collaboration to be meaningful and effective, it must be equitable and inclusive. This requires deliberate effort:

  • Diverse Representation: Actively seek out and include a diverse range of student voices in all collaborative processes. Representation should span different demographic backgrounds, academic disciplines, levels of prior AI experience and comfort, and socioeconomic statuses.31 Utilizing engagement methods designed to mitigate bias and support multi-language participation can help ensure all voices are heard.31
  • Addressing Access Barriers: The collaborative agenda must explicitly address the digital divide.3 Discussions about AI tools and policies must consider potential access barriers for marginalized groups. When designing collaborative activities or assignments involving AI, educators and institutions must ensure that requirements do not inadvertently disadvantage students lacking necessary devices, reliable internet, or access to specific software.18 Providing access through campus resources like device loans or computer labs should be considered.18
  • Cultural Responsiveness: AI integration strategies and policy development should be culturally responsive, acknowledging that perspectives on technology, ethics, and learning may vary across different cultural contexts.33 The potential impacts of AI need to be considered within diverse societal frameworks.24

The development of AI policies within educational institutions highlights a potential disconnect between creation and implementation. While a large majority of students acknowledge that their institution has an AI policy 7, their experiences often reflect confusion or discouragement rather than clear guidance and support: significantly more students feel discouraged or banned from using AI than feel encouraged.8 Despite the recognized importance of AI skills, most students report receiving inadequate institutional support to develop them 7, and very few feel fully aware of comprehensive guidelines governing AI use.9 The mere existence of a policy document is therefore insufficient. The gap between policy on paper and effective practice likely stems from several factors: policies developed without sufficient input from the students and faculty who must navigate them daily, leading to misalignment with practical needs 3; inadequate communication and training, resulting in confusion, anxiety, and inconsistent application 7; and a failure to allocate the resources needed to support ethical AI adoption and widespread literacy development. Effective AI governance therefore requires more than a top-down mandate; it demands a collaborative ecosystem of co-creation, clear communication, robust training, and ongoing dialogue to bridge the gap between policy intent and lived experience.

V. Principles for a Student-Centric AI Future (The Manifesto Core)

To guide the integration of AI in education towards outcomes that empower students and uphold ethical standards, the following core principles should form the foundation of policy and practice:

Principle 1: Prioritize Human Agency and Critical Thinking.
AI's role in education must be fundamentally supportive, designed to augment human intellect, creativity, and decision-making, not to replace or usurp them.16 Educational goals and practices must continue to prioritize the development of uniquely human capacities such as critical thinking, complex problem-solving, ethical reasoning, creativity, and collaboration – skills that AI currently cannot replicate.2 Assessment strategies must evolve beyond rote memorization or tasks easily automated by AI, focusing instead on evaluating higher-order thinking and application of knowledge.8 Crucially, educational environments should actively foster student ownership and voice in their learning journey, empowering them to be active agents in their education.1
Principle 2: Uphold Ethical Standards and Demand Transparency.
A non-negotiable commitment to ethical AI use must underpin all integration efforts. This includes proactively addressing concerns related to fairness, accountability, data privacy, information security, and inclusivity.10 Institutions and educators must demand transparency from AI developers regarding algorithmic processes, data sources, potential biases, and limitations.3 Implementing privacy-by-design and ethics-by-design methodologies in tool selection and deployment is essential.3 Clear, accessible, and consistently enforced guidelines for the ethical use of AI, including proper attribution and citation, must be established and widely communicated.11
Principle 3: Ensure Equitable Access and Support.
The potential benefits of AI in education must not be allowed to exacerbate existing inequalities. Institutions must actively work to bridge the digital divide by ensuring all students have reliable access to the necessary digital infrastructure, personal learning devices, and institutionally supported AI tools.3 Beyond access to tools, robust, ongoing support and training programs are needed to develop AI literacy for all members of the educational community – students, faculty, and staff – paying particular attention to addressing disparities in prior experience, comfort levels, and usage patterns across different demographic groups.7 Accessibility for students with disabilities 29 and considerations for students in diverse global contexts 18 must also be integral to equity efforts.
Principle 4: Foster AI Literacy for Responsible Engagement.
Meaningful engagement with AI requires more than just technical proficiency; it demands critical AI literacy. This literacy should be integrated across the curriculum, not siloed into specialized courses.3 Students must be equipped with the knowledge and skills to understand how AI systems work (at an appropriate level), critically evaluate AI-generated information for accuracy and bias, recognize ethical implications, protect their personal data, and utilize AI tools responsibly, ethically, and effectively as aids to their own learning and creation.11
Principle 5: Champion Collaborative Development and Oversight.
The development, implementation, and ongoing governance of AI in education should be a shared endeavor. Institutions must establish and sustain participatory structures that facilitate ongoing collaboration among students, educators, administrators, technical staff, families, and potentially AI developers.1 Policies related to AI should not be static pronouncements but rather living documents, continuously informed and adapted through feedback loops and shared decision-making processes.3 This collaborative approach fosters ownership, ensures relevance, and builds the trust necessary for successful integration.
Principle 6: Commit to Continuous Dialogue, Research, and Adaptation.
The field of AI is characterized by rapid evolution, and its long-term impacts on education are still unfolding.3 Institutions must therefore commit to fostering continuous dialogue, critical reflection, and rigorous research to understand the effects of AI on teaching, learning, student agency, equity, and well-being. This includes addressing identified research gaps, such as the effective use of learning analytics to support agency 4, and supporting the development of robust AIED policy frameworks.37 A culture of inquiry and adaptation is essential, enabling institutions to proactively adjust policies, pedagogical practices, and curricula in response to new evidence, technological advancements, and emerging ethical challenges.3
While deploying AI tools and crafting policies are necessary steps, a truly student-centric approach must also consider the affective and motivational dimensions of learning. Research highlights the critical role of a student's mindset—particularly a growth mindset, the belief that abilities can be developed—along with related attributes like resilience, grit, and a sense of belonging, in fostering academic engagement and success.1 Interventions designed to cultivate these qualities have shown promise in improving outcomes, although their effectiveness can vary depending on the context and student population.38 This is particularly relevant given that student engagement sometimes follows a downward trajectory, especially during adolescence.34 Integrating AI requires considering its potential interaction with these factors. Could AI inadvertently exacerbate disengagement for some students, or could it be ethically leveraged as part of broader interventions? For example, could AI-powered personalized support, informed by learning analytics 4, carefully identify students potentially struggling or losing motivation, enabling timely, human-centered support before they enter a negative cycle?34 Such approaches must be pursued with extreme caution, prioritizing ethical considerations, data privacy, and awareness of the inconsistent efficacy of purely technological interventions.38 This suggests that optimizing AI's role involves embedding it within a holistic strategy focused on pedagogy, student well-being, and fostering the underlying psychological factors, like growth mindset and self-efficacy 2, that enable students to exercise agency effectively.
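The early-warning idea raised above can be made concrete with a small illustration. This is a hypothetical sketch, not a reference to any real learning-analytics product: the `WeeklyEngagement` fields and the thresholds in `flag_for_outreach` are invented placeholders, and real systems would use far richer (and far more carefully governed) data. The key design choice it models is that the output is a prompt for human outreach, never an automated intervention.

```python
from dataclasses import dataclass


@dataclass
class WeeklyEngagement:
    """Hypothetical, minimal engagement signals for one student-week.

    Real learning-analytics pipelines would use richer data, collected
    and retained under the privacy-first governance discussed above.
    """
    student_id: str
    logins: int
    assignments_submitted: int
    assignments_due: int


def flag_for_outreach(records: list[WeeklyEngagement],
                      min_logins: int = 2) -> list[str]:
    """Flag students whose activity suggests they may need a check-in.

    Thresholds are illustrative placeholders. A flag here should trigger
    a timely, human-centered conversation (advisor, instructor), not any
    automated action, consistent with the cautions in the text.
    """
    flagged = []
    for r in records:
        missed = r.assignments_due - r.assignments_submitted
        if r.logins < min_logins or missed >= 2:
            flagged.append(r.student_id)
    return flagged


# Example week: s002 logged in only once, so a human check-in is suggested.
week = [
    WeeklyEngagement("s001", logins=5, assignments_submitted=3, assignments_due=3),
    WeeklyEngagement("s002", logins=1, assignments_submitted=1, assignments_due=3),
]
print(flag_for_outreach(week))  # -> ['s002']
```

Even a toy rule like this surfaces the ethical questions the text raises: who sees the flags, how long the underlying data is kept, and whether the thresholds themselves encode bias.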

VI. Call to Action: Building the Future, Together

Translating these principles into practice requires concerted effort from all stakeholders within the educational ecosystem. The following recommendations outline key actions for students, educators, institutions, developers, and policymakers:

Recommendations for Students:

  • Engage Critically: Use AI tools as aids, not replacements for your own thinking. Actively question, verify, and evaluate AI outputs for accuracy, bias, and relevance.25 Avoid over-reliance and develop your own understanding.14
  • Cultivate AI Literacy: Take initiative to understand the basics of how AI works, its capabilities, limitations, and the ethical issues involved.11 Seek out learning opportunities provided by your institution or independently.
  • Uphold Academic Integrity: Familiarize yourself with and adhere to institutional policies regarding acceptable AI use in coursework. Cite AI assistance appropriately and ensure your submitted work reflects your own effort and learning.11 Consider keeping a log of AI use for transparency.14
  • Advocate and Participate: Engage in discussions about AI on campus. Voice your concerns regarding ethics, privacy, bias, and equity. Participate in student task forces or feedback sessions related to AI policy and implementation.3 Understand and assert your data privacy rights.
  • Practice Self-Reflection: Regularly reflect on how you are using AI tools and how they are impacting your learning process, critical thinking skills, and overall educational experience.12

Recommendations for Educators:

  • Develop AI Literacy: Invest time in understanding AI capabilities, limitations, and ethical implications relevant to your discipline. Model responsible, critical, and ethical AI use for your students.7 Seek out and participate in professional development opportunities.8
  • Foster Student Agency: Intentionally design learning experiences and assessments that promote student autonomy, critical thinking, and ownership.1 Create opportunities for self-directed learning and normalize bidirectional feedback conversations about learning processes, including AI use.1
  • Teach Critical AI Literacy: Explicitly integrate discussions and activities related to AI literacy, ethics, bias detection, and responsible use within your subject matter.11 Facilitate open conversations about the ethical dimensions of AI.10
  • Establish Clear Classroom Guidelines: Communicate clear expectations for AI use in assignments and assessments, aligning with institutional policies but providing specific context for your course.11 Emphasize the value of originality, process, and critical engagement over AI-generated outputs.11
  • Evaluate Tools Critically: Before integrating any AI tool, carefully evaluate its pedagogical suitability, potential biases, data privacy implications, and accessibility for all students.18 Prioritize tools that support learning goals ethically and effectively.
  • Balance Technology and Human Connection: Leverage AI to enhance teaching practice (e.g., automating routine tasks, providing personalized feedback scaffolds) but ensure it complements, rather than replaces, meaningful human interaction, coaching, and mentorship.5

Recommendations for Institutions (Universities, K-12 Districts):

  • Collaborative Policy Development: Develop, implement, and regularly review clear, nuanced, and adaptable AI policies through collaborative processes involving students, faculty, staff, administrators, and families/community members.3 Ensure policies focus on responsible use and learning, not just restriction.8
  • Invest in Universal AI Literacy: Commit significant resources to comprehensive AI literacy training and ongoing professional development for all students, faculty, and staff.7 Ensure these programs address ethical considerations and critical evaluation skills.
  • Prioritize Equity and Access: Implement strategies to ensure equitable access to AI tools, digital devices, and reliable internet connectivity for all students.3 Actively monitor and mitigate potential biases in AI systems and policies.18 Re-evaluate overly restrictive content filtering or monitoring policies that may disproportionately limit educational opportunities or student agency.3
  • Establish Robust Data Governance: Adopt a "privacy-first" approach to procuring and implementing AI tools.3 Enforce stringent vendor contracts that ensure FERPA/GDPR compliance, clearly define data usage limitations (prohibiting use for model training unless explicitly agreed upon), mandate strong security measures (encryption, MFA), grant institutions control over data, specify data deletion protocols, and require regular security audits (e.g., SOC 2 Type II certification).3
  • Cultivate an Ethical Culture: Foster an institutional culture that encourages open dialogue, critical reflection, and ethical deliberation regarding AI's role in education.1 Support research and evaluation of AI's impact on learning and equity.4
  • Adapt Assessment Practices: Continuously review and adapt assessment methods to maintain academic integrity while acknowledging the reality of AI tools. Focus on assessing higher-order thinking, process, and application.8 Stress-test assessments against AI capabilities.8
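The data-governance contract terms listed above lend themselves to a procurement checklist. The sketch below is one possible way to encode such a checklist in Python; the field names are our own shorthand for the criteria in the text (FERPA/GDPR compliance, no model training on student data, encryption, MFA, institutional data control, deletion protocols, SOC 2 Type II audit), not part of any real procurement standard.

```python
from dataclasses import dataclass, fields


@dataclass
class VendorAssessment:
    """Illustrative checklist mirroring the contract terms in the text.

    Field names are hypothetical shorthand; adapt them to your
    institution's actual legal and security requirements.
    """
    ferpa_gdpr_compliant: bool = False          # compliance attested in contract
    no_training_on_student_data: bool = False   # data not used for model training
    encryption_in_transit_and_at_rest: bool = False
    mfa_required: bool = False
    institution_controls_data: bool = False     # institution retains data control
    deletion_protocol_defined: bool = False     # data deletion terms specified
    soc2_type2_audited: bool = False            # recent SOC 2 Type II report on file


def unmet_requirements(a: VendorAssessment) -> list[str]:
    """Return the names of any contract terms the vendor has not met."""
    return [f.name for f in fields(a) if not getattr(a, f.name)]


# Example: a vendor that satisfies everything except data deletion terms.
vendor = VendorAssessment(
    ferpa_gdpr_compliant=True,
    no_training_on_student_data=True,
    encryption_in_transit_and_at_rest=True,
    mfa_required=True,
    institution_controls_data=True,
    deletion_protocol_defined=False,
    soc2_type2_audited=True,
)
print(unmet_requirements(vendor))  # -> ['deletion_protocol_defined']
```

Treating the checklist as all-or-nothing reflects the text's framing: each of these terms is a contractual requirement, not a nice-to-have to be traded off.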

Recommendations for Developers & Policymakers:

  • Developers: Embrace "privacy-by-design" and "ethics-by-design" principles in tool development.3 Minimize data collection, be transparent about data usage policies and algorithmic functioning, and invest in robust methods to identify and mitigate bias.3 Actively involve educators and students in the design and testing process.3 Provide clear documentation regarding intended use, limitations, and ethical considerations.
  • Policymakers (Government, International Organizations): Develop comprehensive national and international AI education strategies that prioritize human-centered values, ethics, equity, and agency.16 Strengthen regulations concerning student data privacy and vendor accountability in the EdTech sector.3 Fund independent research into AI's educational impacts, ethical challenges, and effective pedagogical integration.4 Promote initiatives aimed at ensuring equitable access to technology and universal AI literacy.3 Strive for regulatory frameworks that balance fostering innovation with safeguarding human rights, privacy, and student well-being.13 Support the creation and adoption of AI competency frameworks for both students and teachers.16 Recognize that AI is not a panacea for systemic educational challenges like funding gaps or teacher shortages, which require sustained policy attention and investment.16

By embracing these principles and undertaking these actions collaboratively, the educational community can navigate the complexities of the AI era, harnessing its potential while safeguarding the core values of human agency, ethical responsibility, and equitable opportunity for all learners.

Works cited

  1. Twelve tips to afford students agency in programmatic assessment - Taylor & Francis Online, accessed April 25, 2025, https://www.tandfonline.com/doi/full/10.1080/0142159X.2025.2459362?src=
  2. The Role of Student Agency in Fostering Lifelong Learners - Structural Learning, accessed April 25, 2025, https://www.structural-learning.com/post/student-agency
  3. Empowering Student Agency in the Digital Age: The Role of Privacy in EdTech, accessed April 25, 2025, https://www.newamerica.org/education-policy/briefs/empowering-student-agency-in-the-digital-age-the-role-of-privacy-in-edtech/
  4. Learning Analytics in Supporting Student Agency: A Systematic Review - MDPI, accessed April 25, 2025, https://www.mdpi.com/2071-1050/15/18/13662
  5. Promoting Agency Among Upper Elementary School Teachers and Students with an Artificial Intelligence Machine Learning System to Score Performance-Based Science Assessments - MDPI, accessed April 25, 2025, https://www.mdpi.com/2227-7102/15/1/54
  6. Key Drivers of Artificial Intelligence Influencing Student Retention in UAE HE - Biomedical Journal of Scientific & Technical Research, accessed April 25, 2025, https://biomedres.us/pdfs/BJSTR.MS.ID.009246.pdf
  7. Student Generative AI Survey 2025 | HEPI, accessed April 25, 2025, https://www.hepi.ac.uk/wp-content/uploads/2025/02/HEPI-Kortext-Student-Generative-AI-Survey-2025.pdf
  8. Student Generative AI Survey - AI Pioneers, accessed April 25, 2025, https://aipioneers.org/student-generative-ai-survey/
  9. What Students Want: Key Results from DEC Global AI Student Survey 2024, accessed April 25, 2025, https://www.digitaleducationcouncil.com/post/what-students-want-key-results-from-dec-global-ai-student-survey-2024
  10. Generative AI in Education: The Impact, Ethical Considerations, and Use Cases - Litslink, accessed April 25, 2025, https://litslink.com/blog/generative-ai-in-education-the-impact-ethical-considerations-and-use-cases
  11. How to Embrace AI Without Compromising Academic Integrity - QuadC, accessed April 25, 2025, https://www.quadc.io/blog/how-to-embrace-ai-without-compromising-academic-integrity
  12. A Framework for Human-Centric AI-First Teaching - AACSB, accessed April 25, 2025, https://www.aacsb.edu/insights/articles/2025/02/a-framework-for-human-centric-ai-first-teaching
  13. A/79/520 General Assembly - Right to Education Initiative, accessed April 25, 2025, https://www.right-to-education.org/sites/right-to-education.org/files/resource-attachments/UNSR_Report%20on%20Artificial%20intelligence%20in%20education_2024_en.pdf
  14. Top 3 ethical considerations in using AI in education: is my data safe? - Aristek Systems, accessed April 25, 2025, https://aristeksystems.com/blog/top-3-ethical-considerations-in-using-ai-in-education-is-my-data-safe/
  15. Artificial Intelligence: Ethical Considerations In Academia - MDPI Blog, accessed April 25, 2025, https://blog.mdpi.com/2024/02/01/ethical-considerations-artificial-intelligence/
  16. What you need to know about UNESCO's new AI competency frameworks for students and teachers, accessed April 25, 2025, https://www.unesco.org/en/articles/what-you-need-know-about-unescos-new-ai-competency-frameworks-students-and-teachers
  17. Augmented Education in the Global Age; Artificial Intelligence and the Future of Learning and Work - UCL Discovery, accessed April 25, 2025, https://discovery.ucl.ac.uk/id/eprint/10168356/2/Holmes_10.4324_9781003230762-11_chapterpdf.pdf
  18. Striking a Balance: Navigating the Ethical Dilemmas of AI in Higher Education, accessed April 25, 2025, https://er.educause.edu/articles/2024/12/striking-a-balance-navigating-the-ethical-dilemmas-of-ai-in-higher-education
  19. Guidance for generative AI in education and research - UNESCO Digital Library, accessed April 25, 2025, https://unesdoc.unesco.org/ark:/48223/pf0000386693
  20. Artificial intelligence in education | UNESCO, accessed April 25, 2025, https://www.unesco.org/en/digital-education/artificial-intelligence
  21. Introducing Claude for education - Anthropic, accessed April 25, 2025, https://www.anthropic.com/news/introducing-claude-for-education
  22. Assessing Computer Science Student Attitudes Towards AI Ethics and Policy - arXiv, accessed April 25, 2025, https://arxiv.org/html/2504.06296
  23. How Students Use AI: Currys Study Reveals Top EdTech Trends and Ethical Concerns, accessed April 25, 2025, https://www.edtechinnovationhub.com/news/currys-study-reveals-student-use-of-ai-in-education-and-divided-opinions-on-its-ethics
  24. Investigation of the Long-Term Impacts of AI Education on Business Processes: A Case Study of Norway - Bergen Open Research Archive, accessed April 25, 2025, https://bora.uib.no/bora-xmlui/bitstream/handle/11250/3187884/127600224.pdf?sequence=1&isAllowed=y
  25. Do Your Students Know How to Analyze a Case with AI—and Still Learn the Right Skills?, accessed April 25, 2025, https://hbsp.harvard.edu/inspiring-minds/framework-analyze-cases-using-ai-enhance-decision-making-skills
  26. Artificial Intelligence (AI) in Education: AI and Ethics - Research Guides, accessed April 25, 2025, https://guides.lib.jmu.edu/AI-in-education/ethics
  27. AI Policy for Schools — Library of Examples - Flint - AI, accessed April 25, 2025, https://www.flintk12.com/ai-policies
  28. AI, Student Data, and FERPA Compliance: Why Element451 is the Trusted Choice for Higher Education, accessed April 25, 2025, https://element451.com/blog/ai-student-data-and-ferpa-compliance
  29. Principles | AI Guidance for Schools Toolkit - TeachAI, accessed April 25, 2025, https://www.teachai.org/toolkit-principles
  30. Demystifying AI: A Guide to AI Literacy in Higher Education Enrollment Management, accessed April 25, 2025, https://www.liaisonedu.com/resources/blog/demystifying-ai-a-guide-to-ai-literacy-in-higher-education-enrollment-management/
  31. Navigating Responsible AI in Education - ACSA Resource Hub, accessed April 25, 2025, https://content.acsa.org/navigating-responsible-ai-in-education/
  32. How can evolving student attitudes inform institutional Gen-AI initiatives? - HEPI, accessed April 25, 2025, https://www.hepi.ac.uk/2025/03/13/how-can-evolving-student-attitudes-inform-institutional-gen-ai-initiatives/
  33. Final Report of the Ad Hoc Committee to Plan Next Steps to Redesign Entry-Level Education for Speech-Language Pathologists - ASHA, accessed April 25, 2025, https://www.asha.org/siteassets/reports/ahc-next-steps-to-redesign-entry-level-education-for-slps.pdf
  34. The Relationship Between Study Engagement and Critical Thinking Among Higher Vocational College Students in China: A Longitudinal Study - Taylor & Francis Online, accessed April 25, 2025, https://www.tandfonline.com/doi/full/10.2147/PRBM.S386780
  35. Artificial Intelligence for human-centric society: The future is here - The European Liberal Forum, accessed April 25, 2025, https://liberalforum.eu/wp-content/uploads/2023/12/Artificial-Intelligence-for-human-centric-society.pdf
  36. K-12 AI curricula: a mapping of government-endorsed AI curricula - UNESCO Digital Library, accessed April 25, 2025, https://unesdoc.unesco.org/ark:/48223/pf0000380602
  37. Artificial Intelligence in Education (AIED) Policies in School Context: A Mixed Approach Research - Taylor & Francis Online, accessed April 25, 2025, https://www.tandfonline.com/doi/full/10.1080/15700763.2024.2443675?src=
  38. Brief Mindset Intervention Changes Attitudes but Does Not Improve Working Memory Capacity or Standardized Test Performance - MDPI, accessed April 25, 2025, https://www.mdpi.com/2227-7102/14/3/227
  39. The Meaning of School Program: A Before-After Controlled Study Enhancing Growth Mindset in Priority Education Schools - ResearchGate, accessed April 25, 2025, https://www.researchgate.net/publication/377434315_The_Meaning_of_School_Program_A_Before-After_Controlled_Study_Enhancing_Growth_Mindset_in_Priority_Education_Schools
  40. AI and education: guidance for policy-makers - UNESCO Digital Library, accessed April 25, 2025, https://unesdoc.unesco.org/ark:/48223/pf0000376709