Ten Considerations for Generative Artificial Intelligence Adoption in Irish Higher Education

The following ‘ten considerations for generative AI adoption’ outline key issues identified through dialogue with stakeholders across Ireland’s higher education sector, industry, and associated agencies between September and December 2024. They serve as a starting point for academics, researchers, support staff, students, and senior leaders in shaping institutional policies and practices around generative AI.

These considerations will be further refined through stakeholder focus groups, case studies, and research throughout 2025, culminating in a national report with policy guidance and recommendations. A feedback form is available at the end of this page, and we encourage stakeholders to share insights, experiences, and relevant research to inform the next phase of this initiative.


1. AI Literacy

Providing AI literacy training is essential to ensure that all staff and students are equipped with the skills and knowledge they need to use AI responsibly, effectively, and in alignment with disciplinary values, regardless of technological, disciplinary, or socio-cultural context. Generative AI is not one single tool but a collection of technologies, each with its own strengths, limitations, and potential impacts on teaching, learning, and research. Developing clear frameworks and taxonomies can support staff and students in preparing for AI-enhanced education and work placements in ways that respect disciplinary values.


2. Allowable AI

The line between acceptable and unacceptable uses of AI, particularly in educational settings, becomes increasingly complex as AI becomes ubiquitous across systems and platforms. As AI functionalities, including generative capabilities, are embedded into tools such as word processors, learning management systems, and research databases, the challenge is to discern where the use of AI enhances learning and where it undermines ethical standards, such as by compromising academic integrity. Several factors can help in drawing this line:

Purpose: Is the AI being used to enhance the student’s own learning process (e.g., by providing feedback or suggestions) or to replace the student’s role in creating the essential content?

Transparency: Is the use of AI disclosed? Students should be encouraged to be transparent about when and how they use AI tools, enabling educators to assess the student’s engagement with the material.

Extent: To what extent is AI contributing to the final product? Small grammatical or stylistic improvements might be allowable, but when AI starts to produce large portions of content or ideas, the line between assistance and authorship becomes blurred.

Disciplinary norms: Different disciplines may have different expectations. Fields like computer science might view AI-generated code differently from how fields like literary studies or philosophy view AI-generated essays or arguments.


3. Academic Integrity

Clear discipline-specific guidelines should aim to distinguish appropriate uses of AI from academic dishonesty, recognising the distinct values and methodologies that underpin different academic fields and practices. Assessment practices may need to be redesigned to prioritise critical engagement, originality of thought, and the ability to interpret and evaluate AI-generated outputs. Such measures help ensure that students learn not only to produce work, but also to understand and critique the tools they use.


4. Critical AI

Generative AI can unintentionally perpetuate biases related to race, gender, class, and other inequalities. Evaluating AI outputs and encouraging staff and students to approach generative AI critically can foster awareness of the limitations and potential harms embedded in the technology. Tools and frameworks for detecting bias, as well as coursework that fosters critical digital literacy, can help students navigate AI-generated materials with nuance and social awareness.


5. Second-degree Plagiarism

Generative AI models are trained on vast and often undisclosed datasets, which introduces complex concerns regarding second-degree plagiarism. Using AI tools with limited transparency could risk violating intellectual property rights. Institutions should take care to ensure that both staff and students are aware of these challenges in the context of professional and moral standards.


6. Privacy

The use of generative AI tools raises privacy concerns, particularly where staff and student data is processed by third-party service providers. Clear privacy policies can ensure that staff and students are fully informed about how their data will be used. Transparency and accountability in the handling of staff and student data are essential to maintaining trust in AI integration.


7. Equitable Access

Generative AI tools hold great potential for enhancing learning but may not be universally available or accessible to all students. Access often depends on a reliable internet connection, modern devices, and sometimes premium subscriptions, which can create barriers for those from lower socioeconomic backgrounds or those in regions with limited technological infrastructure. A concerted effort is needed to ensure that all staff and students can benefit equally from AI tools.


8. Sustainability

AI technology has a significant environmental impact, requiring vast amounts of energy and frequent hardware upgrades. Incorporating sustainability benchmarks into vendor selection, favouring energy-efficient technologies, and promoting responsible resource use can help align AI adoption with broader institutional commitments to environmental stewardship.


9. AI Sovereignty

Commercial generative AI tools, while powerful and intuitive, may present challenges related to data transparency, ethical use, and intellectual property, often stemming from undisclosed training datasets, training methods, or operational mechanisms. This creates potential challenges for privacy, bias, second-degree plagiarism, academic integrity, and the appropriateness of AI-generated content in academic contexts. Localised, shared, or centralised AI models, supported at institutional or national level, may allow for fuller transparency around training data, data privacy, and intended use cases.


10. Enhancement

Generative AI holds considerable potential for enhancing teaching and learning in higher education. A possible framework approach might revolve around five key pillars: Personalisation, Engagement, Efficiency, Equity, and Sustainability. These pillars address the need for adaptable, fair, and transparent use of AI, aligning with the broader goals of higher education in Ireland.

Personalisation of Learning Experiences: Generative AI can enable the creation of highly personalised learning environments that respond to the unique needs of each student. By leveraging AI-powered tools that adapt to individual learning styles, educators can offer tailored instructional materials, assessments, and feedback.

Engagement through Interactive and Creative Content: Generative AI can foster greater student engagement by generating dynamic and creative educational content. AI-generated simulations, visualisations, or interactive problem-solving tasks can provide new ways for students to engage with material beyond traditional lectures and textbooks.

Efficiency in Teaching and Administrative Workflows: Generative AI can reduce the administrative burden on staff by automating the creation of instructional materials, assessments, and grading processes. This allows staff to spend more time on qualitative interactions with students, such as mentoring and personalised feedback.

Equity in Access and Assessment: Generative AI offers the potential to create more equitable learning environments by providing universal access to educational resources and minimising bias in assessment.

Sustainability and Long-Term Viability: AI systems that require large computational resources should be balanced with energy-efficient technologies and practices to minimise environmental impact.