Artificial Intelligence

Student interns explain how to use Artificial Intelligence in a way that upholds Academic Integrity.
This suite of short videos was created by our student interns in 2023. The videos are a useful resource for students and staff.
The topics covered include Academic Integrity resources at DkIT, Academic Integrity and Group Work, and Academic Integrity and Artificial Intelligence.

UNESCO’s first global guidance on GenAI in education aims to support countries to implement immediate actions, plan long-term policies and develop human capacity to ensure a human-centred vision of these new technologies. The Guidance presents an assessment of potential risks GenAI could pose to core humanistic values that promote human agency, inclusion, equity, gender equality, and linguistic and cultural diversities, as well as plural opinions and expressions. It proposes key steps for governmental agencies to regulate the use of GenAI tools including mandating the protection of data privacy and considering an age limit for their use. It outlines requirements for GenAI providers to enable their ethical and effective use in education.

This is a short introduction to ChatGPT for people teaching in higher education, created in January 2023 and updated until this version was saved in February 2023. The resource is a slide deck which you are free to modify and update (since this is a fast-moving topic). No prior knowledge of AI or chatbots is necessary to use the slides.

Artificial Intelligence (AI) technologies and related automated decision-making processes are becoming increasingly embedded in the fabric of digital societies. Their impact cuts across the political, social, economic, cultural, and environmental aspects of our lives. On the one hand, AI can be used to drive economic growth, enable smart and low-carbon cities, and optimize the management of scarce resources such as food, water, and energy. On the other hand, AI can also be used in ways that infringe on human rights and fundamental freedoms, such as freedom of expression and privacy, and it risks exacerbating existing socioeconomic and gender inequalities. Furthermore, the implementation of AI systems may lead to value-driven dilemmas and complex problems, often requiring trade-offs that can only be addressed through broad societal consensus.

This guide focuses on how the development of AI policies can be made inclusive. Multistakeholder approaches to policymaking are part of the answer because they create space for learning, deliberation, and the development of informed solutions. They help decision makers consider diverse viewpoints and expertise, prevent capture by vested interests, and counteract polarization of policy discourse. A multistakeholder approach to AI policy development, drawing on stakeholders from different backgrounds and areas of expertise, is necessary to develop a policy that is relevant and applicable to the national context.

The objective of this guide is to support policymakers in ministries and parliaments in the design and implementation of inclusive AI policies, while empowering stakeholders, including civil society, businesses, the technical community, academia, media, and citizens, to participate in and influence these policy processes.