Artificial Intelligence (AI) technologies and related automated decision-making processes are becoming increasingly embedded in the fabric of digital societies. Their impact cuts across the political, social, economic, cultural, and environmental aspects of our lives. On the one hand, AI can be used to drive economic growth, enable smart and low-carbon cities, and optimize the management of scarce resources such as food, water, and energy. On the other hand, AI can also be used in ways that infringe on human rights and fundamental freedoms, such as freedom of expression and privacy, and it risks exacerbating existing socioeconomic and gender inequalities. Furthermore, the implementation of AI systems may raise values-driven dilemmas and complex problems, often requiring trade-offs that can only be resolved through broad societal consensus.
This guide focuses on the question of how the development of AI policies can be made inclusive. Multistakeholder approaches to policymaking are part of the answer because they create space for learning, deliberation, and the development of informed solutions. They help decision makers consider diverse viewpoints and expertise, prevent capture by vested interests, and counteract the polarization of policy discourse. A multistakeholder approach to AI policy development, consulting stakeholders with different backgrounds and expertise, is necessary to produce policy that is relevant and applicable to the national context.
The objective of this guide is to support policymakers in ministries and parliaments in the design and implementation of inclusive AI policies, while empowering stakeholders, including civil society, businesses, the technical community, academia, the media, and citizens, to participate in and influence these policy processes.