With the ever-expanding influence of AI, the conversation around algorithmic fairness has become a cornerstone of responsible technology development. The Third European Workshop on Algorithmic Fairness intends to advance this discourse with a unique focus on the European experience.
The broad concept of EWAF is to showcase the proactive efforts and innovative research emerging from Europe, emphasizing their potential to guide global standards. The workshop welcomes interdisciplinary submissions from computer science, law, sociology, and philosophy, and especially submissions that address issues specific to the European context.
The workshop will be held from July 1 to July 3 at the Alte Mensa of Johannes Gutenberg-Universität in Mainz, Germany.
De-biased, diverse, divisive - On ethical perspectives regarding the de-biasing of GenAI and their actionability
AI tech companies cannot seem to get it right. After years of evidence-based criticism of biases in AI, in particular in decision models, LLMs, and other generative AI, and after years of research and toolbox provision for de-biasing, many companies have implemented such safeguards in their services. However, ridicule and protests have recently erupted when users discovered generated images that were “(overly?) diversified” with respect to gender and ethnicity, and answers to ethical questions that were “(overly?) balanced” with regard to moral stances. Is this seeming contradiction just a backlash, or does it point to deeper issues? In this talk, I will analyse instances of recent discourse on too little or too much “diversification” of (Gen)AI and relate them to methodological criticism of “de-biasing”. A second aim is to contribute to broadening and deepening the answers that computer science and engineering can and should give to enhance fairness and justice.
Beyond the AI hype: Balancing Innovation and Social Responsibility
AI can extend human capabilities, but this requires addressing challenges in education, jobs, and biases. Taking a responsible approach involves understanding AI's nature, design choices, societal role, and ethical considerations. Recent AI developments, including foundational models, transformer models, generative models, and large language models (LLMs), raise questions about whether they are changing the paradigm of AI, and about the responsibility of those who develop and deploy AI systems. In all these developments, it is vital to understand that AI is not an autonomous entity but rather depends on human responsibility and decision-making.
In this talk, I will further discuss the need for a responsible approach to AI that emphasizes trust, cooperation, and the common good. Taking responsibility involves regulation, governance, and awareness. Ethics and dilemmas are ongoing considerations, but addressing them requires understanding that trade-offs must be made and that decision processes are always contextual. Taking responsibility means designing AI systems with values in mind and implementing regulations, governance, monitoring, agreements, and norms. Rather than viewing regulation as a constraint, it should be seen as a stepping stone for innovation, ensuring public acceptance, driving transformation, and promoting business differentiation. Responsible Artificial Intelligence (AI) is not an option but the only possible way forward in AI.
Virginia Dignum is Professor of Responsible Artificial Intelligence at Umeå University, Sweden, where she leads the AI Policy Lab. She is also senior advisor on AI policy to the Wallenberg Foundations. She holds a PhD in Artificial Intelligence from Utrecht University (2004), is a member of the Royal Swedish Academy of Engineering Sciences (IVA), and is a Fellow of the European Artificial Intelligence Association (EURAI). She is a member of the United Nations Advisory Body on AI, the Global Partnership on AI (GPAI), UNESCO's expert group on the implementation of AI recommendations, and OECD's expert group on AI, founder of ALLAI, the Dutch AI Alliance, and co-chair of the WEF's Global Future Council on AI. She was a member of the EU's High-Level Expert Group on Artificial Intelligence and led UNICEF's guidance on AI and children. Her new book, “The AI Paradox”, is planned for publication in late 2024.
Papers from Lightning Rounds 1&2, Papers from In-Depth Session 1
Various Authors
Salon 3SEIN
Große Bleiche 60-62, 55116 Mainz
https://maps.app.goo.gl/6woSXPSg1rfNRXTPA
Building Bridges from and Beyond the EU Artificial Intelligence Act. Regulating AI-based Discrimination in the European Scenario
Marilisa D'Amico, Ernesto Damiani, Costanza Nardocci, Paolo Ceravolo, Samira Maghool, Marta Annamaria Tamborini, Paolo Gambatesa and Fatemeh Mohammadi
Abstract:
The panel discusses the implications of the approval of the EU Artificial Intelligence Act, also in light of additional initiatives ongoing in the European and global arena (e.g. Council of Europe, UNESCO, United Nations), assessing their adequacy to ensure the fairness of algorithms and to tackle discrimination resulting from the widespread use of AI technologies. By bringing together constitutional law and computer science expertise, the discussion has a twofold aim: on the one hand, to illustrate the critical issues underlying the risks of AI systems from a constitutional and human rights perspective, supported by computer science analysis; on the other hand, to explore innovative strategies that help designers and implementers promote an inclusive use of AI technologies and enhance their positive potential.
Moral Exercises for Human Oversight of AI Systems
Teresa Scantamburlo and Silvia Crafa
Abstract:
The interactive session addresses the challenge of ethical reflection in AI research and practice. Human judgement is crucial in balancing accuracy-fairness trade-offs and overseeing AI system behaviour. To support ethical reflection and judgement, we propose the experience of moral exercises: structured activities aimed at engaging AI actors in realistic ethical problems involving the development or use of an AI system. Participants undergo guided exercises involving scenario analysis and individual and group work, highlighting consensus and divergences on the problem at stake. The initiative will promote the development of moral exercises and help refine the methodology for AI ethics education.
How the Digital Services Act can enable researcher access to data of the largest online platforms and search engines - interactive session with the European Commission
EC Joint Research Centre
The Digital Services Act (DSA) entered into full force in February 2024 and aims to create a safer and more trustworthy digital space where the fundamental rights of all users are protected. As part of the DSA's transparency obligations, Article 40 establishes the obligation of Very Large Online Platforms (VLOPs) and Very Large Online Search Engines (VLOSEs) to provide researchers with access to data for the purposes of conducting research that contributes to the detection, understanding, and mitigation of systemic risks in the European Union, such as discrimination and the spread of disinformation.
In particular, Article 40(12) of the DSA obliges providers of VLOPs and VLOSEs to give researchers access to data that is publicly available in their interfaces. In addition, Article 40(4) of the DSA establishes a data access mechanism through which researchers who undergo a vetting procedure can obtain access to non-public data for the study of systemic risks in the European Union.
In this workshop, the Commission will present the data access mechanism for vetted researchers and participants will have the opportunity to provide feedback on a detailed proposal for its procedural, technical and operational elements, which is currently being prepared in the form of a delegated act. Participants will get the chance to explore how data access could benefit their research, which challenges they foresee and how these may be overcome. Participants will also hear about how the DSA protects researcher access to publicly available data, and will be able to provide feedback on the data access tools and procedures made available by VLOPs and VLOSEs to this end so far.
Fairness, or not Fairness, That is the Question. Rethinking Virtual Assistants' Responses From an Ethical Perspective
Giulia Teverini, Joy Ciliani and Alessia Nicoletta Marino
Abstract: The workshop delves into the intricate aspects of ethics applied to human-computer interaction. Participants will engage in practical activities in which they analyze conversations with virtual assistants, distinguishing between appropriate and inappropriate interactions. In this context, the ethical principles proposed by the European Commission will serve as guidelines to ensure favorable conditions for the development of trustworthy AI-based systems. Immediately afterwards, participants will be asked to propose scenario-based solutions to the ethical issues that emerged in the previous stage. At the end of the workshop, each group will present its results to the audience. The collaborative nature of the workshop fosters critical thinking, practical understanding, and collective reasoning.
Papers from Lightning Round 3
Various Authors
Schlossbiergarten
Peter-Altmeier-Allee 1, 55116 Mainz
https://maps.app.goo.gl/bRui4ticSuAM1urh7
Society-centered AI: An Integrative Perspective on Algorithmic Fairness
Abstract: In this talk, I will share my never-ending learning journey on algorithmic fairness. I will give an overview of fairness in algorithmic decision making, reviewing the progress and the wrong assumptions made along the way, which have led to new and fascinating research questions. Most of these questions remain open to this day and become even more challenging in the era of generative AI. Thus, this talk will provide only a few answers but many open challenges, motivating the need for a paradigm shift from owner-centered to society-centered AI. With society-centered AI, I aim to bring the values, goals, and needs of all relevant stakeholders into AI development as first-class citizens, to ensure that these new technologies are at the service of society.
Striving for Equity: Navigating Algorithmic Fairness for AI in the Workplace
Thea Radüntz, Martin Brenzke and Dominik Köhler
Artificial Intelligence for Assessment in the Context of Asylum Procedures – Lessons Learned and Reflections Upon Fairness-Related Challenges via Participatory Methods and Science-Policy Dialogue
AI-FORA
Papers from Lightning Round 4, Papers from In-Depth Session 2
Various Authors