Webinar #4 — Gender and Diversity Aspects in Hybrid Collective Intelligence

This webinar examines the integration of gender and diversity perspectives in the emerging field of hybrid collective intelligence, where human expertise and artificial intelligence are combined to generate novel forms of knowledge and decision-making. The discussion will address methodological and ethical considerations, highlight risks of bias and exclusion in human–AI interaction, and propose frameworks for embedding inclusivity and equity in the design, development, and deployment of such systems.
When: Monday, 29 September, at 6 pm CEST
Register here: https://events.teams.microsoft.com/event/71775f25-0ed9-4904-96bf-a723c21bef65@34c64e9f-d27f-4edd-a1f0-1397f0c84f94
Speakers and talks

Prof. Taha Yasseri, Trinity College Dublin and Technological University Dublin
Taha Yasseri is the Workday Full Professor and Chair of Technology and Society at Trinity College Dublin and Technological University Dublin, where he directs the TCD–TU Dublin Joint Centre for the Sociology of Humans and Machines (SOHAM). He is also an Adjunct Full Professor at the School of Mathematics and Statistics at University College Dublin. Previously, he served as Professor and Deputy Head of the School of Sociology at University College Dublin and was a Geary Fellow at the Geary Institute for Public Policy. Before moving to Ireland, he was Senior Research Fellow in Computational Social Science at the University of Oxford, a Turing Fellow at the Alan Turing Institute for Data Science and Artificial Intelligence, and a Research Fellow in Humanities and Social Sciences at Wolfson College, Oxford. He holds a PhD in Complex Systems Physics from the University of Göttingen, Germany.
Title: How Our Workplace Gender Biases Extend to AI Colleagues
Abstract: As AI evolves from a tool to a teammate or even a manager, we bring our human habits and biases along for the ride. Our research shows that simply giving AI a gender label changes how people trust, judge, and cooperate with it. Gender cues can make AI seem more relatable and improve teamwork, but they also reproduce familiar inequalities. Female-labelled AI is judged more harshly when things go wrong. We tend to cooperate more with female-labelled partners and less with male-labelled ones, even when both are machines. In addition, male-labelled AI often earns more credit when things go well. These findings reveal that adding gender to AI is not a neutral design choice—it can boost collaboration, but it can also carry workplace biases into our relationships with AI colleagues.

Prof. Roberta Calegari, University of Bologna
Roberta Calegari is a researcher and professor at the Department of Computer Science and Engineering and at the Alma Mater Research Institute for Human-Centered Artificial Intelligence (Alma-AI) at the University of Bologna. Her research focuses on trustworthy AI systems, with particular attention to fairness, explainability, distributed intelligent systems, software engineering, multi-paradigm programming languages, and the intersection of AI and law.
She is the coordinator of the Horizon Europe project AEQUITAS (Grant Agreement No. 101070363), which addresses the assessment and engineering of equitable, unbiased, impartial, and trustworthy AI systems and provides an experimental playground for evaluating and mitigating bias in AI technologies. Professor Calegari also contributed to the Horizon Europe project PrePAI (Grant Agreement No. 101083674), working on requirements and mechanisms to ensure that resources published on the future AI-on-Demand platform are labelled as trustworthy and comply with forthcoming AI regulations.
Her research interests lie in the broad area of knowledge representation and reasoning, with a specific focus on symbolic AI, including computational logic, logic programming, argumentation theory, logic-based multi-agent systems, and non-monotonic/defeasible reasoning. She serves on the Editorial Board of ACM Computing Surveys (Artificial Intelligence area) and on the Social Aspects of AI track of JAIR. She has authored more than 90 peer-reviewed publications in international journals and conferences, leads multiple European, national, and regional research projects, and manages several academic–industry collaborations.
Title: From Opportunity to Compliance: Gender and Diversity in the AEQUITAS Framework for Fair AI
Abstract: As AI systems play an increasing role in shaping critical decisions, ensuring fairness—especially regarding gender and diversity—has become both an ethical imperative and a regulatory priority. This talk introduces the AEQUITAS framework, developed within the Horizon Europe project of the same name, which provides tools to assess, engineer, and mitigate bias in AI systems, with a focus on fairness by design.
We will highlight how AEQUITAS bridges ethical principles and technical implementation, integrating gender and diversity considerations into practical workflows. Finally, the talk will explore how the framework supports the shift from fairness as an opportunity to fairness as a compliance requirement, aligning with emerging European AI regulations.

E.M. Lewis-Jong, Mozilla Data Collective and Common Voice
E.M. Lewis-Jong is the Founder and VP of Mozilla Data Collective, the community-led data platform for the ethical creation, curation, and control of AI training datasets. They are also the Director of Common Voice, the world’s largest open crowdsourced speech corpus, spanning 300+ languages with more than 750,000 community contributors. E.M. has run AI community data programmes for the US National Science Foundation, NVIDIA, the Gates Foundation, and GIZ. Before Mozilla, E.M. was a founding executive in CivTech, leading, launching, and growing a product from concept stage to a venture-backed Series A. They are a Sutton Trust Oxbridge Access Summer Schools Alumna, a Rising Star of Tech Awardee, and a SheCodes Alumna. They sit on several multilingual AI advisory boards, including as NLP TAP Advisor for the Google-backed Lacuna Fund, Industry Board Advisor for Speech Technologies at the University of Groningen, and Data Advisor for the Caribbean Parliaments’ digitisation project. They studied at the University of Oxford and are, in their spare time, a doctoral researcher in AI Informatics focussed on children, data, and Intelligent Personal Assistants.
Title: Language, data and cultural identity in AI systems
Abstract: Artificial intelligence systems are only as inclusive as the data they are trained on. Many languages—particularly those with oral traditions, regional dialects, or flexible orthographies—remain under-represented in global datasets. This imbalance not only degrades technical performance but also undermines cultural identity, representation, and agency. We will explore how community-driven initiatives such as Mozilla Common Voice and Mozilla Data Collective can help embed cultural values into data collection, curation, and sharing. By reconceiving training data as a public cultural good and inviting a hyper-diversity of language communities into decision-making, we can work towards AI that preserves and revitalises heritage rather than accelerating cultural homogeneity.