June 3, 2024 Artificial Intelligence and Intersectionality by Inga Ulnicane
Behind the Artificial Intelligence (AI) hype about its numerous benefits, uncomfortable questions concerning the problematic social impacts of AI on issues such as justice, fairness and equality are intensifying. While it has been argued that AI has the potential to eliminate human bias, growing evidence suggests quite the opposite – that AI is amplifying and exacerbating gender, racial, ethnic and other stereotypes. Some widely discussed biased AI applications include hiring algorithms that discriminate against female candidates, facial recognition that performs poorly on black and female faces, as well as obedient and subservient digital female voice assistants. At the same time, it is very difficult to find examples where AI has helped to detect, reduce or eliminate human bias. In two recent articles (Ulnicane 2024; Ulnicane and Aden 2023), I analyse how AI documents frame concerns about bias and inequality in AI and recommendations for tackling them. For this analysis, I use an intersectional lens to highlight the interaction of multiple identities – gender, race, class and others – leading to the marginalization, exclusion and discrimination of certain social groups.
Social vs technical framing of bias in AI
Bias is one of the key concerns in policy, media and public discussions about AI. While bias in AI can be presented as a technical issue, it is a multifaceted phenomenon that includes social, technical, political, cultural and historical dimensions. To make sense of discussions about bias in AI, in our recent article (Ulnicane & Aden 2023) we distinguish two competing frames: a technical framing and a social (socio-technical) one. According to the technical frame, AI is objective and neutral and can help to detect and eliminate bias. If bias in AI occurs, it is just a glitch that can be addressed with technical measures; AI is offered as a technical fix for human bias. While this technical frame has been quite popular, it has been challenged by an alternative social framing. According to the social frame, AI amplifies and exacerbates human biases and reflects deep-rooted historical and systemic inequalities and power asymmetries. Bias cannot simply be fixed with AI but requires a systemic and holistic approach. We suggest approaching bias in AI as a complex and uncertain ‘wicked problem’. To tackle such a problem, a broader strategy is needed that combines technical and social actions based on wide-ranging collaborations, including with affected communities.
Intersectionality and AI: concerns and agenda for tackling them
In my recent article on intersectionality and AI (Ulnicane 2024), I examine four high-profile reports on AI and gender, focusing on how they frame concerns and recommendations for action. The reports highlight the systemic nature of equality issues in AI, where the diversity crisis among AI developers and founders leads to the building of biased AI systems, creating a negative feedback loop and vicious cycle. Concerns are growing that AI might offset progress made towards equality during previous decades. The lack of women and minorities in computing is not a new problem. There have been many diversity initiatives in computing over the past decades, but they have not led to positive changes. Sometimes these initiatives have even resulted in a decline in diversity because they have not sufficiently addressed underlying cultural and structural issues in the tech sector, including harassment, discrimination, stereotypes, unfair pay and a lack of promotion opportunities.
Despite the acceptance of diversity rhetoric by tech companies, it is often poorly understood and has even experienced pushback. The reports highlight the urgency of the diversity problem in AI. They argue for a broad approach that goes beyond just increasing the numbers of women and minorities; instead, the focus should be on shaping culture, power and opportunities to exert influence. Furthermore, it is necessary to involve perspectives from multiple disciplines, sectors and groups. At the same time, it is important to avoid ‘participation washing’, where the participation of a minority representative is supposed to legitimize the project. While intersectionality provides an illuminating perspective on some of the key concerns in AI, in the existing AI landscape dominated by economic issues it can be perceived as a niche perspective mainly concerning women and minorities. It could be enriching to use intersectionality to reimagine AI in more inclusive and participatory ways.
References:
Ulnicane, I. (2024). Intersectionality in Artificial Intelligence: Framing Concerns and Recommendations for Action. Social Inclusion, 12: 7543. https://doi.org/10.17645/si.7543
Ulnicane, I. & Aden, A. (2023). Power and politics in framing bias in artificial intelligence policy. Review of Policy Research, 40(5): 665–687. https://doi.org/10.1111/ropr.12567
This post was initially published on Europe of Knowledge blog.
May 31, 2024 ECPR SG Knowledge Politics and Policies Newsletter
Warm greetings to all our colleagues as the 2024 General Conference is approaching! We are very much looking forward to seeing many of you at the annual SG section at the ECPR Conference in Dublin this August! The Section continues the work on knowledge policy domains from the past 10 ECPR conferences (previously under the titles ‘Politics of Higher Education, …
April 26, 2024 Responsible Research and Innovation training by Inga Ulnicane
How can research and innovation be aligned with the values, needs and expectations of society? During the past ten years, researchers, policy-makers and funders in Europe have developed and supported the Responsible Research and Innovation (RRI) approach to address societal aspects of research and innovation early on. This approach aims to go beyond risk management and take a broader focus on the purpose of research and innovation. It draws on a range of anticipation, reflection, engagement, and action mechanisms to involve society and foster interdisciplinary collaborations that shape research and innovation towards socially beneficial goals. Importantly, in the RRI approach responsibility does not just refer to the responsible conduct of individual researchers but aims to facilitate responsible processes and governance arrangements across the whole research and innovation system.
To build such a system, it is important to provide relevant training opportunities for researchers and stakeholders. Some of the major research funders, such as the EU Framework Programme and the UK research councils, have supported the development and delivery of RRI training activities, which play a crucial role in raising awareness and developing a culture that puts societal aspects at the core of research and innovation. Two recent collaborative publications in the Journal of Responsible Technology share a number of good practices in RRI training.
RRI capacity development in a large-scale EU research project
Researchers in the EU-funded Human Brain Project (HBP) have developed a dedicated RRI capacity development programme (Ogoh et al 2023). The HBP (2013-2023) was one of the largest international collaborations ever that brought together around 500 researchers from over 100 universities and research centres from some 20 countries. Over ten years, the project received approximately half a billion Euros from the EU Framework Programmes. An integrated RRI team of social scientists and humanities researchers in the HBP worked alongside neuroscientists, computer scientists and engineers.
This continuous collaboration allowed the RRI capacity development programme to be developed in close consultation with researchers and stakeholders. The programme included 17 modules on a range of topics such as data governance, dual use, and diversity, and it also developed online training resources, lectures, and videos.
Many participants in the online and in-person training were eager to learn about and reflect on the societal aspects of their work. They often told us that this much-needed training had been missing from their university education, which typically covered ethical aspects rather narrowly, in terms of ethics approvals. However, assessing the impact of RRI training is far from straightforward. Counting training sessions and participants, as well as reading evaluation forms, gives some indication of interest and satisfaction. At the same time, it is much more challenging to assess some of the core aspects of RRI such as reflexivity, changing culture and increased sensitivity towards societal expectations.
RRI and doctoral training
In the UK, RRI training is integrated in the centres for doctoral training. A recent editorial (Stahl et al 2023) presents a variety of examples of how RRI training is organized and assessed in the context of these centres. This collaborative publication provides rich information and reflection on aims, content, and challenges of teaching RRI. It addresses questions such as: What kind of skills, attitudes and competencies do researchers need in the context of RRI? Should they be required to have a relatively detailed understanding of methodologies of foresight or public engagement? Or should they rather be willing and able to continuously reflect on and address social and ethical aspects of their own research?
The editorial demonstrates a broad range of approaches and methods to RRI training and assessment across diverse disciplines and universities. While having RRI as part of doctoral training is an important step towards its institutionalization, it is rather limited on its own. To be impactful, it needs to be part of a broader transformation of the research and innovation system including policy, reward system and funding.
References:
Ogoh, G., Akintoye, S., Eke, D., Farisco, M., Fernow, J., Grasenick, K., Guerrero, M., Rosemann, A., Salles, A. & Ulnicane, I. (2023). Developing capabilities for responsible research and innovation (RRI). Journal of Responsible Technology, 15, 100065. https://doi.org/10.1016/j.jrt.2023.100065
Stahl, B. C., Aicardi, C., Brooks, L., Craigon, P. J., Cunden, M., Burton, S. D., De Heaver, M., De Saille, S., Dolby, S., Dowthwaite, L., Eke, D., Hughes, S., Keene, P., Kuh, V., Portillo, V., Shanley, D., Smallman, M., Smith, M., Stilgoe, J., Ulnicane, I., Wagner, C., & Webb, H. (2023). Assessing responsible innovation training. Journal of Responsible Technology, 16, 100063. https://doi.org/10.1016/j.jrt.2023.100063
This post was initially published on Europe of Knowledge blog.
From the Standing Group on Knowledge Politics and Policies.