AI Weekly: Recognition of bias in AI continues to grow


This week, the Partnership on AI (PAI), a nonprofit committed to the responsible use of AI, released a paper examining how technology, and AI in particular, can address a wide variety of biases. While most proposals to mitigate algorithmic discrimination require collecting data on so-called sensitive attributes, which generally include race, gender, sexuality, and nationality, the co-authors of the PAI report argue that these efforts can actually harm marginalized people and groups. Rather than trying to overcome historical patterns of discrimination and social inequity with more data and “clever algorithms,” they say, the value assumptions and trade-offs that come with using demographic data must be acknowledged.

“Harmful biases have been found in algorithmic decision-making systems in areas such as health care, recruitment, criminal justice, and education, raising social concerns about the impact of these systems on the well-being and livelihoods of individuals and groups across society,” the report’s authors wrote. “Many current algorithmic fairness techniques [require] access to data on ‘sensitive attributes’ or ‘protected categories’ (such as race, gender, or sexuality) in order to compare and standardize performance across groups. [But] these demographic-based algorithmic fairness techniques [remove] broader questions of governance and politics from the equation.”
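The group-comparison approach the quote describes can be illustrated with a toy fairness check. The sketch below, in Python, computes per-group selection rates and a demographic-parity gap; the function names, group labels, and data are all illustrative assumptions, not drawn from the PAI report:

```python
# Toy sketch of a demographic-based fairness check: compare a model's
# positive-prediction rate across groups defined by a sensitive attribute.
# All data here is illustrative; real audits require consented, well-governed data.

def selection_rates(predictions, groups):
    """Return the positive-prediction rate for each group."""
    rates = {}
    for group in sorted(set(groups)):
        preds = [p for p, g in zip(predictions, groups) if g == group]
        rates[group] = sum(preds) / len(preds)
    return rates

def demographic_parity_gap(predictions, groups):
    """Largest difference in selection rate between any two groups."""
    rates = selection_rates(predictions, groups)
    return max(rates.values()) - min(rates.values())

predictions = [1, 0, 1, 1, 0, 0, 1, 0]            # 1 = favorable outcome
groups      = ["a", "a", "a", "a", "b", "b", "b", "b"]

print(selection_rates(predictions, groups))        # {'a': 0.75, 'b': 0.25}
print(demographic_parity_gap(predictions, groups)) # 0.5
```

Note that even this minimal check requires the sensitive attribute for every record, which is exactly the data-collection burden the report's authors question.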

The publication of the PAI paper comes as organizations take a broader, and more nuanced, approach to AI technology, in light of wrongful arrests, racist recidivism predictions, discriminatory recruitment, and erroneous grades perpetuated by AI. Yesterday, AI ethicist Timnit Gebru, who was controversially fired from Google over a study examining the impacts of large language models, launched Distributed Artificial Intelligence Research (DAIR), an institute that aims to ask questions about the responsible use of AI and to recruit researchers from parts of the world that are rarely represented in the tech industry. Last week, the United Nations Educational, Scientific and Cultural Organization (UNESCO) approved a series of recommendations for AI ethics, including regular impact assessments and enforcement mechanisms to protect human rights. Meanwhile, groups such as New York University’s AI Now Institute, the Algorithmic Justice League, Data for Black Lives, Khipu, Black in AI, Data Science Africa, Masakhane, and Deep Learning Indaba continue to study the effects and applications of AI algorithms.

Legislators, too, are keeping a close eye on AI systems and their potential for harm. The UK’s Centre for Data Ethics and Innovation (CDEI) recently recommended that public sector organizations using algorithms be required to publish information on how those algorithms are applied, including the level of human oversight. The European Union has proposed rules that would ban the use of certain biometric identification systems in public and prohibit AI in social credit scoring across the bloc’s 27 member states. China, which is pursuing a number of sweeping, AI-powered surveillance initiatives, has tightened its oversight of the algorithms that companies use to run their businesses.

Difficulties in reducing bias

PAI’s work warns, however, that efforts to reduce bias in AI algorithms will inevitably run into obstacles, owing to the nature of algorithmic decision-making. If a system is optimized for a poorly defined goal, it is likely to reproduce historical inequities, possibly against its designers’ intent. And attempting to ignore societal differences across demographic groups tends to reinforce systems of oppression, because the demographic information encoded in datasets has an outsized effect on how marginalized people are represented. Yet deciding how to classify demographic information is an ongoing challenge, as demographic categories shift and evolve over time.
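The first failure mode described above, a system optimized for a poorly defined goal reproducing historical inequities, can be sketched concretely. In this hypothetical example (the data and predictor are my own illustration, not from the PAI paper), the accuracy-optimal rule learned from biased historical decisions simply automates the old disparity:

```python
# Sketch: a model optimized purely for accuracy on historical outcomes
# reproduces the disparities encoded in those outcomes. Data is illustrative.

from collections import defaultdict

def fit_majority_by_group(groups, labels):
    """Per-group majority-class predictor: the accuracy-optimal rule
    when group membership is the only feature available."""
    counts = defaultdict(lambda: [0, 0])   # group -> [negatives, positives]
    for g, y in zip(groups, labels):
        counts[g][y] += 1
    return {g: int(c[1] >= c[0]) for g, c in counts.items()}

# Historical decisions: group "a" was usually approved, group "b" usually denied.
groups = ["a"] * 10 + ["b"] * 10
labels = [1] * 8 + [0] * 2 + [1] * 2 + [0] * 8

model = fit_majority_by_group(groups, labels)
print(model)   # {'a': 1, 'b': 0} -- the historical disparity, now automated
```

The point of the sketch is that nothing went "wrong" in training: maximizing accuracy on a biased target is exactly the poorly defined goal the report cautions against.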

“Collecting sensitive data consensually requires clear, specific, and limited use, as well as strong security and protection following collection. Current consent practices are not meeting this standard,” the co-authors of the PAI report wrote. “Efforts to collect demographic information can reinforce oppressive norms and the delegitimization of disenfranchised groups … Attempts to be neutral or objective often have the effect of reinforcing the status quo.”

At a time when relatively few research papers consider the negative impacts of AI, leading ethicists are urging practitioners to pinpoint biases early in the development process. For example, a program at Stanford, the Ethics and Society Review (ESR), requires AI researchers to evaluate their grant proposals for any negative impacts. NeurIPS, one of the largest machine learning conferences in the world, mandates that co-authors who submit papers state the “potential broader impact of their work” on society. And in a whitepaper published by the US National Institute of Standards and Technology (NIST), the co-authors advocate for “cultural effective challenge,” a practice that seeks to create an environment in which developers can question steps in the engineering process, helping to surface problems.

Practices like these can encourage AI practitioners to think in new ways about the need to defend their techniques, and can help change attitudes across organizations and industries, the NIST co-authors say.

“An AI tool is often developed for one purpose, but then it gets used in other, very different contexts. Many AI applications have also been insufficiently tested, or not tested at all in the context for which they are intended,” wrote Reva Schwartz, a NIST scientist and co-author of the NIST paper. “… [Because] we know that bias is prevalent throughout the AI life cycle … [not] knowing where [a] model is biased, or presuming that there is no bias, would be dangerous. Determining methods for identifying and managing it is an important step.”

For AI coverage, send news tips to Kyle Wiggers, and be sure to subscribe to the AI Weekly newsletter and bookmark our AI channel, The Machine.

Thanks for reading,

Kyle Wiggers

AI Staff Writer


