
How to Avoid Bias in Emerging Technologies

Emerging technologies, such as artificial intelligence (AI), machine learning (ML), quantum computing, and blockchain, are gaining attention for their potential to transform our society. Alongside their benefits, there are growing concerns about their negative societal impacts, particularly bias against people with disabilities. To ensure responsible development, whether creating a Minimum Viable Product (MVP) or an ideal-state solution, it is crucial to mitigate rather than compound biases at every stage.

With greater attention on AI in particular, the National Institute of Standards and Technology (NIST) recommends that organizations look beyond ML and consider societal influences on technology development. Their guidance underscores the importance of understanding biases not just in algorithms and data, but also in the societal context of AI usage.

According to the White House’s Office of Science and Technology Policy (OSTP), bias in technology, specifically algorithmic discrimination, occurs when automated systems unjustifiably disfavor people based on their race, color, ethnicity, sex (including pregnancy, childbirth, and related medical conditions, gender identity, intersex status, and sexual orientation), religion, age, national origin, disability, veteran status, genetic information, or any other classification protected by law. Bias can infiltrate algorithms through, for example, biased training data or flawed sampling, perpetuating historical or social inequities. The revised NIST publication, “Towards a Standard for Identifying and Managing Bias in Artificial Intelligence” (NIST Special Publication 1270), acknowledges computational and statistical biases and stresses the need to address human and systemic biases through a sociotechnical approach involving diverse experts and stakeholders.

The principles outlined by OSTP aim to prevent algorithmic discrimination by promoting equitable design and use of automated systems. Their recommended actions include conducting equity assessments during design, using representative data, considering accessibility, and performing ongoing testing and mitigation. Transparency and accountability are crucial, requiring clear reporting of algorithmic impact assessments and mitigation efforts.

The intention behind diversity and inclusion practices is to empower individuals from marginalized communities. Even executives in government and industry may hold implicit biases, particularly concerning new technology, which can influence decisions related to productivity or competitiveness. Even when decision makers believe they are impartial, research suggests personal beliefs can unconsciously favor novel technologies over older ones, leading to unwarranted risks and investments in unproven or unsafe products. Decision makers should evaluate technology based on objective criteria, such as error rates, task completion times, and privacy concerns, rather than relying solely on subjective perceptions such as emotional reactions or user satisfaction.
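To make one of those objective criteria concrete, the sketch below compares error rates across user groups from pilot-test results instead of relying on impressions. It is illustrative only: the group labels and records are made up, not drawn from any study cited here.

```python
from collections import defaultdict

def per_group_error_rates(records):
    """Compute the error rate for each user group.

    Each record is a (group, is_error) pair, e.g. one task attempt
    per record. Group names and data below are hypothetical.
    """
    totals = defaultdict(int)
    errors = defaultdict(int)
    for group, is_error in records:
        totals[group] += 1
        if is_error:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Hypothetical pilot results: (group, whether the task attempt failed)
records = [
    ("uses_screen_reader", True), ("uses_screen_reader", True),
    ("uses_screen_reader", False), ("uses_screen_reader", False),
    ("no_assistive_tech", False), ("no_assistive_tech", False),
    ("no_assistive_tech", False), ("no_assistive_tech", True),
]
rates = per_group_error_rates(records)
# A large gap between groups is a signal to investigate, not proof of bias.
```

A decision maker comparing two vendors could run the same tally on each product's pilot data and weigh the measured gap alongside task completion times and privacy reviews.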

Additional strategies, such as diverse team composition, thorough user research, diverse data sources, bias detection and mitigation, bias training, iterative testing and development, and ethics reviews, help agencies minimize bias in new technologies. These strategies encourage technologies that are more inclusive, fair, and effective for diverse user populations. Summaries of these strategies are below:

  • Diverse Team Composition: Assemble a development team with diverse backgrounds, skills, and viewpoints to detect and address biases present within the team. Include diverse perspectives, such as nonexperts and everyday users, in decision-making teams concerning new technology. Research shows involving nonexperts can expose usability issues experts might overlook. Moreover, incorporating unique voices contributes to a broad spectrum of experiences and perspectives throughout the development phase.
  • Thorough User Research: Perform comprehensive user research to grasp the requirements and preferences of your target demographic. This encourages technology that is designed to be valuable and easily accessible to a wide audience. See our article "Tips for Usability Testing with Individuals with Disabilities" for more insights.
  • User-Centered Design: Emphasize user research and integrate ongoing user feedback throughout development so that the final product reflects the genuine needs and preferences of diverse user demographics rather than the biases of the development team. As articulated by The Lab at OPM, within the Office of Personnel Management (OPM), user-centered design places individuals at the forefront, considering their behaviors, thought processes, and aspirations throughout the design process.
  • Diverse Data Sources: Leverage varied data sources to train and test the technology, promoting an accurate representation of the intended user base. Varied data sources mitigate bias and improve the technology's effectiveness across diverse demographic groups.
  • Bias Detection and Mitigation: Proactively identify and resolve biases present in the data, algorithms, and decision-making procedures utilized in the technology. This could entail methods like algorithmic auditing, bias testing, and algorithmic transparency.
  • Bias Training: Offer instruction and guidance to business and development teams regarding the significance of recognizing and addressing biases, along with effective strategies for mitigation. Such training helps cultivate a culture of bias awareness throughout the agency.
  • Iterative Testing and Development: Implement a continuous testing process throughout development, using MVPs as learning tools to gather feedback, pinpoint biases, and make enhancements over time. This iterative approach facilitates ongoing refinement and optimization of the technology.
  • Ethics Review: Integrate an ethics review into the technology development lifecycle to uncover potential ethical dilemmas or biases that may have been overlooked. An ethics review should include experts from diverse disciplines. Ensure ethical guidelines and principles, encompassing fairness, accountability, transparency, and privacy, are embedded in the development process from the start.
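As one concrete form of the bias testing mentioned above, teams sometimes compare each group's favorable-outcome rate against a reference group. The sketch below applies the "four-fifths" (80%) threshold, a screening heuristic drawn from U.S. employment selection guidelines; the decision data and group names are hypothetical, and a ratio below the threshold flags a system for deeper review rather than proving discrimination.

```python
def selection_rates(outcomes):
    """outcomes: dict mapping group -> list of 0/1 decisions (1 = favorable)."""
    return {g: sum(v) / len(v) for g, v in outcomes.items()}

def disparate_impact_ratios(outcomes, reference_group):
    """Ratio of each group's favorable-outcome rate to the reference group's.

    Under the four-fifths heuristic, ratios below 0.8 warrant review.
    """
    rates = selection_rates(outcomes)
    ref = rates[reference_group]
    return {g: r / ref for g, r in rates.items()}

# Hypothetical automated-screening decisions (1 = advanced to next stage).
decisions = {
    "group_a": [1, 1, 1, 0, 1, 1, 1, 0, 1, 1],  # 8 of 10 favorable
    "group_b": [1, 0, 1, 0, 0, 1, 0, 0, 1, 0],  # 4 of 10 favorable
}
ratios = disparate_impact_ratios(decisions, reference_group="group_a")
flagged = [g for g, r in ratios.items() if r < 0.8]
```

Running such a check as part of algorithmic auditing, on every model revision and data refresh, turns the iterative testing strategy above into a measurable gate rather than a one-time review.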

The advancement of emerging technologies requires a steadfast dedication to responsible development and equitable outcomes. Confronting biases head-on at every stage of the development process is critical. By embracing diverse perspectives, conducting thorough research, and implementing rigorous testing and mitigation strategies, we can foster technologies that not only push the boundaries of innovation but also uphold principles of fairness, inclusion, and societal benefit.

Reviewed/Updated: April 2024

An official website of the General Services Administration
