Introduction

Many organizations are committed to developing and releasing AI-driven products and features that are both inclusive of a broad base of users and effective across diverse user populations. To ensure this, fairness assessments are used to identify and mitigate potential algorithmic bias, especially bias that might be experienced by historically and currently marginalized demographic groups, such as gender non-conforming and transgender individuals, people of color, people with disabilities, and women. Bias in algorithmic systems (see Appendix 4 for a more detailed definition) can occur for a number of reasons and is often the result of the system making correlations or establishing trends that have the effect of discriminating across groups, even if that is not the intention or purpose.

AI-developing organizations can take steps to test algorithmically driven systems, products, and features for bias before they are released (pre-deployment). However, pre-deployment testing cannot identify all possible issues, so it is important to conduct post-deployment analysis to determine whether users are experiencing any biased, or otherwise negative, interactions or outcomes. Most current algorithmic fairness techniques, whether pre-deployment or post-deployment, require access to sensitive demographic data (such as age, ethnicity, gender, and race) to make performance comparisons and standardizations across groups. The term “demographic data” refers to information that attempts to collapse complex social concepts into categorical variables based on observable or self-identifiable characteristics, such as gender, race, or ethnicity. Post-deployment algorithmic fairness assessments frequently rely on the collection of new user data, as opposed to the use of existing datasets, because it is an opportunity to observe the algorithmic system in use by real people. However, AI practitioners face several challenges in procuring the data necessary to identify and understand the nature of the bias in their algorithmic systems.
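
As a simple illustration of the kind of performance comparison such assessments rely on, the sketch below computes per-group error rates from hypothetical post-deployment interaction logs. The field names, group labels, and choice of error rate as the metric are illustrative assumptions, not categories or measures recommended by this report.

```python
# Minimal sketch: comparing a model's post-deployment error rates across
# demographic groups. Record fields and group labels are illustrative only.
from collections import defaultdict

def group_error_rates(records, group_key="gender"):
    """Compute the error rate of logged predictions for each demographic group."""
    errors = defaultdict(int)
    totals = defaultdict(int)
    for record in records:
        group = record[group_key]
        totals[group] += 1
        if record["prediction"] != record["label"]:
            errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}

# Toy usage with hypothetical interaction logs.
logs = [
    {"gender": "woman", "prediction": 1, "label": 0},
    {"gender": "woman", "prediction": 1, "label": 1},
    {"gender": "man", "prediction": 0, "label": 0},
    {"gender": "man", "prediction": 1, "label": 1},
]
rates = group_error_rates(logs)
disparity = max(rates.values()) - min(rates.values())
print(rates, disparity)
```

In practice, an assessment might compare other metrics across groups (for example, false positive rates), but the basic requirement is the same: each logged interaction must be associated with a demographic attribute.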

The Challenges of Algorithmic Fairness Assessments

Organizations that develop algorithmic systems face competing pressures. On the one hand, organizations are eager to ensure their products and systems perform fairly and as expected for all users. Collecting and analyzing user data, alongside key demographic characteristics, is necessary to ascertain whether certain groups of people are not receiving a fair and high-fidelity experience, indicating potential algorithmic bias. On the other hand, due to a long history of discriminatory behavior enabled by the collection and use of demographic data, organizations face regulations and other restrictions on collecting such data.

As previous Partnership on AI (PAI) research has highlighted, the collection and use of demographic data is entwined with highly contested social, political, and economic considerations. Individual privacy and anti-discrimination laws (e.g., the Civil Rights Act of 1964 and the Fair Housing Act in the United States and the General Data Protection Regulation in the European Union) impose restrictions on the collection of demographic data. In some instances, organizations may be disincentivized from exploring potential algorithmic bias, as they may face legal consequences if they know of discriminatory or biased behaviors without having a plan to address and mitigate them. However, simply choosing not to collect pertinent demographic data to avoid such responsibility — often referred to as “fairness through unawareness” — obscures the discriminatory impacts of algorithmic systems and can contribute to the perpetuation of social inequities faced by marginalized communities.

An individual’s demographic characteristics could also be used to re-identify that person, revealing their other user data, including behavioral data, and resulting in a loss of privacy. The threat of re-identification is particularly relevant in cases where corporate data is requested by and made available to state agencies. Additionally, the collection and use of sensitive and fine-grained individual user data for advertising has resulted in racially targeted misinformation campaigns, predatory lending, and the loss of public trust. Possible re-identification may result in individuals being targeted or otherwise surveilled based on specific demographic characteristics, further expanding and enabling surveillance infrastructures (see Appendix 4 for a more detailed definition). Inadequately designed data categories and models have contributed to empirical narratives that reify and deepen social stereotypes, which in turn cause harm to socially marginalized communities. Yet socially marginalized communities have also argued for the collection of demographic data, as such data is integral for identifying discriminatory behaviors and outcomes.

The following examples describe how and why marginalized communities collect and leverage demographic data to advance equity: the use of data for labor organizing (Bottom-Up Organizing with Tools from On High: Understanding the Data Practices of Labor Organizers); alternative data collection and use methods by Indigenous communities (Indigenous Data Sovereignty); and localized data collection efforts to inform city-level policies and practices (Our Data Bodies: Reclaiming Our Data).

Prioritization of Data Privacy: An Incomplete Approach for Demographic Data Collection?

Several privacy-preserving techniques have been proposed that seek to address some of the concerns posed by demographic data collection and analysis. Techniques that anonymize datasets include, but are not limited to, k-anonymity, p-sensitivity, differential privacy, and secure multi-party computation (SMPC). In general, these privacy-preserving techniques work to ensure individual privacy by anonymizing data and limiting how an individual’s information can be accessed and analyzed, thereby reducing re-identification risks.
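
As one small illustration of what an anonymization guarantee can look like in practice, the sketch below checks a toy tabular dataset for k-anonymity: every combination of quasi-identifier values must appear at least k times. The column names and values are illustrative assumptions, not drawn from any dataset discussed in this report.

```python
# Minimal sketch of a k-anonymity check over a toy dataset.
# Quasi-identifiers are attributes that could be combined to re-identify someone.
from collections import Counter

def is_k_anonymous(rows, quasi_identifiers, k):
    """Return True if every combination of quasi-identifier values
    appears at least k times in the dataset."""
    counts = Counter(tuple(row[q] for q in quasi_identifiers) for row in rows)
    return all(count >= k for count in counts.values())

# Toy usage: one group of size 1 violates 2-anonymity.
rows = [
    {"age_band": "30-39", "zip3": "941", "outcome": "A"},
    {"age_band": "30-39", "zip3": "941", "outcome": "B"},
    {"age_band": "40-49", "zip3": "100", "outcome": "A"},
]
print(is_k_anonymous(rows, ["age_band", "zip3"], k=2))  # False
```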

One such technique is differentially private federated statistics, which combines two approaches: differential privacy and federated statistics (also referred to as federated learning; see Appendix 3 for a more detailed definition). Differential privacy refers to an approach in which random statistical noise is added to data to enforce privacy constraints. Federated statistics involves running local computations on an individual’s device and only making the composite results (rather than the specific data from a particular device) visible at a central or external level. See the section below titled “Differentially Private Federated Statistics” for a more detailed explanation of both differential privacy and federated statistics.
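
As a rough illustration of the noise-adding half of this combination, the sketch below implements the Laplace mechanism, one common way of enforcing differential privacy on a simple counting query. The function name and parameter values are illustrative assumptions, not an implementation from a specific library; in the federated-statistics half of the approach, a function like this would run on the user’s device so that only the noised summary is ever transmitted.

```python
# Minimal sketch of the Laplace mechanism often used for differential privacy,
# applied to a single count. Parameter values are illustrative only.
import random

def laplace_mechanism(true_value, sensitivity=1.0, epsilon=1.0):
    """Return true_value plus Laplace noise of scale sensitivity / epsilon.
    A smaller epsilon means more noise and a stronger privacy guarantee."""
    scale = sensitivity / epsilon
    # The difference of two i.i.d. exponential samples is Laplace-distributed.
    noise = random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)
    return true_value + noise

# e.g., a device-local count of 12 relevant interactions, reported with noise:
print(laplace_mechanism(12, sensitivity=1.0, epsilon=0.5))
```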

It has been shown that these two techniques can be designed and implemented together to protect the privacy of individuals’ sensitive data. Sensitive user data can be collected and analyzed on an individual’s device to determine whether the individual is experiencing algorithmic bias (federated statistics). Statistical noise can then be added to the resulting output before it is shared with the organization assessing the algorithm (differential privacy), protecting sensitive user information against re-identification. While the application of differentially private federated statistics in the context of algorithmic fairness is relatively new, it is viewed as a promising post-deployment technique for overcoming some of the barriers and challenges related to the collection and use of sensitive user data.
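
The sketch below is a rough end-to-end illustration of this combined workflow under simplifying assumptions: each hypothetical device computes a bias-relevant count over its own interaction logs, adds Laplace noise locally, and sends only the noised summary to a central aggregator. The record fields, group labels, and parameter values are all illustrative, and real deployments involve many additional design decisions (privacy budgets, secure aggregation, consent flows) not shown here.

```python
# Hypothetical end-to-end sketch: on-device computation (federated statistics)
# plus locally added Laplace noise (differential privacy), so the organization
# sees only noised per-group summaries. All names and data are illustrative.
import random

def laplace_noise(scale):
    # Difference of two i.i.d. exponential samples is Laplace-distributed.
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def on_device_report(user_records, group, sensitivity=1.0, epsilon=1.0):
    """Runs on the user's device: count apparently erroneous outcomes locally,
    add noise, and return only the noised count with a self-reported group."""
    errors = sum(1 for r in user_records if r["prediction"] != r["label"])
    return group, errors + laplace_noise(sensitivity / epsilon)

def central_aggregate(reports):
    """Runs at the organization: sum noised per-device counts by group.
    Raw interaction records and exact per-user counts never leave the devices."""
    totals = {}
    for group, noised_count in reports:
        totals[group] = totals.get(group, 0.0) + noised_count
    return totals

# Toy usage: two devices report; the organization sees only noised group totals.
device_a = [{"prediction": 1, "label": 0}, {"prediction": 1, "label": 1}]
device_b = [{"prediction": 0, "label": 0}]
reports = [on_device_report(device_a, "group_1"), on_device_report(device_b, "group_2")]
print(central_aggregate(reports))
```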

However, it is important to acknowledge that data privacy and security are not the only concerns held by individuals whose data is being used or by responsible AI advocates. For example, there are broader questions about whether the appropriate features of social identity and interactions with algorithmic systems are being measured and studied for the purposes of algorithmic fairness. It cannot be assumed that privacy-preserving statistical approaches are inherently designed to grapple with these other fairness questions as well. In this report, we explore how differentially private federated statistics can be best leveraged to analyze algorithmic systems for potential bias, examining the potential limitations and negative implications of its use.