Projects

Updates on PAI Issue Areas

Advancing Responsible AI

For AI to benefit all, a broad range of voices must contribute to answering important questions about its development, use, and impact. These voices should include researchers and developers, practitioners, and communities differently affected by AI.

PAI is committed to generating learnings and guidelines on responsible AI by bridging the gap between those affected by technologies and those building them. Our diverse Partners guide PAI’s research agenda in the pursuit of responsible AI. At the same time, PAI works with our Partners to bring these insights to practice in their organizations, and in the world. 

Below, our Partner community can find details on our active projects and how to participate in PAI’s work.

PAI currently works to advance Responsible AI through five primary Issue Areas:


 

ABOUT ML

The ABOUT ML (Annotation and Benchmarking on Understanding and Transparency of Machine learning Lifecycles) initiative aims to set a new industry norm for creating complete lifecycle documentation for every machine learning system created or deployed.

Drawing from a diverse and multipronged input approach (including public commenting, solicited input, Methods for Inclusion, and a Steering Committee), ABOUT ML aims to build broad multistakeholder consensus on what information should be provided about every ML system to support the goals of transparency, responsible AI development and deployment, and accountability. Considered as both an artifact and a process that forms the internal infrastructure for responsible AI, documentation for AI systems can bridge the gap between principles and day-to-day operations in AI ethics.



ABOUT ML Projects:

ML Documentation

Central Research Question: How can we increase transparency and accountability with machine learning system documentation?

Read the latest on our blog

Bridging AI Principles to Practice with ABOUT ML

Project Background

  • As machine learning (ML) becomes central to many decision-making processes – including high-stakes decisions in criminal justice and banking – the organizations deploying such automated decision-making systems face increased pressure for transparency on how these decisions are made.
  • Presently, there is neither consensus on which practices work best nor on what information needs to be disclosed and for which goals. Moreover, the definition of transparency itself is highly contextual. Because there is currently no standardized process across the industry, each team that wants to improve transparency in their ML systems must address the entire suite of questions about what transparency means for their team, product, and organization, given their specific goals and constraints.
  • Our goal is to provide a head start to that process of exploration. ABOUT ML (Annotation and Benchmarking on Understanding and Transparency of Machine learning Lifecycles) is a multi-year, multi-stakeholder initiative that aims to bring together a diverse range of perspectives to develop, test, and implement machine learning system documentation practices at scale.
  • Why documentation? Documentation for machine learning systems can contribute to responsible AI development by bringing more transparency into “black box” models and by bridging the gap between increasingly pervasive AI ethics principles and day-to-day operations and practice. Documentation can also shape practice: by asking the right questions at the right time in the AI development process, teams become more likely to identify potential issues and take appropriate mitigating action (see the illustrative sketch below).
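To make the idea of documentation as an artifact concrete, here is a minimal, hypothetical sketch in Python of how lifecycle documentation might be captured as structured fields alongside a system. The field names, lifecycle stages, and example values are illustrative assumptions only; they do not represent the ABOUT ML specification.

# Hypothetical sketch of an ML-system documentation artifact, loosely inspired
# by the kinds of questions ABOUT ML considers. Fields and stages are
# illustrative assumptions, not the ABOUT ML specification.
from dataclasses import dataclass, field
from typing import List, Dict


@dataclass
class MLSystemDocumentation:
    """Lifecycle documentation captured alongside an ML system."""
    system_name: str
    intended_use: str                     # design stage: what is the system for?
    out_of_scope_uses: List[str]          # design stage: what should it not be used for?
    data_sources: List[str]               # data stage: where did training data come from?
    known_data_limitations: List[str]     # data stage: sampling gaps, label noise, etc.
    evaluation_metrics: Dict[str, float]  # evaluation stage: disaggregated where possible
    deployment_monitoring_plan: str       # deployment stage: how is drift or harm detected?
    open_questions: List[str] = field(default_factory=list)


doc = MLSystemDocumentation(
    system_name="loan-review-scorer",
    intended_use="Rank applications for manual review, not automatic denial.",
    out_of_scope_uses=["fully automated credit decisions"],
    data_sources=["historical applications, 2015-2019"],
    known_data_limitations=["underrepresents first-time applicants"],
    evaluation_metrics={"auc_overall": 0.81, "auc_first_time_applicants": 0.74},
    deployment_monitoring_plan="Quarterly review of approval rates by segment.",
)
print(doc.system_name, "->", len(doc.open_questions), "open questions")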

Project Team

Jingying Yang, Head of Product Design

Christine Custis, Program Lead

Bobi Rakova, Research Fellow

Work to Date

Roadmap: Present & Future Work

Present Work:

  • Artifact Workstream: A database of documentation questions, adapted to the domain of the machine learning application.
    • Help us solve the challenge of how documentation can be created at scale within an organization.
  • Process Workstream: A research-based guide to initiating and scaling a documentation pilot.
    • Join the debate on what information stakeholders deserve to know about ML systems, and how that information should be presented.

Future Work:

  • ML Documentation Pilots – 2021
    • We will test pilots with companies to learn best documentation practices and implement what works at scale.

Get Involved

Ways for Partners to participate in PAI’s ABOUT ML work:

  1. Give feedback on deployed examples of ML documentation
  2. Contribute to our growing database of documentation questions
  3. Indicate your interest in running a future ML documentation pilot at your company

Learn more about PAI’s ABOUT ML project.


 

Methods for Inclusion

Central Research Question: How do we incorporate a more diverse set of perspectives into the AI design and development process through non-extractive means?

Read the latest on our blog

Methods for Inclusion: Expanding the Limits of Participatory Design in AI

Project Background

  • Given growing concerns about the disparate impact of AI/ML on users, and society more broadly, the need to incorporate a more diverse set of perspectives into the AI design and development process is apparent. Current efforts to “include” non-expert voices and perspectives in AI development can be limited, extractive, ineffective, ad hoc, and/or divorced from decades of existing scholarship that has grappled with the question of how to create inclusive channels of participation.
  • In 2019, PAI worked with the Tech Policy Lab at the University of Washington to implement their Diverse Voice methodology within the ABOUT ML project. Drawing from that experience, this “Methods for Inclusion” project was developed to identify a broader range of methodologies and practices that can be applied at different stages of the AI design and development process.
  • To do so, a key focus of this project is to identify the specific needs and challenges faced by AI researchers, designers, and developers when trying to incorporate non-technical feedback from users and others affected by the technology, service, or product; as well as to understand the incentives (or lack thereof) and barriers to participation in the design and development process by those users and other affected persons and communities.

Project Team

Jingying Yang, Head of Product Design

Dr. Tina M. Park, Research Fellow

Work to Date

This project is newly launched as of Fall 2020 – check back here for updates soon!

Roadmap: Present & Future Work

Present Work:

  • Conference Paper: Framework for Methods of Inclusion – Fall 2020
    • This paper critically examines existing models of participatory methods used in the AI/ML industry and suggests an alternative framework: Methods for Inclusion. We outline a set of heuristics and principles to help identify practices that support less exploitative and less extractive forms of public involvement in the AI/ML design and development process.
    • Submitted to ACM FAccT 2021
  • Research Paper: Survey of Participatory Practices – Fall/Winter 2020
    • This component of the project takes advantage of the existing literature across disciplines and fields of practice to identify and consider a range of participatory practices that could be adapted for use by AI researchers, designers, and developers.
  • Stakeholder Interviews – Fall/Winter 2020
    • The project will also conduct in-depth interviews with AI/ML developers, Responsible AI researchers, and impacted community advocates to better understand the challenges of incorporating inclusive methods into AI development and design.

Future Work:

  • Pilot: Methods for Inclusion Project – Spring 2021
    • We will consult and work with select internal PAI projects and PAI partners to implement inclusive methodologies and develop teachable case studies.
    • Using our own programs as a testing ground, PAI aims to turn the findings from this fellowship into a resource that all PAI Partners – and the AI field at large – can turn to in order to better build bridges to communities that are otherwise not consulted in their technology development and deployment processes.
  • Resource: Practitioner’s Guide to Methods for Inclusion – Summer/Fall 2021
    • We will build an online repository of teaching tools, implementation strategies, and case studies for AI researchers, designers, and developers.

Get Involved

Ways for Partners to participate in PAI’s Methods for Inclusion work:

  1. Join the interview study to share your experience with or interest in participatory AI practices.
    1. We are currently recruiting participants for our interview study and we would love to speak with you! We are specifically interested in speaking with responsible AI researchers and AI/ML developers who have incorporated some type of participatory practice in their work.
    2. We are also interested in talking to advocates who are not directly involved in AI/ML research or development and work with communities who are subject to the negative consequences of AI/ML use.

AI, Labor, & the Economy

PAI believes that AI has tremendous potential to solve major societal problems and make people’s lives better, but that individuals and organizations must also grapple with new forms of automation, wealth distribution, and economic decision-making.

To advance a beneficial economic future from AI, PAI gathers Partner organizations, economists, and worker representative organizations to try to form shared answers and recommendations for the role of AI in our economy.


Upcoming Events:

  • None at the moment.

 

AI, Labor, & the Economy Projects:

The AI and Shared Prosperity Initiative

Central Research Question: How can we steer AI to enable broad-based productivity growth and expand good employment opportunities globally, without placing overly burdensome upskilling requirements on workers?

Read the latest on our blog

How Many Jobs Will AI Destroy? As Many As We Tell It To

Project Background

  • To date, the “future of work” debate and scholarship have primarily focused on the need for society and workers to prepare for and adjust to the changes in the labor market brought about by AI advancement. Dozens of organizations developing and deploying AI have published AI principles, many of them listing supporting and enabling an inclusive economy, or benefitting all. Yet the anticipation that AI advancement will generate “left behind” groups remains widely shared.
  • The AI and Shared Prosperity Initiative (AI SPI) is a multi-year effort to advance public knowledge on what concrete frameworks companies developing and deploying AI should adopt to co-create a global economic future that is inclusive by design. In contrast to much of the existing debate, the AI SPI focuses on the role and responsibility of the AI industry to steer the development of AI in a way that will make the economic transition induced by AI advancement less burdensome and costly for workers and society, enabling an inclusive economic future.
  • The AI SPI Research Agenda will raise foundational questions that remain unanswered by existing responsible AI research efforts, including:
    • What concrete objectives can an AI industry actor striving to advance shared prosperity set for itself?
    • How can this AI industry actor measure progress towards that objective?
    • And what practical steps will help achieve progress towards this objective?

Project Team

Katya Klinova, Program Lead

B Cavello, Program Lead

Work to Date

Roadmap: Present & Future Work

Present Work:

Future Work:

  • The AI SPI Research Agenda – Spring 2021
  • Public Resource: Economic Redistribution-aware AI Development Framework – Summer 2021

Get Involved

Ways for Partners to participate in PAI’s AI and Shared Prosperity Initiative:

  1. Sign up to receive updates from the AI SPI team

Learn more about PAI’s AI & Shared Prosperity initiative. 


 

AI Supply Line & Responsible Sourcing Practices

Central Research Question: What could responsible sourcing look like for data labeling and human review services in the AI industry?

Read the latest on our blog

Developing Guidance for Responsible Data Enrichment Sourcing

Project Background

  • All workers contributing to the development of AI systems should have healthy, fair, and empowering working conditions.
  • While it has become a norm for publicly traded companies to issue regular supplier responsibility reports, these reports do not explicitly include the on-demand platform labor used along the AI supply chain. Machine learning engineers and product managers procuring data labeling, human review, and similar services central to the AI development process de facto set the terms of employment for on-demand platform workers, often unwittingly and with little guidance.
  • A concerted effort is needed to achieve broad recognition of the important role crowd platform workers play in enabling AI-powered products and services, and to create a shared understanding of what it means to be a responsible supplier and a responsible buyer of those services. By equipping requesters of data labeling with practical guidance on how to be a responsible procurer of on-demand platform work, we have a chance to improve the well-being of millions of workers in the platform economy.
  • To find more information about PAI’s work on AI Supply Line & Responsible Sourcing Practices, please visit this page.

Project Team

Katya Klinova, Program Lead

B Cavello, Program Lead

Work to Date

This project is ramping up in Fall 2020 with a Responsible Sourcing Partner Social and will continue into 2021 – stay tuned for updates!

Roadmap: Present & Future Work

Present Work:

  • Responsible Sourcing Partner Social – October 2020
  • Workshop Series on Responsible Sourcing of Data Enrichment Services – Fall 2020

Future Work:

  • Workshops Series on Translating Recommendations into Practice – Spring 2021
  • Public Resource: Implementing Responsible Sourcing – Summer 2021
  • Responsible Sourcing Pilots – Fall 2021

Get Involved

Ways for Partners to participate in PAI’s AI Supply Line work:

  1. Indicate your interest in attending the Responsible Sourcing Partner Social to learn more about this project

 

Promoting Workforce Well-being in the AI-Integrated Workplace

Central Research Question: How can employers commit to worker well-being as AI is increasingly introduced into the workplace?

Read the latest on our blog

Introducing a Framework for Promoting Workforce Well-being in an AI-integrated Workplace

Project Background

  • Businesses large and small around the world are increasingly introducing artificial intelligence (AI) into the workplace, unleashing a tremendous potential to boost productivity, enable new business models, improve safety, and assist workers.
  • This adoption also engenders a whole host of risks to the well-being of the workforce, potentially exacerbating long-standing inequities in the treatment of workers, which have been laid bare by the COVID-19 health crisis and the economic fallout that ensued. New concepts, ideas, and social institutions will be necessary to ensure that transitions to AI-integrated organizations are as inclusive of and empowering for workers as possible.
  • The Partnership on AI offers a Framework for Promoting Workforce Well-being in the AI-Integrated Workplace as one resource toward facilitating these transitions in a way that promotes the well-being of individual workers and the workforce.

Project Team

Katya Klinova, Program Lead

Elonnai Hickok

B Cavello, Program Lead

Work to Date

Roadmap: Present & Future Work

We are drawing on the Framework in the new AI Supply Lines project aimed at developing procurement practices supporting better working conditions in the data labeling ecosystem; see above for details.

Get Involved

Ways for Partners to participate in PAI’s Workforce Well-being project:

  1. Apply the recommendations and tools in the framework towards promoting workforce well-being as you introduce or expand AI into your company

AI & Media Integrity

AI technologies present new challenges and opportunities for ensuring high-quality public discourse around the world. While recent advances in AI have enabled new methods for creative expression, privacy protection, and even the identification of problematic content, they have also extended the realm of possibility for creating and promoting content that misinforms, manipulates, harasses, or wrongfully persuades.

Through its AI and Media Integrity Program Area, PAI coordinates those on the front lines of information integrity challenges around the world with those building and deploying technology – including actors from media, civil society, industry, and academia. We also aim to understand how people around the world encounter and experience information online today. Our work includes three project areas: 1) Synthetic and Manipulated Content, 2) Audience Explanations, and 3) Content Targeting and Ranking. Creating this multidisciplinary community can help ensure AI is governed responsibly and with a positive impact on the global information ecosystem.

Learn more about the work of PAI’s AI & Media Integrity Steering Committee.



 

AI & Media Integrity Projects:

Synthetic & Manipulated Content

Central Research Question: How do we ensure responsible use of AI for generating text, audio, video, and imagery, as well as AI-driven tactics for identifying harmful online content?

Read the latest on our blog

A Field Guide to Making AI Art Responsibly

Project Background

  • AI advances have extended the realm of possibility for generated text, audio, video, and imagery to affect civil society and discourse; they have also created potentially novel solutions for combating content that misinforms, manipulates, harasses, or wrongfully persuades, whether AI-generated or not.
  • With this in mind: How do we strengthen information integrity solutions, including the detection of manipulated and synthetic content? How can we ensure that the global information integrity community – including fact-checkers, journalists, and others from civil society around the world – has the tools and technologies to deal with synthetic and manipulated content? How do we attend to the adversarial nature of this challenging area?

Project Team

Claire Leibowicz, Program Lead

Work to Date

Roadmap: Present & Future Work

Present Work:

  • Research Project: Taxonomy of Adversaries – Fall/Winter 2020
    • How can we develop a taxonomy/map of adversaries for manipulated content threats?
  • Coordination – Ongoing
    • Can we continue to bridge the technical, media, CSO, and academic communities working on synthetic and manipulated content?
    • The Synthetic Media Detection Tools cohort, SSI Expert Group, and AI and Media Integrity Steering Committee continue to meet.

Future Work:

  • Research Project: Provenance – 2021
    • How do we verify content authenticity? Can and should we provide signals that convey content provenance, not merely inauthenticity?
  • Research Project: Access Protocol – 2021
    • Who should, and should not, get access to manipulated content mitigation tools, based on the taxonomy of adversaries?

Get Involved

Ways for Partners to participate in PAI’s Synthetic & Manipulated Content work:

  1. Join the SSI Expert Group to get involved in the Taxonomy of Adversaries/Access Protocol project.
  2. Connect us with key partners, experts, and stakeholders — including marginalized communities globally — affected by misinformation & media issues
  3. Help scope 2021 future synthetic and manipulated media work

Audience Explanations

Central Research Questions: How do different audiences make sense of online information and content, including misinformation? What should platforms do to mitigate users’ belief in harmful/misleading content and amplify belief in credible content?

Read the latest on our blog

Manipulated Media Detection Requires More Than Tools: Community Insights on What’s Needed

Project Background

  • As technology platforms find themselves in the position of moderating content based on its credibility, they are not only tasked with evaluating whether or not content is manipulated and/or misleading but also with determining how to take action in response to their evaluations.
  • How can this be done most effectively? To inform effective misinformation mitigations and promote credible content online, we must understand how users around the world make sense of information online.
  • How do people understand all types of misleading and manipulated content, from cheapfakes to deepfakes? How can platforms design interfaces like misinformation labels responsibly, addressing user needs and effects on the public?

Project Team

Claire Leibowicz, Program Lead

Emily Saltz, Research Fellow

Work to Date

Roadmap: Present & Future Work

Present Work:

  • Research Project: User Research on Visual Information Labels – Summer/Fall 2020
    • This project consists of a series of interviews, diary studies, and a survey aimed at understanding media consumers’ attitudes and critical moments of trust and distrust around existing labels and label terminology during COVID-19.

Future Work:

  • Ecologically Valid Testing Environments – 2021
    • Support continued independent user research into the effects of labels by creating simulated platform testing environments
  • User Research with Affected Groups – 2021
    • Continue UX research and codesign interventions with affected and marginalized communities
  • Civil Society & Industry Listening Tour – 2021
    • Conduct interviews with figures in civil society and human rights organizations to learn about the needs and risks of labeling globally

Get Involved

Ways for Partners to participate in PAI’s Audience Explanations work:

  1. Tell us what challenges you have communicating about media to different user types
  2. Connect us with key partners, experts, and stakeholders — including marginalized communities globally — affected by misinformation & media issues
  3. Help scope 2021 audience explanations work – tell us your priorities!

Content Targeting & Ranking

Central Research Question: How can we involve users in designing the algorithmic systems that choose what content they see?

Read the latest on our blog

Beyond Engagement: Aligning Algorithmic Recommendations With Prosocial Goals

Project Background

  • Can we collaboratively design guidelines or principles for how content should be chosen and ordered on AI-driven platforms?
  • In principle, we could create metrics that capture important aspects of the effect of an AI system on human lives, just as cities and countries today record a large variety of statistical indicators. These metrics would be useful to the teams building and operating the system, to researchers who want to understand what the system is doing, and as a transparency and accountability tool.
  • Given a set of goals, how can we modify the recommender systems at the heart of most major platforms to align with these goals? There are major open challenges in translating human-language principles into technical approaches that are effective at scale.
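As a purely illustrative sketch of what “aligning with goals beyond engagement” could look like mechanically, the following Python snippet re-ranks candidate items by a weighted blend of predicted engagement and a hypothetical wellbeing signal. The field names, scores, and weights are assumptions for illustration; they are not a PAI recommendation or any platform’s actual ranking objective.

# Minimal sketch of re-ranking recommendations against more than engagement.
# Scores, weights, and item fields are illustrative assumptions.
from typing import List, Dict


def rank_items(candidates: List[Dict], engagement_weight: float = 0.7,
               wellbeing_weight: float = 0.3) -> List[Dict]:
    """Order candidates by a blend of predicted engagement and a
    hypothetical 'wellbeing' signal (e.g., a survey-based quality score)."""
    def blended_score(item: Dict) -> float:
        return (engagement_weight * item["predicted_engagement"]
                + wellbeing_weight * item["wellbeing_score"])
    return sorted(candidates, key=blended_score, reverse=True)


items = [
    {"id": "a", "predicted_engagement": 0.9, "wellbeing_score": 0.2},
    {"id": "b", "predicted_engagement": 0.6, "wellbeing_score": 0.8},
]
print([it["id"] for it in rank_items(items)])

Changing the weights (or the wellbeing signal itself) changes which content surfaces, which is exactly the kind of design choice the principles developed in this project would need to inform.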

Project Team

Claire Leibowicz, Program Lead

Jonathan Stray, Research Fellow

Work to Date

Roadmap: Present & Future Work

Present Work:

  • Research Project scoping: Content Ranking – Fall 2020
  • Recommender Alignment Discussion Group
    • Ongoing calls that provide an interdisciplinary venue for collaboration on responsibly designing recommender systems.

Future Work:

  • Research Paper: Aligning AI Optimization to Community Wellbeing
    • Forthcoming publication in the International Journal of Community Wellbeing
  • Research Project: Content Ranking, Survey and Metric Design – 2021
    • Collaboratively develop principles for pro-social information personalization and explore how to implement them technically.

Get Involved

Ways for Partners to participate in PAI’s Content Targeting and Ranking work:

  1. Join us for the twice-monthly Recommender Alignment Group calls.
  2. Pilot a “content ranking for social good” product change with us
  3. Connect us with key Partners, stakeholders, and experts
  4. Collaborate on research

Fairness, Transparency, & Accountability

Fairness, Transparency, and Accountability is the first Research Initiative at PAI and encompasses a large body of research and programming around algorithmic fairness, explainability, criminal justice, and diversity and inclusion. Equity and social justice are at the core of the research questions we work on. Our team of researchers is highly interdisciplinary, with expertise in statistics, computer science, social sciences, and law.

Leveraging PAI’s unique position at the intersection of industry, civil society, and academia, our work over the past year has examined the challenges organizations face when seeking to measure and mitigate algorithmic bias using demographic data, provide meaningful explanations to diverse stakeholders, address bias in recidivism risk assessment tools, and build more inclusive AI teams.



 

Fairness, Transparency, & Accountability Projects:

Explainable AI in Practice

Central Research Question: How do we ensure that deployed explainability techniques are up to the task of enhancing transparency and accountability for end users and other external stakeholders?

Read the latest on our blog

Multistakeholder Approaches to Explainable Machine Learning

Project Background

  • Machine learning systems that enable humans to understand and evaluate the machine’s predictions or decisions are key to transparent, accountable, and trustworthy AI.
  • Known as Explainable AI (XAI), these systems could have profound implications for society and the economy, potentially improving human/AI collaboration for sensitive and high-impact deployments and helping address bias and other harms in automated decision-making.
  • Although Explainable AI is often touted as the solution to opening the “black box” and better understanding how algorithms make predictions, our research at PAI suggests that current techniques fall short and do not yet adequately enable practitioners to provide meaningful explanations.

Project Team

Madhulika Srikumar, Program Lead

Ana Lucic, Research Fellow

Work to Date

Roadmap: Present & Future Work

Present Work:

  • Research Project: Uncertainty as Transparency – Fall/Winter 2020
    • What is the utility of uncertainty for augmenting expert decision making, building trustworthy systems, improving model performance, and decreasing model unfairness?
    • We are working with PAI Partners on an interdisciplinary literature review on how uncertainty quantification and communication can enable greater transparency (see the illustrative sketch after this list).
    • We will provide open problems for our partner community and the broader field of AI to tackle.
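For illustration only, the sketch below uses disagreement across a small model ensemble as a crude uncertainty signal and routes low-agreement cases to a human reviewer. The models, synthetic data, and threshold are assumptions; this is not the method under study in the literature review.

# Illustrative sketch (not PAI's method): ensemble disagreement as a rough
# uncertainty signal, with uncertain cases flagged for human review.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train a few models on resampled data to get a crude ensemble.
rng = np.random.default_rng(0)
ensemble = []
for seed in range(5):
    idx = rng.integers(0, len(X_train), size=len(X_train))
    model = RandomForestClassifier(n_estimators=50, random_state=seed)
    model.fit(X_train[idx], y_train[idx])
    ensemble.append(model)

# Predictive mean and spread across the ensemble.
probs = np.stack([m.predict_proba(X_test)[:, 1] for m in ensemble])
mean_prob = probs.mean(axis=0)
spread = probs.std(axis=0)  # higher spread = less agreement = more uncertainty

needs_review = spread > 0.15  # hypothetical threshold for routing to a human
print(f"{needs_review.mean():.0%} of cases flagged for human review")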

Future Work:

  • Conference Presentation: IJCAI-PRICAI Workshop on Explainable Artificial Intelligence (XAI) – Spring 2021
  • Research Project: Multistakeholder Study – Spring 2021
    • Which type of transparency is appropriate for a specific stakeholder in a particular domain?
    • We will develop a framework for tailoring explanations for the specific needs of different contexts and stakeholders.

Get Involved

Ways for Partners to participate in PAI’s work on explainability:

  1. Share suggestions for PAI’s 2021 explainability research study with the team
  2. Apply our research insights to establish clear goals for your own explainability work

Algorithmic Fairness & Demographic Data

Central Research Question: When and how should demographic data be collected and used in service of algorithmic bias detection and mitigation?

Read the latest on the blog

Working to Address Algorithmic Bias? Don’t Overlook the Role of Demographic Data

Project Background

  • Algorithmic bias refers to the ways in which algorithms might perform more poorly for certain demographic groups or produce disparate outcomes across such groups. Knowing which demographic groups individuals belong to is therefore vital for measuring and mitigating such biases (a minimal example of such disaggregated measurement follows this list). “Demographic data” is an umbrella term covering the categories that US law refers to as “protected class data” and some of the categories the EU’s GDPR calls “sensitive personal data.”
  • Collecting and using demographic data is a topic that is often fraught with legal and ethical dilemmas given concerns around the highly personal and private nature of such data and the potential for such data to be misused.
  • By leveraging PAI’s unique position at the intersection of corporate AI developers and civil society groups representing different aspects of the public interest, we hope to clarify paths forward for bias detection and mitigation efforts that are consistent with data regulations and best practices for user protection. The results of this project will offer insight into how other organizations face this challenge, and follow-on work will include a multi-stakeholder process of coming together to envision creative solutions.
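The toy example below illustrates why group membership data matters for bias measurement: an overall accuracy number can hide a per-group disparity that only becomes visible once predictions are disaggregated by group. The column names and values are entirely hypothetical.

# Minimal sketch of disaggregated bias measurement: accuracy per group.
# Data and column names are hypothetical; this is not a complete bias audit.
import pandas as pd

df = pd.DataFrame({
    "group":      ["A", "A", "A", "B", "B", "B"],
    "label":      [1,   0,   1,   1,   0,   0],
    "prediction": [1,   0,   0,   0,   1,   0],
})

# Accuracy per demographic group; without the 'group' column, this
# disaggregation (and hence any disparity) would be invisible.
per_group_accuracy = (
    df.assign(correct=lambda d: d["label"] == d["prediction"])
      .groupby("group")["correct"]
      .mean()
)
print(per_group_accuracy)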

Project Team

McKane Andrus, Research Associate

Sarah Villeneuve, Program Lead

Work to Date

Roadmap: Present & Future Work

Present Work:

  • Research Project: Interview Study – Summer/Fall 2020
    • We have interviewed participants across the Partnership to document the challenges in using demographic data in service of fairness goals and will soon be releasing a paper reporting the findings.

Future Work:

  • Research Project: Multistakeholder Engagement – 2021
    • Future work in this area may include identifying strategies that adhere to regulatory standards, satisfy organizational needs, and address practitioner concerns around the collection and use of demographic data.

Get Involved

Ways for Partners to participate in PAI’s work around demographic data and algorithmic fairness:

  1. Share suggestions for PAI’s 2021 demographic data research agenda with the team
  2. Join the FTA Expert Group to get engaged with workshops or a steering committee. Future work could entail a multi-stakeholder process to generate recommendations for how to address the challenges practitioners are facing in practice when trying to detect or mitigate bias

Algorithmic Fairness & the Law

Central Research Question: To what extent are technical approaches to algorithmic bias compatible with U.S. anti-discrimination law and how can we carve a path toward greater compatibility?

Read the latest on our blog

Crucial Yet Overlooked: Why We Must Reconcile Legal and Technical Approaches to Algorithmic Bias

Project Background

  • Despite the recognized need to mitigate algorithmic bias in pursuit of fairness, there are challenges in ensuring that the techniques proposed by the ML community to mitigate bias are not deemed to be discriminatory from a legal perspective.
  • In recent years, there has been a proliferation of papers in the algorithmic fairness literature proposing various technical definitions of algorithmic bias and methods to mitigate bias. Whether these algorithmic bias mitigation methods would be permissible from a legal perspective is a complex but increasingly pressing question at a time when there are growing concerns about the potential for algorithmic decision-making to exacerbate societal inequities. In particular, there is a tension around the use of protected class variables: most algorithmic bias mitigation techniques utilize these variables or proxies, but anti-discrimination doctrine has a strong preference for decisions that are blind to them.
  • A lack of legal compatibility creates the possibility that biased algorithms might be considered legally permissible while approaches designed to correct for bias might be considered illegally discriminatory.

Project Team

Alice Xiang, Head of FTA

Work to Date

Roadmap: Present & Future Work

Future Work:

  • Research Paper: Affirmative Algorithms – Winter 2020
  • Research Publication: Tennessee Law Review – Spring 2021

Get Involved

Ways for Partners to participate in PAI’s work on algorithmic fairness and the law:

  1. Join the FTA Expert Group to get engaged with workshops or a steering committee. Future work could entail a multi-stakeholder process to generate recommendations for how to address the challenges practitioners are facing in practice when trying to detect or mitigate bias.

AI & Criminal Justice

Central Research Question: What are the shortcomings of using algorithmic risk assessment tools in the U.S. criminal justice system and how might they be addressed, if at all?

Read the latest on our blog

Why PATTERN Should Not Be Used: The Perils of Using Algorithmic Risk Assessment Tools During COVID-19

Project Background

  • AI tools used in deciding whether to detain or release defendants are in widespread use around the United States – including recent use by the Bureau of Prisons to determine eligibility for home confinement in the context of COVID-19 – and some legislatures have begun to mandate their use.
  • While criminal justice risk assessment tools are often simpler than the deep neural networks used in many modern artificial intelligence systems, they are basic forms of AI. As such, they present a paradigmatic example of the high-stakes social and ethical consequences of automated AI decision-making.
  • PAI’s research in this area outlines ten largely unfulfilled requirements that jurisdictions should weigh heavily prior to the use of these tools, spanning topics that include validity and data sampling bias; bias in statistical predictions; choice of the appropriate targets for prediction; human-computer interaction questions; user training; policy and governance; transparency and review; reproducibility, process, and recordkeeping; and post-deployment evaluation.
  • Based on the input of our partners, PAI currently recommends that policymakers either avoid using risk assessments altogether for decisions to incarcerate, or find ways to resolve the requirements outlined in this report via future standard-setting processes.

Project Team

Riccardo Fogliato, Research Fellow

Sarah Villeneuve, Program Lead

Alexandra Chouldechova, Research Fellow

Work to Date

Roadmap: Present & Future Work

Present Work:

  • Research Project: Statistical methods for measuring and mitigating algorithmic bias in arrest data – Fall/Winter 2020
    • We will examine racial disparities in criminal justice data by comparing arrest and victimization data to see whether multiple sources of ground truth can better illuminate potential sources of bias.
  • Research Project: Risk assessment tools – Fall/Winter 2020
    • We will examine effects of various potential interventions to reduce bias in risk assessment tools.

Get Involved

Check back soon on ways to participate in PAI’s work on AI and criminal justice!

Diversity, Equity, and Inclusion in AI

Central Research Question: How can we meaningfully and sustainably increase the diversity and inclusivity of teams working in AI?

Read the latest on our blog

Beyond the Pipeline: Addressing Attrition as a Barrier to Diversity in AI

Project Background

  • The lack of diversity in the AI field has been well-documented. As a field, AI struggles to both recruit and retain team members from diverse backgrounds. Why is this such a widespread phenomenon, and more importantly, what can be done to close the gap?
  • PAI is conducting qualitative and quantitative research to learn why there is such high attrition of women and minoritized individuals in the AI field. From our findings, we plan to share recommendations – drawing upon established DEI research and best practices – for actions companies working on AI can take to improve their DEI efforts and build more inclusive cultures.

Project Team

Dr. Jeffrey Brown, Research Fellow

Alice Xiang, Head of FTA

Work to Date

This project will be launching in Fall 2020 – stay tuned for updates!

Roadmap: Present & Future Work

Present Work:

  • Research Project: Interview Study – Fall 2020

Future Work:

  • Research Paper – Spring/Summer 2021
    • We will share findings from and a summary of our DEI Interview Study
  • Public Resources – Spring/Summer 2021
    • We will share tangible recommendations for AI practitioners and companies looking for guidance around their DEI efforts

Get Involved

This project will be seeking interview participants soon – check back here for updates on how to join the project!

Safety-Critical AI

How can we ensure that AI and machine learning technologies are safe? This is an urgent short-term question, with applications in computer security, medicine, transportation, and other domains. It is also a pressing longer-term question, particularly with regard to environments that are uncertain, unanticipated, and potentially adversarial.

As technologies become more capable, we will need social and technical foundations for building AI technologies that are safe, predictable, and trustworthy, as well as norms in the research community that support the safe development and deployment of AI.




 

Safety-Critical AI Projects:

Responsible Publication Norms

Central Research Question: When and how can one publish novel AI research in a way that maximizes the beneficial applications while mitigating potential harms?

Read the latest on our blog

Navigating the Broader Impacts of AI Research: Workshop at NeurIPS 2020

Project Background

  • As organizations adopt principles and practices to guide their work in artificial intelligence and machine learning (AI/ML) responsibly, the question of responsible publication – the consideration of when and how to publish novel research in a way that maximizes benefits while mitigating potential harms – has gained prominence.
  • AI/ML is applied in increasingly high-stakes contexts and touches ever more parts of our everyday lives. Thus, it becomes ever more important to consider the broader social impact of AI/ML research and to mitigate the risks of malicious use, unintended consequences, and accidents, so that we can all enjoy the many potential benefits of this transformative technology.
  • The Partnership on AI is undertaking a multistakeholder project that aims to facilitate the exploration and thoughtful development of publication practices for responsible AI.

Project Team

Rosie Campbell, Program Lead

Jasmine Wang, Research Fellow

Work to Date

  • May 2020: How to Write a Solid NeurIPS Impact Statement Workshop, co-hosted with the Future of Humanity Institute
  • May 2020: Two workshops on Publication Norms co-hosted with the Montreal AI Ethics Institute
  • April 2020: ICLR 2020 – Two socials on ‘Anticipating Risky Research’
  • February 2020: Participation in the Catalyst Biosecurity Summit to explore overlaps between publication practices in the fields of biosecurity and AI.
  • Fall 2019: Meeting series with Partner organizations and relevant experts from tech companies, academia, and CSOs, surfacing common themes, ideas, and outstanding questions related to responsible publication norms.
  • Fall 2019: Consultations with PAI Partner organizations on the risks and impact of novel research and how it might affect their publication strategies. Public examples of this collaboration have included Facebook’s Deepfake Detection Challenge and Salesforce’s CTRL.
  • April 2019: When Is It Appropriate to Publish High-Stakes AI Research?
  • March 2019: Co-hosted event with OpenAI to discuss openness and responsible publication of ML research, after their staged-release approach for GPT-2.

Roadmap: Present & Future Work

Present:

  • Research Project: Case Studies – Fall/Winter 2020
    • PAI is conducting research on historical lessons from other fields, such as bioengineering and cybersecurity, that apply to the challenges ahead for novel AI research and experimentation.
  • Convening: NeurIPS Workshop – Winter 2020
    • This co-hosted workshop aims to examine how concerns with harmful impacts should affect the way the research community develops its research agendas, conducts its research, evaluates its research contributions, and handles the publication and dissemination of its findings.

Future:

  • Convening: Webinar Series – 2021
    • PAI will host a series of webinars featuring experts and historical lessons from other fields, such as bioengineering and cybersecurity, that apply to the challenges ahead for novel AI research and experimentation.

Get Involved

Ways for Partners to participate in PAI’s work on responsible publication norms:

  1. Join the mailing list to receive updates on our responsible publication norms work
  2. Provide input on the key questions and challenges we’ve identified around responsible publication norms
  3. Help us prioritize possible interventions and approaches to publication norms for responsible AI

Learn more about PAI’s project on Responsible Publication Norms.


 

SafeLife: AI Safety in Complex Environments

Central Research Question: How can we algorithmically train a reinforcement learning agent to do what we want it to do but nothing more?

Read the latest on our blog

Introducing the SafeLife Leaderboard: A Competitive Benchmark for Safer AI

Project Background

  • Avoidance of negative side effects is one of the core problems in AI safety, with both short and long-term implications. It can be difficult enough to specify exactly what you want an AI to do, but it’s nearly impossible to specify everything that you want an AI not to do. As reinforcement learning agents start to get deployed in real-world high-stakes scenarios, it is critical to make sure that they operate within appropriate (and often quite strict and intricate) safety constraints.
  • PAI’s SafeLife project is a novel reinforcement learning environment that tests the safety of reinforcement learning agents and the algorithms that train them. The environment has simple rules, but rich and complex dynamics, and generally gives the agent lots of power to make big changes on its way to completing its goals. A safe agent will only change that which is necessary, but an unsafe agent will often make a big mess of things and not know how to clean it up.
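To make the side-effect idea concrete, here is a rough reward-shaping sketch: compare the world state the agent produced to a counterfactual “agent did nothing” baseline and penalize changes that are not part of the task. This is an illustrative toy, not SafeLife’s actual environment code or scoring rules; the grid, goal mask, and penalty weight are assumptions.

# Rough sketch of a side-effect penalty of the kind SafeLife is designed to
# measure. Illustrative reward-shaping idea only, not SafeLife's scoring code.
import numpy as np


def side_effect_penalty(current_grid: np.ndarray,
                        counterfactual_grid: np.ndarray,
                        goal_mask: np.ndarray,
                        weight: float = 0.5) -> float:
    """Penalize cells that differ from the 'agent did nothing' baseline,
    excluding cells the task actually asks the agent to change."""
    unintended_changes = (current_grid != counterfactual_grid) & ~goal_mask
    return -weight * float(unintended_changes.sum())


# Toy 3x3 world: the agent was only supposed to change the center cell.
baseline = np.zeros((3, 3), dtype=int)
after = baseline.copy()
after[1, 1] = 1          # intended change (part of the task)
after[0, 2] = 1          # side effect (not part of the task)
goal = np.zeros((3, 3), dtype=bool)
goal[1, 1] = True

print(side_effect_penalty(after, baseline, goal))  # -0.5: one unintended change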

Project Team

Carroll Wainwright, Research Scientist

Work to Date

Roadmap: Present & Future Work

Future:

  • AI Safety Engineering – 2021
    • Future research may explore deployment and publication best practices, safety benchmarks, and/or safety coordination
  • Safety in Critical Systems – 2021
    • Future research may develop a case study in one cyber-physical domain, such as autonomous vehicles, health care, or finance

Get Involved

Ways for Partners to participate in PAI’s work on AI Safety:

  1. Join SafeLife as a collaborator
  2. Run the SafeLife v1.2 Benchmark yourself via Weights & Biases
  3. Share suggestions for PAI’s AI Safety 2021 research agenda with the team