Partnership on AI Research, Publications & Initiatives

Closing Gaps In Responsible AI

Ongoing Project

Operationalizing responsible AI principles is a complex process, and the gap between intent and practice remains large. To help close this gap, the Partnership on AI has initiated Closing Gaps in Responsible AI, a multiphase, multi-stakeholder project aimed at surfacing the collective wisdom of the community to identify salient challenges and evaluate potential solutions. The project's first phase is the Closing Gaps Ideation Game, an interactive ideation exercise that solicits experiences and insights from the technology community in order to collectively surface challenges and evaluate solutions for the organizational implementation of responsible AI. These insights can, in turn, inform and empower the changemakers, activists, and policymakers working to develop and manifest responsible AI.

Explainable Machine Learning in Deployment

Paper

Organizations and policymakers around the world are turning to Explainable AI (XAI) as a means of addressing a range of AI ethics concerns. PAI’s recent research paper, Explainable Machine Learning in Deployment, is the first to examine how ML explainability techniques are actually being used. We find that, in its current state, XAI best serves as an internal resource for engineers and developers rather than as a way to provide explanations to end users. XAI techniques will need further improvement before they can work as intended and help end users, policymakers, and other external stakeholders understand and evaluate automated decisions.
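
As a concrete illustration of this kind of internal, engineer-facing use, the sketch below runs a standard feature-attribution check using scikit-learn's permutation importance; the dataset, model, and technique are illustrative choices, not examples drawn from the paper.

```python
# A minimal, illustrative sketch (not from the paper) of feature
# attribution used as an internal debugging resource rather than as
# an end-user explanation. Dataset and model choices are arbitrary.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: how much does test accuracy drop when a
# feature's values are shuffled? A quick sanity check for engineers
# on what the model actually relies on.
result = permutation_importance(
    model, X_test, y_test, n_repeats=10, random_state=0
)
for name, imp in sorted(zip(X.columns, result.importances_mean),
                        key=lambda t: -t[1])[:5]:
    print(f"{name}: {imp:.3f}")
```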

AI and Media Integrity Steering Committee

Steering Committee

The AI and Media Integrity Steering Committee is a formal body of PAI Partner organizations focused on projects confronting the emergent threats of AI-generated mis/disinformation and synthetic media, as well as AI’s effects on public discourse.

On the Legal Compatibility of Fairness Definitions

Paper

Past literature has effectively demonstrated ideological gaps in machine learning (ML) fairness definitions when considering their use in complex socio-technical systems. We go further, demonstrating that these definitions often misunderstand the legal concepts from which they claim to draw inspiration and, consequently, inappropriately co-opt legal language. In this paper, we present examples of this misalignment and discuss the differences between ML terminology and its legal counterparts, as well as what both the legal and ML fairness communities can learn from these tensions. We focus on U.S. anti-discrimination law, since the ML fairness research community regularly references terms from this body of law.
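
To make concrete what an ML fairness "definition" looks like in code, here is an illustrative sketch (not drawn from the paper) of demographic parity alongside the selection-rate ratio often associated with the "four-fifths rule" in U.S. employment-discrimination practice; all data in it is synthetic.

```python
# Illustrative only: one common ML fairness metric, demographic
# parity, plus the selection-rate ratio often cited as the
# "four-fifths rule." The paper's point is that such numeric
# translations can co-opt legal language without capturing the
# underlying legal doctrine.
import numpy as np

def selection_rate(y_pred, group_mask):
    """Fraction of a group receiving the positive outcome."""
    return y_pred[group_mask].mean()

rng = np.random.default_rng(0)
y_pred = rng.integers(0, 2, size=1000)               # synthetic binary decisions
group = rng.integers(0, 2, size=1000).astype(bool)   # synthetic group labels

rate_a = selection_rate(y_pred, group)
rate_b = selection_rate(y_pred, ~group)

parity_gap = abs(rate_a - rate_b)                    # demographic parity difference
impact_ratio = min(rate_a, rate_b) / max(rate_a, rate_b)

print(f"parity gap: {parity_gap:.3f}, impact ratio: {impact_ratio:.3f}")
# An impact ratio below 0.8 is often flagged under the four-fifths
# rule, but satisfying it does not establish legal compliance.
```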

SafeLife 1.0: Exploring Side Effects in Complex Environments

Research Project

As reinforcement learning agents begin to be deployed in real-world, high-stakes scenarios, it is critical to ensure that they operate within appropriate safety constraints. PAI’s new SafeLife project addresses this complex challenge by creating a publicly available reinforcement learning environment that tests the ability of trained agents to operate safely and minimize side effects. SafeLife is part of a broader initiative at PAI to develop benchmarks that integrate safety, fairness, and other ethical objectives.
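
To give a concrete sense of how such a benchmark environment is exercised, below is a minimal sketch of a Gym-style interaction loop; the CartPole stand-in and the classic Gym API are assumptions for illustration, and readers should consult the SafeLife repository for its actual interface.

```python
# A minimal sketch of the Gym-style interaction loop used to evaluate
# agents in benchmark environments like SafeLife. Everything below is
# illustrative: CartPole stands in for a SafeLife level, and the
# classic Gym API (gym < 0.26, 4-tuple step returns) is assumed.
import gym

env = gym.make("CartPole-v1")

obs = env.reset()
total_reward, done = 0.0, False
while not done:
    action = env.action_space.sample()          # random policy, for illustration
    obs, reward, done, info = env.step(action)  # one environment transition
    total_reward += reward
    # A side-effect benchmark would additionally score how much the
    # agent disturbed parts of the environment unrelated to its task.

print(f"episode return: {total_reward}")
```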

Human-AI Collaboration Framework & Case Studies

Case Studies

Best practices for collaboration between people and AI systems – including practices for transparency and trust, responsibility for specific decisions, and appropriate levels of autonomy – depend on a nuanced understanding of the nature of those collaborations. With the support of the Collaborations Between People and AI Systems (CPAIS) Expert Group, PAI has drafted a Human-AI Collaboration Framework to help users consider key aspects of human-AI collaboration technologies. We have also prepared a collection of seven case studies that illustrate the Framework and its applications in the real world.

Human-AI Collaboration Trust Literature Review: Key Insights and Bibliography

Report

To better understand the multifaceted, important, and timely issues surrounding trust between humans and artificially intelligent systems, PAI has conducted an initial survey and analysis of the multidisciplinary literature on AI, humans, and trust. This project includes a thematically tagged Bibliography of 80 aggregated research articles, as well as an overview document presenting seven key insights. These key insights, themes, and aggregated texts can serve as fruitful entry points for those investigating the nuances of the literature on humans, trust, and AI, and can help align understandings of trust between people and AI systems. They can also help inform future research.

Visa Laws, Policies, and Practices: Recommendations for Accelerating the Mobility of Global AI/ML Talent

Policy paper

PAI believes that bringing together experts from countries around the world who represent different cultures, socio-economic experiences, backgrounds, and perspectives is essential for AI/ML to flourish and help create the future we desire. To fulfill their talent goals and host conferences of international caliber, countries will need laws, policies, and practices that enable international scholars and practitioners to contribute to these conversations. Based on input from PAI Partners, AI practitioners, and PAI’s own research, PAI’s policy paper on Visa Laws, Policies, and Practices offers recommendations that will enable multidisciplinary AI/ML experts to collaborate with international counterparts. PAI encourages individuals, organizations, and policymakers to implement these policy recommendations in order to benefit from the diverse perspectives offered by the global AI/ML community.

ABOUT ML - Annotation and Benchmarking on Understanding and Transparency of Machine Learning Lifecycles

Ongoing Project

As machine learning (ML) becomes more central to many decision-making processes, including in high-stakes contexts such as criminal justice and banking, the companies deploying such automated decision-making systems face increased pressure for transparency into how these decisions are made. Annotation and Benchmarking on Understanding and Transparency of Machine Learning Lifecycles (ABOUT ML) is a multi-year, iterative, multi-stakeholder project of the Partnership on AI (PAI) that works toward establishing evidence-based ML transparency best practices throughout the ML system lifecycle, from design to deployment. It begins by synthesizing existing published research and practice into recommendations on documentation practice.
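
As a purely hypothetical sketch of where documentation recommendations can lead, the example below encodes a few lifecycle facts as a machine-readable record; the field names are assumptions loosely inspired by published documentation proposals such as model cards and datasheets for datasets, not ABOUT ML outputs.

```python
# A hypothetical sketch (not an ABOUT ML artifact) of machine-readable
# lifecycle documentation. All field names and values are illustrative
# assumptions.
from dataclasses import dataclass
from typing import List

@dataclass
class SystemDocumentation:
    system_name: str
    intended_use: str                 # design-time: what the system is for
    out_of_scope_uses: List[str]      # explicitly unsupported uses
    training_data_sources: List[str]  # provenance of training data
    evaluation_metrics: List[str]     # how performance was measured
    known_limitations: List[str]      # deployment-time caveats
    maintenance_contact: str          # who is accountable post-deployment

doc = SystemDocumentation(
    system_name="loan-approval-model",
    intended_use="Rank loan applications for human review",
    out_of_scope_uses=["fully automated denial of credit"],
    training_data_sources=["internal applications, 2015-2019"],
    evaluation_metrics=["AUC", "subgroup false-positive rates"],
    known_limitations=["not validated on applicants under 21"],
    maintenance_contact="ml-governance@example.com",
)
print(doc.system_name, "-", doc.intended_use)
```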

AI, Labor, and the Economy Case Study Compendium

Case study

The impact of artificial intelligence on the economy, labor, and society has long been a topic of debate — particularly in the last decade — among policymakers, business leaders, and the broader public. To help ground these debates in evidence, the Partnership on AI’s Working Group on “AI, Labor, and the Economy” conducted a series of case studies across three geographies and industries, using interviews with management as an entry point to investigate the productivity impacts and labor implications of AI implementation.

Report on Algorithmic Risk Assessment Tools in the U.S. Criminal Justice System

Report

Gathering the views of PAI’s multidisciplinary AI and ML research and ethics community, this report documents the serious shortcomings of algorithmic risk assessment tools in the U.S. criminal justice system and concludes that current risk assessment tools are not ready to be used in decisions to incarcerate human beings. The report includes ten requirements that jurisdictions should weigh heavily before using these tools.

Get involved

Stay in touch or ask a question.