Partnership on AI Research & Publications

Human-AI Collaboration Framework & Case Studies

Case Studies

Best practices for collaboration between people and AI systems – including practices for transparency and trust, responsibility for specific decisions, and appropriate levels of autonomy – depend on a nuanced understanding of the nature of those collaborations. With the support of the Collaborations Between People and AI Systems (CPAIS) Expert Group, PAI has drafted a Human-AI Collaboration Framework to help users consider key aspects of human-AI collaboration technologies. We have also prepared a collection of seven case studies that illustrate the Framework and its applications in the real world.

Human-AI Collaboration Trust Literature Review

Key Insights and Bibliography


To better understand the multifaceted, important, and timely issues surrounding trust between humans and artificially intelligent systems, PAI has conducted an initial survey and analysis of the multidisciplinary literature on AI, humans, and trust. This project includes a thematically tagged Bibliography of 80 aggregated research articles, as well as an overview document presenting seven key insights. These insights, themes, and aggregated texts can serve as fruitful entry points for those investigating the nuances of the literature on humans, trust, and AI, and can help align understandings of trust between people and AI systems. They can also inform future research.

Visa Laws, Policies, and Practices: Recommendations for Accelerating the Mobility of Global AI/ML Talent

Policy Paper

PAI believes that bringing together experts from around the world who represent different cultures, socio-economic experiences, backgrounds, and perspectives is essential for AI/ML to flourish and help create the future we desire. To fulfill their talent goals and host conferences of international caliber, countries will need laws, policies, and practices that enable international scholars and practitioners to contribute to these conversations. Based on input from PAI Partners, AI practitioners, and PAI’s own research, PAI’s policy paper on Visa Laws, Policies, and Practices offers recommendations that will enable multidisciplinary AI/ML experts to collaborate with their international counterparts. PAI encourages individuals, organizations, and policymakers to implement these recommendations in order to benefit from the diverse perspectives of the global AI/ML community.

ABOUT ML - Annotation and Benchmarking on Understanding and Transparency of Machine Learning Lifecycles

Ongoing Project

As machine learning (ML) becomes more central to many decision-making processes, including in high-stakes contexts such as criminal justice and banking, the companies deploying such automated decision-making systems face increased pressure for transparency about how those decisions are made. Annotation and Benchmarking on Understanding and Transparency of Machine Learning Lifecycles (ABOUT ML) is a multi-year, iterative, multi-stakeholder project of the Partnership on AI (PAI) that works toward establishing evidence-based ML transparency best practices throughout the ML system lifecycle, from design to deployment. It begins by synthesizing existing published research and practice into recommendations on documentation practice.

AI, Labor, and the Economy Case Study Compendium

Case Study

The impact of artificial intelligence on the economy, labor, and society has long been a topic of debate – particularly in the last decade – among policymakers, business leaders, and the broader public. To help clarify these areas of uncertainty, the Partnership on AI’s Working Group on “AI, Labor, and the Economy” conducted a series of case studies across three geographies and industries, using interviews with management as an entry point to investigate the productivity impacts and labor implications of AI implementation.

Report on Algorithmic Risk Assessment Tools in the U.S. Criminal Justice System


Gathering the views of the Partnership on AI’s multidisciplinary artificial intelligence and machine learning research and ethics community, this report documents the serious shortcomings of algorithmic risk assessment tools in the U.S. criminal justice system. Though advocates of such tools suggest that these data-driven AI predictions will reduce unnecessary detention and produce fairer, less punitive decisions than existing processes, an overwhelming majority of the Partnership’s consulted experts agree that current risk assessment tools are not ready for use in decisions to incarcerate human beings. The report identifies ten largely unfulfilled requirements that jurisdictions should weigh heavily before adopting these tools.

Get involved

Stay in touch or ask a question.