
The Partnership on AI Launches Multistakeholder Initiative To Enhance Machine Learning Transparency

PAI Staff

April 25, 2019

Ongoing Effort To Shape, Test, and Propagate Best Practices

San Francisco, California, April 25, 2019 — The Partnership on AI (PAI) today announced an initiative to define best practices for transparency in machine learning (ML). This iterative multistakeholder initiative will produce best practices around the considerations, reflections, and documentation necessary to prompt a thoughtful process of creating and understanding ML systems that account for how the technology impacts all parties—including the public at large, differentially affected communities, policymakers, and users. This effort will be called “Annotation and Benchmarking on Understanding and Transparency of Machine learning Lifecycles” [1] (ABOUT ML). When organizations implement ABOUT ML best practices, the output will include documentation for use by internal ML decision makers, [2] external documentation similar to food nutrition labels or drug labels on pharmaceuticals, and a process that the organization followed to improve the responsible development and fielding of AI technologies. Among other benefits of transparency, knowing how and on what data an ML system was trained and tested is essential for understanding the uses for which the system is and is not appropriate.

This initiative to improve the transparency of the ML lifecycle can only succeed if ABOUT ML incorporates and channels the concerns of a diverse set of perspectives. ML technology designed and built by a small number of organizations is increasingly impacting global populations. To that end, PAI will be convening its global membership in an open, inclusive process. The ABOUT ML initiative will also seek input from communities historically excluded from technology decision-making by following an adapted version of the University of Washington Tech Policy Lab’s Diverse Voices [3] process, as well as from stakeholders from academia, civil society organizations, and companies designing and deploying ML technology. PAI is committed to investing in this effort because of how crucial transparency of the ML lifecycle has become for AI policy and technical communities, as well as communities affected by the use of ML systems.

Modeled after iterative ongoing processes to design internet standards (such as W3C, [4] IETF, [5] WHATWG), [6] ABOUT ML will kick off with the publication of initial “draft v0” recommendations on ML lifecycle transparency this July, building on discussions and research in the community [7] [8] [9] [10] and practice in the ML field. This will be followed by successive drafts which will allow PAI to respond to and reflect upon newly learned lessons in the rapidly developing field of ML and broader AI research. Once draft recommendations from participants begin to converge, PAI will encourage members to pilot and test them, adapted for their own contexts. The recommendations will become best practices only with a large body of evidence in support of their efficacy, which will take time to achieve.

“As AI makes rapid progress, we interact with increasingly intelligent machines. But machine learning systems, which lie at the heart of most modern AI applications, can fail in strange and unintuitive ways that humans are not well equipped to understand,” said Peter Eckersley, Director of Research at The Partnership on AI. “PAI is gathering many communities to establish clear guidelines on how to document training, evaluation, and benchmarking datasets and machine learning models—both to aid engineers and product leaders in their work and to educate ourselves, alongside the public, on the topic of transparency in machine learning development.”

About The Partnership on AI

The Partnership on AI (PAI) is a global multistakeholder organization that brings together academics, researchers, civil society organizations, companies building and using AI technology, and other groups working to realize the promise of artificial intelligence. The Partnership was established to study and formulate best practices on AI technologies, to advance the public’s understanding of AI, and to serve as an open platform for discussion and engagement about AI and its influences on people and society. Today, PAI convenes more than 90 partner organizations from around the world to be a uniting force for the responsible development and fielding of AI technologies.


[1] “Machine learning lifecycle” comprises the stages of designing and building a decision-making system that includes an ML model, usually including designing the ML system and specifications, data collection and characterization, building, training, and testing the ML model, testing the overall system, using the ML decision-making system, and maintenance and feedback. For more information, see E. Horvitz, Reflections on the meaningful understanding of the logic of automated decision making, Privacy Law Forum, Berkeley Center for Law & Technology, March 2017.

[2] “Machine learning decision makers” are people whose decisions impact the structure and features of ML systems. These include engineers, engineering leadership, product leads, system architects, designers, and business teams.

[3] Technology Policy Lab, “Diverse Voices” https://techpolicylab.uw.edu/project/diverse-voices/

[4] World Wide Web Consortium Process Document https://www.w3.org/2019/Process-20190301/

[5] Internet Engineering Task Force standards process https://www.ietf.org/standards/process/

[6] Web Hypertext Application Technology Working Group https://whatwg.org/faq#process

[7] Gebru, T., Morgenstern, J.H., Vecchione, B., Vaughan, J.W., Wallach, H.M., Daumé, H., & Crawford, K. (2018). Datasheets for Datasets. FAT/ML.

[8] Mitchell, M., Wu, S., Zaldivar, A., Barnes, P., Vasserman, L., Hutchinson, B., Spitzer, E., Raji, I.D., & Gebru, T. (2019). Model Cards for Model Reporting. FAT*.

[9] Arnold, M., Bellamy, R., Hind, M., Houde, S., Mehta, S., Mojsilovic, A., Nair, R., Ramamurthy, K.N., Reimer, D., Olteanu, A., Piorkowski, D., Tsay, J., & Varshney, K.R. (2019). FactSheets: Increasing Trust in AI Services through Supplier’s Declarations of Conformity. CoRR, abs/1808.07261.

[10] Bender, E.M., & Friedman, B. (2018). Data Statements for Natural Language Processing: Toward Mitigating System Bias and Enabling Better Science. Transactions of the Association for Computational Linguistics, 6, 587-604.


Press Contact:

Peter Lo

Senior Communications Manager

peter.lo@partnershiponai.org

650-597-0858
