The Partnership on AI Launches Multistakeholder Initiative To Enhance Machine Learning Transparency
Ongoing Effort To Shape, Test, and Propagate Best Practices
San Francisco, California, April 25, 2019 — The Partnership on AI (PAI) today announced an initiative to define best practices for transparency in machine learning (ML). This iterative multistakeholder initiative will produce best practices around the considerations, reflections, and documentation necessary to prompt a thoughtful process of creating and understanding ML systems that account for how the technology impacts all parties—including the public at large, differentially affected communities, policymakers, and users. This effort will be called “Annotation and Benchmarking on Understanding and Transparency of Machine learning Lifecycles” (ABOUT ML). When organizations implement ABOUT ML best practices, the output will include documentation for use by internal ML decision makers, external documentation similar to nutrition labels on food or drug labels on pharmaceuticals, and a process that organizations can follow to improve the responsible development and fielding of AI technologies. Among other benefits of transparency, knowing how and on what data an ML system was trained and tested is essential for understanding the uses for which the system is and is not appropriate.
This initiative to improve the transparency of the ML lifecycle can only succeed if ABOUT ML incorporates and channels the concerns of a diverse set of perspectives. ML technology designed and built by a small number of organizations is increasingly impacting global populations. To that end, PAI will be convening its global membership in an open, inclusive process. The ABOUT ML initiative will also seek input from communities historically excluded from technology decision-making by following an adapted version of the University of Washington Tech Policy Lab’s Diverse Voices process, as well as from stakeholders from academia, civil society organizations, and companies designing and deploying ML technology. PAI is committed to investing in this effort because of how crucial transparency of the ML lifecycle has become for AI policy and technical communities, as well as communities affected by the use of ML systems.
Modeled after the iterative, ongoing processes used to design internet standards (such as W3C, IETF, and WHATWG), ABOUT ML will kick off with the publication of initial “draft v0” recommendations on ML lifecycle transparency this July, building on discussions, research, and practice in the ML community. This will be followed by successive drafts, allowing PAI to respond to and reflect upon newly learned lessons in the rapidly developing field of ML and broader AI research. Once draft recommendations from participants begin to converge, PAI will encourage members to pilot and test them, adapted for their own contexts. The recommendations will become best practices only with a large body of evidence in support of their efficacy, which will take time to achieve.
“As AI makes rapid progress, we interact with increasingly intelligent machines. But machine learning systems, which lie at the heart of most modern AI applications, can fail in strange and unintuitive ways that humans are not well equipped to understand,” said Peter Eckersley, Director of Research at the Partnership on AI. “PAI is gathering many communities to establish clear guidelines on how to document training, evaluation, and benchmarking datasets and machine learning models—both to aid engineers and product leaders in their work and to educate ourselves alongside the public on transparency in machine learning development.”
To contribute to this initiative and learn more, visit: https://www.partnershiponai.org/about-ml/
About The Partnership on AI
The Partnership on AI (PAI) is a global nonprofit organization committed to the creation and dissemination of best practices in artificial intelligence through the diversity of its Partners. By gathering the leading companies, organizations, and people differently affected by artificial intelligence, PAI establishes a common ground between entities which otherwise may not have cause to work together – and in so doing – serves as a uniting force for good in the AI ecosystem. Today, PAI convenes more than 90 partner organizations from around the world to realize the promise of artificial intelligence. Find more information about PAI at partnershiponai.org.
The “machine learning lifecycle” comprises the stages of designing and building a decision-making system that includes an ML model: designing the system and its specifications; collecting and characterizing data; building, training, and testing the ML model; testing the overall system; deploying and using the system; and maintaining it with ongoing feedback. For more information, see E. Horvitz, Reflections on the meaningful understanding of the logic of automated decision making, Privacy Law Forum, Berkeley Center for Law & Technology, March 2017.
“Machine learning decision makers” are people whose decisions impact the structure and features of ML systems. These include engineers, engineering leadership, product leads, system architects, designers, and business teams.
University of Washington Tech Policy Lab, “Diverse Voices,” https://techpolicylab.uw.edu/project/diverse-voices/
World Wide Web Consortium Process Document, https://www.w3.org/2019/Process-20190301/
Internet Engineering Task Force standards process, https://www.ietf.org/standards/process/
Web Hypertext Application Technology Working Group, https://whatwg.org/faq#process
Gebru, T., Morgenstern, J.H., Vecchione, B., Vaughan, J.W., Wallach, H.M., Daumé, H., & Crawford, K. (2018). Datasheets for Datasets. FAT/ML.
Mitchell, M., Wu, S., Zaldivar, A., Barnes, P., Vasserman, L., Hutchinson, B., Spitzer, E., Raji, I.D., & Gebru, T. (2019). Model Cards for Model Reporting. FAT*.
Arnold, M., Bellamy, R., Hind, M., Houde, S., Mehta, S., Mojsilovic, A., Nair, R., Ramamurthy, K.N., Reimer, D., Olteanu, A., Piorkowski, D., Tsay, J., & Varshney, K.R. (2019). FactSheets: Increasing Trust in AI Services through Supplier’s Declarations of Conformity. CoRR, abs/1808.07261.
Bender, E.M., & Friedman, B. (2018). Data Statements for Natural Language Processing: Toward Mitigating System Bias and Enabling Better Science. Transactions of the Association for Computational Linguistics, 6, 587–604.
Senior Communications Manager
650-597-0858