Board of Directors

Dario Amodei


Dario is a research scientist at OpenAI, where he concentrates on reinforcement learning and safety. Prior to that, he worked at Google and Baidu. He is the co-author of Concrete Problems in AI Safety, which explores practical issues in making modern machine learning systems behave in a safe and reliable manner. Dario helped to lead the project that developed Baidu’s Deep Speech 2, a human-level speech recognition system named one of 10 “Breakthrough Technologies of 2016” by MIT Technology Review. He has also done work in natural language processing and computational biology. In addition, Dario is a scientific advisor for the Open Philanthropy Project, where he advises on the societal impacts of machine learning technologies. He holds a PhD in physics from Princeton University, where he was awarded the Hertz Foundation doctoral thesis prize.

“We’re thrilled to combine the perspective of OpenAI with that of other nonprofits and industry leaders to work together to design safe, ethical AI systems. AI will likely be one of the most significant technologies ever invented by humans, so it’s critical at this stage in the development of AI that we all work together to educate the world about the capabilities of the technology, measure its progression, and ensure it is developed in a responsible manner. The best way to ensure a good future is to invent it together.”

Greg S. Corrado, PhD


Greg Corrado is a senior scientist at Google Research and a co-founder of the Google Brain Team. He works at the nexus of artificial intelligence, computational neuroscience, and scalable machine learning, and has published in fields ranging from behavioral economics, to particle physics, to deep learning. In his time at Google he has worked to put AI directly into the hands of users via products like RankBrain and Smart Reply, and into the hands of developers via open-source software releases like TensorFlow and word2vec. He currently leads several research efforts in advanced applications of machine learning, ranging from natural human communication to expanded healthcare availability. Before coming to Google, he worked at IBM Research on neuromorphic silicon devices and large-scale neural simulations. He did his graduate studies in both Neuroscience and Computer Science at Stanford University, and his undergraduate work in Physics at Princeton University.

“Google and DeepMind strongly support an open, collaborative process for developing AI. This group is a huge step forward, breaking down barriers for AI teams to share best practices, research ways to maximize societal benefits, and tackle ethical concerns, and make it easier for those in other fields to engage with everyone’s work. We’re really proud of how this has come together, and we’re looking forward to working with everyone inside and outside the Partnership on Artificial Intelligence to make sure AI has the broad and transformative impact we all want to see.”

Jason Furman

Harvard Kennedy School (HKS)

Jason Furman is Professor of the Practice of Economic Policy at Harvard Kennedy School (HKS). He is also a nonresident senior fellow at the Peterson Institute for International Economics. This followed eight years as a top economic adviser to President Obama, including serving as the 28th Chairman of the Council of Economic Advisers from August 2013 to January 2017, acting as both President Obama’s chief economist and a member of the cabinet. During this time Furman played a major role in most of the major economic policies of the Obama Administration. In addition, Furman helped make the Council of Economic Advisers a thought leader on a wide range of topics including labor markets, competition policy, technology policy, and macroeconomics.

Previously, Furman held a variety of posts in public policy and research. In public policy, he worked at both the Council of Economic Advisers and the National Economic Council during the Clinton administration, and also at the World Bank. In research, Furman was a Director of the Hamilton Project and Senior Fellow at the Brookings Institution, a Senior Fellow at the Center on Budget and Policy Priorities, and has served in visiting positions at various universities, including NYU’s Wagner Graduate School of Public Policy. Furman has conducted research in a wide range of areas, including fiscal policy, tax policy, health economics, Social Security, technology policy, and domestic and international macroeconomics. In addition to numerous articles in scholarly journals and periodicals, Furman is the editor of two books on economic policy. Furman holds a Ph.D. in economics from Harvard University.

“AI will be critical to the future of the global economy, but the only way AI will work for everyone is if we ensure the right public policies and practices are adopted by the forces driving it forward. I am thrilled to be joining the Partnership’s Board as an independent director to help nurture the path of AI so it works for people and society more broadly.”

Tom Gruber


Tom Gruber is head of advanced development of Siri, Apple’s intelligent personal assistant used billions of times a week in over 30 countries around the world. Since 2010, Tom has been focused on the future direction of Siri and related products. Before joining Apple, Tom was the cofounder, CTO, and head of design at Siri Inc., the founder and CTO of Intraspect Software, founder and CTO of RealTravel, and the inventor of HyperMail, which helped to create a living conversational history of the Web.

Tom has spent over three decades researching and designing systems for knowledge sharing and collective intelligence. His research at Stanford University in AI and ontology engineering helped lay the foundation for the Semantic Web.

He received a double B.S. in psychology and computer science from Loyola University New Orleans, and an M.S. and Ph.D. in Computer and Information Science from the University of Massachusetts Amherst. During his graduate studies, he helped design and implement an intelligent communication prosthesis assistant, and his dissertation research focused on knowledge acquisition in AI systems.

“We’re glad to see the industry engaging on some of the larger opportunities and concerns created with the advance of machine learning and AI. We believe it’s beneficial to Apple, our customers, and the industry to play an active role in its development and look forward to collaborating with the group to help drive discussion on how to advance AI while protecting the privacy and security of consumers.”

Ralf Herbrich


Ralf is Director of Machine Learning at Amazon and Managing Director of the Amazon Development Center Germany. His team works on problems in scalable and resource-aware machine learning, probabilistic learning algorithms (including forecasting), linking structured content, and computer vision. In 2011, he worked at Facebook, leading the Unified Ranking and Allocation team. From 2000 to 2011, he worked at Microsoft Research, co-leading the Applied Games and Online Services and Advertising groups, which engaged in research at the intersection of machine learning and computer games. Ralf was a Research Fellow at Darwin College, Cambridge, from 2000 to 2003. He holds a diploma degree in Computer Science (1997) and a PhD in Statistics (2000). Ralf’s research interests include Bayesian inference and decision making, reinforcement learning, computer games, kernel methods, and statistical learning theory. He is one of the inventors of the Drivatars system in the Forza Motorsport series, as well as the TrueSkill ranking and matchmaking system in Xbox 360 Live. He also co-invented the adPredictor click-prediction technology.

“We’re in a golden age of Machine Learning and AI. As a scientific community, we are still a long way from being able to do things the way humans do things, but we’re solving unbelievably complex problems every day and making incredibly rapid progress. This partnership will ensure we’re including the best and the brightest in this space in the conversation to improve customer trust and benefit society. We are excited to work together in this partnership with thought leaders from both industry and academia.”

Eric Horvitz – Chair


Eric Horvitz is a technical fellow at Microsoft, where he serves as director of Microsoft Research. His research contributions span theoretical and practical challenges with computing systems that learn from data and that can perceive, reason, and decide. His efforts have helped to bring multiple systems and services into the world, including innovations in transportation, healthcare, aerospace, ecommerce, online services, and operating systems. He has been elected a fellow of the National Academy of Engineering (NAE), the Association for the Advancement of Artificial Intelligence (AAAI), the American Association for the Advancement of Science (AAAS), and the American Academy of Arts and Sciences. He received the Feigenbaum Prize, the ACM-AAAI Allen Newell Award, and the ACM ICMI Sustained Achievement Award for foundational research contributions in AI. He was inducted into the CHI Academy for advances in human-computer collaboration. He has served as president of AAAI, chair of the AAAS Section on Computing, and on advisory committees for the National Institutes of Health, the National Science Foundation, the Computer Science and Telecommunications Board (CSTB), DARPA, and the President’s Council of Advisors on Science and Technology. He received his PhD and MD degrees from Stanford University.

“We’re excited about this historic collaboration on AI and its influences on people and society. We see great value ahead with harnessing AI advances in numerous areas, including health, education, transportation, public welfare, and personal empowerment. We’re extremely pleased with how early discussions among colleagues blossomed into a promising long-term collaboration. Beyond folks in industry, we’re thrilled to have other stakeholders at the table, including colleagues in ethics, law, policy, and the public at large. We look forward to working arm-in-arm on best practices and on such important topics as ethics, privacy, transparency, bias, inclusiveness, and safety.”

Subbarao Kambhampati (Rao)


Subbarao Kambhampati (Rao) is a professor of Computer Science at Arizona State University and the current president of the Association for the Advancement of Artificial Intelligence (AAAI). His research focuses on automated planning and decision making, especially in the context of human-aware AI systems. He is an award-winning teacher and spends significant time pondering the public perceptions and societal impacts of AI. He was an NSF young investigator and is a fellow of AAAI. He has served the AI community in multiple roles, including as the program chair for IJCAI 2016 and program co-chair for AAAI 2005. Rao received his bachelor’s degree from the Indian Institute of Technology, Madras, and his PhD from the University of Maryland, College Park.

“AI is progressing at a rapid pace with the formidable promise of helping to solve some of the biggest problems facing us in the century ahead. As we prepare for this future with AI, we must also take deliberate and nuanced care, with deep, ethical consideration for the people and society our technology impacts. Breaking down research walls, developing and honoring best practices, and bringing together varied voices from industry and academia will be fundamental to this approach. I am thus tremendously excited to be joining the Partnership as an Independent Director and look forward to collaborating with visionaries in AI to tackle these challenges and strive for the safe and ethical future of AI.”

Yann LeCun


Yann has been the Director of AI Research at Facebook since December 2013, and is a Silver Professor at New York University on a part-time basis, mainly affiliated with the NYU Center for Data Science and the Courant Institute of Mathematical Sciences.

Yann received the EE Diploma from Ecole Supérieure d’Ingénieurs en Electrotechnique et Electronique (ESIEE Paris), and a PhD in CS from Université Pierre et Marie Curie (Paris). After a postdoc at the University of Toronto, he joined AT&T Bell Laboratories in Holmdel, New Jersey. Yann became head of the Image Processing Research Department at AT&T Labs-Research in 1996, and joined NYU as a professor in 2003, after a brief period as a Fellow of the NEC Research Institute in Princeton. From 2012 to 2014 he was the founding director of the NYU Center for Data Science.

Yann is the co-director of the Neural Computation and Adaptive Perception Program of CIFAR, and co-lead of the Moore-Sloan Data Science Environments for NYU. He received the 2014 IEEE Neural Network Pioneer Award.

“The possibilities for positively impacting a global society with advances in AI are numerous, ranging from connectivity to healthcare and transportation. As researchers in industry, we take very seriously the trust people have in us to ensure advances are made with the utmost consideration for human values. By openly collaborating with our peers and sharing findings, we aim to push new boundaries every day, not only within Facebook, but across the entire research community. To do so in partnership with these companies who share our vision will help propel the entire field forward in a thoughtful, responsible way.”

Deirdre Mulligan

UC Berkeley

Deirdre K. Mulligan is an Associate Professor in the School of Information at UC Berkeley, a faculty Director of the Berkeley Center for Law & Technology, and a PI on the new Hewlett-funded Berkeley Center for Long-Term Cybersecurity. Mulligan’s research explores legal and technical means of protecting values such as privacy, freedom of expression, and fairness in emerging technical systems. Her book, Privacy on the Ground: Driving Corporate Behavior in the United States and Europe, a study of privacy practices in large corporations in five countries conducted with UC Berkeley Law Prof. Kenneth Bamberger, was recently published by MIT Press. Mulligan and Bamberger received the 2016 International Association of Privacy Professionals Leadership Award for their research contributions to the field of privacy protection. This past year, Mulligan chaired a series of interdisciplinary visioning workshops on Privacy by Design with the Computing Community Consortium to develop a research agenda. She is a member of the National Academy of Sciences Forum on Cyber Resilience.

She is Chair of the Board of Directors of the Center for Democracy and Technology, a leading advocacy organization protecting global online civil liberties and human rights; a founding member of the standing committee for the AI 100 project, a 100-year effort to study and anticipate how the effects of artificial intelligence will ripple through every aspect of how people work, live, and play; and a founding member of the Global Network Initiative, a multi-stakeholder initiative to protect and advance freedom of expression and privacy in the ICT sector, and in particular to resist government efforts to use the ICT sector to engage in censorship and surveillance in violation of international human rights standards. She is also a Commissioner on the Oakland Privacy Advisory Commission. Prior to joining the School of Information, she was a Clinical Professor of Law, founding Director of the Samuelson Law, Technology & Public Policy Clinic, and Director of Clinical Programs at the UC Berkeley School of Law.

Mulligan was the Policy lead for the NSF-funded TRUST Science and Technology Center, which brought together researchers at U.C. Berkeley, Carnegie-Mellon University, Cornell University, Stanford University, and Vanderbilt University; and a PI on the multi-institution NSF funded ACCURATE center. In 2007 she was a member of an expert team charged by the California Secretary of State to conduct a top-to-bottom review of the voting systems certified for use in California elections. This review investigated the security, accuracy, reliability, and accessibility of electronic voting systems used in California. She was a member of the National Academy of Sciences Committee on Authentication Technology and Its Privacy Implications; the Federal Trade Commission’s Federal Advisory Committee on Online Access and Security, and the National Task Force on Privacy, Technology, and Criminal Justice Information. She was a vice-chair of the California Bipartisan Commission on Internet Political Practices and chaired the Computers, Freedom, and Privacy (CFP) Conference in 2004. She co-chaired Microsoft’s Trustworthy Computing Academic Advisory Board with Fred B. Schneider, from 2003-2014. Prior to Berkeley, she served as staff counsel at the Center for Democracy & Technology in Washington, D.C.

“AI has the potential to help solve some of the world’s biggest challenges in the coming years, with its impact already being felt meaningfully across diverse fields including healthcare and energy. I am delighted to be joining the Partnership on AI as an independent director and look forward to working collaboratively to help take the field forward in a thoughtful, responsible way that benefits and empowers as many people as possible.”

Carol Rose


Carol Rose is the Executive Director of the ACLU of Massachusetts, a nonpartisan organization with over 35,000 members and supporters in Massachusetts (more than 800,000 nationwide) that uses litigation, legislation, media, and organizing to promote civil rights and defend civil liberties.

In 2013, Rose launched the ACLU of Massachusetts’ “Technology for Liberty” project, focusing on the civil liberties implications and promise of new technology. Under Rose’s leadership, the Technology for Liberty project has won significant legal victories, such as strengthening the warrant requirements for government agents seeking access to digital information, challenging government secret surveillance of political activists, defending the right of people to record the police, and challenging the government’s secret use of the “All Writs Act” against technology companies. In 2015, the Technology for Liberty project was recognized as a leader in the law and technology space when the Ford Foundation and Mozilla selected the ACLU of Massachusetts as a host organization for a groundbreaking program that places technologists from around the world in human rights organizations. Rose is a frequent speaker on technology and civil liberties issues, including the 2014 White House conference on big data privacy at MIT and the 2016 Forum on Data Privacy hosted by the Internet Policy Research Initiative at MIT.

She is a graduate of Stanford University (BSc 1983), the London School of Economics (MSc 1985), and Harvard Law School (JD 1996).

“This collaboration is an important opportunity to ensure that artificial intelligence and machine learning are developed in ways that enhance, rather than threaten, human rights and civil liberties. We are pleased to have a seat at the table alongside leaders of science and industry, who will shape not only the future of AI, but the future of human society. The ACLU of Massachusetts is proud of our long legacy defending and expanding core civil rights and civil liberties, and we stand ready to champion these core values in the brave new world of intelligent machines. Together, we can ensure that scientific and industry leaders who are coding the future do so in ways that promote equality, justice, and freedom for all people.”

Professor Francesca Rossi


Francesca Rossi is a research scientist at the IBM T.J. Watson Research Center and a professor of computer science at the University of Padova, Italy.

Francesca’s research interest focuses on artificial intelligence, specifically constraint reasoning, preferences, multi-agent systems, computational social choice, and collective decision making. She is also interested in ethical issues surrounding the development and behavior of AI systems, in particular for decision support systems for group decision making. A prolific author, Francesca has published over 170 scientific articles in both journals and conference proceedings as well as co-authoring A Short Introduction to Preferences: Between AI and Social Choice. She has edited 17 volumes, including conference proceedings, collections of contributions, special issues of journals, and The Handbook of Constraint Programming.

Francesca is both a fellow of the European Association for Artificial Intelligence (EurAI) and a 2015 fellow of the Radcliffe Institute for Advanced Study at Harvard University. A prominent figure in the Association for the Advancement of Artificial Intelligence (AAAI), of which she is a fellow, she formerly served as an executive councilor of AAAI and currently co-chairs the association’s committee on AI and ethics. Francesca is an active voice in the AI community, serving as Associate Editor in Chief of the Journal of Artificial Intelligence Research (JAIR) and as a member of the editorial boards of Constraints, Artificial Intelligence, Annals of Mathematics and Artificial Intelligence (AMAI), and Knowledge and Information Systems (KAIS). She is also a member of the scientific advisory board of the Future of Life Institute, sits on the executive committee of the Institute of Electrical and Electronics Engineers (IEEE) global initiative on ethical considerations in the development of autonomous and intelligent systems, and belongs to the World Economic Forum Council on AI and robotics.

A recognized authority on the future of AI and AI ethics, Francesca has been interviewed widely by publications including the Wall Street Journal, the Washington Post, Motherboard, Science, The Economist, CNBC, Eurovision, Corriere della Sera, and Repubblica, and has also delivered three TEDx talks on these topics.

“Over the past five years, we’ve seen tremendous advances in the deployment of AI and cognitive computing technologies, ranging from useful consumer apps to transforming some of the world’s most complex industries, including healthcare, financial services, commerce, and the Internet of Things. This partnership will provide consumer and industrial users of cognitive systems a vital voice in the advancement of the defining technology of this century – one that will foster collaboration between people and machines to solve some of the world’s most enduring problems – in a way that is both trustworthy and beneficial.”

Eric Sears – Vice Chair

John D. and Catherine T. MacArthur Foundation

Eric is a Senior Program Officer at the John D. and Catherine T. MacArthur Foundation. He leads MacArthur’s grantmaking to strengthen civil liberties and civil rights in the digital age, and to address the social and rights-based implications of new and emerging technologies through research, policy, and practice. He is a member of the World Economic Forum’s Artificial Intelligence, the Internet of Things, and the Future of Trust Network. Eric has previously worked at Human Rights First in New York City and Amnesty International USA in Washington, D.C., where he carried out a range of research and advocacy initiatives. While at Amnesty, Eric launched and managed the organization’s campaign aimed at reforming U.S. counterterrorism policies and helped establish the organization’s Crisis Prevention and Response program. Eric holds an MSc from the London School of Economics and a BA from Saint Louis University.

“Artificial intelligence is poised to have a transformative impact on people around the world. While AI-related technologies hold great promise to help solve a range of problems, there is a need to consider and address the potential ethical and legal risks they pose. The Partnership on Artificial Intelligence is a timely initiative that establishes a forum for industry, civil society, and academia to come together and rigorously examine the potential risks and create best practices that aim to mitigate them, thereby helping to ensure that AI benefits as many people as possible.”

Mustafa Suleyman


Mustafa Suleyman is co-founder and Head of Applied AI at DeepMind, where he is responsible for the application of DeepMind’s technology to real-world problems, as part of DeepMind’s commitment to use intelligence to make the world a better place. In February 2016 he launched DeepMind Health, which builds clinician-led technology in the NHS. Mustafa was Chief Product Officer before DeepMind was acquired by Google in 2014 in its largest European acquisition to date. At 19, Mustafa dropped out of Oxford University to help set up a telephone counselling service, building it into one of the largest mental health support services of its kind in the UK, and then worked as a policy officer for the then Mayor of London, Ken Livingstone. He went on to help start Reos Partners, a consultancy with seven offices across four continents specializing in designing and facilitating large-scale, multi-stakeholder ‘Change Labs’ aimed at navigating complex problems. As a skilled negotiator and facilitator, Mustafa has worked around the world for a wide range of clients, such as the UN, the Dutch Government, and WWF.

“Google and DeepMind strongly support an open, collaborative process for developing AI. This group is a huge step forward, breaking down barriers for AI teams to share best practices, research ways to maximize societal benefits, and tackle ethical concerns, and make it easier for those in other fields to engage with everyone’s work. We’re really proud of how this has come together, and we’re looking forward to working with everyone inside and outside the Partnership on Artificial Intelligence to make sure AI has the broad and transformative impact we all want to see.”