
AI Policy Frameworks We Are Seeing Today


Michael Brown

August 12, 2019

We have talked a bunch on the Bowery Capital blog about AI Nationalism and various elements of AI, including ethical challenges, differences between countries, and AI's impact on the B2B ecosystem. Issues concerning AI policy primarily revolve around societal decision-making, with important questions such as how to avoid a dangerous arms race in developing powerful AI systems, how open AI research should be, and how the benefits of such systems can be equitably distributed. Concerns about unregulated AI development loom over companies, governments, and private institutions alike. I thought it would be interesting to look at where we are today with AI policy, with the below laying out some specifics on how we are seeing things. For a much more informed view, check out Matt Chessen and his full list of everyone involved in this important discussion.


1. U.S. Government Approach To AI Policy. According to reports from 2015, the U.S. government invested $1.1 billion in AI that year but did not lay out an overall industrial policy. It instead opted for a "public-private" balance, similar to how the U.S. handles many policy issues, knowing that tech giants like Microsoft, Amazon, and Facebook would funnel large sums of money and human capital into the field. The Obama administration announced an interagency working group in May 2016 to explore the benefits and risks of AI. By the end of that year, the group produced multiple reports on topics including investment in AI, data security, and big data, culminating in 23 recommendations for how the government, research, and industry sectors should work together as part of an extensive national plan to prepare both society and the economy. Examples of the results of this group include two reports by President Obama's Executive Office, released in October 2016, laying out plans for the future of AI. The reports, though extensive, received little media attention because the final weeks of the presidential campaign seized the spotlight. According to an HBS summary of The Obama Administration's Roadmap for AI Policy, the major points are that: AI policy should be an urgent concern; the U.S. government is not designing policy for general intelligence or "strong AI"; AI isn't a science project, it's commercially important; the U.S. needs to expand its AI workforce; the U.S. government has no clear vision regarding where to focus research funding; regulation could threaten AI's progress, or further it; AI can help governments do their jobs better; and China is a leader, not a copycat or a follower. Despite the strides made by these efforts, the Trump administration has abandoned much of this vision.


2. Other Government Approaches To AI Policy. The dominant regions pursuing AI research outside the U.S. are Europe and China, both of which are engaged in a race with the United States. The European Commission plays a large policy role in dictating European advancements in AI. The Commission is increasing its annual investments in AI by 70% under the research and innovation program Horizon 2020, reaching €1.5 billion over the period 2018 – 2020. New innovations that raise ethical and legal questions are governed by frameworks such as the General Data Protection Regulation (GDPR), which regulates the processing of personal data relating to individuals in the EU by any individual, company, or organization. The changes to this policy made in 2017 presented a major step toward building trust and ensuring legal clarity in AI-based applications. In 2019, the Commission will develop and make available "AI ethics guidelines" and "Guidance on the interpretation of the Product Liability Directive" to further create homogeneous guidelines for its Member States. In another example of strong policy, 25 European countries signed a Declaration of Cooperation on Artificial Intelligence in April 2018. This Declaration promotes the investments and achievements of the European research and business community in AI, and the Commission will now work with Member States on a coordinated plan on AI to be delivered by the end of 2018. AI has also become a clear priority for Chinese leaders, who are attempting to transform China into a "nation of innovation." The active efforts of private enterprises have predated more recent policy support from the government. Under the 13th Five-Year Plan (2016 – 2020), China aims to transform itself into a science and technology superpower. This includes the launch of a series of 15 "Science and Technology Innovation 2030 Megaprojects" covering, among other things, intelligent manufacturing, robotics, and big data. In December 2017, the Ministry of Industry and Information Technology released the Three-Year Action Plan to Promote the Development of New-Generation Artificial Intelligence Industry (2018 – 2020), which calls for China to achieve major breakthroughs in a series of landmark AI products and to establish an international competitive advantage by 2020. The plan further implements a "support system" that includes a data resource base with standard test data sets, cloud-based training frameworks, and initial evaluation and test systems. This latest policy framework thus lays out the basic foundations of an ecosystem meant to create a healthy environment for AI development, bolstered by high levels of funding and the cultivation of a strong talent pool. With a similar public-private treatment of AI development, efforts in China are split between the government and private companies. For example, the government revealed details in January 2018 of plans to spend over $2.1 billion USD on an AI industrial park in Beijing with space for 400 companies. Meanwhile, the leading Chinese AI companies, known as the national team, will undertake the development of new "open innovation platforms" in AI: Baidu is responsible for autonomous vehicles, Alibaba Cloud for smart cities, Tencent for medical imaging, and iFlytek for smart voice. Specialists note that this expansive AI agenda may be difficult to disentangle from the Communist Party's attempts to assure state security by bolstering its capacity for social control. For example, the creation of smart cities may lead to enhanced social management, which can be viewed negatively.


3. Private Sector Approaches To AI Policy. Major strides are also being made by private companies. One effort to change the AI landscape came when DeepMind, acquired by Google in 2014 and widely regarded as a world-leading AI company, launched a new ethics board "to help technologists put ethics into practice, and to help society anticipate and direct the impact of AI so that it works for the benefit of all." In addition, the non-profit sector is heating up. In 2015, OpenAI launched with $1B in private funding commitments. The non-profit artificial intelligence research company is focused on building safe AI and ensuring that AI's benefits are as widely and evenly distributed as possible. It remains at the forefront of this effort and continues to work in the foreground and background of public and private discussions. In 2016, the Partnership on AI was formed by tech giants Amazon, Apple, DeepMind, Google, Facebook, IBM, and Microsoft "to study and formulate best practices on AI technologies." Finally, the think tank ecosystem is also participating. The World Economic Forum's Council on the Future of AI and Robotics will explore how developments in artificial intelligence and robotics could impact industry, governments, and society in the future, and design innovative governance models that ensure their benefits are maximized and the associated risks kept under control.


4. Academic Sector Approaches To AI Policy. Academics have always had a strong influence on policy decisions, and AI is no different. The Stanford One Hundred Year Study on Artificial Intelligence, or AI100, is a 100-year effort to study and anticipate how the effects of artificial intelligence will ripple through every aspect of how people work, live, and play. MIT is running three experiments related to AI policy, the most important probably being the MIT Media Lab's AI, Ethics and Governance project, which supports social scientists, philosophers, and policy and legal scholars undertaking research that aims to shape how artificial intelligence technologies are designed, implemented, understood, and held accountable. The Cambridge Centre for the Study of Existential Risk is another academic group thinking hard about AI. Its goals are to significantly advance the state of research on AI safety protocols and risk, and to inform industry leaders and policy makers on appropriate strategies and regulations that allow the benefits of AI advances to be safely realized. Finally, USC has a very interesting project in the USC Center for Artificial Intelligence in Society, whose mission is to conduct research in artificial intelligence to help solve the most difficult social problems facing our world. There are many, many more on the academic side that we do not include here.


There is no definitive answer on the most effective way to tackle AI policy. However, as regulatory focus develops alongside AI itself, thoughtful conversations are being had about ethical, safe, and practical ways of spreading AI's benefits throughout the population. What must happen is an intersection between government policy and that of the private sector in order to further such conversations.


If you liked "AI Policy Frameworks We Are Seeing Today" and want to read more content from the Bowery Capital Team, check out other relevant posts from the Bowery Capital Blog. Special thanks to Vera Wang for her contribution and work on this post.