AI Ethical Challenges May Drive Isolationistic AI Bias
AI ethical challenges and isolationist AI bias are among the main challenge areas as I continue to think about the impending AI arms race. My theory is that AI policy will become one of the most important areas of government policy as rapid progress occurs and an “AI arms race” takes shape. AI ethical challenges will sit at the top of the list of issues we face if this happens. Ultimately, it is critical to remember that AI is driven by what models are trained on, what they are trained for, and how they are applied. Humans are deeply involved not only in framing AI but in applying it, so the bias of AI reflects human bias, which is largely implicit and unintentional. Diversification and global collaboration are critical to combating the gender, racial, and income biases inherent to AI in its current form and corporate development. Isolated research by individual countries can intensify this bias: because isolated research communities are more likely to be racially, demographically, and socioeconomically homogeneous than open-source global ones, a pivot toward nationalism invites the propagation of gender, racial, and income biases into AI. The bias can be so substantial that frameworks and systems developed in one country cannot be applied in another because of differences in culture and racial demographics. This post discusses three primary forms of AI bias, the implications of AI Nationalism in driving each of them, and considerations for national policy in light of this bias.
1. Driving Racial Bias. Systems trained with biased data are biased. Consider, for example, that image recognition software in the U.S. has categorized Black people as gorillas and misread images of Asian people as blinking. Bias in image recognition extends to other AI applications: an AI recidivism predictor was found to be biased against people of color. As AI is employed in everything from our justice systems to employment, racial biases are extremely important to consider. AI Nationalism is particularly dangerous because, at the same time that globalism is diversifying communities, nationalist AI will be built around narrow demographics. If countries close off their models and their data, pursuing research in isolation, these racial biases will propagate. Training sets of people in the United States, China, France, and India will be substantially different, and models generated in isolation in each of these countries will be heavily biased when applied in the others unless the data used to train them is representative of the global population. AI Nationalism can thus entrench racial bias both within and across borders.
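The mechanism described above — a model trained mostly on one population serving that population well and failing others — can be illustrated with a minimal sketch. Everything here is hypothetical and invented for illustration: a toy one-dimensional classifier is trained on a pool where group A vastly outnumbers group B, and the learned decision boundary reflects the majority group's feature distribution, misclassifying members of the underrepresented group.

```python
# Hypothetical illustration: a trivial threshold classifier trained on
# demographically skewed data. Group A dominates the training pool, so the
# learned boundary fits group A and misclassifies part of group B.

def train_threshold(samples):
    """Learn a single decision threshold: midpoint of the class means."""
    pos = [x for x, label in samples if label == 1]
    neg = [x for x, label in samples if label == 0]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

def accuracy(samples, threshold):
    """Fraction of samples classified correctly by the threshold rule."""
    correct = sum(1 for x, label in samples if (x >= threshold) == (label == 1))
    return correct / len(samples)

# Group A (overrepresented): negatives cluster near 2-3, positives near 7-8.
group_a = [(2.0, 0), (3.0, 0), (7.0, 1), (8.0, 1)] * 50   # 200 samples
# Group B (underrepresented): same labels sit at shifted feature values.
group_b = [(5.0, 0), (6.0, 0), (10.0, 1), (11.0, 1)]      # 4 samples

threshold = train_threshold(group_a + group_b)  # dominated by group A

print(accuracy(group_a, threshold))  # 1.0  -- perfect for the majority group
print(accuracy(group_b, threshold))  # 0.75 -- degraded for the minority group
```

The same asymmetry appears in reverse if the model is exported: a threshold trained on group B's data would misclassify group A. That is the scaling problem with nationally isolated training data in miniature.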
2. Driving Gender Bias. The leading countries in the AI arms race exhibit very different levels of gender equality, and eliminating gender bias from AI may be what sets the market leader apart from the rest. Countries vary widely in the participation of women in their workforces. Consider Finland versus South Korea, two thought leaders in education with some of the highest research spending relative to GDP. The involvement of women in the workforce and government differs radically between them: Finland ranks among the top countries in gender equality, while South Korea has one of the largest gaps in executive board participation and wages. Cultural and economic drivers of gender inequality are reflected in training data, and thus generate gender bias in the intelligent systems derived from that data. Globalized training data can decrease the overall bias, but when countries rely primarily on their own data, their own cultural inequalities with respect to gender intensify. This is problematic if, for example, a system from a less equal country starts generating inequality in a more equal country.
3. Driving Income Bias. AI reflects income inequalities, and this remains one of the major AI ethical challenges. Countries differ in consumer income and habits because of broad differences in GDP per capita and average income. These differences in income, monetary, and consumer training data are deeply problematic for model development because many of the top use cases of machine learning are in financial services and consumer analytics. Narrowing the geographic and income scope of these models by restricting training data could limit the broad application of intelligent systems reliant on financial and consumer data. Use cases constrained by limited development of AI in this space include intelligence related to branding, eCommerce, and trading.
In this “AI arms race,” focusing on eliminating bias and upholding ethical standards may be what sets a global leader in the field apart from the others. Historically, national isolationism is the antithesis of the broader drive toward modernization and globalization. Eventually, AI too will become open access and swing back into a globalized state. But the most successful countries will not necessarily be the ones investing the most in their AI infrastructure; instead, they will be the ones developing systems with the lowest bias and the greatest scalability. As countries pivot to nationalistic AI policy, they must support an equally robust legal policy focused on the major AI ethical challenges and the bias of AI. Attention must be given to addressing this bias so that the solutions developed can be scaled effectively at a global level and reflect the economic, gender, and racial diversity of the world. Currently, AI ethical challenges remain very real while policy remains underdeveloped because of the novelty of the technology’s scope and its rapid rate of proliferation. It is essential that further thought be given to developing ethics and bias policy for both AI and data science, considering the societal and privacy implications of the technology.
If you liked “AI Ethical Challenges May Drive Isolationistic AI Bias” and want to read more content from the Bowery Capital Team, check out other relevant posts from the Bowery Capital Blog. Special thanks to Aurnov Chattopadhyay for his contribution and work on this post.