Artificial intelligence (AI) is increasingly being used in commercial, government, and corporate systems that provide services to, and monitor, billions of people around the globe. As the use of AI systems becomes more widespread, we are beginning to see how this technology can exacerbate discrimination against marginalized or vulnerable populations. AI will undoubtedly affect social and economic inclusion, risking the amplification of inequality at every level. To address the emerging challenges of AI, we need to develop solutions from the standpoint of inclusion.
It’s time to take inspiration from inclusive movements, such as the disability rights advocacy groups that helped shape norms and rules around building design to ensure more equitable access for citizens of all abilities, to build the AI future we want. This is particularly crucial now, as conversations about AI and inclusion are already underway. However, these important conversations run the risk of being shaped by an ‘artificial intelligentsia’ that discusses inclusion without truly including the voices of the marginalized people who stand to suffer most in an AI ecosystem that did not consider their voices in its design.
The full article can be found in Data & Society: Points.
Photo: junaidrao, Flickr