AI for Good? AI for Harm?

By Christine Hemphill | 21st November 2021

AI can be a powerful enabling tool or a dangerous discriminatory tool, depending on how it is used, who is creating the algorithms and for what purpose. Anything that is not OK for a person to do is not OK for an algorithm to do. We need more transparency in AI use, especially by governments but also by commercial organisations, to ensure the approach being applied is ethical and will not erode rights or deliver worse outcomes for marginalised communities that are often subject to discrimination.

AI is the future. We need to harness it and make sure that it is taking us to a future we want, one that supports rather than erodes equity and social sustainability. However, people are biased, and those biases are embedded in the data of our past and carried by some leaders of the present. That means we need a proactive rather than reactive approach to address this.
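To make "bias embedded in data" concrete, here is a minimal sketch in Python of one common audit step: comparing selection rates between groups and applying the conventional "four-fifths" disparate-impact heuristic. The data, group labels and threshold below are illustrative assumptions, not figures from any real system.

```python
# Illustrative sketch of a group-disparity audit.
# All data here is hypothetical.

def selection_rate(outcomes):
    """Fraction of candidates who received a positive outcome (1)."""
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of the lower selection rate to the higher one.
    Values below ~0.8 are a conventional red flag for adverse impact."""
    low, high = sorted((selection_rate(group_a), selection_rate(group_b)))
    return low / high if high else 1.0

# Hypothetical screening outcomes: 1 = advanced, 0 = rejected.
disabled_applicants = [1, 0, 0, 0, 1, 0, 0, 0, 0, 0]
other_applicants    = [1, 1, 0, 1, 0, 1, 0, 1, 1, 0]

ratio = disparate_impact_ratio(disabled_applicants, other_applicants)
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Warning: potential adverse impact - review the model and its data.")
```

A check like this does not prove or disprove discrimination on its own, but it shows how historical bias in training data becomes measurable, and why auditing has to be proactive rather than reactive.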

Examples raised as potentially or clearly discriminating against people with disabilities, or against people with other marginalised and legally protected characteristics, include hiring and recruitment tools, government benefits reviews, justice system outcomes and more.

On the flipside, many leading organisations are looking to use AI specifically to create solutions that improve outcomes for disabled and other underserved communities, or to help solve unmet needs and address wicked problems our societies face. These include Microsoft's AI for Accessibility programme, Google's AI for Social Good, the US Department of Labor's ableist language detector, and many more.
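To illustrate the underlying idea of a language detector (this is a deliberately simple, hypothetical sketch, not the Department of Labor's actual tool, whose implementation is not described here; the term list and suggestions are illustrative only):

```python
# Hypothetical term-flagging sketch. Real detectors are far more
# sophisticated; this only shows the basic shape of the idea.

ABLEIST_TERMS = {
    "crazy": "surprising / intense",
    "lame": "disappointing / uninspiring",
    "crippled": "hindered / slowed",
}

def flag_ableist_language(text):
    """Return (term, suggestion) pairs for flagged words in the text."""
    words = text.lower().split()
    return [(t, s) for t, s in ABLEIST_TERMS.items() if t in words]

for term, suggestion in flag_ableist_language(
    "That deadline is crazy and the tooling is lame"
):
    print(f"Consider replacing '{term}' with '{suggestion}'.")
```

Even a toy version like this shows how the same pattern-matching power that can encode bias can instead be pointed at surfacing and reducing it.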

We need to do better to ensure AI is not used maliciously or inappropriately, and instead harnesses its power to address significant equity gaps in new and ever more efficient ways.

I count myself in that "we", as Open Inclusion can share insights from disabled and older communities on how poorly (or well) designed solutions affect them, and can examine the solutions themselves to provide design and innovation recommendations that address gaps and protect inclusive value.

Are you included in this "we" too? Please step forward, get curious, learn, engage and get involved in this important debate to help progress our maturity in ethical AI usage. Do you design solutions that leverage AI? Do you develop them? Are you a data scientist or user of big data to help inform decisions? Are you a purchaser or user of systems that have AI embedded within them? Do you make decisions based on AI-infused insights? Do you know if you do? Do you represent a community or communities impacted by poor design or outcomes? Do you lead teams that are doing any of the above? Are you a social entrepreneur looking for interesting opportunities arising at the intersection of social needs and emerging tech? Are you an investor in, or advisor to, firms using AI? Are you an ethicist, historian, data privacy or security expert? We are all needed.

Now is the time to have these important discussions and create guard rails and guidelines that make it easier to use AI ethically and valuably for all, and harder to use it in ways that deliberately or inadvertently discriminate between people based on protected characteristics, or that create any other long-term degradation across or between communities.

We need to catch, celebrate and learn from good practices, and call out, legislate against and improve on practices that create long-term social and ethical harm. The time is now.

Image credit: Unsplash, Michael Dziedzic