Ethical and inclusive AI – a powerfully positive workshop

Christine Hemphill

April 7, 2024

Two people stand beside a large screen. One is a brown man with cool hair and funky clothing, the other is a white woman with bobbed blond hair wearing designer black. On the screen it reads "Technology is too important to be left to technologists. We need everybody to hold technology to account" Stephanie Hare

A few weeks ago Breandan and I were invited by Rama Gheerawo and the team at Tata Consultancy Services and The Helen Hamlyn Centre for Design to an incredible afternoon workshop on Inclusive AI.

A group of wonderful and wonderfully diverse individuals (in ways you can and cannot see) working in smaller groups at tables and on boards

The session was well designed, with a brilliant group of remarkable leaders from many interdisciplinary fields: artists and poets, educators, activists and advocates, designers, researchers, technology strategists, applied technologists, data scientists and commercial leaders.

They also represented a powerfully diverse range of professional and personal experiences including people with many characteristics that are historically marginalised, such as a diverse range of abilities and disabilities, ages, backgrounds (socio-economic, regional, linguistic and cultural), identities such as race, gender and more.

The afternoon provoked meaningful dialogue and small group interaction around inclusive and ethical AI. It broadened our understanding and provided the opportunity for deep consideration by bringing people together with very different, distinct, informed perspectives. It enabled learning through open dialogue, and synthesis into actionable concerns and future opportunities for progress.

Rachael Mole described the event beautifully in her LinkedIn post. I have wanted to add my thoughts on this excellent event for a while but needed to let them settle in my mind into usefulness to others first. A huge thanks to the organisers for such a positively provocative, helpful and hopeful event.

Here are my takeaways from the session. These are not the session outputs, just some of my thoughts that followed it. They are anything but comprehensive. Hopefully they may be useful when added to many others.

 

Some thoughts:

 

1/ Ethical AI is inclusive AI

When people talk about ethical AI (which thankfully they are doing a lot), those of us who work in and with communities that are marginalised today need to keep asking, “ethical for whom?”.

We need to keep requiring both direct outcomes and secondary impacts to be measured for bias against groups with a wide range of characteristics including various disabilities, genders, ages, ethnicities, races, cultures, sexual orientations, socio-economic backgrounds, locations and more. We need to ensure people with expertise and experience across a broad range of humanity impacted by a solution are involved throughout. This will be hard and require new ways of engaging, informing and making decisions. It is however critical to equitable success.
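Measuring direct outcomes for bias against groups can start very simply. Here is a minimal sketch, with entirely hypothetical data and group names, of one common check: comparing the rate of favourable decisions each group receives and flagging the gap between the best- and worst-treated groups.

```python
from collections import defaultdict

def selection_rates(outcomes):
    """Compute the favourable-outcome rate per group.

    `outcomes` is a list of (group, favourable) pairs, where
    `favourable` is True when the person received the positive
    decision (e.g. a loan approval or an interview invitation).
    """
    totals = defaultdict(int)
    favourable = defaultdict(int)
    for group, ok in outcomes:
        totals[group] += 1
        if ok:
            favourable[group] += 1
    return {g: favourable[g] / totals[g] for g in totals}

def parity_gap(rates):
    """Gap between the best- and worst-treated groups' rates."""
    return max(rates.values()) - min(rates.values())

# Hypothetical audit data: (group, received favourable outcome?)
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]
rates = selection_rates(decisions)
print(rates)              # {'group_a': 0.75, 'group_b': 0.25}
print(parity_gap(rates))  # 0.5 — a large gap worth investigating
```

This is only one lens (statistical parity); a real audit would also look at error rates per group, intersections of characteristics, and the secondary impacts mentioned above.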

 

2/ Inclusive AI is not the default. Exclusion is

Today we live in a society where structurally and socially the opportunities, expectations, datasets, interactions, employment practices, citizen and commercial solutions exclude many marginalised groups. We are not starting at neutral. This means we need very conscious focus and effort in order to design, create and manage solutions that are inclusive and equitable.

Exclusion is our current default state which is woven deeply, and often imperceptibly, into the fabric of our society.

 

3/ AI is a tool. It is morally and outcome-indifferent

AI is a tool, like a fancy spanner. In essence, AI is a set of zeros and ones that do what they are told by a creator or team of creators, influenced by the data they are told to ingest and extend in specific ways, and used by people to solve problems differently from how they are solved today, without AI. It is not, of itself, either harmful or good. How it is designed, created, applied and managed will make the outcomes of its use good, bad or indifferent.

“Ethical and inclusive AI” is not a conversation about the tool but about the considered, considerate, equitable and appropriate use of it by humans. This is especially true when we are in this early learning stage and don’t quite know how to predict and best manage the implications of using it – which leads on to my next takeaway…

 

4/ When you invent ships you invent shipwrecks

We need to build our understanding of the potentially negative implications of AI use at the same rate we build our understanding of the potential positive influence of applying it in various solutions and contexts.

I loved the quote shared during the session by Ve Dewey, MBA FRSA

“When you invent the ship, you also invent the shipwreck… and when you invent electricity, you invent electrocution… every technology carries its own negativity, which is invented at the same time as technical progress”

Paul Virilio, French cultural theorist and philosopher

A purple and white screen that reads "When you invent the ship, you also invent the shipwreck..." as per the quote in the text by Paul Virilio

This will take a cross-disciplinary and diverse set of people who are really focused on noticing, assessing and countering AI’s shadow along with its light. We need to be actively predicting, understanding, designing avoidance practices, guardrails and guidelines, and measuring and mitigating the direct and indirect potentially harmful implications and outcomes of AI-infused solutions on individuals, communities and societies.

It will require diversity of perspectives and experiences in both engagement (who is involved) and decision-making power (who decides what to progress, and how). This is true across every aspect of AI-infused consideration and application – from where to apply AI, to how to design AI-enabled solutions, the assessment and provision of data sources, the development and application of the solutions, and the monitoring of outputs and outcomes.

 

5/ There is significant opportunity in AI for good

We need to not be scared of AI. Like fire, it is powerful and can provide many transformative benefits when applied within clearly defined confines. Without boundaries it can burn us and our house down. We need to look for and harness the good it can generate when used in well bounded ways, especially to solve for significant unmet needs that exist today in many areas.

  • AI-enabled solutions can more efficiently micro-tailor environments and assistive technologies to suit individuals’ unique or differentiated needs. A few examples:
    • Combining computer vision with audio technologies can allow blind and partially sighted people to have a space, or physical and digital products, described to them in more accessible and meaningful ways, along with navigation and wayfinding solutions that are more accurate and adapted to their needs and preferences.
    • Equally, AI is already used, and can be further used, to absorb and interpret audio information and convert sound into meaningful visual information (text, visual soundscapes or descriptions), or into more targeted audio, such as within an individual’s hearing range. This helps people who are deaf, have hearing loss or are in an environment that prevents them from hearing.
    • AI can make reading, writing or condensing information easier for those who think differently or find typing difficult.
    • AI can help physical mobility aids such as wheelchairs adapt better to the environment they are in, for example by changing suspension levels. Increasing vehicle autonomy will allow greater independence for more people later in life or with a diverse set of access needs.
    • AI can notice physical patterns, including heart rate, body movement or temperature, and even genetic or chemical composition, faster and more broadly than humans can. We can identify illness, disease or simply a fall more easily than we do without AI. This could make ageing in place, predictive health, and preemptive or early interventions all much easier, safer and more effective, improving physical health.
  • AI can monitor AI. We can use the problem to become the solution: human minds may find it difficult to match the speed and breadth of AI, but AI can be trained to look at outcomes and analyse them for inequality.
  • AI can monitor people, systems and policies. We can train AI to be the good cop looking for patterns in outcomes and impacts that expose deliberate or inadvertent harm, inequity and/or discrimination helping make unseen patterns of advantage and disadvantage more measurable and visible.
  • There is increasing consideration of and attention on distorted datasets, as AI can amplify their impacts. Many datasets today contain significant “exclusion footprints”, as we call them here at Open. Today’s standard research skills, practices and tools exclude many people, distorting datasets and therefore the designs and solutions created from them. Even where a broader range of people is included, many important characteristics are not well understood or tagged in ways that let us correlate differentiated experiences with the characteristics that drive them, such as disabilities. This focus on accurate data, and the debates now occurring because we are (rightly, in my opinion) fearful of the implications of these distortions, may actually improve research and insight practices and create better underlying data and insights.
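One way to make an “exclusion footprint” visible is to compare each group’s share of a dataset against its share of the population the dataset claims to represent. Below is a minimal sketch; the panel composition and the population share figures are purely hypothetical, for illustration only.

```python
from collections import Counter

def representation_gaps(sample_groups, population_shares):
    """Compare each group's share of a dataset with its share of the
    population it is meant to represent. Negative gaps flag groups the
    dataset under-represents."""
    counts = Counter(sample_groups)
    n = len(sample_groups)
    gaps = {}
    for group, expected_share in population_shares.items():
        observed_share = counts.get(group, 0) / n
        gaps[group] = observed_share - expected_share
    return gaps

# Hypothetical research panel of 50 people, only 4 of whom are disabled,
# checked against an illustrative (not real) population share of 24%.
panel = ["disabled"] * 4 + ["non_disabled"] * 46
shares = {"disabled": 0.24, "non_disabled": 0.76}
print(representation_gaps(panel, shares))
```

Here the disabled group’s share of the panel (8%) falls 16 percentage points below its assumed population share, quantifying an exclusion that would otherwise stay invisible in the downstream designs and solutions.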

 

6/ AI is already doing both good and harm

We need to learn and adapt fast to optimise the good and minimise the harm of AI usage.

AI is here. It already influences many services and solutions we experience today from a simple Google search to our workplace opportunities or insurance costs. It is also being rapidly introduced to influence and inform outcomes in more retail, transport, technology, educational, employment, health, justice and other commercial, social or government services, solutions and spaces. The speed of uptake is increasing, especially with packaging of AI into solutions that are much easier for more people with less skill to use.

AI is increasingly impacting more lives, and more elements in our lives. Sometimes it is doing its job well, and other times very poorly when considered through an ethical and inclusive lens.

Misaligned or misunderstood costs relative to benefits

One of the risks of AI is that the benefits may be clearer and more understandable than the costs and risks.

Additionally, the benefits may accrue to some while the costs or negative implications are incurred by others. In economics this is called an “externality”: a cost (or benefit) that falls on a third party who did not choose to incur it. Externalities are a natural place for government and/or industry intervention, because they weaken the market’s incentives to learn and self-correct over time.

Calling everyone – AI needs us all

No small group, organisation, industry or government will be sufficient to predict, positively influence, monitor and manage AI inclusively and ethically.

However, a small group can spark action across other individuals and groups, form larger groups and exert outsized influence by:

  • identifying and asking the important questions at the right time
  • noticing impacts and implications and sharing these
  • lobbying for sensible controls on inappropriate use, or sensible support for valuable use
  • connecting people with one part of a solution (or awareness of a problem) to others with another part.

We will need as many individuals as possible to play a smaller or larger part as just one element of what they do – as designers, data scientists, commissioners of research, researchers, participants in research, developers, activists, community leaders, lobbyists, government leaders and legislators, corporate leaders and managers, product owners, service providers, lawyers, doctors, journalists, artists, ethicists, philosophers, parents of children, older people bringing their wisdom and experience and young people bringing their perspectives looking to positively influence their future. There is a role for us all, particularly for people with more differentiated experiences and needs.

We will need to make space for more open dialogue and more broadly shared influence than our traditional tech development and deployment models have tended to provide. Legislators and regulators will need an active role too. The EU has led the way to date, adopting the EU AI Act on 13 March 2024.

We can’t let the extent of this or its importance overwhelm us. One small meaningful action at a time will help the weight of the change to be carried forwards by many people. This will keep us positively progressing. We will need to both proactively and reactively address ethical or equitable implications of AI as they are identified.

“Never doubt that a small group of thoughtful, committed citizens can change the world; indeed, it’s the only thing that ever has.”

Margaret Mead, Anthropologist

Juggling a range of important priorities

There are a range of priorities to be juggled when it comes to AI deployment. Between us we need to keep our eyes on each to ensure that solutions enabled and informed by AI are delivered and used in ways that are ethical and equitable across their full extent of application.

Here are a few I am aware of. I am sure there are many more.

  • Explainability of outcomes
  • Privacy protections – in both the collection and usage of data
  • Equitable opportunities, benefits and harms of solutions
  • Efficiency – overall once all costs and effort are taken into account relative to the outcomes
  • Business, government, social and community specific risks
  • Robustness to unexpected shocks (such as a pandemic)
  • Monitoring and guard rails (legislation, regulation or other) with legal or commercial consequences

 

So what can I do?

Let me start by saying – honestly, I don’t know. AI is moving faster than I can keep up with, and there is far more about it that I don’t understand than I do. However, here are some things that I feel may be helpful.

 

1. Get curious

Ask questions such as: How might it…? Where could it…? Where shouldn’t it…? What are the underlying assumptions that drive this solution? What data will you use to…? Where did it come from? Who may be impacted (positively or negatively) if … happens?

Bring your unique perspectives and experiences to inform the questions you ask. Be demanding of good quality answers in spaces where the answers will impact you.

2. Get involved

Play, learn, engage with AI and engage with others about AI. Get more comfortable thinking and talking about it so that people just a few steps ahead can’t bamboozle you with rubbish responses when you ask the good questions above.

3. Get active

Work out how you can, within your role and unique set of perspectives in society (professional and personal), engage with others to positively influence the debate and inform the ethical and/or legal constraints and priorities of AI usage and its outcomes.

 

Over to you.

What do you think we should be doing? What are you excited or worried about regarding AI? Does the above resonate with your thoughts or seem to be missing the point? Don’t worry about offending me – I am sure it is missing many if not most important points! It is such a big topic.

Please contact us and share your perspectives so that we are being positively challenged and informed. You can help us usefully engage in this important topic, aided by your perspectives.

The one thing in all this which is clear is that AI is already, and will further, influence us all. Whether it does so equitably, inclusively and ethically is up to us.
