- Geo Atherton
How to Avoid AI UX Disasters as a Designer: Empower People with RAI
AI is like nuclear power. It can be used to power communities... or tear them apart.

This is Part 4 of a 5-Part series on how AI is revolutionizing the field of UX design, and how to keep your skills sharp. Read time ~5 min. If you haven’t read Part 3, read that first!
Last post, I explored how we’ll increasingly be using AI tools to make AI tools.
But with all the excitement about the possibilities of AI, it’s our responsibility as designers to tread carefully.
Imagine you’re a magician in a video game. You’ve created a talisman of ultimate power, with the hope of using it to help your village prosper and thrive.
...Now imagine it had no guardrails built in, and you accidentally used it to blow up your village. Or, a malicious villain stole it and intentionally blew up your village.
That would be on your shoulders.
That’s pretty much the situation we’re in as designers, and it’s not a video game.
We’re affecting the lives of real people, and whether we bring them livelihood or misery hinges on how well we’ve baked in those guardrails.
(The Facebook-enabled Rohingya genocide & Cambridge Analytica scandals come to mind.)
The animated Disney short 'The Sorcerer's Apprentice' provides a more light-hearted visual metaphor, but it gets the point across.
Mickey the novice magician creates an automated broom to carry water for him... but because he doesn't specify enough rules before launch, it spirals out of control into a flood crisis.

How can we avoid disasters like this as AI & UX designers?
The Power of RAI Frameworks

Microsoft’s public ‘Tay’ experiment is a perfect example of how having too much optimism about the AI you’re making, and not enough internet street-smarts, can lead to things going horribly wrong.
Without the proper care, your product could be spewing vulgarities, or worse.
To prevent that, Responsible AI (RAI) frameworks must move to the forefront of the designer’s mind while crafting digital experiences.
These practices aim to make sure that AI systems are designed in a way that is ethical, transparent, and accountable.
RAI frameworks, developed by organizations such as OpenAI and Microsoft, aim to mitigate the risks and negative impacts of AI and to ensure that AI is used to benefit society (like an expanded version of Asimov’s ‘Three Laws of Robotics’).
Common elements include:
Human rights: AI systems should respect and protect the rights and dignity of individuals.
Fairness: AI systems should be designed to be unbiased, and avoid discrimination based on traits like race, gender, or age.
Transparency: AI systems should be transparent in their operation and decision-making processes.
Responsibility: Those involved in the development and deployment of AI systems should be accountable for their actions.
Security: AI systems should be secure and protect the data privacy of individuals.
Safety: AI systems should be designed to be safe and not cause harm to individuals or the environment.
These frameworks can help to build trust in AI (when deserved) among the general public. But it’s important to call out that RAI frameworks are not substitutes for regulations. We need both.

The Real-World Impact
So, all of that may seem a bit dry... but the real-world problems these frameworks are trying to address are charged.
And you may recall hearing that, as of this writing, AI facial recognition is still notably bad at correctly identifying the faces of Black people.
If you’ve experimented with image generators, you may have noticed that most generated images of humans are white people (as of this writing). Unless you specify greater diversity in your prompt, that's the default.
Until the platforms get better at doing this automatically, the onus is on the prompt writer to be inclusive.
That said, on my own design team I’ve heard the perspective that this approach centers whiteness as the default.
I’ve read similar critiques of Diversity and Inclusion (D&I) education in the tech world. The line of thinking is that while D&I training is better than nothing, it is still aimed mostly at helping privileged white employees understand how to be more inclusive... which may not always be the same as addressing the root causes of inequality within the industry.
So, you could argue that relying on designers to write inclusive prompts reinforces the idea that diversity is a special request, rather than being natural and expected.
With regard to the heated discussion about AI art copyright, this also gets into some fraught territory about cultural appropriation.
There's already a track record in American history of Black art forms like Blues, Jazz, and Rap (and their associated fashion trends) getting absorbed into mainstream popular culture. But this has often been exploitative, with the original creators receiving little credit.
There are too many examples of the same pattern with Black contributions to science, technology, and math.

It’s a scary thought to consider racial biases getting turbo-charged by AI as it continues to grow exponentially.
Or consider AI scraping work from artists of color and using it to make new products without consent or compensation.
This subject gets even more intense when you think about the possibilities like hate groups using AI to generate Nazi propaganda imagery.
As product designers, we want to embed guardrails to prevent AI from being used that way. But attempts to filter out racial content also run the risk of tipping the scales too far. We don't want outright erasure of communities or ethnicities. Balance and fine-tuning will be key.
Ultimately, while it’s important for designers to be thinking about D&I while generating content, it’s also crucial for AI platforms and the whole tech industry to work towards greater equity & inclusion.
RAI frameworks can help with that.
Knowing that AI’s powers could be used for great evil, let’s also imagine how AI tools can be used for good, provided we do our jobs well.

Look on the Bright Side
AI may yet prove crucial to solving some of the most pressing issues of our time, like the climate crisis. As more sophisticated AI tools get integrated into the fields of science, medicine, and manufacturing, the rate of invention will surge.
Imagine a Cambrian explosion of new medicines that add life to years and years to life. Imagine new fuels and energy sources, innovations in battery technology. Green cities.
If the better angels of our ethics are encoded into the tools we make, it could dramatically improve life on Earth.
Product teams will bear the burden of integrating RAI ethics into actual products. So ask tough questions! Probe and push back when necessary, and always be asking: ‘Who are the stakeholders? How might this harm them? How can we mitigate those harms?’
Asking those questions turns potential harms into problems we can solve for.
The stakes are high, and the work is important.
As Spider-Man teaches us, with great power comes great responsibility!
If we do it right, the tools we design will be a force for good in the world, and usher in a beautiful future.

As exciting as that is...
After the previous posts in this series, you may be feeling daunted by the rapid pace of change. So much new information to absorb, so many new tools and skills to keep pace with!
As designers, we’re already pretty used to the tech industry changing quickly... but AI is like that on steroids.
It can be stressful trying to keep up and avoid falling behind the curve.
So next post, we’ll draw inspiration from a master of meta-learning.