
Why AI Needs Queer Technologists & How To Get Involved

06/07/2023
7 minutes

A big part of being a responsible developer and machine learning practitioner in the “age of AI” is understanding the dangers of biased datasets and AI systems. From the planning stage to well after deployment, it’s on developers to design equitable algorithms that protect people from discrimination. 

The LGBTQ+ community, for example, faces unique risks when it comes to AI. People in the LGBTQ+ community have largely been left out of research on biased algorithms for a variety of logistical, ethical, and philosophical reasons, explains Kevin McKee, a Senior Research Scientist at Google DeepMind. Sexual orientation and gender identity are traits that can't be directly observed, so they're often missing, unknown, or hard to measure in data. When queer people, perspectives, and issues are excluded from conversations around bias in AI systems, that exclusion can perpetuate inequities and open the door to algorithmic discrimination.

The good news? AI can also be used to dismantle the existing power structures that have historically marginalized communities. “AI, as a concept, is a radical reimagining. It is a reconceptualization of our traditional concept of intelligence, from a property just of biological brains to other forms and possibilities,” Kevin says. “True equity could similarly involve rethinking traditional concepts.” 


As a programmer or person learning to code, you have the opportunity to build equitable AI systems and tools that are inclusive and uplifting for the LGBTQ+ community. Ahead, Kevin answers some key questions you might have about how to use AI to promote algorithmic justice for the queer community and beyond. 

What do you do at Google DeepMind?

“I’m a Senior Research Scientist at Google DeepMind. My job is to conduct research studies to advance and help us understand our AI systems. I spend my time on a mix of AI development work and social psychology research, with a particular focus on designing more inclusive and cooperative AI systems.

My primary research interest lies in the social and ethical aspects of AI. Plenty of psychology research explores the various factors that lead humans to cooperate with each other. Similarly, social science offers us insights on how we can build fair approaches to distributing resources, developing consensus, and other related decisions. The key question I spend my time on is: How can we draw from these traditions to build AI that makes prosocial, cooperative, and fair decisions?”

Can you explain what “algorithmic fairness” means? 

“Discussing algorithmic fairness is always a bit complicated. Different people define it in different ways, and each definition tends to carry its own advantages and disadvantages. To me, algorithmic fairness means ensuring that we do not develop AI systems that maintain or exacerbate social inequalities. Auditing existing algorithms for bias, developing new systems to help ensure equitable outcomes, and talking with marginalized communities to understand their needs are all examples of work that falls under algorithmic fairness.”
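To make "auditing existing algorithms for bias" a little more concrete, here's a minimal, hypothetical sketch in Python. It checks one common fairness criterion, demographic parity, by comparing how often a model produces a favorable outcome for different groups. The groups, decisions, and numbers are all invented for illustration — real audits involve far more context and careful handling of any sensitive data.

```python
# Hypothetical example: auditing a model's decisions for demographic parity.
# "group" here stands in for any attribute you audit on; real audits involving
# sexual orientation or gender identity data require consent and care.

def positive_rate(decisions):
    """Share of decisions that were favorable (1 = approved, 0 = rejected)."""
    return sum(decisions) / len(decisions)

def demographic_parity_gap(decisions_by_group):
    """Largest difference in favorable-outcome rates between any two groups."""
    rates = {group: positive_rate(d) for group, d in decisions_by_group.items()}
    return max(rates.values()) - min(rates.values()), rates

# Toy model outputs for two (made-up) groups of applicants
audit_data = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],   # 75% favorable
    "group_b": [0, 1, 0, 0, 1, 0, 0, 1],   # 37.5% favorable
}

gap, rates = demographic_parity_gap(audit_data)
print(f"Favorable-outcome rates: {rates}")
print(f"Demographic parity gap: {gap:.2f}")  # a large gap warrants investigation
```

Demographic parity is only one of several fairness definitions, and as Kevin notes, each definition carries its own advantages and disadvantages, so a gap like this is a prompt for investigation rather than a verdict.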

Why have LGBTQ+ people been excluded from this area of research?

“A combination of logistical, ethical, and philosophical factors has historically excluded queer communities from algorithmic fairness research. LGBTQ+ people are often logistically excluded from fairness work when datasets fail to include information on sexual orientation and gender identity — often because data collectors do not realize that this can be important information to record. Collection of data on sexual orientation and gender identity, often considered ‘sensitive information’ in legal frameworks, can also be ethically and legally precluded when knowledge of this sort of personal information threatens an individual’s safety or wellbeing. 

In many parts of the world, queer people continue to face very real risks of discrimination and violence, and it’s important that researchers avoid contributing to those risks. Finally, collecting data on sexual orientation and gender identity raises some thorny philosophical questions. Queerness is a fluid cultural construct that changes over time and across social contexts. How effectively can we measure a concept that often defies measurement? Given this set of challenges for collecting data, it’s not surprising that progress on algorithmic fairness has been slow for queer communities.”

What are the most serious risks that AI poses to the LGBTQ+ community? 

“Modern AI systems are increasingly used in important domains including hiring, healthcare, and education. One of the primary risks posed by these systems is reinforcing existing patterns of bias and discrimination. Systems applied directly ‘out of the box,’ without any modifications, learn from prior decisions and their effects. That can include learning biases that affect minority communities. It doesn’t matter if those biases were originally introduced consciously or unconsciously: if they show up in the data used to train AI, then AI systems can end up recreating the same patterns in their decisions. Unfortunately, it’s well established that queer communities face discrimination in many of the domains to which AI is now applied. We’ll need to put in additional work to avoid ‘locking in’ bias and discrimination in these areas.

Queerness is a fluid cultural construct that changes over time and across social contexts. How effectively can we measure a concept that often defies measurement?

Kevin McKee
Senior Research Scientist at Google DeepMind

Another risk on my mind comes from the recent popularity of large language models. These models demonstrate really impressive abilities to generate language, including as chatbots, and can be helpful to users in a number of ways. They also introduce some new risks that deserve attention as we continue model development. For example, I think we’ll increasingly see language models in online spaces and social platforms. Young queer people and trans people of all ages often seek solace, inclusion, and guidance through online spaces and resources. That makes it likely that they’ll encounter chatbots powered by language models. These chatbots may come across as supportive and friendly, but they can also produce messages that are emotionally harmful or that reinforce toxic stereotypes. That would be potentially damaging for individuals in vulnerable situations.”
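To see how the "locking in" risk Kevin describes can play out, here's a small, hypothetical sketch using scikit-learn. A classifier is trained directly on invented historical hiring decisions that favored one group; with no additional work, its predictions for new, equally skilled applicants reproduce the same disparity. Every dataset, feature, and number below is made up for illustration.

```python
# Hypothetical illustration: a model trained "out of the box" on biased
# historical decisions learns to repeat them. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

# Two groups with identical skill distributions...
group = rng.integers(0, 2, size=n)
skill = rng.normal(0, 1, size=n)

# ...but historical decisions favored group 0 regardless of skill.
historical_hire = (skill + 1.0 * (group == 0) + rng.normal(0, 0.5, n)) > 0.5

# Train directly on the historical outcomes, with group as an input feature.
X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, historical_hire)

# The model reproduces the disparity on new, equally skilled applicants.
new_skill = rng.normal(0, 1, size=1000)
for g in (0, 1):
    X_new = np.column_stack([new_skill, np.full(1000, g)])
    rate = model.predict(X_new).mean()
    print(f"Predicted hire rate for group {g}: {rate:.2f}")
```

Simply dropping the group column often isn't enough, either, because other features can act as proxies for it — one reason fairness work means auditing outcomes, not just inputs.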

On the flip side, how can AI be used to empower and support the queer community? Are there any exciting applications of AI that you’ve come across?

“I’ll mention two possibilities here. The first is a very careful approach: it involves identifying opportunities that minimize the risks of modern AI systems, while still leveraging their advantages. The Trevor Project, a nonprofit that provides crisis support services to LGBTQ+ young people, is working on particularly thoughtful projects in this area. For example, they use large language models to run practice conversations between their crisis helpline workers and simulated callers. This lets the helpline team practice their skills, while ensuring that the AI interactions are closely monitored and can be stopped if the model starts producing unexpected content.

The second possibility is more theoretical and creative: can AI help us explore queer identity in new and inventive ways? For instance, in one project, several engineers and I proposed using a type of AI called a ‘generative adversarial network’ to learn traditional gender characteristics and boundaries. The network could then use what it had learned to generate combinations of characteristics that defy categorization. Our intent was to playfully demonstrate the social construction of our perceptions of different identities. This type of project can help us imagine queerness in ways that we hadn’t even thought of.”
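The project Kevin mentions isn't reproduced here, but if you're curious what a generative adversarial network actually involves, here's a rough, self-contained sketch of the core idea: a generator learns to produce attribute vectors, a discriminator learns to tell them apart from "real" examples, and the two are trained against each other. The toy data, network sizes, and training settings below are invented for illustration — the categorization-defying twist Kevin describes was the research team's own contribution on top of this basic setup.

```python
# Minimal GAN sketch (illustrative only): a generator and a discriminator
# trained adversarially on a toy dataset of binary "attribute" vectors.
import torch
import torch.nn as nn

torch.manual_seed(0)
n_attrs, latent_dim = 8, 4

# Toy "real" data: attribute vectors drawn from two stereotyped clusters.
real_a = (torch.rand(500, n_attrs) < torch.tensor([.9, .9, .9, .9, .1, .1, .1, .1])).float()
real_b = (torch.rand(500, n_attrs) < torch.tensor([.1, .1, .1, .1, .9, .9, .9, .9])).float()
real_data = torch.cat([real_a, real_b])

generator = nn.Sequential(
    nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, n_attrs), nn.Sigmoid()
)
discriminator = nn.Sequential(
    nn.Linear(n_attrs, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid()
)
opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    # Train the discriminator to separate real vectors from generated ones.
    fake = generator(torch.randn(64, latent_dim)).detach()
    real = real_data[torch.randint(len(real_data), (64,))]
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Train the generator to produce vectors the discriminator accepts as real.
    fake = generator(torch.randn(64, latent_dim))
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

# Sample new attribute combinations from the trained generator.
with torch.no_grad():
    print(generator(torch.randn(5, latent_dim)).round())
```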

What can aspiring developers and machine learning practitioners do to address the algorithmic biases that impact the queer community? 

“The first step is to engage and talk with these communities. Improving representation of the LGBTQ+ community in the tech industry is one way of achieving that. We frequently see situations where including team members who are queer (and who belong to other marginalized communities) helps to identify issues that would not have been caught otherwise.

A complement to better representation is community participation. More research and better techniques can help define and mitigate the risks that new AI systems may introduce for queer people. But how do we know which risks to prioritize, or what goals to aim for? Scientists and experts likely have good ideas, but a key source of insight during the research process should be the affected groups themselves. How can we know what queer communities need if we don’t talk with them? Engaging with the marginalized folks who might be affected by new AI systems can help us recognize what real-world harms look like and what technical solutions to develop.”

Want to dig into the ethics of AI and large language models? Start with our free course Intro to ChatGPT. Take your knowledge further with our machine learning courses like Build Chatbots with Python and Intro to Machine Learning. Then read more about the types of careers you can have in generative AI and explore our career paths to start working towards your new career. And if you’re looking for fun ways to use code to give back to the causes and communities you care about, try these Pride-themed Python code challenges.
