OPINION: Navigating algorithms' insights into our sexual orientation

by Guillaume Chevillon | ESSEC
Monday, 14 August 2023 10:28 GMT

A man sits with a laptop in St James's Park, London, Britain, March 24, 2022. REUTERS/Peter Cziborra


* Any views expressed in this opinion piece are those of the author and not of Thomson Reuters Foundation.

Algorithms can be a force for good as well as bad, particularly for LGBTQ+ people

Guillaume Chevillon is a professor at ESSEC Business School, and Academic Co-Director of the ESSEC Metalab for Data, Technology & Society.

Since the United Nations created the post of High Commissioner for Human Rights in 1993, human rights have come a long way. The High Commissioner's role is to ensure that universal rights are not restricted on the basis of nationality, gender, color, religion, sexual orientation and more. However, the growth of technology has created new issues that need to be addressed.

Although many countries now have ambassadors or advisors dedicated to the rights of LGBTQ+ people, more is needed.

The implementation of the General Data Protection Regulation (GDPR) brought some improvements to people’s digital privacy, but social networks themselves should be the next mountain to climb. These networks are now meshed into our lives, and with AI and the “metaverse” coming next, it is high time we addressed how hate and discrimination cross country borders.

As seen in the 2022 Social Media Safety Index, published by GLAAD in the US, LGBTQ+ people face specific issues online that need more targeted responses. While the internet can provide an incredibly useful space for young LGBTQ+ people to educate themselves, forge friendships and more, there are also dangers to be considered.

With more people comfortable exploring their sexuality online, it is easy to forget the risk posed by all-seeing algorithms. Everyone knows the feeling of clicking on a link once, only to find similar photos, websites or videos suggested all over their screens for hours or days afterwards. The algorithm tailors itself to show you what it believes you want to see. Similarly, it will try to avoid showing you something that may not be well received. So if someone engages with content depicting a same-sex relationship, similar content is likely to appear time and time again. Our algorithms are eager to please!
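To make that feedback loop concrete, here is a deliberately minimal sketch in Python. It is purely illustrative: the class, the topic labels and the weighting rule are invented for this example and do not describe any real platform's system.

    from collections import defaultdict

    class TinyRecommender:
        """Toy ranker: every click raises a topic's weight, so similar items resurface."""

        def __init__(self):
            self.weights = defaultdict(float)  # inferred per-topic affinity

        def record_click(self, topic):
            # Each engagement nudges the inferred interest upward.
            self.weights[topic] += 1.0

        def rank(self, candidates):
            # Topics the user has clicked before float to the top of the feed.
            return sorted(candidates, key=lambda t: self.weights[t], reverse=True)

    feed = TinyRecommender()
    feed.record_click("same_sex_couples")  # a single click...
    print(feed.rank(["sports", "same_sex_couples", "cooking"]))
    # ['same_sex_couples', 'sports', 'cooking']  ...and similar content now leads.

Even this toy version shows why one interaction can shape everything that follows: the ranking only ever looks at past engagement.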

This means that the algorithms can, in effect, infer users’ sexual orientation. Although sexual orientation is not an explicit category used by social media algorithms, ads on social networks can implicitly target LGBTQ+ identities, showing recommendations aligned with users’ interests and, in the process, categorizing their assessed sexual orientation. This is a clear negative consequence of targeted advertising.

At the same time, this classification needs to be monitored closely, because bias can creep in through what is known as “algorithmic discrimination.” The systems responsible for targeted advertising adjust their analysis based on past behavior, so they can reinforce prejudices. For example, if some users find images of two men or two women kissing offensive, a poorly designed algorithm may remove that image for everyone.
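The moderation failure described above can be sketched the same way. The rule, threshold and figures below are hypothetical, but they show how a filter that only counts complaints ends up hiding content for every user because of a vocal minority's reports.

    def should_hide(report_count, view_count, threshold=0.02):
        # Naive rule: hide a post for ALL users once reports exceed a fixed share
        # of views, without asking whether the reports reflect prejudice or policy.
        return view_count > 0 and (report_count / view_count) > threshold

    # 300 complaints out of 10,000 views of an image of two men kissing is enough
    # to remove it platform-wide under this naive rule.
    print(should_hide(report_count=300, view_count=10_000))  # True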

While it is certainly a very tough thing to get right, algorithm creators owe it to their users to work with national and international bodies to give everyone real control over their data and to ensure that social networks respect individual rights. They need to do this with full legal oversight, and by providing truly understandable information to users. We can then imagine a world where social networks explain to users which of their past actions led to specific recommendations. We would then have a much better understanding of how our data is being used, giving us the means to really control our analyzed profiles.

Of course, more should be done to control the use of implicit categories such as sexual orientation. The solution does not lie solely with moderators or algorithms. It lies with social networks themselves.

That does not mean that major social networks, such as Meta/Facebook, should be left to provide the solution on their own. But there are clear avenues that social networks can explore to tackle discrimination online. The 2021 GLAAD report, for example, proposed increasing the visibility and impact of users with a benevolent outlook – those acting as beacons illuminating and guiding the decisions of others.

The fight against hate and discrimination has always had to be a global effort, since it does not stop at a country’s borders. While the digital world unites us, it can also be weaponized, and algorithms are a good example of that. They are also the tools with which to fight back.

Openly is an initiative of the Thomson Reuters Foundation dedicated to impartial coverage of LGBT+ issues from around the world.
