* Any views expressed in this opinion piece are those of the author and not of Thomson Reuters Foundation.
AI has its own biases and limitations, but can be used to help improve diversity in hiring
Jack Mizel is founder of Pride 365, an inclusion certification and business consultancy
In an ideal world, all businesses would have solid diversity and inclusion initiatives. These aren't always easy to navigate, though, and a significant number of those implemented fail to achieve their objectives.
Artificial intelligence (AI) is a complicated but necessary inclusion tool. Bias, particularly against LGBT+ people, distorts decision-making both directly and indirectly, in hiring and in the workplace.
Companies around the world are using AI to create more diverse hiring processes and to build more engaged businesses and learning environments.
AI is used predominantly in interview processes, helping to assemble more diverse panels which, in turn, attract more diverse candidates. Some human resources (HR) professionals use matching systems to source higher-quality candidates.
Tools are used to write unbiased job descriptions and to analyse language so that it resonates more universally. AI is also being used to help close the pay gap by removing human emotion from salary discussions, basing decisions on numerical calculations rather than personal judgement.
Results, however, vary depending on how each company adopts AI.
According to IBM, only a third of HR professionals believe their organisations have the statistical capability to check whether their recruitment processes are bias-free. AI can support HR's efforts here, and many professionals say they are optimistic and encouraged by what it can offer.
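To give a sense of what that statistical capability involves, one common check compares selection rates between applicant groups, as in the "four-fifths rule" used in US employment analysis. The sketch below is a hypothetical illustration only; the group labels and numbers are invented, not drawn from any real hiring data.

```python
# Hypothetical illustration: compare selection rates between two
# applicant groups and flag possible adverse impact using the
# "four-fifths rule" (an impact ratio below 0.8 warrants review).

def selection_rate(selected, applicants):
    """Fraction of applicants from a group who were selected."""
    return selected / applicants

def impact_ratio(rate_group, rate_reference):
    """Ratio of a group's selection rate to the reference group's rate."""
    return rate_group / rate_reference

# Made-up numbers for illustration only.
rate_a = selection_rate(30, 100)   # reference group: 0.30
rate_b = selection_rate(18, 100)   # comparison group: 0.18

ratio = impact_ratio(rate_b, rate_a)
flagged = ratio < 0.8  # True here: the gap in selection rates warrants review
```

A check like this does not prove or disprove bias on its own; it simply flags disparities that a human reviewer should then investigate.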
We must realise, though, that AI isn't there to do the job for us. It has its own biases and limitations, and needs to be nurtured. Companies use AI algorithms, including deep learning, to find patterns in their data, identify weak spots in their diversity and inclusion programmes, and understand why those weaknesses exist. To create a level playing field for LGBT+ candidates, we must first identify the cause of the problems.
Diversity for diversity's sake does no one any favours. Treating the technology alone as the solution dampens efforts and limits progress. It's like acing a test because you were given the answers, then being asked to give a speech on the subject: you can hold it together for a while, but people will see through the gaps.
Over time, with more of our input, AI's capabilities will expand as it learns from data, environments and situations. The data AI provides can and should be used by professionals to continually identify gaps, inconsistencies, management issues and inequality within the workplace. We then need to use these results to deliver significant improvements around diversity and inclusion, creating better workplaces and societies.
We must always be cognisant of the importance of human connection, however. AI bias is hard to fix because it is embedded in processes and social context: humans cannot view the world without bias, and that limits our ability to build unbiased machines. We need to create algorithms, and continually analyse data and produce reports, to hold companies accountable. Humans respond to data, and only when this happens can we start to create better dialogues around inclusion and fairness in working environments.
It's an ongoing process, with as many challenges as solutions, and no easy fix. The human and tech elements are equally important in assessing and improving the ever-developing data. It's an example of humans and machines working together, each influencing the other, to create more inclusive pathways for the LGBT+ community.