AI is changing the way businesses operate. However, effective design and consumption frameworks need to be created and implemented to allay fears and build trust in AI tools, writes Rob Scott
You don’t have to venture far into the realm of HR technology to discover the rapid growth of AI tools silently influencing decision-making, surfacing new insights and simply making things easier to do. Whether it is analysing candidate facial expressions or voice tones during an interview, examining individual network and collaboration habits for leadership potential, monitoring employee fatigue signals or spotting who is likely to exit your company, AI is changing the way we operate.
But do we trust AI outcomes? And I mean the collective ‘we’, both HR professionals administering these tools as well as applicants and employees who are the AI subjects.
In July 2019, I ran a snap poll via LinkedIn to test general perceptions. Let me at the outset declare this would likely not pass academic requirements of a well-designed and administered survey, but LinkedIn is a global business-focused platform and the results provide a reasonable reflection of this cohort.
The initial question, on trusting AI, showed that 61 per cent of respondents had partial to serious concerns about trusting AI outcomes. A further 25 per cent completely or strongly disagreed that they trust AI outcomes, and nobody reported complete trust.
Of course, one could argue that trust is like pregnancy: you can’t be half-pregnant, so if you don’t fully trust something, you essentially distrust it. We should, however, look past this binary perspective and understand that respondents are expressing uncertainty towards something that is largely an unknown entity. People are concerned, but not necessarily against it.
As an HR or business professional, when three out of five people are not supportive of AI, it’s not something you can afford to ignore. From an HR perspective, you could be losing good candidates and alienating top talent. There is plenty of newsworthy evidence of AI bias, AI decision failure and even fake AI results to warrant concern.
One of the fundamental criticisms of AI outcomes is the inability to explain how an answer was reached.
In the same poll, we asked respondents whether they would trust AI outcomes more if the reasoning were visible. A whopping 92 per cent agreed or strongly agreed. While this is good news, the reality for most HR professionals is that they will be unable to provide this. Most HR tools using AI are commercial off-the-shelf products producing commoditised AI answers. A true AI tool needs lots of data – more than you probably hold yourself. The algorithms sit in a ‘black box’, and even if you could access the code, understanding how an answer is reached is complex.
This is why the third question in the survey – whether organisations should have an AI code of ethics – is so important. Close to 50 per cent of respondents gave this question the top rating. There is a significant amount of good work evolving in this space: many governments, technology giants and private companies are discussing and developing important principles. Key focus areas include concepts such as ‘transparent AI’ and ‘white-box’ development, which will increase credibility by allowing answers to be explained. Other areas are independent algorithm auditing, validated unbiased training data, and developers using open-source methods and code.
AI will become a powerful solution to many of our business problems. But while it is in its infancy, we need to build effective design and consumption frameworks to allay fear and build trust in these tools.
Key takeaways: HR and AI
- Three out of five people don’t trust AI outcomes. As HR professionals, we need to look for ways to address applicant and employee concerns.
- Most AI tools used in the HR space are commercial off-the-shelf products. They may use some of your data, but the results are also based on other data that you know little about.
- If you are using AI tools in HR, ensure you declare this to users and find ways to explain how the tools arrived at an answer.
- In the future, applicants, your suppliers and government agencies will ask you to show them your AI code of ethics. If you use AI, you should start working on this now.