Should Companies Use AI to Assess Job Candidates?

Few things seem creepier than algorithms mining our voices or photos to determine whether we should be considered for a job, and yet we’re not that far from this scenario at all. What’s more, it may not be as creepy as you think.

For starters, all organizations struggle with talent identification, which is why many complain that they are unable to find the right person for key positions, and why most people end up in jobs that are far from inspiring. Consider that even in the biggest economy in the world, where talent management practices are far more science-driven and sophisticated than anywhere else, the labor market is quite inefficient. Today in the U.S., there are around six million job seekers for seven million job openings. Even if we look at the global knowledge economy, composed of the most qualified and skilled cognitive elite (roughly the 500 million people who are on LinkedIn), job satisfaction is the exception rather than the norm: it is estimated that as many as 70% of these top talented individuals are open to other, hopefully more meaningful or interesting, jobs or careers. Elsewhere, the norm characterizing recruitment and hiring is considerably more backward, with hiring managers over-emphasizing hard skills at the expense of the more important soft skills, or using intuitive and biased hiring methods, such as the unstructured job interview, to determine who gets the job. All the while, predictive assessments and data-driven tools are largely under-utilized, and prejudice, bias, and discrimination remain widespread.

In short, if we want to make talent identification more effective — and more meritocratic — it’s important to continue to look beyond existing methods, particularly if technological innovations enable us to predict, understand, and match people at scale.

One of the major problems with the way we currently interview job candidates is that the process is largely unstructured, leaving the questioning to the whims of the interviewer. It shouldn't take much convincing to see that this is not only inefficient, but that it also leads to biased decision-making, as interviewers express and then seek to confirm their own preferences. This is where video or digital interviews are likely to help. Digital interviews can remove these limitations almost entirely. By using technology to create a highly structured and standardized interview experience, every candidate can be presented with the same set of questions and given the same opportunity to express their talent, which ultimately improves the interview's predictive utility. While digital interviews provide a fairer experience for candidates and allow organizations to access more diverse talent, when it comes to reviewing these interviews we run into the same problem: biased humans are left to make the hiring decisions.

But what if AI and machine-learning algorithms were tasked with mining the data from these videos to identify reliable connections between what people do and say during interviews and their personality, ability, or job performance? In the case of digital interviews, AI algorithms can mine a candidate's facial expressions and body language, alongside both what they say and how they say it. Mining all this data can reveal a lot about the candidate's talent, and can indicate how they might perform on the job. Although scientific research in this area is still in its infancy, there are already some interesting and promising findings. For instance, researchers have trained algorithms that mine various characteristics of an individual's voice (e.g., vocal pitch, loudness, and intensity), body movement (e.g., hand gestures and posture), or facial expressions (e.g., happiness, surprise, and anger) to accurately predict their personality profile, which we know is one of the leading predictors of job performance. Going further, researchers have mined similar signals to predict behaviors and qualities that are critical for performance: communication skills, persuasiveness, stress tolerance, hire-ability, and leadership. Further revealing how insightful this technology can be, a team of researchers used these methods to quantify the emotionality of CEOs as they spoke on conference calls, and accurately predicted their firms' future financial performance.
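To make the general idea concrete, here is a deliberately simplified sketch in Python of the kind of pipeline such research describes: a few hand-crafted vocal features (pitch, loudness, intensity) are fed to a basic classifier that predicts a binary personality label. Everything here — the features, the labels, and the model — is invented for illustration; real systems use far richer signal processing, far larger datasets, and more sophisticated models.

```python
# Illustrative sketch only: synthetic data standing in for vocal
# features extracted from recorded interviews.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Each row is one hypothetical interview: [mean_pitch_hz, loudness_db, intensity].
n = 200
X = rng.normal(loc=[180.0, 60.0, 0.5], scale=[30.0, 8.0, 0.15], size=(n, 3))

# Fabricated ground truth for illustration: higher pitch and loudness
# jointly drive a "high extraversion" label (1), otherwise 0.
score = X[:, 0] / 30.0 + X[:, 1] / 8.0
y = (score > 13.5).astype(int)

# Fit a simple classifier mapping vocal features to the personality label.
model = LogisticRegression(max_iter=1000).fit(X, y)

# Score a new candidate's interview, represented by the same three features.
candidate = np.array([[210.0, 70.0, 0.6]])
prob_high = model.predict_proba(candidate)[0, 1]
print(f"Estimated P(high extraversion) = {prob_high:.2f}")
```

The point of the sketch is not the model itself but the workflow: once behavior is converted into measurable features, predictions can be made consistently, at scale, and audited — which is precisely what unstructured human judgement cannot offer.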

AI has the potential to significantly improve the way we identify talent as it can reduce the cost of making accurate predictions about one’s potential, while at the same time removing the bias and heuristics that so often cloud human judgement. The fact that AI algorithms can detect and measure latent or seemingly intangible human qualities may lead some to be skeptical of the aforementioned findings, but it is worth noting that there are plenty of scientific studies that demonstrate that humans can accurately identify personality and intellect from just thin slices of verbal and non-verbal behavior. AI algorithms simply leverage the same cues that humans do. The difference between humans and AI is that the latter can scale, and can be automated. What’s more, AI does not have an ego that needs to be managed.

Currently, many organizations that use digital interviews do not leverage these powerful AI analytics, as their recruiters are often unwilling to accept the algorithm's recommendations and continue to rely on their own naïve judgement. Sadly, this reluctance harms both the candidate and the organization. The HR departments that base decisions on science and data, rather than on intuition or instinct, will attract and retain the best talent. Of course, we do not advocate that all hiring decisions be made by an AI system; there must always be human oversight. Instead, we believe that human decisions can be significantly improved when accurate and valid data are available to inform and shape our judgements.

Of course, it’s essential to consider the legal and ethical implications of using these innovative tech tools, just as we do with traditional assessment methods. These systems can end up learning harmful biases of their own, depending on the data they’re trained on, among other factors. Companies need to pay attention to how these systems are trained, and regularly audit them for potential bias. Clearly, there is now a difference between what we can know about people and what we should know about them, with the technical possibilities surpassing both legal and ethical boundaries. Yet, at the same time, it is still possible to deploy innovations like the ones we describe here while operating within the constraints of good codes of conduct. Candidates can be fully briefed and debriefed about the technologies being used to evaluate them, and should be invited to actively opt in. Organizations should fully protect all sensitive data, and the entire process should be transparent. In fact, it is even possible (and advisable) for candidates to own their data and results, which they may voluntarily decide to share with selected recruiters and employers — or not. While this scenario may seem more utopian than the technologies we have described, we urge recruiters and employers to consider it. After all, there is no tension between understanding job candidates well and helping them understand themselves better. Organizations — and individuals — will benefit enormously when new technologies boost their ability to place the right person in the right job.