By Richard Collins - 10 May 2023
AI is here to stay in recruitment. Its adoption has caught on like wildfire, with 65 per cent of businesses using it in the hiring process. It certainly has its benefits, allowing employers to automate candidate screening and cut the time taken to filter through a shortlist.
But arguably, AI is like an unruly gunslinger employed by recruiters, giving them the upper hand over jobseekers in the Wild West of recruitment town. Meanwhile, those looking for their next position have continued to search for jobs, create CVs and write cover letters manually. As a result, many might feel they're missing out on the top jobs.
To level the playing field, jobseekers have recruited their own AI gunslinger in the form of ChatGPT and other generative AI tools. This is powerful, free and easy-to-use software that can generate all kinds of content, including tailored CVs and covering letters. It's been all over the media, and even schoolchildren are using it to write their homework.
The problem is, both sides now have gunslingers to fight their battles. As any Western fan knows, this scenario leads to escalation and shoot-outs. If a recruiter can race through the hiring process and a jobseeker can apply to many ads equally fast, you end up in a situation where both sides are battling to gain control.
The real problem: a broken system
The real problem here isn’t the AI per se. It’s the fact that there’s no sheriff to control the Wild West. It’s a chaotic, broken system without the rules needed to function correctly. To explain, let’s consider how recruitment normally works. It starts with a job ad from an employer. These are often badly written and poorly targeted. This is problem one.
This leads to an application from a jobseeker with a CV and a covering letter. The CV was invented 400 years ago and has changed little since. CVs come in all shapes and sizes, with 63 per cent containing misinformation. They also paint a picture of social status, listing schools, universities, connections and background rather than sticking to skills. This introduces bias into the situation. Problem two.
Then there's the screening process, where the advertiser makes judgements about the applications based on skills, but also on other, less relevant factors. Problem three. Then an interview is arranged, where more judgements are made by either side about the other – often based on gut feel and cultural fit. Problem four.
Finally, references are taken up by the employer to see how other people felt about the applicant – and to check whether they're telling the truth. Problem five. It's a process full of luck, misinformation, misunderstanding and bias – a bit like a frontier town lacking the rules of modern society. The impact is that a third of those hired leave within 90 days.
Bring in the gunslingers
Add AI to the mix and things get worse. Employers task AI with automatically writing job ads and placing them. The targeting isn't any better; there are just more ads to see, in more places. In response, jobseekers apply at scale using generative AI. According to McKinsey, the software is susceptible to error, at times producing inaccurate responses to queries. There's currently no built-in filtration process to catch or question this misinformation.
The employer then has the Herculean task of wading through piles of potentially incorrect CVs. So, they bring in AI again, which reinforces and industrialises bias. For example, in 2018 Amazon's machine learning (ML) specialists uncovered that their recruitment software discriminated against women.
The company's experimental hiring tool used AI to give job applicants a five-star rating, having observed patterns in CVs submitted to the company over a 10-year period. Most came from men, thanks to a historic dominance in the industry. As a result, Amazon's system taught itself that male candidates were preferable, penalising CVs that included the word 'women's'.
Then the highly subjective interview takes place, followed by referencing. Here, AI is again used to dig through hundreds of databases to get every detail it can about the applicant in an automated process. There's no regard for relevance to the job, let alone privacy.
Overall, we now have a system that has increased speed but not solved the inherent problems of the hiring process. It may well have made them worse.
The Web3 sheriff comes to town
As employers and jobseekers continue to engage in an ever-faster, AI-fuelled Wild West, a solution’s needed to calm things down. A sheriff that can lay down the law and ensure the gunslingers keep in line.
Enter Web3 technologies such as digital identities, decentralisation and verifiable credentials. These have the power to fix the issues inherent in recruitment and allow AI to work within a trustworthy and fair framework.
Web3 could provide jobseekers with a smart CV held securely on their phone in a decentralised app-based wallet. This can grow alongside their career and would hold verified proof of qualifications from education and skills bodies.
It would remove the extraneous information that often gets into traditional CVs, and cut out inaccuracies or misinformation thanks to the verification process with assessors. In short, it wouldn’t allow people to fib about what qualifications they have, which can be disastrous given recent news about a woman who faked a medical degree certificate to work as a psychiatrist for more than two decades.
This smart CV would be easy to share and would allow employers to find jobseekers based solely on the skills needed, rather than identity. Meanwhile, employers would have access to an open, skills-based hiring platform, built on verifiable credentials.
Instead of having to publish job ads, employers would be able to approach relevant, verified candidates. And because qualifications and skills are authenticated by awarding organisations, there’s no need for referencing.
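To make the verification idea concrete, here is a minimal sketch of how an employer might check a credential issued by an awarding body. All names here are hypothetical, and the signing scheme is deliberately simplified: real verifiable-credential systems (such as those following the W3C Verifiable Credentials model) use public-key cryptography rather than a shared secret, which stands in below purely to show the tamper-evidence principle.

```python
import hashlib
import hmac
import json

# Hypothetical shared secret standing in for an awarding body's signing key.
AWARDING_BODY_KEY = b"demo-secret-key"

def issue_credential(holder: str, qualification: str) -> dict:
    """Awarding body creates a claim and attaches a signature over it."""
    claim = {"holder": holder, "qualification": qualification}
    payload = json.dumps(claim, sort_keys=True).encode()
    signature = hmac.new(AWARDING_BODY_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "signature": signature}

def verify_credential(credential: dict) -> bool:
    """Employer recomputes the signature and checks it matches exactly."""
    payload = json.dumps(credential["claim"], sort_keys=True).encode()
    expected = hmac.new(AWARDING_BODY_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, credential["signature"])

cred = issue_credential("A. Candidate", "BSc Computer Science")
print(verify_credential(cred))   # genuine credential verifies

# A candidate who tries to doctor the claim breaks the signature.
cred["claim"]["qualification"] = "Medical Degree"
print(verify_credential(cred))   # tampered credential fails
```

The point is that the credential carries its own proof of authenticity, so an employer never needs to trust the candidate's word – only the awarding body's signature.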
Web3 will be increasingly needed to reimagine the recruitment process, introducing skills-based hiring that will keep AI honest rather than trigger-happy. It will form an impenetrable ecosystem in which qualifications are authenticated by official bodies and impossible for candidates to doctor.
The sooner the Web3 sheriff can enter the Wild West of recruitment town, the better.
The AI Journal