AI Recruiting:
Are We Ready For It?

Artificial intelligence (AI) within recruitment is becoming increasingly common – both as a topic of discussion and as an attainable solution for hiring teams looking to transform their processes.

This piece explores how AI is currently being used in recruitment workflows, along with perceived benefits, drawbacks, and potential legal regulations related to using currently-available AI recruitment technology.

What is AI recruiting?

AI recruiting refers to the use of artificial intelligence (AI) to automate parts of the hiring process. It can be used to source, engage, screen, interview, evaluate, and communicate with candidates.

It’s important to note upfront that not all recruitment tech solutions are examples of AI. For a technology to be considered AI-powered, it needs to feature components of machine learning (the system learns and improves by gathering data, rather than being explicitly programmed).

Here’s how an AI-powered recruiting process compares with (non-AI) automation used within recruiting tech:

AI-powered recruiting

Learns the desired skill set and attributes for a role from a growing number of data sets, and uses that information to scan applications and advance qualified candidates.

Automation in recruiting
(not AI-powered)

Enables humans to make faster decisions by using preset rules to prioritize applications and advance qualified candidates.
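
To make the distinction concrete, here’s a minimal sketch in Python. Every field name, rule, and weight is invented for illustration rather than drawn from any real product; the point is that the rule-based filter applies fixed, human-written criteria, while the ML-style filter scores applications using weights learned from historical hiring data.

    # Hypothetical sketch: rule-based screening vs. ML-style screening.
    # All field names, rules, and weights are invented for illustration.

    def rule_based_screen(app: dict) -> bool:
        """Automation (not AI): fixed, human-written rules decide who advances."""
        return app["years_experience"] >= 3 and "python" in app["skills"]

    def ml_screen(app: dict, learned_weights: dict, threshold: float = 0.5) -> bool:
        """AI-powered: the weights are learned from historical hiring data,
        so the criteria shift as the system gathers more of it."""
        score = sum(learned_weights.get(f, 0.0) * v for f, v in app["features"].items())
        return score >= threshold

    applicant = {
        "years_experience": 4,
        "skills": {"python", "sql"},
        "features": {"years_experience": 4, "skill_python": 1},
    }
    weights = {"years_experience": 0.1, "skill_python": 0.3}  # e.g. fit on past hires

    print(rule_based_screen(applicant))   # True -- the fixed rule matched
    print(ml_screen(applicant, weights))  # True -- learned score 0.7 >= 0.5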

How is AI being used in recruiting?

Here is how some companies are using AI within their recruitment practices:

  • Writing enticing job posts

    Drawing on data, AI can suggest the most effective words and phrases to include in a job posting based on the job title, industry, and location.

  • Automatic job matching

    In some cases, AI is at work before a candidate has even applied. AI can infer the requirements of a role from previous or similar applicants, and that data determines which candidates see a job posting based on their experience, knowledge, and skills.

  • Communicating with candidates via chatbot

    AI can be used to mimic human conversational abilities, using technology such as natural language understanding (NLU) to comprehend a candidate’s text-based messages and determine how to respond. These chatbots can be used to schedule interviews or assess job fit. They can even recommend other jobs that might be a match and encourage applicants to apply.

  • Filtering candidates

    AI can scan and evaluate candidates’ resumes using an algorithm that looks for keywords relevant to the role or a preferred skill set (see the sketch after this list). Some AI software can also analyze a candidate’s online presence to build a broader picture of their history and abilities.

  • Evaluating video interviews

    AI software can analyze video interview transcripts using natural language processing (NLP), without recruiters needing to be present. Candidates complete a one-way AI-based video interview by recording their answers to preset questions. AI performs an algorithmic analysis on the recording and determines the candidate’s outcome.

    Here are some features that are analyzed in an AI-based video interview:

    • Word and phrase choices
    • Tone of voice
    • Body language
    • Facial expressions
    • Emotional responses
    • Eye movement
    • Communication skills
    • Level of interest
    • Level of confidence
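
Returning to the “Filtering candidates” item above, here is a deliberately naive keyword-screening sketch; the keyword list and resume text are invented, and commercial tools are considerably more sophisticated.

    # Toy keyword screen of the kind described under "Filtering candidates".
    # The keywords and resume text are invented for illustration.

    ROLE_KEYWORDS = {"recruiting", "sql", "python", "stakeholder management"}

    def keyword_score(resume_text: str) -> float:
        """Fraction of role keywords found in the resume (naive substring match)."""
        text = resume_text.lower()
        return sum(kw in text for kw in ROLE_KEYWORDS) / len(ROLE_KEYWORDS)

    resume = "Recruiting professional skilled in SQL reporting and stakeholder management."
    print(f"Match score: {keyword_score(resume):.0%}")  # Match score: 75%

Even this toy version hints at the brittleness discussed later in this piece: naive matching misses synonyms and can be gamed by keyword stuffing.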

Do all video interview platforms use AI to evaluate a candidate? No.

Many pre-recorded video interviews – including all interviews conducted on VidCruiter’s platform – do not use AI as of today. Non-AI platforms offer a convenient solution that allows candidates to interview at a time that works best for them, and recruiters to evaluate the videos when they’re available to do so.

Leading video interview providers use a structured interview methodology, which includes a structured rating guide and standard rating scale. This helps to facilitate a fair and comparable interview process that allows recruiters to evaluate efficiently, every time.

Interested in a non-AI interviewing approach?
Learn about the available alternatives.

How does an AI interview work?

Within the recruiting process, AI “robots” can speak to and understand candidates through the use of conversational AI. Conversational AI is used to facilitate human-like chatbot conversations with candidates, and aspects of it are also used in AI-evaluated audio and video interviews.

How does conversational AI work? The AI-powered application receives the spoken word (or written text) and transcribes it into machine-readable text. Next, the application uses natural language understanding (NLU), the first component of natural language processing (NLP), to determine the intent of the text. In interviews, the system generally isn’t required to respond, so NLP is simply used to evaluate the text according to the AI’s algorithms.

In circumstances where a response is required (e.g. chatbot interactions), the system’s dialogue management (DM) formulates a response and converts it into an understandable format using natural language generation (NLG), the other component of NLP. The application delivers the response to the user via text, or text-to-speech, depending on the conversation style.

Lastly, the application uses machine learning (ML) to improve the responses for future interactions by accepting corrections and carrying context from one conversation to the next.
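
Put together, the loop looks something like the sketch below. Every function here is a toy stand-in – not a real vendor API – but the order of operations (ASR, then NLU, then dialogue management, then NLG) mirrors the description above.

    # Schematic of the conversational AI loop described above.
    # Every function is a toy stand-in, not a real vendor API.

    def transcribe(audio: str) -> str:          # 1. ASR: speech -> text
        return "what does the role pay"         # stubbed transcription

    def understand_intent(text: str) -> str:    # 2. NLU: text -> intent
        return "ask_salary" if "pay" in text else "other"

    def plan_response(intent: str) -> str:      # 3. Dialogue management
        return "give_salary_range" if intent == "ask_salary" else "clarify"

    def generate_text(plan: str) -> str:        # 4. NLG: plan -> wording
        replies = {"give_salary_range": "The posted range is $60k-$70k.",
                   "clarify": "Sorry, could you rephrase that?"}
        return replies[plan]

    def handle_candidate_turn(message: str, is_audio: bool) -> str:
        text = transcribe(message) if is_audio else message
        return generate_text(plan_response(understand_intent(text)))
        # (Step 5, machine learning from corrections, is omitted here.)

    print(handle_candidate_turn("what does the role pay", is_audio=False))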

How conversational AI is used to interact with candidates

In theory, conversational AI is an efficient and convenient way to filter and engage with candidates. In real-world use, however, it has limitations. AI applications often fail to properly understand different languages, dialects, and accents, so some transcriptions are riddled with errors, which can ultimately introduce bias. Even in text-based conversations, sarcasm, emojis, and slang can confuse AI and cause misinterpretations.

Would you know if you were speaking to a robot?
Probably not…

In a recent study, 72% of candidates thought that they had spoken with a recruiter, even though they were notified upfront that the chatbot was a virtual assistant.


Why are companies using AI in recruiting?

Companies seek out AI to assist with their recruiting for the following reasons:

“Speeds up time-to-hire”

AI takes over some processes, so fewer resources are needed for recruiting tasks.

“Improves candidate experience”

AI can be used to communicate with candidates in a timely manner and help improve the candidate experience.

“Improves quality of hires”

Hundreds of data points are collected from each candidate interaction, allowing AI to objectively identify top talent.

“Minimizes hiring bias”

AI software can be programmed to ignore demographic information such as gender, race, and age; however, other aspects of AI can still introduce bias.

Will AI replace recruiters?

AI allows hiring teams to remove many repetitive, time-consuming processes from their recruitment workflows. Companies that produce AI recruiting software say this gives recruiters more time to focus on engaging with candidates, training hiring teams, and developing a better hiring process. However, many AI tools also replace the need for recruiters or hiring managers to engage with candidates – which seems a little contradictory.

Are we heading towards a dystopian future where robots are in full control of corporate hiring? The short answer is no – nor should any company want AI to replace its human workforce. AI can’t replicate the social skills, empathy, and negotiating abilities needed for a successful recruitment workflow, particularly while AI recruiting is still in its infancy.

What are some challenges of using AI in recruiting?

AI technology is a double-edged sword in most use cases. Within recruiting, AI can help introduce efficiencies and eradicate certain time-consuming tasks. However, the software can also create new – sometimes serious – challenges to be aware of:

  • AI needs a lot of data to be accurate

    Machine learning (the component of AI that allows algorithms to improve) requires a lot of data to accurately mimic human intelligence. For example, AI used to screen applications would need to process potentially hundreds of thousands of resumes for a specific role to be as accurate as a human recruiter. Its intelligence is always limited to the available data, so at first the AI tool may be less than helpful – and even potentially biased.

  • AI can learn bias from previous data

    Companies that create AI recruitment software often claim AI can eliminate bias from the hiring process through its use of factual information, rather than the subjective, and sometimes biased, judgments found in human evaluations. However, saying AI can eliminate bias ignores a large part of how AI works – it’s trained to find patterns in previous behavior. As mentioned above, AI extracts insights from large amounts of data, then makes predictions based on its findings. This is what makes AI recruiting so powerful, but it can also make its algorithms heavily susceptible to learning from past biases.

    For example, if a company has more male than female employees, an AI-powered tool can easily favor male candidates to match the current makeup of the company, so long as there isn’t a constraint (such as a regularization term) to stop the system from doing so. In a harder-to-detect example (see the sketch below), say many employees graduated from the same university – perhaps because of its proximity, or because of a referral program. The AI software could notice this trend and learn a pattern that favors graduates of that university or those with similar backgrounds. This pattern could end up being highly discriminatory towards non-college grads and the demographics that were less likely to attend that specific university.
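
To see how easily such a proxy pattern forms, consider this fabricated miniature: gender never appears in the data, yet a model that simply imitates past decisions still reproduces the skew through the university field.

    from collections import defaultdict

    # Toy proxy-bias illustration with fabricated data: in this fictional
    # company, most past hires came from "State U", which historically
    # skewed heavily male. Gender itself is never a feature.

    past_hires = [
        ("State U", True), ("State U", True), ("State U", True),
        ("Other U", True), ("Other U", False), ("Other U", False),
    ]

    counts = defaultdict(lambda: [0, 0])        # university -> [hired, total]
    for school, hired in past_hires:
        counts[school][0] += int(hired)
        counts[school][1] += 1

    def learned_score(school: str) -> float:
        """'Training' here is just the historical hire rate per university."""
        hired, total = counts[school]
        return hired / total

    print(learned_score("State U"))  # 1.0   -> strongly favored
    print(learned_score("Other U"))  # ~0.33 -> penalized, along with every
                                     #          group less likely to attend State U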

Amazon’s AI hiring bias

In 2014, Amazon created its own AI-powered recruiting tool to help screen resumes, scoring them from one to five stars. Its algorithm used all resumes submitted to the company over a ten-year period to learn how to determine the best candidates. As there was a much lower proportion of women working in the company at the time, the algorithm picked up on the male dominance and presumed it was a factor in success.

Amazon made edits to the software to rectify the issue, but there was no guarantee that the machines wouldn’t sort candidates in another way that could be discriminatory. The project was abandoned a few years later.

  • AI lacks the human touch

    It goes without saying, but humans are complex. AI can screen for the skills and abilities relevant to a role, but it struggles to assess the aspects of a candidate’s emotional intelligence that could help them succeed in the company. For example, an AI interviewing platform that analyzes facial expressions and tone of voice along with the candidate’s response can’t determine exactly what a smile and a formal tone mean – is the candidate sincere and serious? Or trying to be friendly while their tone makes them seem distant? Perhaps it also depends on the question asked. AI cannot yet fully understand the nuances of social cues, and cannot reliably map these signals to the presence or absence of specific skill sets.

    Secondly, AI cannot build rapport with a candidate. In today’s candidate-driven market, companies need to truly connect with top talent – failure to do so can result in high candidate drop-off. To win candidates over, recruiters need to show interest and empathy, and remember details from previous conversations. Even if AI could replicate these traits, the result would entirely lack authenticity.

  • AI can misinterpret human speech

    AI recruiting tools that screen, interview, or evaluate applicants use automated speech recognition (ASR) software of the kind also found in voice recognition services. This software listens to the applicant’s spoken response and converts the voice data into computer-readable text. In theory, this allows companies to rely on AI to capture a candidate’s complete response and evaluate it fairly and objectively. However, anyone who’s used leading voice recognition services such as Alexa, Siri, or Google knows that not every word is interpreted correctly – entire sentences can be misinterpreted, leading to an incorrect response from the platform. Speakers from certain minority groups are disproportionately affected by these errors.

Black speakers are more likely to be misunderstood by speech recognition software

A study conducted by Stanford University found that five leading ASR systems (Apple, IBM, Microsoft, Google, and Amazon) had an average word error rate (WER) of 35% for Black speakers, compared with 19% for white speakers.

If the leading ASR systems can’t always recognize and contextualize voice commands, how can an AI software company, with far less funding, create an algorithm that can properly analyze lengthy and often complex interview responses? Unfortunately, it can’t. Even a leading AI-driven interviewing provider states that its software has a word error rate (WER) of ‘less than 10%’ for native speakers of American English – so roughly 1 in every 10 words is incorrectly transcribed. The WER was higher for speakers outside the U.S., depending on their country of origin (e.g. 12% for Canadian English speakers and 22% for participants with a Chinese accent).

This means that in an AI-powered interview, the software is likely to misunderstand roughly 10% of a candidate’s response – and up to a quarter of the response from a non-native English speaker.

How does an AI interviewing platform make errors?

Let’s say a candidate speaks 17 words in their interview response, and the automated speech recognition makes 3 errors. That works out to an 18% word error rate (3 ÷ 17 ≈ 0.18).
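
As a one-line formula (purely illustrative, with the error count covering substituted, deleted, and inserted words):

    # Word error rate: recognition errors divided by words actually spoken.
    def word_error_rate(errors: int, words_spoken: int) -> float:
        return errors / words_spoken

    print(f"{word_error_rate(3, 17):.0%}")  # 18%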

Interview question:

“Tell me about your educational background…”

Spoken word:

“So I got a 1350 on my SAT which got me into UT Austin to study psychology…”

Computer-generated transcription:

“So I got a 1350 on my AC, which got me into you see Austin to study psychology…”

Outcome:

The system failed to acknowledge the candidate’s credentials or university experience because it misinterpreted the abbreviations.

It can be hard to get buy-in

Not everyone is interested in using AI within the recruitment process – many companies are comfortable with traditional, or less intrusive, hiring methods and aren’t looking for a change. Additionally, candidates are often hesitant to complete an AI-based interview.

  • Push-back from HR teams

    Whenever people are asked to adopt new technology in their processes – even if they’re told it will make their lives easier – push-back is inevitable. Any change requires additional training and new processes. Recruiters may also be hesitant to embrace AI out of fear that their jobs will become automated and, ultimately, obsolete.

“AI is only as good as the input that it has. We’ve seen very public issues of organizations using it in hiring decisions and there was bias in the process. I don’t know if AI is necessarily ready to make hiring decisions, and what’s more interesting is how many people are prepared for and willing to allow it to make those decisions.”

Jon Thurmond

HOST OF #HRSOCIALHOUR HALF HOUR PODCAST

  • Failing to win over candidates

    There’s a lot of conflicting information about how candidates feel about AI in recruiting. One survey from 2016 (still heavily cited by AI-powered recruitment companies today) asked 200 candidates how they felt about AI recruitment: 58% were comfortable interacting with AI technologies to answer initial questions. However, a 2018 survey of 2,000 Americans reported that 88% would feel uncomfortable if an AI interview application were used during their candidate screening process.

    Why does it matter how candidates feel about AI-enabled recruiting tools? Because, according to an article in the journal AI and Ethics, the more candidates trust the use of AI in job applications, the more likely they are to engage in and complete the process. The same goes for anxiety: the more anxious the process makes an applicant, the more likely it is to negatively affect their experience.

  • Does AI improve candidate experience?

    Cultivating a positive candidate experience can help you to win over more top talent, build positive brand associations, and more. When it comes to improving your recruiting process, chatbots can address some candidate experience problems, but they can also create others.

Where chatbots help:

  • A study by Indeed found that waiting to hear back from a potential employer is the #1 pain point for 48% of job seekers. Bots respond instantly, opening up the line of communication so candidates feel acknowledged even if they’re ultimately rejected.
  • Bots are also available on the candidate’s schedule, including evenings and weekends when real recruiters might not be.

Where chatbots fall short:

  • The candidate’s first impression and reaction will largely depend on how well the chatbot can understand them and answer their questions.
  • If the conversation doesn’t flow naturally or the bot becomes unresponsive, it creates a frustrating experience. According to a study by CareerArc, 72% of candidates who had a negative experience will share it with others (online or directly).

Right now, public sentiment is split. As the technology improves and people adapt, sentiment is likely to improve as well.

The legal and ethical implications of using AI in recruitment

AI usage in recruitment has been on the radar of U.S. federal regulators for a long time, but in recent years, issues surrounding AI usage have gained a lot of traction. Back in 2020, ten U.S. Senators sent a letter to the EEOC (Equal Employment Opportunity Commission) about the use and impacts of hiring technologies and the commission’s ability to conduct much-needed oversight and research on the topic.

Since then, the EEOC and the U.S. Department of Justice created a guidance document called “Algorithms, Artificial Intelligence, and Disability Discrimination in Hiring”, explaining how AI can lead to hiring discrimination against those with disabilities, potentially violating the Americans with Disabilities Act. After issuing this initial technical guidance document on AI, the EEOC filed its first lawsuit alleging algorithmic discrimination.

The EEOC put forward a Draft Strategic Enforcement Plan (SEP) for 2023-2027 as part of the planning process, and for the first time ever, the draft was formally published in the Federal Register for public input. The draft document said the EEOC will focus “on the use of automated systems, including artificial intelligence or machine learning, to target job advertisements, recruit applicants, or make or assist in hiring decisions where such systems intentionally exclude or adversely impact protected groups.” The SEP has not been finalized yet.

On January 10, 2023, the EEOC held a public hearing where 12 experts presented their thoughts on, and proposed amendments to, the guidance document. Here are some key takeaways from the hearing:

Investigation and regulation

Panelists unanimously agreed that bias audits are necessary to mitigate potential biases against groups with protected characteristics.

Worth noting: there is no industry standard for bias audits, and there was no agreement on who should conduct them. This is why multiple panelists encouraged the EEOC to pursue strategically selected enforcement targets to ensure accountability, and to enact transparency rules that increase standardization and accountability.

Compliance

Several participants voiced concerns with the four-fifths rule as a way to determine adverse impact, particularly when sample sizes are small.

The four-fifths rule is a rule of thumb put forward by the EEOC: if an employer hires members of any race, age, sex, or ethnic group (protected classes) at less than 80% of the rate at which the majority group is hired, it could be seen as adverse impact.
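
As arithmetic, with hypothetical numbers:

    # The four-fifths rule with hypothetical numbers.
    majority_rate = 60 / 100  # 60 of 100 majority-group applicants hired
    group_rate = 36 / 100     # 36 of 100 protected-group applicants hired

    impact_ratio = group_rate / majority_rate
    print(f"{impact_ratio:.0%}")  # 60% -- below the 80% threshold, a possible
                                  # sign of adverse impact; note how a handful
                                  # of hires could swing this ratio when
                                  # samples are small, as panelists warned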

Related to the Uniform Guidelines for Employee Selection Procedures, speakers called for guidance on the justification of predictors in algorithmic models, asking whether evidence beyond simple correlations needs to be presented to justify the use of predictors and constructs.

The Uniform Guidelines require that predictors be informed by a job analysis identifying what needs to be measured; speakers argued this is difficult to do when creating algorithmic assessments.

For some jobs and data sets, there can be a huge number of predictors when it comes to potential job performance. In some cases, it can be hard to justify how each predictor relates to the construct or job performance. This can cause the AI to assess irrelevant predictors, like items in the background of a video interview, and factor them in when assigning the person a score.

Stakeholders also called for Title VII of the Civil Rights Act of 1964 to be revisited, as it was developed with human decision-makers in mind.

The ambiguity over whether an AI vendor could meet the definition of an employment agency under Title VII requires further clarification. If AI vendors are considered agencies, they could be in scope for legal compliance. Experts suggested the law needs to be updated to address the risks posed by automated systems and to clarify who is on the hook for machine decision-making in hiring.

In 2022, the US White House Office of Science and Technology Policy (OSTP) released a “Blueprint for an AI Bill of Rights” that outlines five core protections all consumers are entitled to: 

  • Protection against unsafe and ineffective systems
  • Protection against discrimination by algorithms and systems
  • Data privacy, which includes agency over how your data is used, and protection from abusive data practices
  • Notice when an automated system is being used and an explanation as to how and why it contributes to outcomes that are related to you
  • The option to opt out and speak to a person who can help

The fact that the White House is taking this step certainly sets the tone about what’s to come as far as regulations and laws are concerned.

Laws being created to regulate AI in recruiting

Here are a few examples of how lawmakers are reacting to AI recruiting tools:

United States

The Washington, D.C. City Council introduced the “Stop Discrimination by Algorithms Act” in 2021. The bill has not passed yet, but when it was introduced, Phil Mendelson said it would target “employment algorithms that can filter job applicants by how closely they match a business’s current workers and screen out applicants with disabilities,” in addition to other use cases.

As of right now, each state, city, or municipality sets its own AI regulatory frameworks.

  • In 2020, Illinois became the first state to regulate the increasing usage of artificial intelligence in recruitment practices, cracking down on the use of AI within video interviewing with the introduction of the “Artificial Intelligence Video Interview Act”.
  • Also in 2020, Maryland passed a law that requires notice and consent from candidates prior to using facial recognition technology during a job interview.
  • Passed in 2021, Local Law 144 made New York City the first U.S. municipality to regulate AI hiring tools. Originally, it governed AI processes that “substantially assist or replace” human decision-making. In 2023, New York’s Department of Consumer and Worker Protection updated the law to cover only AI hiring tools that have more say than human decision-makers. The law requires employers to have their tools independently audited for bias, and defines situations in which companies must tell applicants they’re being evaluated using AI hiring tools and provide an opt-out option.
  • Similar to New York, in 2022 New Jersey introduced an act that prohibits the sale of automated recruiting tools without a bias audit, requires automated tools to undergo yearly bias audits, requires employers to notify candidates when automated employment decision tools are used for evaluation or screening, and more.

European Union

In 2021, the European Commission proposed regulations to address the use of AI in the EU. The current version of the AI Act states that AI used for recruiting would be considered “high risk” under the draft legislation, and subject to heavy compliance requirements. The European Parliament (EP) is developing its own position on the AI Act, expected to be finalized by March 2023. The proposal will become law once the European Commission and the EP agree on a common version of the text.

Canada

In 2022, the Canadian federal government introduced Bill C-27. This represents Canada’s first attempt to regulate AI. Similar to the EU, “The Artificial Intelligence and Data Act” is focused on high-impact systems, and recruitment falls into this category.

Australia

In 2019, Australia released the Artificial Intelligence (AI) Ethics Framework to “guide businesses and governments to responsibly design, develop and implement AI.” The framework includes a set of AI Ethics Principles, but they are voluntary, and no legally binding, profession-specific AI regulations are planned.

Gender and racial bias – a big AI problem beyond recruitment

In 2021, the Berkeley Haas Center for Equity, Gender, and Leadership reported that 44% of AI systems are embedded with gender bias, with about 26% displaying both gender and race bias.

A 2018 study by MIT and Stanford University showed that facial recognition algorithms had up to a 35% higher error rate when identifying the gender of women of color, compared with lighter-skinned men.

US Lawsuits related to AI in recruiting

Algorithmic discrimination and AI bias have attracted substantial attention from US federal, state, and local authorities. Here are some court cases so far:

  • In May 2022, the EEOC filed its first enforcement action, suing three integrated companies under the “iTutorGroup” brand, alleging that they programmed their online software to automatically disqualify more than 200 older applicants. 
  • Real Women in Trucking filed a class discrimination charge against Meta Platforms Inc. with the EEOC in December 2022, accusing them of steering job ads to specific age and gender groups on Facebook.
  • Derek Mobley filed a lawsuit against Workday in February 2023 accusing them of being biased against black, disabled, and older applicants. Mobley is seeking to represent all applicants in the protected classes who applied since June 3, 2019.

The ethics of AI in recruiting

AI product marketing is governed by the FTC in the US, but not all claims have equal merit, and this can impact candidates. Incorporating AI into your hiring processes comes with some ethical questions to consider.

Is it ethical to train a system without auditing the data to ensure it’s unbiased?

While a reduction in human bias could be positive, an algorithmic bias could result in homogeneity and/or magnify existing discrimination. Also, there isn’t a universally accepted auditing standard, so even after an audit, there’s no way to be certain a data set isn’t biased.

Is it ethical to collect more intrusive personal data about candidates?

Collecting more personal information – as with social media monitoring – could lead to more accurate assessments, but at the cost of invading applicants’ privacy. Much of the data collected through tools like facial scanning software is also outside of the applicant’s control. Access to all this additional information creates an increased power asymmetry between applicants and recruiters.

Is it ethical to trust predictive decision-making processes that developers can’t or won’t fully explain?

AI algorithms can be a black box: opaque and impossible to open to outside agents. Additionally, the people who built the box have an incentive to not reveal how it works in order to maintain trade secrets. When we trust the processes, we are trusting the people who programmed them too.

Is it ethical to rely on a system that can’t be held accountable for its recommendations or decisions?

The performance of AI used in hiring is largely a byproduct of the data set it learns from, so if the data isn’t neutral, the decisions the algorithm makes won’t be neutral either. When we talk about accountability, in many cases it falls to the companies the data belongs to. The quality and strength of the evidence behind an AI-made decision should be scrutinized, because that evidence underpins the validity, reliability, and fairness of the decision.

Do the ethical risks outweigh the rewards? Only you can decide. If you do choose to use AI in your hiring process, be mindful of blind reliance, and mitigate any potential harm as much as possible.

Using AI? Here’s what you need to be doing…

If your company uses AI within its recruitment lifecycle, here are a few things you should consider to ensure you’re in control and in compliance, and you’re providing a transparent hiring experience:

  • Fully understand the algorithms being used

    In the same way there are guidelines surrounding how a candidate is traditionally screened and evaluated, recruiting teams and other stakeholders should be fully aware of the factors being considered by the AI algorithm. Consider all inputs being fed into the screening and evaluating software – is all information job-related? Also, look at the data being created by the system – does it comply with data governance standards?

  • Audit AI tools on a regular basis

    Tools that use AI adapt to their own findings, meaning algorithms evolve over time, so an initial analysis isn’t enough. Hiring an auditing firm to conduct bias audits and assessments should be a non-negotiable part of using AI or machine learning, especially for hiring. These tools need to be regularly audited to make sure the AI isn’t unintentionally learning biased or even unlawful decision rules from the data it receives (see the sketch after this list).

  • Understand that outsourced tools don’t eliminate liability risks

    Only a select few companies have the resources to develop AI tools internally, so most use outsourced, third-party recruiting solutions. However, using third-party software doesn’t exempt companies from liability risks, such as allegations of discriminatory hiring practices stemming from the software. Companies need to ensure recruiters and third-party vendors comply with all existing, relevant employment laws.

  • Share how AI is used within your hiring process

    Commonly, candidates want to know the full ins and outs of the recruitment process – not only to help them succeed but also to build trust. As a matter of transparency, think about what is communicated about the use of AI in the hiring process. Consider informing candidates ahead of time that AI will be used to screen or evaluate their application (in some cases, this could be a legal requirement).
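
As a minimal illustration of what a recurring audit might check, here is a sketch with fabricated records (no real auditing framework is implied) that aggregates screening outcomes by group and flags any group advanced at less than four-fifths of the top group’s rate:

    from collections import defaultdict

    # Hypothetical periodic audit: aggregate outcomes by group and flag any
    # group advanced at less than 80% of the highest group's rate.
    # Records are fabricated for illustration.

    records = [
        {"group": "A", "advanced": True}, {"group": "A", "advanced": True},
        {"group": "A", "advanced": False},
        {"group": "B", "advanced": True}, {"group": "B", "advanced": False},
        {"group": "B", "advanced": False},
    ]

    stats = defaultdict(lambda: [0, 0])  # group -> [advanced, total]
    for r in records:
        stats[r["group"]][0] += int(r["advanced"])
        stats[r["group"]][1] += 1

    rates = {g: adv / total for g, (adv, total) in stats.items()}
    top_rate = max(rates.values())

    for group, rate in sorted(rates.items()):
        if rate < 0.8 * top_rate:
            print(f"Group {group}: rate {rate:.0%} is below four-fifths of "
                  f"the top rate ({top_rate:.0%}) -- investigate.")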

Leading companies taking the high ground

Walmart, Meta, Deloitte, and IBM are some of the leading companies that have joined the newly formed Data & Trust Alliance – an organization that’s helping companies to learn, develop and adopt responsible data and AI practices. Their first initiative is helping companies evaluate recruitment vendors by detecting and monitoring bias in their algorithms.

What are some efficient alternatives to using AI in recruiting?

Using AI within a recruitment process has benefits, but it can also present serious challenges and risks. Fortunately, there are ways to create a more efficient hiring experience without AI. Below, we’ve taken the top reasons teams turn to AI-powered recruiting and paired them with solutions that offer comparable benefits, without the implications of AI.

Recruiting goal: Speed up time-to-hire

Solutions available without using AI:

  • Pre-recorded video interviewing: Evaluate candidates’ pre-recorded interviews at any time
  • Automated scheduling: Real-time availability shared with candidates
  • Automated reference checking: Reference forms are automatically sent to referees
  • Automated communication/reminders: Timely reminders and email/SMS notifications sent to all parties

Recruiting goal: Improve candidate experience

Solutions available without using AI:

  • Applicant empowerment: Allow candidates to choose the interview format they’re most comfortable with – virtual or on-site
  • Automated communication/reminders: Provide consistent communication to keep candidates in the know
  • Mobile-first platform: Allow candidates to apply and interview via mobile

Recruiting goal: Improve the quality of hires

Solutions available without using AI:

  • Advanced searchability: Run mass Boolean searches across applications and resumes for job-specific keywords and skill sets
  • Pre-recorded video interviewing: Get deep insights into candidates from day one
  • Structured interview methodology: Understand candidates better with a predictive validity of up to 65%
  • Skills testing and proctoring: Assess candidates fairly using proctored skills tests or work samples

Recruiting goal: Minimize hiring bias

Solutions available without using AI:

  • Built-in rating guides and rating scales: Evaluate fairly within the interview with HR-approved evaluation tools
  • Structured interviewing: Methodology that assures an equal interview for every candidate
  • Diversified evaluators: Interviews can be recorded and reviewed by a diversified panel
  • Accessibility-friendly system: Candidates can use screen readers and opt in to other accessibility features

Does AI have a place in recruiting?

AI has incredible potential when it comes to HR and recruiting. Current software has shown it can tackle long-standing recruiting challenges by speeding up time-to-hire and eliminating low-value administrative tasks, such as routine candidate communication. However, AI software in recruiting is still in its infancy, and because of the technical development still required, using AI has created its own set of challenges.

It’s likely that regulatory issues and allegations of unfair hiring algorithms will plague AI-powered evaluation or selection software for some time. Companies that use AI in their recruitment processes will likely face severe push-back internally and externally as people become more aware of AI’s presence and power.

AI has a place in recruiting when technological advancements allow the software to exceed the screening and evaluating methods used by human recruiters. It’s unlikely that this level of technology will exist in the next few years, or possibly even in the next decade. Until then, there are many non-AI candidate evaluation solutions available that can offer the same benefits – and more – without the serious drawbacks.

Achieve your recruiting goals without AI
