
AI in UN Recruitment: Potential for Efficiency, but will it be Trusted?

1 Dec. 2024

By Katja Hemmerich


[Image: a human-like robot facing a person with a skeptical look.]

Across all organizations, including the UN system, there is a growing interest in and experimentation with artificial intelligence in human resources. As IBM has highlighted: “Artificial intelligence (AI) can help usher in a new era of human resource management, where data analytics, machine learning and automation can work together to save people time and support higher-quality outcomes.” - IBM, ‘Artificial intelligence and a new era of human resources’, 9 Oct. 2023


This presents huge potential for the UN, where the Secretariat alone receives an average of more than 600,000 applicants per year, where a diverse workforce remains an elusive goal, and where better data can lead to better decision-making. But new research highlights that considering the integration of AI purely from an efficiency and process improvement perspective may be short-sighted.


Recruitment is a major way in which current and prospective employees interact with the UN, and therefore a key driver of their trust in the organization. And the reality is that there is already limited trust in UN recruitment processes, not only among applicants but often also among member states. So this month, our spotlight is on new research that explores how the integration of AI into organizational processes like recruitment can affect trust, and not just efficiency.


Why are trust dynamics different for AI than other technologies?

From an academic perspective, trust is normally a concept reserved for humans. This is because research demonstrates that trust exists when someone is willing to take a risk or make themselves vulnerable vis-a-vis another person because they consider that person to be competent, to have integrity, and to be benevolent (meaning they will act not just in their own interest). Concepts like integrity and benevolence imply a certain level of choice and free will, which generally only apply to humans.


But modern AI is changing this perspective, because modern AI has the ability to make choices. Generally, we humans consider that someone has agency when they can think, plan and act. Increasingly, modern AI has those abilities, which means it can also create or exploit situations where we make ourselves vulnerable - like when we apply for a job and AI is involved in reviewing our application, and potentially even in deciding whether we get that job or not. New research indicates that this agency (or, more precisely, our perception of how much an AI can think, plan and act) creates unique dynamics in how humans trust, or mistrust, AI.


“Modern AI technologies, which learn how to act rather than simply obey programmed instructions on how to act, occupy a curious place somewhere between humans and inanimate technology in the extent to which they are seen as agentic [having agency]. They are typically perceived as more agentic than other technologies but less than humans” - B. S. Vanneste & P. Puranam, ‘Artificial Intelligence, Trust, and Perceptions of Agency’ (2024).


What does this mean for integrating AI in recruitment processes?

Given the challenges the UN faces in reviewing the immense number of applications it receives in a timely manner, there is an obvious interest in exploiting AI's ability to speed up the process. As the early adoption of AI in managing surge deployments at UNDP has illustrated, it not only makes the process faster, which is in the interest of applicants and the organization alike, but also reduces the workload on HR staff and hiring managers. When integrating 'older' technologies into processes, core elements of a project's success were to track and highlight these process improvements and to invest in communication and training for users to facilitate their adoption of the new technology. But this may not be a winning strategy if you're integrating AI into an organizational process.


The research by Profs. Vanneste and Puranam of University College London and INSEAD highlights that a more nuanced approach is needed when integrating modern AI into such processes because generating trust in the process is more complex. And this is where the natural tendency to make your AI more human-like, through a conversational chatbot or an avatar, for example, can actually undermine trust in some cases.


“An overemphasis on making AI seem human-like can backfire if this is not carefully managed. Communicating the system’s capabilities and limitations transparently can help manage user expectations and foster appropriate levels of trust.” - P. Puranam & B. Vanneste, ‘Could More Human-Like AI Undermine Trust?’, 14 Nov. 2024


In the case of a recruitment process, you are asking applicants to make themselves vulnerable to a decision-making process in which your integrity and benevolence, in addition to your competence, are key. The risk of applicants feeling betrayed when an outcome is not perceived as fair or correct is huge. All of this is directly related to trust, whether humans or AI run the process. Most recruitment officers quickly learn on the job how to manage these complex trust dynamics, and it is always challenging. Throwing AI into the mix will make it even more challenging, especially if you don't consider the unique dynamics of how humans trust AI.


[Graph: ReformWorks and Talentum survey participants are more likely to trust a review of their application by HR officers than by an AI tool.]

A key finding of our recent ReformWorks research, undertaken in collaboration with Talentum Consulting, is that applicants are currently more willing to trust a 'real' human resources officer's judgement than that of an AI tool (see the graph above). Our survey was completed by 635 Millennial and Gen Z respondents earlier this year. Although these generations are much more tech-savvy and digitally literate than their predecessors, only 15% of respondents indicated a strong level of trust if their application was reviewed by AI, whereas 25% indicated a strong level of trust in a 'real' human resources officer reviewing their application. Explicit mistrust of AI was more than double the level of mistrust of a human resources officer (20% for AI versus 8% for a human resources officer). These results illustrate that trust is always a challenge in recruitment processes, and that adding AI to the mix has real potential to erode trust even further.
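For readers who want to reproduce the chart from these reported figures, here is a minimal sketch in Python using matplotlib. It assumes only the percentages cited above; the labels and layout are our own:

```python
# Minimal sketch recreating the survey chart from the reported percentages
# (25% vs 15% strong trust; 8% vs 20% explicit mistrust).
import matplotlib.pyplot as plt

reviewers = ["HR officer", "AI tool"]
strong_trust = [25, 15]      # % of 635 respondents reporting strong trust
explicit_mistrust = [8, 20]  # % reporting explicit mistrust

x = range(len(reviewers))
width = 0.35
fig, ax = plt.subplots()
ax.bar([i - width / 2 for i in x], strong_trust, width, label="Strong trust")
ax.bar([i + width / 2 for i in x], explicit_mistrust, width, label="Explicit mistrust")
ax.set_xticks(list(x))
ax.set_xticklabels(reviewers)
ax.set_ylabel("% of respondents")
ax.set_title("Trust in application review, by reviewer type")
ax.legend()
plt.show()
```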


What Profs. Vanneste and Puranam's research tells us is that our natural instinct to give AI interactions with candidates a more human-like quality is likely to make the trust deficit worse. Similarly, traditional communication strategies that focus only on how to use the new process or platform, without explaining how the AI works, are also likely to foster mistrust. But there are ways to avoid these trust pitfalls.


Tactics to build trust when integrating AI into recruitment processes

Even if your investment in integrating AI into a recruitment process demonstrates significant efficiency gains, a lack of trust will prevent it from being perceived as a success. The research points to three tactics you can build into such projects to foster trust in both the technology and the process, and lead you to success.


First, in the design phase, it’s important to consider how much agency you want to give the AI and where humans still need to take the final decision, or to review or spot-check outcomes. Particularly in the case of an actual selection decision, there is a risk that more agency for the AI calls into question its perceived benevolence and integrity (and therefore the fairness of the process), so that is where you may want to prioritize human decisions. More generally, you want to avoid making the AI appear human-like, because we humans naturally attribute agency to something that looks and sounds like us. So even if humans are making selection decisions, a human-like chatbot communicating the results can lead applicants to think the AI was involved in the decision, creating concern amongst candidates.
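To make this concrete, here is a minimal sketch of such a division of labour: the AI only pre-ranks applications, humans retain the final decision, and a sample of what the AI screens out is re-routed for human spot-checks. All names, thresholds and the ai_score field are illustrative assumptions, not drawn from any actual UN system:

```python
# Sketch: "AI ranks, humans decide", with spot-checks of AI rejections.
import random
from dataclasses import dataclass

@dataclass
class Application:
    applicant_id: str
    ai_score: float  # 0-1 suitability score from the AI screening step

REVIEW_THRESHOLD = 0.5   # policy choice: how much agency the AI gets
SPOT_CHECK_RATE = 0.10   # share of AI-screened-out files a human re-checks

def triage(applications: list[Application]) -> tuple[list[Application], list[Application]]:
    """AI pre-ranks; every advancing file, plus a sample of rejections, goes to a human."""
    to_human_review, screened_out = [], []
    for app in applications:
        bucket = to_human_review if app.ai_score >= REVIEW_THRESHOLD else screened_out
        bucket.append(app)

    # Spot-check: a random sample of AI rejections is re-routed to a human,
    # so the AI's judgement is audited rather than taken on faith.
    if screened_out:
        k = max(1, round(len(screened_out) * SPOT_CHECK_RATE))
        for app in random.sample(screened_out, k):
            screened_out.remove(app)
            to_human_review.append(app)

    return to_human_review, screened_out

# Usage example with made-up scores:
apps = [Application("A-001", 0.82), Application("A-002", 0.31), Application("A-003", 0.44)]
for_humans, ai_screened_out = triage(apps)
```

The key design choice is that the AI's output is a recommendation, never a final verdict, and its rejections remain auditable.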


Second, you want to build a consultative design process into your project. While it's tempting to hand over the design and configuration of your AI to a tech person, your users’ level of trust in the designer will impact their level of trust in the final product. So using a trusted and reputable designer is important. But more than that, you need qualified and trusted humans guiding the technological design, in particular the identification of the data from which the AI will learn. Letting AI learn from historical data on recruitment in the UN will simply reinforce previous biases and diversity gaps, so at some point an ideal data set for learning needs to be identified and AI prompts need to be defined. If this can be a collaborative process with trusted HR experts, and potentially input from staff representatives or similar relevant stakeholders, you can help build trust in both the process and the final product. (Check out our previous spotlight on how to avoid bias when integrating AI into recruitment processes.)
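As one illustration of what expert curation of the learning data might involve, the sketch below flags groups that are over- or under-represented in a candidate training set before the AI learns from it. The field names and the equal-share parity baseline are simplifying assumptions; in practice the targets would come from the HR experts and stakeholders mentioned above:

```python
# Sketch: flag representation gaps in a candidate learning set.
from collections import Counter

def representation_gaps(records: list[dict], attribute: str, tolerance: float = 0.05) -> dict:
    """Return groups whose share of the data deviates from an equal-share baseline."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    parity = 1 / len(counts)  # simplification: real targets come from HR experts
    return {
        group: round(n / total - parity, 3)
        for group, n in counts.items()
        if abs(n / total - parity) > tolerance
    }

# Usage example: which regions are over- or under-represented?
sample = [{"region": "Africa"}, {"region": "Europe"}, {"region": "Europe"},
          {"region": "Asia"}, {"region": "Europe"}, {"region": "Americas"}]
print(representation_gaps(sample, "region"))  # Europe over-represented here
```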


Finally, you need to adjust your communication strategy. Monitoring and sharing data on how the new process has improved speed and the quality of outcomes is still important. But beyond that, creating transparency around the process, and specifically around where and how AI is used and what it does - or does not - decide, will help your users and stakeholders trust the process. This is particularly important the more agency you give the AI in your recruitment process. Communicating who was involved in the collaborative design process can also help build trust in the new process.
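One way to underpin that transparency, sketched below with assumed field names, is to keep a per-application record of which steps involved AI and which a human decided, so the question "did an AI decide this?" can be answered precisely:

```python
# Sketch: a per-application provenance trail of AI vs. human involvement.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class ProcessStep:
    step: str            # e.g. "eligibility screening", "shortlisting"
    actor: str           # "AI" or "human"
    decision_made: bool  # did this step decide, or only recommend?
    timestamp: str

def log_step(trail: list[ProcessStep], step: str, actor: str, decision_made: bool) -> None:
    trail.append(ProcessStep(step, actor, decision_made,
                             datetime.now(timezone.utc).isoformat()))

# Usage example: AI recommends at screening, a human makes the decision.
trail: list[ProcessStep] = []
log_step(trail, "eligibility screening", actor="AI", decision_made=False)
log_step(trail, "shortlisting", actor="human", decision_made=True)
print(json.dumps([asdict(s) for s in trail], indent=2))
```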


AI has tremendous potential to improve UN recruitment processes. But UN organizations will only reap those benefits if applicants trust and adopt the newly improved processes. Building these three tactics into your project planning should help establish trust in AI-driven process improvements, and ideally in the overall recruitment process as well.
