Ethics and AI: Is It Morally Permissible to Use AI in the Hiring Process?

Whether it is right to use artificial intelligence (AI) to assist recruiters in the hiring process is hotly debated. Working specifically within a deontological framework, this article investigates the argument that the use of AI violates the right to non-discrimination and is therefore morally impermissible. Various objections to the deontological argument are addressed, noting the significance of fairness in machine learning throughout.


1. A Deontological Argument


Though the use of AI in the hiring process can have undeniably harmful consequences, the deontologist does not judge choices by their effects. On the contrary, actions should respect absolute rights and duties: the ‘rightness’ or ‘wrongness’ of any action is determined by examining the action itself. Hence the deontological argument (in premise-conclusion form) that the use of AI in the hiring process is not morally permissible runs as follows:


Premise 1. An action that violates a person’s rights is not morally permitted.

P2. Everyone has the right to be free from discrimination.

P3. Freedom from discrimination is particularly important and relevant in job hiring processes.

P4. AI seemingly has the potential to provide an impartial and unbiased way for companies to hire.

P5. When using AI to assist recruiters in the hiring process, AI systems are trained by observing historical data.

P6. Use of historical data to train AI systems results in algorithmic bias and discrimination, even if this is unintended bias.

Conclusion. Using AI to assist recruiters in the hiring process is therefore not morally permissible.


Figure 1. John McCarthy: Computer scientist known as the father of AI (Childs, 2011).

Morally permissible behaviour must uphold the rights of people. Per premise 2, everyone has the right to be free from discrimination. Article 14 of the European Convention on Human Rights, given domestic effect by the Human Rights Act (1998), declares this right, ensuring that no one is denied their rights because of protected characteristics (such as race, sex, language, or religion). Premise 3 then flags this as an undeniably important factor to consider in job recruitment. A protected characteristic should not solely determine the result of a job application; unfair and unequal treatment occurs when an employer selects a candidate on such grounds. Arguably, however, AI can overcome this. AI systems may be designed to avoid discrimination since they are (prima facie) neutral and objective compared with their human counterparts (Zimmermann et al., 2020). Moreover, for an e-retailer the size of Amazon, AI may simply be a practical necessity given the sheer volume of applicants. AI thus represents an opportunity for companies to reduce the time spent on repetitive and time-consuming tasks by automating resume screening or interview scheduling.


Regardless of practicality, the use of AI must be fair. Unfortunately, this tends not to be the case. Take, for example, Amazon’s experimental use of hiring AI in 2015 (Lavanchy, 2018). The system was trained on data submitted by applicants over a 10-year period (recall premise 5) to review job applications and give candidates scores ranging from 1 to 5 stars. Notably, most of the applications across those 10 years came from men. The AI system then effectively taught itself that male candidates were preferable. Unnamed members of the team who developed the tool have even claimed that the system did not merely favour male candidates but would actively penalise female ones (Lavanchy, 2018). Having been built and trained on data accumulated from CVs submitted mostly by men, the AI rating system was not gender neutral: it penalised CVs including the word ‘women’, for instance, and downgraded candidates from all-women colleges. Hence premise 6: AI systems trained on historical data result in algorithmic bias. This may be unintentional (humans may subconsciously feed AI biased data), but it is nevertheless discriminatory. As such, since it is a right to be free from discrimination (Vickers, 2016), the use of hiring AI is morally impermissible (says the deontological argument).
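
To make the mechanism behind premise 6 concrete, the sketch below trains a toy resume classifier on invented, historically skewed outcomes. It is not Amazon’s actual system; the CVs, labels, and tokens are hypothetical, and it assumes the scikit-learn library is available. The point is only that a model given biased labels absorbs the bias rather than removing it.

```python
# Minimal, purely illustrative sketch (not Amazon's system): a toy resume
# classifier trained on invented, historically skewed hiring outcomes.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical historical CVs and past outcomes (1 = hired, 0 = rejected).
# Because past hires skewed male, CVs mentioning "women" mostly ended in rejection.
cvs = [
    "software engineer java python chess club captain",
    "software engineer c++ distributed systems",
    "data engineer python spark",
    "software engineer python women in tech mentor",
    "software engineer java women chess club captain",
    "software engineer python women coders society",
]
hired = [1, 1, 1, 0, 0, 0]

vectoriser = CountVectorizer()
X = vectoriser.fit_transform(cvs)
model = LogisticRegression().fit(X, hired)

# The learned weight for the token "women" comes out negative: the model has
# absorbed the historical bias rather than removed it (cf. premise 6).
weight = model.coef_[0][vectoriser.vocabulary_["women"]]
print(f"learned weight for 'women': {weight:.3f}")
```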


Figure 2. Amazon's Flywheel approach has learned to leverage AI (Morgan, 2018).

2. Fairness


It is difficult to deny that hiring must be fair. Although this appears obvious, agreeing on a conception of fairness proves rather difficult. AI then only complicates matters further. Reuben Binns (2018) summarises this nicely:


What does it mean for AI to be ‘fair’, in terms which can be operationalised? Should fairness consist of ensuring everyone has an equal probability of obtaining some benefit, or should we aim to minimise the harms to the least advantaged? Can the relevant ideal be determined by reference to some alternative state of affairs in which a particular social pattern of discrimination does not exist? (p. 1)


Securing fairness when hiring by machine is difficult. As Binns (2018) highlights, there are multiple, often conflicting notions of fairness, and fairness metrics are also relative to context; the evaluation of algorithmic bias is thus contextual (Danks and London, 2017).
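
As a rough illustration of how these notions can pull apart, the sketch below computes two widely discussed metrics, demographic parity and equal opportunity, on invented screening figures. The numbers and group labels are hypothetical, chosen only so that one metric declares the screen fair while the other does not.

```python
# A rough sketch (invented numbers) of two fairness notions disagreeing
# about the very same screening outcomes.

def selection_rate(selected, total):
    """Share of all applicants in a group who pass the screen."""
    return selected / total

def qualified_selection_rate(selected_qualified, qualified):
    """Share of *qualified* applicants in a group who pass the screen."""
    return selected_qualified / qualified

# Hypothetical screening outcomes for two demographic groups.
group_a = {"total": 10, "qualified": 5, "selected": 4, "selected_qualified": 4}
group_b = {"total": 10, "qualified": 8, "selected": 4, "selected_qualified": 4}

# Demographic parity: equal overall selection rates across groups.
dp_gap = abs(selection_rate(group_a["selected"], group_a["total"])
             - selection_rate(group_b["selected"], group_b["total"]))

# Equal opportunity: equal selection rates among qualified applicants only.
eo_gap = abs(qualified_selection_rate(group_a["selected_qualified"], group_a["qualified"])
             - qualified_selection_rate(group_b["selected_qualified"], group_b["qualified"]))

print(f"demographic parity gap: {dp_gap:.2f}")  # 0.00 -> 'fair' by this notion
print(f"equal opportunity gap:  {eo_gap:.2f}")  # 0.30 -> unfair by this one
```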


This article considers fairness as the avoidance of bias. Take the following real-life example of a recent lawsuit. In 2017, the software company Palantir agreed to pay $1.7 million to settle a racial discrimination lawsuit (Guynn, 2017; Gomez, 2020). Echoing the protected characteristics noted earlier, on which everyone has a right not to be discriminated against, Palantir as a federal government contractor cannot discriminate on the basis of race (or colour), religion, sex, sexual orientation, gender identity, national origin, disability, or veteran status. Palantir, however, was accused of disproportionately eliminating qualified Asian applicants for engineering positions (Guynn, 2017). The lawsuit alleged that Asian applicants were routinely eliminated in the resume screening and telephone interview phases despite being as qualified as white applicants (Gomez, 2020). Despite the importance of (i) avoiding discrimination and (ii) fairness in hiring, the Palantir lawsuit is just one example amongst many.


Figure 3. American Philosopher John Rawls, best known for his conception/defence of “justice as fairness.” (Duignan, 2022)

Interestingly, a considerable amount of the blame for discrimination like this is placed on implicit bias. An implicit bias, to which everyone is susceptible, refers to the relatively unconscious and relatively automatic features of prejudiced judgment and social behaviour (Brownstein, 2019). Implicit biases stem from implicit attitudes and often lead to unfair treatment of members of socially stigmatised groups such as African Americans, women, and/or the LGBTQ community (Brownstein, 2019). Implicit bias involves stereotypes and prejudices, differentiated judgments, differentiated emotions, and cultural norms. Imagine a hypothetical subject, James, who explicitly believes that homosexual men and heterosexual men are equally suited for typically physically demanding jobs. Despite this explicit belief, James might nevertheless implicitly associate homosexual men with jobs that are not physically demanding, and heterosexual men with physically demanding (stereotypically ‘manly’) jobs. This implicit association might lead James to behave in many unconsciously biased ways. Hiring decisions may be affected by discrimination over resumes, first impressions, or biased attitudes and stereotypes during interviews. Recent research has even found that resumes with typically ‘English-sounding’ names receive interview requests 40% more often than identical resumes with Chinese, Indian, or Pakistani names (Gomez, 2020). One problem is that what a person says, and even what they explicitly believe about themselves, is not necessarily an accurate representation of what they feel and think, nor of how they behave. Business and hiring contexts are therefore especially vulnerable to implicit bias.


Figure 4. Political Philosopher Annette Zimmermann (2020) explores algorithmic injustice via AI.

Consider an opposing argument in favour of hiring by machine (an objection to the earlier deontological argument). Supposedly, according to this objection, AI reduces unconscious bias. As Gomez (2020) claims, AI can (1) make sourcing and screening decisions based on data points, and (2) be programmed to ignore demographic information about candidates. Both (1) and (2) rely on the assumption that reducing implicit bias requires a non-human solution, namely AI. Point (1) is that AI makes sourcing and screening decisions based on data. Broadly, such data is combined using algorithms to make predictions about who is the best candidate, and AI systems can process information on a much larger scale than the human brain (Gomez, 2020). Most importantly, however, AI systems are supposedly objective (an objective alternative to humans). This objectivity involves reduced assumptions and biases, since AI creates a profile based on the actual qualifications of successful employees, providing hard data that either validates or disconfirms beliefs about what to look for in applicants. Point (2) then follows: AI can be programmed to ignore demographic information about applicants, for instance information about an applicant’s gender or race (Gomez, 2020).


Now consider a response to this objection, one which questions whether AI really is a ‘non-human’ solution that can reduce or avoid bias through the use of data. As mentioned in part 1, AI is often fed already biased data. The data is biased because of deep societal inequalities, which manifest either intentionally or unintentionally in it (i.e., explicit vs. implicit bias) (Howard and Borenstein, 2018). The point is that AI is trained to find patterns in previous behaviour as an objective alternative to humans (Upadhyay and Khandelwal, 2019), but this is not entirely true. Any human bias that already exists in the hiring process can be learned by AI. AI, particularly when assessing implicitly biased data, does not reduce bias but instead reproduces it. The result is discrimination in hiring. With this notion of fairness (i.e., avoiding bias), AI is an unfair ‘solution’. One must also point out that point (2) (Gomez, 2020) does not hold either. Designing AI to ‘ignore’ demographics simply misses the point: it still inherits human bias from the available data. Besides, ignoring information is often ineffective, since AI systems can recover it by other means, such as pages visited online or activity on social networks (Dwork, 2018). Discrimination resulting from AI is clearly a pressing issue: what the AI assesses is skewed data, and the data is skewed because of human bias. AI is not a ‘non-human’ solution after all.
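
The point that ‘ignoring’ demographics is often ineffective can be sketched in a few lines. The toy model below (invented data, scikit-learn assumed, and a deliberately crude hypothetical proxy feature) never sees gender as an input, yet a feature that correlates with gender lets it reproduce the historical gap anyway.

```python
# Invented-data sketch of why 'ignoring' protected attributes is often
# ineffective: the model never sees gender, but a proxy feature carries it.
from sklearn.linear_model import LogisticRegression

# Features: [years_experience, member_of_womens_tech_society]; gender itself
# is deliberately excluded. The historical decisions (y) were biased against
# the equally experienced applicants who happen to carry the proxy feature.
X = [
    [5, 0], [6, 0], [4, 0], [7, 0],
    [5, 1], [6, 1], [4, 1], [7, 1],
]
y = [1, 1, 1, 1, 0, 0, 0, 1]  # biased historical hiring outcomes

model = LogisticRegression().fit(X, y)

# Two new applicants, identical except for the proxy feature.
for features in ([6, 0], [6, 1]):
    p_hire = model.predict_proba([features])[0][1]
    print(f"features={features}: P(hire) = {p_hire:.2f}")
# The proxy reproduces the historical gap even though gender was 'ignored'.
```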


Figure 5. With advances in AI, it's possible that the line between robotics and AI will become more blurred (Michie, 2022).

3. Discrimination and ‘wrongfulness’


So far, this article has considered an objection to premise 6. Another objection is explored hereafter: that discrimination is not always ‘wrong’. Take two hypothetical scenarios:


A: The head of a company instructs the organisation’s receptionist not to take applications from job seekers from a certain racial or ethnic background.

B: The head of a company hires someone with a disability to increase the number of people with disabilities in the workforce, despite another candidate for the job being better qualified.


Discrimination can take many forms. In some cases, such as scenario A, discrimination is direct and intentional, based on negative attitudes and bias. B is somewhat different. B qualifies as discrimination, in that the hired applicant is treated differently because of their disability. B, however, favours the disabled individual: they are hired on the grounds of their disability, in order to build a more diverse workforce. This could be another kind of fairness: the promotion of diversity and inclusion. As discussed in part 2, it is often argued that demographic (‘generalised’) information is what makes AI technology or hiring processes biased, and that ignoring or removing demographic information could therefore reduce bias. Discrimination in its different forms, however, raises a unique issue here, since generalisations may not always discriminate wrongfully. Consider a company that hires only recent graduates for its graduate scheme: excluding non-graduate applicants is not discriminatory in the wrongful sense.


Figure 6. Mark Zuckerberg told the world in 2021 that he was rebranding Facebook to Meta as the company pushes toward the metaverse (Shead, 2022).

More specifically, this is problematic for Cynthia Dwork’s (2018) concept of individual fairness, which holds that similar people should be treated similarly: going through the same procedural process, for instance, or having equal opportunities. Regarding AI in hiring, the algorithm must therefore have the right conception of similarity and dissimilarity (i.e., of how to evaluate how similar two people are). Yet generalisations are not always discriminatory and wrong. Hence, this is a potential objection to premises 2 and 3: even if freedom from discrimination is a right, and even if it proves relevant and important in hiring, this does not show that AI discrimination is always wrong. There is a simple response, though. Granted, discrimination may not always be wrongful, but freedom from discrimination remains a fundamental human right, and the deontological argument holds that the use of AI in hiring violates this right in the sense of wrongful discrimination. Moreover, no single fairness metric is appropriate in all contexts, and evaluating algorithmic bias is contextual, so different forms of discrimination are relative to different contexts. ‘Harmless’ discrimination is only an issue with a particular notion of fairness in mind, which this argument does not adopt: it is not suggested that removing or ignoring demographic information would be fair or helpful, and only on that view would ‘non-wrongful’ kinds of discrimination and generalisation cause a problem. Instead, AI in the job hiring process is morally impermissible because it violates the right to freedom from discrimination.
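
Individual fairness is often stated as a Lipschitz-style condition: the gap between how two candidates are treated should be no larger than the task-relevant gap between the candidates themselves. The sketch below uses invented distance functions, names, and scores purely for illustration; choosing the ‘right’ similarity metric is precisely the contested step the paragraph above describes.

```python
# A sketch of individual fairness as a Lipschitz-style check. The distance
# functions and scores below are invented placeholders: deciding what makes
# two candidates 'similar' is exactly the contested part.

def candidate_distance(a, b):
    """Hypothetical task-relevant dissimilarity between two candidates (0 = identical)."""
    return abs(a["years_experience"] - b["years_experience"]) / 10

def treatment_distance(score_a, score_b):
    """How differently the screening system treats the two candidates."""
    return abs(score_a - score_b)

def individually_fair(a, b, score_a, score_b):
    """Similar candidates should be treated similarly: the treatment gap
    must not exceed the candidate gap."""
    return treatment_distance(score_a, score_b) <= candidate_distance(a, b)

alice = {"years_experience": 6}
badr = {"years_experience": 6}

# Identical on the chosen metric, yet scored very differently -> unfair.
print(individually_fair(alice, badr, score_a=0.8, score_b=0.3))  # False
```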


Figure 7. Immanuel Kant (1724-1804) by Johann Gottlieb Becker. Kant’s ethics focuses on duties, defined by right and wrong (deontology) (Kranak, 2019).

Conclusion


Questioning the use of AI involves a careful formulation of fairness. Of course, there is no straightforward, formalised, or agreed-upon conception of ‘fairness’ in reality, and ‘fairness’ must also be contextual. This article has not by any means proposed a deontological ‘solution’ to fairness in machine learning, but it has argued that the use of AI in the hiring process is not morally permissible. If AI systems are used as hiring tools, achieving or even seeking ‘fairness’ must often rely on the hard data those systems assess, which itself reflects the complexity of the issue. AI is certainly not a simple or fair ‘non-human’ alternative to human biases, nor is ignoring demographic information helpful. What is clear, however, is that AI in hiring is not currently morally permissible when it violates the right to freedom from discrimination.


Bibliographical References

Binns, R. (2018). What Can Political Philosophy Teach Us about Algorithmic Fairness? IEEE Security & Privacy, 16(3), 73-80. doi: 10.1109/MSP.2018.2701147


Brownstein, M. (2019). Implicit Bias. In The Stanford Encyclopedia of Philosophy. Metaphysics Research Lab, Stanford University.


Danks, D., & London, A. J. (2017). Algorithmic Bias in Autonomous Systems. In Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence (IJCAI 2017) (pp. 4691-4697).


Dwork, C., & Ilvento, C. (2018). Fairness Under Composition. In 10th Innovations in Theoretical Computer Science Conference (ITCS 2019), Leibniz International Proceedings in Informatics (LIPIcs), Vol. 124, 33:1-33:20. doi: 10.4230/LIPIcs.ITCS.2019.33


Gomez, D. (2020). How AI Can Reduce Unconscious Bias In Recruiting [Blog]. Retrieved from https://ideal.com/unconscious-bias/


Guynn, J. (2017). Palantir settles Asian hiring discrimination lawsuit. USA Today.


Howard, A., & Borenstein, J. (2018). The ugly truth about ourselves and our robot creations: The problem of bias and social inequity. Science and Engineering Ethics, 24(5), 1521-1536.


Lavanchy, M. (2018). Amazon’s sexist hiring algorithm could still be better than a human. The Conversation.


Other rights protected under the Human Rights Act. (1998). Retrieved 25 May 2022, from https://www.citizensadvice.org.uk/law-and-courts/civil-rights/human-rights/what-rights-are-protected-under-the-human-rights-act/other-rights-protected-under-the-human-rights-act/


Upadhyay, A. K., & Khandelwal, K. (2019). Artificial intelligence-based training learning from application. Development and Learning in Organizations: An International Journal.


Vickers, L. (2016). Religious freedom, religious discrimination and the workplace. Bloomsbury Publishing.


Zimmermann, A., Di Rosa, E., & Kim, H. (2020). Technology Can't Fix Algorithmic Injustice - Boston Review. Retrieved 25 May 2022, from https://bostonreview.net/articles/annette-zimmermann-algorithmic-political/


Visual Sources

Cover Image. Teichmann, J. (2019). Bias and Algorithmic Fairness. Towards Data Science. Retrieved January 20, 2023, from https://towardsdatascience.com/bias-and-algorithmic-fairness-10f0805edc2b.


Figure 1. Childs, M. (2011) John McCarthy: Computer Scientist Also Known As The Father of AI, Independent. Available at: https://www.independent.co.uk/news/obituaries/john-mccarthy-computer-scientist-known-as-the-father-of-ai-6255307.html (Accessed: January 20, 2023).


Figure 2. Morgan, B. (2018) How Amazon Has Reorganized Around Artificial Intelligence And Machine Learning, Forbes. Available at: https://www.forbes.com/sites/blakemorgan/2018/07/16/how-amazon-has-re-organized-around-artificial-intelligence-and-machine-learning/?sh=67803af67361 (Accessed: January 20, 2023).


Figure 3. Duignan, B. (2022, December 28). John Rawls. Encyclopedia Britannica. https://www.britannica.com/biography/John-Rawls


Figure 4. Zimmermann, A., Di Rosa, E., & Kim, H. (2020). Technology Can't Fix Algorithmic Injustice - Boston Review. Retrieved 25 May 2022, from https://bostonreview.net/articles/annette-zimmermann-algorithmic-political/


Figure 5. Michie, J. (2022) As artificial intelligence gets smarter, is it game over for humans?, The Guardian. Available at: https://www.theguardian.com/technology/2022/mar/31/as-artificial-intelligence-gets-smarter-is-it-game-over-for-humans?utm_term=Autofeed&CMP=twt_gu&utm_medium&utm_source=Twitter (Accessed: January 20, 2023).


Figure 6. Shead, S. (2022) Meta’s A.I. exodus, CNBC. Available at: https://www.cnbc.com/2022/04/01/metas-ai-lab-loses-some-key-people.html (Accessed: January 20, 2023).


Figure 7. Kranak, J. (2019). Kantian Deontology. Rebus Community. Retrieved January 20, 2023, from https://press.rebus.community/intro-to-phil-ethics/chapter/kantian-deontology/.


Rebecca Ivory