Sunday, August 27, 2023

7. Building Ethical AI for Talent Management



As Thomas C. (2019) observes, artificial intelligence has disrupted every area of our lives — from the curated shopping experiences we’ve come to expect from companies like Amazon and Alibaba to the personalized recommendations that channels like YouTube and Netflix use to market their latest content. But when it comes to the workplace, AI is in many ways still in its infancy. This is particularly true when we consider the ways it is beginning to change talent management. To use a familiar analogy: AI at work is still in dial-up mode. The 5G WiFi phase has yet to arrive, but we have no doubt that it will.

To be sure, there is much confusion around what AI can and cannot do, as well as different perspectives on how to define it. In the war for talent, however, AI plays a very specific role: to give organizations more accurate and more efficient predictions of a candidate’s work-related behaviors and performance potential. Unlike traditional recruitment methods, such as employee referrals, CV screening, and face-to-face interviews, AI is able to find patterns unseen by the human eye.

Many AI systems use real people as models for what success looks like in certain roles. This group of individuals is referred to as a “training data set” and often includes managers or staff who have been defined as “high performers.” AI systems process and compare the profiles of various job applicants to the “model” employee it has created based on the training set. Then, it gives the company a probabilistic estimate of how closely a candidate’s attributes match those of the ideal employee.
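To make the mechanics concrete, here is a minimal, illustrative sketch in Python of this kind of profile matching. The feature names and numbers are hypothetical, and real systems use far richer data and properly validated models; the point is only to show how a "model employee" profile can be derived from a training set and compared against a candidate.

```python
import math

# Hypothetical numeric features for illustration only:
# [structured_interview_score, cognitive_test_score, relevant_experience_years]
high_performers = [          # the "training data set" of current high performers
    [4.2, 0.81, 6.0],
    [3.9, 0.77, 4.5],
    [4.5, 0.90, 7.0],
]
candidate = [4.0, 0.70, 5.0]

def centroid(rows):
    """Average profile of the training set -- the 'model' employee."""
    return [sum(col) / len(rows) for col in zip(*rows)]

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

model_employee = centroid(high_performers)
# Map similarity (-1..1) onto a rough 0..1 match estimate.
match_estimate = (cosine_similarity(candidate, model_employee) + 1) / 2
print(f"Estimated match to model employee: {match_estimate:.2f}")
```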

Theoretically, this method could be used to find the right person for the right role faster and more efficiently than ever before. But, as you may have realized, it has become a source of both promise and peril. If the training set is diverse, if demographically unbiased data is used to measure the people in it, and if the algorithms are also de-biased, this technique can actually mitigate human prejudice and expand diversity and socioeconomic inclusion better than humans ever could. However, if the training set, the data, or both are biased, and algorithms are not sufficiently audited, AI will only exacerbate the problem of bias in hiring and homogeneity in organizations.

In order to rapidly improve talent management and take full advantage of the power and potential AI offers, then, we need to shift our focus from developing more ethical HR systems to developing more ethical AI. Of course, removing bias from AI is not easy. In fact, it is very hard. But our argument is based on our belief that it is far more feasible than removing it from humans themselves.

When it comes to identifying talent or potential, most organizations still play it by ear. Recruiters spend just a few seconds looking at a resume before deciding who to “weed out.” Hiring managers make quick judgments and call them “intuition” or overlook hard data and hire based on cultural fit — a problem made worse by the general absence of objective and rigorous performance measures. Further, the unconscious bias training implemented by a growing number of companies has often been found to be ineffective, and at times, can even make things worse. Often, training focuses too much on individual bias and too little on the structural biases narrowing the pipeline of underrepresented groups.

Though critics argue that AI is not much better, they often forget that these systems are mirroring our own behavior. We are quick to blame AI for predicting that white men will receive higher performance ratings from their (probably also white male) managers. But this happens because we have failed to fix the bias in the performance ratings that are often used in training data sets. We are shocked that AI can make biased hiring decisions, yet we seem content to live in a world where human biases dominate those same decisions. Just take a look at Amazon. The outcry over its biased recruiting algorithm ignored the overwhelming evidence that current, human-driven hiring in most organizations is considerably worse. It’s akin to expressing more concern over a very small number of driverless-car deaths than over the 1.2 million traffic deaths a year caused by flawed, and possibly also distracted or intoxicated, humans.

Realistically, we have a greater ability to ensure both accuracy and fairness in AI systems than we do to influence or enlighten recruiters and hiring managers. Humans are very good at learning but very bad at unlearning. The cognitive mechanisms that make us biased are often the same tools we use to survive in our day-to-day lives. The world is far too complex for us to process logically and deliberately all the time; if we did, we would be overwhelmed by information overload and unable to make simple decisions, such as buying a cup of coffee (after all, why should we trust the barista if we don’t know him?). That’s why it’s easier to ensure that our data and training sets are unbiased than it is to change the behaviors of Sam or Sally, from whom we can neither remove bias nor extract a printout of the variables that influence their decisions. Essentially, it is easier to unpack AI algorithms than it is to understand and change the human mind.

To do this, organizations using AI for talent management, at any stage, should start by taking the following steps.

1.     Educate candidates and obtain their consent. 

Ask prospective employees to opt in before providing their personal data to the company, knowing that it will be analyzed, stored, and used by AI systems to make HR-related decisions. Be ready to explain the what, who, how, and why. It’s not ethical for AI systems to rely on black-box models. If a candidate has an attribute that is associated with success in a role, the organization needs to not only understand why that is the case but also be able to explain the causal links. In short, AI systems should be designed to predict and explain “causation,” not just find “correlation.” You should also be sure to preserve candidate anonymity to protect personal data and comply with GDPR, California privacy laws, and similar regulations.
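As an illustration of the consent and anonymity points above, here is a minimal sketch that assumes a hypothetical application record: it refuses to process candidates who have not opted in and pseudonymizes direct identifiers before any AI system sees the data. The field names and salting scheme are assumptions for the example, not a prescription.

```python
import hashlib
import json
from datetime import datetime, timezone

# Hypothetical raw application record; field names are illustrative only.
raw_application = {
    "email": "jane.doe@example.com",
    "full_name": "Jane Doe",
    "structured_interview_score": 4.1,
    "work_sample_score": 0.78,
    "consent_to_ai_processing": True,   # explicit opt-in captured on the application form
}

def pseudonymize(record, salt="rotate-me-regularly"):
    """Replace direct identifiers with a salted hash before the AI pipeline sees them."""
    if not record.get("consent_to_ai_processing"):
        raise ValueError("No consent recorded: do not process this application with AI.")
    candidate_key = hashlib.sha256((salt + record["email"]).encode()).hexdigest()[:16]
    return {
        "candidate_key": candidate_key,
        "structured_interview_score": record["structured_interview_score"],
        "work_sample_score": record["work_sample_score"],
        "consent_timestamp": datetime.now(timezone.utc).isoformat(),
    }

print(json.dumps(pseudonymize(raw_application), indent=2))
```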

2.     Invest in systems that optimize for fairness and accuracy.

Historically, organizational psychologists have pointed to a drop in accuracy when candidate assessments are optimized for fairness. For example, much academic research indicates that while cognitive ability tests are a consistent predictor of job performance, particularly in high-complexity jobs, their deployment has an adverse impact on underrepresented groups, particularly individuals with a lower socioeconomic status. This means that companies interested in boosting diversity and creating an inclusive culture often de-emphasize traditional cognitive tests when hiring new workers so that diverse candidates are not disadvantaged in the process. This is known as the fairness/accuracy trade-off.

However, this trade-off is based on techniques from half a century ago, prior to the advent of AI models that can treat the data very differently than traditional models. There is increasing evidence that AI could overcome this trade-off by deploying more dynamic and personalized scoring algorithms that are as sensitive to fairness as they are to accuracy, optimizing for a mix of both. Therefore, developers of AI have no excuse for not doing so. Further, because these new systems now exist, we should question whether the widespread use of traditional cognitive assessments, which are known to have an adverse impact on minorities, should continue without some form of bias mitigation.
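A minimal sketch of what "optimizing for a mix of both" can mean in practice: the toy code below searches for a decision threshold that maximizes accuracy minus a penalty on the selection-rate gap between two groups. The data, group labels, and fairness weight are all hypothetical; a production system would use validated scores and legally reviewed fairness criteria.

```python
# Toy example: (predicted_score, actually_high_performer, group) triples.
# Groups "A" and "B" stand in for any demographic split; all values are illustrative.
records = [
    (0.92, 1, "A"), (0.85, 1, "A"), (0.60, 0, "A"), (0.40, 0, "A"),
    (0.88, 1, "B"), (0.70, 1, "B"), (0.55, 0, "B"), (0.35, 0, "B"),
]

def accuracy(threshold):
    """Fraction of records where 'selected' matches 'actually high performer'."""
    return sum((score >= threshold) == bool(label) for score, label, _ in records) / len(records)

def selection_rate(threshold, group):
    grp = [score for score, _, g in records if g == group]
    return sum(score >= threshold for score in grp) / len(grp)

def objective(threshold, fairness_weight=0.5):
    """Reward accuracy, penalize the selection-rate gap between groups."""
    disparity = abs(selection_rate(threshold, "A") - selection_rate(threshold, "B"))
    return accuracy(threshold) - fairness_weight * disparity

best = max((t / 100 for t in range(101)), key=objective)
print(f"Threshold balancing accuracy and parity: {best:.2f}")
```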

3.     Develop open-source systems and third-party audits.

Hold companies and developers accountable by allowing others to audit the tools being used to analyze their applications. One solution is open-sourcing non-proprietary yet critical aspects of the AI technology the organization uses. For proprietary components, third-party audits conducted by credible experts in the field are a tool companies can use to show the public how they are mitigating bias.

4.     Follow the same laws — as well as data collection and usage practices — used in traditional hiring.

Any data that shouldn’t be collected or included in a traditional hiring process for legal or ethical reasons should not be used by AI systems. Private information about physical, mental, or emotional conditions, genetic information, and substance use or abuse should never be entered into these systems.
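One simple, illustrative safeguard is to scrub prohibited attributes from candidate records before they ever reach an AI system. The field list below is an assumption for the example and would need to be mapped to an organization's actual legal guidance.

```python
# Fields that a traditional hiring process could not lawfully or ethically consider;
# this list is illustrative, not exhaustive.
PROHIBITED_FIELDS = {
    "health_condition", "mental_health_history", "genetic_information",
    "substance_use", "pregnancy_status", "disability_status",
}

def scrub_for_ai(candidate_record):
    """Drop any prohibited attributes before the record reaches an AI system."""
    return {k: v for k, v in candidate_record.items() if k not in PROHIBITED_FIELDS}

example = {"work_sample_score": 0.8, "health_condition": "asthma"}
print(scrub_for_ai(example))   # -> {'work_sample_score': 0.8}
```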

If organizations address these issues, we believe that ethical AI could vastly improve organizations by not only reducing bias in hiring but also by enhancing meritocracy and making the association between talent, effort, and employee success far greater than it has been in the past. Further, it will be good for the global economy. Once we mitigate bias, our candidate pools will grow beyond employee referrals and Ivy League graduates. People from a wider range of socioeconomic backgrounds will have more access to better jobs — which can help create balance and begin to remedy class divides.

To make the above happen, however, businesses need to make the right investments, not just in cutting-edge AI technologies, but also (and especially) in human expertise — people who understand how to leverage the advantages that these new technologies offer while minimizing potential risks and drawbacks. In any area of performance, a combination of artificial and human intelligence is likely to produce a better result than one without the other. Ethical AI should be viewed as one of the tools we can use to counter our own biases, not as a final panacea.



The Legal and Ethical Implications of Using AI in Hiring

Digital innovations and advances in AI have produced a range of novel talent identification and assessment tools. Many of these technologies promise to help organizations improve their ability to find the right person for the right job, and screen out the wrong people for the wrong jobs, faster and cheaper than ever before.

These tools put unprecedented power in the hands of organizations to pursue data-based human capital decisions.  They also have the potential to democratize feedback, giving millions of job candidates data-driven insights on their strengths, development needs, and potential career and organizational fit. In particular, we have seen the rapid growth (and corresponding venture capital investment) in game-based assessments, bots for scraping social media postings, linguistic analysis of candidates’ writing samples, and video-based interviews that utilize algorithms to analyze speech content, tone of voice, emotional states, nonverbal behaviors, and temperamental clues.

While these novel tools are disrupting the recruitment and assessment space, they leave many as-yet-unanswered questions about their accuracy and about the ethical, legal, and privacy implications they introduce. This is especially true when they are compared with more longstanding psychometric assessments such as the NEO-PI-R, the Wonderlic Test, Raven's Progressive Matrices, or the Hogan Personality Inventory, which have been scientifically derived and carefully validated against relevant jobs, identifying reliable associations between applicants’ scores and their subsequent job performance (with the evidence published in independent, trustworthy, scholarly journals). Recently, there has even been interest and concern in the U.S. Senate about whether new technologies (specifically, facial analysis technologies) might have negative implications for equal opportunity among job candidates.

In this article, we focus on the potential repercussions of new technologies on the privacy of job candidates, as well as the implications for candidates’ protections under the Americans with Disabilities Act and other federal and state employment laws. Employers recognize that they can’t or shouldn’t ask candidates about their family status or political orientation, or whether they are pregnant, straight, gay, sad, lonely, depressed, physically or mentally ill, drinking too much, abusing drugs, or sleeping too little. However, new technologies may already be able to discern many of these factors indirectly and without proper (or even any) consent.

Before delving into the current ambiguities of the brave new world of job candidate assessment and evaluation, it’s helpful to take a look at the past. Psychometric assessments have been in use for well over 100 years, and became more widely utilized as a result of the United States Military’s Army Alpha, which placed recruits into categories and determined their likelihood of being successful in various roles. Traditionally, psychometrics fell into three broad categories: cognitive ability or intelligence, personality or temperament, and mental health or clinical diagnosis.

Since the adoption of the Americans with Disabilities Act (ADA) in 1990, employers are generally forbidden from inquiring about and/or using physical disability, mental health, or clinical diagnosis as a factor in pre-employment candidate assessments, and companies that have done so have been sued and censured. In essence, disabilities — whether physical or mental — have been determined to be “private” information that employers cannot inquire about at the pre-employment stage, just as employers shouldn’t ask applicants intrusive questions about their private lives, and cannot take private demographic information into account in hiring decisions.

Cognitive ability and intelligence testing have been found to be a reliable and valid predictor of job success in a wide variety of occupations. However, these kinds of assessments can be discriminatory if they adversely impact certain protected groups, such as those defined by gender, race, age, or national origin. If an employer is utilizing an assessment that has been found to have such an adverse impact, which is defined by the relative scores of different protected groups, the employer has to prove that the assessment methodology is job-related and predictive of success in the specific jobs in question.

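As a rough illustration of how adverse impact is screened for in practice, the sketch below applies the four-fifths (80%) rule of thumb used in the EEOC's Uniform Guidelines to hypothetical selection counts for two groups. The numbers are invented, and the rule is a screening heuristic, not a legal determination.

```python
# Hypothetical applicant and selection counts per group after an assessment is applied.
applicants = {"group_1": 100, "group_2": 80}
selected   = {"group_1": 40,  "group_2": 20}

def impact_ratio(reference_group, comparison_group):
    """Ratio of selection rates; values below 0.8 flag potential adverse impact."""
    ref_rate  = selected[reference_group]  / applicants[reference_group]
    comp_rate = selected[comparison_group] / applicants[comparison_group]
    return comp_rate / ref_rate

ratio = impact_ratio("group_1", "group_2")
print(f"Impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Potential adverse impact: the employer must show the assessment is job-related.")
```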

Unfortunately, there is far less information about the new generation of talent tools that are increasingly used in pre-hire assessment. Many of these tools have emerged as technological innovations rather than from scientifically derived methods or research programs. As a result, it is not always clear what they assess, whether their underlying hypotheses are valid, or why they may be expected to predict job candidates’ performance. For example, physical properties of speech and the human voice — which have long been associated with elements of personality — have been linked to individual differences in job performance. If a tool shows a preference for speech patterns such as consistent vocal cadence or pitch, or a “friendly” tone of voice, that does not have an adverse impact on job candidates in a legally protected group, then there is no legal issue; but these tools may not have been scientifically validated and therefore do not control for potential discriminatory adverse impact — meaning an employer that relies on them blindly may incur liability. In addition, there are as yet no convincing hypotheses or defensible conclusions about whether it is ethical to screen people out based on their voices, which are physiologically determined, largely unchangeable personal attributes.

Likewise, social media activity — e.g., Facebook or Twitter usage — has been found to reflect people’s intelligence and personality, including their dark-side traits. But is it ethical to mine this data for hiring purposes when users generally will have used such apps for other purposes and may not have consented to having private conclusions drawn from their public postings?

When used in the hiring context, new technologies raise a number of new ethical and legal questions around privacy, which we think ought to be publicly discussed and debated, namely:

 

What temptations will companies face in terms of candidate privacy relating to personal attributes?

As technology advances, big data and AI will continue to be able to determine “proxy” variables for private, personal attributes with increased accuracy. Today, for example, Facebook “likes” can be used to infer sexual orientation and race with considerable accuracy. Political affiliation and religious beliefs are just as easily identifiable. Might companies be tempted to use tools like these to screen candidates, believing that because decisions aren’t made directly on the basis of protected characteristics, they aren’t legally actionable? While an employer may not violate any laws in merely discerning an applicant’s personal information, the company may become vulnerable to legal exposure if it makes adverse employment decisions by relying on any protected categories such as one’s place of birth, race, or native language — or based on private information that it does not have the right to consider, such as possible physical illness or mental ailment. How the courts will handle situations where employers have relied upon tools using these proxy variables is unclear; but the fact remains that it is unlawful to take an adverse action based upon certain protected or private characteristics — no matter how these were learned or inferred.
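To make the proxy-variable concern concrete, here is a small, hypothetical check: it measures the correlation between one model input and a protected attribute over a toy dataset. Real proxy audits are considerably more sophisticated, and the threshold used here is purely illustrative.

```python
import math

# Toy data: a hypothetical model input (e.g. an engagement-derived "likes" feature)
# alongside a protected attribute the employer may not rely on; values are illustrative.
model_feature       = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
protected_attribute = [1, 0, 1, 1, 0, 1, 0, 1, 1, 1]

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson(model_feature, protected_attribute)
print(f"Correlation with protected attribute: {r:.2f}")
if abs(r) > 0.5:   # illustrative threshold -- set with legal and statistical advice
    print("Feature may be acting as a proxy; review or remove it before use.")
```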

This might also apply to facial recognition software, as recent research predicts that face-reading AI may soon be able to discern candidates’ sexual and political orientation as well as “internal states” like mood or emotion with a high degree of accuracy. How might the application of the Americans with Disabilities Act change? Additionally, the Employee Polygraph Protection Act generally prohibits employers from using lie detector tests as a pre-employment screening tool and the Genetic Information Nondiscrimination Act prohibits employers from using genetic information in employment decisions. But what if the exact same kind of information about truth, lies, or genetic attributes can be determined by the above-mentioned technological tools?

What temptations will companies face in terms of candidate privacy relating to lifestyle and activities?

Employers can now access information such as one candidate’s online “check in” to her church every Sunday morning, another candidate’s review of the dementia care facility into which he has checked his elderly parent, and a third’s divorce filing in civil court. All of these things, and many more, are easily discoverable in the digital era. Big data is following us everywhere we go online and collecting and assembling information that can be sliced and diced by tools we can’t even imagine yet — tools that could possibly inform future employers about our fitness (or lack thereof) for certain roles. And big data is only going to get bigger; according to experts, 90% of the data in the world was generated in the past two years alone. With the expansion of data comes the potential expansion for misuse and resulting discrimination — either deliberate or unintentional.

Unlike the EU, which has harmonized its approach to privacy under the General Data Protection Regulation (GDPR), the U.S. relies on a patchwork approach to privacy driven largely by state law. With regard to social media specifically, states began introducing legislation back in 2012 to prevent employers from requesting passwords to personal internet accounts as a condition of employment, and more than twenty states have enacted these types of laws that apply to employers. However, in terms of general privacy in the use of new technologies in the workplace, there has been less specific guidance or action. One notable exception is California, where legislation has been passed that will potentially constrain employers’ use of candidate or employee data. In general, state and federal courts have yet to adopt a unified framework for analyzing employee privacy as it relates to new technology. The takeaway is that, at least for now, employee privacy in the age of big data remains unsettled. This puts employers in a conflicted position that calls for caution: cutting-edge technology is available that may be extremely useful, but it provides information that has previously been considered private. Is it legal to use in a hiring context? And is it ethical to consider if the candidate didn’t consent?

What temptations will companies face in terms of candidate privacy relating to disabilities?

The Americans with Disabilities Act puts mental disabilities squarely in its purview, alongside physical disabilities, and defines an individual as disabled if the impairment substantially limits a major life activity, if the person has a record of such an impairment, or if the person is perceived to have such an impairment. About a decade ago, the U.S. Equal Employment Opportunity Commission (EEOC) issued guidance to say that the expanding list of personality disorders described in the psychiatric literature could qualify as mental impairments, and the ADA Amendments Act made it easier for an individual to establish that he or she has a disability within the meaning of the ADA. As a result, the category of people protected under the ADA may now include people who have significant problems communicating in social situations, people who have issues concentrating, or people who have difficulty interacting with others.

In addition to raising new questions about disabilities, technology also presents new dilemmas with respect to differences, whether demographic or otherwise. There have already been high-profile, real-life situations in which these systems have revealed learned biases, especially relating to race and gender. Amazon, for example, developed an automated talent search program to review resumes — which was abandoned once the company realized that the program was not rating candidates in a gender-neutral way. To reduce such biases, developers are balancing the data used to train AI models so that all groups are appropriately represented. The more information the technology has and can learn from, the better it can control for potential bias.
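A minimal sketch of one common balancing technique, assuming hypothetical group labels: each training example receives a weight inversely proportional to its group's frequency, so under-represented groups are not drowned out during model training.

```python
from collections import Counter

# Hypothetical group labels for a training set; names and proportions are illustrative only.
training_groups = ["men"] * 80 + ["women"] * 20

def balancing_weights(groups):
    """Weight each example inversely to its group's frequency so that
    every group contributes equally to model training."""
    counts = Counter(groups)
    total, n_groups = len(groups), len(counts)
    return [total / (n_groups * counts[g]) for g in groups]

weights = balancing_weights(training_groups)
print(set(zip(training_groups, (round(w, 3) for w in weights))))
# contents: ('men', 0.625) and ('women', 2.5)
```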

In conclusion, new technologies can already cross the lines between public and private attributes, “traits” and “states” in new ways, and there is every reason to believe that in the future they will be increasingly able to do so. Using AI, big data, social media, and machine learning, employers will have ever-greater access to candidates’ private lives, private attributes, and private challenges and states of mind. There are no easy answers to many of the new questions about privacy we have raised here, but we believe that they are all worthy of public discussion and debate.


9 comments:

  1. Overall, this article provides a valuable resource for building ethical AI for talent management. It discusses the potential impact of new technologies on job candidate privacy and their protections under the Americans with Disabilities Act and other employment laws. Employers must respect candidates' privacy by not asking about factors like family status, political orientation, or mental health. Psychometric assessments, which have been in use for over 100 years, have evolved since the Army Alpha. Since the ADA's adoption in 1990, employers are prohibited from using physical disability, mental health, or clinical diagnosis as factors in pre-employment candidate assessments. This means that disabilities are considered "private" information and cannot be used in hiring decisions. Super article Prakash...

    Replies
    1. I agree that this article provides a valuable resource for building ethical AI for talent management. It is important to be mindful of the potential impact of new technologies on job candidate privacy and their protections under the law. Employers must respect candidates' privacy by not asking about factors that are protected by law, such as family status, political orientation, or mental health.
      I also agree that psychometric assessments have evolved since the Army Alpha. These assessments can be a valuable tool for forecasting job performance, but they must be used ethically and responsibly. Employers should not use psychometric assessments to discriminate against candidates on the basis of protected characteristics.
      I appreciate you bringing up the ADA and its prohibition on using physical disability, mental health, or clinical diagnosis as factors in pre-employment candidate assessments. This is an important reminder that disabilities are considered "private" information and cannot be used in hiring decisions.
      Thank you for your attentive comment! I'm glad that you found this article to be helpful.

  2. The acknowledgement of the complexity and the absence of easy answers is on point. The emphasis on initiating public discussions and debates is crucial in navigating this evolving landscape. It underscores the need for a collective effort to define boundaries, establish guidelines, and address the ethical dilemmas posed by these emerging technologies. By bringing attention to these emerging challenges, your perspective underscores the importance of being proactive in addressing the implications of technological advancements on privacy and personal information. It's a reminder that as technology advances, so must our discussions around its responsible and ethical implementation.

    Replies
    1. I appreciate your thoughtful response to the viewpoint I presented. Your recognition of the complexity surrounding emerging technologies and the absence of straightforward solutions is indeed on point. It's essential to acknowledge the complexities involved in navigating this ever-evolving landscape.
      I'm pleased that the emphasis on initiating public discussions and debates resonated with you. In such a dynamic environment, it's vital to engage in collective conversations that help define boundaries, set guidelines, and address the ethical concerns that arise from these new technologies.
      Your reflection about the importance of proactive efforts in addressing the implications of technological advancements on privacy and personal information is spot-on. As we witness rapid technological progress, it becomes increasingly vital to ensure responsible and ethical implementation through ongoing dialogues and considerations.
      I'm happy the perspective brought attention to these emerging challenges and served as a reminder of the ongoing necessity to stay engaged in discussions surrounding technology's impact on society. If you have any further thoughts or if there are connected topics you'd like to explore, please don't hesitate to share. Your engagement adds depth to these important discussions.

  3. Nice article. The creation of ethical AI for talent management is a vital pursuit. Crafting AI systems that prioritize fairness, transparency, and accountability in processes like recruitment, employee development, and performance evaluation is essential. By ensuring that AI-driven decisions are unbiased, inclusive, and aligned with ethical standards, organizations can harness the power of technology to enhance talent management practices while upholding integrity and fostering a diverse and thriving workforce.

    Replies
    1. Thank you for your constructive feedback on the article. I'm glad to hear that you found value in the conversation about ethical AI in talent management.
      Your emphasis on the importance of creating ethical AI systems for talent management is well-stated. Certainly, crafting AI systems that prioritize fairness, transparency, and accountability is a crucial step forward. Ensuring that AI-driven processes, such as recruitment, employee development, and performance evaluation, are unbiased and inclusive is essential for promoting a just and equitable workplace.
      Your insight into the potential of AI to enhance talent management practices while upholding integrity and fostering diversity is both accurate and inspiring. By leveraging technology responsibly, organizations can not only optimize their processes but also contribute to a workplace culture that respects ethical standards and values.
      If you have any further thoughts or comments, or if you'd like to dig deeper into any related topics, please feel free to share. Your engagement is valuable, and I'm here to continue the conversation.

  4. Very interesting topic you touched on. AI systems are increasingly being used for recruitment and selection. According to Geetha and Bhanu (2018), AI is gaining attention and importance in automating recruitment systems compared to traditional recruitment methods.
    However, assessments of how well these systems work vary. For example, some argue that AI can reduce bias in the recruitment process, while others claim that it can perpetuate existing discrimination and bias. Regardless, it's clear that AI brings significant benefits to the recruitment and selection process.
    According to Dijkkamp (2019), the responsibilities of HR professionals are shifting towards the end of the recruitment funnel. However, this transition will occur gradually over several years, allowing HR professionals to adapt to new roles and responsibilities. With AI tools supporting rather than leading the decision-making process, HR professionals can add value to other parts of the process.

    Replies
    1. I appreciate you expressing your opinions on this fascinating issue. According to Geetha and Bhanu's research, the incorporation of AI into recruiting and selection procedures represents a considerable departure from conventional methods in favor of more automated ones.

      The intricacy of this topic is highlighted by the divergent opinions on whether AI can effectively reduce bias or, conversely, whether it may actually amplify discrimination. Undoubtedly, striking a balance between using AI's capabilities and guaranteeing fairness is a difficult task that needs considerable thought.

      Your reference to Dijkkamp's description of the HR professional's shifting role adds a crucial dimension to the conversation. The progressive change in roles gives HR professionals a chance to make the most of their experience in strategic decision-making and relationship-building, while AI assumes a supporting function.

  5. This comment has been removed by the author.


8. The neglected role of talent proactivity : Integrating proactive behavior into talent-management

  As per McKinsey (2018), effective talent management is considered a key driver of an organization's ability to outstrip competitors...