Podcast: Playing the job market

Increasingly, job seekers need to pass a series of tests in the form of artificial-intelligence games just to be seen by a hiring manager. In this third of a four-part miniseries on AI and hiring, we speak to someone who helped create these tests, and we ask who might get left behind in the process and why there isn’t more policy in place. We also try out some of these tools ourselves.

We Meet:

  • Matthew Neale, Vice President of Assessment Products, Criteria Corp. 
  • Frida Polli, CEO, Pymetrics 
  • Henry Claypool, Consultant and former member, Obama administration Commission on Long-Term Care
  • Safe Hammad, CTO, Arctic Shores  
  • Alexandra Reeve Givens, President and CEO, Center for Democracy and Technology
  • Nathaniel Glasser, Employment Lawyer, Epstein Becker Green
  • Keith Sonderling, Commissioner, Equal Employment Opportunity Commission (EEOC)

We Talked To: 

  • Aaron Rieke, Managing Director, Upturn
  • Adam Forman, Employment Lawyer, Epstein Becker Green
  • Brian Kropp, Vice President Research, Gartner
  • Josh Bersin, Research Analyst
  • Jonathan Kestenbaum, Co-Founder and Managing Director, Talent Tech Labs
  • Frank Pasquale, Professor, Brooklyn Law School
  • Patricia (Patti) Sanchez, Employment Manager, MacDonald Training Center 
  • Matthew Neale, Vice President of Assessment Products, Criteria Corp. 
  • Frida Polli, CEO, Pymetrics 
  • Henry Claypool, Consultant and former member, Obama administration Commission on Long-Term Care
  • Safe Hammad, CTO, Arctic Shores  
  • Alexandra Reeve Givens, President and CEO, Center for Democracy and Technology
  • Nathaniel Glasser, Employment Lawyer, Epstein Becker Green
  • Keith Sonderling, Commissioner, Equal Employment Opportunity Commission (EEOC)

Sounds From:

  • Science 4-Hire, podcast
  • Matthew Kirkwold’s cover of XTC’s “Complicated Game,” https://www.youtube.com/watch?v=tumM_6YYeXs

Credits:

This miniseries on hiring was reported by Hilke Schellmann and produced by Jennifer Strong, Emma Cillekens, Anthony Green, and Karen Hao. We’re edited by Michael Reilly.

Transcript

[TR ID]

Jennifer: Often in life … you have to “play the metaphorical game”… to get the win you might be chasing.

(sounds from Matthew Kirkwold’s cover of XTC’s “Complicated Game”: “And it’s always been the same.. It’s just a complicated game.. Gh – ah.. Game..”)

Jennifer: But what if that game… was literal?

And what if winning at it could mean the difference between landing a job you’ve been dreaming of… or not.

Increasingly job seekers need to pass a series of “tests” in the form of artificial-intelligence games… just to be seen by a hiring manager.

Anonymous job seeker: For me, being a military veteran, being able to take tests and quizzes or being under pressure is nothing for me, but I don’t know why the cognitive tests gave me anxiety. I think it’s because I knew that it had nothing to do with software engineering. That’s what really got me.

Jennifer: We met this job seeker in the first episode of this series … 

She asked us to call her Sally because she’s criticizing the hiring methods of potential employers and doesn’t want her real name published.

 She has a graduate degree in information from Rutgers University in New Jersey, with specialties in data science and interaction design. 

And Sally fails to see how solving a timed puzzle… or playing video games like Tetris… have any real bearing on her potential to succeed in her field.

Anonymous job seeker: And I’m just like, what? I don’t understand. This is not relevant. So companies want to do diversity and inclusion, but you’re not doing diversity and inclusion when it comes to thinking, not everyone thinks the same. So how are you inputting that diversity and inclusion when you’re only selecting the people that can figure out a puzzle within 60 seconds. 

Jennifer: She says she’s tried everything to get better at games like the ones she described from Cognify… but without success. 

She was rejected from multiple jobs she applied to that required these games.

Anonymous job seeker: I took their practice exams. I was practicing stuff on YouTube. I was using other peers and we were competing against each other. So I was like, all right, it’s not me because I studied for this. And I still did not quote unquote pass so…

Jennifer: I’m Jennifer Strong and in this third episode of our series on AI and hiring… we look at the role of games in the hiring process…

We meet some of the lead creators and distributors of these tools… and we share with them some feedback on their products from people like Sally.

Matthew Neale: The disconnect I think for this candidate was between what the assessment was getting the candidate to do and, and what was required or the perceptions about what was required on the job.

Jennifer: Matthew Neale helped create the Cognify tests she’s talking about.

Matthew Neale: I think the intention behind Cognify is to look at people’s ability to learn, to process information, to solve problems. You know, I would say, I suppose that these kinds of skills are relevant in, in software design, particularly in, in software design where you’re going to be presented with complex difficult or unusual problems. And that’s the connection that I would draw between the assessment and the role.

Jennifer: So we tested some of these tools ourselves…

And we ask who might get left behind in the process… Plus, we find out why there isn’t more policy in place … and speak with one of the leading US regulators. 

Keith Sonderling: There has been no guidelines. There’s been nothing specific to the use of artificial intelligence, whether it is resume screening, whether it’s targeting job ads or facial recognition or voice recognition, there has been no new guidelines from the EEOC since the technology has been created.

[SHOW ID]

Frida Polli: So I’m Frida Polli. I’m a former academic scientist. I spent 10 years at Harvard and MIT and I am the current CEO of a company called Pymetrics.

Jennifer: It’s an AI-games company that uses behavioral science and machine learning to help decide whether people are the right fit for a given job.   

Frida Polli: I was a scientist who really loved the research I was doing. I was, at some point, frustrated by the fact that there wasn’t a lot of applications, real-world applications. So I went to business school looking for a problem, essentially, that our science could help solve. 

Jennifer: When I spoke to her earlier this year she called this her fundamental “aha” moment in the path to creating her company.

Frida Polli: Essentially, people were trying to glean cognitive, social, and emotional aptitudes, or what we call soft skills, from a person’s résumé, which didn’t seem like the optimal thing to do. If you’re trying to understand somebody more holistically, you can use newer behavioral science tools to do that. So ultimately just had this light bulb go off, thinking, okay, we know how to measure soft skills. We know how to measure the things that recruiters and candidates are looking to understand about themselves in a much more scientific, objective way. We don’t have to tea-leaf-read off a résumé.

Jennifer: Companies score job seekers because they get far too many applications for open roles. 

Frida Polli: You know if we could all wave our magic wand and not have to score people and magically distribute opportunity. I mean, my God, I’m all in all in. Right? And what we can do, I think, is make these systems as fair and predictive as possible, which was always kind of the goal. 

Jennifer: She says Pymetrics does this using cognitive research… and they don’t measure hard skills… like whether someone can code… or use a spreadsheet.

Frida Polli: The fundamental premise is that we all sort of have certain predispositions and they’ll lead us to be more versus less successful. There’s been a lot of research showing that, you know, different cognitive, social and emotional, or personality attributes do make people particularly well suited for, you know, role A and less well suited for role B. I mean, that research has, you know, predates Pymetrics and all we’ve done is essentially make the measurement of those things less reliant on self-report questionnaires and more reliant on actually measuring your behavior. 

Jennifer: These games measure nine specific soft skills including attention and risk preference… which she says are important in certain jobs. 

Frida Polli: It’s not super deterministic. It can change over time. But it’s a broad brush stroke of like, Hey, you know, if you tend to be like, let’s take me for a second, I tend to be somewhat impulsive, right. That would make me well disposed for certain roles, but potentially not others. So I guess what I would say is that both hard and soft skills are important for success in any particular role and the particular mix… it really depends on the role at hand, right?

Jennifer: Basically, it works like this: employees who’ve been successful in a particular job a company is hiring for are asked to play these games. That data gets compared against people already in a Pymetrics database… The idea is to build a model that identifies and ranks the skills unique to this group of employees… and to remove bias. 
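To make that concrete: in rough code, the incumbent-versus-baseline approach Jennifer describes might look something like the sketch below. It’s a minimal illustration of the general technique only; the game-play features, numbers, and choice of model are hypothetical, not Pymetrics’ actual pipeline.

```python
# Minimal, hypothetical sketch of incumbent-vs-baseline modeling: train a
# classifier on game-play features from successful employees versus a general
# player pool, then score new applicants against that profile. The feature
# names and all numbers below are illustrative placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Each row is one person's game-derived features, e.g.
# [avg_pumps_per_balloon, mean_reaction_time_ms, memory_span]
incumbents = rng.normal(loc=[48, 310, 6], scale=[5, 40, 1], size=(200, 3))
baseline = rng.normal(loc=[40, 350, 5], scale=[8, 60, 2], size=(2000, 3))

X = np.vstack([incumbents, baseline])
y = np.concatenate([np.ones(len(incumbents)), np.zeros(len(baseline))])

# Learn which trait profile distinguishes the successful incumbents.
model = LogisticRegression(max_iter=1000).fit(X, y)

# Score an incoming applicant's game-play features against that profile.
applicant = np.array([[45, 320, 7]])
print("fit score:", model.predict_proba(applicant)[0, 1])
```

A vendor claiming to “remove bias” would also need to audit a model like this for differing selection rates across demographic groups, a check we return to later in the episode.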

Jennifer: All of that gets compared against incoming job applicants… And it’s used by large global companies including Kraft Heinz and AstraZeneca.

Another big player in this field is a company called Arctic Shores. Their games are used by the financial industry… and by large companies mainly in Europe.  

Safe Hammad: The way we recruit was, and in many cases is, broken. 

Jennifer: Safe Hammad is a cofounder and CTO of Arctic Shores.

Safe Hammad: But companies are recognizing that actually they can do better. They can do better on the predictability front to improve the bottom line for the companies. And also they can do better on the bias front as well. It’s a win-win situation: by removing the bias, you get more suitable people, the right people, in your company. I mean, what’s not to like? 

Jennifer: Like Pymetrics, Arctic Shores teases out personality traits via AI-based “video games.” 

Safe Hammad:  So, the way we measure something like sociability isn’t “Oh, you’re in a room and you want to go and talk to someone,” or, you know, actually, you really wouldn’t realize, you know, there’s a few tasks where we ask you to choose left, choose right, and press you a little bit. And we come out with a measure of sociability. I mean, for me, it’s magic. I mean, I understand the science a little bit underneath. I certainly understand the mathematics, but  it’s like magic. We actually don’t put you in a scenario. That’s anything to do with sociability. And yet, if you look at the stats, the measurements are great. 

Jennifer: He says the company’s tools are better than traditional testing methods, because the games can pull out traits of job applicants that might otherwise be hard to figure out. Like whether they’re innovative thinkers… something most candidates would probably just say yes to if asked in a job interview. 

Safe Hammad: When you ask questions, they can be faked. When I ask a question about, you know, how would you react if you’re in this position? You’re not thinking, oh, how would I react? You’re thinking, oh, what does the person asking me want me to say, that’s going to give me the best chance of getting that job. So without asking questions, by not asking questions, it’s a lot harder to fake and it’s a lot less subjective.

Jennifer: And to him, more data equals more objective hiring decisions. 

Safe Hammad: It’s about seeing more in people before you bring in some of this information which can lead to bias, so long as in the first stage of the process, you’re blind to their name, the school they went to, what degree they got, and you just look at the psychometrics, what is their potential for the role? That’s the first thing we need to answer.

Jennifer: He says games give everyone a fair chance to succeed regardless of their background. Both Pymetrics and Arctic Shores say their tools are built on well-established research and testing.

Safe Hammad: Now, these interactive tasks are very carefully crafted based on the scientific literature based on a lot of research and a lot of sweat that’s gone into making these, we actually can capture thousands of data points and, and a lot of those are very finely nuanced. And by using all that data, we’re able to really try and hone in on some of the behaviors that will match you to those roles. 

Jennifer: And he says explainability of results is key in building trust in these new technologies. 

Safe Hammad: So we do use AI, we do use machine learning to try and inform us, to help us build that model. But the final model itself is more akin to what you would find in traditional psychometrics. It means that when it comes to our results, we can actually give you the results. We can tell you where you lie on the low to medium to high scale for creativity, for resilience, for learning agility. And we will stand by that.

Jennifer: And he says his company is also closely monitoring whether the use of AI games leads to better hiring decisions. 

Safe Hammad: We’ll always be looking at the results, you know, has the outcome actually reflected what we said would happen. Are you getting better hires? Are they actually fulfilling your requirements? And, and better doesn’t necessarily mean, Hey, I’m more productive. Better can mean that they’re more likely to stay in the role for a year.  

Jennifer: But not everyone is feeling so optimistic. Hilke Schellmann is a professor of journalism at NYU who covers the use of AI in hiring… She’s been playing a whole lot of these games… as well as talking to some critics. She’s here to help put it all into context.

Hilke: Yeah Jen.. AI-based video games are a recent phenomenon of the last decade or so. It’s a way for job applicants to get tested for a job in a more “fun” way…(well…  that’s at least how vendors and employers pitch it), since they are playing “video games” instead of answering lots of questions in a personality test for example. 

Jennifer: So, what types of games are you playing? 

Hilke: So… I played a lot of these games. For one company, I had to essentially play a game of Tetris and put different blocks in the right order. I also had to solve basic mathematical problems and test my language skills by finding grammar and spelling mistakes in an email.

Jennifer: But these aren’t like the ones you might play at home. There’s no sound in these games… and they look more like games from the 1980s or early ’90s… and something that surprised me was that all the biggest companies in this space seem to really be into games about putting air in balloons… What’s that about?

Hilke: Well.. The balloon game apparently measures your appetite for risk. When I played the game… I figured out pretty early on that yellow and red balloons pop after fewer pumps than the blue balloons, so I was able to push my luck with blue balloons and bank more money. But while I was playing I was also wondering if this game really measures my risk taking preferences in real life or if this only measures my appetite for risk in a video game. 

Jennifer: Yeah, I could be a real daredevil when playing a game, but totally risk averse in real life. 

Hilke: Exactly…  And..  that’s one concern about AI-games. Another one is whether these games are actually relevant to a given job. 
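An aside for the curious: the balloon game Hilke describes closely resembles the Balloon Analogue Risk Task used in behavioral research. Here’s a rough sketch of how such a game can turn play into a risk-appetite score; the pop thresholds, payouts, and strategies below are made up for illustration and aren’t any vendor’s actual values.

```python
# Rough, hypothetical sketch of a BART-style balloon task: each pump adds
# money to a temporary bank, but if the balloon pops first the round's
# earnings are lost. Average pumps taken is the usual risk-appetite measure.
import random

POP_AFTER = {"blue": 64, "yellow": 16, "red": 8}  # illustrative, not vendor values

def play_round(color, pumps_attempted, pay_per_pump=0.05):
    """Return money banked for one balloon, or 0 if it pops first."""
    pop_at = random.randint(1, POP_AFTER[color])  # hidden pop point
    if pumps_attempted >= pop_at:
        return 0.0                                # popped: round earnings lost
    return pumps_attempted * pay_per_pump         # banked before popping

random.seed(1)
colors = ["blue", "yellow", "red"] * 10
strategy = {"blue": 30, "yellow": 6, "red": 3}    # push your luck more on blue
earnings = sum(play_round(c, strategy[c]) for c in colors)
avg_pumps = sum(strategy[c] for c in colors) / len(colors)
print(f"banked ${earnings:.2f}; average pumps per balloon: {avg_pumps:.1f}")
```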

Jennifer: Ok, so help me understand then why companies are interested in using these games in the first place… 

Hilke: So, Jen… From what I’ve learned, these AI-games are most often used for entry-level positions. So they are really popular with companies who hire recent college graduates. At that point, most job applicants don’t have a ton of work experience under their belts, and personality traits start to play a larger role in finding the right people for the job. And oftentimes… traits like agility or learning capabilities are becoming more important to employers. 

Jennifer: Right… And companies are more likely to need to change up the way they do business now… so it means some skills wind up with a shorter shelf life. 

Hilke: Yeah… so in the past it may have been enough to hire a software developer with Python skills, because that’s what a company needed for years to come. But these days, who knows how long a specific programming language will stay relevant in the workplace. Companies want to hire people who can re-train themselves and are not put off by change. And… these AI-games are supposed to help them find the right people. 

Jennifer: So, that’s the sales pitch from the vendors. But Walmart… one of the biggest employers in the U-S… shared some of its findings about these technologies in a recent episode of an industry podcast called Science 4-Hire. 

David Futrell: There’s no doubt that what we run is the biggest selection and assessment machine that’s ever existed on the planet. So, I mean, we test, everyday, between ten and fifteen thousand people and that’s just entry level hires for the stores. 

Jennifer: That’s David Futrell. He is the senior director of organizational performance at Walmart. 

David Futrell: When this machine learning idea first came out, I was very excited by it because, you know, it seemed to me like it would solve all of the problems that we had with prediction. And so we really got into it and did a lot of work with trying to build predictors using these machine based algorithms. And they work. But whenever it’s all said and done, they don’t really work any better than, you know, doing it the old fashioned way. 

Jennifer: And he told the host of that podcast that Walmart acquired a company that was using a pure games based approach…

David Futrell: And uh we found that it just didn’t work well at all. I won’t mention the company, but it’s not the big company that is in this space. But they were purported to measure some underlying aspects of personality, like your willingness to take risks and so on.

Jennifer: And a concern with these and other AI hiring tools (one that goes beyond whether they work better than what they’re replacing)… is whether they work equally well on different groups of people…

 [sound of mouse clicking] 

including those with disabilities. 

Henry Claypool: I’m just logging in now.

Jennifer: Henry Claypool is a disability policy analyst… and we asked him to play some of these games with us. 

Henry Claypool: Okay, here we go…complete games.

Jennifer: He’s looking at one of the opening screens on a Pymetrics game. It asks players to select whether they want to play a version that’s modified for color blindness, ADHD, or dyslexia… or if they’d rather play a non-modified version. 

Henry Claypool: Seems like that alone would be a legal challenge here.

Jennifer: He thinks it might even violate the Americans with Disabilities Act…. or A-D-A.

Henry Claypool: This is a pre-employment disclosure of disability, which could be used to discriminate against you. And so you’re putting the applicant on the horns of a dilemma, right — do I choose to disclose and seek an accommodation or do I just push through? The thing is you’re not allowed to ask an applicant about their disability before you make a job offer.

Jennifer: Claypool himself has a spinal cord injury from a skiing accident during his college years… It left him without the use of his legs and an arm.

But this hasn’t held back his career… He worked in the Obama administration and he helps companies with their disability policies. 

Henry Claypool: The fear is that if I click one of these, I’ll disclose something that will disqualify me for the job, and if I don’t click on say dyslexia or whatever it is that I’ll be at a disadvantage to other people that read and process information more quickly. Therefore I’m going to fail either way or either way now my anxiety is heightened because I know I’m probably at a disadvantage.

Jennifer: In other words, he’s afraid if he discloses a disability this soon in the process… it might prevent him from getting an interview.

Henry Claypool: Ooops… am I… oh I’m pumping by the mouse.

Jennifer: Pymetrics’ suite of games starts with one where people earn money by pumping up balloons… and they have to bank it before a balloon pops.  

Henry Claypool: Okay. Now carpal tunnel is setting in..

Jennifer: A few minutes into the game it’s getting harder for him. 

Henry Claypool:  I really hate that game. I just, I don’t see any logic in there at all. Knowing that I’m being tested by something that doesn’t want me to understand what it’s testing for makes me try to think through what it’s anticipating.  

Jennifer: In other words, he has a dialogue going in his head trying to figure out what the system might want from him. And that distracts him from playing… so much so that he’s afraid he might not be doing well. 

And the idea that he and his peers have to play these games to get a job… doesn’t sit right with him. 

He believes it’ll be harder for those with disabilities to get hired if that personal interaction early on in the process is taken away. 

Henry Claypool: It’s really, it’s too bad that we’ve lost that human touch. And is there a way to use the merits of these analytic tools without leaving people feeling so vulnerable? And I feel almost scared and a little bit violated, right? That I’ve been probed in ways that I don’t really understand. And that feels pretty bad.

[music transition]

Alexandra Givens: When you think about the important role of access to employment, right? This is the gateway to opportunity for so many people. It’s a huge part, not only of economic stability, but also personal identity for people.

Jennifer: Alexandra Givens is the CEO of the Center for Democracy and Technology.

Alexandra Givens: And the risk that new tools are being deployed that are throwing up artificial barriers in a space that already has challenges with access is really troubling.

Jennifer: She studies the potential impacts of hiring algorithms on people with disabilities. 

Alexandra Givens: When you’re doing algorithmic analysis, you’re looking for trends and you’re looking for kind of the statistical majority. And by definition, people with disabilities are outliers. So what do you do when an entire system is set up to not account for statistical outliers and not only not to account for them, but to actually end up intentionally excluding them because they don’t look like the statistical median that you’re gathering around. 

Jennifer: She’s the daughter of the late Christopher Reeve, best known for his film roles as Superman… that is, until he was paralyzed from the neck down in a horseback riding accident. 

About 1 in 5 people in the U-S will experience disability at some point in their lives… and like Claypool she believes this type of hiring may exclude them.

Alexandra Givens: You hear people saying, well, this is actually the move toward future equality, right? HR run by humans is inherently flawed. People are going to judge you based on your hairstyle or your skin color, or whether you look like they’re friends or not their friends. And so let’s move to gamified tests, which actually don’t ask what someone’s resume is or doesn’t mean that they have to make good conversation in an interview. We want to see this optimistic story around the use of AI and employees are buying into that without realizing all of the ways in which these tools really can actually entrench discrimination and in a way even worse than human decision-making because they’re doing it at scale and they’re doing it in a way that’s harder to detect than individualized human bias because it’s hidden behind the decision-making of the machine.

Jennifer: We shared a specific hiring test with Givens… where you have to hit the spacebar as fast as you can. She says this game might screen out people with motor impairments — maybe even people who are older.

Alexandra Givens: They’re trying to use this as a proxy, but is that proxy actually a fair predictor of the skills required for the job? And I would say here, the answer is no for a certain percentage of the population. And indeed the way in which they’re choosing to test this is affirmatively going to screen out a bunch of people across the population in a way that’s deeply unfair.

Jennifer: And since job applicants don’t know what’s in these AI-games before they take a test… how do they know if they need to ask for an accommodation?

Also, she says people with disabilities might not want to ask for one anyway… if they’re afraid that could land their application in Pile B… and an employer may never look at Pile B. 

Alexandra Givens: This isn’t just about discrimination against disabled people as a protected class. This is actually a question about the functioning of our society. And I think, pulling back, that is one of the big systemic questions we need to raise here. Increasingly, as we automate these systems and employers push to what’s fastest and most efficient, they’re losing the chance for people to actually show their qualifications and their ability to do the job and the context that they bring when they tell that story. And that is a huge loss. It’s a moral failing. I think it has legal ramifications, but that’s what we need to be scared about when we think about entrenching inequality in the workforce.

[Music transition]

Jennifer: After the break… a regulator in Washington answers why his agency hasn’t given any guidance on these tools.

But first… I’d like to invite you along for EmTech MIT in September. It’s Tech Review’s annual flagship conference and I’ll be there with the rest of the newsroom to help unpack the most relevant issues and technologies of our time. You can learn more at EmTech M-I-T dot-com.

We’ll be right back.

[MIDROLL]

Jennifer: I’m back with our reporting partner on this series, Hilke Schellmann… and as we just heard, access and fairness of these hiring tools for people with disabilities is a big concern…And…Hilke, did you expect to find this when you set out to do your research? 

Hilke: Actually – that surprised me. I kinda expected to see a bias against women and people of color.. because we’ve seen that time and time again.. And it’s widely acknowledged that there’s a failing there… But I didn’t expect that people with disabilities would be at risk too. And all this made me ask another question. Are the algorithms in these games really making fair and unbiased decisions for all candidates?

Jennifer: And… so.. are the decisions fair? 

Hilke: Well, actually no. Early on in my research… I spoke to employment lawyers who deal with a lot of companies planning to do business with AI-hiring vendors. They told me that they’re no strangers to problems with these algorithms… and they shared with me something they haven’t shared publicly before.

Nathaniel Glasser: I think the underlying question was do these tools work? And I think the answer is, um, in some circumstances they do. And in some circumstances they don’t, a lot of it is, it is both vendor slash tool dependent and, also employer dependent and, and how they’re being put to work. And then practically, what’s the tool doing? And to the extent that we see problems and more specifically an adverse impact on a particular group, what are the solutions for addressing those issues? 

Jennifer: Nathaniel Glasser is an employment lawyer in Washington, DC.

Nathaniel Glasser: So monitor, monitor, monitor, and if we see something wrong, let’s make sure that we have a plan of attack to address that. And that might be changing the algorithm in some sense, changing the inputs or if it doesn’t work, just making that decision to say, actually this tool is not right for us. It’s unfortunate that you know, we spent a little bit of money on it, but in the long run, it’s going to cause more problems than it’s worth. And so let’s cut ties now and move forward. And I’ve been involved in that situation before. 

Jennifer: And he recalls a specific incident involving a startup vendor of AI-games. 

Nathaniel Glasser: And unfortunately after multiple rounds in beta prior to going live, the tool demonstrated adverse impact against the female applicants and no matter the tweaks to the inputs and the traits and, and, and the algorithm itself, they couldn’t get confident that it wouldn’t continue to create this adverse impact. And they ultimately had to part ways and they went out to the market and they found something else that worked for them. Now that initial vendor, that was a startup five years ago, has continued to learn and grow and do quite well in the market. And, and I’m very confident, you know, that they learned from their mistakes and in working with other companies have figured it out.

Hilke: So, unfortunately the two lawyers signed a non-disclosure agreement and we don’t know which companies he’s talking about. 

Jennifer: We only know that the company is still out there… and grew from a startup into an established player. 

Hilke: And that could indicate that they fixed their algorithm or it could mean that no one’s looking…

Jennifer: And that’s something that comes up again and again. There’s no process that decides what AI-hiring tools are fair game… And anyone can bring any tool to market. 

Hilke: Yeah… and the Equal Employment Opportunity Commission is the regulator of hiring and employment in the United States… But they’ve been super quiet. So that’s probably why we’re now seeing individual states and cities starting to try to regulate the industry, but everyone is still kind of waiting for the commission to step in. 

Jennifer: So we reached out to the E-E-O-C and connected with Keith Sonderling… He’s one of the commissioners who leads the agency. 

Keith Sonderling: Well, since the 1960s, when the civil rights laws were enacted, our mission has been the same and that’s to make sure that everyone has an equal opportunity in the workplace… to enter the workplace and to succeed in the workplace. 

Jennifer: Women, immigrants, people of color, and others have often had fewer workplace opportunities because of human bias… and despite its challenges, he believes AI has the potential to make some decisions more fair. 

Keith Sonderling: So, I personally believe in the benefits of artificial intelligence in hiring. I believe that this is a technology that can fundamentally change how both employees and employers view their working relationship from everything of getting the right candidates to apply, to finding the actual right candidates to then when you’re working to making sure you’re in the job that is best suited for your skills or learn about other jobs that may, you may be even better at that you didn’t even know that a computer will help you understand. So there’s unlimited benefits here. Also, it can help diversify the workforce. So I think it is an excellent way to eliminate bias in recruiting and promotion, but also more importantly, it’s going to help employers find the right candidates who will have high level job satisfaction. And for the employees too, they will find the jobs that are right for them. So essentially it’s a wonderful matchmaking service.

Jennifer: But he’s also aware of the risks. He believes bad actors could exclude people like older workers… by doing things like programming a resume parser to reject resumes from people with a certain amount of experience. 

And he says the tools themselves could also discriminate…. unintentionally. 

Keith Sonderling: For instance, if an employer wants to use AI to screen through 500,000 resumes of workers to find people who live nearby. So they’re not late to work, say it’s a transportation company and the buses need to leave on time. So I’m only going to pick people who live in one zip code over from my terminal. And you know, that may exclude a whole protected class of people based on the demographics of that zip code. So the law will say that that person who uses AI for, intentionally, for wrong versus an employer who uses it for the right reasons, but winds up violating the law because they have that disparate impact based on those zip codes are equally held liable. So there’s a lot of potential liability for using AI unchecked.

Jennifer: Unintentional discrimination is called disparate impact… and it’s a key thing to watch in this new age of algorithmic decision making.
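A common screen for disparate impact is the four-fifths rule from the EEOC’s 1978 Uniform Guidelines: if one group’s selection rate falls below 80 percent of the highest group’s rate, that’s generally treated as evidence of adverse impact. Here’s a minimal sketch of that check, with illustrative numbers only:

```python
# Minimal sketch of the "four-fifths rule" screen for disparate impact, a rule
# of thumb from the EEOC's 1978 Uniform Guidelines: flag any group whose
# selection rate falls below 80% of the highest group's rate.
def selection_rates(outcomes):
    """outcomes: {group: (num_selected, num_applicants)}"""
    return {g: sel / total for g, (sel, total) in outcomes.items()}

def adverse_impact(outcomes, threshold=0.8):
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: rate / best < threshold for g, rate in rates.items()}

# Illustrative numbers only: 48 of 100 men pass the game, 24 of 100 women.
results = {"men": (48, 100), "women": (24, 100)}
print(adverse_impact(results))  # {'men': False, 'women': True} -> investigate
```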

But… with these systems… how do you know for sure you’re being assessed differently? When most of the laws and guidelines that steer the agency were established more than 40 years ago, it was much easier for employees to know when and how they were being evaluated. 

Keith Sonderling: Well, that could be potentially the first issue of using AI in the hiring process is that the employee may not even know they’re being subject to tests. They may not even know a program is monitoring their facial expressions as part of the interview. So that is a very difficult type of discrimination to find if you don’t even know you’re being discriminated against, how could you possibly bring a claim for discrimination?

Jennifer: Sonderling says that employers should also think long and hard about using AI tools that are built on the data of their current workforce. 

Keith Sonderling: Is it going to have a disparate impact on different protected classes? And that is the number one thing employers using AI should be looking out for is the ideal employee I’m looking for? Is that just based on my existing workforce, which may be of a certain race, gender, national origin. And am I telling the computer that’s only who I’m looking for? And then when you get 50 resumes and they’re all identical to your workforce, there’s going to be some significant problems there because essentially the data you have fed that algorithm is only looking towards your existing workforce. And that is not going to create a diverse workforce with potentially workers from all different categories who can actually perform the jobs.

Jennifer: Experts we’ve talked to over the course of this reporting believe there’s enough evidence that some of these tools do not work as advertised and potentially harm women, people of color, people with disabilities and other protected groups… and they’ve criticized the agency’s lack of action and guidance. 

The last hearing it held on big data was in 2016… and a whole lot has changed with this technology since then.

And so we asked the commissioner about that.

Keith Sonderling: There has been no guidelines. There’s been nothing specific to the use of artificial intelligence, whether it is resume screening, whether it’s targeting job ads or facial recognition or voice recognition, there has been no new guidelines from the EEOC since the technology has been created.

Jennifer: And we wanted to understand how that fits with the agency’s mission…

Keith Sonderling: Well, my personal belief is that the EEOC is more than just an enforcement agency. Yes, we are a civil law enforcement agency. That’s required by law to bring investigations and to bring federal lawsuits. But part of our mission is to educate employees and employers. And this is an area where I think the EEOC should take the lead within the federal government.

Jennifer: What might be surprising here is that the question of whether these tools work as advertised and pick the best people… isn’t the agency’s concern.

Keith Sonderling: Companies have been using similar assessment tests for a very long time and whether or not those tests are actually accurate and predict success of an employee, you know, that is beyond the scope of our job here at the EEOC. The only thing that the EEOC is concerned with when these tests are being instituted is: is it discriminating against a protected class? That is our purpose. That is our responsibility. And whether or not the tools actually work, and whether or not a computer can figure out, is this employee in this position, in this location going to be an absolute superstar versus, you know, this employee in this location who should be doing these tasks, is that going to make them happy and going to make them productive for a company, that’s beyond the scope of federal EEO law. 

Jennifer: But… if an AI tool passes a disproportionate number of men versus women, the agency can start investigating. And then that question of whether the tool works or not may become an important part of the investigation. 

Keith Sonderling: It becomes very relevant when the results of the test have a disparate impact on a certain protected characteristic. So say if a test, a cognitive test, for some reason, excludes females, as an example, you know, then the employer would have to show if they want to move forward with that test and validate that test, they would then need to show that there is a business need and is job related that the tests we’re giving is excluding females. And, you know, that is a very difficult burden for employers to prove. And it can be very costly as well.  

Jennifer: He’s contemplating something that’s called a Commissioner’s charge, which is a move that would allow him to force the agency to start an investigation… and he’s asking the public for help. 

Keith Sonderling: So if an individual commissioner believes that discrimination is occurring, any areas of the laws we enforce, whether it’s disability discrimination, sex discrimination, or here, AI discrimination, we can file a charge against the company ourselves and initiate an investigation. Now to do that, we need very credible evidence, and we need people to let us know this is happening, whether it’s a competitor in an industry, or whether it’s an individual employee who is afraid to come forward in their own name, but may be willing to allow a commissioner to go. Many commissioner charges have begun historically off watching the news, reading a newspaper. So there’s a lot of ways that the EEOC can get involved here. And that’s something I’m very interested in doing. 

Jennifer: In the meantime, individual states and cities are starting to try to regulate the use of AI in hiring on their own… but a patchwork of laws that differ state by state can make it a whole lot harder for everyone to navigate an emerging field.  

[music]

Jennifer: Next episode… what happens when AI interviews AI? We wrap up this series with a look at how people are gaming these systems… From advice on what a successful resume might look like…to classes on YouTube about how to up your odds of getting through A-I gatekeepers. 

Narrator: My aim today is to help you get familiar and comfortable with this gamified assessment. In this game you’re presented with a number of balloons individually that you’re required to pump. Try the balloon game now.

[CREDITS]

Jennifer: This miniseries on hiring was reported by Hilke Schellmann and produced by me, Emma Cillekens, Anthony Green and Karen Hao. We’re edited by Michael Reilly.

Thanks for listening… I’m Jennifer Strong.
