Artificial intelligence tools offer many potential benefits, but keeping up with AI’s rapidly evolving developments can seem like “Everything Everywhere All at Once,” said Victoria Lipnic, head of the Human Capital Group at Resolution Economics, referring to the Oscar-nominated film as she summarized the challenges employers face.
Lipnic, a former chair of the U.S. Equal Employment Opportunity Commission under Presidents Barack Obama and Donald Trump, moderated a Feb. 28 panel on AI at the Society for Human Resource Management’s Employment Law and Compliance Conference in Washington, D.C.
In a nutshell, AI tools offer employers “more, faster and hopefully better,” Lipnic said. The tools can evaluate large databases to determine a work outcome, such as performance, turnover, absenteeism, injury reduction or sales, explained Eric Dunleavy, an industrial/organizational psychologist and director of the Employment & Litigation Services Division at DCI Consulting.
AI technology can also be designed to achieve diversity and inclusion (D&I) objectives, Dunleavy said. "When the tools are developed and monitored correctly, they can work," he stressed.
Therein lies the challenge. In the employment context, one of the major issues is that using AI tools may result in bias against certain demographic groups, management attorney Savanna Shuntich, of Fortney Scott in Washington, D.C., pointed out.
“Algorithmic bias,” as the concept is known, can take a number of forms and can occur even when the employer or the AI tool vendor hasn’t taken any intentionally discriminatory actions.
For example, algorithmic bias could occur if the data used to train an AI tool leads it to pull information that includes criminal records, Shuntich said. Using an individual's criminal history in making employment decisions may result in race, color or national origin discrimination in violation of Title VII of the Civil Rights Act of 1964, according to an EEOC enforcement guidance. In addition, some states have "ban the box" laws that restrict how employers can use a person's criminal history in making an employment decision.
"Machine learning" can also cause bias. This happens when the AI tool learns biased patterns over time, even if it's specifically programmed against doing so, Shuntich explained.
She gave the hypothetical example of an employer that wants to find workers like its top-notch performers and uses its AI tool to evaluate data related to those employees. If the database isn't sufficiently representative (e.g., it comes from a workplace composed predominantly of White men), machine learning may develop a preference for candidates who share those characteristics, even if programmers tell it not to select on the basis of race or gender.
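A minimal sketch can make that proxy effect concrete. The following example is purely illustrative and is not drawn from the panel: it uses synthetic data and a simple scikit-learn classifier, where a facially neutral "proxy" feature (imagine a ZIP-code-derived score) correlates with group membership, and historical "top performer" labels come from a majority-dominated workforce. The model never sees the protected attribute, yet its selection rates still diverge by group.

```python
# Hypothetical demonstration of proxy bias: the protected attribute is
# excluded from training, but bias leaks in through a correlated feature
# and through skewed historical labels. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Protected attribute (never shown to the model): 1 = majority group.
group = rng.binomial(1, 0.7, n)

# A "neutral" feature that happens to correlate with group membership --
# the proxy (e.g., a score derived from ZIP code).
proxy = group + rng.normal(0, 0.5, n)
skill = rng.normal(0, 1, n)

# Historical "top performer" labels from a majority-dominated workforce:
# group membership leaks into the label itself.
label = ((skill + 0.8 * group + rng.normal(0, 0.5, n)) > 1.0).astype(int)

# Train only on facially neutral features -- no group column.
X = np.column_stack([skill, proxy])
model = LogisticRegression().fit(X, label)

# Compare selection rates by group at a fixed score cutoff.
selected = model.predict_proba(X)[:, 1] > 0.5
for g in (0, 1):
    print(f"group {g}: selection rate = {selected[group == g].mean():.2f}")
```

Running the sketch shows a markedly higher selection rate for the majority group, even though race and gender never appear in the training data.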
A third form of bias centers on access issues for applicants and employees with disabilities. One scenario involves using an AI assessment as part of the selection process, which may cause problems for people with visual or auditory impairments, Shuntich pointed out.
The bottom line: Employers should be mindful of the way their AI tools can affect people with disabilities, she said. An EEOC guidance from last year on AI and the Americans with Disabilities Act provides examples. Employers should also remember that disabled individuals have the right to ask for a reasonable accommodation, Shuntich added.
Employers can take steps to guard against bias and potential liability, the panelists emphasized. These start with notice and consent.
Notice should be provided at the outset of the process. It should state what type of AI-enabled tool is being used and provide enough information for the applicant or employee to understand what criteria are being evaluated, a handout explained.
That level of detail also goes to why obtaining consent matters: It's not always clear to an individual what the AI is assessing, Shuntich explained. She pointed to a frequently debated issue: An applicant participating in a video interview may believe they're being evaluated on the content of their responses, but the AI tool may also be picking up how they speak, their body language and their eye contact. In the context of the ADA, if the applicant isn't aware of this, they won't know to ask for a reasonable accommodation, Shuntich said.
Also, AI tools conducting background checks can pull data, such as social media activity, from a much broader range of sources than traditional selection tools, triggering data privacy concerns as well, the panelists explained. An applicant may not know that this information will be evaluated.
At the federal level, there is currently no law on notice and consent, but the issue is on the Biden administration's mind, the panelists said. They suggested employers review the White House Blueprint for an AI Bill of Rights to get an idea of the breadth of the administration's focus.
Lipnic also directed HR professionals to the Institute for Workplace Equality's December 2022 report on the EEO and DEI considerations of AI in the workplace, which she oversaw and to which Dunleavy, Shuntich and numerous other experts contributed. The report walks employers through the fundamentals of current law and provides guardrails for employers to follow throughout their workers' employment cycle, Lipnic said.
Employers should pay attention as well to state and local law developments. Most notable is New York City's law, scheduled to take effect on April 15. The law would require employers to conduct bias audits of automated employment decision tools, including AI tools, and provide notice about their use to employees or job candidates who reside in the city. Illinois and Maryland already have AI notice and/or consent laws. California, New Jersey, Vermont and Washington, D.C., have bills in the pipeline.
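At its core, a bias audit compares how often different groups are selected by a tool. A hedged sketch of that arithmetic appears below; the group labels and outcomes are invented, and the 0.80 threshold reflects the EEOC's conventional "four-fifths rule" of thumb for adverse impact rather than the specific metrics any particular law prescribes.

```python
# Illustrative-only arithmetic behind a bias audit: selection rates and
# impact ratios by group. Data is hypothetical; the 0.80 cutoff follows
# the EEOC's traditional four-fifths rule of thumb.
from collections import Counter

# Hypothetical outcomes of an automated screening tool: (group, selected?)
applicants = [("A", True), ("A", True), ("A", False), ("A", True),
              ("B", True), ("B", False), ("B", False), ("B", False)]

totals = Counter(g for g, _ in applicants)
selected = Counter(g for g, s in applicants if s)
rates = {g: selected[g] / totals[g] for g in totals}

# Impact ratio: each group's rate divided by the highest group's rate.
top = max(rates.values())
for g, r in rates.items():
    flag = "  <-- below 0.80" if r / top < 0.80 else ""
    print(f"group {g}: rate {r:.2f}, impact ratio {r / top:.2f}{flag}")
```

In this toy data set, group B's impact ratio falls well under 0.80, the kind of result that would prompt closer scrutiny of the tool.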
Another key prevention tool involves the employer’s relationship with its AI vendor. This starts with the employer understanding where its organization uses AI and where it should use AI, Dunleavy said. Employers can obtain this understanding by conducting a proactive review of their AI use.
When employers talk to their vendor, they should have the vendor explain how the AI tool will accomplish what the employer wants it to accomplish, how the tool's features are job-related and how the employer will be able to explain that job-relatedness down the road.
Also, employers should make sure to have key issues delineated in a written agreement with the vendor. This includes addressing indemnification, future access to needed data, and the duty to notify the employer if the vendor learns of bias through its own testing or from another tool user.
Remember, “you, as the employer,” will be on the hook for discrimination, Dunleavy said.