Leaders from four federal agencies issued a joint statement last week on the use of artificial intelligence and automated systems, outlining how existing U.S. laws and regulations apply to these technologies.
The document, authored by heads of the Equal Employment Opportunity Commission, Consumer Financial Protection Bureau, Department of Justice and Federal Trade Commission, follows months of growth for generative AI platforms, including OpenAI’s ChatGPT, that are gradually making their way into the workplace — at times, right under employers’ noses.
“These automated systems are often advertised as providing insights and breakthroughs, increasing efficiencies and cost-savings, and modernizing existing practices,” regulators said. “Although many of these tools offer the promise of advancement, their use also has the potential to perpetuate unlawful bias, automate unlawful discrimination, and produce other harmful outcomes.”
Regulators ramp up AI talk
The agencies’ statement discussed the implications of AI in many areas, including workplaces. EEOC linked to a 2022 technical assistance document in which the commission explained how algorithmic decision-making tools, including AI-assisted tools, may violate the Americans with Disabilities Act.
EEOC issued that document in conjunction with the DOJ. During a press call last year, DOJ Assistant Attorney General Kristen Clarke said the two agencies were “sounding the alarm” about employers’ reliance on AI, machine learning and similar processes. In last week’s statement, DOJ reiterated that its enforcement of constitutional provisions and federal statutes prohibiting discrimination extends to the workplace.
The agencies outlined three potential sources of discrimination within automated systems:
- Data and datasets.
- Model opacity and access.
- Design and use.
The statement serves as further confirmation of a “united federal intent” between the agencies to act cohesively with respect to the growth of AI tools, Niloy Ray, shareholder at management-side firm Littler Mendelson, said in an interview.
“It’s a recognition of the fact that what we’re going through right now in America and across the world is a leveling up of the availability, the use and the ubiquity of the technology,” Ray said of the agencies’ statement.
In addition to the 2022 EEOC-DOJ technical assistance document, the statement has its roots in several other actions taken by the Biden administration, Eric Reicin, president and CEO of BBB National Programs, told HR Dive in an email. Those include the “Blueprint for an AI Bill of Rights” issued last year by the White House Office of Science and Technology Policy, as well as a memo from National Labor Relations Board General Counsel Jennifer Abruzzo regarding the impact of AI and other technologies on workers’ rights.
Implications for employers
Ray said regulators are focusing on two particular AI use cases: predictive AI, which employers have increasingly used over the past three to five years for tasks such as predicting outcomes and analyzing databases via pattern analysis, and generative AI, the newer category of tools, including ChatGPT, that involves the creation of new content.
Last week’s statement does not represent the creation of new law, but rather a commitment by the agencies to enforce existing laws with additional scrutiny on techniques and technologies that rely on AI, Ray added. It also may be a sign that momentum is building for a federal standard for regulatory compliance and liability with respect to AI systems.
The latter point could be promising for employers, Ray said, given that a number of states are drafting laws that would govern the use of AI in the absence of a broader federal standard.
California, for example, is considering Assembly Bill No. 331, which would require deployers of automated decision-making tools to perform impact assessments and to notify people subject to the tools’ decisions that such tools are in use, among other provisions.
Perhaps the most notable law directly regulating employer use of AI is New York City’s Local Law 144. The law requires employers to audit automated employment decision tools annually for bias and to notify candidates and employees no less than 10 business days before a tool is used. City regulators announced last month that enforcement of the law would begin July 5.
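For a sense of what such an audit measures: the rules implementing Local Law 144 center on selection rates and impact ratios, which compare how often candidates in each demographic category are selected by a tool. Below is a minimal, illustrative sketch of that arithmetic in Python; the data format and function name are hypothetical and not drawn from the law or any vendor’s tooling.

```python
from collections import defaultdict

def impact_ratios(outcomes):
    """Selection rate and impact ratio per demographic category.

    `outcomes` is a list of (category, selected) pairs, where
    `selected` is True when the tool advanced the candidate.
    Each category's selection rate is divided by the highest
    category rate; ratios well below 1.0 signal potential
    adverse impact worth deeper review.
    """
    totals, hits = defaultdict(int), defaultdict(int)
    for category, selected in outcomes:
        totals[category] += 1
        hits[category] += selected
    rates = {c: hits[c] / totals[c] for c in totals}
    top = max(rates.values())
    return {c: (r, r / top) for c, r in rates.items()}

# Hypothetical example: 3 of 10 "F" candidates and 5 of 10 "M"
# candidates advanced -> "F" impact ratio of 0.6.
sample = [("F", True)] * 3 + [("F", False)] * 7 \
       + [("M", True)] * 5 + [("M", False)] * 5
for c, (rate, ratio) in impact_ratios(sample).items():
    print(f"{c}: selection rate {rate:.2f}, impact ratio {ratio:.2f}")
```

An impact ratio below 0.8, as in this example, would trip the EEOC’s longstanding four-fifths rule of thumb for adverse impact, one common benchmark in such audits.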
There are considerations under international law, too, because AI raises jurisdictional questions beyond the U.S.; Europe’s General Data Protection Regulation, or GDPR, limits the kinds of information that organizations may collect, interpret and analyze, Ray said.
What HR can do now
At the same time, employers “have to be careful not to let a matrix of regulations stifle or choke off innovation, because this innovation is a leveling up of our process across industries,” Ray added. Reicin said that employers can employ AI and machine learning to harness the power of data and make efficient, sound decisions in areas like recruiting and talent sourcing.
But employers that either already use AI or are considering doing so must understand why and how they are using it, Reicin said. If the tool is used to make employment decisions, he added, employers also need to consider how much input the tool has in a given decision; ensure the tool is audited for appropriate accommodation procedures and candidate notice provisions; and determine the extent to which humans oversee the tool.
HR teams also may consider what privacy controls are present, whether the tool takes into consideration relevant state and local laws, “and frankly, does it work?” Reicin said. Employers, he continued, should conduct vendor analysis and ensure their vendors engage in algorithm de-biasing efforts.
“Employers are on the hook, whether they use a vendor or not, to ensure nondiscrimination under relevant law,” Reicin said. “But these are complicated and evolving technologies, so a one-size-fits-all approach might not work for everybody.”
Working with vendors is a collaborative discussion, Ray said, one that should not involve pushing the burden of due diligence around AI tools onto one side of the equation.
“Particularly in this AI sector, it is so brand new, and these uses are being developed on a daily basis, that it is equally hard for vendors to know all the ways that their tools will be used in a given organization,” he continued. “It’s a collaborative thing rather than a list of questions to ask.”
Executives and top leadership may not know how and where AI is being deployed, Ray said, and that lack of knowledge can make it harder for employers to be adaptive and responsive to the various issues AI tech entails. Organizations, he added, need to build a culture of awareness so that AI can be adopted in a way that keeps them competitive with their peers.
“These are sophisticated tools that need to be adopted in a sophisticated manner,” Ray said. “Otherwise, someone else is gaining the competitive edge.”