Generative artificial intelligence use is pervasive, and human resources professionals can’t afford to ignore it, attorneys from law firm Cozen O’Connor said during a virtual conference Wednesday.
“Putting your head in the sand is not really the approach here,” said Janice Sued Agresti, an associate at the firm. It’s crucial to train employees and consider any applicable legal obligations, but it’s also prudent to write generative AI policies, Sued Agresti and others said.
Existing policies may address some concerns that could come into play, such as confidentiality and conduct. But policies that address the technology specifically can help employers mitigate other concerns like bias. To that end, HR professionals should ensure several items are included in an employer’s generative AI policy, the attorneys said.
1. Make humans responsible for outcomes
Employers should ensure that human users are ultimately responsible for their work product, Sued Agresti told attendees.
In fact, when writing a generative AI policy, "I would include that very sentence," she said: "You are ultimately responsible as a human user for the content or product that you create using generative AI," or AI in general, she added.
2. Define who may use generative AI
HR can help employers decide which employees are permitted to use generative AI at work, and that information should be included in a policy, Sued Agresti said.
Some employers will decide that no employees may use the technology, and they’ll want to issue a blanket prohibition. According to Sued Agresti, that could say, “While we’re excited [about the] developing technology and what that could mean for our industry, we don’t believe it’s there yet and so we are not allowing the usage of generative AI in the workplace to create workplace content.”
Alternatively, some employers may opt to limit generative AI use to certain groups of employees. Perhaps certain experts can easily evaluate AI-created content, said Erin Bolan Hines, a member of the firm. There are many ways to think about who should use the technology, she added, whether at certain locations (perhaps because of state or local laws) or in specific fields.
3. Require prior approval
Employers should consider whether employees must obtain approval before using generative AI at work, Sued Agresti said.
This could mean certain individuals must obtain consent from a manager, for example, adding an extra layer of protection, she said.
4. Set limits on tasks
Employers should make clear whether certain tasks are off-limits for generative AI, Sued Agresti recommended. A university, for example, apologized earlier this year for using the tech to issue a statement about a mass shooting at another school. "That obviously seems very heartless and a bit callous," she said.
It's important to define those limits in the employment context, too, Sued Agresti said. "One thing that immediately comes to mind is termination letters." It would be easy to use generative AI to complete that task, but employers should consider how that would look to a jury, should the firing lead to litigation. "That certainly, from an optics standpoint, does not look good," she said.
Similarly, employers in the customer service industry may deem responses to customer complaints off-limits for generative AI, she suggested.
5. Require reviews
Employers may want to require that individuals using generative AI monitor its results consistently, the attorneys suggested.
This could be required for those working in HR, for example, Sued Agresti said, to ensure that the technology isn’t introducing bias into any processes.
6. Mandate reporting
Employers should include in their generative AI policies an obligation to report any discriminatory content the technology creates, or that a user discovers in the tool, Sued Agresti said.
7. Provide a point of contact
Finally, employers should provide a generative AI point of contact for employees, Sued Agresti recommended. It’s important to tell individuals who they should reach out to if they have questions or uncover a problem, she said.
This individual, or someone else, also should be tasked with regularly monitoring relevant tech and legal updates because policies can become stale quickly, Sued Agresti said. Appointing a point of contact can "ensure that someone is keeping a constant pulse as to what's going on with the technology."
Correction: An earlier version of this story incorrectly identified the event speakers. The story has been updated with correct attribution.