Amid optimism about how artificial intelligence will make the workplace more productive, and fear about its potential to displace workers, another dimension of AI in life and work is becoming apparent: people are beginning to interact with the technology in new, and sometimes unsettling, ways.
HR Dive spoke with Michael Elkon, partner at Fisher Phillips, about what’s on the horizon for compliance and AI in the workplace.
Employees using AI to harass a co-worker
When the internet became commonplace in the mid-to-late ‘90s, employers marveled at how it made the workplace more efficient. At the same time, “people figured out, ‘Oh wait, there’s porn on the internet,’” Elkon said.
Suddenly, explicit images and videos could be not only accessed easily but also shared with or sent to others. That sharing led to Title VII lawsuits alleging sexual harassment.
Elkon noted the recent story of a teenager in Spain who used AI to create deepfake nude images of classmates. It isn’t much of a leap, he said, to imagine employees using the same technology and methods against co-workers.
Eventually, “lurid stories like that … are going to migrate into the workplace,” he said.
Workers becoming overly attached to AI — romantically or otherwise
Recently, a story went viral about a New Jersey man who reportedly fell in love with an AI chatbot and proposed to it, despite already having a wife and a two-year-old child. The scenario is no longer just the plot of a science fiction film: people are already developing seemingly unhealthy attachments to AI tools, and some of those attachments are forming at work, Elkon said.
Elkon spoke on the topic at a SHRM 2025 conference session earlier this summer alongside his wife, Andrea, a clinical psychologist. During the Q&A portion of the session, an HR leader stepped forward to say the issue had already cropped up at her workplace with a former employee.
“You have this whole overlay where you have younger people who are used to AI in their day-to-day lives. You have younger people who are specifically using AI for emotional functions, because it gets around the loneliness that a lot of people feel. And then you dump AI tools into the workplace,” Elkon said. “It then becomes something that you sort of have to address.”
Workers requesting AI as a reasonable accommodation
As workers grow more familiar with AI, and potentially dependent on it, both for accomplishing tasks and for emotional support, employers are likely to see a growing set of AI accommodation requests, Elkon noted.
AI is poised to be a game-changer for accessibility, making it likely that employers will contend with a variety of requests to acquire or use tools that better enable workers to do their jobs. While those requests are familiar territory, a worker with anxiety or autism could also request constant access to a chatbot for emotional or social support, something most employers likely haven’t dealt with before.
While these concerns may present new situations for employers, going back to basics can help them begin to develop policies and make decisions. Whatever the request, for example, the interactive process is still the gold standard for settling on accommodations: “Courts have frameworks already,” Elkon said. AI is “just a new piece of technology that can potentially get involved in a reasonable accommodation analysis.”
And for more novel circumstances, like AI situationships in the workplace? There’s no real playbook quite yet.
“I think we’re sort of on the cusp of a lot of things,” Elkon said. “Like any other technological advance, people are going to have to figure out, you know, what are the right and wrong ways to use it.”