
More Complaints, Worse Performance as AI Monitors Employees

Credit: Pixabay/CC0 Public Domain

Organizations that use artificial intelligence to monitor employee behavior and productivity can expect employees to complain more, be less productive, and be more likely to leave their jobs — unless the technology is presented as helping them grow, according to research from Cornell.

The research shows that surveillance tools, which are increasingly used to track and analyze physical activity, facial expressions, tone of voice, and verbal and written communication, cause people to feel a greater loss of autonomy than human surveillance does.

Companies and other organizations that use these rapidly evolving technologies to assess whether people are slacking off, treating customers well, or potentially committing fraud or other wrongdoing should consider their unintended consequences, which can provoke resistance and hurt productivity, the researchers say. They also suggest that buy-in can be gained if the people being monitored feel the tools are there to help them, not to assess their performance—assessments they fear will lack context and accuracy.

“When AI and other advanced technologies are implemented for development purposes, people like the idea of learning from them and improving their performance,” said Emily Zitek, assistant professor of organizational behavior at the ILR School. “The problem comes when they feel like the assessment is automatic, directly from the data, and they can’t contextualize it in any way.”

Zitek is a co-author of the paper “Algorithmic versus human surveillance leads to lower perceptions of autonomy and increased resistance,” published June 6 in Communications Psychology. Rachel Schlund, Ph.D. ’24, is the first author.

Algorithmic surveillance has already sparked backlash. In 2020, an investment bank quickly pulled a pilot program that tested software to monitor employee activity, including notifying employees if they took too many breaks. Schools’ monitoring of remote exams during the pandemic sparked protests and lawsuits, with students saying they feared their every move would be misinterpreted as cheating.

On the other hand, people may perceive algorithms as more efficient and objective. And studies have shown that people are more tolerant of behavior-tracking systems like smart badges or smartwatches when they provide feedback directly rather than through someone who might make negative judgments about the data.

In four experiments involving nearly 1,200 participants in total, Schlund and Zitek investigated whether it matters if surveillance is conducted by humans or by AI and related technologies, and whether the context in which it is used, to evaluate performance or to support development, influences how it is perceived.

In the first study, participants who were asked to recall and describe situations in which they were monitored and evaluated by one type of surveillance or the other reported feeling less autonomy under AI and were more likely to engage in “resistance behaviors.”

Then, simulating real-world surveillance, a pair of studies asked participants to work in groups to come up with ideas for an amusement park, and then individually to generate ideas for one segment of the park. They were told that their work would be monitored by either a research assistant or an artificial intelligence, the latter represented in Zoom videoconferencing as “AI Technology Feed.”

After a few minutes, either the human assistant or the “AI” delivered messages that participants didn’t have enough ideas and should try harder. In surveys conducted after one study, more than 30% of participants criticized AI supervision, compared with about 7% who criticized human monitoring.

“The AI surveillance made the situation more stressful and less creative,” one participant wrote.

In addition to the complaints and criticism, the researchers found that people who believed they were being monitored by AI generated fewer ideas, indicating worse performance.

“Although participants received the same message in both conditions that they needed to generate more ideas, they perceived it differently when it came from AI than from the research assistant,” Zitek said. “The AI surveillance caused them to perform worse in multiple studies.”

In the fourth study, participants who imagined themselves working in a call center were told that humans or AI would analyze a sample of their calls. For some, the analysis would be used to evaluate their performance; for others, it would be used to provide developmental feedback. In the developmental scenario, participants no longer perceived algorithmic surveillance as a greater infringement on their autonomy than human surveillance, and they reported no greater intention to quit.

The results suggest an opportunity for organizations to implement algorithmic surveillance in a way that could gain the trust of those being supervised rather than provoke resistance.

“Organizations trying to implement this kind of monitoring need to recognize the pros and cons,” Zitek said. “They should do what they can to make it more developmental or make sure people can add context. If people feel like they don’t have autonomy, they’re not going to be happy.”

More information:
Rachel Schlund et al, Algorithmic versus human surveillance leads to lower perceptions of autonomy and increased resistance, Communications Psychology (2024). DOI: 10.1038/s44271-024-00102-8

Provided by Cornell University

Citation: Study: More complaints, worse performance when AI monitors employees (2024, July 2), retrieved July 2, 2024 from https://phys.org/news/2024-07-complaints-worse-ai-employees.html

This document is subject to copyright. Apart from any fair use for private study or research, no part may be reproduced without written permission. The content is provided for informational purposes only.