
Data privacy tops professionals’ GenAI concerns in 2024, up 50 percentage points

A new report from Deloitte shows that concerns about data privacy in generative artificial intelligence have surged. While only 22% of professionals ranked data privacy among their top three concerns last year, that figure jumped to 72% this year — a 50-percentage-point increase.

The next-highest ethical concerns about GenAI were transparency and data provenance, which 47% and 40% of professionals, respectively, ranked in their top three this year. Meanwhile, only 16% expressed concern about job displacement.

Employees are increasingly questioning how AI technologies work, especially where sensitive data is concerned. A September study by HackerOne found that nearly half of security professionals view AI as risky, with many regarding leaked training data as a threat.

Similarly, 78% of business leaders ranked “safety and security” as one of their top three ethical technology principles, a 37% increase over 2023, further underscoring how prominent security concerns have become.

The survey results come from Deloitte’s 2024 “State of Ethics and Trust in Technology” report, which surveyed over 1,800 business and technology professionals around the world about the ethical principles they apply to technology, particularly GenAI.

High-profile AI security incidents are likely drawing more attention

Just over half of respondents to this year’s and last year’s reports said cognitive technologies such as artificial intelligence and GenAI pose the greatest ethical risks compared to other emerging technologies such as virtual reality, quantum computing, autonomous vehicles and robotics.

This heightened concern may stem from broader awareness of the importance of data security following well-publicized incidents, such as the OpenAI bug in ChatGPT that exposed the personal information of approximately 1.2% of ChatGPT Plus subscribers, including names, email addresses, and partial payment details.

Trust in the chatbot has certainly been shaken by the news that hackers broke into an online forum used by OpenAI employees and stole confidential information about the company’s artificial intelligence systems.

SEE: Artificial intelligence ethics policy

“The widespread availability and adoption of GenAI may have increased respondents’ familiarity with and confidence in the technology, generating optimism about its positive potential,” Beena Ammanath, executive director of the Global Deloitte AI Institute and Trustworthy AI leader, said in a press release.

“The persistent cautionary sentiment around evident risks underscores the need for specific, well-developed ethical frameworks that enable positive impact.”

AI regulations are impacting the way organizations operate around the world

Unsurprisingly, more employees are using GenAI at work than last year: the percentage of professionals who report using it internally rose by 20% between Deloitte’s two annual reports.

A whopping 94% said their companies had embedded GenAI into their processes in some way. However, most indicated it was still in pilot phases or limited in use; only 12% said it was in widespread use. This aligns with recent Gartner research, which found that most GenAI projects do not progress beyond the proof-of-concept stage.

SEE: IBM: Enterprise adoption of artificial intelligence is rising, but barriers limit its use

Regardless of the extent of its use, decision-makers want to be sure that their use of artificial intelligence will not get them into trouble, particularly when it comes to legislation. The top-rated reason for having ethical technology policies and guidelines was regulatory compliance, cited by 34% of respondents, while regulatory penalties were among the three most frequently cited concerns about failing to comply with such standards.

The EU Artificial Intelligence Act entered into force on August 1 and imposes strict requirements on high-risk AI systems to ensure safety, transparency, and ethical use. Depending on the severity of the violation, non-compliance can trigger financial penalties ranging from €7.5 million ($8.1 million) or 1.5% of global turnover up to €35 million ($38 million) or 7% of turnover.
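To illustrate how such tiered penalties are typically computed, here is a minimal Python sketch. It assumes the Act’s general rule that the applicable cap for a tier is the higher of the fixed amount and the turnover percentage; the tier labels are illustrative, and the figures are the ones cited above — this is a sketch, not legal guidance.

```python
# Illustrative sketch of the EU AI Act's tiered fine structure.
# Assumption: for a given tier, the maximum fine is the HIGHER of a
# fixed cap and a share of worldwide annual turnover.
# Tier names are hypothetical labels; figures match those cited above.

TIERS = {
    "most_serious_violations": (35_000_000, 0.07),  # €35M or 7% of turnover
    "lesser_violations": (7_500_000, 0.015),        # €7.5M or 1.5% of turnover
}

def max_fine(tier: str, global_turnover_eur: float) -> float:
    """Return the maximum possible fine for a given violation tier."""
    fixed_cap, turnover_share = TIERS[tier]
    return max(fixed_cap, turnover_share * global_turnover_eur)

# Example: a company with €2B global turnover facing the top tier.
print(f"€{max_fine('most_serious_violations', 2_000_000_000):,.0f}")
# -> €140,000,000 (7% of turnover exceeds the €35M fixed cap)
```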

More than a hundred companies, including Amazon, Google, Microsoft, and OpenAI, have already signed the EU AI Pact, volunteering to begin implementing the Act’s requirements ahead of the legal deadlines. Doing so both signals their commitment to deploying AI responsibly and helps them head off future legal challenges.

Similarly, in October 2023 the United States issued an Executive Order on artificial intelligence, providing broad guidance on maintaining safety, civil rights, and privacy within government agencies while promoting AI innovation and competition nationwide. Although it is not a law, many companies operating in the U.S. may adjust their policies in response to evolving federal oversight and public expectations around AI safety.

WATCH: G7 countries establish voluntary code of conduct on artificial intelligence

The EU’s Artificial Intelligence Act has clearly had an impact in Europe: 34% of European respondents said their organizations had changed their use of AI over the past year in response. But its reach extends further, as 26% of South Asian respondents and 16% of North and South American respondents also made changes because of the Act’s introduction.

Additionally, 20% of U.S.-based respondents said their organizations had made changes in response to the Executive Order, as did a quarter of South Asian respondents, 21% of those in South America, and 12% in Europe.

“Cognitive technologies such as artificial intelligence are recognized to have both the greatest potential to benefit society and the greatest risk of abuse,” the report’s authors wrote.

“Accelerated adoption of GenAI may outpace an organization’s ability to govern the technology. Companies should prioritize both implementing ethical standards for GenAI and meaningfully selecting the use cases to which GenAI tools are applied.”