Only 22% of Indians use GenAI for professional purposes

According to an Elsevier report titled ‘Insights 2024: Attitudes toward AI’, only 22% of Indians are using generative AI for professional purposes in healthcare and research, while 76% plan to use it in the next two to five years.

Respondents from the Asia-Pacific region, including India, were more likely to have used AI for work-related purposes (34%) than those from North America (30%).

The study, which surveyed almost 3,000 people worldwide working in the research and healthcare sectors, also found that 95% of respondents believe generative AI is an excellent source of knowledge, and 87% believe that using generative AI tools can improve the quality of their work.

Recently, CP Gurnani, former chief of Tech Mahindra, said at the MachineCon GCC Summit: “I can only say that there is no human being, including you, who is not working on generative AI.”

Can generative artificial intelligence increase productivity?

Most studies conducted worldwide suggest that generative AI can improve employee productivity by freeing up time for workers to focus on other areas of their jobs.

A new study from Capgemini, also published today, projects that generative AI will play a key role in the software industry, supporting more than 25% of software design, development, and testing work over the next two years.

However, this optimism contrasts with Genpact’s recently published report, The GenAI Countdown, in which 52% of respondents expressed concern that too strong a focus on productivity could negatively affect the employee experience.

The challenges continue

In India, many healthcare companies are actively leveraging the potential of generative AI. From startups like Practo and Healthify to hospital chains like Apollo and Narayana, all are exploring this segment.

But in India, as globally, the primary concern about the technology is disinformation. According to the survey, about 94% of respondents worry that AI will be used for disinformation, 86% worry about critical errors or blunders, and 81% worry that AI will weaken critical thinking.

As AI is expected to rapidly increase the volume of scientific and medical studies, there is a clear need for transparency and credible sources to build trust in AI tools: 71% of respondents expect AI tool results to be based on high-quality, trusted sources.

Mira Murati, CTO of OpenAI, acknowledged similar risks in a recent interview, noting that these concerns extend to bias in LLM-based products.