
Urgent need for AI regulation

ISLAMABAD:

After 9/11, the U.S. realized that its failure to develop common data standards and templates had led to top-down systems, limited information sharing, and poor decision-making. Fast forward to 2020: the U.S. had not only solved its data fusion problems but had also built AI tools that can predict events before they happen. Enter the project code-named “Raven Sentry,” an open-source intelligence (OSINT) tool that predicted the ISIS attack on Jalalabad, Afghanistan, a month before it took place in August 2020.

The tool mined data from satellites, social media, messaging apps, and news reports—sources whose collection would be subject to numerous U.S. laws, including the Electronic Communications Privacy Act (ECPA), the Wiretap Act, the Computer Fraud and Abuse Act (CFAA), and the Digital Millennium Copyright Act (DMCA). In fact, an artificial brain that combs social media to make predictions would likely run into trouble in U.S. courts, as Facebook v. Power Ventures (2016) showed.

The Soviets had a similar intelligence program known as RYaN in the 1980s, designed to predict the outbreak of a likely nuclear war six months in advance. It was fed intelligence on the locations of U.S. nuclear warheads in a geographic information system (GIS), visa data for U.S. personnel, activities at U.S. embassies, military exercises, and even soldiers’ leave policies. Unlike today’s AI systems, RYaN was not automated, and the data had to be entered manually by mathematicians and analysts.

Today, AI tools are not only used to analyze data and make better decisions; they are also used to predict and preempt adversary actions – a whole new dimension of warfare and intelligence.

This evolution underscores the urgent need to regulate AI weapons and AI systems in general, as they increasingly undermine our privacy and representative democracies. In January 2024, voters in the U.S. received robocalls impersonating President Joe Biden’s voice—a disturbing echo of the numerous fake news reports seen during the Pakistani election. More recently, Elon Musk posted a deepfake video of Vice President Kamala Harris without disclosing that it was generated by AI, reigniting debate about how effectively social media companies can self-regulate. In response, Governor Gavin Newsom has begun lobbying for stricter regulation of AI-generated content, with the support of like-minded lawmakers such as Senator Amy Klobuchar.

The need for an international treaty on the civilian and military uses of AI is clear. In June 2024, the UN General Assembly passed a Beijing-backed resolution that aims to ensure that AI is “safe, secure and trustworthy,” respects human rights, promotes digital inclusion, and supports sustainable development. But there is still no resolution on the military dimension of AI or lethal autonomous weapons. While the United States supported China’s resolution on AI, it has also introduced a new policy to monitor and restrict American investment in AI and computer chips in China—a move that some see as too little, too late. Chinese companies are already far ahead of the pack in AI development; TikTok owner ByteDance, for example, recently introduced the Doubao large language model, which costs 99.8% less than OpenAI’s GPT-4 model.

As these events show, AI advances are part of a larger Sino-American geopolitical race. When Beijing announced its goal of becoming the world leader in AI by 2030, the U.S. Defense Advanced Research Projects Agency (DARPA) responded by pledging $2 billion for AI development. Now, OpenAI has begun restricting access to its tools and APIs in the Chinese market, but local players have quickly stepped in to fill the gap.

Pakistan’s AI policy, however, lags behind. It lacks a detailed regulatory framework of the kind found in the EU AI Act or Singapore’s data protection laws, and it needs comprehensive regulation to address ethics, data privacy, and liability. The policy says nothing about the harvesting of data from Pakistani sites to train AI models, and Pakistan’s Prevention of Electronic Crimes Act (PECA), 2016, applies only if all stakeholders are residents of Pakistan. If a Facebook app mines our data to train its AI model, there is currently nothing we can do under the existing legal framework. Given that most AI models are deployed in the cloud, Pakistan’s AI policy will remain incomplete unless it is supported by a solid cyber and internet governance policy — not to mention the need to sign international agreements such as the Budapest Convention and the new Cybercrime Convention (2022).

In addition, policies should mandate disclosure of AI deployments, whether in marketing, social media, e-commerce, or political campaigns. The regulatory landscape should balance innovation with regulation while ensuring privacy, ethics, and transparency. Pakistan could learn from China’s approach to regulating generative AI. China’s 2022 generative AI policy includes strict data quality requirements for training models and clear guidelines for AI-generated content to control disinformation. Without similarly specific fair use policies and supporting regulations, Pakistan risks becoming a breeding ground for disinformation and a testing ground for foreign AI models—putting the privacy of its 225 million citizens at risk.

THE WRITER IS A CAMBRIDGE GRADUATE AND WORKS AS A STRATEGIC CONSULTANT