
AI Alliance: Canada should prioritize regulation and open source

When more than 50 technology companies, universities and startups from around the world united to form the AI Alliance last December, much of the world was still unaware of the rapid advances in artificial intelligence.

As regulators scrutinize the technology and questions swirl about whether its use will reinforce bias and discrimination, take away people’s jobs or even spell the end of humanity, the industry group set out to analyze those concerns and find practical ways to keep advancing artificial intelligence.

About seven months later, the organization, led by IBM and Meta Platforms Inc., has about 100 members and has established working groups covering everything from artificial intelligence skills to security.

The Canadian Press asked members what measures Canada should prioritize as artificial intelligence evolves.

Greater risk, greater reward

Abhishek Gupta, founder of the Montreal Institute for AI Ethics, considers Canada the “original home of artificial intelligence.”

Some of the technology’s pioneers, including Yoshua Bengio and Geoffrey Hinton, did much of their work domestically. Long before artificial intelligence became popular, Canada was a hotbed for research in this sector.

Gupta, however, worries about the country’s ability to turn AI into profits.

“Unfortunately, we have started to lose our edge in the commercialization process,” he said.

That’s partly due to Canadian talent seeking higher salaries in the U.S. and other countries, where Gupta has heard of engineers earning as much as $1 million a year. U.S. venture capital firms with deeper pockets – and often bolder approaches – can outspend Canadian ones, further crowding out domestic firms, he said.

This pattern continues when investors sell some or all of their shares in a company. Many Canadian founders have chosen to exit by selling to non-Canadian buyers because of how much money acquirers elsewhere are willing to pay.

As an example of the leakage of AI talent from the country, Gupta points to Element AI, a Montreal-based company that created AI solutions for large organizations that was sold to California-based ServiceNow in 2020.

“It’s not great that it hasn’t remained a Canadian company … because the most important thing we want to see is obviously the research being translated into commercial success,” he said.

Jeremy Barnes, former chief technology officer of Element AI and now vice president of AI at ServiceNow, similarly laments that Canada has not been able to capitalize on the advantage it once had.

To turn things around, he believes the country needs to stop being so conservative, and VC firms need to focus less on protecting against losses and more on how to “share the benefits” of startups.

“To be able to win the jackpot, you have to put your chips into the game,” he said.

Canada must look beyond “high-profile companies” and support disruptive companies that are receiving less attention but have high potential, Barnes said.

The right guardrails

When the Alliance was founded, countries were already shaping their regulations regarding artificial intelligence.

U.S. President Joe Biden issued an executive order requiring AI developers to share security test results and other information with the government, and the European Union has implemented stringent compliance requirements.

Manav Gupta, vice president and chief technology officer at IBM Canada, likes the deliberateness of the U.S. and EU policies because they take a multi-layered approach, recognizing that artificial intelligence systems associated with weapons, for example, carry a completely different risk from systems involved in tasks such as processing welfare applications.

He believes these two policies have “led the way” for other countries, providing a benchmark for what AI regulation should look like around the world.

Canada tabled a bill focusing on artificial intelligence in 2022, but it won’t be implemented until at least 2025, so in the meantime the country has opted for a voluntary code of conduct, which IBM and dozens of other companies have signed.

Gupta said any policy the country relies on should have a “well-defined framework” with a multi-layered approach to risk.

“The greater the risk associated with the technology, the higher the risk assessment, and therefore the greater the regulation and the greater the transparency,” he said.

The country should also be careful not to stray too far from the direction of global regulations, said ServiceNow’s Barnes.

“If done wrong, it will create friction that will make it harder for Canadian companies to compete with others, so to some extent Canada’s role cannot be to act alone.”

Focus on open source AI

As artificial intelligence becomes more common, Kevin Chan, global policy campaigns director at Meta, the owner of Facebook and Instagram, recommends that the tech industry adopt an open source model.

Open source models mean that the code underlying an AI system is freely available for anyone to use, modify and extend, thereby increasing access to AI, supporting development and research, and even ensuring transparency of the technology.

“That’s actually how innovation happens,” Chan said of the open source philosophy.

“We want to make sure people can choose to use open models so we can innovate faster and democratize this technology to more people.”

Open source models have their drawbacks, however – people can exploit them to do harm, and when vulnerabilities come to light, hackers can attack multiple systems at once – but Chan sees this approach as an opportunity.

“Open models are great for countries like Canada that may not have… the resources to build their own frontier models,” he said.


This report by The Canadian Press was first published June 21, 2024.