by Cole Martin for The 44 North, Contributing Writer - Politics
Mar 31 · Updated: Apr 6

If, like me, you’re guilty of scrolling social media way past your bedtime (“doom-scrolling,” if you prefer), odds are you’ve encountered an artificial intelligence (AI)-generated photo or video, if not a deluge of them. Most are easy to spot, particularly those of fantastical flavour: a dancing dog; a baby driving a bus; a photorealistic SpongeBob delivering a State of the Union address. It’s not difficult, however, to imagine the harms that arise when the technology is wielded with verisimilitude.
Public skepticism towards AI reflects such apprehensions. According to a 2025 survey, “Half of U.S. adults say the increased use of AI in daily life makes them feel more concerned than excited.” Given that some of our most cherished media explicitly detail the dystopian threats of artificial intelligence (Black Mirror, Minority Report, Terminator, etc.), this statistic isn’t exactly surprising. But I think there’s more to it than mere prejudice.

AI is a strange commodity. Capitalist markets purport to follow supply and demand: increased demand for a product leads to increased production. In the case of AI, though, this relation appears inverted. Every company is champing at the bit to inject AI into its products and services, hoping to capitalize on the emergent technology, even though shoving it in people’s faces only seems to make them more obstinate.
The problem is that the major players in AI (Google, Meta, OpenAI, Nvidia, etc.) have too much money invested in the technology to turn back. AI has been the darling of the stock market for years, and now, with 10 AI-adjacent stocks accounting for over a third of the S&P 500’s earnings, there’s cause to compare it to the dot-com bubble of the late 1990s. If confidence in AI’s profitability wanes, the bubble could burst, and the whole market could go down with it. Consequently, these companies need users to justify their AI investment to shareholders, and if people aren’t volunteering to be users, companies are forcing them to be: apps and websites that worked just fine before the AI craze are now inundated with AI search functions and chatbots, and consumers are understandably disgruntled (I know I am).
It’s not solely companies that are shackled to the runaway AI train, either; Canada itself is a notable player in the AI race, with Prime Minister Mark Carney earmarking over $1 billion in the federal budget to strengthen Canada’s position in the field, and Bell Canada set to construct Canada’s largest AI data centre in Regina.
Only time will tell whether the AI economy booms or busts, but I think the language companies and governments use to discuss AI is telling. Evan Solomon, Canada’s Minister of Artificial Intelligence and Digital Innovation, made Canada’s intentions explicit when he called artificial intelligence “one of the greatest economic opportunities of our time.” It’s the classic sales strategy: tout the technology’s economic potential while downplaying or outright ignoring its harms.
Troubles have been brewing with AI for a while (environmental tolls, copyright issues, and psychological dangers, to name a few), and Canada is already falling behind on mitigating them. A recent example occurred in Nova Scotia, where a man was acquitted of charges for using AI to generate and publish nude images of his high-school classmates without their consent.
While section 162.1(1) of the Criminal Code states that “everyone who knowingly publishes [...] an intimate image of a person knowing that the person depicted in the image did not give their consent” is guilty of a criminal offence, the presiding judge found that existing legislation does not adequately cover what the accused did, because the AI-generated images didn’t meet the current legal definition of intimate images.
Cases like this are, unfortunately, far from novel in the realm of machine-learning technology. Deepfakes have been around for over a decade, dating back to at least 2014 and the advent of the Generative Adversarial Network (GAN). Even in its nascent stages, the technology was used to generate pornographic content, spawning a flurry of websites that, like Grok (X’s proprietary AI that reportedly produced over 6,000 sexualized deepfakes an hour), allowed users to submit images for the tech to undress.
In December 2025, Bill C-16, the Protecting Victims Act, was introduced in Parliament, with one of its express goals being to “expand the offence prohibiting the non-consensual distribution of intimate images to ensure that it applies to non-consensual deepfakes” (“deepfakes” being the colloquial term for images and videos created with deep-learning AI to resemble real people). This is certainly a welcome step, but given that the bill has yet to be voted on, one can’t help but feel it’s arrived much too late.

If Canada is all-in on AI, why haven’t law and policymakers been more on top of protecting Canadians from AI-related harms?
The answer is couched in the question. As mentioned above, Canada, like much of the world, is economically entrenched in AI. Alongside Carney’s proposed investment, most Canadians who invest in the stock market (whether through personal holdings or group RRSPs) are deeply entangled with AI, and it could spell disaster if the bubble bursts. In other words, there is a vested interest in AI’s economic success.

That said, a more circumspect approach is not impossible. The EU was relatively quick to regulate AI, and Canada even tried to follow its lead with the Artificial Intelligence and Data Act (AIDA) in 2022, but the attempt was regarded as lacklustre, and the bill ultimately died when Parliament was prorogued in 2025. Despite the death of AIDA, the desire of Canadians is clear: per a 2025 poll, 85% of Canadians believe the government should regulate AI.
Canada has a storied history of not seeing the forest for the trees. Its habit of ceding to lobbyist pressure and choosing short-term economic windfalls over regulations that protect citizens and strengthen workers has done lasting damage, and is one of the principal reasons we became economically reliant on the U.S. in the past. And despite Carney’s defiant speech at the World Economic Forum, Canada seems determined to throw caution to the wind once more with AI.
Canadians deserve a government that prioritizes their well-being as much as it does its economic interests; one needn’t always be sacrificed for the other. If Canada is doubling down on AI, it should also be doubling down on protecting Canadians.

Cole Martin is a writer from Atlantic Canada. He can be found on Bluesky @coleboy.bsky.social