At its most harmful, the same technology enables deepfakes, which can be used to fabricate political clips designed to mislead voters and to create sexualized replicas of real people, usually women.
Canada does not yet have a clear legal framework to prevent or punish these abuses, beyond narrow privacy or defamation claims that place the burden on victims to bring lawsuits. It is not clear whether a new version of the federal government’s previously proposed Online Harms Act will materialize in the current Parliament.
Elsewhere, regulatory progress is happening. The European Union’s AI Act instituted labelling rules for content that is artificially generated or manipulated. Denmark has proposed a bill that would offer broad copyright protections to digital identities, giving victims of deepfakes or AI copies some recourse. Illinois recently prohibited the use of AI chatbots in mental health therapy. Advocacy groups like Common Sense Media are calling for outright bans on companionship chatbots for minors, citing the emotional vulnerability of that demographic.
Canadian law has yet to draw similar red lines.
Should we also require mandatory labelling when a service includes an AI-mediated interaction? Should individuals have the right to license and protect their own likeness from being cloned? And what guardrails are needed to ensure AI avatars don’t become vehicles for fraud, exploitation, or manipulation?
The Criminal Code of Canada criminalizes counterfeiting money and forging documents. The Trademarks Act protects registered trademarks from being copied or imitated. The Copyright Act protects original works. And under common law, victims can sue for the misappropriation of their personality, giving public figures control over whether their likeness is used for commercial purposes without consent.
There is no clear boundary for “fake” humans.
The U.S. is attempting to reshore jobs that were lost during a massive period of globalization — but at least those jobs went somewhere else. When and how will we “reshore” roles that get handed off to AI in the near future?
Beyond job losses, this masquerade erodes trust in digital spaces. A future where Canadians cannot easily tell if they are talking to a person or a program is one where both democratic debate and consumer choice are undermined.
If pipelines and patents are national assets, so are our faces and voices. And in 2025, those assets don’t just sit in contracts or court filings; they sit on servers.
Tilly Norwood’s acting career makes overt what AI technology can do: scrape likenesses and voiceprints, use them as inputs to create digital performers, and distribute those performers back to us as entertainment, convenience, or gatekeepers.
The sovereignty question isn’t just about jobs anymore. It runs on two tracks at once: who owns the human inputs, and who controls the machines that capture, train and monetize them.
How these policy issues are addressed will define the character of our digital economy. Without deliberate rules, Canada risks outsourcing not just jobs, but trust.
These imitation humans are already becoming indistinguishable from the real thing. Our job is to quickly decide where the boundary should be between authentic human participation in digital economies and counterfeit digital imitation — before it is blurred entirely.