The idea that a hit Spotify artist might not even be human feels like a satire of the attention economy itself: an ecosystem once built on authenticity and connection is now being topped by a synthetic voice engineered for maximum uplift. What does “soul” even mean when the song is made by an algorithm trained on real music?
In a year when other “ghost artists” like Velvet Sundown also made headlines, Canada is reigniting the CanCon conversation — again. This time it’s about platforms, algorithms and the ever-abstract question of “Canadian content.” Last week, the federal broadcast regulator, the CRTC, released a new definition of CanCon, and it says that AI doesn’t qualify. That makes sense! But it doesn’t clarify how AI-generated artworks should be classified.
As we gear up for the CRTC’s latest round of definitional parsing, something more foundational calls for examination: the composition of the content itself.
Before we can decide what qualifies as “Canadian,” we need a way to know whether the content is human-made at all.
This month brought a wave of stories reminding us just how impossible that’s becoming: AI-generated pop songs topping Billboard, “pitch perfect” AI tracks celebrated by the BBC, The Local uncovering an alleged journalist-impersonation scam powered by AI, and that viral Bloomberg column begging Spotify to “stop the slop” before it reaches our ears.
People are inclined to reject fake content, and a recent survey found that Canadians are increasingly “concerned” about AI-generated material. Consumers appear willing to revolt.
So, will the market just magically correct for the infusion of computer-made material in our feeds? In some cases, that’s where the wind is blowing. YouTube announced this summer that it will not monetize content that is generated by AI, while Vine is relaunching with a proudly no-AI policy. Some platforms, at least, are beginning to draw real boundaries around AI-made content.
However, in other places online — like Spotify — the distinction is non-existent. The music-streaming platform has no obligation, and currently no ability, to guarantee that your favourite song was sung by an actual person. Meanwhile, music labels are striking deals with AI-first streaming platforms and mountains of “sloppified” songs are dominating TikTok. Pitchfork is already calling it a crisis.
This “slopification” of content parallels the “firehose of falsehood” method of spreading disinformation: overwhelm the system with vast quantities of low-quality synthetic material so that authenticity becomes impossible to discern, platforms default to whatever is cheapest and most scalable, and public trust erodes (not just in the content itself, but in the institutions tasked with curating it).
An optimist may argue that the free market will correct this: audiences will demand human-made work, simple interventions like labels will differentiate material and platforms will adapt.
But market discipline only works when consumers can make informed choices, and that requires one thing we don’t currently have: knowability.
That information asymmetry is being tackled in different ways, depending on the context.
There has long been concern about the use of AI in government decision-making. Currently, Quebec — through its Privacy Act — is the only jurisdiction in Canada that requires public agencies to disclose the use of AI in their decision-making processes. In the U.K., government agencies post their use of algorithms on a public hub and complete AI transparency reports. The OECD’s recent report on the use of AI in core government functions similarly encourages governments to act transparently and with due regard for the public good.
Other jurisdictions are also recognizing that the information asymmetry between internet users and the generators of AI content is a consumer protection issue.
With this recognition come efforts to impose transparency, traceability and accountability obligations on businesses that create and deploy AI content.
California’s landmark Generative Artificial Intelligence: Training Data Transparency Act goes into effect on January 1, 2026, requiring developers of generative AI systems to publish the source and ownership of their datasets; whether the data includes copyrighted works, or personal and aggregate consumer information; any synthetic data used; and how the data supports their AI model’s purpose. The bill aims to overcome AI’s “black box” problem by making its inputs transparent. The law is part of a broader push from California to govern AI’s foundations, not just its outcomes.
While inputs are important, what matters most to the average internet user is the labelling of AI content they regularly consume in their feeds.
The European Union’s AI labelling requirements come into force in August 2026. With the goal of enhancing transparency and preventing deception, Article 50 of the EU regulation imposes a binding requirement to label content created or manipulated by AI systems that could be perceived as real or human-made, including text, images, voices and videos.
Several states in the U.S. are also considering laws that impose mandatory disclosure and labelling of AI-generated content. In Pennsylvania, a proposed law would mandate “clear and conspicuous disclosure” when written text, images, audio or video is created with AI. Georgia also proposed a law requiring disclosures whenever AI-generated content is used in advertising or commerce. In Massachusetts, the Artificial Intelligence Disclosure Act, introduced in February 2025, would require clear and permanent disclosures identifying content as AI-generated.
Back to CanCon
Traditionally, the goal of promoting CanCon has been to ensure the ongoing delivery of compelling, high-quality Canadian-made creative content. The phrase “Content Made by Canadians” is a guiding principle.
That principle is reaffirmed in the CRTC’s recently published regulation, which specifies that humans, not AI, must occupy key creative roles in a production for it to qualify under the CanCon rules.
If the regulator can create these standards, how else can the Canadian government help us figure out when material is slop?
The first step to solving the problem of “slopification” is to treat the issue as a consumer protection priority.
Canadians should be able to trust the content they consume in Canada — not have to guess whether something they hear, watch or read is authentic.
Following other jurisdictions’ lead on AI transparency would be a strong step towards the overarching goal of CanCon regulation: enhancing cultural and economic sovereignty.
CanCon emerged during a distinctly sovereignty-driven era in Canadian policymaking. In the 1960s and ‘70s, Ottawa worried that American broadcasters, studios and cultural products were overwhelming Canada’s airwaves and shaping Canadian identity.
CanCon quotas, the CRTC, the Broadcasting Act, Telefilm and the CBC’s mandate were all established as tools of cultural self-determination — a deliberate attempt to assert sovereignty over a media ecosystem increasingly dominated by the United States.
They were designed to ensure Canadian stories had space to exist and compete inside a market where foreign players (especially U.S. networks) held disproportionate control.
“Slopification” is not just a cultural annoyance; it is a contemporary sovereignty issue. Sovereignty depends on knowability (what’s real), traceability (where it came from), accountability (who benefits), and the basic ability to give preference to domestic content that actually exists.
Traditional CanCon was designed to protect Canadian stories from foreign studios; today the threat is foreign models — and a flood of synthetic content that collapses the very meaning of “Canadian” in the first place.
Until next time,
Vass Bednar