Canada’s proposed Artificial Intelligence and Data Act (AIDA) died on the order paper in January 2025 when Parliament was prorogued, and the April 2025 federal election buried it further. The new government has not moved to revive it; the current federal position is that AI will be regulated through existing privacy legislation and policy rather than a dedicated AI statute.
For organizations that have been waiting on Ottawa before building internal AI governance, this creates a comfortable-sounding rationale for continued inaction: there’s no law, so there’s no requirement.
That logic doesn’t hold up.
The OPC is not waiting
The Office of the Privacy Commissioner of Canada (OPC) hasn’t paused its AI oversight while Parliament sorts out its legislative agenda. Under PIPEDA, Canada’s 25-year-old federal privacy law, the OPC has taken active positions on AI.
The Commissioner joined provincial and territorial counterparts to publish principles for responsible, trustworthy, and privacy-protective generative AI technologies, and the OPC conducted a joint investigation into ChatGPT. In December 2025, the OPC launched a consultation to modernize its guidance processes, a clear signal that updated AI-specific interpretive guidance under PIPEDA is coming.
The current government’s stated position — regulate AI through privacy legislation rather than a standalone act — means PIPEDA is the law that applies today. Organizations operating AI systems that process personal information are already subject to it. The OPC’s published AI principles represent the Commissioner’s interpretation of those obligations in an AI context. When investigations happen, they will be conducted against that interpretive framework.
No dedicated AI statute doesn’t mean no obligation. It means the obligation lives in a 25-year-old privacy law that most organizations haven’t read in years.
The extraterritorial reach problem
The absence of Canadian AI legislation doesn’t create a governance exemption from other jurisdictions’ laws.
The EU AI Act reaches full applicability on August 2, 2026 — three months from now. Its extraterritorial scope mirrors the GDPR model: if your AI system produces outputs used in the EU, or if you place an AI system on the EU market, the Act applies regardless of where you are incorporated. For Canadian organizations with EU customers, EU employees, or EU-facing products, the August 2026 deadline is not someone else’s problem.
U.S. state-level AI legislation is also accelerating in ways that affect cross-border operators. California’s AI accountability law, effective January 1, 2026, eliminates autonomous operation as a liability defense: if your AI system causes harm, the fact that it acted autonomously is not a valid shield. Colorado’s AI Act, effective June 2026, requires deployers of high-risk AI systems to conduct impact assessments and maintain active risk management programs.
None of this requires a Canadian federal AI law to apply to Canadian companies that operate across these jurisdictions. The map of exposure is drawn by operations, not by incorporation address.
What clients and auditors are already asking
Regulatory exposure is one dimension of risk. Commercial pressure is the other, and in practice it moves faster.
Enterprise procurement teams, financial sector auditors, and insurance underwriters are adding AI governance questions to their vendor due diligence. The questions are variations on a consistent theme: what AI tools do you use, how do you govern them, what happens when they produce a bad output, and who is accountable for the consequences?
A Canadian mid-market company with no AI governance program has a straightforward answer to each of those questions: nothing formalized, no designated owner, no defined process. That answer doesn’t lose the conversation every time, but it loses it often enough to matter, particularly as enterprise buyers get more sophisticated about supply-chain AI risk.
The interim framework
With AIDA gone and no replacement announced, Canadian organizations need a proxy standard — a framework rigorous enough to demonstrate credible governance to regulators, clients, and auditors, while remaining adaptable to whatever federal legislation eventually arrives. We recommend a three-layer approach.
Layer one: PIPEDA compliance as the floor
Any AI system that processes personal information is already subject to PIPEDA’s accountability, purpose limitation, and accuracy obligations. The OPC’s generative AI principles translate these into practical requirements: document the purpose before deployment, minimize personal data in training and inference pipelines, monitor consequential outputs for accuracy, and build meaningful human oversight into high-stakes decisions. Mapping your AI systems against these principles is the minimum credible governance position for a Canadian organization today.
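In practice, that mapping can start as something very lightweight. The sketch below is a simplified illustration, not an official schema: it records each AI system against a reduced set of checks drawn from the OPC principles and reports the gaps. The field names and check list are assumptions made for the example.

```python
"""Illustrative sketch only: a minimal inventory check mapping AI systems
against a reduced set of checks drawn from the OPC's generative AI
principles. Field names are assumptions, not an official schema."""

from dataclasses import dataclass, field

# A reduced set of checks corresponding to the OPC principles cited above.
PRINCIPLE_CHECKS = [
    "documented_purpose",    # purpose recorded before deployment
    "data_minimization",     # personal data minimized in training/inference
    "accuracy_monitoring",   # consequential outputs monitored for accuracy
    "human_oversight",       # meaningful human review of high-stakes decisions
]

@dataclass
class AISystem:
    name: str
    processes_personal_info: bool
    controls: dict[str, bool] = field(default_factory=dict)

    def gaps(self) -> list[str]:
        """Return the principle checks this system does not yet satisfy."""
        if not self.processes_personal_info:
            return []  # out of PIPEDA scope in this simplified model
        return [c for c in PRINCIPLE_CHECKS if not self.controls.get(c, False)]

# Example: a support chatbot with no accuracy monitoring or human oversight.
chatbot = AISystem(
    name="customer-support-chatbot",
    processes_personal_info=True,
    controls={"documented_purpose": True, "data_minimization": True},
)
print(chatbot.name, "gaps:", chatbot.gaps())
# -> customer-support-chatbot gaps: ['accuracy_monitoring', 'human_oversight']
```

Even at this level of simplicity, the exercise produces a gap list per system, which is exactly the artifact an investigator or auditor will ask for first.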
Layer two: EU AI Act risk classification as a ceiling
The EU AI Act’s risk taxonomy — prohibited, high-risk, limited-risk, minimal-risk — is the most comprehensive AI risk classification framework published by any major regulator to date. Even for organizations with no EU regulatory nexus, applying this classification to your AI inventory is a useful discipline. It surfaces system-level accountability questions that PIPEDA alone doesn’t require, and it aligns your internal framework with the direction most international standards are moving. Organizations that have done this work will need minimal adjustment when Canadian legislation eventually catches up.
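A crude first pass over the inventory is a reasonable way to start. The sketch below triages declared use cases into the Act’s four tiers by keyword. The keyword lists are assumptions for illustration only; the Act’s actual classification rules (Annex III categories, exemptions, general-purpose AI provisions) are far more detailed, so anything this triage flags still needs proper legal review.

```python
"""Illustrative sketch only: a first-pass triage of an AI inventory into the
EU AI Act's four risk tiers. Keyword lists are assumptions for the example,
not legal criteria."""

from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high-risk"
    LIMITED = "limited-risk"
    MINIMAL = "minimal-risk"

# Hypothetical use-case keywords standing in for the Act's actual categories.
PROHIBITED_PRACTICES = {"social scoring"}
HIGH_RISK_DOMAINS = {"hiring", "credit scoring", "essential services"}
TRANSPARENCY_TRIGGERS = {"chatbot", "content generation"}

def triage(use_case: str) -> RiskTier:
    """Crude keyword triage; flags systems for proper legal review."""
    text = use_case.lower()
    if any(p in text for p in PROHIBITED_PRACTICES):
        return RiskTier.PROHIBITED
    if any(d in text for d in HIGH_RISK_DOMAINS):
        return RiskTier.HIGH
    if any(t in text for t in TRANSPARENCY_TRIGGERS):
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

inventory = ["resume screening for hiring", "internal chatbot", "spam filtering"]
for use_case in inventory:
    print(f"{use_case}: {triage(use_case).value}")
```

The value of the exercise is less the labels themselves than the forcing function: every system in the inventory gets a declared use case, a tier, and a reason.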
Layer three: NIST AI RMF as the operating structure
The U.S. National Institute of Standards and Technology’s AI Risk Management Framework — organized around Govern, Map, Measure, and Manage functions — provides a practical operating structure that works independently of any single regulatory jurisdiction. Adopting it as an internal framework means your governance program is translatable to any regulatory environment, including a future Canadian one. It also provides audit-legible documentation of governance activities, which is increasingly what clients and insurers want to see.
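Audit-legible documentation mostly means records showing what governance activity happened, under which RMF function, who owned it, and when. A minimal sketch of such a record follows; the record format is an assumption for illustration and is not part of the framework itself.

```python
"""Illustrative sketch only: logging governance activities under the NIST AI
RMF's four functions to build an audit-legible record. The record fields are
assumptions for the example, not part of the framework."""

from dataclasses import dataclass, asdict
from datetime import date
from enum import Enum
import json

class RMFFunction(Enum):
    GOVERN = "Govern"
    MAP = "Map"
    MEASURE = "Measure"
    MANAGE = "Manage"

@dataclass
class GovernanceRecord:
    system: str
    function: RMFFunction
    activity: str
    owner: str
    performed_on: date

    def to_json(self) -> str:
        """Serialize to JSON, converting the enum and date fields."""
        d = asdict(self)
        d["function"] = self.function.value
        d["performed_on"] = self.performed_on.isoformat()
        return json.dumps(d)

# Example: evidence that an accuracy review happened, who owned it, and when.
record = GovernanceRecord(
    system="customer-support-chatbot",
    function=RMFFunction.MEASURE,
    activity="Quarterly accuracy review of escalation recommendations",
    owner="privacy-officer@example.com",
    performed_on=date(2026, 5, 1),
)
print(record.to_json())
```

A log of records like this, accumulated from day one, is precisely the governance history that cannot be reconstructed after the fact, which is the point of the next section.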
The retrofit problem
When Canada does eventually legislate — and it will, in some form — organizations that have been operating AI without governance infrastructure will face a retrofit. Retrofitting governance onto deployed AI systems is significantly more expensive and disruptive than building governance alongside deployment. The audit trail isn’t there. Vendor contracts lack the necessary accountability clauses. Accountability owners were never designated. Incident records don’t exist. Model documentation was never created because no one asked for it.
The cost of that retrofit compounds with time. Every quarter that AI systems operate undocumented is a quarter of governance history that can’t be reconstructed. When a regulator, client, or auditor asks for evidence of governance going back eighteen months, the honest answer from an organization that started last week is: we don’t have it.
Organizations building governance now are building an asset. Organizations waiting are accumulating a liability.
The practical question
AIDA’s collapse removed the legislative forcing function. It didn’t remove the governance requirement — it made the requirement less visible and easier to defer. For organizations inclined to move only when the law explicitly compels them, that’s a comfortable position until an investigation, a failed RFP, or a client incident makes it uncomfortable.
The OPC is actively interpreting PIPEDA against AI deployments. International regulators with extraterritorial reach are enforcing their own frameworks. Clients and auditors are asking today. The question for Canadian organizations isn’t whether AI governance is required. It’s whether they would rather build it deliberately, on their own terms and timeline, or reconstruct it under pressure after something goes wrong.
Sources: McInnes Cooper — The Demise of AIDA: 5 Key Lessons; OPC — Privacy and Artificial Intelligence; Osler — Canada’s 2026 Privacy Priorities; Venable — Agentic AI: Legal, Compliance, and Governance Risks; European Commission — EU AI Act.