
Does AI Need Its Own Criminal Code? - November 21, 2025

What Are The Dangers of AI & How Do We Protect Ourselves?





Artificial Intelligence is front of mind these days. Much of the current news is dominated by stories about how much money is being spent developing AI (the tools, agents and infrastructure), and how much influence the AI phenomenon has on the stock markets. We are fortunate that there haven't been many stories about AI being used to scam people or to cause physical harm, apart from one allegation in which an AI tool is accused of encouraging and aiding an American youth in taking their own life. That story is heart-breaking and tragic, and needs further investigation before a well-thought-out conclusion can be reached.


Because AI is in the news every day, I spend time thinking and reading about the benefits of AI and its dangers to our society. Like anything I read, I spend time considering what is fact and what is fantasy or folklore. This has me thinking: if AI agents can indeed develop the characteristics of sentient beings, then they could act on their own, making decisions independently of their human creators and managers. If that happens, does AI need its own set of laws, including a criminal code?


Our existing laws won't help us here. When a human breaks a law or fails to conduct themselves in accordance with applicable regulations, there are consequences, including loss of freedom (jail time) and monetary penalties (paying fines, or paying restitution to those harmed). If an AI agent commits a transgression independent of any human instruction or guidance, then who goes to jail: the software code, or the computer/server hosting the AI software? Of course not. Perhaps we could incarcerate a robot if it were the guilty party. Similarly, if AI violates a regulation, who pays the fine? Again, it is not going to be a piece of software code or a computer. Does a robot even have a bank account from which to pay fines?


Let's back up a bit and establish a common understanding of what artificial intelligence is. In doing that, let's also establish the benefits of AI and the risks it poses to our society: some of those risks we can manage, while others require a separate set of laws and regulations.


What Is Artificial Intelligence (“AI”)?

Artificial intelligence is the field of computer science focused on creating machines and software that can perform tasks that normally require human intelligence. An AI system can sense, understand, learn, and act, often improving over time without being explicitly programmed for every step. AI exists in software programs, in computer chips and servers, and can be embodied in robots that take the form of humans or animals.


What Are The Benefits Of AI?

  1. Productivity Gains:

AI can automate repetitive tasks, assist with complex work, and augment human capabilities. Examples include: automating paperwork and scheduling; faster software development; automated customer support; real-time financial analysis. The economic impact includes potential productivity growth and labour cost savings.

  2. Medical Breakthroughs & Health Improvements:

AI is transforming healthcare by improving diagnosis, treatment, and system efficiency; examples include: early detection of cancer and heart disease; personalized medicine; faster drug discovery; more efficient hospital operations; AI companions for mental health support. AI could save lives and extend healthy lifespans.

  3. Improved Education:

AI can personalize learning to each student’s strengths, weaknesses, and pace. The benefits include: customized tutoring; adapting the curriculum based on students' reactions/needs; automated grading; access to high-quality instruction for remote or underserved groups. All of this can give antiquated education systems the refresh and updating they sorely need.

  4. Economic Growth & New Industries:

AI enables entirely new industries and business models, such as: autonomous systems; biotech/AI-driven drug discovery; generative media; robotics; personalized AI assistants; micro-manufacturing. The result is newly-created jobs, innovation, investment, and economic expansion.


What Are The Dangers Of AI?

  1. Job displacement & economic disruption

AI automation can replace or restructure many tasks, especially: administrative work; customer service; accounting; software development; logistics & operations; basic legal work. The risks to society from displacing so many entry-level jobs include: loss of early-career training opportunities; widening inequality; wage pressure; and economic change that moves faster than workers can adapt to. We saw this last issue with the introduction of the internet in the mid-to-late 1990s, when many people were left behind by that huge technical innovation.

  2. Bias and discrimination

AI systems learn from data, and that data can reflect human biases; online data, in particular, contains many falsehoods and half-truths. As a result, the consequences may include rulings and decisions based on false information, such as: unfair hiring; racial or gender discrimination; skewed legal assessments. Because these systems operate at such volume, biased or incorrect decision-making can be scaled to millions of decisions per day.

  3. Privacy erosion & surveillance

AI enables: facial recognition at scale; voice identification; predictive behaviour analytics; mass data extraction. The implications are that governments, corporations and bad actors can track people in ways that were previously impossible.

  4. Misinformation, manipulation & deepfakes

AI can generate: synthetic voices; ultra-realistic videos; fake news; targeted political persuasion. The internet and social media are already full of such false content; viewers have to look closely and really digest the message being delivered to recognize that it is fake. The risk is a breakdown of trust in information, elections, public institutions, and even personal relationships.

  5. Security vulnerabilities

AI introduces new types of cyber risks: AI-generated phishing; automated hacking; model manipulation; data poisoning. With these automated capabilities at their disposal, adversaries and bad actors can use AI to scale attacks dramatically.

  6. Concentration of power

A small number of companies and governments may control the largest data-gathering models, the computing infrastructure, and the global AI standards. This results in: geopolitical imbalances; reduced competition; increased dependency of businesses and countries on hyperscalers (Google, Amazon, Meta, Microsoft).

  7. Lack of transparency

Many AI systems—especially deep learning models—are “black boxes,” making it hard to: understand the decisions the AI is making; debug failures; audit fairness; ensure accountability. This erodes trust and makes regulation of AI systems difficult.

  8. Autonomous systems failure

Self-driving cars, robots, industrial automation, and autonomous weapons can fail in dangerous ways with potentially severe consequences for human life: unexpected edge cases; wrong environmental predictions; compounding errors; catastrophic accidents. These systems require extremely high reliability and a high degree of trust.

  9. Weaponization

AI can be used in: autonomous weapons; drone swarms; cyber warfare; bio-weapon design assistance; targeted propaganda. The speed and scale of AI make these threats harder to contain and harder to police, and they demand a greater degree of trust.

  10. Societal disruption & psychological effects

AI could reshape: human relationships; education; mental health; political discourse; workplace dynamics. Potential issues include reduced human skills; human isolation/lack of social interaction due to AI companions; psychological manipulation of young and immature people through personalized content. 


Given the implications of this new technology, there is a lot of discussion about what the regulations and laws governing the use and control of AI tools and agents are going to be, and plenty of discussion about who is going to develop those laws and regulations. Here is a summary of where Canada, Europe, and the United States currently stand on the development and implementation of AI laws:

Artificial Intelligence Regulation Comparison: Canada vs. European Union vs. United States:

🇨🇦 Canada

Status: In progress / partially implemented

Main Frameworks

  • AIDA (Artificial Intelligence and Data Act) — part of Bill C-27

    • Not yet fully enacted (progress slowed due to suspension of Parliament, the recent Federal Election and a change in the country’s leadership).

    • Focuses on "high-impact" systems, risk management, transparency, and fairness.

  • Voluntary Code of Conduct for Generative AI (soft law).

Strengths

  • Early attempt at creating a national AI law.

  • Includes creation of an AI & Data Commissioner.

  • Defines obligations for high-risk systems (risk mitigation, audits, transparency).


Weaknesses

  • Not in force; key details left to later regulations.

  • Does not cover government/public-sector AI sufficiently.

  • Much of the regulation relies on voluntary participation, giving it limited enforceability.

  • Slower progress than in the European Union.


🇪🇺 European Union

Status: Fully legislated (AI Act adopted)

Main Framework

  • EU AI Act (2024/2025 implementation) — the world’s first comprehensive AI law.

Key Features

  • Full risk-tiered system:

    • Unacceptable AI (banned): social scoring, manipulative AI, real-time biometric ID (mostly).

    • High-risk AI: strict documentation, human oversight, audits.

    • Limited-risk: transparency rules (e.g., bots must identify as bots).

    • Minimal-risk: allowed with no restrictions.

  • Heavy penalties (similar to GDPR levels; GDPR = General Data Protection Regulation).

  • Obligations apply across the entire AI value chain: developers, deployers, importers, distributors.

Strengths

  • Strongest global framework.

  • Strict human-rights protections.

  • Enforcement mechanisms and high fines.

Weaknesses

  • Complex and resource-intensive for businesses.

  • Risk of slowing innovation, causing European countries to miss out on commercial opportunities.

  • Many rules apply extraterritorially: even foreign companies serving EU markets must comply, which can make Europe a less appealing place for foreign businesses to transact.

🇺🇸 United States

Status: Fragmented, regional, enforcement-through-agencies

Main Elements

  • No national AI law.

  • Executive Order on AI (Biden, 2023) — sets safety standards, federal AI usage rules, and reporting requirements for frontier models.

  • Regulators use existing authority:

    • FTC → deceptive AI, unfair competition

    • SEC → AI in financial markets

    • FDA → AI in medical devices

    • DoD → autonomous systems

  • State-level actions:

    • California: strong data rules, algorithmic accountability proposals

    • Colorado: first statewide AI fairness law (2024/25)


Strengths:

  • None

Weaknesses

  • Patchwork → inconsistent standards.

  • No comprehensive protections for high-risk systems.

  • Heavy reliance on self-regulation by tech companies developing AI.

Summary: 

  • The European Union is the global leader in binding AI regulation.

  • Canada is trying to follow, but is lagging behind and still relies heavily on voluntary codes.

  • The United States focuses on innovation and enforcement rather than pre-emptive law — powerful market influence but weak formal regulation.


As you can surmise from the information above, we are not in a position of unanimity or consistency in regulating AI. In the absence of international or country-wide laws and regulations, the companies developing AI technology are left trying to put safeguards in place within their own organizations and their own software products, to prevent the misuse of AI tools and agents by the AI itself or by humans.


While the Europeans seem to be leading the way in regulating AI, many Europeans are not happy with the current stringent rules. They worry that regulations that are too strict will cause Europe to fall behind in capitalizing on the benefits of AI.


Which brings us to the opening question of this article: does AI need its own criminal code? 


In an act of irony, I put this exact question to ChatGPT. Here is what the AI tool said, with my comments in italics:


Why AI doesn’t currently need its own criminal code:

  1. AI cannot form criminal intent (“mens rea”). Criminal law is built on the idea that a person chooses to commit a crime. AI has no intent, consciousness, or moral agency.  

Shail - it's too early to tell whether AI has the ability to formulate its own intent, criminal or otherwise. The danger in waiting is that the precedent-setting event may in itself be catastrophic. So do we wait for that to occur, or do we set penalties in place now for the human using, or in charge of, the AI tool?

  2. Existing laws already apply to humans and organizations who build or deploy AI.

    • Fraud using AI → still fraud by the person using it.

    • Dangerous autonomous system → this leads to corporate liability.

    • Privacy violations → we have existing privacy and data laws.

Shail - yes, but the existing laws need to be re-written to cover acts committed by AI under someone's control, and to penalize that person for the harm caused.

  3. Assigning criminal guilt to a non-human agent breaks the core logic of criminal justice. You can't imprison or morally punish a machine.

Shail - true; however, whoever controls the AI needs to be held responsible.


ChatGPT’s conclusion - Why we may eventually need AI-specific criminal-like rules:

Even if AI isn’t a “person,” society likely needs new categories of legal responsibility, because traditional law doesn’t fully cover situations where AI systems act with autonomy or unpredictability.  

Shail - even ChatGPT agrees that existing laws need to be re-written to cover actions taken by AI.


What a future “AI Criminal Code” might look like:


Even though some will argue that advanced AI acts like a sentient human being, it is still a piece of software code, a processing semiconductor or a robot, all of which are inanimate objects; thus we cannot criminalize AI itself. Instead, we need to create criminal liabilities and regulatory obligations for people and organizations who:

1. Build dangerous AI systems:

  • Knowingly deploy systems capable of causing large-scale harm

  • Fail to include required safety testing, logging, or human control layers

2. Use AI to commit crimes at scale:

  • Automated fraud

  • Creation of synthetic identities

  • Deepfake extortion

  • Autonomous malware generation

3. Engage in reckless AI development:

  • Training models on restricted biological data

  • Developing self-propagating agents without containment

  • Deploying systems that can’t be shut down

4. Violate AI safety compliance rules, similar to those in the aviation or nuclear industries:

  • mandatory audits

  • licensing for high-power systems

  • required kill switches

  • transparency and traceability rules


AI should not be treated as a criminal. Humans and corporations remain responsible, as AI is merely a tool in its current form. But AI creates new kinds of risks that may require new criminal and regulatory laws in the next 5–10 years.


This is my third article dealing with the impact of Artificial Intelligence on our society. I wrote an article in June 2025 about what I thought at the time was the real danger of AI, namely the amount of electricity/power that AI infrastructure consumes; that article can be read here. Last week I wrote about Wall Street's recent creativity in financing AI infrastructure builds, and the danger of that creativity; that article can be read here.





