🦖 TaskZilla
Safety Compliance · March 22, 2026 · 6 min read

Silence Means No

TaskZilla can create tasks, assign work, send reports, and plan sprints. Some of those actions affect real people. So every action passes through a risk gate, and if a big decision needs human approval and nobody responds? The answer is no. Not yes. No.

An AI PM That Acts Needs Guardrails

Here's the thing about an AI project manager: it doesn't just summarize things. It does things. Creates tasks. Reassigns work. Sends messages to stakeholders. Closes sprints. Each of those actions has real consequences for real people.

Without guardrails, that's a liability. With the EU AI Act coming into effect in 2026, it's also a compliance problem: AI systems that affect employment, task allocation, and worker monitoring are classified as high-risk.

Every Action Gets a Score

Before TaskZilla does anything, the action passes through a risk gate and gets scored 0-6:

| Score | What It Means | What Happens |
| --- | --- | --- |
| 0-1 | No risk | Just do it, log it. Reading a task, checking status. |
| 2-3 | Medium risk | Configurable: your admin decides auto-proceed or require review. Creating tasks, assigning work. |
| 4-5 | High risk | Human must approve. Always. Closing a sprint, sending external reports, changing priorities. |
| 6 | Blocked | Hard stop. No override, no config, no exception. |
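The mapping above can be sketched as a single dispatch function. This is a minimal illustration, not TaskZilla's actual API; the names (`gate`, `Decision`, `medium_auto`) are invented for the example:

```python
from enum import Enum

class Decision(Enum):
    PROCEED = "proceed"   # execute and log
    REVIEW = "review"     # admin-configurable checkpoint
    APPROVE = "approve"   # explicit human approval required
    BLOCKED = "blocked"   # hard stop, no override

def gate(score: int, medium_auto: bool = False) -> Decision:
    """Map a 0-6 risk score to an outcome (illustrative sketch)."""
    if score >= 6:
        return Decision.BLOCKED
    if score >= 4:
        return Decision.APPROVE
    if score >= 2:
        # Medium risk: the admin decides whether this auto-proceeds.
        return Decision.PROCEED if medium_auto else Decision.REVIEW
    return Decision.PROCEED
```

The ordering matters: checks run from most to least restrictive, so a score can never fall through to a weaker outcome.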

Some Things Are Just Blocked

Score 6 means no override, no configuration, no exception.

These aren't things TaskZilla would normally do. But compliance means proving you can't, not just promising you won't. The gate exists so there's zero ambiguity.

Employment Stuff Gets Escalated

If TaskZilla detects an action that touches employment decisions (performance reviews, hiring input, task allocation that could affect someone's standing), it automatically bumps the score to 5 and waits for human approval.

Same for anything that could have legal effects, involves profiling people, or affects access to essential services. These aren't TaskZilla's calls to make. It flags them, presents the context, and waits.
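The escalation rule is a simple floor on the score. A sketch under assumed names (`ESCALATION_TRIGGERS` and the category labels are illustrative, not TaskZilla's real taxonomy):

```python
# Protected categories that force human review (illustrative labels).
ESCALATION_TRIGGERS = {
    "employment",          # performance, hiring, standing
    "legal_effect",        # anything with legal consequences
    "profiling",           # profiling of individuals
    "essential_services",  # access to essential services
}

def effective_score(base_score: int, categories: set[str]) -> int:
    """Bump any action touching a protected category to at least 5."""
    if categories & ESCALATION_TRIGGERS:
        return max(base_score, 5)
    return base_score
```

Note it's `max`, not assignment: an action already scored 6 stays blocked rather than being pulled down to 5.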

The Golden Rule: Silence = No

This is the design decision we're most proud of.

When a high-risk action needs human approval, TaskZilla sends a notification and waits up to 5 minutes. If nobody responds in that window, the action is denied.

Why not the other way around?

An AI system that defaults to "proceed" when nobody's watching is the definition of a compliance risk. If someone's asleep, in a meeting, or just didn't see the notification, the safe default is to do nothing. Inaction is always safer than unauthorized action.
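The key property is that approval is opt-in: only an explicit, timely "yes" flips the default. One way to sketch that (the `request_approval` callback is a hypothetical notification hook, not a real TaskZilla function):

```python
import threading

APPROVAL_TIMEOUT_S = 300  # 5-minute approval window

def await_approval(request_approval, timeout: float = APPROVAL_TIMEOUT_S) -> bool:
    """Default-deny: returns True only on an explicit, timely approval."""
    approved = threading.Event()
    # Notify the human; approving fires the event.
    request_approval(on_approve=approved.set)
    # Event.wait returns False on timeout, so silence means "no".
    return approved.wait(timeout)
```

There is no code path that returns True without a human having acted; the timeout branch and the "nobody saw it" branch are the same branch.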

You Control the Sensitivity

The risk gate's sensitivity is configurable:

For fast-moving teams that trust the system, medium-risk can auto-proceed. For regulated environments, everything above score 1 requires a human. Score 4 and above always requires approval โ€” that's not configurable. Your call on the rest.
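A sketch of what that configuration surface might look like, assuming two tunable settings and one hard floor (field names are illustrative):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class GateConfig:
    medium_auto_proceed: bool = False  # scores 2-3: auto-proceed or review
    review_threshold: int = 2          # lowest score that triggers review

    def needs_human(self, score: int) -> bool:
        if score >= 4:
            return True  # hard floor: never configurable away
        if score >= self.review_threshold:
            return not self.medium_auto_proceed
        return False
```

A regulated team sets a low threshold and leaves auto-proceed off; a fast-moving team flips `medium_auto_proceed` on. Neither can touch the score-4 floor.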

80 Tests Say It Works

The risk gate has 80 passing tests covering every score level, every blocked category, timeout behavior, configuration overrides, and edge cases like compound requests (one action is fine, another in the same request is high-risk; the whole request gets the higher score).
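The compound-request rule is just a max over the per-action scores. A one-line sketch (illustrative, not the real implementation):

```python
def request_score(action_scores: list[int]) -> int:
    """A compound request is as risky as its riskiest action."""
    return max(action_scores, default=0)
```

So bundling a harmless read with a sprint-close doesn't smuggle the sprint-close past the gate; the whole request inherits the higher score.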

It's in the Legal Pages Too

This isn't just code: it's documented in TaskZilla's privacy policy, terms of service, and security documentation. Users have a right to know when AI is making decisions that affect them, and how to challenge those decisions. We put it in writing.

Go deeper: Security · Risk-Gate scoring + Annex III mapping (the engineering reference)
When in doubt, don't. Amsterdam.