Why Your AI PM Needs Memory
You explain your project on Monday. By Wednesday, your AI asks who's on the team. By Friday, it's forgotten your sprint goals. We got tired of that. So we fixed it.
The "Who Are You Again?" Problem
Most AI tools work like a goldfish with a keyboard. Every conversation starts from zero. You're re-explaining your project structure, your team dynamics, your deployment process. Every. Single. Time.
That's fine for a chatbot that writes poems. It's useless for a project manager. A PM needs to remember who blocks whom, which decisions keep getting revisited, and what you agreed on three weeks ago. Without memory, you don't have an AI PM; you have a very expensive autocomplete.
Two Brains, One Zilla
TaskZilla has two memory systems that work together: think of them as "what happened" and "what it means."
The Relationship Brain
This tracks connections in real time. When your designer mentions she's waiting on API specs from the backend team, TaskZilla doesn't just note it; it understands the dependency. Next time you ask "what's blocking the redesign?", it already knows.
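To make that concrete, here's a toy sketch of the idea: dependencies as edges in a small graph, plus a query that walks the chain. The names (`DependencyGraph`, `blockers`) are ours for illustration, not TaskZilla's actual API.

```python
from collections import defaultdict

class DependencyGraph:
    """Toy relationship store: what is waiting on what."""

    def __init__(self):
        # task -> set of things directly blocking it
        self.blocked_by = defaultdict(set)

    def add_dependency(self, task: str, blocker: str) -> None:
        # "the redesign is waiting on API specs" becomes an edge
        self.blocked_by[task].add(blocker)

    def blockers(self, task: str) -> set[str]:
        # walk the chain so indirect blockers surface too
        seen, stack = set(), [task]
        while stack:
            for b in self.blocked_by[stack.pop()]:
                if b not in seen:
                    seen.add(b)
                    stack.append(b)
        return seen

graph = DependencyGraph()
graph.add_dependency("redesign", "API specs")
graph.add_dependency("API specs", "backend review")
print(graph.blockers("redesign"))  # {'API specs', 'backend review'}
```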
The Pattern Brain
This stores distilled knowledge from weeks of working with your team. Not raw chat logs, but actual patterns. "This team deploys on Thursdays." "Sprint reviews always run long." "When Kai goes quiet for 2 days, something's stuck." After a few weeks, TaskZilla knows your project's rhythm better than your calendar does.
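A pattern in this brain is closer to a small structured record than a transcript. Something like this (the fields are our guess at the shape, not the real schema):

```python
from dataclasses import dataclass

@dataclass
class Pattern:
    """One piece of distilled team knowledge."""
    statement: str       # the pattern, in plain language
    confidence: float    # 0..1: how sure we are it still holds
    evidence_count: int  # observations backing it up

patterns = [
    Pattern("Team deploys on Thursdays", confidence=0.9, evidence_count=6),
    Pattern("Sprint reviews run long", confidence=0.7, evidence_count=4),
    Pattern("Kai going quiet for 2 days means something's stuck",
            confidence=0.6, evidence_count=3),
]
```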
Forgetting on Purpose
Here's the thing nobody talks about: an AI that remembers everything is almost as bad as one that remembers nothing. You don't want your PM bringing up that one Slack message from 6 months ago that has zero relevance to today.
TaskZilla forgets on purpose, and on a schedule. Here's how it works (there's a code sketch of the full decay model at the end of this section):
- Important stuff sticks around. Architecture decisions, team agreements, compliance requirements: these have a long shelf life (5-6 months).
- Medium stuff fades naturally. Sprint goals and blocker resolutions stay useful for a few weeks, then age out (~3 weeks).
- Small stuff disappears fast. A casual mention of a tool preference? Gone in a week unless it comes up again.
- Stuff you keep using stays fresh. Every time TaskZilla recalls a memory and it's still relevant, that memory gets a boost. The logic: if you keep coming back to it, it must matter.
The 2-Year Guarantee
No matter what, everything prunes within about 2 years. No zombie memories haunting your project from a different era. Clean slate, guaranteed.
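Here's a minimal sketch of how tiered decay with recall boosts and a hard cap could fit together. The tier shelf lives and the 2-year cap come from above; the exact half-life constants, the class design, and the 0.1 pruning threshold are our assumptions, not TaskZilla's internals.

```python
import math
from datetime import datetime, timedelta

# Half-lives per tier, loosely matching the shelf lives above.
HALF_LIFE = {
    "important": timedelta(days=165),  # ~5-6 months
    "medium":    timedelta(days=21),   # ~3 weeks
    "small":     timedelta(days=7),    # ~1 week
}
HARD_CAP = timedelta(days=730)  # the 2-year guarantee

class Memory:
    def __init__(self, tier: str, created: datetime):
        self.tier = tier
        self.created = created
        self.last_recalled = created

    def strength(self, now: datetime) -> float:
        # Exponential decay since the last useful recall:
        # strength halves once per half-life, FSRS-style.
        age = now - self.last_recalled
        return math.exp(-(age / HALF_LIFE[self.tier]) * math.log(2))

    def recall(self, now: datetime) -> None:
        # A relevant recall resets the decay clock: "still matters."
        self.last_recalled = now

    def should_prune(self, now: datetime) -> bool:
        # The hard cap wins no matter how often a memory is recalled.
        if now - self.created > HARD_CAP:
            return True
        return self.strength(now) < 0.1  # faded below the threshold
```

The key property: recalls keep a memory alive indefinitely within its tier's rhythm, but nothing survives the 2-year cap.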
Turning Chat Into Knowledge
Raw conversations are noisy. "Hey can someone review my PR" doesn't need to live in memory forever. But "we decided to stop using Redis for session storage" absolutely does.
Once a week, TaskZilla runs a distillation process (sketched in code right after this list):
- What do we already know? Check existing knowledge first.
- What's actually new? Only extract genuinely new patterns.
- Is it worth keeping? Score it on usefulness, confidence, and novelty.
- Quality gate: is it persistent? Specific? Useful? Independent?
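A toy version of the scoring and gating steps might look like this. The weights, thresholds, and crude gate heuristics are invented for illustration; what matters is the shape: score on usefulness, confidence, and novelty, then apply the gate.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    text: str
    usefulness: float  # 0..1
    confidence: float  # 0..1
    novelty: float     # 0..1, versus existing knowledge

def quality_gate(c: Candidate) -> bool:
    # The four questions from the list above, as crude checks.
    persistent  = c.usefulness >= 0.5        # will it matter next sprint?
    specific    = len(c.text.split()) <= 25  # patterns, not essays
    useful      = c.usefulness >= 0.3
    independent = c.novelty >= 0.4           # not a restatement
    return persistent and specific and useful and independent

def admit(c: Candidate, threshold: float = 0.5) -> bool:
    # Weighted admission score (weights are ours, not A-MAC's formula).
    score = 0.4 * c.usefulness + 0.3 * c.confidence + 0.3 * c.novelty
    return score >= threshold and quality_gate(c)

admit(Candidate("We stopped using Redis for session storage",
                usefulness=0.9, confidence=0.8, novelty=0.9))  # True
```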
The result: after 3 weeks of use, TaskZilla knows your sprint cadence, your deployment patterns, who blocks whom, and which decisions keep getting revisited. It doesn't just store information; it learns your project.
When the Lights Go Out
We host our own AI models on a local server. Sometimes that server goes offline. When it does, TaskZilla doesn't freeze for 2 minutes waiting; it switches to a simpler search mode in about 1 second and keeps working. It knows it's operating in degraded mode and adjusts its confidence accordingly.
Before we fixed this? 72-104 second timeouts. Now? ~1 second. You probably won't even notice.
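The mechanism is a plain timeout-and-fallback. Here's a self-contained sketch with a fake model call that simulates a hung server; none of these names are TaskZilla's real code.

```python
import concurrent.futures
import time

TIMEOUT_S = 1.0  # fail over after ~1 second, not 72-104

def ask_local_model(query: str) -> str:
    # Stand-in for the self-hosted model; pretend the server is hung.
    time.sleep(5)
    return "rich, model-backed answer"

def keyword_search(query: str) -> str:
    # Stand-in for the simpler degraded-mode search.
    return f"(degraded) keyword matches for {query!r}"

def answer(query: str) -> tuple[str, float]:
    """Return (answer, confidence); confidence drops in degraded mode."""
    pool = concurrent.futures.ThreadPoolExecutor(max_workers=1)
    future = pool.submit(ask_local_model, query)
    try:
        return future.result(timeout=TIMEOUT_S), 0.9
    except concurrent.futures.TimeoutError:
        return keyword_search(query), 0.5
    finally:
        pool.shutdown(wait=False)  # don't block on the hung call

print(answer("what's blocking the redesign?"))
# prints the degraded answer after ~1 second
# (the demo process exits once the simulated call finishes)
```

Dropping the reported confidence in the fallback path is what lets downstream callers tell a degraded answer from a model-backed one.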
What's Coming Next
Right now, TaskZilla can answer "what's blocking the redesign?" easily. We're working on answering the harder questions, like "which decisions from Q1 are still causing problems in our current sprint?" That requires understanding chains of cause and effect across time. It's not a search query; it's reasoning. We're getting there.
Research Credits
The memory decay model draws from published work on spaced repetition (FSRS), cognitive activation patterns (ACT-R), and memory bank architectures (AAAI 2024, ACL 2016). The distillation pipeline is inspired by A-MAC admission scoring and the Nemori predict-calibrate framework. We didn't invent this from scratch; we applied what works.