I Replaced Notion with an AI-Powered Life OS
I used Notion as my second brain and life OS for years. I even sold Notion templates — more than $25k in total. I was an advocate.
But a few months ago, I started experimenting with AI-powered coding environments — first Google’s Project Antigravity, and then Claude Code. And I realized something: these tools aren’t just for writing code. They can run workflows, manage files, maintain memory across sessions, and operate as a genuine personal AI assistant.
So I built a system. It replaced my entire Notion setup. This article is the companion piece to my full video walkthrough — here I’ll cover the architecture, the reasoning, and the parts that are hard to show on screen.
The System at a Glance
The setup has a few core layers:
- The IDE — VS Code with Claude Code as the AI backbone
- Skills — Custom workflows I invoke with slash commands
- Projects & Tasks — AI-managed, always in sync
- Context — A local file system the AI queries directly
- Memory — Persistent session logs stored in a database, retrieved semantically
- Reviews — Daily and weekly self-inquiry, powered by AI
- Coding — Spec-Driven Development for building real products
- Sync — Git-based sync across Windows, Mac, and iOS
No SaaS dashboards. No subscription fatigue. Just local files, a capable LLM, and workflows that run consistently.
Why This Beats Notion
I like Notion’s UX. But for a life OS with a second brain, local files paired with a capable LLM are far superior. Here’s why.
Manual maintenance is gone
In Notion, you manage tasks and projects by hand. You have to be disciplined to keep the system running. With AI, I say “capture this as a task” during a conversation and it happens. I can filter tasks by priority, update project files when milestones are reached, and archive completed work — all without opening a separate app.
Context stays fresh automatically
When you separate your AI conversations from your knowledge base, you’re always copying things back and forth. After a deep session with Claude, I used to paste summaries into Notion or save files to Google Drive. Now, the AI writes directly to local files. Next session, it reads them. No manual syncing.
Memory persists across sessions
Every conversation is summarized and stored. When I start a new session, the AI knows who I am, what we worked on recently, what my high-priority tasks are. It’s like picking up a conversation with a colleague who actually remembers.
No vendor lock-in
The system is agnostic. It works with any LLM that can access files and run workflows. I could swap Claude for Gemini or a future model and the architecture stays the same.
One subscription, not three
If you’re already paying for Claude Code (or ChatGPT Codex, or Antigravity), why pay again for AI inside Notion? This setup consolidates everything into one tool.
How It Actually Works
The IDE as Command Center
If you’re not familiar with VS Code: file navigator on the left, file preview in the middle, and a side panel I use for the AI chat.
Claude Code works with directories. All my non-coding files — projects, tasks, reviews, context — live in Claude’s root folder. Every session has access to the full system. When I open a separate coding project, I still have access to my second brain because Claude can always reference its root directory.
One rule: non-coding projects live in the Claude folder. Coding projects live in their own repos. Both have access to everything.
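As a rough sketch, that root layout could be scaffolded with a few lines of Python. The layer names below are illustrative, matching the layers listed earlier rather than any exact folder names:

```python
from pathlib import Path

# Illustrative layer names; your own folder names may differ.
LAYERS = ["projects", "tasks", "context", "memory", "reviews", "skills"]

def scaffold(root: Path) -> list[Path]:
    """Create one subfolder per layer under the Claude root directory."""
    created = []
    for layer in LAYERS:
        folder = root / layer
        folder.mkdir(parents=True, exist_ok=True)
        created.append(folder)
    return created
```

Because every session opens from this root, each layer is always one relative path away from the AI.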
Skills: Repeatable AI Workflows
Skills are slash commands that trigger specific workflows. I use them in two ways:
Workflow automation. /start boots a session — loads my profile, recalls the last session, primes the memory system, shows carry-forward tasks. /end closes it — summarizes the session, extracts patterns, updates project files, pushes to the remote repository. These run the same way every time.
Domain knowledge. /plan contains a specific methodology for breaking projects into phases. The skill teaches the AI how I want planning done — not generic project management, but a spec-driven approach with verification steps. The output is consistent because the knowledge is embedded in the skill, not in my prompt.
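Mechanically, a skill is little more than a workflow file resolved from a slash command. This is a simplified sketch of that mapping, assuming one markdown file per skill named after its command, which may not match the exact storage layout:

```python
from pathlib import Path

def load_skill(command: str, skills_dir: Path) -> str:
    """Resolve a slash command like '/plan' to its workflow file.

    Assumes one markdown file per skill, named after the command: a
    simplification of how slash-command skills are actually stored.
    """
    name = command.lstrip("/")
    path = skills_dir / f"{name}.md"
    if not path.is_file():
        raise FileNotFoundError(f"no skill defined for /{name}")
    return path.read_text(encoding="utf-8")
```

The point is that the methodology lives in the file, not in the prompt, so every invocation starts from the same instructions.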
Projects and Tasks
Project files live as markdown in a structured folder. Each project has a spec, roadmap, state file, and phase-by-phase plans. The AI reads and updates these directly.
Tasks are extracted from conversations, tracked in a task system, and surfaced at session start. No manual entry. No Kanban boards. Just “here are your high-priority items” when I sit down to work.
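Because tasks are plain markdown, surfacing the high-priority ones is a simple scan. The `priority:` line and `# Title` heading below are my assumed schema for illustration, not necessarily the real one:

```python
from pathlib import Path

def high_priority_tasks(tasks_dir: Path) -> list[str]:
    """Return titles of tasks whose file contains 'priority: high'.

    Assumes one markdown file per task with a 'priority:' field and a
    '# Title' heading; the actual task schema may differ.
    """
    surfaced = []
    for task_file in sorted(tasks_dir.glob("*.md")):
        text = task_file.read_text(encoding="utf-8")
        if "priority: high" in text.lower():
            # The first heading line doubles as the task title.
            for line in text.splitlines():
                if line.startswith("# "):
                    surfaced.append(line[2:].strip())
                    break
    return surfaced
```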
Context: The Local Knowledge Base
This was the breakthrough. Instead of a cloud-based knowledge base that the AI can’t access, everything lives as local files. Project briefs, research notes, client documents, competitive analysis — all queryable by the AI in real time.
When I’m in discovery mode — researching a market, exploring a product idea, analyzing competitors — the insights get saved to the file system as we go. Next session, the AI finds and loads them.
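A crude stand-in for the AI reading those files directly is a grep-style scan over the context folder. This sketch assumes everything is markdown; the real system relies on the model's own file access rather than a search helper:

```python
from pathlib import Path

def search_context(root: Path, term: str) -> list[tuple[str, str]]:
    """Find lines mentioning a term across all markdown context files.

    Returns (relative path, matching line) pairs so a result can be
    traced back to the note it came from.
    """
    hits = []
    for path in sorted(root.rglob("*.md")):
        for line in path.read_text(encoding="utf-8").splitlines():
            if term.lower() in line.lower():
                hits.append((str(path.relative_to(root)), line.strip()))
    return hits
```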
Memory: Semantic Retrieval
Beyond local files, I built a persistent memory layer. Each session is summarized and stored in a database. When I start a new conversation, the AI performs a semantic search across past sessions to surface relevant context.
The /start workflow loads my profile, the last few session summaries, and any carry-forward items. The /end workflow extracts learnings, saves them, and pushes everything to a remote repository.
Every conversation, every decision, every pattern — logged and retrievable.
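The retrieval step can be sketched in a few lines: rank stored session summaries by similarity to the current query. How the embeddings are produced is left to whichever model you use; this only shows the ranking logic, with plain cosine similarity as an assumption about the comparison:

```python
from math import sqrt

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def recall(query_vec: list[float],
           memories: list[tuple[str, list[float]]],
           top_k: int = 3) -> list[str]:
    """Return the top-k stored summaries most similar to the query.

    `memories` holds (summary_text, embedding) pairs; producing the
    embeddings is outside this sketch.
    """
    ranked = sorted(memories, key=lambda m: cosine(query_vec, m[1]),
                    reverse=True)
    return [text for text, _ in ranked[:top_k]]
```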
Reviews: AI-Assisted Self-Inquiry
Daily and weekly reviews were always the hardest habit to maintain. With AI, the process is guided. All my daily reviews include session summaries, so I can quickly see what I worked on.
But the real value is in the questions. The AI doesn’t just ask “what did you accomplish?” — it asks about patterns, about whether actions were intentional or reactive, about where old habits took over. It’s not a journal. It’s an inquiry into how I’m actually spending my time.
Spec-Driven Development
When it’s time to build something, I use a structured framework:
SPEC → ROADMAP → PLAN → EXECUTE → VERIFY
The spec file is the source of truth — a detailed document that answers every product question. I use Claude as an interviewer to pressure-test the idea. Then we build the roadmap, plan each phase, execute, and verify against the spec.
This keeps the AI focused. Without a spec, you get generic code. With one, you get code that solves the actual problem.
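The EXECUTE and VERIFY steps form a loop: build, check against the spec, retry if the check fails. This is a minimal sketch of that loop with the execute and verify steps passed in as callables, since in practice both are carried out by the AI against the spec file:

```python
from typing import Callable

def run_phase(execute: Callable[[], str],
              verify: Callable[[str], bool],
              max_attempts: int = 3) -> str:
    """Run a phase until its output passes verification.

    The EXECUTE -> VERIFY loop in miniature: retry a bounded number of
    times, then fail loudly so the plan gets revisited.
    """
    for _ in range(max_attempts):
        result = execute()
        if verify(result):
            return result
    raise RuntimeError("phase failed verification; revisit the plan")
```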
Sync: Git Across Devices
Since everything is local files, I use Git for sync. Push from my Windows machine, pull on my Mac. The same system runs on both.
On iOS, I open the repository to review files, check tasks, or do light work while traveling. GitHub Actions handle cloning the context to other coding projects. One repository, multiple machines, always in sync.
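The sync portion of an end-of-session workflow boils down to four git invocations. A hedged sketch, showing only the git steps (the real /end workflow also summarizes and updates files), with a placeholder commit message:

```python
import subprocess

def sync_commands(repo: str, message: str = "session sync") -> list[list[str]]:
    """The git invocations a sync step would run, in order.

    Pull with rebase first so edits from other machines land cleanly
    before the push; the commit message is a placeholder.
    """
    base = ["git", "-C", repo]
    return [
        base + ["pull", "--rebase"],
        base + ["add", "-A"],
        base + ["commit", "-m", message],
        base + ["push"],
    ]

def sync(repo: str) -> None:
    # Note: 'git commit' exits non-zero when there is nothing to commit,
    # which check=True surfaces as an error; a real workflow would guard
    # against that case.
    for cmd in sync_commands(repo):
        subprocess.run(cmd, check=True)
```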
Example project: MemPal
Here’s a concrete example.
I wanted to improve my German vocabulary but couldn’t find the features I wanted in existing apps. So I built MemPal — a flashcard app that uses memory palace techniques for more efficient learning.
Inside the /project folder:
- SPEC.md — what I’m building
- ROADMAP.md — the phases
- STATE.md — where we left off
- phases/ — implementation plans
It currently has 100 verbs with full conjugations, example sentences, and whimsical watercolor images generated with AI. Each image is designed to be memorable — strange, vivid, linked to the word’s meaning.
It took me around a week from idea to working prototype. The spec-driven approach kept the scope tight and the AI focused.
Who This Is For
This setup isn’t for everyone. Right now, I think it’s best for:
- Entrepreneurs and indie makers who already use AI for work
- Engineers and creators who want comprehensive orchestration, not just chat
- People who build — both code and non-code projects
If you want a beautiful dashboard, this isn’t it. If you want AI to handle the boring stuff so you can focus on the work that matters, this is the system.
For less technical users, tools like Claude’s web interface or Notion with built-in AI might be more appropriate. But if you’re already in an IDE — or willing to learn — the leverage here is significant.
What’s Next
This is the foundation. In upcoming posts, I’ll go deeper into each layer — how the memory system works, how to write effective skills, how spec-driven development plays out on a real project.
If you want to follow along, subscribe below. And if you want to see the system in action, watch the full video walkthrough.