Artificial Intelligence Monday, 30 March 2026

The AI Agent That Writes Its Own Operating Manual


Most AI agents are black boxes. You give them a task, they spin their wheels somewhere in the cloud, and eventually they return a result. Maybe it worked. Maybe it hallucinated. Maybe it deleted your production database. You won't know until after the fact.

A developer named Jonah Reed decided that model was backwards. So he built OpenFiend - an AI agent that stores its entire decision-making process in Notion. Not logs. Not a transcript of what happened. The actual operating environment where the agent does its thinking.

The Trust Problem with Agents

Here's the core issue with autonomous AI: the more capable agents become, the less we trust them with real work. An agent that can edit files, send emails, and interact with APIs is genuinely useful. It's also genuinely dangerous if it misunderstands your intent.

The usual solution is sandboxing - limit what the agent can access, review outputs before execution, keep a human in the loop. But that breaks the promise of automation. If you're reviewing every action anyway, you might as well do it yourself.

OpenFiend inverts the model. Instead of asking for permission after the fact, it writes proposals before executing anything risky. Those proposals live in Notion as pages tagged "pending_approval". You can read them, edit them, approve them, or kill them - all in a tool you already use.
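The propose-before-execute pattern is simple enough to sketch in a few lines. This is an illustrative in-memory model, not OpenFiend's actual code - the page shape, status values, and class names are assumptions standing in for real Notion pages:

```python
# A minimal sketch of the propose-before-execute pattern. The Proposal
# shape and the "pending_approval" / "approved" statuses stand in for
# Notion pages in a workspace; OpenFiend's real schema may differ.
from dataclasses import dataclass

@dataclass
class Proposal:
    title: str
    body: str                        # what the agent intends to do, in plain prose
    status: str = "pending_approval"

class ApprovalGate:
    """In-memory stand-in for the Notion workspace the agent writes to."""
    def __init__(self):
        self.pages = []

    def propose(self, title: str, body: str) -> Proposal:
        page = Proposal(title, body)
        self.pages.append(page)      # the risky action is now visible, not yet run
        return page

    def execute(self, page: Proposal) -> str:
        if page.status != "approved":
            raise PermissionError(f"'{page.title}' is {page.status}, not approved")
        return f"executed: {page.body}"

gate = ApprovalGate()
p = gate.propose("Clean up old files", "Delete files in /tmp older than 30 days")
# Calling gate.execute(p) here would raise PermissionError - nothing runs unapproved.
p.status = "approved"                # the human's click in Notion, simulated
print(gate.execute(p))
```

The point of the pattern is that the gate, not the agent, holds the authority: the agent can only ever append proposals, and execution is impossible until something outside the agent flips the status.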

Notion as an Operating System

The clever bit isn't just that the agent uses Notion for storage. It's that Notion is the agent's brain. Every decision, every piece of context, every pending action exists as a structured page in a workspace. The agent doesn't have hidden state. Its entire cognitive process is legible.

This matters for two reasons. First, you can see exactly what the agent is thinking and why. If it's about to do something stupid, you'll know before it happens. Second - and this is the really interesting part - you can edit its thinking process. You can change the prompt, adjust the context, or rewrite the proposal entirely. Then tell it to proceed.

It's not just auditable. It's collaborative. The agent drafts. You refine. It executes.
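That draft-refine-execute loop can be sketched too. The key property is that the agent executes the human-edited text, not its original draft - the field names here are illustrative, not OpenFiend's API:

```python
# Sketch of the draft -> refine -> execute loop: the human rewrites the
# proposal body in place, approves it, and the agent runs the edited plan.
draft = {"status": "pending_approval",
         "plan": "Email all 5,000 customers about the outage"}

# Human review in Notion, simulated: rewrite the plan, then approve it.
draft["plan"] = "Email only customers affected by the outage"
draft["status"] = "approved"

def proceed(page: dict) -> str:
    if page["status"] != "approved":
        raise PermissionError("not approved")
    return f"executing: {page['plan']}"

print(proceed(draft))   # the refined plan runs; the risky draft never does
```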

What This Means for Builders

Most autonomous agent frameworks treat humans as obstacles to be worked around. The goal is full automation - no human intervention, no friction, just pure machine efficiency. That sounds great until the agent interprets "clean up old files" as "delete everything created before yesterday".

OpenFiend suggests a different path: make the agent's decision-making legible, not just logged. Logs are forensic - they tell you what went wrong after the damage is done. Legible thinking lets you intervene before the action happens.

For developers building internal tools, this is immediately practical. Imagine an agent that drafts database migrations, writes API documentation, or generates test cases - but every action sits in a Notion page waiting for sign-off. You get the speed of automation with the safety of human review, without building a custom approval UI.
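For internal tools, that sign-off step can be bolted onto existing functions rather than built into each one. Here's one way it might look - a hypothetical decorator, not anything from OpenFiend, with the queue as an in-memory stand-in for a Notion database of pending pages:

```python
# Hedged sketch: a decorator that turns any function into a "draft first,
# run on sign-off" action. Calling the function queues a described,
# pending action instead of executing it.
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class PendingAction:
    description: str                 # human-readable summary for review
    run: Callable[[], Any]           # the deferred call, frozen with its arguments
    status: str = "pending_approval"

queue: list[PendingAction] = []

def requires_signoff(describe: Callable[..., str]):
    def wrap(fn):
        def submit(*args, **kwargs):
            action = PendingAction(describe(*args, **kwargs),
                                   lambda: fn(*args, **kwargs))
            queue.append(action)     # visible for review; nothing has run yet
            return action
        return submit
    return wrap

@requires_signoff(lambda table: f"Drop unused column from {table}")
def run_migration(table: str) -> str:
    return f"migrated {table}"

action = run_migration("users")      # drafted, not executed
action.status = "approved"           # human sign-off, simulated
print(action.run())                  # prints "migrated users"
```

Freezing the call in a closure means the reviewer approves exactly what will run, arguments included.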

For business owners, it changes the risk calculation. You can deploy an AI agent to handle repetitive work without worrying it'll misinterpret instructions and cause chaos. The agent does the grunt work. You make the final calls.

The Bigger Pattern

This fits into a wider shift in how we're thinking about AI systems. The early excitement was all about full autonomy - agents that operate independently, make their own decisions, and deliver results without human involvement. That's still the dream for narrow, well-defined tasks.

But for messier, higher-stakes work, the model is evolving. Instead of full autonomy, we're seeing augmented decision-making. The AI does the heavy lifting - research, drafting, analysis - but presents its work in a format that invites human refinement. Not micromanagement. Collaboration.

OpenFiend is a working example of that model. It's not trying to replace human judgment. It's trying to make human judgment more efficient by doing the boring parts and making the interesting parts legible.

The risk with autonomous agents has never been that they're too powerful. It's that they're powerful and opaque. Make them legible, and suddenly they're a lot more useful.


Today's Sources

Dev.to
I Built an AI Agent That Thinks in Notion (And Can Give His Brain a Makeover)
arXiv cs.AI
BeSafe-Bench: Unveiling Behavioral Safety Risks of Situated Agents in Functional Environments
TechCrunch
Why OpenAI really shut down Sora
OpenAI Blog
Helping disaster response teams turn AI into action across Asia
arXiv cs.AI
AutoB2G: LLM-Driven Agentic Framework For Automated Building-Grid Co-Simulation
arXiv cs.AI
Semi-Automated Knowledge Engineering and Process Mapping for Total Airport Management
arXiv – Quantum Physics
Decoder Dependence in Surface-Code Threshold Estimation with Native GKP Digitization
arXiv – Quantum Physics
Catalytic Coherence Amplification for Quantum State Recovery
arXiv – Quantum Physics
Typical entanglement in anyon chains: Page curves beyond Lie group symmetries
Dev.to
Build a Real-Time ISS Tracker with Quarkus, SSE, and Qute
InfoQ
FOSDEM 2026: Intro to WebTransport - the Next WebSocket?!
InfoQ
Google Unveils AppFunctions to Connect AI Agents and Android Apps
InfoQ
Java News Roundup: GraalVM Build Tools, EclipseLink, Spring Milestones, Open Liberty, Quarkus
Dev.to
My first Python project: Excel to SQL pipeline (feedback welcome)

About the Curator

Richard Bland
Founder, Marbl Codes

27+ years in software development, curating the tech news that matters.


© 2026 MEM Digital Ltd t/a Marbl Codes