Builders & Makers - Saturday, 21 March 2026

The AI Agent Job Description That Actually Works


A developer got tired of AI agents forgetting context mid-conversation. So they wrote a job description.

Not a prompt. Not a system message. A structured, persistent file called SOUL.md - a document that defines the agent's role, values, boundaries, and memory. The agent reads it on startup. It remembers it across sessions. It operates proactively instead of waiting for instructions.

The pattern works. And it's spreading. The developer shared the approach, and other builders started creating their own companion files: USER.md for user preferences, MEMORY.md for persistent context. Together, they create agents that feel less like chatbots and more like teammates.

What's in a Job Description for an AI?

SOUL.md isn't complicated. It's just structured clarity. The agent needs to know: what's my role? What do I value? What are my boundaries? What should I remember?

Example sections include (a sample file combining them follows the list):

Role: You are a research assistant focused on technical documentation. Your job is to read source material, extract key insights, and summarise findings in plain language.

Values: Accuracy over speed. If you don't know, say so. Cite sources. Avoid speculation.

Boundaries: You don't make decisions for the user. You don't access files outside the project directory. You ask before running code.

Memory: Track recurring topics. Remember user preferences. Note patterns in the work.
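
Assembled into a single file, that might look like the sketch below. The headings and wording are illustrative, not a fixed schema:

```markdown
# SOUL.md

## Role
You are a research assistant focused on technical documentation.
Read source material, extract key insights, and summarise findings
in plain language.

## Values
- Accuracy over speed. If you don't know, say so.
- Cite sources. Avoid speculation.

## Boundaries
- Don't make decisions for the user.
- Don't access files outside the project directory.
- Ask before running code.

## Memory
- Track recurring topics and user preferences.
- Note patterns in the work.
```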

That's it. But the effect is significant. The agent stops asking the same questions every session. It stops offering generic responses. It operates within a defined scope, consistently, across conversations.

The Pattern: Companion Files

SOUL.md works best alongside two other files: USER.md and MEMORY.md. Together, they create a context layer that persists across sessions.

USER.md defines user preferences. Writing style, tone, favourite tools, work habits, output formats. The agent reads this and adapts its responses to match. If you prefer bullet points over paragraphs, it learns that. If you want technical depth over simplification, it adjusts.
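
A hypothetical USER.md in the same format - every specific preference here is invented for illustration:

```markdown
# USER.md

- Writing style: plain language, British English
- Format: bullet points over paragraphs
- Depth: technical detail over simplification
- Tools: VS Code, Python
- Output: summaries first, details on request
```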

MEMORY.md logs context the agent should remember. Recent decisions, recurring topics, project history, things the user mentioned in passing that might matter later. The agent appends to this file as it works. Over time, it builds a context layer that makes every conversation smarter than the last.
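
A minimal sketch of the append step in Python, assuming the agent calls a hook whenever it notices something worth keeping. The function name, file location, and note format are assumptions, not part of the shared pattern:

```python
from datetime import date
from pathlib import Path

MEMORY_FILE = Path("MEMORY.md")

def remember(note: str) -> None:
    """Append a dated note to MEMORY.md so it persists across sessions."""
    stamp = date.today().isoformat()
    with MEMORY_FILE.open("a", encoding="utf-8") as f:
        f.write(f"- {stamp}: {note}\n")

# Called whenever the agent spots something worth keeping:
remember("User prefers bullet points over paragraphs.")
```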

The three files together - SOUL.md, USER.md, MEMORY.md - create agents that operate proactively. They anticipate needs. They remember preferences. They act within boundaries without constant reminders. It's the difference between a chatbot and a tool you trust.

Why This Matters for Builders

Most AI implementations treat agents like stateless functions. You send a prompt, you get a response, the context evaporates. Every conversation starts from zero. Every session requires re-explaining preferences, boundaries, and goals.

That's fine for one-off queries. It's terrible for ongoing work. If you're building tools for developers, writers, researchers - anyone who works with an agent over days or weeks - stateless agents are a friction point. They forget. They regress. They require constant babysitting.

The SOUL.md pattern solves this. It's not a framework. It's not a library. It's just a structured approach to persistent context. You can implement it in 20 lines of code. The agent reads the files on startup. That's it. But the effect compounds over time.
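
A sketch of that startup step in Python - roughly the 20 lines described. How the combined text reaches the model depends on your agent framework, so treat `system_context` below as a placeholder:

```python
from pathlib import Path

CONTEXT_FILES = ["SOUL.md", "USER.md", "MEMORY.md"]

def load_context(base_dir: str = ".") -> str:
    """Read the companion files and join them into one context block."""
    parts = []
    for name in CONTEXT_FILES:
        path = Path(base_dir) / name
        if path.exists():  # missing files are skipped, not fatal
            parts.append(f"<!-- {name} -->\n{path.read_text(encoding='utf-8')}")
    return "\n\n".join(parts)

# Prepend this to whatever system prompt your framework expects.
system_context = load_context()
```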

For builders, this is a blueprint. If you're creating AI-first tools, think about how context persists. Think about what the agent needs to remember. Think about how to make the agent feel less like a service and more like a colleague.

The Shift from Reactive to Proactive

Here's what surprised the developer who built this: the agent stopped waiting for instructions. It started suggesting next steps. It anticipated questions. It flagged issues before the user asked about them.

That shift - from reactive to proactive - is what happens when an agent has persistent context. It knows its role. It understands the user's preferences. It remembers what matters. So it acts accordingly.

This is what agentic AI is supposed to feel like. Not a chatbot that responds to prompts. A tool that operates autonomously within defined boundaries. The SOUL.md pattern is one way to get there.

The Practical Test

Does this work for everyone? No. If you're building consumer-facing chatbots, persistent context might be overkill. If your use case is one-off queries, stateless agents are fine.

But if you're building tools for ongoing work - developer assistants, research tools, creative writing aids - this pattern is worth testing. It's simple. It's portable. And it makes agents feel significantly more useful over time.

The developer who shared this approach didn't claim it was revolutionary. They just said it worked better than anything else they'd tried. That's the practical test. Does it solve a real problem? Does it make the tool more useful? If yes, it's worth building on.

The SOUL.md pattern is spreading because it passes that test. Agents with job descriptions feel different. They remember. They adapt. They work.



Today's Sources

DEV.to AI - I Gave My AI Agent a Job Description. Here's What Happened.
Towards Data Science - The Math That's Killing Your AI Agent
DEV.to AI - We Put the Signup Inside the Demo. Here Is What Changed.
Replit Blog - Live from Replit HQ: Agent 4 Launch Pt. 1
Hacker News Best - OpenCode - Open Source AI Coding Agent
ML Mastery - Why Agents Fail: The Role of Seed Values and Temperature
The Robot Report - Building Tomorrow: How Bedrock Robotics Is Changing Construction
The Robot Report - RoboForce Raises $52M to Commercialize Titan Outdoor Robot
ROS Discourse - ROS News for the Week of March 16th, 2026
Azeem Azhar - Jensen's OpenClaw Thesis - The Inference Transition Changes Everything
Latent Space - Dreamer: The Personal Agent OS - David Singleton
Ben Thompson (Stratechery) - Jensen Huang and Steve Jobs - What They Have in Common

About the Curator

Richard Bland
Founder, Marbl Codes

27+ years in software development, curating the tech news that matters.

