A 20-year DevOps veteran just shared the most useful mental model for LLMs I've seen: treat them as very fast junior engineers.
Not general assistants. Not magic automation tools. Junior engineers. The kind who execute well but need supervision. Who benefit from clear instructions. Who shouldn't be making architectural decisions without review.
The article on DEV.to walks through this framework with Terraform workflows as the test case. It's practical, honest, and immediately applicable.
What Makes This Model Work
The insight is simple but powerful. When you hire a junior engineer, you don't expect them to design the system. You expect them to implement well-defined tasks. Write the Terraform module. Update the configuration. Run the tests. Follow established patterns.
LLMs operate in exactly the same space. They're excellent at execution within boundaries. They struggle with architecture and context that isn't explicit. They need review before production deployment.
The mistake most teams make is treating LLMs as either useless or magic. Neither is true. They're capable but limited. Fast but supervised. Useful but not autonomous.
In simpler terms... imagine giving a task to a smart recent graduate who knows the syntax but doesn't yet understand why certain decisions were made. That's the capability level. And that's incredibly useful if you manage it correctly.
Domain Expertise Is Essential
Here's where it gets interesting. The article emphasises that domain expertise is required for effective supervision. You can't delegate DevOps tasks to an LLM unless you understand DevOps well enough to review the output.
This flips the "AI will replace experts" narrative. It doesn't replace expertise. It requires expertise. The better you understand the domain, the more effectively you can use LLMs as execution tools.
For Terraform specifically, this means knowing infrastructure patterns, understanding state management, recognising security implications. The LLM can write the code quickly. But someone needs to catch when it makes a subtle mistake that would cause problems three months later.
That said... who's paying for the supervision time? If reviewing LLM output takes as long as writing it yourself, the productivity gain disappears. The model only works when review is faster than creation. Which it often is - but not always.
Terraform as the Optimal Use Case
The article uses Terraform workflows because they're a perfect match for LLM capabilities. Declarative syntax. Well-documented patterns. Clear right and wrong answers. Testable output.
LLMs excel at this type of work. Given a clear requirement - "create an S3 bucket with versioning enabled and encryption at rest" - they can generate correct Terraform code reliably. The syntax is consistent. The documentation is extensive. The patterns are well-established.
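As a sketch, here is roughly what that requirement maps to with the AWS provider (v4+ resource split); the bucket name, resource labels, and tags are placeholders, not anything from the article:

```hcl
# Well-defined task: S3 bucket with versioning enabled and encryption at rest.
# "example-artifacts-bucket" is a hypothetical name - S3 bucket names are global.
resource "aws_s3_bucket" "artifacts" {
  bucket = "example-artifacts-bucket"

  tags = {
    ManagedBy = "terraform"
  }
}

resource "aws_s3_bucket_versioning" "artifacts" {
  bucket = aws_s3_bucket.artifacts.id

  versioning_configuration {
    status = "Enabled"
  }
}

resource "aws_s3_bucket_server_side_encryption_configuration" "artifacts" {
  bucket = aws_s3_bucket.artifacts.id

  rule {
    apply_server_side_encryption_default {
      sse_algorithm = "aws:kms"
    }
  }
}
```

This is exactly the shape of task the framework recommends: explicit requirement in, reviewable declarative code out.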
Where they struggle is ambiguity. "Set up our infrastructure" is too broad. "Design a scalable architecture for our new service" requires judgment the LLM doesn't have. But "implement this specific module following our existing patterns" - that works.
The key is breaking work into LLM-suitable chunks. Not delegating entire projects. Not expecting independent decision-making. Just fast, accurate execution of well-defined tasks.
What This Changes for DevOps Teams
Right, but here's the practical bit. If you're running DevOps, this mental model changes how you allocate time.
Before LLMs: Senior engineers spend time on both architecture and implementation. Implementation is time-consuming but necessary.
With LLMs as junior engineers: Senior engineers focus on architecture and review. Implementation happens faster through LLM execution.
The bottleneck shifts from "writing code" to "designing systems and reviewing output". That's a better use of senior expertise. But it requires trust in the review process. You can't skip the review step. That's where mistakes get caught.
For teams adopting this approach, the workflow looks like:

1. Define the task clearly
2. Generate the implementation with the LLM
3. Review the output for correctness and edge cases
4. Test in a non-production environment
5. Deploy with confidence
Skip step 3 or 4, and you're gambling. But execute all five, and you're moving faster than manual implementation while maintaining quality.
The Limits Are Real
The article is honest about limitations. LLMs don't understand your specific infrastructure. They don't know your organisation's constraints. They don't remember previous conversations unless you explicitly provide context.
They're stateless junior engineers. Every interaction starts fresh. That means you need to provide context each time. Explain the patterns. Reference the documentation. Make requirements explicit.
This isn't a weakness of the technology. It's a characteristic you work with. Just like you'd give a junior engineer clear context before assigning a task.
For DevOps work specifically, this means maintaining good documentation becomes even more important. The LLM needs examples to follow. The better your internal patterns are documented, the better the LLM can replicate them.
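As a hypothetical illustration of what "documented patterns" might look like, a convention written down as code gives the LLM something concrete to replicate on every task (the `environment`/`service` inputs and tag scheme here are invented for the sketch, not from the article):

```hcl
# Internal pattern (hypothetical): every module takes `environment` and
# `service` inputs and derives names and tags from them, so LLM-generated
# modules stay consistent with existing infrastructure.
variable "environment" {
  description = "Deployment environment (dev, staging, prod)"
  type        = string
}

variable "service" {
  description = "Service this resource belongs to"
  type        = string
}

locals {
  name_prefix = "${var.service}-${var.environment}"

  common_tags = {
    Environment = var.environment
    Service     = var.service
    ManagedBy   = "terraform"
  }
}
```

Paste a pattern like this into the prompt each time - the stateless junior engineer gets the same onboarding with every request.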
Why This Framework Matters
What I appreciate about this approach is the honesty. It's not selling LLMs as magic. It's positioning them as useful tools with clear boundaries.
The "very fast junior engineer" model sets realistic expectations. You wouldn't give a junior engineer full architectural control. You wouldn't skip code review. You wouldn't deploy their work without testing.
Same rules apply to LLM output. Treat it as capable but supervised. Fast but reviewed. Useful within defined constraints.
For teams trying to figure out how to adopt AI tools without the hype, this is a solid starting point. Not transformational. Just practical productivity improvement through appropriate delegation.
And honestly, that's more valuable than most of the "AI will change everything" promises. Sometimes what you need is just a very fast junior engineer who never gets tired.