AI & Development · 7 min · 03/26/2026

My agent workflow – from idea to deployment in minutes

How I use a multi-agent setup to ship changes to my website in minutes instead of hours – from the first prompt to live deployment.

Christopher Groß, Freelance Developer

A bug, a prompt, done

The other day I noticed: my blog posts weren't showing up in the sitemap. Not a critical bug, but annoying – especially for SEO. In the past, I would've opened the sitemap module, read through the config, found the right spot, fixed it, tested, committed and deployed. Maybe 30 minutes, maybe an hour.

Instead, I type one sentence into my terminal:

"Blog posts are not included in the sitemap."

Five minutes later, the fix is live.

How it works

I work with a multi-agent setup in Claude Code. Three agents, clear roles:

  • Lead Agent – Plans, creates tickets, coordinates
  • Builder Agent – Implements the code
  • Tester Agent – Runs real browser tests with screenshots

The workflow is always the same. I give the lead agent a task – sometimes a sentence, sometimes a paragraph. It analyzes the problem, searches the codebase, creates a YouTrack ticket with a detailed description and a proposed solution. I glance at it, say "go" – and the rest happens.
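The ticket step is plain REST. As a rough sketch – the endpoint and field names follow YouTrack's public REST API (`POST /api/issues`), but the project ID, token handling and helper names here are placeholders, not my actual agent code:

```typescript
// Hypothetical helper the lead agent could use to open a ticket.
// Field names follow YouTrack's REST API; "0-0" is a placeholder project ID.
interface IssuePayload {
  project: { id: string };
  summary: string;
  description: string;
}

function buildIssuePayload(summary: string, description: string): IssuePayload {
  return { project: { id: "0-0" }, summary, description };
}

async function createIssue(baseUrl: string, token: string, payload: IssuePayload) {
  const res = await fetch(`${baseUrl}/api/issues`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${token}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(payload),
  });
  if (!res.ok) throw new Error(`YouTrack responded ${res.status}`);
  return res.json();
}
```

The point isn't the HTTP call – it's that the ticket exists, with a summary and a proposed solution, before any code is touched.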

The lead delegates implementation to the builder agent. It writes the code, follows my coding rules from CLAUDE.md, and reports back. Then the lead sends in the tester agent, which spins up a real browser, loads the page and checks if the problem is solved. Only when everything is green does the lead tell me: ready for review.
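If you're wondering what such an agent even looks like: Claude Code lets you define subagents as markdown files with a small frontmatter block. This is an illustrative sketch of a tester definition, not my exact file:

```markdown
<!-- .claude/agents/tester.md – illustrative example -->
---
name: tester
description: Verifies finished tickets in a real browser. Use after the builder reports done.
---

You are the tester agent. For each ticket:
1. Start the dev server and open the affected page in a real browser.
2. Take a screenshot and check it against the ticket's acceptance criteria.
3. Post the result (pass/fail plus screenshot) as a comment on the ticket.
Never modify source files.
```

The boundaries in the last line matter as much as the instructions: a tester that can edit code is just a second builder.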

An example that convinced me

The sitemap was trivial. But there are tasks where the workflow really shows what it can do.

When I wanted to implement full accessibility for my website, I essentially told the lead agent: "The website needs WCAG-compliant accessibility." What happened next:

  1. The lead analyzed the scope – every page, every component, every form
  2. It created a ticket with a prioritized list of all necessary changes
  3. The builder systematically worked through all components – ARIA labels, focus management, contrast modes, skip links, focus traps
  4. The tester took screenshots after each iteration and verified accessibility

The result: WCAG 2.1 Level AA in under 2 hours. Not because the AI is magic, but because the workflow parallelizes the work and eliminates context switching. I didn't have to jump between specification, code and testing – the agents coordinated that among themselves.
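The contrast checks, for instance, aren't magic either – they boil down to the relative-luminance formula from WCAG 2.1. My tester works from browser screenshots, but as a standalone sketch of the underlying math (function names are mine, the formula is from the spec):

```typescript
// Contrast ratio per WCAG 2.1 (success criterion 1.4.3).
// Level AA requires >= 4.5:1 for normal text, >= 3:1 for large text.
function channel(c8: number): number {
  const c = c8 / 255;
  return c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
}

function luminance(r: number, g: number, b: number): number {
  return 0.2126 * channel(r) + 0.7152 * channel(g) + 0.0722 * channel(b);
}

function contrastRatio(fg: [number, number, number], bg: [number, number, number]): number {
  const l1 = luminance(...fg);
  const l2 = luminance(...bg);
  const [hi, lo] = l1 > l2 ? [l1, l2] : [l2, l1];
  return (hi + 0.05) / (lo + 0.05);
}

// Black on white is the maximum: contrastRatio([0,0,0], [255,255,255]) ≈ 21:1
```

A pure function like this is exactly the kind of check an agent can run after every iteration without me lifting a finger.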

What I love about this workflow

No context loss

The worst thing about traditional development is context switching. Read the ticket, open the IDE, find the right file, rebuild the context, code, test, commit. With the agent workflow, I describe the problem once – the context is preserved from ticket through implementation to testing.

Documentation happens automatically

Every change is automatically documented as a ticket with description, proposed solution and screenshot. No writing tickets after the fact, no "what did I do again?". The ticket exists before the first line of code is written.

Small fixes actually get done

We all know this: you spot a typo, an awkward phrase, a small visual bug. And then you think "I'll fix it later" – and never do. With the agent workflow, the barrier is so low that I tackle these things immediately. One sentence, one prompt, done.

The part nobody wants to hear

Here's the honest part. The workflow is not "AI does everything, I sit back." That would be irresponsible.

Intervening is normal

In roughly every third or fourth task, I need to step in. Sometimes the AI interprets a requirement differently than I intended. Sometimes the generated code is technically correct but stylistically not what I want. Sometimes it's faster to change three lines myself than to explain to the AI what exactly should be different.

That's fine. The workflow doesn't save me 100% of the work – it saves me 70-80%. And the remaining 20-30% are the parts where human judgment actually makes a difference.

Review is not optional

I review every single change. Every one. Not because I don't trust the AI, but because it's my code running in production. I read the diff, check the logic, manually test critical paths. That usually takes 2-5 minutes per change – but those minutes are non-negotiable.

Blindly accepting is not an option. Not because the AI is bad – it's usually good. But "usually" isn't enough for production.

Correcting the AI is part of the process

Sometimes I say: "No, not like that. Do it this way." And then the agent learns within the session context what I want. That's not a bug in the system – that's how collaboration works. You give feedback, adjust, iterate. Just like with a human colleague.

Time savings – concrete numbers

I don't want to throw around abstract percentages. Instead, here are some real examples from this website:

  • Sitemap bug – from "I notice" to "live": 5 minutes
  • Accessibility (WCAG) – complete implementation: under 2 hours instead of an estimated 2-3 days
  • Text adjustments across multiple pages – i18n in DE and EN: 3 minutes instead of 20
  • New blog section – planning, implementation, styling: 1 day instead of an estimated 1 week

The leverage is greatest for tasks that touch many files, follow clear rules and are repetitive. The leverage is smallest for creative work, complex business logic and architectural decisions – that's still on me.

What you need for this

The workflow sounds more complex than it is. At its core, you need three things:

  1. A good CLAUDE.md – This is your project specification. Design system, coding rules, project structure. The better this file, the better the results.
  2. Clear agent definitions – Each agent has a role, tools and boundaries. This prevents chaos.
  3. A place for tasks – This doesn't have to be a ticket system. You can store plans and tasks in markdown files in your repo – Claude Code works with those just fine. But personally, I prefer a real ticket system like YouTrack.
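To make the first point concrete, here's the rough shape of such a file. This is an illustrative excerpt, not my real CLAUDE.md:

```markdown
# CLAUDE.md (excerpt – illustrative)

## Project
Nuxt website, content in DE and EN (i18n), deployed via CI on push to main.

## Coding rules
- TypeScript everywhere, no `any`.
- Components use the existing design tokens; never hard-code colors.
- Every user-facing string goes through i18n – both locales, always.

## Workflow
- Every task starts as a ticket with a proposed solution.
- The tester agent must verify changes in a real browser before review.
```

Everything an agent would otherwise have to guess – stack, conventions, process – goes in here once, and every session starts from it.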

Why I prefer a ticket system

Markdown files work. But a ticket system gives me more:

  • Better documentation – Every ticket has a history, comments, screenshots. That's cleaner than a growing .md file.
  • Easy referencing – Every ticket has a number and a title. When I want to reference an older ticket in a new task, the number is enough – the system automatically links it with title and status. No searching through markdown files.
  • Agents comment directly – My agents comment as "Claude" in the ticket. I see progress, decisions and results right in the ticket – with screenshots.
  • I can reply in the ticket – Instead of always switching to the terminal, I could also give feedback directly in the ticket. That makes the workflow even more flexible.

In the end, it's a matter of preference. Markdown is enough to get started. But if you use this workflow seriously, you'll quickly appreciate the structure of a ticket system.

The setup takes a few hours. The time savings afterward are multiples of that.

The reality behind the hype

AI agents are not autopilot. They're more like a very fast, very patient junior developer who never gets tired and follows your specifications exactly – as long as you state them clearly.

The workflow works because I stay in control. I decide what gets built. I review what was built. I intervene when necessary. The agents accelerate execution – but the responsibility stays with me.

And that's exactly how it should be.

AI agents don't replace developers. They replace the parts of the work that keep developers from focusing on what actually matters.

$ whoami

Christopher Groß

Fullstack Developer & AI Orchestrator from Hamburg

Christopher Groß has been building web applications for startups and agencies for over 20 years. His focus is on Vue.js, Nuxt, and AI-powered development. He believes in clean code, clear specs, and coffee in large quantities.

Want more?

Want to know how I work and what drives me? Reach out – 30 minutes, a real conversation about tech, AI and projects.

Book a call

Write to me directly: [email protected]
