Own Your AI Harness: Why Workflow Sovereignty Will Define the Next Decade
Claude and OpenAI are racing past 'just a model' into platforms, harnesses, and workflows. That is a lock-in trap. The people who keep control of their AI workflows will win the next ten years.
Watch the direction the big labs are moving. Anthropic and OpenAI are no longer content to sell you a model. They are selling you the harness around the model. Agents, skills, memory, orchestration, connectors, schedulers, runtimes. Everything above the token stream.
This makes sense for them. Models commoditize. Harnesses retain customers.
It should make you nervous anyway.
The Lock-in Is the Product
Two forces squeeze anyone who outsources their AI harness:
- Models degrade. A provider can silently swap weights, quantize, route you to a smaller variant under load, or retire the version you built against. Your evals drift, your prompts stop working, and the only recourse is to rebuild on top of something else the vendor controls.
- Prices only go up after lock-in. The first phase of any platform play is cheap credits and generous free tiers. The second phase is a pricing page that "simplifies" in a way that happens to cost you more. If the harness belongs to the vendor, your switching cost is the whole harness.
When Claude or GPT is three percent of your stack, you can shrug and move on. When their agent framework, their skill system, and their workflow runtime are the backbone of your product, you cannot. That is the point.
Sovereignty Is a Boring Word for a Serious Problem
Call it what you want. Self-hosting. Local-first. Open infrastructure. The idea is simple: if you are responsible for AI workflows in production, you should control how those workflows are defined, scheduled, observed, retried, and replaced.
Three things you lose when you cede that control:
- Observability. Vendor-run orchestration shows you what the vendor wants you to see. Your logs, your traces, your failures become their product surface.
- Portability. Prompts written against a proprietary skill or agent format are not portable. They are a dialect.
- Economics. Per-step, per-agent, per-memory pricing compounds. Orchestration is cheap when you own the runtime. It becomes a line item when you rent it.
None of this means you stop using frontier models. You keep calling the best model for the job. You just stop letting the model vendor own the surrounding graph.
OpenClaw and the Shift That Is Already Happening
The OpenClaw wave is not a meme. It is a signal that developers want an open equivalent of the proprietary agent harnesses. People are tired of watching their production logic get absorbed into a vendor's roadmap. The next ten years of AI infrastructure will look a lot like the last ten years of data infrastructure: the open, local-first, self-hostable tools win the long game, because the teams running workloads refuse to be tenants in someone else's kernel.
Cheap compute plus cheap inference plus a self-hostable orchestrator turns any modest machine into usable AI infrastructure. That is not a 2035 prediction. It is a 2026 build target.
What Dagu Is, and What It Is Not
Dagu is a single-binary, local-first workflow orchestration engine. No external database. No message broker. No control plane you do not run. YAML in, DAG out, logs on disk, UI on localhost. It runs on a laptop, a Raspberry Pi, or a fleet of workers over gRPC when you need to scale.
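"YAML in, DAG out" is nearly the whole interface. As a minimal sketch (two placeholder steps; the field names follow Dagu's basic schema, and the cron line is optional):

```yaml
# hello.yaml: a smallest-useful DAG sketch.
schedule: "0 8 * * *"     # optional: run every day at 08:00
steps:
  - name: collect
    command: echo "collecting"
  - name: report
    command: echo "reporting"
    depends:
      - collect           # report runs only after collect succeeds
```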
The design follows three commitments that matter for AI workloads:
- Non-invasive to application logic. Orchestration is a separate concern from the thing being orchestrated. Your Python script does not import Dagu. Your Go binary does not inherit from a base class. If it runs on the command line or speaks HTTP, Dagu can schedule it, retry it, chain it, and observe it. Other engines pull your code into their framework; Dagu stays out of your code.
- Easy to use, easy to maintain. Workflow engines get abandoned because operating them costs more than the workflows they run. A DAG in Dagu is a YAML file. Deployment is a binary. Upgrading is replacing the binary. There is no cluster to babysit.
- Scalable only when you need it. Start local. Grow to queued execution. Grow to distributed workers over a coordinator. Same DAG definitions the whole way. You do not pay the distributed-systems tax until you need the distributed-systems benefit.
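The non-invasive commitment above is easy to make concrete: a step is just a program that reads stdin, writes stdout, and signals failure through its exit code. This sketch (the filename and demo input are illustrative, not from the post) is a complete step Dagu could schedule, yet it runs identically under a plain shell pipe:

```shell
# count_changes.sh: a step with zero knowledge of its orchestrator.
# Contract: text in on stdin, a count out on stdout, nonzero exit on
# failure. Dagu, cron, or a bare pipe can all run it unchanged.
set -eu

count_lines() {
  grep -c '' -    # count the lines arriving on stdin
}

# stand-in for what an upstream step (e.g. `git status --short`) would pipe in
printf ' M app/globals.css\n M content/blog/post.md\n' | count_lines
```

Because the contract is plain Unix, swapping the orchestrator never touches this file.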
It is not an agent framework. It is not a model wrapper. It is not trying to be Claude's harness or OpenAI's harness. It is the thing underneath, the one you keep when you swap models, swap vendors, or swap stacks entirely.
A Concrete Example
This one is intentionally small. It does exactly three things:
- Reads `git status` and the last few commit messages.
- Asks a coding harness to turn that into release notes.
- Saves the draft as `release-notes.md`.
```yaml
name: release-notes-draft
type: graph
params:
  - name: repo_path
    type: string
    default: .
    description: Git repo to summarize
  - name: max_commits
    type: integer
    default: 5
    minimum: 1
    maximum: 20
    description: How many recent commits to include
artifacts:
  enabled: true
harness:
  provider: claude
  model: sonnet
  bare: true
  fallback:
    - provider: codex
      full-auto: true
steps:
  - id: collect_changes
    working_dir: ${repo_path}
    command: |
      set -eu
      git rev-parse --is-inside-work-tree >/dev/null
      printf '# Working tree\n'
      git status --short
      printf '\n# Recent commits\n'
      git log -n "${max_commits}" --pretty=format:'- %h %s (%an)'
    output: GIT_CONTEXT
  - id: draft_notes
    type: harness
    working_dir: ${repo_path}
    command: "Write release notes from the git context on stdin. Return markdown with a title and 3 to 5 bullets. Do not invent changes."
    script: |
      ${GIT_CONTEXT}
    output: RELEASE_NOTES
    depends: [collect_changes]
  - id: save_notes
    command: |
      test -n "${DAG_RUN_ARTIFACTS_DIR}"
      mkdir -p "${DAG_RUN_ARTIFACTS_DIR}"
      printf '%s\n' "${RELEASE_NOTES}" > "${DAG_RUN_ARTIFACTS_DIR}/release-notes.md"
      printf '%s\n' "${DAG_RUN_ARTIFACTS_DIR}/release-notes.md"
    output: RELEASE_NOTES_FILE
    depends: [draft_notes]
```
Two typed params are enough here. `max_commits` is validated as an integer before the run starts, and `repo_path` makes it obvious which repo is being summarized. If `claude` fails, Dagu retries the same harness step with `codex`.
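Running the file is one command. The parameter-override syntax after `--` shown here is a sketch; check `dagu start --help` on your version for the exact form:

```shell
# assumed invocation; override the typed params at launch
dagu start release-notes-draft.yaml -- repo_path=. max_commits=10
```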
Example stdout from a run on this repo:
```
collect_changes
# Working tree
M app/globals.css
M content/blog/own-your-ai-harness.md

# Recent commits
- a76de45 Simplify harness blog example and improve code contrast (hamadayouta)
- ce7ac55 Precompile blog posts for Cloudflare (yottahmd)

save_notes
/.../artifacts/release-notes-draft/.../release-notes.md
```
Example artifact:
```markdown
# Release Notes Draft

- Simplified the AI harness blog example so it is faster to read.
- Improved YAML syntax contrast on the dark code block theme.
- Regenerated the precompiled blog data used by the site build.
```
The Ten-Year Bet
The bet behind Dagu is not complicated. Models get cheaper. Local inference gets faster. The teams that win are the ones who treat AI workflows the way good engineers treat every other critical system: owned, versioned, observable, and portable.
Orchestration is the layer where sovereignty lives or dies. Get it right and you keep your options forever. Get it wrong and you rent your own product back from whichever lab is ahead this quarter.
Own your harness. Run it on your own hardware. Keep the keys.
Dagu is open source under GPL v3 and available on GitHub. If this resonates, the fastest way to try it is a single curl.
```shell
curl -L https://raw.githubusercontent.com/dagu-org/dagu/main/scripts/installer.sh | sh
```
Author
Yota Hamada develops Dagu: a reliable, portable, self-hosted workflow orchestration engine.