The easiest way to turn scripts into production workflows.

Dagu adds scheduling, dependencies, retries, logs, and a Web UI around your existing scripts, commands, containers, server tasks, and AI-assisted steps. No database. No message broker. No SDK rewrite.

View Examples
Try Live Demo
Login with demouser / demouser
dagu server --port 8080
Install dagu command
$ curl -fsSL https://raw.githubusercontent.com/dagucloud/dagu/main/scripts/installer.sh | bash

Guided installer: adds Dagu to your PATH, sets up a background service, and creates the first admin so you can start running workflows.

Non-invasive

No SDK required. Your business logic stays untouched.

Lightweight

Single binary, no required database or broker.

Command-native

Run scripts, containers, SSH tasks, and HTTP calls.

Air-Gapped Ready

Runs fully offline. No external services needed.

workflow.yaml
# Existing scripts, now production workflows
name: "daily-ops"
schedule: "0 2 * * *"

steps:
  - name: "backup-db"
    command: "./scripts/backup-postgres.sh"
    output: BACKUP_PATH

  - name: "upload-backup"
    command: "aws s3 cp ${BACKUP_PATH} s3://backups/"
    depends: backup-db

  - name: "notify"
    type: http
    depends: upload-backup

Common workflow step types

Design Principles

Single binary. File-backed state. Existing code stays unchanged.

Lightweight to run

Dagu is a single binary with no required database, message broker, or control-plane stack. Start on one machine and add workers only when you need them.

Durable executions for any operation

Use schedules, DAG dependencies, retries, queues, parameters, secrets, notifications, SSH steps, container steps, and distributed execution in readable YAML.
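As a sketch of how these controls read in practice, the workflow below combines a schedule, default parameters, and a per-step retry policy. Field names (`schedule`, `params`, `retryPolicy`) follow Dagu's documented conventions, but treat this as an illustration and verify against the reference docs for your version:

```yaml
name: "nightly-report"
schedule: "30 1 * * *"           # cron schedule
params: "REGION=us-east-1"       # default parameter, overridable per run

steps:
  - name: "build-report"
    command: "./scripts/build-report.sh ${REGION}"
    retryPolicy:
      limit: 3                   # retry a failed step up to 3 times
      intervalSec: 60            # wait 60 seconds between attempts
```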

Non-invasive to business logic

Your scripts, services, SQL, containers, and operational commands stay as they are. Dagu orchestrates around them instead of forcing a framework or SDK into your codebase.

Observable by default

Every run gets status, logs, history, timing, and a visual workflow view, so jobs stop disappearing into crontabs and server log files.

Production Workflow Requirements

Operational checks teams make before adopting a workflow engine: throughput, queues, scheduling, recovery, access control, API access, and worker execution.

Thousands/day

Single-node throughput

Handle thousands of workflow runs per day on one machine; actual throughput depends on hardware, workflow shape, step duration, and queue settings.

Queues + workers

Scale execution safely

Use queues, concurrency limits, and distributed workers to control load and spread jobs across machines.

Catchup + retry

Recover scheduled work

Cron schedules, catchup, durable automatic retries, timeouts, reruns, event handler scripts, and email notifications keep failures manageable.
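For example, failure notifications and an event handler script can be declared next to the schedule. `mailOn` and `handlerOn` follow Dagu's documented field names, but this is a sketch, not a verified configuration:

```yaml
name: "sync-orders"
schedule: "*/15 * * * *"

mailOn:
  failure: true                            # email notification on failure

handlerOn:
  failure:
    command: "./scripts/alert-oncall.sh"   # event handler script on failure

steps:
  - name: "sync"
    command: "./scripts/sync-orders.sh"
    retryPolicy:
      limit: 2
      intervalSec: 30
```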

Users + API

Operate as a team

Use user management, RBAC, workspaces, approvals, secrets, REST API, CLI, and webhooks for shared production workflows.

Use Cases

A practical index of jobs that start as scripts and need production workflow controls.

Operational Use Cases

Dagu fits where operational work already exists as commands, scripts, containers, and server tasks, then needs scheduling, retries, dependencies, logs, and handoff.

One readable YAML file. Existing commands. A run history people can actually use.

01

Example / Hidden cron work

Cron and Legacy Script Management

Bring existing shell scripts, Python scripts, HTTP calls, and scheduled jobs into Dagu without rewriting them.

Dependencies, status, logs, retries, and history become visible in the Web UI instead of being hidden across crontabs and server log files.

The UI stays simple enough for operators, while the workflow stays concrete enough for engineers.

02

Daily jobs people can maintain

ETL and Data Operations

Run PostgreSQL or SQLite queries, S3 transfers, jq transforms, validation steps, and reusable sub-workflows.

Daily data workflows stay declarative, observable, and easy to retry when one step fails.
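A hypothetical shape for such a pipeline: extract, validate with jq, then hand off to a reusable sub-workflow. The `run` field and all file and workflow names here are illustrative assumptions:

```yaml
name: "daily-etl"
schedule: "0 3 * * *"

steps:
  - name: "extract"
    command: "./scripts/extract.sh > /tmp/extract.json"

  - name: "validate"
    command: "jq -e 'length > 0' /tmp/extract.json"   # fail the run on empty output
    depends: extract

  - name: "load"
    run: "workflows/load-to-warehouse"   # assumed: invoke a reusable sub-workflow
    depends: validate
```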

03

Distributed media work

Media Conversion

Run ffmpeg, thumbnail extraction, audio normalization, image processing, and other compute-heavy jobs across workers.

Conversion work can run across distributed workers while status, history, logs, and artifacts stay in one persistence layer for monitoring, debugging, and retries.

04

Scheduled remote jobs

Infrastructure and Server Automation

Coordinate SSH backups, cleanup jobs, deploy scripts, patch windows, precondition checks, and lifecycle hooks.

Remote operations get schedules, retries, notifications, and per-step logs without requiring operators to SSH into servers for every recovery.
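As an illustration, a remote maintenance job with a precondition gate might look like this. The SSH step uses the same `type`/`config` notation as the other examples on this page, and the host name and flag file are placeholders:

```yaml
name: "weekly-patch"
schedule: "0 4 * * 0"

steps:
  - name: "check-window"
    command: "echo checking maintenance flag"
    preconditions:
      - condition: "`cat /etc/maintenance-enabled`"
        expected: "true"         # skip the run unless maintenance is enabled

  - name: "patch"
    type: ssh
    config:
      host: app-server           # placeholder host
      user: admin
    command: "sudo apt-get update && sudo apt-get -y upgrade"
    depends: check-window
```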

05

Container-native pipelines

Container and Kubernetes Workflows

Compose workflows where each step can run a Docker image, Kubernetes Job, shell command, or validation step.

Image-based tasks can be routed to the right workers without building a custom control plane around containers.
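A sketch of an image-based step, written in the same `type`/`config` notation this page uses for SSH and HTTP steps; the image name and config keys are placeholders to check against the executor reference:

```yaml
steps:
  - name: "resize-images"
    type: docker
    config:
      image: "ghcr.io/example/imagetools:latest"   # placeholder image
      autoRemove: true                             # clean up the container afterwards
    command: "convert-batch /data/in /data/out"

  - name: "verify"
    command: "ls /data/out | wc -l"
    depends: resize-images
```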

06

Non-engineer operations

Customer Support Automation

Run diagnostics, account repair jobs, data checks, and approval-gated support actions from a simple Web UI.

Non-engineers can operate reviewed workflows while engineers keep commands, logs, and results traceable.

07

Small devices, visible runs

IoT and Edge Workflows

Run sensor polling, local cleanup, offline sync, health checks, and device maintenance jobs on small devices.

The single binary and file-backed state work well on edge devices while still providing visibility through the Web UI.

08

YAML agents can read and change

AI Agent Automation

Use AI agents to write, update, debug, and repair workflows because the operational contract is plain YAML.

Agent-generated changes stay reviewable and observable in the same workflow system humans already operate.

Common thread

Plain YAML · Any command · Docker and Kubernetes Jobs · SSH · Schedules · Retries · Logs · Notifications

Common Workflow Patterns

Bring scripts, scheduled jobs, server tasks, and controlled automation into one workflow engine.

Health Check
SSH Backup
Notify

Script Workflows

Turn existing shell scripts, Docker commands, SSH tasks, and HTTP calls into reliable workflows.

  1. Keep existing scripts and commands intact
  2. Run containers, SSH tasks, and HTTP steps in one DAG
  3. Use dependencies instead of fragile command chains
  4. Retry failed steps with clear logs and history
workflow.yaml
steps:
  - name: health-check
    command: curl -sf http://app:8080/health

  - name: backup
    type: ssh
    config:
      host: db-server
      user: admin
    command: pg_dump mydb > /backups/daily.sql

  - name: notify
    type: http
    config:
      url: "https://hooks.slack.com/..."
      method: POST
      body: '{"text": "Backup complete"}'

Workflow Operator for Slack & Telegram

Persistent AI operator for Slack and Telegram. Debug failures, approve actions, and recover incidents without leaving the conversation.


Workflow engine features for real operations

Dagu focuses on the production layer around your existing work: schedules, dependencies, retries, logs, queues, and controlled execution.

Quickstart Guide

Install Dagu with the guided wizard, then continue in the full installation guide or quickstart docs.

1

Install dagu command

The script installers are the recommended path. Homebrew, npm, and Docker remain available for binary-only or container installs.

Mac/Linux Terminal
$ curl -fsSL https://raw.githubusercontent.com/dagucloud/dagu/main/scripts/installer.sh | bash
✓ Guided installer ready
2

Next steps

The guided installer can finish the first-run setup for you.

# What the installer can do
Add Dagu to your PATH
Set up a background service
Create and verify the first admin

Project Community

Discuss usage, report issues, and follow development.