Recipes

Turn any CLI command into a reliable, repeatable workflow

Copy any prompt below and use it with the Dagu AI assistant (install Dagu first).

Daily Standup Prep

Fetch your GitHub activity across all orgs and generate a spoken standup draft per org using an AI coding agent.

Use the Dagu skill to create a daily standup preparation workflow. Refer to the schema, coding agent, and pitfalls references for correct syntax.

Ask the user:

- How many days back should the report cover? (default: 1)
- What time should it run on weekdays? (default: 8:00 AM)
- Which AI coding agent CLI do they have installed? (check for claude, codex, gemini, opencode, aider in that order; use the first one found, or ask if none detected)

Prerequisites: gh CLI authenticated (gh auth login) and at least one AI coding agent CLI installed.

The workflow should:

1. Fetch the user's GitHub activity using gh api graphql with --jq for server-side JSON formatting (do NOT use the jq CLI). Fetch commits per repo (with messages via REST), merged PRs (with body), open/draft PRs updated in the period (with recent commits and timestamps, grouped by repo), and reviews.
2. Auto-discover all organizations from the activity and group everything by org.
3. For each org with activity, use an inline sub-DAG (--- separator) to generate a spoken standup draft using the user's AI agent CLI. Use the cheapest/fastest model available for that agent, since this is a simple text summarization task. Skip the call entirely for orgs with no activity to avoid wasting tokens.
4. Define the agent command, model, and draft prompt as top-level env variables (use YAML multiline | for the prompt) so users can easily swap agents or customize the output without editing step logic.
5. Assemble each org section as markdown: spoken draft, merged PRs, open PRs grouped by repo with commit history and timestamps, and reviews.
6. Combine all org sections into a single report saved to DAG_DOCS_DIR.
7. Schedule on weekdays with catchup, retry defaults, and timeouts on agent steps.

Important: review the pitfalls reference for known workarounds. Follow the coding agent reference for the correct non-interactive command and model flags for each agent CLI.
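The "skip orgs with no activity" guard the prompt asks for can be sketched as plain shell. This is a hypothetical illustration, not the recipe's output: `ORG_ACTIVITY` stands in for a per-org activity file, and the commented-out agent invocation is a placeholder.

```shell
#!/usr/bin/env bash
# Sketch: only invoke the (expensive) AI agent when the org's activity
# file is non-empty, so empty orgs cost zero tokens.
ORG_ACTIVITY="$(mktemp)"
: > "$ORG_ACTIVITY"              # simulate an org with no activity

if [ -s "$ORG_ACTIVITY" ]; then  # -s: file exists and is non-empty
  STATUS="called agent"          # real workflow: feed the file to the agent CLI
else
  STATUS="skipped (no activity)"
fi
echo "$STATUS"
rm -f "$ORG_ACTIVITY"
```

In the generated DAG this check would live in the step that precedes the inline sub-DAG call.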


Release Notes Generator

Generate formatted release notes from git tags with PR details, linked issues, and contributor credits.

Use the Dagu skill to create a release notes generator workflow. Refer to the schema, coding agent, and pitfalls references for correct syntax.

Ask the user:

- Which repository? (default: the current repo, detected via gh repo view --json nameWithOwner)
- Which tags to compare? (default: latest tag vs previous tag, auto-detected)

Prerequisites: gh CLI authenticated (gh auth login) and at least one AI coding agent CLI installed.

The workflow should:

1. Resolve the two git tags to compare. If not specified, auto-detect the latest and previous tags using the GitHub API. Output them as JSON so downstream steps can reference individual fields via JSON path (e.g. ${TAGS.to}).
2. Extract all PR numbers from commits between the two tags using the GitHub compare API with --jq (do NOT use the jq CLI). Output one PR number per line.
3. For each PR, fetch details via gh api graphql: number, title, author login, body summary (first ~300 chars), labels, and closingIssuesReferences (number, title, author login). Build a JSON array from the results.
   - CRITICAL: when iterating over output from a previous step, do NOT use `for X in $VAR`; Dagu captures multiline output into a single string variable, so word splitting does not work. Instead, read the previous step's stdout file line by line: `while IFS= read -r line; do ... done < "${prev_step.stdout}"`. Strip non-numeric characters from each line with `tr -dc '0-9'` before passing to the GraphQL query.
   - The gh GraphQL query string uses $-prefixed variable names ($owner, $name, $num). These are safe in Dagu scripts because Dagu only expands ${braced} variables and bare $varname patterns that match defined Dagu variables; undefined bare $names are preserved as-is for the shell. However, pass integer variables to -F without quotes (e.g. -F num=$NUM, not -F num="$NUM") so gh sends them as integers, not strings.
4. Use a single AI agent step (auto-detect which CLI is available, use the cheapest model) to both categorize each PR and format the final release notes. Feed it the PR details JSON, the changelog template, and context (repo, tags, date, repo owner to exclude from contributors).
5. Save the output to DAG_DOCS_DIR.
6. Use defaults.retry_policy and timeout_sec: 300 on the AI agent step.

The changelog format template MUST be defined as a top-level env variable using YAML multiline (|) so users can customize the output without editing step logic.
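The line-by-line iteration workaround from step 3 can be sketched outside Dagu. The temp file here is a hypothetical stand-in for Dagu's `${prev_step.stdout}` path, and the gh call is left as a comment:

```shell
#!/usr/bin/env bash
# Sketch: read a previous step's stdout file line by line instead of
# relying on word splitting of a multiline variable.
STDOUT_FILE="$(mktemp)"                 # stand-in for ${prev_step.stdout}
printf '#123\nPR #456\n\n' > "$STDOUT_FILE"

NUMS=""
while IFS= read -r line; do
  num="$(printf '%s' "$line" | tr -dc '0-9')"  # keep digits only
  [ -n "$num" ] || continue                    # skip lines with no number
  NUMS="$NUMS $num"
  # real workflow: gh api graphql ... -F num=$num   (unquoted, so gh sends an integer)
done < "$STDOUT_FILE"
echo "PR numbers:$NUMS"
rm -f "$STDOUT_FILE"
```

The `tr -dc '0-9'` filter also strips stray carriage returns or `#` prefixes before the value reaches the GraphQL query.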


AI Writing Cleanup

Detect and remove AI writing patterns from text using Wikipedia's Signs of AI Writing as a live reference.

Use the Dagu skill to create an AI writing cleanup workflow. Refer to the schema, coding agent, and pitfalls references for correct syntax.

Ask the user:

- Do they want to process a file or paste text inline? (support both input_file and input_text params)
- How many rewrite rounds? (default: 2)
- Strictness level? (low/medium/high, default: medium)

Prerequisites: at least one AI coding agent CLI installed (claude or gemini), plus curl for fetching the Wikipedia reference.

The workflow has 4 steps: detect_agent, setup, review_loop, finalize.

Step 1 (detect_agent): Output the full binary path (not just the name), since Dagu scripts may not have the user's full PATH. Check common locations like ~/.local/bin/ as a fallback. Add PATH: "${HOME}/.local/bin:${PATH}" to top-level env.

Step 2 (setup):

- Fetch the latest Wikipedia "Signs of AI Writing" page (raw wikitext) via curl. The URL should be a top-level env variable so users can swap it.
- Prepare the input text. For input_file, cp it. For input_text, use `printenv input_text` to safely write it to a file; do NOT use ${input_text} directly in scripts, because Dagu expands variables before the shell runs. See the printenv pitfall.
- Write all multiline/user-controlled env vars (WRITING_STYLE, ADDITIONAL_RULES, CHECK_STRICTNESS) to helper files with a common prefix in DAG_DOCS_DIR. These files are read by the loop step and cleaned up in finalize.

Step 3 (review_loop): A single script step with a bash for loop (NOT repeat_policy, NOT a sub-DAG). The loop runs up to max_rounds iterations:

a. Build a prompt with the wiki reference, style (from file), strictness (from file), and current text. Use a single-quoted heredoc delimiter (<<'INSTR') for the system instructions so the shell does not expand them.
b. Call the AI agent (CHECK_MODEL, e.g. sonnet) to check the text. First line of output: issue count. Remaining lines: per-issue feedback in the format ISSUE: "<quote>" | SIGN: <category> | FIX: <rewrite>.
c. Save the feedback to a per-round file (e.g. ${P}_feedback_round${ROUND}.txt) so finalize can include it in the report.
d. Extract the count. If it is 0, break immediately (no rewrite needed).
e. Call the AI agent (REWRITE_MODEL, e.g. opus) to rewrite. Write output directly to the text file, overwriting in place.

CRITICAL: do NOT reference multiline env vars like WRITING_STYLE or ADDITIONAL_RULES directly in the script; Dagu expands them before the shell runs, which can break parsing. Read them from the helper files written by setup via cat instead. Only simple env vars (paths, model names, numbers) are safe to use directly.

Step 4 (finalize): Build a full report with: a metadata header (date, word counts, strictness, per-round issue counts), then an "Issues Found and Fixed" section listing all per-round feedback, then a "Final Text" section with the rewritten text. Clean up all helper files, including per-round feedback files.

Env var knobs (all top-level, easily customizable):

- WRITING_STYLE: multiline (|) target writing style instructions
- CHECK_STRICTNESS: low/medium/high
- CHECK_MODEL: model for checking (cheaper, e.g. sonnet)
- REWRITE_MODEL: model for rewriting (quality, e.g. opus)
- ADDITIONAL_RULES: extra rules beyond the Wikipedia reference
- WIKI_URL: Wikipedia raw URL (swappable)
- WIKI_EXCERPT_LINES: how many lines of the wiki to feed the AI

Use strongly typed params (name, type, description, default, minimum, maximum).

Important: review the pitfalls reference for known workarounds. Follow the coding agent reference for the correct non-interactive command and model flags.
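The shell half of the printenv workaround can be sketched in isolation. This only shows why `printenv` preserves the text verbatim; the Dagu-side expansion it avoids is not reproduced here, and the variable name matches the recipe's input_text param by assumption:

```shell
#!/usr/bin/env bash
# Sketch: printenv reads the env var by name, so the literal text,
# including ${braces} and $dollars, reaches the file untouched.
export input_text='Keep ${this} and $that exactly as typed'
TMP="$(mktemp)"
printenv input_text > "$TMP"
CONTENT="$(cat "$TMP")"
echo "$CONTENT"
rm -f "$TMP"
```

Writing `echo "${input_text}"` in a Dagu script instead would hand the expanded text to the shell as source, where braces and dollars can break parsing.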


Have a workflow that works well?

Submit a recipe