Why Clash speaks YAML

Every Mihomo-class engine ultimately consumes a declarative profile—almost always config.yaml—that describes listeners, outbound transports, grouping logic, DNS posture, and the ordered rules stack that steers each connection. YAML trades curly-brace ceremony for indentation, which keeps day-to-day edits approachable once you respect a few structural rules. Unlike opaque proprietary VPN bundles, a Clash profile is plain text you can version in Git, diff during merges, and annotate with short comments that survive reload cycles.

This guide walks from primitive syntax expectations through proxy-groups, the feature people casually call “strategy groups.” Along the way you will see how outbound names flow into rules, why duplicated labels explode parsers, and how subscription imports relate to static proxies. For matcher specifics—DOMAIN-SUFFIX, GEOIP, RULE-SET, ordering recipes—pair this article with the dedicated routing deep dive linked conceptually throughout.

Scope note: Keywords shown reflect broadly deployed Mihomo derivatives bundled inside maintained GUIs (Verge Rev, FlClash, Stash-related forks, and similar). Frozen legacy cores may omit newer knobs; cross-check release notes for your pinned binary before relying on an experimental field.

YAML ground rules that save hours

Treat YAML like Python without the interpreter forgiving sloppy mixes of tabs and spaces. Stick to two-space indentation per level, never interleave tabs, and keep list markers aligned under their parent key. Strings containing colons or hash symbols should use quotes so the tokenizer does not mistake structure markers inside hostnames or URLs.

Sequences appear either as bracketed JSON-style arrays or hyphen lists; Clash configs overwhelmingly prefer hyphen lists for readability. Mappings rely on key: value pairs where nested mappings indent further rightward. When in doubt, paste fragments into a YAML linter inside your editor—many VS Code derivatives highlight the exact line where nesting wandered under the wrong parent.

# Comments start with # and survive reload if your GUI preserves them
port: 7890
socks-port: 7891
mixed-port: 7893

Large profiles often stitch together fragments from providers. After every merge, visually scan for duplicated top-level keys (dns: declared twice, for example). Later declarations silently override earlier ones, producing “ghost settings” that confuse troubleshooting.
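A contrived two-fragment merge, with illustrative resolver addresses, shows the failure mode, assuming a permissive loader that keeps the later mapping:

```yaml
# Fragment pasted from provider A
dns:
  enable: true
  nameserver:
    - https://1.1.1.1/dns-query

# Fragment pasted later from provider B: with a permissive parser the
# whole later mapping wins, silently dropping the DoH nameserver above
dns:
  enable: true
  nameserver:
    - 223.5.5.5
```

Stricter YAML parsers reject duplicate keys outright instead of merging, so the exact symptom varies by core; either way, one dns: block per file is the safe invariant.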

Anatomy of a minimal Clash profile

Think of the document as concentric responsibilities. Listener ports announce how local applications attach—HTTP, SOCKS, or mixed listeners depending on your GUI defaults. mode selects the engine's overall behavior: rule is what power users live in because it respects your crafted decision tree, while global forces everything through one outbound and direct bypasses remote hops entirely for quick isolation tests.

mode: rule
log-level: info

proxies: []
proxy-groups: []
rules:
  - MATCH,DIRECT

The empty arrays above are pedagogical placeholders; real files populate proxies with concrete transports or offload bulk entries via proxy-providers. rules always terminates with a catch-all—traditionally MATCH pointing at a resilient policy group rather than a brittle single node alias.

Listeners, secrets, and allow-LAN cautions

Beyond bare integers for ports, advanced setups declare authentication, TLS wrappers, or Unix sockets depending on platform support. When enabling allow-lan equivalents, remember you expose proxy ports to your LAN segment—fine on trusted home Wi-Fi, risky on hotel Ethernet. Pair LAN exposure with secrets and firewall awareness.
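A sketch of the relevant knobs, using placeholder addresses and credentials; exact key support varies by core version, so verify against your pinned binary:

```yaml
mixed-port: 7890
allow-lan: true              # listener now reachable from the LAN segment
bind-address: 192.168.1.10   # illustrative: restrict which interface listens
authentication:              # credentials for HTTP/SOCKS listeners
  - "user:replace-me"
secret: "replace-me"         # guards the external controller API, if enabled
```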

Modern GUIs often hide listener settings behind TUN-mode toggles. Even if YAML shows only HTTP/SOCKS ports, TUN mode may still synthesize virtual adapters—consult your client manual alongside raw YAML so you are not debugging the wrong layer.

The proxies section line by line

Each list entry under proxies defines one outbound the scheduler can select. Required keys vary by protocol: Shadowsocks demands server, port, cipher, and password; VMess layers a UUID plus the legacy alterId field; Trojan insists on a password plus TLS expectations; newer transports sprinkle additional fingerprint knobs.

proxies:
  - name: ss-tokyo
    type: ss
    server: example.net
    port: 8388
    cipher: aes-256-gcm
    password: "replace-me"

The name token is your contract with downstream sections. proxy-groups, rules, and even some provider overrides reference that string verbatim. Rename thoughtfully using editor-wide search because dangling references surface only during reload validation.

Scaling with proxy-providers

Subscriptions emit dozens or hundreds of nodes. Embedding them statically bloats Git history and guarantees stale endpoints. proxy-providers wraps HTTP(S) downloads, interval refresh timers, optional parsing directives, and health-check hints so engines hydrate outbound lists asynchronously.

proxy-providers:
  airline:
    type: http
    url: https://example.com/subscribe/clash
    path: ./providers/airline.yaml
    interval: 3600
    health-check:
      enable: true
      url: https://cp.cloudflare.com/generate_204
      interval: 600

Choose intervals that respect your provider’s fair-use guidance—hammering every sixty seconds helps nobody. Health checks pair nicely with url-test groups downstream because unhealthy nodes disappear quickly from contention pools.

proxy-groups: where “strategy” becomes concrete

A proxy-group is a named policy object the rules engine targets. The type field defines selection semantics—manual pickers, automated latency probes, ordered failover chains, weighted dispersal, or chained relays. Additional keys tweak timers, tolerance bands, lazy activation, and preferred icons inside GUIs.

select — manual control

select exposes every listed member to the UI without automatic switching logic. Operators choose nodes explicitly—ideal for streaming geo experiments or auditing suspicious endpoints. Keep membership concise; scrolling hundred-node menus wastes time.

proxy-groups:
  - name: Manual
    type: select
    proxies:
      - ss-tokyo
      - AUTO-BEST
      - DIRECT

url-test — latency chasing

url-test polls members against a lightweight probe URL, ranks latencies, and sticks with the fastest candidate until jitter breaches tolerance. Tune interval, tolerance, and lazy flags to balance responsiveness against battery or log spam.

  - name: AUTO-BEST
    type: url-test
    proxies:
      - ss-tokyo
      - ss-seoul
    url: https://www.gstatic.com/generate_204
    interval: 300
    tolerance: 50

fallback — ordered resilience

fallback walks the array sequentially and promotes traffic forward only after failures. Unlike url-test, there is no ongoing leaderboard—just deterministic preference. Great when your provider ranks nodes intentionally (“primary,” “secondary”) or when probes lie but connectivity tells the truth.
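A minimal fallback sketch reusing the nodes from earlier examples; note the group still needs a probe URL and interval for its health checks even though order, not latency, decides:

```yaml
  - name: FAILOVER
    type: fallback
    proxies:
      - ss-tokyo    # preferred while healthy
      - ss-seoul    # promoted only after ss-tokyo fails its probe
    url: https://www.gstatic.com/generate_204
    interval: 300
```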

load-balance — spreading sessions

load-balance distributes flows across members using a configurable strategy (strategy: consistent-hashing or round-robin, depending on release). Useful when single-node throughput caps frustrate large downloads, but watch provider terms—some forbid concurrent multi-login semantics.
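A sketch of the shape with the strategy key spelled out; confirm which strategies your pinned release actually ships:

```yaml
  - name: SPREAD
    type: load-balance
    proxies:
      - ss-tokyo
      - ss-seoul
    url: https://www.gstatic.com/generate_204
    interval: 300
    strategy: consistent-hashing   # or round-robin, where supported
```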

relay — chaining with eyes open

relay stitches proxies sequentially—traffic exits the last hop. Latency stacks additively and a failure anywhere in the chain breaks the whole path. Reserve relays for niche censorship hops where explicit chaining beats obscure routing shortcuts.
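A two-hop illustration using the example nodes; list order is chain order:

```yaml
  - name: CHAIN
    type: relay
    proxies:
      - ss-tokyo   # first hop
      - ss-seoul   # final hop; traffic exits here
```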

Bridging groups into rules

The third field of each rule references either built-ins (DIRECT, REJECT) or any outbound label—including proxy-groups. That indirection is why naming hygiene matters. A typo such as PROXY-GROP fails validation immediately, whereas logical mistakes (routing finance portals through the wrong region) require runtime observation.

rules:
  - DOMAIN-SUFFIX,corp.internal,DIRECT
  - GEOIP,LAN,DIRECT
  - GEOIP,CN,DIRECT
  - MATCH,Manual

Ordering remains paramount: Clash evaluates top-down and stops at the first match. Slot narrow exemptions before broad matchers so intentional bypass entries never lose to sweeping GEOIP rows.

DNS section at a glance

Modern configs declare explicit DNS stacks—nameservers, fallback hierarchies, fake-ip toggles, hosts overrides, and policy routing for lookups themselves. Misconfigured DNS creates phantom symptoms: sites appear blocked because lookups never returned viable tuples even though proxies work.

Align DNS modes with how your rules expect host visibility. Fake-ip shortcuts optimize certain matcher combinations but can confuse debugging when DOMAIN rows appear not to trigger. When unsure, capture resolver logs alongside connection logs to see whether failures originate before or after policy evaluation.
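A representative dns: block, with public resolvers as placeholders; field names follow common Mihomo-era usage and may differ on frozen legacy cores:

```yaml
dns:
  enable: true
  listen: 0.0.0.0:1053
  enhanced-mode: fake-ip       # or redir-host on older cores
  fake-ip-range: 198.18.0.1/16
  fake-ip-filter:
    - "*.lan"                  # keep LAN names resolving to real addresses
  nameserver:
    - https://1.1.1.1/dns-query
  fallback:
    - https://dns.google/dns-query
```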

Safe merge workflow for provider snippets

  1. Copy the baseline profile your GUI exported—never edit only the ephemeral runtime cache.
  2. Merge provider YAML fragments section-wise rather than whole-file replacement when customizing.
  3. Deduplicate proxy-groups names across fragments; append suffixes if collisions arise.
  4. Re-run validation inside the client to catch schema drift before disconnecting yourself.
  5. Commit meaningful Git messages summarizing rule intent so future merges trace rationale.

Editors, schema hints, and operational checks

YAML-aware editors catch indentation slips early. Pair them with Clash-specific schemas when available so autocomplete surfaces allowed keys per protocol. After reload, exercise predictable domains—domestic news, international CDNs, LAN management IPs—to confirm expectations before trusting long sessions.

Logging verbosity interacts with troubleshooting ergonomics. Briefly elevating log-level to debug exposes matcher hits; revert afterward to avoid noisy traces on laptops with aggressive log rotation policies.

Common pitfalls that still bite experts

Duplicated rules arrays from sloppy merges produce “lost tail” sections where half your policy never executes because the later array replaces the earlier one. Similarly, repeating proxy-groups declarations causes latter blocks to override earlier definitions without warning. Provider dashboards occasionally rename nodes—your groups still reference stale strings until refresh.

Another subtle trap is assuming load-balance behaves like multipath TCP; it does not bond bandwidth—each flow hashes to a single upstream session, so one large download still rides one node. Expect aggregate gains only across many parallel flows, and only when endpoints and ISP paths cooperate.

Frequently asked questions

Should FINAL and MATCH differ?
Terminology shifts across distributions, but you generally want exactly one terminal catch-all referencing a stable policy group. Duplicate terminators invite unpredictable evaluation depending on merge order.

Can proxy-groups nest indefinitely?
Practical configs nest one or two layers—manual picker atop automated selectors. Deep recursion complicates mental models without proportional gain.

How do I migrate from bare proxies-only files?
Introduce proxy-providers, reload once to fetch nodes, rebuild groups referencing both legacy manual proxies and provider-backed lists, then retire stale duplicates carefully.

Why expressive YAML still favors Clash-class cores

Consumer VPN templates rarely expose composable listeners, DNS steering, and typed outbound objects in one file. Operators trapped inside glossy buttons spend evenings bouncing between “split tunnel” toggles that never quite capture LAN exemptions or streaming concurrency rules they already drafted mentally.

Clash keeps the contract readable: proxies describe transports, proxy-groups encode operational intent—from deliberate manual selection to automated failover—and rules glue those policies onto measurable traffic classes with deterministic ordering. If you want a maintained distribution that preserves Mihomo breadth while pairing documentation with field-tested GUIs, the Clash builds curated here stay aligned with the YAML vocabulary above so you spend cycles refining strategy—not reverse-engineering opaque installers.

The sensible path remains grounded: fetch a client you trust, hydrate proxies through subscriptions where appropriate, then iterate YAML deliberately while logs confirm each hypothesis.

Download Clash for your platform and put these YAML patterns into practice →