
How to Use Switchboard Analytics to Find Automation Friction

Use Switchboard run analytics and activity patterns to spot bad routes, unclear issues, flaky repos, and automation that needs review.

Junction Team · Junction Panel · 5 min read

Analytics Should Change The Workflow

Switchboard analytics are useful only if they change how you operate the automation lane. A dashboard that says "12 runs today" is interesting. A dashboard that helps you see why three runs blocked on missing context is operationally useful.

The goal is not to celebrate agent volume. The goal is to find friction: unclear issues, bad routes, flaky validation, overloaded daemons, and review bottlenecks.

Watch Outcomes, Not Just Activity

Activity is the least useful metric by itself. A busy queue can still be unhealthy.

Look for:

  • Runs completed with reviewable pull requests.
  • Runs blocked for human input.
  • Runs stopped or redirected.
  • Runs that changed too many files.
  • Runs that failed validation.
  • Runs waiting on review after PR creation.

The healthy state is not maximum automation. The healthy state is a steady flow of bounded issues becoming reviewable pull requests.

In Junction, read those outcomes alongside the activity feed. A run that says "completed" but produced a broad diff may be less healthy than a run that blocked early with a clear request for missing context.
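The outcome buckets above can be sketched as a small tally script. This is a hypothetical illustration: the `Run` record shape, field names, and the 20-file threshold are assumptions for the example, not Switchboard's data model.

```python
from collections import Counter
from dataclasses import dataclass

# Hypothetical run record. Field names are assumptions, not Switchboard's API.
@dataclass
class Run:
    status: str        # e.g. "completed", "blocked", "stopped", "failed_validation"
    files_changed: int
    pr_reviewed: bool

def outcome(run: Run, max_files: int = 20) -> str:
    """Map a raw run record into the outcome buckets described above."""
    if run.status == "completed" and run.files_changed > max_files:
        return "too_broad"          # "completed" but the diff is suspiciously wide
    if run.status == "completed" and not run.pr_reviewed:
        return "awaiting_review"    # code exists, but review has not happened
    return run.status

def tally(runs: list[Run]) -> Counter:
    """Count runs per outcome bucket for a quick health read."""
    return Counter(outcome(r) for r in runs)
```

The point of the `too_broad` bucket is exactly the caveat above: a nominally completed run with a huge diff should not be counted as healthy.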

Find Route Problems

If a route fails repeatedly, inspect the route before blaming the agent.

Common route problems include:

  • Wrong daemon for the repo.
  • Missing provider authentication.
  • Missing GitHub CLI authentication.
  • Repo path not present on the target machine.
  • Model or mode mismatch for the task.
  • Concurrency too high for the machine.
  • Instructions that are too broad or stale.

A route is infrastructure. Treat repeated failures as a route bug until proven otherwise.

Concrete signal: several runs for the same repo fail before the first meaningful code edit. That usually points to daemon setup, repository path, provider auth, dependency install, or route instructions.
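That "fail before the first edit" signal is mechanical enough to automate. A minimal sketch, assuming run records exported as dicts with `route`, `status`, and `first_edit_made` keys (all illustrative names, not a real Switchboard export format):

```python
from collections import defaultdict

def early_failure_routes(runs, threshold: int = 3) -> list[str]:
    """Flag routes where several runs failed before any code edit.

    Repeated pre-edit failures usually point at the route or environment
    (daemon setup, repo path, auth, dependency install), not the agent.
    """
    counts = defaultdict(int)
    for r in runs:
        if r["status"] == "failed" and not r["first_edit_made"]:
            counts[r["route"]] += 1
    return [route for route, n in counts.items() if n >= threshold]
```

Anything this returns is worth inspecting as a route bug first, per the rule above.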

Find Issue Quality Problems

If runs often land in a blocked state, look at the issue content.

Signals of issue-quality friction:

  • The agent asks what repo to use.
  • The agent changes files outside the intended area.
  • The resulting PR solves a different problem.
  • Reviewers repeatedly ask for the same missing context.
  • Acceptance criteria are not testable.

This is where analytics should influence your issue template. If three blocked runs needed the same missing field, add that field to the issue-writing habit.

Concrete signal: the activity feed shows repeated clarifying questions before implementation starts. That is not an agent-speed problem. It is a context problem.
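Turning that finding into an issue-template change can be enforced with a tiny lint before routing. The required field names here are assumptions for illustration; pick whatever fields your blocked runs kept asking for:

```python
# Hypothetical lint for Switchboard-ready issues. Field names are examples.
REQUIRED_FIELDS = ("repo", "package", "acceptance_criteria")

def missing_fields(issue: dict) -> list[str]:
    """Return the required fields that are empty or absent from an issue."""
    return [f for f in REQUIRED_FIELDS if not issue.get(f)]
```

If `missing_fields` is non-empty, the issue is not ready for the automation lane; fix the issue, not the agent.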

Find Repository Problems

Sometimes the issue and route are clear, but the repository is hostile to automation.

Watch for:

  • Tests that fail before the agent edits anything.
  • Build commands that are undocumented.
  • Dependencies that only install on one developer machine.
  • Unstable generated files.
  • Huge diffs from formatters.
  • Long validation commands that time out on the target daemon.

Those problems are not agent-specific. They are repository maintenance issues exposed by automation.
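Several of those repository problems can be caught with a preflight check run before any agent touches the repo. A sketch, assuming a local repo path and a test command; the command, timeout, and messages are illustrative, not Switchboard features:

```python
import os
import shutil
import subprocess

def preflight(repo_path: str, test_cmd=("npm", "test"), timeout: int = 600) -> list[str]:
    """Return a list of repo-readiness problems found before routing work here."""
    problems = []
    if not os.path.isdir(repo_path):
        problems.append(f"repo path missing: {repo_path}")
        return problems
    if shutil.which(test_cmd[0]) is None:
        problems.append(f"test runner not installed: {test_cmd[0]}")
        return problems
    try:
        result = subprocess.run(test_cmd, cwd=repo_path,
                                capture_output=True, timeout=timeout)
        if result.returncode != 0:
            problems.append("tests fail before any agent edit")
    except subprocess.TimeoutExpired:
        problems.append("validation command times out on this machine")
    return problems
```

An empty result does not prove the repo is automation-friendly, but a non-empty one proves the agent is not the problem.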

Find Review Bottlenecks

An agent run is not done when code exists. It is done when the work has been reviewed and accepted.

If many runs reach pull request review and then stall, the bottleneck is human review capacity. Possible fixes:

  • Reduce the queue size.
  • Split issues smaller.
  • Add clearer PR review notes.
  • Route lower-risk work first.
  • Reserve automation for issues with known owners.

Automation that creates review debt faster than humans can absorb it is not helping the team.

Concrete signal: Switchboard runs complete, but PRs sit untouched. The fix is not more routes. The fix is smaller issues, clearer ownership, or lower concurrency.

A Weekly Review Routine

Once a week, review the Switchboard lane:

  1. List completed runs.
  2. List blocked runs.
  3. List stopped or redirected runs.
  4. Identify the most common blocker.
  5. Decide whether to change issue templates, route settings, repo readiness, or concurrency.

Keep the review short. The point is to tune the system, not produce a quarterly analytics ritual.
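Steps 1 through 4 of the routine reduce to two counters. A minimal sketch over hypothetical run records (the `status` and `blocker` keys are assumed shapes, not a real export):

```python
from collections import Counter

def weekly_review(runs):
    """Tally runs by status and surface the most common blocker, if any.

    Returns (status_counts, top_blocker) where top_blocker is a
    (blocker, count) pair or None when nothing blocked.
    """
    by_status = Counter(r["status"] for r in runs)
    blockers = Counter(r["blocker"] for r in runs if r.get("blocker"))
    top = blockers.most_common(1)
    return by_status, (top[0] if top else None)
```

Step 5, deciding what to change, stays human: the script only tells you where to look.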

Example: The Wrong Fix

Suppose a route has ten runs. Six complete. Four block because tests cannot run. The tempting fix is to make the agent more autonomous or loosen approvals.

That is probably the wrong fix. If tests cannot run, the route or repo environment is broken. Fix the daemon setup, dependency install, or validation command first. More autonomy only lets the agent fail faster.

Example: The Right Fix

Suppose a route has ten runs. Eight complete. Two block because issues omit the target package. The fix is simple: update the Linear issue template to require a repo and package field for Switchboard-ready work.

Analytics turned into a process change. That is the point.

Tradeoffs

Analytics can become vanity data. Avoid optimizing for total runs, average run duration, or success rate without reading the underlying work. A high success rate on tiny docs fixes does not mean the same route should handle risky backend changes.

Use analytics as a prompt for inspection. The numbers tell you where to look. The activity feed and diffs tell you what happened.

Where Junction Fits

Switchboard is available on the $15/month plan and adds Linear automation to Junction's control surface. It can watch a Linear workspace, route issues, run agents, and create pull requests. The analytics and activity feed help you decide whether that automation lane is healthy.

If you are still shaping the lane, read How Switchboard Turns Linear Issues Into Pull Requests and Manual AI Agent Runs vs Switchboard Automation. When the queue is stable enough to scale, compare plans on the pricing page.