Session names are a retrieval tool
AI coding agent sessions get hard to manage when every run has the same generic label. A useful name is not decoration. It is how you find the right Claude Code or Codex run later, especially when you have several branches, a few worktrees, and a week of unfinished ideas in front of you.
The value shows up in three places:
- searching for the right run later,
- handing a session to another person,
- and deciding what can be archived or cleaned up.
If a name does not help with those jobs, it is probably too vague.
Junction keeps the local session history visible, so naming discipline matters more, not less. The app can surface the run, but the label still has to tell you why the run exists.
Name for the task, not the mood
Bad names usually describe how the session felt:
- `fixing stuff`
- `agent test`
- `later`
- `bug maybe`
Those labels are useless a day later. The person reading them still has to open the transcript to know what happened.
Good names describe the task class and the target area:
- `billing: empty state copy`
- `api: webhook retry test`
- `docs: setup instructions`
- `search: branch cleanup`
That style keeps the meaning obvious even when the session is old.
A practical naming formula
A strong session name usually has four parts:
- Area or repo.
- The task.
- The risk or shape of the work.
- A short hint about the expected outcome.
For example:
- `app / pricing empty state / low risk / copy and test`
- `server / webhook retry / review required / fix and verify`
- `site / mobile triage / read only / inspection pass`

You do not need all four parts in every name. The point is to include enough information that the next person can scan the list and know which session matters.
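The four-part formula is simple enough to sketch as a tiny helper. This is illustrative only; `make_session_name` is a hypothetical function, not part of any tool:

```python
def make_session_name(area: str, task: str, risk: str = "", outcome: str = "") -> str:
    """Join the non-empty parts of a session name with ' / '."""
    parts = [area, task, risk, outcome]
    return " / ".join(p.strip() for p in parts if p.strip())

# All four parts present:
print(make_session_name("app", "pricing empty state", "low risk", "copy and test"))
# → app / pricing empty state / low risk / copy and test

# Parts can be dropped when the context is obvious:
print(make_session_name("site", "mobile triage", outcome="read only"))
# → site / mobile triage / read only
```

The separator is arbitrary; what matters is that every name is built from the same ordered fields, so the list scans consistently.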
Keep branch names and session names aligned
If your session name says one thing and the branch name says another, review gets harder.
That mismatch usually means one of two things:
- the task changed and the label did not,
- or the run started in the wrong place.
A useful convention is to let the branch do the Git work and let the session name do the human work. The branch can be terse:
- `fix/pricing-empty-state`
- `docs/setup-copy`
- `refactor/search-history`
The session can add the operational context:
- `pricing empty state / copy and test`
- `setup copy / doc update / low risk`
- `search history / cleanup / review required`
That split makes the run legible without forcing the session name to carry the whole prompt.
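One way to keep the two aligned is to derive a starting session name from the branch, then append the operational context by hand. A minimal sketch, where `branch_to_session` is a hypothetical helper:

```python
def branch_to_session(branch: str) -> str:
    """Turn a terse Git branch name into a readable session-name prefix.

    Drops a leading type segment like 'fix/' or 'docs/' and replaces
    hyphens with spaces. Risk and outcome context is appended by hand.
    """
    _, _, rest = branch.partition("/")
    return (rest or branch).replace("-", " ")

print(branch_to_session("fix/pricing-empty-state"))  # → pricing empty state
print(branch_to_session("refactor/search-history"))  # → search history
```

Because the derivation is mechanical, a mismatch between branch and session name becomes a deliberate choice rather than drift.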
What to include
The most helpful fields are the ones you will forget later:
- the repo or area,
- the actual task,
- the expected risk level,
- whether the run was meant to end in a PR,
- and any special handling such as a worktree or follow-up.
If the same kind of task appears often, encode the pattern in the label:
`inspection`, `review`, `cleanup`, `hotfix`, `docs`, `refactor`
Those words help you group sessions by intent instead of by whatever random prompt happened first.
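If the intent word appears somewhere in every label, grouping sessions by intent is a few lines of code. A sketch, assuming the ` / ` separator style used above (the keyword set and session list are illustrative):

```python
from collections import defaultdict

INTENTS = {"inspection", "review", "cleanup", "hotfix", "docs", "refactor"}

def group_by_intent(names: list[str]) -> dict[str, list[str]]:
    """Bucket session names by the first intent keyword each one contains."""
    groups: dict[str, list[str]] = defaultdict(list)
    for name in names:
        words = name.replace("/", " ").split()
        intent = next((w for w in words if w in INTENTS), "other")
        groups[intent].append(name)
    return dict(groups)

sessions = [
    "search history / cleanup / review required",
    "setup copy / docs / low risk",
    "session 4",
]
print(group_by_intent(sessions))
```

Names that never match a keyword fall into an `other` bucket, which is itself a useful signal that the label is too vague.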
What not to include
Do not turn the name into a second transcript.
Avoid:
- full prompts,
- long ticket text,
- jokes,
- timestamps that are already visible elsewhere,
- or implementation detail that belongs in the run itself.
The name should help you choose the session. The transcript should explain the session.
A simple review table
| Good session name | Why it works |
|---|---|
| `pricing empty state / review required` | States the area and the decision point. |
| `webhook retry / fix and test` | Names the task and the expected shape of the work. |
| `search history / cleanup pass` | Makes the follow-up action obvious. |
| `mobile triage / read only` | Tells you not to expect edits. |
| Weak session name | Why it fails |
|---|---|
| `session 4` | Could be anything. |
| `bug` | Too broad to be useful. |
| `random idea` | Describes intention poorly. |
| `done maybe` | Tells you nothing about the work. |
Why naming matters for search history
Session naming becomes more important once the archive fills up. When you need to find the last good investigation, the name is often the fastest filter.
That is especially true for:
- repeated bug classes,
- recurring review flows,
- prompt experiments,
- and handoffs across devices.
If you combine a naming scheme with archived history, you can search by task class instead of rereading every transcript. That is a better use of time than rediscovering the same bug twice.
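With a consistent scheme, searching the archive by task class reduces to a keyword filter over names. A sketch, assuming archived session names are available as plain strings (the archive list here is made up):

```python
def find_sessions(archive: list[str], *terms: str) -> list[str]:
    """Return archived session names containing every term, case-insensitively."""
    wanted = [t.lower() for t in terms]
    return [name for name in archive
            if all(t in name.lower() for t in wanted)]

archive = [
    "billing: empty state copy",
    "api: webhook retry test",
    "webhook retry / fix and verify",
    "docs: setup instructions",
]
print(find_sessions(archive, "webhook", "retry"))
```

The same filter fails completely against labels like `session 4` or `bug`, which is the practical cost of vague names.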
For the retrieval side, see How to Search AI Agent Session History for Useful Context. For the cleanup side, Archive AI Agent Sessions Without Losing History is the companion piece.
A concrete example
Suppose you have three active sessions:
- `billing empty state / review required`
- `search history / cleanup pass`
- `mobile triage / read only`
You can tell immediately which one needs a full review, which one is likely a cleanup task, and which one should not produce any edits at all.
Now compare that with:
- `fix`
- `session`
- `new thing`
Those names force you into the transcript every time.
Tradeoffs
Strict naming can feel picky when you are moving fast. The tradeoff is that a little discipline up front saves a lot of re-reading later.
You also do not want the name to become a process tax. If a naming convention takes longer to invent than the task itself, it is too heavy. Keep the format short enough that you can type it before the agent starts.
Where Junction fits
Junction helps because the session, diff, and output are all in one place. That makes the name more valuable, not less. The session list can be a real operational tool instead of a pile of unlabeled work.
If you want to start with a clean local setup, use the setup guide. If you are deciding how many open sessions your workflow should keep at once, compare pricing.