
Daniel Hilse · 04.05.2026

What /insights Knows About You

Claude Code keeps a transcript of every session locally, and a native command can summarize the last few weeks of those transcripts back to you as a report. It highlights clusters of work, recurring frictions, and concrete fixes you can wire into CLAUDE.md, hooks, or skills.

You already have a rough feel for where Claude Code helps: broad mechanical edits, codebase search, debugging, PR cleanup, glue work across tools. You know the failure modes too: it guesses wrong from ambiguous terms, over-edits prose, forgets local conventions, starts on the wrong file or branch.

In the moment, each problem feels like a one-off: you correct Claude and move on. The /insights command pulls those moments together so the repeated friction is visible as a pattern.

```
Claude Code
> /insights
```

Running this command tells Claude Code to read your local session transcripts from the last ~30 days and write back an HTML report. It helps you see what you keep having to teach Claude manually, and where that knowledge should live instead.

| In the report | How to fix |
| --- | --- |
| Repeated misunderstanding | CLAUDE.md or team glossary |
| Repeated manual sequence | Skill or slash command |
| Repeated validation miss | Hook, test, or CI check |
| Repeated tool / MCP failure | MCP setup checklist |
| Scope drift on edits | Edit-scope rule, diff-first workflow |
| Tribal knowledge gap | Onboarding doc |

The report is a step toward correcting Claude less. The durable fixes you produce from it - context files, skills, hooks, MCP docs - are the real takeaway.

The /insights report

Everything runs entirely on your machine. Claude reads the local session transcripts in ~/.claude/projects/ and writes back an HTML doc covering roughly the last 30 days of activity.
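If you want to look at the raw material first, the transcripts are plain files on disk. A minimal sketch, assuming the default location; the JSONL-per-session layout is my observation of current builds and may change:

```bash
# One folder per workspace, named after its filesystem path
ls ~/.claude/projects/

# Each session is one JSON Lines transcript; count the last 30 days of material
find ~/.claude/projects -name '*.jsonl' -mtime -30 | wc -l
```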

The output reads like a retro someone wrote after watching every Claude Code session you had this month. It's often opinionated, grounded in specific exchanges, and willing to call out what isn't working.

What it looked at

Near the top of the report you'll see some statistics on what was analyzed to generate it. Useful to know it isn't fabricating from thin air, but don't over-index on the exact counts. The report is model-generated, and the aggregates get noisy at low session volume.

Claude is doing more than you think

The "What You Work On" section groups your activity into project clusters. My report landed on five:

```
Data Pipeline & SQL Backfill          ~12 sessions
Performance Tracking & Highlights      ~8 sessions
Data Visualization & Charts            ~6 sessions
Web App Migration & Deployment         ~5 sessions
Documentation & Content               ~10 sessions
```

Each cluster wants a different fix. SQL work doesn't need the same intervention as documentation work. The clusters frame the problem space and point you somewhere, but they don't draw the conclusion for you.

Each cluster comes with a paragraph that names specifics: which projects, which tools, which kinds of friction showed up. Mine called out some of the work I've used Claude Code for - the Conversion Service backfill scripts, the migration from recharts to visx, JQL configuration mistakes in the highlight collector, and tone-matching trouble in Confluence edits. That level of detail is what makes the rest of the report actionable.

It mirrors how you work

The "How You Use Claude Code" diagnoses your interaction style and reads it back in digestible paragraphs Most engineers don't have a clear picture of how they actually work with the model. Reading it back in third person, with the trade-offs of your default style spelled out, is the kind of thing you'd normally only get from a careful pairing partner.

Mine described my style as iterative and taste-driven. I open with short asks and steer with course corrections, interrupting when the direction drifts. It also called out the integration role: a lot of my Claude time was being spent stitching Slack and Atlassian work together rather than generating code. And it noticed where I let go versus where I stay close, usually wherever a judgement call is needed.

Each style readout ends with a one-sentence distillation.

From my report
You iterate in tight loops with strong opinions on taste, interrupting quickly to redirect rather than writing detailed specs upfront.

In my case, it's a hypothesis about why I get the results I get — fast iteration plus heavy steering. The same loop that keeps drift small is also why "wrong approach" and "misunderstood request" show up as recurring categories later in the report.

Tool usage exposes hidden dependencies

The top tools in my report were unsurprising, but it's interesting to see how much Read and MCP usage there was. I've had Claude Code working as an integration surface more than as an editor, connecting repo work with Slack, Jira, Confluence, and occasionally Miro.

A lot of AI workflow advice assumes the assistant sees the same world you see, but it doesn't. With Slack, Atlassian, GitHub, database, monitoring, or design-tool MCPs missing, Claude starts guessing, asks for pasted context, or fails halfway through.

/insights makes those hidden dependencies visible by showing which tools you used, and when missing or broken access cost you rework.
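Fixing a gap usually means registering the missing server. A minimal sketch of a project-level .mcp.json (the server names, URL, and package here are placeholders, not real endpoints):

```json
{
  "mcpServers": {
    "atlassian": {
      "type": "sse",
      "url": "https://your-atlassian-mcp.example.com/sse"
    },
    "slack": {
      "command": "npx",
      "args": ["-y", "@your-org/slack-mcp"],
      "env": { "SLACK_BOT_TOKEN": "${SLACK_BOT_TOKEN}" }
    }
  }
}
```

A project-scoped file like this can be committed so the whole team shares the same tool surface; each person still authenticates individually.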

The section you should read slowly

"Where Things Go Wrong" is where you'll probably get the most value out of the report. It covers where you took the wrong approach, misunderstood request, rejected action, buggy code, excessive changes. We can look at mine as an example about what was found:

Blocked by missing tools

The first category concerned tools failing. Two unrelated tasks died for the same reason: a Miro lookup got denied, and a Slack summary stalled because no MCP server was wired to reach the source channel.

What I find useful about this framing is that nothing here is a model problem. The tasks were fine; the environment wasn't set up. Both were fixed just by getting auth in place, so no further action was needed.

Overreach on writing and edits

The second category was overreach. Claude making unwanted changes is something we all wrestle with. At one point it destroyed a GIF mid-session that I had to recover from a backup file. Totally inconsequential in my case, but a reminder to be stricter in the planning stage and to be prepared to intervene quickly when things go off the rails.

This tendency is expected for a model that optimizes for "complete" over "minimal safe change" when scope is loose. The report shows a pattern of over-editing prose, over-specifying tickets, touching sections that weren't asked about. Easy to miss while you're working, easy to spot when the report puts them in a list.

This is the one place in the friction tour where a short CLAUDE.md addition has obvious leverage:

```markdown
## Edit Scope
- Do not change sections I did not explicitly ask you to change.
- Preserve generic placeholders in public docs unless asked otherwise.
- Before destructive edits to videos, images, generated assets, or
  user-annotated files, create a backup.
- For prose edits, propose the diff first if the change is tone/style
  rather than factual correction.
```

The suggested CLAUDE.md additions

The report ends with a "Suggested CLAUDE.md Additions" section: grouped blocks you can copy straight in, each with a rationale note explaining which pattern of corrections generated it.

It makes it really easy to copy the text, but not all of the suggestions are worth keeping. Treat them as examples of what you can add to CLAUDE.md. A good entry gives Claude a discriminator: which key, which term, which file, which section. Mine had one like this:

```markdown
## SQL Conventions
- PAL composite key is (opportunity_id, milestone, event_timestamp),
  not guid/milestone_name.
- "Milestones" refers to STP/PAL/PQ groupings, not row checkpoints.
```

A weak version of the same idea reads like this:

```markdown
Be careful with SQL.
Understand our data model.
```

Good entries are short, concrete, durable — domain terms, file locations, style constraints, known failure modes. But remember: not every correction belongs in CLAUDE.md either.

| Finding | Put it here |
| --- | --- |
| Personal preference | User-level memory or personal CLAUDE.md |
| Repo convention | Repo CLAUDE.md or .claude/rules/ |
| Subdir-specific convention | Path-scoped rule |
| Multi-step workflow | Skill |
| Required validation | Hook, test, or CI |
| Missing external access | MCP setup doc |
| New-teammate context | Onboarding guide |

If an entry is really a procedure, move it into a skill. If it needs to run regardless of what Claude remembers, use a hook.

Skills: repeated procedures

If you can write the workflow as a checklist, stop re-prompting it. Here's an example for SQL verification:

```markdown
# SQL verification workflow
When I invoke /sql-verify:
1. Identify affected tables, keys, and date/timestamp fields.
2. Generate row-count checks before and after the change.
3. Generate uniqueness checks for every claimed primary or composite key.
4. Generate referential-integrity checks for affected joins.
5. Generate sample diff queries for representative records.
6. Preserve the repo's verification-output block structure.
7. Summarize results as: pass, fail, unknown, follow-up.

Do not modify migration files until the verification plan is shown.
```

Other candidates the report flagged: /commit (branch, stage, push, PR with no co-author trailer), /chart-spec (axes, scales, order, labels before implementation), /confluence-publish, /stop-slop for prose. Anything you've memorized as a sequence of steps probably wants to be a skill.
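Where a procedure like this lives depends on the mechanism you pick. A sketch of the two file conventions, reusing the sql-verify name from the example above:

```
.claude/
├── commands/
│   └── sql-verify.md      # custom slash command, invoked as /sql-verify
└── skills/
    └── sql-verify/
        └── SKILL.md       # skill: picked up when its description matches the task
```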

Hooks: checks that don't rely on obedience

Hooks are shell commands that run automatically on Claude Code events — after a file edit, before a tool call, when a session ends. You configure them in .claude/settings.json and they run regardless of what Claude remembers or forgets mid-session.

The pattern the report surfaces most: Claude writes a file, introduces a lint or type error, then fixes it on the next turn after you point it out. That's a loop you can close with a PostToolUse hook on Edit/Write that runs your linter immediately. Claude sees the output and fixes it before you ever see the broken state.
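A minimal sketch of that hook in .claude/settings.json; the lint command is a placeholder for whatever your repo actually runs:

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          { "type": "command", "command": "npm run lint --silent" }
        ]
      }
    ]
  }
}
```

A hook that exits with code 2 feeds its stderr back to Claude as something to fix, so the lint failure lands in the turn that caused it instead of waiting for your next message.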

CLAUDE.md is advisory — Claude can comply in some turns and miss it in others. A hook runs every time. Use CLAUDE.md for facts and constraints; use hooks for checks that need to be enforced.

How to read your own report

When you run it, don't try to absorb the whole thing evenly. A 15-minute first pass:

| Section | Question to ask |
| --- | --- |
| At a Glance | What does the report think my main loop is? |
| What You Work On | Which workflows repeat enough to standardize? |
| How You Use Claude | What does my interaction style make better or worse? |
| Tool Usage | Which external systems are implicit dependencies? |
| Where Things Go Wrong | Which failures were preventable? |
| Suggested CLAUDE.md | Which suggestions are concrete enough to keep? |
| Features to Try | Should this be context, skill, hook, MCP, or CI? |

What to ignore

These reports can be flattering, and you should be skeptical of generic recommendations. "Be more specific" is true and mostly useless. A useful recommendation names the term, file, command, check, or workflow that caused rework.

Don't let CLAUDE.md become a junk drawer either. A short file with five high-signal rules beats a long one full of preferences Claude will dilute under context pressure.

Also worth knowing: /team-onboarding

/team-onboarding is a lighter companion to /insights. Where /insights reads your history to surface friction and suggest fixes, /team-onboarding reads the same history to answer a narrower question: what would a new teammate need to know to work the way you work?

The output is a Markdown doc covering work-type breakdown, most-used commands, MCP servers, and a setup checklist. It's meant to be cleaned up and handed off, not used as-is.

Honestly, it's primitive at this stage. The checklist sections have _TODO_ placeholders, the codebase list is inferred and may be wrong, and the "Team Tips" section is empty. The breakdown charts are the most interesting part, but they're redundant with the /insights report.

The intended use case is packaging your setup for a teammate who is just starting out with Claude Code. One experienced person runs it, fills in the blanks, and it becomes a starter checklist. Whether that's worth the effort over writing the doc directly depends on how much history you've built up and how representative it is.

```
Claude Code
> /team-onboarding

Looking at how you've used Claude over the last 30 days to put together an onboarding guide for
teammates new to Claude Code.

Listed 3 directories (ctrl+o to expand)

Write(ONBOARDING.md)
  Wrote 57 lines to ONBOARDING.md
     1 # Welcome to the Data Science & ML Engineering Team
     2
     3 ## How We Use Claude
     4
     5 Based on Daniel Hilse's usage over the last 30 days:
    … +52 lines (ctrl+o to expand)
```

# Welcome to the Data Science & ML Engineering Team

## How We Use Claude

Based on Daniel Hilse's usage over the last 30 days:

Work Type Breakdown:
```
  Plan & Design    ████████████████████  33%
  Debug & Fix      █████████████░░░░░░░  26%
  Analyze Data     █████████░░░░░░░░░░░  15%
  Write Docs       ███████░░░░░░░░░░░░░  11%
  Improve Quality  ████░░░░░░░░░░░░░░░░   7%
  Build Feature    ██░░░░░░░░░░░░░░░░░░   4%
```

Top Skills & Commands:
```
  /copy            ████████████████████  39x/month
  /model           ██████░░░░░░░░░░░░░░  12x/month
  /codex:review    █████░░░░░░░░░░░░░░░  11x/month
  /clear           █████░░░░░░░░░░░░░░░  10x/month
  /mcp             ██░░░░░░░░░░░░░░░░░░   4x/month
```

Top MCP Servers:
```
  Atlassian (Jira/Confluence)  ████████████████████  34 calls
  Slack                        ███████████░░░░░░░░░  19 calls
  Snowflake                    ████░░░░░░░░░░░░░░░░   6 calls
```

## Your Setup Checklist

### Codebases
- [ ] ewma.conversion-service-api
- [ ] ewmt.lead-queue-api
- [ ] sdfs.snowflake-writer-consumer
- [ ] sdfs.llm-feature-integration-api

### MCP Servers to Activate
- [ ] Atlassian (Jira + Confluence) — Search tickets, edit pages, add comments
- [ ] Slack — Read channels/threads, search, send message drafts
- [ ] Snowflake — Run SQL against the data warehouse in-conversation

### Skills to Know About
- `/copy` — Copies last Claude response to clipboard
- `/codex:review` — Second-opinion code review before merging
- `/model` — Switch models mid-session (Opus for heavy reasoning, Haiku for speed)
- `/clear` — Clear stale context when starting a new task
- `/compact` — Summarize conversation to free context during long debug sessions

## Team Tips
_TODO_

## Get Started
_TODO_

My actual /team-onboarding output

The conversion that matters

The report generated by /insights turns the residue of feedback from many small sessions (the redirects, the "not that file," the "don't change that section," the failed MCP calls, the verification script rebuilt again) into a map of rules that would prevent the same problems in the future.

Run it once for yourself and you'll probably find two or three personal fixes. Run it across a team and the repeated frictions become more interesting: they point to conventions you haven't written down, checks you haven't automated, and workflows you should stop making each person rediscover.