What each tool actually does
| | Doc chatbot | Investigation agent |
|---|---|---|
| Data source | Knowledge base, help center, docs | Live production systems (ClickHouse, Linear, Stripe, GitHub) |
| Question type | "How does this feature work?" | "Why is this broken for this customer right now?" |
| Output | Article links, generated text from docs | Structured diagnosis with evidence from actual data |
| Ticket coverage | ~20% (FAQ-answerable) | ~80% (requires live investigation) |
| Time saved | Deflects simple tickets entirely | Reduces 20-45 min investigations to 2 min |
| Data freshness | Static — updated when docs change | Real-time — queries live customer data per ticket |
| Failure mode | Gives generic answer or says "I don't know" | Returns "insufficient data" with what it did find |
The 80/20 split in B2B support
B2B support tickets split roughly 80/20:
- 20% are knowledge-answerable: "How do I rotate my API key?" "What's the rate limit on the batch endpoint?" "How do I set up webhook retries?" These are well-served by doc chatbots.
- 80% require investigation: "My API calls are returning 429s." "Latency spiked since this morning." "Our webhook endpoint stopped receiving events." "My invoice doesn't match my usage." These require querying the customer's actual data across multiple systems.
The 20% that chatbots handle are the cheapest tickets anyway — they take 2–3 minutes to resolve manually. The 80% that require investigation are the expensive ones — 20–45 minutes each. A chatbot that handles 100% of the cheap tickets and 0% of the expensive tickets reduces your support cost by roughly 10–15%. An investigation agent that handles the expensive tickets reduces cost by 60–70%.
Why chatbots fail on investigation tickets
When a chatbot encounters an investigation ticket — "my API calls are failing" — it does the only thing it can: search the knowledge base for "API errors" and return the most relevant article. The customer gets a link to "Troubleshooting API Errors" that tells them to check their API key and verify their endpoint URL.
This fails because the answer isn't in the documentation. The answer is in the customer's actual API logs (12% → 43% error rate spike), your bug tracker (LIN-482, rate limit regression), and your deployment history (fix in PR #891, 3 days out). No amount of knowledge base improvement will put live, per-customer, per-ticket data into a static document.
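The cross-system lookup described above can be sketched in code. This is a hypothetical illustration, not any vendor's implementation: the data shapes (`error_log`, `open_issues`, `deploys`) and thresholds are invented for the example, and a real agent would query ClickHouse, Linear, and a deploy API instead of taking lists as arguments.

```python
from dataclasses import dataclass


@dataclass
class Evidence:
    source: str   # which system the finding came from
    finding: str  # human-readable summary of what was found


@dataclass
class Diagnosis:
    summary: str
    evidence: list


def investigate(error_log: list, open_issues: list, deploys: list) -> Diagnosis:
    """Cross-reference live data sources and return a structured diagnosis.

    All input shapes are hypothetical stand-ins for real system queries.
    """
    evidence = []

    # 1. Look for an error-rate spike in the customer's API logs.
    rates = [entry["error_rate"] for entry in error_log]
    if rates and rates[-1] > 2 * rates[0]:
        evidence.append(
            Evidence("api_logs", f"error rate {rates[0]:.0%} -> {rates[-1]:.0%}")
        )

    # 2. Check the bug tracker for a known issue matching the symptom.
    for issue in open_issues:
        if issue["tag"] == "rate-limit":
            evidence.append(Evidence("tracker", f"known issue {issue['id']}"))

    # 3. Check deployment history for a pending fix.
    for deploy in deploys:
        if deploy["status"] == "pending":
            evidence.append(Evidence("deploys", f"fix pending in {deploy['pr']}"))

    # Key failure mode: admit when the data is insufficient,
    # but still return whatever was found.
    if not evidence:
        return Diagnosis("insufficient data", evidence)
    return Diagnosis("known rate-limit regression; fix pending", evidence)
```

Note the last branch: instead of a generic fallback answer, the agent returns "insufficient data" alongside whatever partial evidence it did collect, which is the failure mode contrasted in the table above.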
The customer escalates. An engineer spends 30 minutes doing the investigation the chatbot couldn't. The chatbot's "deflection" actually increased resolution time by adding a wasted round-trip.
When to use each tool
The answer isn't either/or — it's both, each in their lane:
- Deploy a doc chatbot for customer-facing FAQ deflection. It handles onboarding questions, feature documentation, and self-service workflows. Measure it by deflection rate on FAQ-type tickets.
- Deploy an investigation agent for internal technical diagnosis. It handles the 80% of tickets that require querying live data across multiple systems. Measure it by investigation time reduction and escalation rate.
- Don't try to make one tool do both jobs. A chatbot with "agent capabilities" bolted on will do neither well. A purpose-built investigation tool that also tries to answer FAQs is overengineered for the simple cases.
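Keeping each tool in its lane implies a triage step in front of both. A minimal sketch of that routing, using a toy keyword match where a production system would use a trained classifier (the signal list is an assumption for illustration):

```python
# Hypothetical symptom keywords that suggest a live-data investigation
# rather than a knowledge-base answer.
INVESTIGATION_SIGNALS = (
    "429", "spike", "spiked", "stopped", "doesn't match", "failing", "error",
)


def route(ticket_text: str) -> str:
    """Return 'agent' for tickets needing live investigation, else 'chatbot'.

    A toy stand-in for a real intent classifier.
    """
    text = ticket_text.lower()
    if any(signal in text for signal in INVESTIGATION_SIGNALS):
        return "agent"
    return "chatbot"
```

The design point is that the router only decides the lane; neither tool has to pretend to do the other's job.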
> "Our doc chatbot handled maybe 1 in 5 tickets. The rest sat in queue until an engineer had time to investigate. Now the investigation happens automatically and the engineer just reviews the diagnosis."