Knowledge Analysis
Knowledge Analysis is where you monitor whether your support knowledge is staying healthy as your product and policies evolve. Teams that skip this layer usually discover problems late, after quality has already drifted in live conversations.
If Validation is your pre-flight check, Analysis is your ongoing instrumentation.
What Analysis Tracks in Practice
In advanced analysis mode, Jardine surfaces health and ingestion signals that help you understand the current state of your knowledge base, not just whether a single prompt worked once.
You can inspect overall health score, indexed ratio, failed ratio, total chunk volume, and collection-level breakdowns. You also get issue highlights and recommendations that help prioritize maintenance work.
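The ratios are simple derived quantities. The sketch below shows one way to think about how they relate to raw document counts; the field names are illustrative assumptions, not Jardine's actual schema.

```python
from dataclasses import dataclass

@dataclass
class KnowledgeSnapshot:
    # Hypothetical field names for illustration; they do not mirror Jardine's real payload.
    indexed_docs: int
    failed_docs: int
    pending_docs: int
    total_chunks: int

    @property
    def total_docs(self) -> int:
        return self.indexed_docs + self.failed_docs + self.pending_docs

    @property
    def indexed_ratio(self) -> float:
        # Share of documents that made it through ingestion and are searchable.
        return self.indexed_docs / self.total_docs if self.total_docs else 0.0

    @property
    def failed_ratio(self) -> float:
        # Share of documents that never became usable knowledge.
        return self.failed_docs / self.total_docs if self.total_docs else 0.0
```

For example, a snapshot with 940 indexed, 40 failed, and 20 pending documents yields an indexed ratio of 0.94 and a failed ratio of 0.04.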
These metrics are useful because quality regressions often start in ingestion and content coverage long before they show up as obvious conversation failures.
For example, if indexed ratio drops or failed-document count climbs, support behavior can become unstable even when routing and channel setup have not changed.
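A minimal sketch of that kind of early-warning check, reusing the hypothetical KnowledgeSnapshot shape above. The drop tolerance is an assumption you would tune to your own baseline.

```python
def detect_ingestion_drift(previous: KnowledgeSnapshot,
                           current: KnowledgeSnapshot,
                           ratio_drop_tolerance: float = 0.02) -> list[str]:
    """Flag ingestion-level regressions before they surface as conversation failures."""
    warnings = []
    if previous.indexed_ratio - current.indexed_ratio > ratio_drop_tolerance:
        warnings.append(
            f"Indexed ratio dropped from {previous.indexed_ratio:.2%} to {current.indexed_ratio:.2%}"
        )
    if current.failed_docs > previous.failed_docs:
        warnings.append(
            f"Failed documents climbed from {previous.failed_docs} to {current.failed_docs}"
        )
    return warnings
```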
Why Analysis Matters for Support Operations
Support quality is not static. Documentation changes, product policies shift, teams upload new files, and old docs become stale. Without analysis, these changes can quietly degrade answer reliability.
Analysis gives support leads and operators an early warning system. Instead of reacting to scattered tickets, you can detect trends in one place and fix root causes proactively.
This shifts the team's posture from reactive to preventive.
A practical mental model is:
- Validation answers “does this scenario work right now?”
- Analysis answers “is our knowledge system staying healthy over time?”
You need both.
How to Read the Signals
Start with health score and indexed ratio. If both are healthy, your knowledge base is likely stable. If they trend downward, inspect failed and pending counts next.
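That reading order can be captured in a small triage helper. The thresholds below are illustrative assumptions, not Jardine defaults; calibrate them against your own history.

```python
def first_pass_triage(health_score: float, indexed_ratio: float,
                      failed_docs: int, pending_docs: int,
                      score_floor: float = 0.8, ratio_floor: float = 0.95) -> str:
    # Illustrative thresholds only; tune them to your own baseline.
    if health_score >= score_floor and indexed_ratio >= ratio_floor:
        return "Base looks stable; move on to collection-level review."
    if failed_docs > 0:
        return f"Inspect the {failed_docs} failed documents first."
    if pending_docs > 0:
        return f"{pending_docs} documents are still pending; re-check after ingestion settles."
    return "Scores are low without failed or pending docs; review content coverage."
```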
Then review collection-level metrics. A strong global score can hide one weak collection that powers high-risk workflows. Collection views help you find those blind spots quickly.
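One way to surface those blind spots is to compare each collection's score against the global score. The gap threshold here is an assumption, and the input mapping is whatever collection breakdown you export or copy from the analysis view.

```python
def find_weak_collections(collection_scores: dict[str, float],
                          global_score: float,
                          gap: float = 0.15) -> list[tuple[str, float]]:
    """Return collections whose score trails the global score by more than `gap`."""
    outliers = [(name, score) for name, score in collection_scores.items()
                if global_score - score > gap]
    # Worst collections first, so high-risk workflows get reviewed before anything else.
    return sorted(outliers, key=lambda item: item[1])
```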
Next, inspect issues and recommendations. These are useful for prioritization. Not every issue needs urgent action, but recurring issues in customer-critical collections should move to the top of your queue.
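A sketch of that prioritization, assuming each issue carries a collection name, an occurrence count, and a summary. These are hypothetical field names for illustration, not Jardine's actual issue format.

```python
def prioritize_issues(issues: list[dict], critical_collections: set[str]) -> list[dict]:
    """Order issues so recurring problems in customer-critical collections come first."""
    def priority(issue: dict) -> tuple:
        in_critical = issue["collection"] in critical_collections
        recurring = issue.get("occurrences", 1) > 1
        # Sort descending: critical, recurring issues float to the top of the queue.
        return (in_critical, recurring, issue.get("occurrences", 1))

    return sorted(issues, key=priority, reverse=True)
```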
Finally, cross-reference with live conversation patterns. If a topic is producing weak answers and the related collection shows ingestion or coverage problems, you have a clear repair path.
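The cross-reference can be as simple as joining a topic-to-collection mapping against the collections analysis has flagged. The mapping itself is something your team maintains; it is not provided by Jardine.

```python
def repair_candidates(weak_topics: dict[str, str],
                      unhealthy_collections: set[str]) -> list[str]:
    """Match weak conversation topics to the unhealthy collections that power them.

    `weak_topics` maps a topic producing weak answers to its backing collection.
    """
    return [
        f"Topic '{topic}' is weak and collection '{collection}' has health issues: repair here first."
        for topic, collection in weak_topics.items()
        if collection in unhealthy_collections
    ]
```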
A Weekly Analysis Routine That Works
Most teams benefit from a short weekly analysis review. It does not need to be heavy.
A useful routine is:
- Review health score trend.
- Check failed and pending document counts.
- Scan collection-level outliers.
- Read current issues and recommendations.
- Create one prioritized improvement action.
This routine catches drift early and keeps maintenance manageable.
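If it helps, the routine can be kept as a small, repeatable artifact. The sketch below simply turns the checklist above into a review record you fill in each week; nothing about it is specific to Jardine.

```python
import datetime

WEEKLY_CHECKLIST = [
    "Review health score trend",
    "Check failed and pending document counts",
    "Scan collection-level outliers",
    "Read current issues and recommendations",
    "Create one prioritized improvement action",
]

def start_weekly_review() -> dict:
    """Create an empty review record; fill each step in during the session."""
    return {
        "week_of": datetime.date.today().isoformat(),
        "steps": {step: None for step in WEEKLY_CHECKLIST},
        "improvement_action": None,
    }
```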
After major policy or product changes, run a deeper pass. Update affected docs, verify ingestion, then validate high-impact scenarios so analysis and validation stay aligned.
Common Mistakes in Analysis Usage
The first mistake is using analysis only when things are already broken. By then, customers have already felt the impact.
The second mistake is focusing only on one headline metric. Knowledge health is multi-dimensional; indexed ratio alone cannot tell you whether critical collections are healthy.
The third mistake is collecting metrics without follow-through. Analysis is valuable only when it informs action: content updates, re-ingestion, archiving stale docs, or targeted validation runs.
The fourth mistake is treating recommendations as optional noise. They are often the quickest way to identify where small fixes create big reliability gains.
How Analysis Connects to the Rest of the Workflow
Analysis is most useful when connected to day-to-day operating loops.
If analysis flags failed docs, fix ingestion and rerun validation. If analysis flags weak collection health, tighten content and monitor conversation outcomes. If analysis remains strong while incidents persist, investigate routing or connector behavior instead of over-editing sources.
This layer-by-layer approach prevents guesswork and keeps your team focused on the actual failure domain.
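The same layer-by-layer logic, written out as a sketch. The inputs and messages are illustrative; the point is that each signal maps to a distinct failure domain.

```python
def next_action(failed_docs: int, weak_collections: list[str],
                analysis_healthy: bool, incidents_persist: bool) -> str:
    # Mirrors the decision path above; wording and inputs are assumptions, not product behavior.
    if failed_docs > 0:
        return "Fix ingestion for the failed documents, then rerun validation."
    if weak_collections:
        return f"Tighten content in {', '.join(weak_collections)} and monitor conversation outcomes."
    if analysis_healthy and incidents_persist:
        return "Knowledge looks healthy; investigate routing or connector behavior instead of over-editing sources."
    return "No analysis-driven action needed right now."
```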
Keeping Knowledge Healthy as You Scale
As your support operation expands, the knowledge library grows, more teams contribute content, and edge cases multiply. Analysis becomes even more important in that phase because manual intuition no longer scales.
Healthy teams use analysis to keep source quality visible, measurable, and continuously maintained. They do not wait for a quarterly cleanup. They run small, consistent quality cycles.
That discipline is what allows AI-assisted support to stay reliable at scale.
When analysis and validation are part of your normal cadence, you can move faster with fewer surprises, because you are learning from your system continuously instead of debugging it only when it fails.
After analysis reviews are in place, continue with Routing Overview to align policy behavior with the quality signals you are now monitoring.