Summary
This starter shows a small prompt-cache-aware agent loop: stable prompt layers first, dynamic memory later, and a tiny benchmark surface for comparing cold and warm run metadata.
Status
starter
Source code: patterns/examples/prompt-cache-agent-starter
Why It Exists
Prompt caching is easy to describe and easy to misuse. Builders often place retrieved memory, user-specific facts, or current-turn inputs inside the same long prefix they expect the provider to cache. That makes cache behavior harder to reason about. This starter keeps the boundary visible. It treats tool manifests, system instructions, and stable reference context as cacheable layers, while durable memory summaries and current tasks stay outside the cached prefix unless the builder intentionally promotes them.
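A minimal sketch of that boundary, using hypothetical names rather than the starter's actual helpers: layers are tagged as cacheable or dynamic, cacheable layers sit contiguously at the front, and the index of the first dynamic layer marks where the stable prefix ends.

```python
# Hypothetical sketch: PromptLayer and stable_prefix_end are illustrative names,
# not the starter's real API. The point is the visible boundary between the
# cacheable prefix and the dynamic tail.
from dataclasses import dataclass

@dataclass(frozen=True)
class PromptLayer:
    name: str
    text: str
    cacheable: bool  # tool manifests, system instructions, stable reference context

def stable_prefix_end(layers: list[PromptLayer]) -> int:
    """Index of the first dynamic layer, i.e. where the cacheable prefix ends."""
    for i, layer in enumerate(layers):
        if not layer.cacheable:
            return i
    return len(layers)

layers = [
    PromptLayer("system", "You are a careful assistant.", cacheable=True),
    PromptLayer("tools", "<tool manifest>", cacheable=True),
    PromptLayer("memory_summary", "<durable memory summary>", cacheable=False),
    PromptLayer("current_task", "Summarize today's ticket.", cacheable=False),
]

boundary = stable_prefix_end(layers)
cacheable_prefix = layers[:boundary]   # sent as the stable, cache-eligible prefix
dynamic_tail = layers[boundary:]       # stays outside the cached prefix unless promoted
```

Under this layout, promoting a memory summary into the cached prefix is an explicit flip of the `cacheable` flag rather than an accident of prompt assembly.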
Related Lab Pages
Folder Structure
Included Sample Files
- src/prompt_cache_agent_starter.py: typed helpers for prompt layers, cache boundary detection, usage summaries, and cold/warm comparisons
- tests/test_prompt_cache_agent_starter.py: executable smoke test for the starter behavior
- SOURCE_NOTES.md: source lineage and attribution boundary
Flow Boundaries
The starter may:
- model prompt layers as cacheable or dynamic
- calculate where the stable prefix ends
- compare cache-read and cache-write shares
- estimate input cost when current pricing values are supplied (see the sketch below)
The starter must not:
- call a real API
- store raw transcripts
- hardcode provider prices
- collapse durable memory into the cached prefix by default
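A sketch of the cold/warm comparison these boundaries allow, again with hypothetical names and made-up usage numbers rather than real provider metadata; pricing is passed in by the caller instead of being hardcoded.

```python
# Hypothetical sketch: field names and numbers are illustrative fixtures, not
# real provider usage metadata or prices. Prices are supplied by the caller.
def cache_shares(usage: dict[str, int]) -> dict[str, float]:
    """Cache-read and cache-write shares of all input-side tokens."""
    total = usage["input_tokens"] + usage["cache_read_tokens"] + usage["cache_write_tokens"]
    return {
        "cache_read_share": usage["cache_read_tokens"] / total,
        "cache_write_share": usage["cache_write_tokens"] / total,
    }

def input_cost(usage: dict[str, int], price_per_mtok: dict[str, float]) -> float:
    """Estimate input-side cost from caller-supplied per-million-token prices."""
    return (
        usage["input_tokens"] * price_per_mtok["input"]
        + usage["cache_read_tokens"] * price_per_mtok["cache_read"]
        + usage["cache_write_tokens"] * price_per_mtok["cache_write"]
    ) / 1_000_000

cold = {"input_tokens": 900, "cache_read_tokens": 0, "cache_write_tokens": 2400}
warm = {"input_tokens": 900, "cache_read_tokens": 2400, "cache_write_tokens": 0}
prices = {"input": 1.0, "cache_read": 0.1, "cache_write": 1.25}  # illustrative values only

for label, usage in (("cold run", cold), ("warm run", warm)):
    print(label, cache_shares(usage), round(input_cost(usage, prices), 6))
```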
Quick Start
From the repository root:
Next Steps
- Add a provider adapter that consumes redacted Claude usage metadata.
- Add a small JSONL fixture for documentation-only report examples.
- Add a companion notebook if the benchmark flow becomes more exploratory.
