docs: add per-engine TTL cache design spec
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
commit 59f1c85fc5 (parent 685088d3b0)
1 changed file with 219 additions and 0 deletions:
docs/superpowers/specs/2026-03-24-per-engine-ttl-cache-design.md (new file)
# Per-Engine TTL Cache — Design

## Overview

Replace the current merged-response cache with a per-engine response cache. Each engine's raw response is cached independently with a tier-based TTL, enabling stale-while-revalidate semantics and more granular freshness control.
## Cache Key Structure

```
samsa:resp:{engine}:{query_hash}
```

Where `query_hash` = SHA-256 of the shared request params (query, pageno, safesearch, language, time_range), truncated to 16 hex chars.

Example:

- `samsa:resp:wikipedia:a3f1b2c3d4e5f678`
- `samsa:resp:duckduckgo:a3f1b2c3d4e5f678`

The same query sent to Wikipedia and DuckDuckGo produces different cache keys, enabling independent TTLs per engine.
## Query Hash

Compute from shared request parameters:

```go
import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

func QueryHash(query string, pageno int, safesearch int, language, timeRange string) string {
	h := sha256.New()
	fmt.Fprintf(h, "q=%s|", query)
	fmt.Fprintf(h, "pageno=%d|", pageno)
	fmt.Fprintf(h, "safesearch=%d|", safesearch)
	fmt.Fprintf(h, "lang=%s|", language)
	if timeRange != "" {
		fmt.Fprintf(h, "tr=%s|", timeRange)
	}
	return hex.EncodeToString(h.Sum(nil))[:16]
}
```

Note: `engines` is NOT included because each engine has its own cache key prefix.
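Putting the prefix, engine name, and hash together gives the full cache key. A minimal sketch (the `cacheKey` helper is illustrative, not an existing function in the codebase):

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// QueryHash mirrors the spec's hash: shared params only, 16 hex chars.
func QueryHash(query string, pageno, safesearch int, language, timeRange string) string {
	h := sha256.New()
	fmt.Fprintf(h, "q=%s|", query)
	fmt.Fprintf(h, "pageno=%d|", pageno)
	fmt.Fprintf(h, "safesearch=%d|", safesearch)
	fmt.Fprintf(h, "lang=%s|", language)
	if timeRange != "" {
		fmt.Fprintf(h, "tr=%s|", timeRange)
	}
	return hex.EncodeToString(h.Sum(nil))[:16]
}

// cacheKey joins the samsa:resp prefix, engine name, and query hash.
func cacheKey(engine, queryHash string) string {
	return fmt.Sprintf("samsa:resp:%s:%s", engine, queryHash)
}

func main() {
	qh := QueryHash("golang generics", 1, 0, "en", "")
	fmt.Println(cacheKey("wikipedia", qh))
	fmt.Println(cacheKey("duckduckgo", qh))
}
```

Both keys share the same hash suffix but differ in the engine segment, which is what allows each engine's entry to expire on its own schedule.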
## Cached Data Format

Each cache entry stores:

```go
type CachedEngineResponse struct {
	Engine   string    // engine name
	Response []byte    // JSON-marshaled contracts.SearchResponse
	StoredAt time.Time // when cached (for staleness check)
}
```
## TTL Tiers

### Default Tier Assignments

| Tier | Engines | Default TTL |
|------|---------|-------------|
| `static` | wikipedia, wikidata, arxiv, crossref, stackoverflow, github | 24h |
| `api_general` | braveapi, youtube | 1h |
| `scraped_general` | google, bing, duckduckgo, qwant, brave | 2h |
| `news_social` | reddit | 30m |
| `images` | bing_images, ddg_images, qwant_images | 1h |
### TOML Override Format

```toml
[cache.ttl_overrides]
wikipedia = "48h" # override default 24h
reddit = "15m"    # override default 30m
```
## Search Flow

### 1. Parse Request

Extract the engine list from the planner, compute the shared `queryHash`.

### 2. Parallel Cache Lookups

For each engine, spawn a goroutine to check the cache:

```go
type engineCacheResult struct {
	engine    string
	resp      contracts.SearchResponse
	fromCache bool
	err       error
}

// For each engine, concurrently:
cached, hit := engineCache.Get(ctx, engine, queryHash)
if hit && !isStale(cached) {
	return cached.Response, nil // fresh cache hit
}
if hit && isStale(cached) {
	go refreshInBackground(engine, queryHash) // stale-while-revalidate
	return cached.Response, nil // return stale immediately
}
// cache miss: fetch fresh, cache only on success
fresh, err := engine.Search(ctx, req)
if err != nil {
	return contracts.SearchResponse{}, err
}
engineCache.Set(ctx, engine, queryHash, fresh)
return fresh, nil
```
### 3. Classify Each Engine

- **Cache miss** → fetch fresh immediately
- **Cache hit, fresh** → use cached
- **Cache hit, stale** → use cached, fetch fresh in background (stale-while-revalidate)

### 4. Background Refresh

When a stale cache hit occurs:

1. Return stale data immediately
2. Spawn goroutine to fetch fresh data
3. On success, overwrite cache with fresh data
4. On failure, log and discard (stale data already returned)
### 5. Merge

Collect all engine responses (cached + fresh), merge via the existing `MergeResponses`.

### 6. Write Fresh to Cache

For engines that were fetched fresh, write to cache with their tier TTL.
## Staleness Check

```go
func isStale(cached CachedEngineResponse, tier TTLTier) bool {
	return time.Since(cached.StoredAt) > tier.Duration
}
```
## Tier Resolution

```go
type TTLTier struct {
	Name     string
	Duration time.Duration
}

func EngineTier(engineName string) TTLTier {
	if override := ttlOverrides[engineName]; override > 0 {
		return TTLTier{Name: engineName, Duration: override}
	}
	return defaultTiers[engineName] // from hardcoded map above
}
```
## New Files

### `internal/cache/engine_cache.go`

`EngineCache` struct wrapping `*Cache` with tier-aware `Get`/`Set` methods:

```go
type EngineCache struct {
	cache     *Cache
	overrides map[string]time.Duration
	tiers     map[string]TTLTier
}

func (ec *EngineCache) Get(ctx context.Context, engine, queryHash string) (CachedEngineResponse, bool)
func (ec *EngineCache) Set(ctx context.Context, engine, queryHash string, resp contracts.SearchResponse)
```

### `internal/cache/tiers.go`

Tier definitions and the `EngineTier(engineName string)` function.
## Modified Files

### `internal/cache/cache.go`

- Rename `Key()` to `QueryHash()` and add the `Engine` prefix externally
- `Get`/`Set` remain for favicon caching (unchanged)

### `internal/search/service.go`

- Replace `*Cache` with `*EngineCache`
- Parallel cache lookups with goroutines
- Stale-while-revalidate background refresh
- Merge collected responses

### `internal/config/config.go`

Add a `TTLOverrides` field:

```go
type CacheConfig struct {
	// ... existing fields ...
	TTLOverrides map[string]time.Duration
}
```
## Config Example

```toml
[cache]
enabled = true
url = "valkey://localhost:6379/0"
default_ttl = "5m"

[cache.ttl_overrides]
wikipedia = "48h"
reddit = "15m"
braveapi = "2h"
```
## Error Handling

- **Cache read failure**: Treat as cache miss, fetch fresh
- **Cache write failure**: Log warning, continue without caching for that engine
- **Background refresh failure**: Log error, discard (stale data already returned)
- **Engine failure**: Continue with other engines, report in `unresponsive_engines`
## Testing

1. **Unit tests** for `QueryHash()` consistency
2. **Unit tests** for `EngineTier()` with overrides
3. **Unit tests** for `isStale()` boundary conditions
4. **Integration tests** for cache hit/miss/stale scenarios using a mock Valkey
## Out of Scope

- Cache invalidation API (future work)
- Dog-pile prevention (future work)
- Per-engine cache size limits (future work)