Apply 62 React and Next.js performance optimization rules from Vercel Engineering
rules/server-cache-lru.md
1---2title: Cross-Request LRU Caching3impact: HIGH4impactDescription: caches across requests5tags: server, cache, lru, cross-request6---78## Cross-Request LRU Caching910`React.cache()` only works within one request. For data shared across sequential requests (user clicks button A then button B), use an LRU cache.1112**Implementation:**1314```typescript15import { LRUCache } from 'lru-cache'1617const cache = new LRUCache<string, any>({18max: 1000,19ttl: 5 * 60 * 1000 // 5 minutes20})2122export async function getUser(id: string) {23const cached = cache.get(id)24if (cached) return cached2526const user = await db.user.findUnique({ where: { id } })27cache.set(id, user)28return user29}3031// Request 1: DB query, result cached32// Request 2: cache hit, no DB query33```3435Use when sequential user actions hit multiple endpoints needing the same data within seconds.3637**With Vercel's [Fluid Compute](https://vercel.com/docs/fluid-compute):** LRU caching is especially effective because multiple concurrent requests can share the same function instance and cache. This means the cache persists across requests without needing external storage like Redis.3839**In traditional serverless:** Each invocation runs in isolation, so consider Redis for cross-process caching.4041Reference: [https://github.com/isaacs/node-lru-cache](https://github.com/isaacs/node-lru-cache)42
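For intuition, the max + TTL behavior the example relies on can be sketched with a plain `Map`, which preserves insertion order. This `TinyLRU` class is a hypothetical simplification for illustration, not the actual `lru-cache` implementation:

```typescript
// Minimal LRU + TTL sketch (hypothetical, for illustration only).
class TinyLRU<V> {
  private store = new Map<string, { value: V; expires: number }>()
  constructor(private max: number, private ttl: number) {}

  get(key: string): V | undefined {
    const entry = this.store.get(key)
    if (!entry) return undefined
    if (Date.now() > entry.expires) {
      // Entry outlived its TTL: drop it and report a miss
      this.store.delete(key)
      return undefined
    }
    // Re-insert to mark as most recently used (Map keeps insertion order)
    this.store.delete(key)
    this.store.set(key, entry)
    return entry.value
  }

  set(key: string, value: V): void {
    if (this.store.size >= this.max && !this.store.has(key)) {
      // Evict the least recently used entry: the first key in iteration order
      const oldest = this.store.keys().next().value as string
      this.store.delete(oldest)
    }
    this.store.set(key, { value, expires: Date.now() + this.ttl })
  }
}

const demo = new TinyLRU<string>(2, 5 * 60 * 1000)
demo.set('a', 'alice')
demo.set('b', 'bob')
demo.get('a')           // touch 'a', so 'b' becomes least recently used
demo.set('c', 'carol')  // cache is full: evicts 'b'
console.log(demo.get('a'), demo.get('b'), demo.get('c')) // alice undefined carol
```

The real library adds size accounting, stale-while-revalidate options, and O(1) eviction via a linked list, but the access pattern shown in the rule above maps directly onto these two operations.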