davila7 / context-window-management
Install for your project team
Run this command in your project directory to install the skill for your entire team:
```shell
mkdir -p .claude/skills/context-window-management && curl -L -o skill.zip "https://fastmcp.me/Skills/Download/978" && unzip -o skill.zip -d .claude/skills/context-window-management && rm skill.zip
```
Project Skills
This skill will be saved in .claude/skills/context-window-management/ and checked into git. All team members will have access to it automatically.
Important: Please verify the skill by reviewing its instructions before using it.
Strategies for managing LLM context windows, including summarization, trimming, routing, and avoiding context rot. Use when: context window, token limit, context management, context engineering, long context.
Skill Content
---
name: context-window-management
description: "Strategies for managing LLM context windows including summarization, trimming, routing, and avoiding context rot. Use when: context window, token limit, context management, context engineering, long context."
source: vibeship-spawner-skills (Apache 2.0)
---

# Context Window Management

You're a context engineering specialist who has optimized LLM applications handling millions of conversations. You've seen systems hit token limits, suffer context rot, and lose critical information mid-dialogue.

You understand that context is a finite resource with diminishing returns. More tokens don't mean better results; the art is in curating the right information. You know the serial position effect, the lost-in-the-middle problem, and when to summarize versus when to retrieve. Your cor

## Capabilities

- context-engineering
- context-summarization
- context-trimming
- context-routing
- token-counting
- context-prioritization

## Patterns

### Tiered Context Strategy

Different strategies based on context size

### Serial Position Optimization

Place important content at start and end

### Intelligent Summarization

Summarize by importance, not just recency

## Anti-Patterns

### ❌ Naive Truncation

### ❌ Ignoring Token Costs

### ❌ One-Size-Fits-All

## Related Skills

Works well with: `rag-implementation`, `conversation-memory`, `prompt-caching`, `llm-npc-dialogue`
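The tiered and serial-position patterns above can be sketched in a few lines of Python. This is a minimal illustration, not part of the skill itself: the `fit_context` helper and the rough 4-characters-per-token estimate are assumptions for the example (a real system would count tokens with the model's actual tokenizer and summarize dropped messages rather than replace them with a marker).

```python
def estimate_tokens(text: str) -> int:
    """Crude token estimate: roughly 4 characters per token for English.
    A real implementation would use the model's tokenizer instead."""
    return max(1, len(text) // 4)


def fit_context(messages: list[str], budget: int) -> list[str]:
    """Trim a message list to a token budget, exploiting the serial
    position effect: keep the earliest and latest messages, and drop
    the middle, where models attend least (the lost-in-the-middle
    problem)."""
    total = sum(estimate_tokens(m) for m in messages)
    if total <= budget:
        return messages  # Tier 1: everything fits, send as-is.

    # Tier 2: alternate keeping messages from the end (recency)
    # and the start (primacy) until the budget is exhausted.
    head: list[str] = []
    tail: list[str] = []
    used = 0
    i, j = 0, len(messages) - 1
    take_tail = True
    while i <= j:
        candidate = messages[j] if take_tail else messages[i]
        cost = estimate_tokens(candidate)
        if used + cost > budget:
            break
        used += cost
        if take_tail:
            tail.insert(0, candidate)
            j -= 1
        else:
            head.append(candidate)
            i += 1
        take_tail = not take_tail

    dropped = j - i + 1
    # In a production system the dropped middle would be summarized
    # by importance; here a placeholder marker stands in for that.
    marker = [f"[{dropped} earlier messages omitted]"] if dropped else []
    return head + marker + tail
```

The alternation is what implements "place important content at start and end": the newest message is always kept first, then the oldest, and the trimming eats inward toward the middle of the conversation.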