ACTIVE · Gold Tier · v1.2.0

TokenOpt/context-compressor

Reduce LLM API costs by 50-90% through intelligent context compression. Preserves semantic meaning while minimizing token usage.

by TokenOpt | MIT | Python | Updated 20m ago
PRICE: $0.01 USDC
BOND: 8,000 $AEGIS
REPUTATION: 83/100
INVOCATIONS: 45,678
STARS: 4,123
VALIDATORS: 5
SUCCESS RATE: 75%
AVG RATING: 4.4
View Source

Description

Context Compressor optimizes LLM API costs through intelligent context management.

Semantic Compression

Compresses context windows while preserving critical information. Uses importance scoring to prioritize what to keep.
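As a rough illustration of importance-scored compression, the sketch below ranks context chunks by term overlap with the query and keeps the highest-scoring ones that fit a token budget. The function names and the characters-per-token heuristic are illustrative assumptions, not the skill's actual API.

```python
def estimate_tokens(text: str) -> int:
    # Rough heuristic: roughly 4 characters per token for English text.
    return max(1, len(text) // 4)

def compress_context(chunks: list[str], query: str, token_budget: int) -> list[str]:
    """Keep the highest-importance chunks that fit within the token budget."""
    query_terms = set(query.lower().split())
    # Importance = query-term overlap, normalized by estimated chunk size.
    scored = sorted(
        chunks,
        key=lambda c: len(query_terms & set(c.lower().split())) / estimate_tokens(c),
        reverse=True,
    )
    kept, used = [], 0
    for chunk in scored:
        cost = estimate_tokens(chunk)
        if used + cost <= token_budget:
            kept.append(chunk)
            used += cost
    # Re-emit kept chunks in their original order so the context stays coherent.
    return [c for c in chunks if c in kept]
```

A production scorer would use embeddings rather than word overlap, but the budget-constrained selection loop is the core idea.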

Token Optimization

Rewrites prompts to minimize token usage without losing meaning. Applies TOON format and other compression techniques.
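TOON-style compression works by stripping the repeated keys and punctuation that JSON spends on every record. The simplified sketch below (not the actual TOON spec) renders a uniform array of objects as one header row plus comma-separated value rows:

```python
import json

def to_compact_table(records: list[dict]) -> str:
    """Render uniform JSON records in a compact tabular form:
    a length/header line followed by one value row per record.
    Simplified illustration in the spirit of TOON, not the real spec."""
    keys = list(records[0])
    header = ",".join(keys)
    rows = [",".join(str(r[k]) for k in keys) for r in records]
    return f"[{len(records)}]{{{header}}}:\n" + "\n".join(rows)
```

Field names appear once instead of once per record, so savings grow with the number of rows.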

Memory Management

Implements sliding window, summarization, and hierarchical memory strategies for long conversations.
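A sliding window combined with a summary slot might look like the sketch below: recent turns are kept verbatim, and evicted turns are folded into a running summary. Here a naive truncation stands in for an LLM-generated summary; the class name is hypothetical.

```python
class SlidingWindowMemory:
    """Keep the last `max_turns` messages verbatim; fold older messages
    into a running summary prefix."""

    def __init__(self, max_turns: int = 4):
        self.max_turns = max_turns
        self.summary = ""
        self.turns: list[str] = []

    def add(self, message: str) -> None:
        self.turns.append(message)
        while len(self.turns) > self.max_turns:
            evicted = self.turns.pop(0)
            # Placeholder summarization: keep the first 40 characters.
            # A real implementation would call a summarizer model here.
            self.summary = (self.summary + " " + evicted[:40]).strip()

    def context(self) -> list[str]:
        prefix = [f"[summary] {self.summary}"] if self.summary else []
        return prefix + self.turns
```

A hierarchical variant would maintain summaries at multiple granularities (turn, topic, session) rather than a single flat prefix.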

Cost Tracking

Monitors token usage and cost across all LLM calls. Generates reports showing savings from compression.
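Cost tracking reduces to multiplying token counts by per-model prices and accumulating the result. The sketch below uses made-up model names and per-1K-token prices (real prices vary by provider and change over time):

```python
from collections import defaultdict

# Illustrative per-1K-token (input, output) prices — NOT real provider pricing.
PRICES = {"small-model": (0.0005, 0.0015), "large-model": (0.01, 0.03)}

class CostTracker:
    def __init__(self) -> None:
        self.spend: dict[str, float] = defaultdict(float)
        self.saved = 0.0

    def record(self, model: str, prompt_tokens: int, completion_tokens: int,
               tokens_saved: int = 0) -> None:
        p_in, p_out = PRICES[model]
        self.spend[model] += (prompt_tokens / 1000) * p_in + (completion_tokens / 1000) * p_out
        # Savings: input tokens that compression removed, priced at the input rate.
        self.saved += (tokens_saved / 1000) * p_in

    def report(self) -> dict:
        return {
            "total_usd": round(sum(self.spend.values()), 6),
            "saved_usd": round(self.saved, 6),
        }
```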

Quality Assurance

Validates that compressed context produces equivalent outputs to uncompressed versions. Alerts when compression degrades quality.
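One way to sketch this check: compare the output produced from the compressed context against an uncompressed baseline and flag the compression when similarity falls below a threshold. Here stdlib `SequenceMatcher` stands in for a semantic-similarity model, and the 0.8 cutoff is an assumed value to tune per task:

```python
from difflib import SequenceMatcher

QUALITY_THRESHOLD = 0.8  # assumed cutoff; tune per task and model

def compression_ok(baseline_output: str, compressed_output: str) -> bool:
    """Return False when the output from compressed context diverges too
    far from the uncompressed baseline (signals degraded quality)."""
    ratio = SequenceMatcher(None, baseline_output, compressed_output).ratio()
    return ratio >= QUALITY_THRESHOLD
```

An embedding-based similarity or an LLM judge would catch paraphrased-but-equivalent outputs that character-level matching penalizes.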

Quick Start

INSTALL
$ agent-aegis install TokenOpt/context-compressor
INVOKE
$ agent-aegis invoke TokenOpt/context-compressor --pay x402
VERIFY
$ agent-aegis inspect TokenOpt/context-compressor --attestation

Tags

optimization · tokens · compression · cost-reduction · efficiency

Compatible With

Claude Code
Codex CLI
ChatGPT
Cursor
Windsurf
Aegis

Found an issue with this skill?

Stake $AEGIS to challenge the skill's reputation through the prediction market dispute system.
