JSXN

Your React components eat context window for breakfast.
JSXN compresses them so your AI reads ~40% more code.

An MCP server for Claude Code, Cursor, Windsurf, VS Code, Zed, and more.

Example: TaskList.tsx (195 tokens), each line compressed.

10 deterministic transforms, applied top to bottom. Fully reversible.
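
To make "fully reversible" concrete, here is one line of JSX and the JSXN line it becomes, lifted from the LoginForm walkthrough below. Nothing here is new notation; it is just the pairing that example already shows, held in strings so the comparison is self-contained.

  // One JSX line as it lives on disk:
  const jsx = `<Button onClick={() => submit()}>Submit</Button>`;

  // The same line after encoding, as the model receives it:
  const jsxn = `Button {onClick:() => submit()} "Submit"`;

  // "Reversible" means decoding `jsxn` yields the original `jsx` line again,
  // so no information is lost in either direction.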

How it works

See the full flow
  1. You type something in the chat. Your prompt: "Fix the bug in LoginForm — the submit button doesn't disable while loading."
  2. Your message is sent to the AI model. Along with it, the client sends every available tool the model can call — the built-in file reader, the editor, the terminal, etc. If the JSXN MCP is installed, read_jsxn and write_jsxn appear alongside the built-in tools. The model sees all of them.
  3. The model decides it needs to read your file. It sees both the built-in reader and read_jsxn. It reads the descriptions, finds “~40% fewer tokens — use this instead of reading JSX/TSX files directly”, and picks read_jsxn. Tool call from the model: read_jsxn({ path: "src/LoginForm.tsx" })
  4. The client routes the call to the JSXN MCP server. The server reads LoginForm.tsx from disk, runs it through the encoder, and returns compressed JSXN instead of the raw source.
    Without JSXN — raw file, ~195 tokens
    import { useState } from "react";
    import { Button } from "./ui/Button";

    export default function LoginForm() {
      const [loading, setLoading] = useState(false);
      return (
        <form className="login-form">
          <input className="field" />
          <Button onClick={() => submit()}>
            Submit
          </Button>
        </form>
      );
    }
    With JSXN — what the model receives, ~116 tokens
    @I react: useState
    @I ./ui/Button: Button
    export default LoginForm()
    @state loading = false
    ---
    form.login-form
      input.field
      Button {onClick:() => submit()} "Submit"
  5. The model reads the JSXN and understands it natively — same information, fewer tokens. It figures out the bug and prepares the fix.
  6. To save the fix, the model calls write_jsxn. The tool description includes the full JSXN format reference, so the model writes its changes in JSXN notation — a 1:1 mapping to JSX, no information lost. Tool call from the model:
     write_jsxn({ path: "src/LoginForm.tsx", code: `@I react: useState
       @I ./ui/Button: Button
       export default LoginForm()
       @state loading = false
       ---
       form.login-form
         input.field
         Button {onClick:() => submit(), disabled:loading} "Submit"` })
  7. The client routes this call to the JSXN MCP server. The server decodes the JSXN back to standard JSX and writes the normal file to disk. You see clean, familiar code. (A sketch of how a server can register both tools follows this list.) What gets written to src/LoginForm.tsx:
     import { useState } from "react";
     import { Button } from "./ui/Button";

     export default function LoginForm() {
       const [loading, setLoading] = useState(false);
       return (
         <form className="login-form">
           <input className="field" />
           <Button onClick={() => submit()} disabled={loading}>
             Submit
           </Button>
         </form>
       );
     }
  8. Your files never change format. JSXN only exists in the traffic between the AI model and your codebase — invisible to you, your team, and your git history.
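
To make steps 4 and 7 tangible, here is a minimal sketch of an MCP server that registers the two tools, written against the official TypeScript MCP SDK (@modelcontextprotocol/sdk) with zod for the input schemas. This is not the jsx-notation-mcp source: encodeToJsxn and decodeToJsx are placeholder names standing in for the real encoder and decoder, shown here as identity functions just so the sketch runs.

  import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
  import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
  import { z } from "zod";
  import { readFile, writeFile } from "node:fs/promises";

  // Placeholders for the real 10-transform encoder/decoder (identity here, so the file runs):
  const encodeToJsxn = (jsx: string): string => jsx;
  const decodeToJsx = (jsxn: string): string => jsxn;

  const server = new McpServer({ name: "jsx-notation", version: "0.0.0" });

  // Step 4: read the file from disk, return compressed JSXN to the model.
  server.tool(
    "read_jsxn",
    "~40% fewer tokens — use this instead of reading JSX/TSX files directly",
    { path: z.string() },
    async ({ path }) => {
      const jsx = await readFile(path, "utf8");
      return { content: [{ type: "text", text: encodeToJsxn(jsx) }] };
    }
  );

  // Step 7: accept JSXN from the model, decode it, write plain JSX to disk.
  server.tool(
    "write_jsxn",
    "Write a JSX/TSX file by sending JSXN; it is decoded back to normal JSX on disk",
    { path: z.string(), code: z.string() },
    async ({ path, code }) => {
      await writeFile(path, decodeToJsx(code), "utf8");
      return { content: [{ type: "text", text: `Wrote ${path}` }] };
    }
  );

  // Clients launch this over stdio when they run `npx jsx-notation-mcp`.
  await server.connect(new StdioServerTransport());

The point the sketch makes is the same one as steps 4 and 7: encoding happens on read, decoding on write, and the file on disk stays plain JSX the whole time.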

JSXN does not compress code you paste in the chat. It only kicks in when the AI calls a tool to read files from your project on its own.

Get started

One-click install

Claude Code
claude mcp add jsx-notation -- npx jsx-notation-mcp

VS Code — .vscode/mcp.json
{ "servers": { "jsx-notation": { "command": "npx", "args": ["jsx-notation-mcp"] } } }

Cursor — ~/.cursor/mcp.json
{ "mcpServers": { "jsx-notation": { "command": "npx", "args": ["jsx-notation-mcp"] } } }

Windsurf — ~/.codeium/windsurf/mcp_config.json
{ "mcpServers": { "jsx-notation": { "command": "npx", "args": ["jsx-notation-mcp"] } } }
Cline, Continue, Amazon Q, and JetBrains IDEs also use the same mcpServers format. Paste the JSON above into your MCP settings.

Zed — ~/.config/zed/settings.json
{ "context_servers": { "jsx-notation": { "command": "npx", "args": ["jsx-notation-mcp"] } } }