MCP server
Storylayer ships a Streamable HTTP MCP server at https://app.storylayer.ai/api/mcp. Connect any spec-compliant MCP client and start driving Storylayer with natural language.
Authentication
The MCP endpoint accepts the same bearer tokens as the REST API:
- Local clients (Claude Desktop, MCP Inspector): paste a Personal Access Token.
- Hosted clients (Claude.ai connectors, ChatGPT GPTs): the client walks through OAuth 2.1 + Dynamic Client Registration automatically. See OAuth 2.1.
The MCP route advertises its protected-resource metadata via WWW-Authenticate on every 401, so hosted clients discover the auth flow without manual setup.
Claude Desktop
Drop the snippet below into Claude Desktop's config file, replacing sl_pat_… with your token. Restart Claude Desktop; under Settings → Developer → MCP Servers you should see storylayer with all 22 tools. If you upgraded recently and still see the old single-image schema, fully quit Claude Desktop (don't just close the window) and reopen — Claude caches MCP tool definitions per session.
Config file location
- macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
- Windows: `%APPDATA%\Claude\claude_desktop_config.json`
- Linux: `~/.config/Claude/claude_desktop_config.json`
Snippet
```json
{
  "mcpServers": {
    "storylayer": {
      "command": "npx",
      "args": [
        "-y",
        "mcp-remote",
        "https://app.storylayer.ai/api/mcp",
        "--header",
        "Authorization: Bearer sl_pat_..."
      ]
    }
  }
}
```

The space between `Bearer` and the token is required (HTTP spec).
Claude.ai connector
Claude.ai's hosted connector flow uses OAuth 2.1 with Dynamic Client Registration. Add a custom connector and point it at:
`https://app.storylayer.ai/api/mcp`

Claude.ai discovers `/.well-known/oauth-protected-resource`, registers itself via `POST /oauth/register`, and walks the user through consent on `/oauth/authorize`. No token paste required.
ChatGPT custom GPTs
ChatGPT's MCP connector flow follows the same protected-resource discovery. Point a custom GPT or Action at the MCP URL above and authorize when prompted. The token issued is an OAuth access token (sl_oat_…), scoped to whatever the user approved.
Carousels & binary uploads
The most common agent workflow is "user hands me 7 slides, ship them as a carousel." Storylayer covers this end-to-end — no separate image hosting required. Three upload paths cover every agent shape from the most permissive (Claude Desktop) to the most context-constrained (Cowork, ChatGPT MCP). Pick the path that matches your runtime:
Path A — whole-file upload (Claude Desktop, big windows)
- For each slide: call `upload_media` with `data_base64` + `mime_type` + `filename`. You get back a public URL hosted on Storylayer storage.
- Call `create_story` with `post_type: "carousel"` and `slides: [...]` (or `media_urls: [...]`) carrying the URLs from step 1.
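Path A's payload assembly can be sketched in a few lines of Python. The field names (`data_base64`, `mime_type`, `filename`) come from the tool description above; the helper functions themselves and the cap check are illustrative, not part of any Storylayer SDK.

```python
import base64

def build_upload_media_args(filename: str, mime_type: str, data: bytes) -> dict:
    """Assemble arguments for a whole-file upload_media call (Path A)."""
    encoded = base64.b64encode(data).decode("ascii")
    return {"filename": filename, "mime_type": mime_type, "data_base64": encoded}

def fits_per_call_cap(args: dict, cap_bytes: int) -> bool:
    """Check the base64 payload (~4/3 the binary size) against a host's per-call cap."""
    return len(args["data_base64"]) <= cap_bytes

args = build_upload_media_args("slide-1.png", "image/png", b"fake png bytes")
```

If `fits_per_call_cap` fails for your host, fall through to Path B or Path C below.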
Path B — chunked upload (per-call cap, generous total budget)
Some MCP hosts cap each individual tool-call payload at ~25K tokens (~80KB base64) but allow generous total conversation context. The chunked flow splits the binary into small base64 pieces that each fit under the per-call cap:
```
# 1. Open a session
upload_media_init({ filename: "slide-1.png", mime_type: "image/png", total_size_bytes: 487213 })
→ { session_id, expires_at, recommended_chunk_bytes: 49152, max_chunk_bytes: 262144 }

# 2. Append chunks (repeat). Send ~48 KB binary (~64 KB base64 ≈ 16K tokens) per call.
upload_media_chunk({ session_id, chunk_index: 0, data_base64: "..." })
upload_media_chunk({ session_id, chunk_index: 1, data_base64: "..." })
# ...etc until all bytes are sent

# 3. Finalize → returns the hosted URL
upload_media_finalize({ session_id })
→ { asset, file_url: "https://...png", bytes }

# 4. Pass file_url into create_story.slides[]
```

Caveat: chunked upload sends every byte through the agent's context window, so the total context cost still scales linearly with file size. For agents that bill conversation-history accumulation (Cowork-style), use Path C instead.
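The split/append loop can be sketched in Python. `CHUNK_BYTES` follows the `recommended_chunk_bytes` hint above; the helper names are illustrative, and `reassemble` only models what finalize does server-side.

```python
import base64

CHUNK_BYTES = 48 * 1024  # 49152 bytes binary → ~64 KB base64, under a ~25K-token per-call cap

def split_into_chunks(data: bytes, chunk_bytes: int = CHUNK_BYTES):
    """Yield (chunk_index, data_base64) pairs, one per upload_media_chunk call."""
    for index, start in enumerate(range(0, len(data), chunk_bytes)):
        piece = data[start:start + chunk_bytes]
        yield index, base64.b64encode(piece).decode("ascii")

def reassemble(chunks):
    """What finalize does server-side: ordered concatenation of decoded chunks."""
    return b"".join(base64.b64decode(b64) for _, b64 in sorted(chunks))
```

Because each chunk is base64-encoded independently, the server must decode per chunk and join the raw bytes, not join the base64 strings.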
Path C — presigned PUT (the complete fix for context-bounded agents)
For Cowork, ChatGPT MCP, and any agent that pays for every byte it reads, the only complete fix is a server-side fetch pattern: hand the agent a short-lived URL that bypasses the conversation context entirely. Per-slide context cost is constant (~500 bytes), regardless of file size.
```
# 1. Mint a presigned PUT URL via MCP
request_upload_url({ filename: "slide-1.png", mime_type: "image/png" })
→ {
  upload_url: "https://app.storylayer.ai/api/v1/media/uploads/{token}",
  asset_intent_id: "...",
  expires_at: "...",        # 15 minutes from now
  max_bytes: 10485760
}

# 2. Have your shell tool PUT the file directly. NO Authorization header —
#    the token in the URL is the auth.
curl -X PUT "$UPLOAD_URL" \
  -H "Content-Type: image/png" \
  --data-binary @slide-1.png
→ { ok: true, file_url: "https://...png", asset: { ... }, bytes: 487213 }

# 3. Pass file_url into create_story.slides[]
```

The bytes never enter the agent's conversation context; the agent reads only the small JSON request and response (~500 bytes total). This is the path to use for any publisher-grade carousel where total volume would exceed the agent's context budget.
Idempotent: replaying a successful PUT returns the same asset (won't double-upload). The signed token is one-shot — once the upload completes, the token can no longer be used. To recover a lost PUT response: GET /api/v1/media/uploads/{token} returns the current intent state with file_url if the upload succeeded.
Cardinality limits: Instagram + Facebook 2–10 slides; X up to 4. Each slide accepts `media_url`, `alt_text`, `caption_overlay`, and `swipe_through_url` (per-slide click-out URL). All three upload paths share the same size cap: 10 MB per asset.
Pin-to-grid (Instagram)
Two shapes are accepted on create_story, plus a sugar field:

```
pin_to_grid: true                                          // pin indefinitely
pin_to_grid: { pin: true, until: "2026-05-30T00:00:00Z" }  // bounded pin
pin_until_at: "2026-05-30T00:00:00Z"                       // sugar alongside pin_to_grid: true
```

Comment ask
Pass a structured ask via `comment_ask`. Default behavior: the ask is auto-appended to the caption tail at publish (idempotent — Storylayer skips the append if the ask already appears in the caption). Set `comment_ask_mode: "metadata"` to keep the ask analytics-only and write it into `primary_caption` yourself.
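When writing the ask into the caption yourself, the documented default behavior can be mirrored client-side. A minimal sketch; the function name is hypothetical:

```python
def append_comment_ask(caption: str, ask: str) -> str:
    """Idempotently append a comment ask to the caption tail:
    skip the append when the ask already appears in the caption."""
    if ask in caption:
        return caption
    return f"{caption.rstrip()}\n\n{ask}" if caption.strip() else ask
```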
Destination URL (link-in-bio)
destination_url is per-post outbound destination metadata. Storylayer doesn't update your Instagram bio link, but the field travels through to content_queue and powers per-post traffic attribution.
X threads
Set `caption_format: "thread"` inside `channel_overrides.x` and Storylayer auto-splits the caption on paragraph breaks (default separator `\n\n`), posting a chained reply thread on X. The first tweet carries any media; subsequent tweets are text-only replies.
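The split rule can be sketched as follows. One assumption beyond the text above: empty segments produced by repeated separators are dropped rather than posted as blank tweets.

```python
def split_thread(caption: str, separator: str = "\n\n") -> list[str]:
    """Split a caption into thread segments on paragraph breaks.
    The first segment becomes the lead tweet (carrying any media);
    the rest post as text-only chained replies."""
    return [part.strip() for part in caption.split(separator) if part.strip()]
```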
Bulk calendar load
Use create_stories_bulk to load up to 50 stories at once — per-item failures don't block the rest of the batch. Each item accepts the same fields as create_story.
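For calendars larger than one call allows, batching is a one-liner. The limit of 50 comes from the docs above; the helper name is illustrative.

```python
BULK_LIMIT = 50  # create_stories_bulk accepts up to 50 items per call

def batch_stories(stories: list[dict], limit: int = BULK_LIMIT) -> list[list[dict]]:
    """Split an arbitrarily long story list into bulk-call-sized batches."""
    return [stories[i:i + limit] for i in range(0, len(stories), limit)]
```

Issue one `create_stories_bulk` call per batch; since per-item failures don't block the batch, collect each call's per-item results before retrying stragglers.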
Tools reference
Every tool runs through the same auth + scope enforcement as the REST API. If a tool needs a scope the token doesn't have, the call returns an insufficient_scope error and the agent can prompt the user to re-auth.
| Tool | What it does |
|---|---|
| `whoami` | Inspect the current token's principal + scopes. |
| `list_projects` | Every project the token can see. |
| `list_templates` | Visual templates available to the project. |
| `list_social_connections` | Connected channels per project (no secrets). |
| `list_moments` | Detected moments awaiting review. |
| `list_stories` | Drafts, scheduled, published — same data as the dashboard. |
| `get_story` | Full story payload + variants. |
| `create_story` | Draft or schedule single posts and 2–10 slide carousels with per-channel overrides + tz-aware scheduling. |
| `create_stories_bulk` | Create up to 50 stories in one call. Per-item failures don't block the rest of the batch. |
| `preview_story` | Resolved per-channel view — exactly what would post to each channel after overrides. |
| `schedule_story` | Pin a story to a UTC instant or local-time + IANA timezone. Bridges into the publishing queue. |
| `publish_story` | Ship a story right now (or at a future time). Bridges into the publishing queue. |
| `list_media` | Project asset library. |
| `upload_media_from_url` | Register a remote URL as a brand asset (no re-host). |
| `upload_media` | Whole-file binary upload (one base64 string). Use when the agent's per-tool payload cap fits the asset (e.g. Claude Desktop). |
| `upload_media_init` | Open a chunked upload session. Use when per-call payload is capped but total context budget isn't (Claude.ai connector). |
| `upload_media_chunk` | Append a small base64 chunk to a session. Recommended 48 KB binary (~16K tokens base64) per call. |
| `upload_media_finalize` | Concatenate chunks, upload to storage, return file_url. Pass the URL into create_story's slides[]. |
| `request_upload_url` | Mint a presigned PUT URL the agent hands to a shell tool (curl) so bytes stream straight to storage WITHOUT touching the conversation context. The complete fix for context-bounded agents (Cowork, ChatGPT MCP) and any 10+ MB asset. |
| `list_webhooks` | All endpoints subscribed to events. |
| `create_webhook` | Subscribe a URL to events. |
Rate limits
MCP requests share the same per-token bucket as the REST API. PATs default to 120 req/min; OAuth access tokens to 60 req/min. See Rate limits.
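Agents that fan out many tool calls (bulk loads, chunked uploads) can pace themselves client-side. A sliding-window sketch against the per-minute budget; the class is illustrative, not part of any SDK, and the limits (120/min for PATs, 60/min for OAuth tokens) come from the paragraph above.

```python
import time

class MinuteBudget:
    """Track requests in a 60-second sliding window so an agent can
    stay under its per-token budget before firing the next call."""

    def __init__(self, limit_per_min: int, clock=time.monotonic):
        self.limit = limit_per_min
        self.clock = clock            # injectable for testing
        self.sent: list[float] = []

    def wait_time(self) -> float:
        """Seconds to wait before the next request stays under the limit."""
        now = self.clock()
        self.sent = [t for t in self.sent if now - t < 60.0]
        if len(self.sent) < self.limit:
            return 0.0
        return 60.0 - (now - self.sent[0])

    def record(self) -> None:
        """Call after each successful request."""
        self.sent.append(self.clock())
```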
Troubleshooting
- Tools don't appear in Claude Desktop — confirm the JSON file parses (`cat ~/Library/Application\ Support/Claude/claude_desktop_config.json | jq .`) and that the token isn't revoked.
- 401 from MCP — your token expired or was revoked. Generate a new one at /dashboard/developers.
- insufficient_scope errors — the calling tool needs a scope your token doesn't have. Either regenerate the token with the right preset or, for OAuth, re-run the consent flow.
- Hosted client can't discover OAuth — verify `https://app.storylayer.ai/.well-known/oauth-protected-resource` returns JSON. If you're proxying, make sure `x-forwarded-host` is preserved.