Why this exists
Picking the right providers is the first friction point in a new project.
Stack's catalog has 39 curated providers across 12
categories — too many to hold in your head, too few to outsource to a
generic LLM. stack recommend closes the gap: free-text in,
ranked providers out, frozen to a reproducible Recipe that stack apply can replay.
Quick start
Three lines from a blank repo to a provisioned stack:
stack recommend "B2B SaaS with auth, AI, and payments" --save
# → .stack/recipes/b2b-saas-with-auth-ai-and-paymen.toml
stack apply b2b-saas-with-auth-ai-and-paymen
Two modes
Stack runs in whichever mode matches where you're typing. Neither mode ever calls a remote LLM from Stack itself.
| Mode | When it fires | What does the reasoning |
|---|---|---|
| Claude Code (MCP) | You're in a Claude / Cursor / Windsurf session and the agent calls stack_recommend. | Claude consumes Stack's retrieval output as grounded context and writes the rationale itself. Stack never calls an LLM. |
| Standalone CLI (local SLM) | You run stack recommend --synth in a terminal, no agent attached. | Stack speaks OpenAI-compatible HTTP to LM Studio (:1234) or Ollama (:11434). Missing endpoint? Falls back to retrieval-only output — command still exits 0. |
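The endpoint probe behind that fallback can be pictured with a short sketch (a hypothetical helper, not Stack's actual code): try each OpenAI-compatible base URL in turn, and treat "none reachable" as a signal to degrade to retrieval-only output rather than fail.

```python
import urllib.error
import urllib.request

# Default local endpoints: LM Studio (:1234) and Ollama (:11434).
CANDIDATES = ["http://localhost:1234/v1", "http://localhost:11434/v1"]

def pick_endpoint(candidates=CANDIDATES, timeout=0.5):
    """Return the first base URL whose /models route answers, else None.

    None means: emit the plain retrieval ranking and still exit 0,
    instead of erroring out.
    """
    for base in candidates:
        try:
            with urllib.request.urlopen(base + "/models", timeout=timeout):
                return base
        except (urllib.error.URLError, OSError):
            continue  # endpoint not running; try the next one
    return None
```

With neither server running, `pick_endpoint()` returns `None` and the caller falls back to retrieval-only output.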
stack recommend
Rank providers against a free-text description. Flags compose.
| Flag | Type | What it does |
|---|---|---|
| --save | boolean | Freeze the result as .stack/recipes/<id>.toml. The id is slugified from the query. Feed it to stack apply. |
| --synth | boolean | Call the local SLM for model-authored rationales. Silently degrades to retrieval-only when neither LM Studio nor Ollama is reachable. |
| --json | boolean | Machine-readable output. Same shape the MCP server returns to Claude. |
| --k | number | Max hits to return. Default 6. |
| --category | string | Restrict to one category: Database, Deploy, Cloud, AI, Analytics, Errors, Payments, Code, Tickets, Email, Auth. |
Default output
$ stack recommend "B2B SaaS with auth, AI, and payments"
┌ stack recommend
query: B2B SaaS with auth, AI, and payments
● Clerk Auth (4.13)
Drop-in auth + users. Secret key stored in Phantom.
add with: stack add clerk
● Stripe Payments (3.70)
Billing, subscriptions, tax. Restricted / test-mode secret key stored in Phantom.
add with: stack add stripe
● xAI AI (2.90)
Grok + tool use. Key verified on paste.
add with: stack add xai
● Sentry Errors (2.48)
Error + performance tracking. Creates a project and fetches its DSN.
add with: stack add sentry
● OpenAI AI (2.23)
GPT, Realtime, embeddings. Key verified against /v1/models on paste.
add with: stack add openai
● Anthropic AI (2.23)
Claude models + MCP. Key verified against the Messages API on paste.
add with: stack add anthropic
Top pick per category: Auth→clerk, Payments→stripe, AI→xai, Errors→sentry.
No matches surfaced for: Database, Deploy, Cloud, Analytics, Code, Tickets, Email.
Apply a chosen set with: stack add clerk stripe xai (or supply your own list).
Tip: rerun with `--save` to freeze as a recipe for `stack apply`.
│
└ Tell Claude or run `stack add <name>` to continue.
--save — freeze to a Recipe
$ stack recommend "B2B SaaS with auth" --save
● Clerk Auth (4.13)
● Sentry Errors (2.48)
● Supabase Database (2.16)
● Firebase Database (2.16)
● Turso Database (0.79)
Saved recipe: b2b-saas-with-auth (/path/to/.stack/recipes/b2b-saas-with-auth.toml)
Apply with: stack apply b2b-saas-with-auth
--synth — local-SLM rationales
With LM Studio or Ollama running, each hit gains a why: line authored by the model. Without either, the command
behaves exactly like the default — no prompts, no failures:
$ stack recommend "analytics + error tracking for a Next.js app" --synth
● PostHog Analytics (5.01)
Product analytics + session replay. Personal API key stored in Phantom.
why: PostHog gives you funnels, flags, and replay for a Next.js app in one SDK.
add with: stack add posthog
● Sentry Errors (4.77)
Error + performance tracking. Creates a project and fetches its DSN.
why: Sentry captures unhandled exceptions and Web Vitals without extra glue.
add with: stack add sentry
--json — machine-readable
Same shape the MCP server emits back to Claude. Good for scripts, CI, and your own tooling:
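As a sketch of that scripting use (a hypothetical helper; field names match the sample output below), here's one way the payload reduces to `stack add` commands, keeping the best-scoring hit per category:

```python
import json

def add_commands(payload: str) -> list[str]:
    """Reduce `stack recommend --json` output to `stack add` commands,
    keeping only the best-scoring hit in each category."""
    doc = json.loads(payload)
    best = {}  # category -> best hit seen so far
    for hit in doc["hits"]:
        cat = hit["category"]
        if cat not in best or hit["score"] > best[cat]["score"]:
            best[cat] = hit
    return [f"stack add {hit['name']}" for hit in best.values()]

sample = (
    '{"query": "q", "hits": ['
    '{"name": "clerk", "category": "Auth", "score": 4.125}, '
    '{"name": "stripe", "category": "Payments", "score": 3.697}]}'
)
print(add_commands(sample))  # → ['stack add clerk', 'stack add stripe']
```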
$ stack recommend "B2B SaaS with auth and payments" --json
{
"query": "B2B SaaS with auth and payments",
"hits": [
{
"name": "clerk",
"displayName": "Clerk",
"category": "Auth",
"authKind": "api_key",
"secrets": ["CLERK_SECRET_KEY"],
"blurb": "Drop-in auth + users. Secret key stored in Phantom.",
"score": 4.125,
"matched": ["auth"]
},
{
"name": "stripe",
"displayName": "Stripe",
"category": "Payments",
"authKind": "api_key",
"secrets": ["STRIPE_SECRET_KEY"],
"blurb": "Billing, subscriptions, tax. …",
"score": 3.697,
"matched": ["payments"]
}
],
"byCategory": { "Auth": [{ "name": "clerk", "score": 4.125 }], "…": [] },
"guidance": "Top pick per category: Auth→clerk, Payments→stripe. …"
}
--k — cap the hits
$ stack recommend "postgres database" --k 3
● Neon Database (5.76)
Serverless Postgres. Creates a project and pools the connection string.
● Supabase Database (5.49)
Postgres + Auth + Storage. Full upstream provisioning via the Management API.
● Turso Database (4.35)
Edge SQLite (libSQL). Creates a database in your default org.
--category — restrict to one category
$ stack recommend "serverless" --category Database
● Neon Database (5.76)
Serverless Postgres. Creates a project and pools the connection string.
● Turso Database (2.18)
Edge SQLite (libSQL). Creates a database in your default org.
stack apply
Replay a saved Recipe through the existing stack add pipeline,
then pre-wire Phantom rotation so the first provisioned credential lands
into an already-rotating envelope.
$ stack apply b2b-saas-with-auth
recipe b2b-saas-with-auth
query B2B SaaS with auth
providers clerk, sentry, supabase
› stack add clerk
› stack add sentry
› stack add supabase
envelopes 3 · webhooks 2
› stack doctor
✓ b2b-saas-with-auth applied.
| Flag | What it does |
|---|---|
| --noWire | Skip the Phantom-wire layer — pure stack add replay, no envelopes, no webhook stubs. Use when you want to opt out of the rotation-from-the-first-call moat. |
What gets wired
With the default flow (no --noWire) apply also writes:
- Phantom rotation envelopes — one per secret slot the recipe's providers declare. The envelope rotates on a cadence from day one, so the credential you just fetched is never the credential that ships to prod a week later.
- Webhook stubs — scaffolded verification + handler files for any provider the recipe includes with a known rotation/webhook contract: stripe, clerk, supabase, github. Drop them into your app; they're ready to verify signatures against Phantom-held secrets.
- Inline stack doctor — runs at the end so you see any red dots before apply exits.
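The verification core those stubs share can be sketched like this (an illustrative sketch only — each provider's real scheme differs in header format and timestamp handling, and the generated stub files are what you'd actually ship):

```python
import hashlib
import hmac

def verify_signature(secret: bytes, body: bytes, signature_hex: str) -> bool:
    """Constant-time check of an HMAC-SHA256 webhook signature.

    Recompute the MAC over the raw request body with the Phantom-held
    secret, then compare against the header value in constant time to
    avoid timing leaks.
    """
    expected = hmac.new(secret, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_hex)
```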
Re-running stack apply against the same recipe is safe — providers already present short-circuit, and the Phantom-wire pass de-duplicates envelopes by slot name.
MCP tools
Both commands are exposed as MCP tools so Claude Code (and any MCP-speaking editor) can drive the full loop from a coding session. See the full MCP integration docs for setup.
stack_recommend → stack recommend …
Retrieval-only. Returns ranked providers with scores, matched terms, and
per-category top picks. Pass save: true to also freeze a
Recipe to disk so you can follow up with stack_apply.
| Input | Type | Description |
|---|---|---|
| query (required) | string | Free-text description of the project. |
| k | number | Max results (default 6). |
| category | string | Restrict to one category. |
| save | boolean | Freeze to .stack/recipes/<id>.toml; returns the recipe id. |
stack_apply → stack apply …
Replay a saved Recipe. Runs stack add for each provider,
then pre-wires Phantom rotation envelopes + webhook stubs.
| Input | Type | Description |
|---|---|---|
| recipe_id (required) | string | Recipe id (filename stem in .stack/recipes/). |
| no_wire | boolean | Skip Phantom envelope + webhook wiring. |
In-session example
A real back-and-forth once Stack's MCP server is installed in Claude Code:
You > Build me a SaaS — auth, payments, error tracking, Postgres.
Claude > I'll pick the providers first and freeze a recipe so you can
replay this.
(calls stack_recommend { query: "SaaS with auth, payments,
error tracking, and Postgres", save: true })
Top picks: clerk (Auth), stripe (Payments), sentry (Errors),
neon (Database). Saved as recipe "saas-with-auth-payments-error".
Want me to apply it now? This runs `stack add` for each and
pre-wires Phantom rotation + webhook stubs.
You > Yes.
Claude > (calls stack_apply { recipe_id: "saas-with-auth-payments-error" })
✓ 4 providers provisioned · 7 rotation envelopes · 3 webhook stubs
All services healthy (stack doctor).
Recipe file format
Recipes are plain TOML under .stack/recipes/<id>.toml.
Safe to commit — provider names are stable, the file contains no secrets
(credentials live in Phantom).
id = "b2b-saas-with-auth"
query = "B2B SaaS with auth"
createdAt = "2026-04-19T02:22:44.810Z"
guidance = "Top pick per category: Auth→clerk, Errors→sentry, Database→supabase."
[[providers]]
name = "clerk"
rationale = "auth"
[[providers]]
name = "sentry"
rationale = "auth"
[[providers]]
name = "supabase"
rationale = "auth"
The id is the first ~40 chars of the query, slugified. The rationale starts as the matched terms from retrieval and gets
overwritten when you re-run with --synth or when Claude
synthesizes via MCP.
How retrieval works
Zero-dep BM25 + IDF scorer over the curated provider catalog, with a small
table of hand-authored synonyms (db → database, billing → payments, auth → users, …) so natural-language queries map to the terms we
actually index. Scoring is deterministic — same query, same ranking. The
catalog itself lives in packages/core/src/catalog.ts and is
the same source the providers page renders
from.
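The shape of that scorer can be sketched in a few lines (synonym expansion plus IDF weighting over term sets, without BM25's term-frequency saturation or length normalization — a simplification, not the actual implementation):

```python
import math

# Excerpt of the hand-authored synonym table.
SYNONYMS = {"db": "database", "billing": "payments"}

def score(query: str, docs: dict[str, set[str]]) -> list[tuple[str, float]]:
    """Rank docs (name -> indexed term set) by IDF-weighted term overlap.

    Deterministic: the same query always yields the same ranking.
    """
    n = len(docs)
    terms = {SYNONYMS.get(t, t) for t in query.lower().split()}
    def idf(t):
        df = sum(1 for d in docs.values() if t in d)
        return math.log(1 + (n - df + 0.5) / (df + 0.5)) if df else 0.0
    ranked = [(name, sum(idf(t) for t in terms if t in d))
              for name, d in docs.items()]
    return sorted(ranked, key=lambda x: -x[1])

docs = {
    "neon": {"serverless", "postgres", "database"},
    "stripe": {"payments", "billing", "tax"},
}
print(score("postgres db", docs)[0][0])  # → neon
```

Note how "db" expands to "database" before scoring, so a natural-language query still hits the indexed terms.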
FAQ
Can I use my own LLM?
Yes. Set STACK_LM_URL to any OpenAI-compatible endpoint. Stack
talks plain HTTP to LM Studio (localhost:1234) and Ollama
(localhost:11434) out of the box — no SDKs, no API keys in the
Stack process.
Does anything call Anthropic or OpenAI?
No. All synthesis is either local (your SLM) or delegated to Claude Code via MCP. Stack's repo has zero remote-LLM SDKs. The user's LLM keys belong to Claude, not to us.
What if the SLM isn't running?
--synth silently falls back to retrieval-only output. The
command still exits 0, the ranking is unchanged, and --save writes the same Recipe shape. No prompts, no errors —
retrieval alone is the floor, synthesis is the bonus.
What now?
Browse every provider
The full catalog retrieval runs against — 39 providers across 12 categories.
MCP integration
Wire Stack's MCP server into Claude Code, Cursor, Windsurf, or Zed.
CLI reference
Every stack subcommand with
flags and defaults.
Phantom Secrets
Where the rotation envelopes stack apply wires actually live.