Six-axis security scan
Static checks for SSRF, command-exec, and secret handling, plus an LLM-assisted prompt-injection red team that probes your tool definitions for escape paths a simple grep can't find.
Paste a GitHub URL. Get a graded report card across security, permissions, credential handling, maintenance, client compatibility, and docs — before you claude plugin install it into your agent.
A public 2026 scan of community MCP servers found SSRF in 36.7% of them and unsafe command-exec paths in 43%. SkillAudit catches both before you ship.
The problem
There are now 8,000+ MCP servers across a dozen registries, and new Claude skills land on Anthropic's official directory daily. Anthropic itself requires a security review before listing — but there is no neutral, fast, reproducible audit a skill author or a team buyer can run today. Authors guess what reviewers want; buyers install community skills blind, then discover a credential-stealing prompt injection in production.
How it works
Point it at a GitHub repo, an npm package, or an uploaded ZIP. Public scans are free; private repos connect via a single-repo OAuth scope, with no org-wide access required.
A static parse plus an LLM-assisted prompt-injection probe runs in about 60 seconds. The six-axis report card streams in as each check completes.
Embed a public trust badge on your README so directory reviewers and buyers see your grade at a glance — or wire the CI Action to gate every install on a minimum grade.
$ skillaudit scan github.com/user/mcp-weather
✓ Security       0 critical, 0 high, 1 low (informational)
✓ Permissions    requests only network:fetch, justified
✓ Credentials    no env-var echoes, no token logging
! Maintenance    last commit 87 days ago — flag for staleness
✓ Compatibility  Claude Code, Cursor, Windsurf, Codex
✓ Docs           README, runnable example, semver
Grade: A · embed badge: [skillaudit.dev/badge/user/mcp-weather]
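The badge at the end of that report is a one-line Markdown embed. A sketch of what it might look like in a README, assuming the badge path from the demo above serves an image (the alt text is illustrative, not a documented format):

![SkillAudit grade](https://skillaudit.dev/badge/user/mcp-weather)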
What you get
Static analysis covers SSRF, command-exec, and secret handling; an LLM-assisted prompt-injection red team then attacks your tool definitions the way a hostile document would, surfacing escape paths no grep can find.
Drop a Markdown badge on your README. Directory reviewers and buyers see your green grade before they read a line of code. Re-scans run on every push so the badge stays honest.
One line in your workflow blocks any PR from merging if the skill grade drops below your team's policy. An SBOM and audit log are included with every scan, so compliance reviews stop being a manual scramble.
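A sketch of that gate line, in the same transcript style as the demo above (the --min-grade flag and its exit behavior are assumptions about the planned CI Action, not a documented interface):

# CI step: a non-zero exit blocks the merge when the grade drops below policy
$ skillaudit scan . --min-grade B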
Per-client checks for Claude Code, Cursor, Windsurf, and Codex CLI. A Cursor-only quirk doesn't get reported as a Claude bug; a Claude-only feature doesn't fail your Codex install.
Pricing
$0/mo
For authors trying out a public scan.
Most popular
$19/mo
For indie authors and small teams shipping skills weekly.
$99/mo
For 10-100 person orgs adopting community skills.
Questions
How is this different from Snyk or Dependabot?
Snyk and Dependabot scan dependencies — they have no idea what your skill's prompt surface, MCP tool definitions, or credential handling actually do at runtime. SkillAudit's six-axis scan is purpose-built for the LLM-tooling stack: SSRF in tool calls, prompt-injection escape paths, env-var leakage in logs, and the prompt-surface checks that generic SAST tools cannot perform.
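For a concrete picture, here is a deliberately vulnerable MCP server sketch (hypothetical code written against the official mcp Python SDK's FastMCP interface, not SkillAudit's detection logic) showing two patterns a dependency scanner walks straight past:

from mcp.server.fastmcp import FastMCP
import subprocess
import urllib.request

mcp = FastMCP("weather-demo")

@mcp.tool()
def fetch(url: str) -> str:
    """Fetch a URL and return its body."""
    # SSRF: the model-supplied URL is fetched with no scheme check or
    # allow-list, so an injected http://169.254.169.254/ request reaches
    # internal metadata endpoints.
    return urllib.request.urlopen(url).read().decode()

@mcp.tool()
def ping(host: str) -> str:
    """Ping a host and return the output."""
    # Command-exec: shell=True with model-controlled input means an injected
    # "example.com; cat ~/.aws/credentials" runs as a real shell command.
    result = subprocess.run(f"ping -c 1 {host}", shell=True,
                            capture_output=True, text=True)
    return result.stdout

Both tools pass a dependency audit untouched; they are exactly the prompt-surface escape paths the six-axis scan is built to flag.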
Does it only work with Claude?
Static analysis runs on any Claude skill or MCP server regardless of client. Our compatibility matrix flags client-specific issues for Claude Code, Cursor, Windsurf, and Codex CLI, so a Cursor-only quirk does not get reported as a Claude bug or vice versa.
Won't Anthropic just build this themselves?
Possibly — Anthropic's official directory already requires a security review for listing. We are moving fast on the parts a first-party listing service is unlikely to ship: deeper LLM-assisted prompt-injection red-teaming, a CI gate for private-repo workflows, a public badge any author can embed regardless of where they publish, and cross-client compatibility testing.
How long does a scan take?
A static parse plus an LLM-assisted prompt-injection probe runs in roughly 60 seconds for typical skills under 2 MB. Larger MCP servers can take longer; we stream the report card section by section as each axis completes so you never stare at a spinner.
What happens to my code?
We pull the repo into an ephemeral sandbox for analysis and discard the source as soon as the report is generated. Private-repo scans require an OAuth token scoped to single-repo access, never org-wide; we never request write permissions and we never train models on your code.
Join the waitlist for early access. The first 100 authors who sign up get Pro free for 6 months.
Get early access