Audit any Claude skill or MCP server in 60 seconds.

Paste a GitHub URL. Get a graded report card across security, permissions, credential handling, maintenance, client compatibility, and docs — before you claude plugin install it into your agent.

A public 2026 scan of 100 community MCP servers found 36.7% with SSRF and 43% with unsafe command-exec paths. SkillAudit catches both before you ship.

The problem

The Claude skill ecosystem is exploding. The trust signal isn't.

There are now 8,000+ MCP servers across a dozen registries, and Claude skills land on Anthropic's official directory daily. Anthropic itself requires a security review before listing, but there is no neutral, fast, reproducible audit that a skill author or a team buyer can run today. Authors guess what reviewers want; buyers install community skills blind, then discover a credential-stealing prompt injection in production.

How it works

From URL to graded report card in three steps.

  1. Paste a URL

    GitHub repo, npm package, or upload a ZIP. Public scans are free; private repos use a single-repo OAuth scope, with no org-wide access required.

  2. Get graded

    A static parse plus an LLM-assisted prompt-injection probe runs in about 60 seconds. The six-axis report card streams in as each check completes.

  3. Earn the badge

    Embed a public trust badge on your README so directory reviewers and buyers see your grade at a glance, or wire the CI Action to gate every install on a minimum grade.

Sample report excerpt — what the static scan returns for a real MCP server:
$ skillaudit scan github.com/user/mcp-weather
✓ Security         0 critical, 0 high, 1 low (informational)
✓ Permissions      requests only network:fetch, justified
✓ Credentials      no env-var echoes, no token logging
! Maintenance      last commit 87 days ago — flag for staleness
✓ Compatibility    Claude Code, Cursor, Windsurf, Codex
✓ Docs             README, runnable example, semver

Grade: A  · embed badge: [skillaudit.dev/badge/user/mcp-weather]

What you get

Six-axis scan. Public badge. CI gate. Done.

Six-axis security scan

Static checks for SSRF, command execution, and secret handling, plus an LLM-assisted prompt-injection red team that probes your tool definitions for escape paths a simple grep can't find.

Public trust badge

Drop a Markdown badge on your README. Directory reviewers and buyers see your green grade before they read a line of code. Re-scans run on every push so the badge stays honest.
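Based on the badge URL in the sample report above, a README embed might look like the following. The .svg image path is an assumption for illustration, not a confirmed API:

```markdown
[![SkillAudit grade](https://skillaudit.dev/badge/user/mcp-weather.svg)](https://skillaudit.dev/badge/user/mcp-weather)
```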

CI gate via GitHub Action

One line in your workflow blocks any PR from merging if the skill grade drops below your team policy. SBOM and audit log included for every scan, so compliance reviews stop being a manual scramble.
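A minimal workflow sketch of that gate. The action name and the min-grade input are illustrative assumptions, not confirmed identifiers; check the SkillAudit docs for the published Action:

```yaml
name: skill-audit
on: [pull_request]

jobs:
  audit:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Hypothetical action name and input; the real ones may differ.
      - uses: skillaudit/action@v1
        with:
          min-grade: B   # fail the check, and block the merge, below grade B
```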

Cross-client compatibility

Per-client checks for Claude Code, Cursor, Windsurf, and Codex CLI. A Cursor-only quirk doesn't get reported as a Claude bug; a Claude-only feature doesn't fail your Codex install.

Pricing

Free for public repos. $19/mo when you ship for real.

Free

$0/mo

For authors trying out a public scan.

  • 3 audits/month on public repos
  • Public trust badge
  • Basic six-axis report
Join waitlist

Pro

$19/mo

For individual authors shipping beyond the free tier.

Team

$99/mo

For 10-100 person orgs adopting community skills.

  • Everything in Pro for up to 10 seats
  • SSO + role-based access
  • Policy export (min-grade gate)
  • SBOM + audit log per scan
Contact us

Questions

Frequently asked

Why not just use Snyk or Dependabot?

Snyk and Dependabot scan dependencies — they have no idea what your skill's prompt surface, MCP tool definitions, or credential handling actually do at runtime. SkillAudit's six-axis scan is purpose-built for the LLM-tooling stack: SSRF in tool calls, prompt-injection escape paths, env-var leakage in logs, and the prompt-surface checks that generic SAST tools cannot perform.

What clients does it work with?

Static analysis runs on any Claude skill or MCP server regardless of client. Our compatibility matrix flags client-specific issues for Claude Code, Cursor, Windsurf, and Codex CLI, so a Cursor-only quirk does not get reported as a Claude bug or vice versa.

Will Anthropic ship this themselves?

Possibly — Anthropic's official directory already requires a security review for listing. We are moving fast on the parts a first-party listing service is unlikely to ship: deeper LLM-assisted prompt-injection red-teaming, a CI gate for private-repo workflows, a public badge any author can embed regardless of where they publish, and cross-client compatibility testing.

What does "in 60 seconds" actually mean?

Static parse plus an LLM-assisted prompt-injection probe runs in roughly 60 seconds for typical skills under 2 MB. Larger MCP servers can take longer; we stream the report card section by section as each axis completes so you never stare at a spinner.

Do you store my source code?

We pull the repo into an ephemeral sandbox for analysis and discard the source as soon as the report is generated. Private-repo scans require an OAuth token scoped to single-repo access, never org-wide; we never request write permissions and we never train models on your code.

Get the green badge before you publish.

Join the waitlist for early access. The first 100 authors who sign up get Pro free for 6 months.

Get early access