
AI Agent Tool Discovery: How MCP and Lightning Enable Pay-Per-Use AI

AI agents need two things to use tools autonomously: a way to discover what's available, and a way to pay for it. MCP handles discovery. Lightning handles payment. Together, they let agents find and use 25+ AI tools without API keys, subscriptions, or human intervention.

25+ AI tools · Auto-discovery · Lightning micropayments · No API keys · MCP native

The Problem with Tool Access Today

Most AI tool integrations are manual. A developer signs up for an API, gets a key, hardcodes the endpoint, and writes custom glue code. If the agent needs a new capability — say, translating text or generating a video — someone has to find a provider, register, manage billing, and wire it up.

This works when you have five tools. It breaks when you want agents that can autonomously decide what tools to use based on the task at hand. An agent researching a foreign-language document shouldn't need a pre-configured translation API key — it should be able to discover a translation tool, check the price, pay for it, and move on.

That's what the combination of MCP (Model Context Protocol) and Lightning Network makes possible.

How Tool Discovery Works

MCP defines a standard way for AI agents to interact with external tools. Instead of custom integrations per service, agents connect to an MCP server that exposes tools through a common protocol. Discovery happens at three levels:

1. .well-known/mcp — Static manifest

A JSON file at a well-known URL that describes the server: what tools exist, what categories they fall into, whether they're synchronous or async, and how payments work. Registries and agents can read this without connecting to the server.

2. /api/mcp/discovery — Live catalog

A dynamic endpoint that returns every tool with live pricing from the database, input/output schemas, async behavior flags, and L402 endpoint mappings. This is the machine-readable catalog that agent builders and automation tools consume.

3. tools/list — MCP protocol

Once connected, the agent calls tools/list over JSON-RPC to get full tool definitions with Zod-validated input schemas. This is the standard MCP handshake — the agent now knows exactly what parameters each tool accepts.
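An agent can act on the live catalog directly, for example to pick the cheapest tool in a category before paying. The sketch below assumes a catalog shape with name, category, and pricing.amountSats fields for illustration; check /api/mcp/discovery for the actual field names.

```python
# Sketch: filter a discovery catalog by category and price budget.
# Field names (name, category, pricing.amountSats) are illustrative,
# not the documented schema.

def affordable_tools(catalog: dict, category: str, max_sats: int) -> list[str]:
    """Return names of tools in `category` costing at most `max_sats`."""
    return [
        tool["name"]
        for tool in catalog.get("tools", [])
        if tool.get("category") == category
        and tool.get("pricing", {}).get("amountSats", float("inf")) <= max_sats
    ]

# Hand-written sample mirroring the documented pricing:
sample = {
    "tools": [
        {"name": "image", "category": "media", "pricing": {"amountSats": 100}},
        {"name": "video", "category": "media", "pricing": {"amountSats": 50}},
        {"name": "translate", "category": "text", "pricing": {"amountSats": 4}},
    ]
}
print(affordable_tools(sample, "media", 60))  # ['video']
```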

Why This Matters for Users

What changes when agents can discover and pay for tools on their own

Your agent gets smarter without config

Point your agent at an MCP server and it immediately has access to 25+ tools — image generation, OCR, translation, video, communication, and more. No per-tool setup. The agent reads the catalog and knows what it can do.

Pay only for what you use

No monthly subscription covering tools you don't need. The agent creates a Lightning invoice for each request — 5 sats for a text query, 100 sats for an image, 50 sats for a video second. Costs scale linearly with actual usage.

No vendor lock-in

MCP is an open protocol. Your agent isn't locked into a single provider's ecosystem. If a better MCP server appears with cheaper image generation, your agent can switch without code changes — just point it at a different URL.

Privacy by default

Lightning payments are pseudonymous. There's no account linking your identity to your tool usage. No email, no credit card on file, no usage history tied to your name. The payment itself is the authentication.

How Agents Handle Long-Running Tasks

Some AI operations take time — video generation, 3D model creation, voice cloning. Rather than blocking the agent, these tools follow an async lifecycle that lets agents fire off work and check back later.

1. Submit: Call the async tool (e.g., video, 3d). Returns a requestId and jobType immediately.
2. Poll: Call check_job_status({ requestId, jobType }) every 5 seconds. Returns queued, processing, completed, or failed.
3. Retrieve: When status is 'completed', call get_job_result({ requestId, jobType }) to get the output URL.
4. Handle failure: If status is 'failed', the payment is automatically marked refundable. Create a new payment and retry.

Every async response includes the jobType so agents don't need to track it themselves. The response also includes recommended polling intervals and maximum wait times, so agents can decide when to give up.
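The submit/poll/retrieve lifecycle above can be sketched as a single helper. Here `call_tool(name, arguments)` stands in for whatever function sends a tools/call request to the MCP server; the status strings follow the later curl examples and are assumptions about the exact wire values.

```python
import time

def wait_for_job(call_tool, request_id: str, job_type: str,
                 interval: float = 5.0, max_wait: float = 300.0):
    """Poll check_job_status until the job completes, fails, or times out,
    then fetch the result via get_job_result."""
    deadline = time.monotonic() + max_wait
    while time.monotonic() < deadline:
        status = call_tool("check_job_status",
                           {"requestId": request_id, "jobType": job_type})
        if status["status"] == "COMPLETED":
            return call_tool("get_job_result",
                             {"requestId": request_id, "jobType": job_type})
        if status["status"] == "FAILED":
            # Payment is marked refundable server-side; create a new
            # payment and resubmit.
            raise RuntimeError(f"job {request_id} failed")
        time.sleep(interval)
    raise TimeoutError(f"job {request_id} still running after {max_wait}s")
```

Because every response carries jobType and a recommended interval, an agent can drive this loop with no state beyond the requestId it got back at submission.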

For Agent Builders

Technical details for developers integrating MCP tools

Three ways to integrate: (1) Connect directly via MCP JSON-RPC at sats4ai.com/api/mcp, (2) Use L402 endpoints for HTTP-native integrations, or (3) Add the MCP server to Claude, Cursor, or any MCP-compatible client with one config line.

Discover Available Tools

terminal
# Static discovery — what tools exist
curl https://sats4ai.com/.well-known/mcp | jq '.tools[] | .name'

# Live catalog — tools with current pricing
curl https://sats4ai.com/api/mcp/discovery | jq '.tools[] | {name, pricing}'

# MCP handshake — full tool schemas
curl -X POST https://sats4ai.com/api/mcp \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","id":1,"method":"tools/list"}'

Pay and Execute

terminal
# Step 1: Create payment
curl -X POST https://sats4ai.com/api/mcp \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","id":1,"method":"tools/call",
       "params":{"name":"create_payment",
                 "arguments":{"toolName":"image"}}}'
# Returns: { paymentId, invoice, amountSats }

# Step 2: Pay the Lightning invoice (milliseconds)

# Step 3: Execute the tool
curl -X POST https://sats4ai.com/api/mcp \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","id":2,"method":"tools/call",
       "params":{"name":"image",
                 "arguments":{"paymentId":"<id>",
                              "prompt":"A neon-lit Tokyo alley"}}}'
# Returns: base64 image
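In agent code, the three steps above collapse into one helper. Below, `rpc` stands in for an HTTP POST of a JSON-RPC envelope to /api/mcp and `pay_invoice` for your Lightning wallet; both are assumptions for the sketch, not a documented SDK.

```python
import itertools

_ids = itertools.count(1)

def jsonrpc(method: str, params: dict) -> dict:
    """Build a JSON-RPC 2.0 envelope like the curl examples above."""
    return {"jsonrpc": "2.0", "id": next(_ids), "method": method, "params": params}

def paid_call(rpc, pay_invoice, tool: str, arguments: dict):
    """Create a payment for `tool`, settle the invoice, then execute the tool."""
    payment = rpc(jsonrpc("tools/call", {
        "name": "create_payment", "arguments": {"toolName": tool}}))
    pay_invoice(payment["invoice"])  # settles in milliseconds over Lightning
    return rpc(jsonrpc("tools/call", {
        "name": tool,
        "arguments": {"paymentId": payment["paymentId"], **arguments}}))
```

The payment id ties the paid invoice to exactly one tool call, which is what makes the payment itself act as authentication.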

Async Tools (Video, 3D, Voice Clone)

terminal
# Submit async job (video example): pay first as above, then call the tool.
# The "prompt" argument is illustrative; see tools/list for the real schema.
curl -X POST https://sats4ai.com/api/mcp \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","id":2,"method":"tools/call",
       "params":{"name":"video",
                 "arguments":{"paymentId":"<id>",
                              "prompt":"A drone shot over a coastline"}}}'
# Returns: { requestId, jobType: "video", status: "IN_PROGRESS" }

# Poll status
curl -X POST https://sats4ai.com/api/mcp \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","id":3,"method":"tools/call",
       "params":{"name":"check_job_status",
                 "arguments":{"requestId":"<id>",
                              "jobType":"video"}}}'
# Returns: { status: "IN_PROGRESS", next: "Poll again in 5 seconds." }
# or:      { status: "COMPLETED", next: "Call get_job_result..." }

# Retrieve result
curl -X POST https://sats4ai.com/api/mcp \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","id":4,"method":"tools/call",
       "params":{"name":"get_job_result",
                 "arguments":{"requestId":"<id>",
                              "jobType":"video"}}}'
# Returns: { videoUrl: "..." }


Where Sats4AI Fits in Your Agent Stack

Orchestrators, agents, and the tool layer

Most production agent systems aren't a single agent talking to a single tool. They're orchestrators — higher-level systems that decompose a goal into subtasks and route them to specialized agents or tools. The orchestrator handles task decomposition, agent routing, state management, and error recovery. The agents handle execution.

YOUR SYSTEM
  Orchestrator: LangGraph, CrewAI, AutoGen, Claude Desktop, custom stack
    ↓ decomposes goal into subtasks
YOUR AGENTS
  Search Agent · Comms Agent · Media Agent
    ↓ call tools via MCP / L402
TOOL LAYER
  Sats4AI — 25+ tools: SMS, calls, image gen, translation, OCR, video, TTS, email…

Sats4AI is the tool layer both single agents and orchestrated multi-agent systems call into. A single Claude agent calls us directly for an atomic task. A LangGraph pipeline routes its comms sub-agent to our SMS endpoint and its media sub-agent to our image generation endpoint. Either way, each tool call is a single MCP request with a Lightning payment — no per-agent configuration, no shared credentials.

One MCP config, every orchestrator

Point any orchestrator at sats4ai.com/api/mcp and every sub-agent in your pipeline gets access to 25+ tools. LangGraph, CrewAI, AutoGen, Cursor, Claude Desktop — same endpoint, same protocol, same payment flow.

Per-call isolation

Each tool call is independently authenticated via Lightning payment. Sub-agents don't share API keys or session tokens. One agent's payment can't be reused by another. The orchestrator doesn't need to manage credentials across agents.

Compound outcomes

Some Sats4AI endpoints bundle multiple steps into one call — like transcribing audio and translating the result, or scanning a receipt and returning structured JSON. Your orchestrator makes one tool call and gets a complete outcome, instead of coordinating three separate APIs.

Budget control at the orchestrator level

Since every tool call has a visible sats cost, orchestrators can enforce per-task or per-agent budgets. "This research task can spend up to 500 sats" is a natural constraint — no separate billing API needed.
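A per-task budget like "this research task can spend up to 500 sats" can be enforced with a thin wrapper around tool calls. The charge interface below is a sketch of orchestrator-side bookkeeping, not part of MCP.

```python
class SatsBudget:
    """Tracks spending for one task and refuses calls that would exceed it."""

    def __init__(self, limit_sats: int):
        self.limit_sats = limit_sats
        self.spent_sats = 0

    def charge(self, amount_sats: int) -> None:
        """Record a tool call's cost, or raise if it would blow the budget."""
        if self.spent_sats + amount_sats > self.limit_sats:
            raise RuntimeError(
                f"budget exceeded: {self.spent_sats} + {amount_sats} "
                f"> {self.limit_sats} sats")
        self.spent_sats += amount_sats

# "This research task can spend up to 500 sats":
budget = SatsBudget(500)
budget.charge(5)    # text query
budget.charge(100)  # image
# budget.charge(400) would now raise, since 105 + 400 > 500
```

Because each tool's price is visible in the discovery catalog before payment, the orchestrator can charge the budget at quote time rather than after the fact.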

25+ Tools Available via MCP

Image Gen: from 100 sats
Image Edit: 200 sats
Video: from 50 sats
Video from Image: from 100 sats
AI Chat: from 4 sats
Translation: from 4 sats
Music: 100 sats
Text-to-Speech: 300 sats
Transcription: 10 sats/min
Voice Clone: 7,500 sats
Vision: 21 sats
OCR: 10 sats/page
Receipt OCR: 50 sats/page
3D Models: 350 sats
File Convert: 100 sats
PDF Merge: 100 sats
HTML to PDF: 50 sats
Email: 200 sats
SMS: from 5 sats
Phone Call: varies

What Agents Can Build with This

Scan a foreign-language document, translate it, and email the result — all autonomously, all paid per step with Lightning.

Generate a product description with AI, create a matching image, and compile both into a PDF report.

Transcribe a podcast episode, translate it to another language, and generate audio in the new language.

Analyze a security camera image, generate a summary with AI, and text an alert to a phone number.

Give Your Agent 25+ AI Tools — No API Keys Required

Connect to the MCP server or browse the discovery catalog. Your agent handles payments automatically via Lightning.