🆕 BridgeAPI docs for AI Agents & LLMs

BridgeAPI documentation is AI-accessible. We provide specialized interfaces and high-portability content formats designed specifically for Large Language Models (LLMs) and AI Agents. Whether you are building with Claude, Codex, ChatGPT, or AI-powered IDEs like Cursor and Windsurf, our documentation is optimized to give your tools the context they need with zero friction.

🛠 Model Context Protocol (MCP)

BridgeAPI documentation supports the Model Context Protocol, an open standard that lets AI models "see" and interact with our API reference dynamically. Instead of copy-pasting content by hand, you can connect your AI agent directly to our live technical specification.

MCP Server URL: https://docs.bridgeapi.io/mcp

🧰 Available Tools

Our built-in MCP server exposes a suite of tools that allow AI agents to navigate and test our API:

  • list-endpoints: Lists all available API paths.
  • get-endpoint: Retrieves detailed technical info for a specific endpoint.
  • get-request-body: Provides the JSON schema for POST/PUT requests.
  • get-response-schema: Describes the structure of the data returned by the API.
  • execute-request: Allows the AI to test real API calls (if credentials are provided).
  • get-code-snippet: Generates implementation code in various languages.
  • search-specs: Performs a semantic search across the entire API reference.
  • list-security-schemes: Informs the AI about authentication requirements.
  • get-bridge-api-requirements: Provides high-level prerequisites for integration.
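
As a quick illustration, here is a minimal TypeScript sketch that connects to the documentation MCP server and calls one of these tools via the official MCP TypeScript SDK (@modelcontextprotocol/sdk). The client name is arbitrary, and the assumption that list-endpoints takes no arguments should be checked against the input schemas returned by listTools():

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

async function exploreBridgeDocs(): Promise<void> {
  // Connect to the documentation MCP server over HTTP.
  const transport = new StreamableHTTPClientTransport(
    new URL("https://docs.bridgeapi.io/mcp")
  );
  const client = new Client({ name: "bridgeapi-docs-explorer", version: "1.0.0" });
  await client.connect(transport);

  // Discover the tools listed above, including their input schemas.
  const { tools } = await client.listTools();
  console.log(tools.map((tool) => tool.name));

  // Call a tool; here, list the available API paths.
  // (Check the discovered input schema before passing arguments to other tools.)
  const endpoints = await client.callTool({ name: "list-endpoints", arguments: {} });
  console.log(JSON.stringify(endpoints, null, 2));

  await client.close();
}

exploreBridgeDocs().catch(console.error);
```

Most users will never call the SDK directly; editors and agents such as Claude Code handle this connection once the server is registered, as shown in the next section.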

🚀 How to integrate with your favorite tools

To register the server with Claude Code, run the following command in a terminal:

claude mcp add --transport http bridgeapidocs https://docs.bridgeapi.io/mcp

✅ Testing Your MCP Setup

Once configured, you can test your MCP server connection:

  1. Open your AI editor (Cursor, Windsurf, etc.)
  2. Start a new chat with the AI assistant
  3. Ask about the Bridge API Documentation. Try questions like:
    • "How do I [common use case]?"
    • "Show me an example of [API functionality]"
    • "Create a [integration type] using Bridge API Documentation"

Your AI assistant should now have access to the Bridge API Documentation and API Reference through the MCP server.


📄 LLMs.txt

For agents that prefer flat-file context, or for RAG (Retrieval-Augmented Generation) systems, we provide a standardized llms.txt file. It acts as a compact, token-efficient map of our entire documentation; an agent can fetch it once and follow the links it lists, as sketched after the list below.

URL: https://docs.bridgeapi.io/llms.txt

  • Token Efficient: Strips away UI elements and CSS to focus purely on technical data.
  • Always Current: Automatically synced with our latest API updates.
  • Perfect for Context: Ideal for adding to "Project Rules" in IDEs.
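
As a hedged example (assuming a Node 18+ ESM environment with a global fetch, and that the file follows the usual llms.txt convention of markdown link lists), an agent could load the index and extract the page links it references:

```typescript
// Minimal sketch: load llms.txt and collect the documentation links it references,
// so an agent or RAG pipeline can decide which pages to pull into context.
async function loadDocsIndex(): Promise<{ title: string; url: string }[]> {
  const response = await fetch("https://docs.bridgeapi.io/llms.txt");
  if (!response.ok) {
    throw new Error(`Failed to fetch llms.txt: ${response.status}`);
  }
  const text = await response.text();

  // Collect markdown links of the form [Title](https://...).
  const links: { title: string; url: string }[] = [];
  for (const match of text.matchAll(/\[([^\]]+)\]\((https?:\/\/[^)\s]+)\)/g)) {
    links.push({ title: match[1], url: match[2] });
  }
  return links;
}

const index = await loadDocsIndex();
console.log(`llms.txt references ${index.length} pages`);
```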

📝 Native Markdown Portability

Our entire documentation suite is designed to be "Copy-Paste Ready" for AI prompt engineering. You can extract the raw technical content of any page in three ways:

  1. Append .md to any URL: Add .md to the end of any documentation link (e.g., https://docs.bridgeapi.io/docs/bridgeapi-for-ai-agents-llms.md) to view the raw markdown source; a fetch sketch follows this list.
  2. "Ask AI" Menu: Use the Ask AI button located at the top of every page. From this menu, you can select Copy Markdown to immediately add the page content to your clipboard or View as Markdown to see the source directly.
  3. Direct Integration: Send these markdown links directly to ChatGPT or Claude to give them full, unformatted context of specific endpoints.
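
For example, a small helper (a sketch, again assuming Node 18+ with a global fetch) can turn any documentation URL into its raw markdown source before you paste it into a prompt:

```typescript
// Append ".md" to a documentation URL and return the raw markdown source.
async function fetchDocAsMarkdown(pageUrl: string): Promise<string> {
  const mdUrl = pageUrl.endsWith(".md") ? pageUrl : `${pageUrl}.md`;
  const response = await fetch(mdUrl);
  if (!response.ok) {
    throw new Error(`Could not fetch ${mdUrl}: ${response.status} ${response.statusText}`);
  }
  return response.text();
}

// Example: pull this page's markdown and use it as prompt context.
const markdown = await fetchDocAsMarkdown(
  "https://docs.bridgeapi.io/docs/bridgeapi-for-ai-agents-llms"
);
console.log(`${markdown.length} characters of raw markdown ready to paste`);
```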