

Databricks MCP Server for AI Agents

Production-ready Databricks MCP server with extensible actions — plus built-in authentication, security, and optimized execution.

Databricks MCP Server
Built by StackOne

Coverage

17 Agent Actions

Create, read, update, and delete across Databricks — and extend your agent's capabilities with custom actions.

Authentication

Agent Tool Authentication

Per-user OAuth in one call. Your Databricks MCP server gets session-scoped tokens with zero credentials stored on your infra.

Agent Auth →

Security

Agent Protection

Every Databricks tool response scanned for prompt injection in milliseconds — 88.7% accuracy, all running on CPU.

Prompt Injection Defense →

Performance

Max Agent Context. Min Cost.

Free up to 96% of your agent's context window to enhance reasoning and reduce cost, on every Databricks call.

Tools Discovery →

What is the Databricks MCP Server?

A Databricks MCP server lets AI agents read and write Databricks data through the Model Context Protocol — Anthropic's open standard for connecting LLMs to external tools. StackOne's Databricks MCP server ships with pre-built actions, fully extensible via the Connector Builder — plus managed authentication, prompt injection defense, and optimized agent context. Connect it from MCP clients like Claude Desktop, Cursor, and VS Code, or from agent frameworks like OpenAI Agents SDK, LangChain, and Vercel AI SDK.

All Databricks MCP Tools and Actions

Every action from the Databricks API, ready for your agent. Create, read, update, and delete — scoped to exactly what you need.

Users

  • Create User

    Create a new user in the Databricks account

  • List Users

    Get all users in the Databricks account

  • Get User

    Retrieve a specific user by ID

  • Update User

    Update an existing user's attributes

  • Delete User

    Remove a user from the Databricks account

Groups

  • Create Group

    Create a new group in the Databricks account

  • List Groups

    Get all groups in the Databricks account

  • Get Group

    Retrieve a specific group by ID

  • Update Group

    Update an existing group's attributes

  • Delete Group

    Remove a group from the Databricks account

Group Members

  • Add Group Member

    Add a user or service principal to a group

  • Remove Group Member

    Remove a user or service principal from a group

Service Principals

  • Create Service Principal

    Create a new service principal in the Databricks account

  • List Service Principals

    Get all service principals in the Databricks account

  • Get Service Principal

    Retrieve a specific service principal by ID

  • Update Service Principal

    Update an existing service principal's attributes

  • Delete Service Principal

    Remove a service principal from the Databricks account

Set Up Your Databricks MCP Server in Minutes

One endpoint. Any framework. Your agent is talking to Databricks in under 10 lines of code.


Claude Desktop
{
  "mcpServers": {
    "stackone": {
      "command": "npx",
      "args": [
        "-y",
        "mcp-remote@latest",
        "https://api.stackone.com/mcp?x-account-id=<account_id>",
        "--header",
        "Authorization: Basic <YOUR_BASE64_TOKEN>"
      ]
    }
  }
}
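The `<YOUR_BASE64_TOKEN>` placeholder is a standard HTTP Basic credential. As an illustrative sketch (the exact username/password convention for StackOne keys is an assumption here — check the StackOne docs for your account), the header value can be built like this:

```python
import base64

def basic_auth_header(api_key: str) -> str:
    # Base64-encode "key:" (key as username, empty password), a common
    # convention for Basic-auth API keys; verify the exact format in
    # the StackOne documentation before using it.
    token = base64.b64encode(f"{api_key}:".encode()).decode()
    return f"Basic {token}"

print(basic_auth_header("sk_test_123"))
```

Paste the resulting `Basic …` string in place of `Authorization: Basic <YOUR_BASE64_TOKEN>` in the config above.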

More Data Infrastructure MCP Servers

Supabase

128+ actions

Grafana

89+ actions

Render

81+ actions

Snowflake

80+ actions

Sentry

74+ actions

Honeycomb

68+ actions

Talend

52+ actions

Databricks MCP Server FAQ

Databricks MCP server vs direct API integration — what's the difference?
A Databricks MCP server and direct API integration serve different use cases. Direct API integration is for software-to-software — backend code calling Databricks. A Databricks MCP server is for AI agents — MCP clients like Claude and Cursor, plus framework agents built with OpenAI, LangChain, or Vercel AI — discovering and calling Databricks at runtime. StackOne provides both.
How does Databricks authentication work for AI agents?
Databricks authentication for AI agents works through a StackOne Connect Session. Create one via the dashboard or the SDK — you get an auth link and ready-to-paste config for Claude Desktop, Cursor, and other MCP clients. Your user authenticates their own Databricks account; StackOne handles token exchange, storage, and refresh. Credentials never reach the LLM, and each user is isolated via origin_owner_id.
Are Databricks MCP tools vulnerable to prompt injection?
Yes — Databricks MCP tools can be vulnerable to indirect prompt injection. Any tool that reads user-written content — documents, messages, tickets, records, or free-text fields — is a potential vector. StackOne Defender scans every tool response before it enters the agent's context — regex patterns in ~1ms, then a MiniLM classifier in ~4ms. 88.7% accuracy, CPU-only.
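The two-stage scan described above — a cheap regex screen first, then a small classifier only when the regexes don't fire — can be sketched in Python. The patterns, threshold, and function names below are purely illustrative; StackOne Defender's actual rules and MiniLM classifier are not reproduced here:

```python
import re

# Illustrative fast-path patterns for common injection phrasings.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?(previous|prior) instructions", re.I),
    re.compile(r"disregard your system prompt", re.I),
    re.compile(r"you are now", re.I),
]

def fast_scan(tool_response: str) -> bool:
    """Stage 1: regex screen — runs in the ~1 ms class of cost."""
    return any(p.search(tool_response) for p in INJECTION_PATTERNS)

def scan(tool_response: str, classifier=None) -> bool:
    """Return True if the tool response looks like a prompt injection.

    Stage 2 (the `classifier` callable, e.g. a small MiniLM fine-tune
    returning a probability) only runs when the fast path finds nothing.
    """
    if fast_scan(tool_response):
        return True
    if classifier is not None:
        return classifier(tool_response) > 0.5
    return False
```

The design point is ordering: the overwhelming majority of tool responses are benign, so the cheap check handles them without ever invoking the model.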
What is the context bloat of a Databricks agent and how do I avoid it?
Context bloat happens when Databricks tool schemas and API responses consume your Databricks agent's context window, crowding out the tokens it needs for reasoning. A single Databricks query can return a massive JSON response, and connecting multiple tools compounds the problem. Tools Discovery and Code Mode reduce context bloat — loading only relevant tools per query and keeping raw responses out of the agent's context.
Can I limit which actions my Databricks agent can access?
Yes — you can limit which actions your Databricks agent can access directly from the StackOne dashboard. Toggle actions on or off, or restrict them to specific accounts, with no code changes to your agent. Session tokens can be scoped to exact actions so if one leaks, exposure stays contained.
Can I create custom agent actions for my Databricks MCP server?
Yes — you can create custom agent actions for your Databricks MCP server using Connector Builder. It's an integration agent your coding assistant (Claude Code, Cursor, or Copilot) can invoke to research the Databricks API, generate production-ready connector YAML, test against the live API, and validate before you ship.
When should I NOT use a Databricks MCP server?
Skip a Databricks MCP server if your integration is purely software-to-software — direct Databricks API integration is simpler when no AI agent is involved. For deterministic, compliance-critical operations (financial transactions, regulatory reporting), direct API gives you predictable behavior without agent-driven decision-making. MCP shines when AI agents need to dynamically discover and call Databricks actions at runtime.
What AI frameworks and AI clients does the StackOne Databricks MCP server support?
The StackOne Databricks MCP server supports both. MCP clients (paste-and-go apps): Claude Desktop, Claude Code, Cursor, VS Code, Goose. Agent frameworks (code SDKs you build with): OpenAI Agents SDK, Anthropic, Vercel AI, Google ADK, CrewAI, Pydantic AI, LangChain, LangGraph, Azure AI Foundry.

Put your AI agents to work

All the tools you need to build and scale AI agent integrations, with best-in-class connectivity, execution, and security.