

LinkedIn Learning MCP Server for AI Agents

Production-ready LinkedIn Learning MCP server with extensible actions — plus built-in authentication, security, and optimized execution.

Built by StackOne

Coverage

7 Agent Actions

Create, read, update, and delete across LinkedIn Learning — and extend your agent's capabilities with custom actions.

Authentication

Agent Tool Authentication

Per-user OAuth in one call. Your LinkedIn Learning MCP server gets session-scoped tokens with zero credentials stored on your infra.

Agent Auth →

Security

Agent Protection

Every LinkedIn Learning tool response scanned for prompt injection in milliseconds — 88.7% accuracy, all running on CPU.

Prompt Injection Defense →

Performance

Max Agent Context. Min Cost.

Free up to 96% of your agent's context window to enhance reasoning and reduce cost, on every LinkedIn Learning call.

Tools Discovery →

What is the LinkedIn Learning MCP Server?

A LinkedIn Learning MCP server lets AI agents read and write LinkedIn Learning data through the Model Context Protocol — Anthropic's open standard for connecting LLMs to external tools. StackOne's LinkedIn Learning MCP server ships with pre-built actions, fully extensible via the Connector Builder — plus managed authentication, prompt injection defense, and optimized agent context. Connect it from MCP clients like Claude Desktop, Cursor, and VS Code, or from agent frameworks like OpenAI Agents SDK, LangChain, and Vercel AI SDK.

All LinkedIn Learning MCP Tools and Actions

Every action from LinkedIn Learning's API, ready for your agent. Create, read, update, and delete — scoped to exactly what you need.

Assets

  • Get Asset

    Retrieve a specific learning asset by URN

Assets By Criteria

  • List Assets By Criteria

    Retrieve learning assets using criteria-based filtering

Assets By Locale And Types

  • List Assets By Locale And Type

    Retrieve learning assets by source locale and asset type

Classifications

  • Get Classification

    Retrieve a specific learning classification by URN

Classifications By Keywords

  • Search Classifications By Keyword

    Search learning classifications by keyword

Classifications By Locale And Types

  • List Classifications By Locale And Type

    Retrieve learning classifications by source locale and type

Activity Reports

  • Get Activity Report

    Retrieve learning activity report with aggregation criteria

Set Up Your LinkedIn Learning MCP Server in Minutes

One endpoint. Any framework. Your agent is talking to LinkedIn Learning in under 10 lines of code.

MCP Clients


Claude Desktop
{
  "mcpServers": {
    "stackone": {
      "command": "npx",
      "args": [
        "-y",
        "mcp-remote@latest",
        "https://api.stackone.com/mcp?x-account-id=<account_id>",
        "--header",
        "Authorization: Basic <YOUR_BASE64_TOKEN>"
      ]
    }
  }
}
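The config above passes a Basic token in the Authorization header. If you are producing that token yourself, it is just a base64-encoded credential string. A minimal sketch, assuming the StackOne API key is used as the Basic-auth username with an empty password (an assumption; check your dashboard for the exact format, and the `sk_test_example` key below is a placeholder):

```python
import base64

def basic_token(api_key: str) -> str:
    # HTTP Basic auth tokens are base64("username:password").
    # Here we assume the API key is the username and the password is empty.
    return base64.b64encode(f"{api_key}:".encode()).decode()

# Drop this into the "Authorization: Basic <...>" slot of the MCP config.
header = f"Authorization: Basic {basic_token('sk_test_example')}"
```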


LinkedIn Learning MCP Server FAQ

LinkedIn Learning MCP server vs direct API integration — what's the difference?
A LinkedIn Learning MCP server and direct API integration serve different use cases. Direct API integration is for software-to-software — backend code calling LinkedIn Learning. A LinkedIn Learning MCP server is for AI agents — MCP clients like Claude and Cursor, plus framework agents built with OpenAI, LangChain, or Vercel AI — discovering and calling LinkedIn Learning at runtime. StackOne provides both.
How does LinkedIn Learning authentication work for AI agents?
LinkedIn Learning authentication for AI agents works through a StackOne Connect Session. Create one via the dashboard or the SDK — you get an auth link and ready-to-paste config for Claude Desktop, Cursor, and other MCP clients. Your user authenticates their own LinkedIn Learning account; StackOne handles token exchange, storage, and refresh. Credentials never reach the LLM, and each user is isolated via origin_owner_id.
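As a rough illustration of what a Connect Session request might carry, here is a sketch of building the payload. The field names other than `origin_owner_id` are illustrative assumptions, not the exact StackOne API schema; consult the Connect Session docs before sending anything:

```python
import json

def connect_session_payload(origin_owner_id: str, origin_username: str,
                            provider: str = "linkedin_learning") -> str:
    # origin_owner_id isolates each end user's tokens from every other user's.
    # "provider", "origin_username", and "categories" are hypothetical names.
    return json.dumps({
        "provider": provider,
        "origin_owner_id": origin_owner_id,
        "origin_username": origin_username,
        "categories": ["lms"],
    })

payload = connect_session_payload("user-42", "ada@example.com")
```

The response would contain the auth link your user opens to connect their own LinkedIn Learning account.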
Are LinkedIn Learning MCP tools vulnerable to prompt injection?
Yes — LinkedIn Learning MCP tools can be vulnerable to indirect prompt injection. Any tool that reads user-written content — documents, messages, tickets, records, or free-text fields — is a potential vector. StackOne Defender scans every tool response before it enters the agent's context — regex patterns in ~1ms, then a MiniLM classifier in ~4ms. 88.7% accuracy, CPU-only.
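The two-stage scan described above can be sketched as a fast regex pass with a classifier fallback. The patterns below are illustrative examples of common injection phrasings, not StackOne Defender's actual rule set, and the classifier stage is stubbed out:

```python
import re

# Stage 1: cheap regex patterns for obvious injection phrasings (~1ms class).
INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?(previous|prior) instructions", re.I),
    re.compile(r"you are now", re.I),
    re.compile(r"system prompt", re.I),
]

def scan_tool_response(text: str) -> bool:
    """Return True if the tool response looks like a prompt injection."""
    if any(p.search(text) for p in INJECTION_PATTERNS):
        return True
    # Stage 2 would hand subtler cases to a small classifier
    # (e.g. a MiniLM fine-tune running on CPU); omitted here.
    return False
```

A flagged response is blocked before it ever enters the agent's context window.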
What is the context bloat of a LinkedIn Learning agent and how do I avoid it?
Context bloat happens when LinkedIn Learning tool schemas and API responses consume your agent's context window, leaving it too little room to reason effectively. A single LinkedIn Learning query can return a massive JSON response, and connecting multiple tools compounds the problem. Tools Discovery and Code Mode reduce context bloat by loading only the tools relevant to each query and keeping raw responses out of the agent's context.
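The idea behind query-time tool selection can be sketched in a few lines: score each tool's description against the user's query and expose only the top matches, instead of loading every schema into context. The tool names and the naive word-overlap scoring below are illustrative, not StackOne's actual Tools Discovery implementation:

```python
# Hypothetical catalog: tool name -> one-line description.
TOOLS = {
    "get_asset": "Retrieve a specific learning asset by URN",
    "list_assets_by_criteria": "Retrieve learning assets using criteria-based filtering",
    "get_activity_report": "Retrieve learning activity report with aggregation criteria",
}

def discover_tools(query: str, tools: dict, limit: int = 2) -> list:
    """Return the tool names whose descriptions best overlap the query."""
    q = set(query.lower().split())
    scored = sorted(
        tools,
        key=lambda name: -len(q & set(tools[name].lower().split())),
    )
    return scored[:limit]
```

Only the selected schemas reach the agent; the rest stay out of its context window.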
Can I limit which actions my LinkedIn Learning agent can access?
Yes — you can limit which actions your LinkedIn Learning agent can access directly from the StackOne dashboard. Toggle actions on or off, or restrict them to specific accounts, with no code changes to your agent. Session tokens can be scoped to exact actions so if one leaks, exposure stays contained.
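Action-scoped tokens can be sketched as an allow-list the server checks on every call. The token structure here is illustrative, not StackOne's actual session-token format:

```python
def make_session_token(account_id: str, scopes: set) -> dict:
    # If this token leaks, exposure is limited to these scopes on this account.
    return {"account_id": account_id, "scopes": sorted(scopes)}

def authorize(token: dict, tool_name: str) -> bool:
    """Allow a tool call only if it is inside the token's scope."""
    return tool_name in token["scopes"]

token = make_session_token("acct-1", {"get_asset", "list_assets_by_criteria"})
```

Calls outside the scope are rejected server-side, with no change needed in the agent itself.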
Can I create custom agent actions for my LinkedIn Learning MCP server?
Yes — you can create custom agent actions for your LinkedIn Learning MCP server using Connector Builder. It's an integration agent your coding assistant (Claude Code, Cursor, or Copilot) can invoke to research LinkedIn Learning's API, generate production-ready connector YAML, test against the live API, and validate before you ship.
When should I NOT use a LinkedIn Learning MCP server?
Skip a LinkedIn Learning MCP server if your integration is purely software-to-software — direct LinkedIn Learning API integration is simpler when no AI agent is involved. For deterministic, compliance-critical operations (financial transactions, regulatory reporting), direct API gives you predictable behavior without agent-driven decision-making. MCP shines when AI agents need to dynamically discover and call LinkedIn Learning actions at runtime.
What AI frameworks and AI clients does the StackOne LinkedIn Learning MCP server support?
The StackOne LinkedIn Learning MCP server supports both. MCP clients (paste-and-go apps): Claude Desktop, Claude Code, Cursor, VS Code, Goose. Agent frameworks (code SDKs you build with): OpenAI Agents SDK, Anthropic, Vercel AI, Google ADK, CrewAI, Pydantic AI, LangChain, LangGraph, Azure AI Foundry.

Put your AI agents to work

All the tools you need to build and scale AI agent integrations, with best-in-class connectivity, execution, and security.