lynk-mcp

lynk-mcp is an MCP (Model Context Protocol) server that enables AI assistants such as Claude, Cursor, VS Code Copilot, and Zed to query the Interlynk platform using natural language. It exposes SBOM data, vulnerability information, policy results, and compliance status through a standardized tool interface.

Repository: github.com/interlynk-io/lynk-mcp


When to Use lynk-mcp vs pylynk

| Use Case | lynk-mcp | pylynk |
|---|---|---|
| Natural language queries about SBOMs | Best | Not applicable |
| CI/CD SBOM uploads | Not applicable | Best |
| AI-assisted vulnerability triage | Best | Manual |
| Scripted automation | Not applicable | Best |
| Interactive exploration via AI assistant | Best | CLI only |
| Drift analysis between versions | Built-in | Manual |

Use lynk-mcp when your workflow involves AI assistants and conversational interaction. Use pylynk for scripted, non-interactive automation.


Architecture

┌───────────────────────┐
│   AI Assistant        │
│ (Claude, Cursor, etc) │
└──────────┬────────────┘
           │ MCP Protocol (stdio)
           │
┌──────────▼────────────┐
│     lynk-mcp          │
│   ┌───────────────┐   │
│   │  MCP Server   │   │
│   │  (24 Tools)   │   │
│   └───────┬───────┘   │
│   ┌───────▼───────┐   │
│   │ GraphQL Client│   │
│   └───────┬───────┘   │
│   ┌───────▼───────┐   │
│   │  Config &     │   │
│   │  Keyring      │   │
│   └───────────────┘   │
└──────────┬────────────┘
           │ HTTPS + Bearer Token
           │
┌──────────▼────────────┐
│  Interlynk API        │
│  (GraphQL endpoint)   │
└───────────────────────┘

The MCP server runs as a local process. The AI assistant communicates with it over stdio using the MCP protocol. The server translates tool calls into GraphQL queries against the Interlynk API and returns structured responses.
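
To illustrate the protocol layer, here is a minimal sketch of the JSON-RPC message an MCP client writes to the server's stdin to invoke a tool. The tool name and arguments are taken from the tool tables later in this page; the envelope shape follows the MCP specification.

```python
import json

# Sketch of an MCP "tools/call" request as sent over stdio (JSON-RPC 2.0).
# "list_products" and its "search"/"limit" parameters come from the tool
# tables in this page; the framing is defined by the MCP specification.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "list_products",
        "arguments": {"search": "my-app", "limit": 10},
    },
}

# One message per line on stdin; the server replies with a JSON-RPC result
# containing the tool's structured output.
print(json.dumps(request))
```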


Setup

Installation

Homebrew (macOS/Linux):

Go Install:

Docker:

From Source:
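
The exact commands for each method are not shown above. Plausible equivalents are sketched below; the Homebrew tap name, Docker image name, and build steps are assumptions, while the Go module path is taken from the repository URL above.

```shell
# Homebrew (macOS/Linux); the tap name is an assumption
brew install interlynk-io/tap/lynk-mcp

# Go install; module path taken from the repository URL
go install github.com/interlynk-io/lynk-mcp@latest

# Docker; the image name is an assumption
docker pull ghcr.io/interlynk-io/lynk-mcp:latest

# From source
git clone https://github.com/interlynk-io/lynk-mcp
cd lynk-mcp && go build -o lynk-mcp .
```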

Configuration

Run the interactive configuration:
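
```shell
lynk-mcp configure
```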

This prompts for:

  1. API Endpoint: defaults to https://api.interlynk.io/lynkapi

  2. API Token: must start with lynk_live_, lynk_staging_, or lynk_test_

The token is stored securely in the system keychain. The endpoint and logging configuration are saved to ~/.lynk-mcp/config.yaml.

Verify Connection
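
```shell
lynk-mcp verify
```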

A successful response displays the organization name and confirms connectivity. The verify command includes retry logic (up to ~6 minutes) to account for token propagation delays.

Environment Variables

| Variable | Description | Default |
|---|---|---|
| LYNK_API_TOKEN | API token (overrides keychain) | (none) |
| LYNK_MCP_API_ENDPOINT | API endpoint override | https://api.interlynk.io/lynkapi |
| LYNK_MCP_LOGGING_LEVEL | Log level: debug, info, warn, error | info |

Configuration File

Location: ~/.lynk-mcp/config.yaml
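
A representative file is sketched below. Only the api.endpoint key is referenced elsewhere in this page; the other key names are assumptions, and the token itself is never stored in this file.

```yaml
# ~/.lynk-mcp/config.yaml (sketch; token is kept in the system keychain, not here)
api:
  endpoint: https://api.interlynk.io/lynkapi
logging:
  level: info
```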


Connecting to AI Assistants

Claude Desktop

Add to ~/Library/Application Support/Claude/claude_desktop_config.json:
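
A minimal entry, assuming the lynk-mcp binary is on PATH (the server key name is arbitrary):

```json
{
  "mcpServers": {
    "lynk-mcp": {
      "command": "lynk-mcp"
    }
  }
}
```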

Claude Code (CLI)
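
Claude Code registers MCP servers via its CLI. A plausible invocation, assuming the binary is on PATH:

```shell
claude mcp add lynk-mcp -- lynk-mcp
```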

VS Code (v1.99+)

Add to .vscode/mcp.json:
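
A minimal sketch using VS Code's MCP configuration format, assuming the binary is on PATH:

```json
{
  "servers": {
    "lynk-mcp": {
      "type": "stdio",
      "command": "lynk-mcp"
    }
  }
}
```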

Cursor

Add to ~/.cursor/mcp.json:
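
A minimal sketch, assuming the binary is on PATH:

```json
{
  "mcpServers": {
    "lynk-mcp": {
      "command": "lynk-mcp"
    }
  }
}
```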

Zed

Add to ~/.config/zed/settings.json:
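
A sketch using Zed's context_servers setting; the exact schema varies between Zed releases, so treat this shape as an assumption:

```json
{
  "context_servers": {
    "lynk-mcp": {
      "command": {
        "path": "lynk-mcp",
        "args": []
      }
    }
  }
}
```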

Docker-Based Setup

For environments where local installation is not practical:
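
A sketch of running the server in a container. The image name is an assumption; the token is passed via LYNK_API_TOKEN because containers have no system keychain (see Security Considerations):

```shell
docker run -i --rm \
  -e LYNK_API_TOKEN="lynk_live_..." \
  ghcr.io/interlynk-io/lynk-mcp
```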


Available Tools

The MCP server exposes 24 tools organized into five categories.

Organization & Products

| Tool | Parameters | Description |
|---|---|---|
| get_organization | (none) | Organization info and metrics |
| list_products | search, limit | List products with optional search |
| get_product | id (required) | Product details with environments |
| list_environments | product_id (required), search | Environments in a product |
| get_environment | id (required) | Environment details |

Versions & Components

| Tool | Parameters | Description |
|---|---|---|
| list_versions | environment_id (required), lifecycle, limit | Versions in an environment |
| get_version | id (required) | Version details with statistics |
| list_components | version_id (required), search, kind, direct, limit | Components in a version |
| get_component | id (required), version_id (required) | Component details |
| compare_versions | source_version_id, target_version_id (both required) | Drift analysis between two versions |

Vulnerabilities

| Tool | Parameters | Description |
|---|---|---|
| list_vulnerabilities | version_id (required), severity, vex_status, kev, search, limit | Filtered vulnerability list |
| get_vulnerability | vuln_id (required) | Vulnerability by CVE ID or UUID |
| search_vulnerabilities | search, severity, kev, limit | Cross-product vulnerability search |

Vulnerability filters:

| Filter | Values |
|---|---|
| severity | critical, high, medium, low |
| vex_status | affected, not_affected, fixed |
| kev | true / false (Known Exploited Vulnerabilities) |

Policies & Compliance

| Tool | Parameters | Description |
|---|---|---|
| list_policies | search, limit | List security policies |
| get_policy | id (required) | Policy details with rules |
| list_policy_violations | policy_id, version_id, result_type, limit | Policy evaluation results |
| list_licenses | status, search, limit | Organization license inventory |

License filters:

| Filter | Values |
|---|---|
| status | approved, rejected, unspecified |

Resources

MCP resources provide structured access to complete datasets:

| Resource URI | Description |
|---|---|
| version:///{version_id} | Complete version information |
| version:///{version_id}/components | All components (up to 1000) |
| version:///{version_id}/vulnerabilities | All vulnerabilities (up to 1000) |
| environment:///{environment_id}/latest-version | Most recent version |
| organization:///summary | Organization overview |
| vulnerability:///{cve_id} | Vulnerability details by CVE |


Security Considerations

Token Storage

lynk-mcp stores API tokens in the system keychain:

| Platform | Storage Backend |
|---|---|
| macOS | Keychain (login keychain) |
| Windows | Credential Manager |
| Linux | Secret Service (or file-based with encryption) |

Tokens are never written to the configuration file or logged to stderr.

Token Format

Valid token prefixes: lynk_live_, lynk_staging_, lynk_test_. The configure command validates the format before storing.

Access Control

  • All API requests are scoped to the organization associated with the token.

  • Users can only access data their token's role permits.

  • The MCP server does not add, modify, or elevate permissions beyond what the token provides.

Isolation Recommendations

  • Run lynk-mcp as a dedicated process per user session. Do not share a single instance across multiple users.

  • In Docker deployments, pass the token via environment variable rather than mounting configuration files.

  • Use service tokens with read-only roles for MCP access unless write operations are explicitly needed.

  • In production environments, restrict the LYNK_API_TOKEN to the minimum required role (typically Viewer or Operator).


Example Workflows

Vulnerability Triage

Ask your AI assistant:

"Show me all critical vulnerabilities with KEV status across my organization."

The assistant calls search_vulnerabilities(severity="critical", kev=true) and returns CVE details with EPSS/CVSS scores, allowing you to prioritize remediation.

Drift Analysis

"Compare the last two releases of my-app in production and highlight security-relevant changes."

The assistant calls list_versions, then compare_versions, returning component additions, removals, and version changes between releases.

Policy Compliance Review

"Which products are currently failing security policies?"

The assistant calls list_policy_violations(result_type="fail"), groups results by product, and presents violations with the associated policy rules.

License Audit

"Find all GPL-licensed components in my organization."

The assistant calls list_licenses(search="GPL") and summarizes license distribution, highlighting deprecated or restrictive licenses.

"Do any of my products use log4j?"

The assistant calls search_vulnerabilities(search="log4j") or iterates through products and environments calling list_components(search="log4j") to locate all instances.


Debugging & Observability

Logging Configuration

Set the log level via environment variable:
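
```shell
LYNK_MCP_LOGGING_LEVEL=debug lynk-mcp
```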

| Level | Output |
|---|---|
| debug | Full request/response details, retry attempts, timing |
| info | Startup, connection events (default) |
| warn | Recoverable issues |
| error | Failures only |

Logs are written to stderr in JSON format. They do not interfere with MCP protocol communication on stdout.

Monitoring Recommendations

  • Monitor the lynk-mcp process for unexpected exits. AI assistants typically restart the process automatically, but persistent crashes indicate configuration issues.

  • Check ~/.lynk-mcp/config.yaml exists and contains a valid endpoint.

  • Run lynk-mcp verify periodically to confirm token validity and API connectivity.

Common Issues

| Issue | Cause | Resolution |
|---|---|---|
| AI assistant cannot find tools | lynk-mcp not configured in assistant | Add MCP server configuration (see Connecting to AI Assistants) |
| token not found | Keychain empty and no LYNK_API_TOKEN set | Run lynk-mcp configure or set the environment variable |
| invalid token format | Token does not start with a valid prefix | Use a token starting with lynk_live_, lynk_staging_, or lynk_test_ |
| Connection timeout | Network or firewall blocking API | Verify HTTPS access to api.interlynk.io; check proxy settings |
| Verify command hangs | Token propagation delay | Wait up to 6 minutes; the verify command retries automatically |
| Docker: keychain not available | No system keychain in container | Pass token via LYNK_API_TOKEN environment variable |


Common Misconfigurations

| Issue | Symptom | Fix |
|---|---|---|
| Token stored in config file instead of keychain | Security risk: plaintext token on disk | Run lynk-mcp configure to store it in the keychain |
| Wrong API endpoint | All queries return errors | Verify api.endpoint in ~/.lynk-mcp/config.yaml |
| Admin token used for read-only MCP access | Excessive permissions | Create a Viewer or Operator service token |
| lynk-mcp binary not in PATH | AI assistant fails to start the server | Install via Homebrew or add the binary location to PATH |
| Multiple AI assistants sharing one config | Token collisions in keychain | Each instance reads the same keychain entry; this is fine for same-organization access |
