
🧠 NeuroLink


Enterprise AI development platform with unified provider access, production-ready tooling, and an opinionated factory architecture. NeuroLink ships as both a TypeScript SDK and a professional CLI so teams can build, operate, and iterate on AI features quickly.

NeuroLink is the universal AI integration platform that unifies 12 major AI providers and 100+ models under one consistent API.

Extracted from production systems at Juspay and battle-tested at enterprise scale, NeuroLink provides a production-ready solution for integrating AI into any application. Whether you're building with OpenAI, Anthropic, Google, AWS Bedrock, Azure, or any of our 12 supported providers, NeuroLink gives you a single, consistent interface that works everywhere.

Why NeuroLink? Switch providers with a single parameter change, leverage 64+ built-in tools and MCP servers, deploy with confidence using enterprise features like Redis memory and multi-provider failover, and optimize costs automatically with intelligent routing. Use it via our professional CLI or TypeScript SDK, whichever fits your workflow.

Where we're headed: We're building for the future of AI: edge-first execution and continuous streaming architectures that make AI practically free and universally available. Read our vision →

Get Started in <5 Minutes →


What's New (Q4 2025)

  • CSV File Support – Attach CSV files to prompts for AI-powered data analysis with auto-detection. → CSV Guide
  • LiteLLM Integration – Access 100+ AI models from all major providers through a unified interface. → Setup Guide
  • SageMaker Integration – Deploy and use custom trained models on AWS infrastructure. → Setup Guide
  • Human-in-the-loop workflows – Pause generation for user approval/input before tool execution. → HITL Guide
  • Guardrails middleware – Block PII, profanity, and unsafe content with built-in filtering. → Guardrails Guide
  • Context summarization – Automatic conversation compression for long-running sessions. → Summarization Guide
  • Redis conversation export – Export full session history as JSON for analytics and debugging. → History Guide

Q3 highlights (multimodal chat, auto-evaluation, loop sessions, orchestration) are now in Platform Capabilities below.

Get Started in Two Steps

# 1. Run the interactive setup wizard (select providers, validate keys)
pnpm dlx @juspay/neurolink setup

# 2. Start generating with automatic provider selection
npx @juspay/neurolink generate "Write a launch plan for multimodal chat"

Need a persistent workspace? Launch loop mode with npx @juspay/neurolink loop. Learn more →

🌟 Complete Feature Set

NeuroLink is a comprehensive AI development platform. Every feature below is production-ready and fully documented.

🤖 AI Provider Integration

12 providers unified under one API - switch providers with a single parameter change. A minimal SDK sketch follows the table.

| Provider | Models | Free Tier | Tool Support | Status | Documentation |
|---|---|---|---|---|---|
| OpenAI | GPT-4o, GPT-4o-mini, o1 | ❌ | ✅ Full | ✅ Production | Setup Guide |
| Anthropic | Claude 3.5/3.7 Sonnet, Opus | ❌ | ✅ Full | ✅ Production | Setup Guide |
| Google AI Studio | Gemini 2.5 Flash/Pro | ✅ Free Tier | ✅ Full | ✅ Production | Setup Guide |
| AWS Bedrock | Claude, Titan, Llama, Nova | ❌ | ✅ Full | ✅ Production | Setup Guide |
| Google Vertex | Gemini via GCP | ❌ | ✅ Full | ✅ Production | Setup Guide |
| Azure OpenAI | GPT-4, GPT-4o, o1 | ❌ | ✅ Full | ✅ Production | Setup Guide |
| LiteLLM | 100+ models unified | Varies | ✅ Full | ✅ Production | Setup Guide |
| AWS SageMaker | Custom deployed models | ❌ | ✅ Full | ✅ Production | Setup Guide |
| Mistral AI | Mistral Large, Small | ✅ Free Tier | ✅ Full | ✅ Production | Setup Guide |
| Hugging Face | 100,000+ models | ✅ Free | ⚠️ Partial | ✅ Production | Setup Guide |
| Ollama | Local models (Llama, Mistral) | ✅ Free (Local) | ⚠️ Partial | ✅ Production | Setup Guide |
| OpenAI Compatible | Any OpenAI-compatible endpoint | Varies | ✅ Full | ✅ Production | Setup Guide |

📖 Provider Comparison Guide - Detailed feature matrix and selection criteria
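
To see the single-parameter switch in practice, here is a minimal SDK sketch. It assumes generate accepts a provider option mirroring the CLI's --provider flag; check the SDK reference for the exact field name in your version.

import { NeuroLink } from "@juspay/neurolink";

const neurolink = new NeuroLink();

// Same prompt, different provider: only one parameter changes
const fromOpenAI = await neurolink.generate({
  input: { text: "Summarize this quarter's roadmap" },
  provider: "openai",
});

const fromGoogle = await neurolink.generate({
  input: { text: "Summarize this quarter's roadmap" },
  provider: "google-ai", // switch providers here
});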


🔧 Built-in Tools & MCP Integration

6 Core Tools (work across all providers, zero configuration; example after the table):

| Tool | Purpose | Auto-Available | Documentation |
|---|---|---|---|
| getCurrentTime | Real-time clock access | ✅ | Tool Reference |
| readFile | File system reading | ✅ | Tool Reference |
| writeFile | File system writing | ✅ | Tool Reference |
| listDirectory | Directory listing | ✅ | Tool Reference |
| calculateMath | Mathematical operations | ✅ | Tool Reference |
| websearchGrounding | Google Vertex web search | ⚠️ Requires credentials | Tool Reference |
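
Because the core tools are auto-available, a plain prompt can exercise them with no setup. A minimal sketch, using the generate API shown elsewhere on this page; the model decides when to call getCurrentTime:

import { NeuroLink } from "@juspay/neurolink";

const neurolink = new NeuroLink();

// No tool registration needed: getCurrentTime is built in
const result = await neurolink.generate({
  input: { text: "What is the current time in UTC?" },
});

console.log(result.content); // answer grounded in the real clock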

58+ External MCP Servers supported (GitHub, PostgreSQL, Google Drive, Slack, and more):

// Add any MCP server dynamically
await neurolink.addExternalMCPServer("github", {
  command: "npx",
  args: ["-y", "@modelcontextprotocol/server-github"],
  transport: "stdio",
  env: { GITHUB_TOKEN: process.env.GITHUB_TOKEN },
});

// Tools automatically available to AI
const result = await neurolink.generate({
  input: { text: 'Create a GitHub issue titled "Bug in auth flow"' },
});

📖 MCP Integration Guide - Set up external servers


💻 Developer Experience Features

SDK-First Design with TypeScript, IntelliSense, and type safety (streaming sketch after the table):

| Feature | Description | Documentation |
|---|---|---|
| Auto Provider Selection | Intelligent provider fallback | SDK Guide |
| Streaming Responses | Real-time token streaming | Streaming Guide |
| Conversation Memory | Automatic context management | Memory Guide |
| Full Type Safety | Complete TypeScript types | Type Reference |
| Error Handling | Graceful provider fallback | Error Guide |
| Analytics & Evaluation | Usage tracking, quality scores | Analytics Guide |
| Middleware System | Request/response hooks | Middleware Guide |
| Framework Integration | Next.js, SvelteKit, Express | Framework Guides |
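
As an example of streaming, here is a hypothetical sketch. It assumes the SDK exposes a stream method that yields content chunks as an async iterable, mirroring the CLI's stream command; see the Streaming Guide for the actual chunk shape.

import { NeuroLink } from "@juspay/neurolink";

const neurolink = new NeuroLink();

// Assumption: stream() returns an async iterable of content chunks
const stream = await neurolink.stream({
  input: { text: "Tell a short story about a robot" },
});

for await (const chunk of stream) {
  process.stdout.write(chunk.content ?? ""); // print tokens as they arrive
}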

🏢 Enterprise & Production Features

Production-ready capabilities for regulated industries (proxy example after the table):

| Feature | Description | Use Case | Documentation |
|---|---|---|---|
| Enterprise Proxy | Corporate proxy support | Behind firewalls | Proxy Setup |
| Redis Memory | Distributed conversation state | Multi-instance deployment | Redis Guide |
| Cost Optimization | Automatic cheapest-model selection | Budget control | Cost Guide |
| Multi-Provider Failover | Automatic provider switching | High availability | Failover Guide |
| Telemetry & Monitoring | OpenTelemetry integration | Observability | Telemetry Guide |
| Security Hardening | Credential management, auditing | Compliance | Security Guide |
| Custom Model Hosting | SageMaker integration | Private models | SageMaker Guide |
| Load Balancing | LiteLLM proxy integration | Scale & routing | Load Balancing |
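
As one concrete example, a proxy setup sketch. It assumes the CLI honors the conventional HTTPS_PROXY/HTTP_PROXY environment variables, which is common for Node.js tooling but is an assumption here; the Proxy Setup guide documents the supported mechanism.

# Assumption: standard proxy environment variables are respected
export HTTPS_PROXY="http://proxy.internal.example.com:8080"
export HTTP_PROXY="http://proxy.internal.example.com:8080"

npx @juspay/neurolink generate "Hello from behind the firewall"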

Security & Compliance:

  • ✅ SOC2 Type II compliant deployments
  • ✅ ISO 27001 certified infrastructure compatible
  • ✅ GDPR-compliant data handling (EU providers available)
  • ✅ HIPAA compatible (with proper configuration)
  • ✅ Hardened OS verified (SELinux, AppArmor)
  • ✅ Zero credential logging
  • ✅ Encrypted configuration storage

📖 Enterprise Deployment Guide - Complete production checklist


🎨 Professional CLI

15+ commands for every workflow:

| Command | Purpose | Example | Documentation |
|---|---|---|---|
| setup | Interactive provider configuration | neurolink setup | Setup Guide |
| generate | Text generation | neurolink gen "Hello" | Generate |
| stream | Streaming generation | neurolink stream "Story" | Stream |
| status | Provider health check | neurolink status | Status |
| loop | Interactive session | neurolink loop | Loop |
| mcp | MCP server management | neurolink mcp discover | MCP CLI |
| models | Model listing | neurolink models | Models |
| eval | Model evaluation | neurolink eval | Eval |

📖 Complete CLI Reference - All commands and options

💰 Smart Model Selection

NeuroLink features intelligent model selection and cost optimization:

Cost Optimization Features

  • 💰 Automatic Cost Optimization: Selects cheapest models for simple tasks
  • 🔄 LiteLLM Model Routing: Access 100+ models with automatic load balancing
  • 🔍 Capability-Based Selection: Find models with specific features (vision, function calling)
  • ⚡ Intelligent Fallback: Seamless switching when providers fail

# Cost optimization - automatically use cheapest model
npx @juspay/neurolink generate "Hello" --optimize-cost

# LiteLLM specific model selection
npx @juspay/neurolink generate "Complex analysis" --provider litellm --model "anthropic/claude-3-5-sonnet"

# Auto-select best available provider
npx @juspay/neurolink generate "Write code" # Automatically chooses optimal provider

✨ Interactive Loop Mode

NeuroLink features a powerful interactive loop mode that transforms the CLI into a persistent, stateful session. This allows you to run multiple commands, set session-wide variables, and maintain conversation history without restarting.

Start the Loop

npx @juspay/neurolink loop

Example Session

# Start the interactive session
$ npx @juspay/neurolink loop

neurolink » set provider google-ai
✓ provider set to google-ai

neurolink » set temperature 0.8
✓ temperature set to 0.8

neurolink » generate "Tell me a fun fact about space"
A day on Venus is longer than its year: Venus takes about 243 Earth days to complete one rotation but only about 225 Earth days to orbit the Sun.

# Exit the session
neurolink » exit

Conversation Memory in Loop Mode

Start the loop with conversation memory to have the AI remember the context of your previous commands.

npx @juspay/neurolink loop --enable-conversation-memory
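
An illustrative session (responses abridged and hypothetical):

$ npx @juspay/neurolink loop --enable-conversation-memory

neurolink » generate "My favorite database is PostgreSQL."
Noted! PostgreSQL is a solid choice for relational workloads.

neurolink » generate "Write an index-tuning checklist for my favorite database"
Here is an index-tuning checklist for PostgreSQL: ...

neurolink » exit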

Skip the wizard and configure manually? See docs/getting-started/provider-setup.md.

CLI & SDK Essentials

The neurolink CLI mirrors the SDK, so teams can script experiments and codify them later.

# Discover available providers and models
npx @juspay/neurolink status
npx @juspay/neurolink models list --provider google-ai

# Route to a specific provider/model
npx @juspay/neurolink generate "Summarize customer feedback" \
  --provider azure --model gpt-4o-mini

# Turn on analytics + evaluation for observability
npx @juspay/neurolink generate "Draft release notes" \
  --enable-analytics --enable-evaluation --format json
The same essentials via the SDK:

import { NeuroLink } from "@juspay/neurolink";

const neurolink = new NeuroLink({
  conversationMemory: {
    enabled: true,
    store: "redis",
  },
  enableOrchestration: true,
});

const result = await neurolink.generate({
  input: {
    text: "Create a comprehensive analysis",
    files: [
      "./sales_data.csv", // Auto-detected as CSV
      "./diagrams/architecture.png", // Auto-detected as image
    ],
  },
  enableEvaluation: true,
  region: "us-east-1",
});

console.log(result.content);
console.log(result.evaluation?.overallScore);

Full command and API breakdown lives in docs/cli/commands.md and docs/sdk/api-reference.md.

Platform Capabilities at a Glance

| Capability | Highlights |
|---|---|
| Provider unification | 12+ providers with automatic fallback, cost-aware routing, provider orchestration (Q3). |
| Multimodal pipeline | Stream images + CSV data across providers with local/remote assets. Auto-detection for mixed file types. |
| Quality & governance | Auto-evaluation engine (Q3), guardrails middleware (Q4), HITL workflows (Q4), audit logging. |
| Memory & context | Conversation memory, Mem0 integration, Redis history export (Q4), context summarization (Q4). |
| CLI tooling | Loop sessions (Q3), setup wizard, config validation, Redis auto-detect, JSON output. |
| Enterprise ops | Proxy support, regional routing (Q3), telemetry hooks, configuration management. |
| Tool ecosystem | MCP auto discovery, LiteLLM hub access, SageMaker custom deployment, web search. |

Documentation Map

| Area | When to Use | Link |
|---|---|---|
| Getting started | Install, configure, run first prompt | docs/getting-started/index.md |
| Feature guides | Understand new functionality front-to-back | docs/features/index.md |
| CLI reference | Command syntax, flags, loop sessions | docs/cli/index.md |
| SDK reference | Classes, methods, options | docs/sdk/index.md |
| Integrations | LiteLLM, SageMaker, MCP, Mem0 | docs/LITELLM-INTEGRATION.md |
| Operations | Configuration, troubleshooting, provider matrix | docs/reference/index.md |
| Visual demos | Screens, GIFs, interactive tours | docs/demos/index.md |


Contributing & Support


NeuroLink is built with ❤️ by Juspay. Contributions, questions, and production feedback are always welcome.