# 🧠 NeuroLink
Enterprise AI development platform with unified provider access, production-ready tooling, and an opinionated factory architecture. NeuroLink ships as both a TypeScript SDK and a professional CLI so teams can build, operate, and iterate on AI features quickly.
## 🧠 What is NeuroLink?
NeuroLink is the universal AI integration platform that unifies 12 major AI providers and 100+ models under one consistent API.
Extracted from production systems at Juspay and battle-tested at enterprise scale, NeuroLink provides a production-ready solution for integrating AI into any application. Whether you're building with OpenAI, Anthropic, Google, AWS Bedrock, Azure, or any of our 12 supported providers, NeuroLink gives you a single, consistent interface that works everywhere.
Why NeuroLink? Switch providers with a single parameter change, leverage 64+ built-in tools and MCP servers, deploy with confidence using enterprise features like Redis memory and multi-provider failover, and optimize costs automatically with intelligent routing. Use it via the professional CLI or the TypeScript SDK, whichever fits your workflow.
Where we're headed: we're building for the future of AI, with edge-first execution and continuous streaming architectures that make AI practically free and universally available. Read our vision →
## What's New (Q4 2025)
- CSV File Support – Attach CSV files to prompts for AI-powered data analysis with auto-detection. → CSV Guide
- LiteLLM Integration – Access 100+ AI models from all major providers through a unified interface. → Setup Guide
- SageMaker Integration – Deploy and use custom-trained models on AWS infrastructure. → Setup Guide
- Human-in-the-loop workflows – Pause generation for user approval or input before tool execution. → HITL Guide
- Guardrails middleware – Block PII, profanity, and unsafe content with built-in filtering. → Guardrails Guide
- Context summarization – Automatic conversation compression for long-running sessions. → Summarization Guide
- Redis conversation export – Export full session history as JSON for analytics and debugging. → History Guide
Q3 highlights (multimodal chat, auto-evaluation, loop sessions, orchestration) are now in Platform Capabilities below.
## Get Started in Two Steps
```bash
# 1. Run the interactive setup wizard (select providers, validate keys)
pnpm dlx @juspay/neurolink setup

# 2. Start generating with automatic provider selection
npx @juspay/neurolink generate "Write a launch plan for multimodal chat"
```
Need a persistent workspace? Launch loop mode with `npx @juspay/neurolink loop`. → Learn more
## 🚀 Complete Feature Set
NeuroLink is a comprehensive AI development platform. Every feature below is production-ready and fully documented.
### 🤖 AI Provider Integration
12 providers unified under one API – switch providers with a single parameter change.
Provider | Models | Free Tier | Tool Support | Status | Documentation |
---|---|---|---|---|---|
OpenAI | GPT-4o, GPT-4o-mini, o1 | ❌ | ✅ Full | ✅ Production | Setup Guide |
Anthropic | Claude 3.5/3.7 Sonnet, Opus | ❌ | ✅ Full | ✅ Production | Setup Guide |
Google AI Studio | Gemini 2.5 Flash/Pro | ✅ Free Tier | ✅ Full | ✅ Production | Setup Guide |
AWS Bedrock | Claude, Titan, Llama, Nova | ❌ | ✅ Full | ✅ Production | Setup Guide |
Google Vertex | Gemini via GCP | ❌ | ✅ Full | ✅ Production | Setup Guide |
Azure OpenAI | GPT-4, GPT-4o, o1 | ❌ | ✅ Full | ✅ Production | Setup Guide |
LiteLLM | 100+ models unified | Varies | ✅ Full | ✅ Production | Setup Guide |
AWS SageMaker | Custom deployed models | ❌ | ✅ Full | ✅ Production | Setup Guide |
Mistral AI | Mistral Large, Small | ✅ Free Tier | ✅ Full | ✅ Production | Setup Guide |
Hugging Face | 100,000+ models | ✅ Free | ⚠️ Partial | ✅ Production | Setup Guide |
Ollama | Local models (Llama, Mistral) | ✅ Free (Local) | ⚠️ Partial | ✅ Production | Setup Guide |
OpenAI Compatible | Any OpenAI-compatible endpoint | Varies | ✅ Full | ✅ Production | Setup Guide |
📖 Provider Comparison Guide – Detailed feature matrix and selection criteria
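In the SDK, the same switch is a one-line change. A minimal sketch: the `provider` and `model` option names mirror the CLI's `--provider`/`--model` flags, and the model identifiers follow the table above; treat both as assumptions and check the SDK reference for exact signatures.

```typescript
import { NeuroLink } from "@juspay/neurolink";

const neurolink = new NeuroLink();

// Same prompt, two providers: only the provider/model options change.
const viaOpenAI = await neurolink.generate({
  input: { text: "Summarize this week's support tickets" },
  provider: "openai",
  model: "gpt-4o-mini",
});

const viaGoogle = await neurolink.generate({
  input: { text: "Summarize this week's support tickets" },
  provider: "google-ai",
  model: "gemini-2.5-flash",
});

console.log(viaOpenAI.content);
console.log(viaGoogle.content);
```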
### 🔧 Built-in Tools & MCP Integration
6 Core Tools (work across all providers, zero configuration):
Tool | Purpose | Auto-Available | Documentation |
---|---|---|---|
`getCurrentTime` | Real-time clock access | ✅ | Tool Reference |
`readFile` | File system reading | ✅ | Tool Reference |
`writeFile` | File system writing | ✅ | Tool Reference |
`listDirectory` | Directory listing | ✅ | Tool Reference |
`calculateMath` | Mathematical operations | ✅ | Tool Reference |
`websearchGrounding` | Google Vertex web search | ⚠️ Requires credentials | Tool Reference |
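Because the core tools ship enabled, a plain `generate` call can exercise them without any registration. A minimal sketch, assuming the default setup from the wizard:

```typescript
import { NeuroLink } from "@juspay/neurolink";

const neurolink = new NeuroLink();

// The model can invoke core tools such as getCurrentTime and readFile
// on its own; no tool configuration is needed.
const result = await neurolink.generate({
  input: { text: "What is the current time, and what does ./package.json contain?" },
});

console.log(result.content);
```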
58+ External MCP Servers supported (GitHub, PostgreSQL, Google Drive, Slack, and more):
```typescript
// Add any MCP server dynamically
await neurolink.addExternalMCPServer("github", {
  command: "npx",
  args: ["-y", "@modelcontextprotocol/server-github"],
  transport: "stdio",
  env: { GITHUB_TOKEN: process.env.GITHUB_TOKEN },
});

// Tools automatically available to AI
const result = await neurolink.generate({
  input: { text: 'Create a GitHub issue titled "Bug in auth flow"' },
});
```
📖 MCP Integration Guide – Set up external servers
### 💻 Developer Experience Features
SDK-First Design with TypeScript, IntelliSense, and type safety:
Feature | Description | Documentation |
---|---|---|
Auto Provider Selection | Intelligent provider fallback | SDK Guide |
Streaming Responses | Real-time token streaming | Streaming Guide |
Conversation Memory | Automatic context management | Memory Guide |
Full Type Safety | Complete TypeScript types | Type Reference |
Error Handling | Graceful provider fallback | Error Guide |
Analytics & Evaluation | Usage tracking, quality scores | Analytics Guide |
Middleware System | Request/response hooks | Middleware Guide |
Framework Integration | Next.js, SvelteKit, Express | Framework Guides |
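For example, streaming from the SDK looks roughly like the sketch below. It assumes `stream()` accepts the same `input` shape as `generate()` and yields chunks with a `content` field; confirm the actual contract in the Streaming Guide.

```typescript
import { NeuroLink } from "@juspay/neurolink";

const neurolink = new NeuroLink();

// Print tokens as they arrive instead of waiting for the full response.
// The chunk shape ({ content }) is an assumption, not the documented API.
const stream = await neurolink.stream({
  input: { text: "Draft release notes for the Q4 update" },
});

for await (const chunk of stream) {
  process.stdout.write(chunk.content ?? "");
}
```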
### 🏢 Enterprise & Production Features
Production-ready capabilities for regulated industries:
Feature | Description | Use Case | Documentation |
---|---|---|---|
Enterprise Proxy | Corporate proxy support | Behind firewalls | Proxy Setup |
Redis Memory | Distributed conversation state | Multi-instance deployment | Redis Guide |
Cost Optimization | Automatic cheapest model selection | Budget control | Cost Guide |
Multi-Provider Failover | Automatic provider switching | High availability | Failover Guide |
Telemetry & Monitoring | OpenTelemetry integration | Observability | Telemetry Guide |
Security Hardening | Credential management, auditing | Compliance | Security Guide |
Custom Model Hosting | SageMaker integration | Private models | SageMaker Guide |
Load Balancing | LiteLLM proxy integration | Scale & routing | Load Balancing |
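As a concrete example, the Redis Memory row maps to the same constructor options used in the SDK example later on this page:

```typescript
import { NeuroLink } from "@juspay/neurolink";

// Distributed conversation state: back memory with Redis so every
// instance behind a load balancer shares the same session history.
const neurolink = new NeuroLink({
  conversationMemory: {
    enabled: true,
    store: "redis",
  },
});
```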
Security & Compliance:
- ✅ SOC2 Type II compliant deployments
- ✅ ISO 27001 certified infrastructure compatible
- ✅ GDPR-compliant data handling (EU providers available)
- ✅ HIPAA compatible (with proper configuration)
- ✅ Hardened OS verified (SELinux, AppArmor)
- ✅ Zero credential logging
- ✅ Encrypted configuration storage
📖 Enterprise Deployment Guide – Complete production checklist
### 🎨 Professional CLI
15+ commands for every workflow:
Command | Purpose | Example | Documentation |
---|---|---|---|
`setup` | Interactive provider configuration | `neurolink setup` | Setup Guide |
`generate` | Text generation | `neurolink gen "Hello"` | Generate |
`stream` | Streaming generation | `neurolink stream "Story"` | Stream |
`status` | Provider health check | `neurolink status` | Status |
`loop` | Interactive session | `neurolink loop` | Loop |
`mcp` | MCP server management | `neurolink mcp discover` | MCP CLI |
`models` | Model listing | `neurolink models` | Models |
`eval` | Model evaluation | `neurolink eval` | Eval |
📖 Complete CLI Reference – All commands and options
### 💰 Smart Model Selection
NeuroLink features intelligent model selection and cost optimization:
#### Cost Optimization Features
- 💰 Automatic Cost Optimization: Selects the cheapest model for simple tasks
- 🚀 LiteLLM Model Routing: Access 100+ models with automatic load balancing
- 🔍 Capability-Based Selection: Find models with specific features (vision, function calling)
- ⚡ Intelligent Fallback: Seamless switching when providers fail
```bash
# Cost optimization - automatically use the cheapest model
npx @juspay/neurolink generate "Hello" --optimize-cost

# LiteLLM-specific model selection
npx @juspay/neurolink generate "Complex analysis" --provider litellm --model "anthropic/claude-3-5-sonnet"

# Auto-select best available provider
npx @juspay/neurolink generate "Write code" # Automatically chooses optimal provider
```
### ✨ Interactive Loop Mode
NeuroLink features a powerful interactive loop mode that transforms the CLI into a persistent, stateful session. This allows you to run multiple commands, set session-wide variables, and maintain conversation history without restarting.
#### Start the Loop
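Launch a persistent session from any terminal:

```bash
npx @juspay/neurolink loop
```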
#### Example Session
```
# Start the interactive session
$ npx @juspay/neurolink loop

neurolink » set provider google-ai
✓ provider set to google-ai

neurolink » set temperature 0.8
✓ temperature set to 0.8

neurolink » generate "Tell me a fun fact about space"
The quietest place on Earth is an anechoic chamber at Microsoft's headquarters in Redmond, Washington. The background noise is so low that it's measured in negative decibels, and you can hear your own heartbeat.

# Exit the session
neurolink » exit
```
#### Conversation Memory in Loop Mode
Start the loop with conversation memory to have the AI remember the context of your previous commands.
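An illustrative exchange once memory is enabled (hypothetical output; the second prompt depends on context from the first):

```
neurolink » generate "My team name is Falcon Crew"
Got it – Falcon Crew. How can I help?

neurolink » generate "Write a one-line motto for my team"
Falcon Crew: swift to build, sharp to ship.
```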
Want to skip the wizard and configure manually? See docs/getting-started/provider-setup.md.
## CLI & SDK Essentials
The `neurolink` CLI mirrors the SDK, so teams can script experiments and codify them later.
```bash
# Discover available providers and models
npx @juspay/neurolink status
npx @juspay/neurolink models list --provider google-ai

# Route to a specific provider/model
npx @juspay/neurolink generate "Summarize customer feedback" \
  --provider azure --model gpt-4o-mini

# Turn on analytics + evaluation for observability
npx @juspay/neurolink generate "Draft release notes" \
  --enable-analytics --enable-evaluation --format json
```
The equivalent SDK workflow:

```typescript
import { NeuroLink } from "@juspay/neurolink";

const neurolink = new NeuroLink({
  conversationMemory: {
    enabled: true,
    store: "redis",
  },
  enableOrchestration: true,
});

const result = await neurolink.generate({
  input: {
    text: "Create a comprehensive analysis",
    files: [
      "./sales_data.csv", // Auto-detected as CSV
      "./diagrams/architecture.png", // Auto-detected as image
    ],
  },
  enableEvaluation: true,
  region: "us-east-1",
});

console.log(result.content);
console.log(result.evaluation?.overallScore);
```
The full command and API breakdown lives in docs/cli/commands.md and docs/sdk/api-reference.md.
## Platform Capabilities at a Glance
Capability | Highlights |
---|---|
Provider unification | 12+ providers with automatic fallback, cost-aware routing, provider orchestration (Q3). |
Multimodal pipeline | Stream images + CSV data across providers with local/remote assets. Auto-detection for mixed file types. |
Quality & governance | Auto-evaluation engine (Q3), guardrails middleware (Q4), HITL workflows (Q4), audit logging. |
Memory & context | Conversation memory, Mem0 integration, Redis history export (Q4), context summarization (Q4). |
CLI tooling | Loop sessions (Q3), setup wizard, config validation, Redis auto-detect, JSON output. |
Enterprise ops | Proxy support, regional routing (Q3), telemetry hooks, configuration management. |
Tool ecosystem | MCP auto discovery, LiteLLM hub access, SageMaker custom deployment, web search. |
## Documentation Map
Area | When to Use | Link |
---|---|---|
Getting started | Install, configure, run first prompt | docs/getting-started/index.md |
Feature guides | Understand new functionality front-to-back | docs/features/index.md |
CLI reference | Command syntax, flags, loop sessions | docs/cli/index.md |
SDK reference | Classes, methods, options | docs/sdk/index.md |
Integrations | LiteLLM, SageMaker, MCP, Mem0 | docs/LITELLM-INTEGRATION.md |
Operations | Configuration, troubleshooting, provider matrix | docs/reference/index.md |
Visual demos | Screens, GIFs, interactive tours | docs/demos/index.md |
## Integrations
- LiteLLM 100+ model hub – Unified access to third-party models via LiteLLM routing. → docs/LITELLM-INTEGRATION.md
- Amazon SageMaker – Deploy and call custom endpoints directly from the NeuroLink CLI/SDK. → docs/SAGEMAKER-INTEGRATION.md
- Mem0 conversational memory – Persistent semantic memory with vector store support. → docs/MEM0_INTEGRATION.md
- Enterprise proxy & security – Configure outbound policies and compliance posture. → docs/ENTERPRISE-PROXY-SETUP.md
- Configuration automation – Manage environments, regions, and credentials safely. → docs/CONFIGURATION-MANAGEMENT.md
- MCP tool ecosystem – Auto-discover Model Context Protocol tools and extend workflows. → docs/advanced/mcp-integration.md
## Contributing & Support
- Bug reports and feature requests → GitHub Issues
- Development workflow, testing, and pull request guidelines → docs/development/contributing.md
- Documentation improvements → open a PR referencing the documentation matrix.
NeuroLink is built with ❤️ by Juspay. Contributions, questions, and production feedback are always welcome.