# Feature Guides
Comprehensive guides for all NeuroLink features organized by category. Each guide includes setup, usage patterns, configuration, and troubleshooting.
## Latest Features (Q4 2025)
Feature | Description |
---|---|
Human-in-the-Loop (HITL) | Pause AI tool execution for user approval before risky operations like file deletion or API calls. |
Guardrails Middleware | Content filtering, PII detection, and safety checks for AI outputs with zero configuration. |
Redis Conversation Export | Export complete session history as JSON for analytics, debugging, and compliance auditing. |
Context Summarization | Automatic conversation compression for long-running sessions to stay within token limits. |
LiteLLM Integration | Access 100+ AI models from all major providers through unified LiteLLM routing interface. |
SageMaker Integration | Deploy and use custom trained models on AWS SageMaker infrastructure with full control. |
## Core Features (Q3 2025)
Feature | Description |
---|---|
Multimodal Chat Experiences | Stream text and images together with automatic provider fallbacks and format conversion. |
CSV File Support | Process CSV files for data analysis with automatic format conversion. Works with all providers. |
Auto Evaluation Engine | Automated quality scoring and metrics export for AI response validation using LLM-as-judge. |
CLI Loop Sessions | Persistent interactive mode with conversation memory and session state for prompt engineering. |
Regional Streaming Controls | Region-specific model deployment and routing for compliance and latency optimization. |
Provider Orchestration Brain | Adaptive provider and model selection with intelligent fallbacks based on task classification. |
## Platform Capabilities at a Glance
Category | Features | Documentation |
---|---|---|
Provider unification | 12+ providers with automatic failover, cost-aware routing, provider orchestration (Q3) | Provider Setup |
Multimodal pipeline | Stream images + CSV data across providers with local/remote assets. Auto-detection for mixed file types. | Multimodal Guide, CSV Support |
Quality & governance | Auto-evaluation engine (Q3), guardrails middleware (Q4), HITL workflows (Q4), audit logging | Auto Evaluation, Guardrails, HITL |
Memory & context | Conversation memory, Mem0 integration, Redis history export (Q4), context summarization (Q4) | Conversation Memory, Redis Export |
CLI tooling | Loop sessions (Q3), setup wizard, config validation, Redis auto-detect, JSON output | CLI Loop, CLI Commands |
Enterprise ops | Proxy support, regional routing (Q3), telemetry hooks, configuration management | Enterprise Proxy, Telemetry |
Tool ecosystem | MCP auto discovery, LiteLLM hub access, SageMaker custom deployment, web search | MCP Integration, MCP Catalog |
## AI Provider Integration
NeuroLink supports 12 major AI providers with unified API access:
Provider | Key Features | Free Tier | Tool Support | Status | Documentation |
---|---|---|---|---|---|
OpenAI | GPT-4o, GPT-4o-mini, o1 models | ❌ | ✅ Full | ✅ Production | Setup Guide |
Anthropic | Claude 3.5/3.7 Sonnet, Opus | ❌ | ✅ Full | ✅ Production | Setup Guide |
Google AI | Gemini 2.5 Flash/Pro | ✅ Free Tier | ✅ Full | ✅ Production | Setup Guide |
AWS Bedrock | Claude, Titan, Llama, Nova | ❌ | ✅ Full | ✅ Production | Setup Guide |
Google Vertex | Gemini via GCP | ❌ | ✅ Full | ✅ Production | Setup Guide |
Azure OpenAI | GPT-4, GPT-4o, o1 | ❌ | ✅ Full | ✅ Production | Setup Guide |
LiteLLM | 100+ models unified | Varies | ✅ Full | ✅ Production | Integration Guide |
AWS SageMaker | Custom deployed models | ❌ | ✅ Full | ✅ Production | Integration Guide |
Mistral AI | Mistral Large, Small | ✅ Free Tier | ✅ Full | ✅ Production | Setup Guide |
Hugging Face | 100,000+ models | ✅ Free | ⚠️ Partial | ✅ Production | Setup Guide |
Ollama | Local models | ✅ Free (Local) | ⚠️ Partial | ✅ Production | Setup Guide |
OpenAI Compatible | Any compatible endpoint | Varies | ✅ Full | ✅ Production | Setup Guide |
📖 Provider Comparison Guide - Full feature matrix
## Advanced CLI Capabilities

### Interactive Setup Wizard

NeuroLink includes an interactive setup wizard that guides users through provider configuration in 2-3 minutes:
```bash
# Launch interactive setup wizard
npx @juspay/neurolink setup

# Provider-specific guided setup
npx @juspay/neurolink setup --provider openai
npx @juspay/neurolink setup --provider bedrock
```
Wizard Features:
- 🔐 Secure credential collection with validation
- ✅ Real-time authentication testing
- 📝 Automatic `.env` file creation
- 🎯 Recommended model selection
- 📘 Quick-start command examples
- 🔍 Interactive provider discovery
### 15+ CLI Commands
Complete command-line toolkit for every workflow:
Command | Description | Key Features |
---|---|---|
generate/gen | Text generation | Multimodal input, tool support, streaming |
stream | Real-time streaming | Live token output, evaluation |
loop | Interactive session | Persistent variables, conversation memory |
setup | Guided configuration | Provider wizard, validation |
status | Health monitoring | Provider health, latency checks |
models list | Model discovery | Capability filtering, availability |
config | Configuration management | Init, validate, export, reset |
memory | Conversation management | Export, import, stats, clear |
mcp | MCP server management | List, discover, connect, status |
provider | Provider operations | List, test, health dashboard |
ollama | Ollama management | Model download, list, remove |
sagemaker | SageMaker operations | Status, endpoint management |
vertex | Vertex AI operations | Auth status, quota checks |
completion | Shell completion | Bash and Zsh support |
validate | Config validation | Environment verification |
### Shell Integration

Bash and Zsh completions for faster command-line workflows:

```bash
# Install Bash completion
neurolink completion bash >> ~/.bashrc

# Install Zsh completion
neurolink completion zsh >> ~/.zshrc
```
Learn more: Complete CLI Reference
## Built-in Tools & MCP Integration

### 8 Core Built-in Agent Tools
Complete autonomous agent foundation with security and validation:
Tool | Function | Capabilities | Security | Status |
---|---|---|---|---|
`getCurrentTime` | Time access | Date/time with timezone support | Safe | ✅ |
`readFile` | File reading | Secure file system access with path validation | Sandboxed | ✅ |
`writeFile` | File writing | File creation and modification with safety checks | HITL | ✅ |
`listFiles` | Directory listing | Directory navigation and listing | Restricted | ✅ |
`createDirectory` | Directory creation | Directory creation with permission checks | Validated | ✅ |
`deleteFile` | File deletion | File and directory deletion with confirmation | HITL | ✅ |
`executeCommand` | Command execution | System command execution with safety limits | HITL | ✅ |
`websearchGrounding` | Web search | Google Vertex web search integration | API-based | ✅ |
Tool Management System:
- ✅ Dynamic tool registration and validation
- ✅ Secure execution with sandboxing
- ✅ Result processing and error recovery
- ✅ Tool discovery and availability tracking
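A minimal SDK sketch of the built-in tools in action. The `generate()` call shape matches the MCP example later in this guide; the `NeuroLink` class import and the `content` result field are assumptions, so verify against the SDK reference:

```typescript
// Built-in tools are auto-registered, so a plain generate() call can use them.
// Assumption: the package exports a NeuroLink class; the result field name
// below ("content") is also an assumption.
import { NeuroLink } from "@juspay/neurolink";

const neurolink = new NeuroLink();

// The model may call getCurrentTime or listFiles on its own; risky tools
// (writeFile, deleteFile, executeCommand) route through HITL approval.
const result = await neurolink.generate({
  input: { text: "What time is it in UTC, and which files are in ./docs?" },
});
console.log(result.content);
```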
📖 Custom Tools Guide - Create your own tools
## Model Context Protocol (MCP) - Enterprise-Grade Ecosystem

### 5 Built-in MCP Servers
NeuroLink includes 5 production-ready MCP servers for enterprise agent deployment:
Server | Purpose | Tools Provided | Status |
---|---|---|---|
AI Core | Provider orchestration | generate, select-provider, check-status | ✅ Operational |
AI Analysis | Analytics capabilities | analyze-usage, performance-metrics | ✅ Operational |
AI Workflow | Workflow automation | execute-workflow, batch-process | ✅ Operational |
Direct Tools | Agent integration | file-ops, web-search, execute | ✅ Operational |
Utilities | General utilities | time, calculations, formatting | ✅ Operational |
### Advanced MCP Infrastructure
Component | Capabilities | Status |
---|---|---|
Tool Registry | Tool registration, execution, statistics | ✅ Active |
External Server Manager | Lifecycle management, health monitoring | ✅ Active |
Tool Discovery Service | Automatic tool discovery and registration | ✅ Active |
MCP Factory | Lighthouse-compatible server creation | ✅ Active |
Flexible Tool Validator | Universal safety validation | ✅ Active |
Context Manager | Rich context with 15+ fields | ✅ Active |
Tool Orchestrator | Sequential pipelines, error handling | ✅ Active |
### Lighthouse MCP Compatibility

- ✅ Factory Pattern: `createMCPServer()` fully compatible with Lighthouse architecture
- ✅ Transport Mechanisms: stdio, SSE, WebSocket support (99% compatibility)
- ✅ Tool Standards: Full MCP specification compliance
- ✅ Context Passing: Rich context with sessionId, userId, permissions (15+ fields)
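For illustration, a hedged sketch of the factory pattern. `createMCPServer()` is named above, but the import path and the options shape used here (name, version, tools) are assumptions:

```typescript
// Sketch only: createMCPServer() is referenced above, but this options
// shape (name/version/tools) and the import path are assumptions.
import { createMCPServer } from "@juspay/neurolink";

const server = createMCPServer({
  name: "team-utilities",
  version: "1.0.0",
  tools: [
    {
      name: "echo",
      description: "Echo back the provided text",
      // Tool handlers receive the rich context (sessionId, userId, ...) noted above
      execute: async (args: { text: string }) => args.text,
    },
  ],
});
```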
### 58+ External MCP Servers
Supported for extended functionality:
Categories:
- Development: GitHub, GitLab, filesystem access
- Databases: PostgreSQL, MySQL, SQLite
- Cloud Storage: Google Drive, AWS S3
- Communication: Slack, email
- And many more...
Quick Example:

```typescript
// Add any MCP server dynamically
await neurolink.addExternalMCPServer("github", {
  command: "npx",
  args: ["-y", "@modelcontextprotocol/server-github"],
  transport: "stdio",
  env: { GITHUB_TOKEN: process.env.GITHUB_TOKEN },
});

// Tools automatically available to AI
const result = await neurolink.generate({
  input: { text: 'Create a GitHub issue titled "Bug in auth flow"' },
});
```
📖 MCP Integration Guide - Setup and usage

📖 MCP Server Catalog - Complete server list (58+)
## Developer Experience Features

### SDK Features
Feature | Description | Documentation |
---|---|---|
Auto Provider Selection | Intelligent provider fallback | SDK Guide |
Streaming Responses | Real-time token streaming | Streaming Guide |
Conversation Memory | Automatic context management | Memory Guide |
Full Type Safety | Complete TypeScript types | Type Reference |
Error Handling | Graceful provider fallback | Error Guide |
Analytics & Evaluation | Usage tracking, quality scores | Analytics Guide |
Middleware System | Request/response hooks | Middleware Guide |
Framework Integration | Next.js, SvelteKit, Express | Framework Guides |
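To ground the streaming row above, a hedged SDK sketch. The `stream()` method name and the chunk shape are assumptions inferred from the CLI's `stream` command, not a confirmed API:

```typescript
// Assumption: the SDK exposes a stream() method mirroring the CLI's
// stream command, yielding chunks with a content field.
import { NeuroLink } from "@juspay/neurolink";

const neurolink = new NeuroLink();
const stream = await neurolink.stream({
  input: { text: "Explain token streaming in one paragraph" },
});

for await (const chunk of stream) {
  process.stdout.write(chunk.content ?? ""); // chunk shape is an assumption
}
```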
### CLI Features
Feature | Description | Documentation |
---|---|---|
Interactive Setup | Guided provider configuration | Setup Guide |
Text Generation | CLI-based generation | Generate Command |
Streaming | Real-time streaming output | Stream Command |
Loop Sessions | Persistent interactive mode | Loop Sessions |
Provider Management | Health checks and status | CLI Guide |
Model Evaluation | Automated testing | Eval Command |
MCP Management | Server discovery and installation | MCP CLI |
15+ Commands for every workflow - see Complete CLI Reference
## Smart Model Selection & Cost Optimization

### Cost Optimization Features
- 💰 Automatic Cost Optimization: Selects cheapest models for simple tasks
- 🔄 LiteLLM Model Routing: Access 100+ models with automatic load balancing
- 🔍 Capability-Based Selection: Find models with specific features (vision, function calling)
- ⚡ Intelligent Fallback: Seamless switching when providers fail
CLI Examples:

```bash
# Cost optimization - automatically use cheapest model
npx @juspay/neurolink generate "Hello" --optimize-cost

# LiteLLM specific model selection
npx @juspay/neurolink generate "Complex analysis" --provider litellm --model "anthropic/claude-3-5-sonnet"

# Auto-select best available provider
npx @juspay/neurolink generate "Write code" # Automatically chooses optimal provider
```
Learn more: Provider Orchestration Guide
## Interactive Loop Mode
NeuroLink features a powerful interactive loop mode that transforms the CLI into a persistent, stateful session.
### Key Capabilities

- Run any CLI command without restarting the session
- Persistent session variables: `set provider openai`, `set temperature 0.9`
- Conversation memory: AI remembers previous turns within the session
- Redis auto-detection: automatically connects if `REDIS_URL` is set
- Export session history as JSON for analytics
### Quick Start

```bash
# Start loop with Redis-backed conversation memory
npx @juspay/neurolink loop --enable-conversation-memory --auto-redis

# Start loop without Redis auto-detection
npx @juspay/neurolink loop --enable-conversation-memory --no-auto-redis
```
### Example Session

```bash
# Start the interactive session
$ npx @juspay/neurolink loop

neurolink » set provider google-ai
✓ provider set to google-ai

neurolink » set temperature 0.8
✓ temperature set to 0.8

neurolink » generate "Tell me a fun fact about space"
The quietest place on Earth is an anechoic chamber at Microsoft's headquarters...

# Exit the session
neurolink » exit
```
📖 Complete Loop Guide - Full documentation with all commands
## Enterprise & Production Features

### Production Capabilities
Feature | Description | Use Case | Documentation |
---|---|---|---|
Enterprise Proxy | Corporate proxy support | Behind firewalls | Proxy Setup |
Redis Memory | Distributed conversation state | Multi-instance deployment | Redis Guide |
Cost Optimization | Automatic cheapest model selection | Budget control | Cost Guide |
Multi-Provider Failover | Automatic provider switching | High availability | Failover Guide |
Telemetry & Monitoring | OpenTelemetry integration | Observability | Telemetry Guide |
Security Hardening | Credential management, auditing | Compliance | Security Guide |
Custom Model Hosting | SageMaker integration | Private models | SageMaker Guide |
Load Balancing | LiteLLM proxy integration | Scale & routing | Load Balancing Guide |
Audit Trails | Comprehensive logging | Compliance | Audit Guide |
Configuration Management | Environment & credential management | Multi-environment deployment | Config Guide |
### Advanced Security Features

#### Human-in-the-Loop (HITL) Policy Engine
Enterprise-grade approval system for sensitive operations:
```typescript
// HITL Policy Configuration
interface HITLPolicy {
  requireApprovalFor: string[]; // Tool-specific policies
  autoApprove: string[]; // Safe operation whitelist
  alwaysDeny: string[]; // Blacklist operations
  timeoutBehavior: "deny" | "approve"; // Timeout handling
}
```
HITL Capabilities:
- ✅ User consent for dangerous operations
- ✅ Configurable policy engine
- ✅ Comprehensive audit trail logging
- ✅ Timeout handling
- ✅ Bulk approval for batch operations
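For illustration, a policy built against the `HITLPolicy` interface above, using the built-in tool names from this guide; how the policy is attached to a NeuroLink instance is an assumption, so treat the wiring comment as a sketch:

```typescript
// Example policy using the HITLPolicy interface defined above.
// Tool names come from the built-in tools table earlier in this guide.
const policy: HITLPolicy = {
  requireApprovalFor: ["writeFile", "deleteFile", "executeCommand"],
  autoApprove: ["getCurrentTime", "readFile", "listFiles"],
  alwaysDeny: [], // nothing hard-blocked in this sketch
  timeoutBehavior: "deny", // fail closed if no approver responds in time
};
// Wiring is an assumption, e.g. new NeuroLink({ hitl: policy }).
```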
#### Advanced Proxy Support
Corporate network compatibility:
Proxy Type | Support | Features |
---|---|---|
AWS Proxy | ✅ Full | AWS-specific proxy configuration |
HTTP/HTTPS Proxy | ✅ Full | Universal proxy across all providers |
No-Proxy Bypass | ✅ Full | Bypass configuration and utilities |
#### Enhanced Guardrails
AI-powered content security:
- ✅ Content Filtering: Automatic content screening
- ✅ Toxicity Detection: Toxic content filtering
- ✅ PII Redaction: Privacy protection and PII detection
- ✅ Custom Rules: Configurable policy rules
- ✅ Security Reporting: Detailed security event reporting
#### Security & Compliance Certifications
- ✅ SOC2 Type II compliant deployments
- ✅ ISO 27001 certified infrastructure compatible
- ✅ GDPR-compliant data handling (EU providers available)
- ✅ HIPAA compatible (with proper configuration)
- ✅ Hardened OS verified (SELinux, AppArmor)
- ✅ Zero credential logging
- ✅ Encrypted configuration storage
📖 Enterprise Deployment Guide - Complete production patterns
## Middleware & Extension System

### Advanced Middleware Architecture
Pluggable request/response processing for custom workflows:
#### Built-in Middleware
Middleware | Purpose | Features | Status |
---|---|---|---|
Analytics | Usage tracking & monitoring | Token counting, timing, performance metrics | ✅ Active |
Guardrails | Content security | Content policies, toxicity detection, PII filtering | ✅ Active |
Auto Evaluation | Quality scoring | LLM-as-judge, accuracy metrics, safety validation | ✅ Active |
#### Middleware System Capabilities

```typescript
// Middleware Configuration
interface MiddlewareFactoryOptions {
  middleware?: NeuroLinkMiddleware[]; // Custom middleware registration
  enabledMiddleware?: string[]; // Selective activation
  disabledMiddleware?: string[]; // Selective deactivation
  middlewareConfig?: Record<string, MiddlewareConfig>; // Per-middleware configuration
  preset?: string; // Preset configurations
  global?: {
    // Global settings
    maxExecutionTime?: number;
    continueOnError?: boolean;
  };
}
```
Middleware Features:
- ✅ Dynamic middleware registration
- ✅ Pipeline execution with performance tracking
- ✅ Runtime configuration changes
- ✅ Error handling and graceful recovery
- ✅ Priority-based execution order
- ✅ Detailed execution statistics
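A hedged sketch of a custom middleware to go with the options interface above. The hook names used here (`onRequest`/`onResponse`) are assumptions about the `NeuroLinkMiddleware` shape, not a confirmed contract:

```typescript
// Assumption: a NeuroLinkMiddleware is an object with a name plus
// request/response hooks; the exact hook signatures may differ.
const timingMiddleware = {
  name: "timing",
  async onRequest(ctx: { startedAt?: number }) {
    ctx.startedAt = Date.now(); // stamp the request
  },
  async onResponse(ctx: { startedAt?: number }) {
    console.log(`Request took ${Date.now() - (ctx.startedAt ?? 0)}ms`);
  },
};

// Registered via the options above, e.g.:
// { middleware: [timingMiddleware], global: { maxExecutionTime: 5000 } }
```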
📖 Custom Middleware Guide - Build your own middleware
## Performance & Optimization

### Intelligent Cost Optimization
- 💰 Model Resolver: Cost optimization algorithms and intelligent routing
- ⚡ Performance Routing: Speed-optimized provider selection
- 🔄 Concurrent Initialization: Reduced latency through parallel loading
- 💾 Caching Strategies: Intelligent response and configuration caching (sketched below)
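To make the caching idea concrete, a minimal response cache keyed by prompt text. NeuroLink's internal caching is richer; the `generate()` and `content` shapes here are assumptions:

```typescript
// Illustrative prompt-keyed response cache: a simplification of the
// caching strategies described above, not NeuroLink's implementation.
const responseCache = new Map<string, string>();

async function cachedGenerate(
  neurolink: { generate: (o: { input: { text: string } }) => Promise<{ content: string }> },
  text: string,
): Promise<string> {
  const hit = responseCache.get(text);
  if (hit !== undefined) return hit; // skip the provider round-trip

  const result = await neurolink.generate({ input: { text } });
  responseCache.set(text, result.content); // result shape is an assumption
  return result.content;
}
```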
### Advanced SageMaker Features
Beyond basic integration - enterprise-grade custom model deployment:
Feature | Description | Status |
---|---|---|
Adaptive Semaphore | Dynamic concurrency control for optimal throughput | ✅ Implemented |
Structured Output Parser | Complex response parsing and validation | ✅ Implemented |
Capability Detection | Automatic endpoint capability discovery | ✅ Implemented |
Batch Inference | Efficient batch processing for high-volume workloads | ✅ Implemented |
Diagnostics System | Real-time endpoint monitoring and debugging | ✅ Implemented |
### Error Handling & Resilience
Production-grade fault tolerance:
- ✅ MCP Circuit Breaker: Fault tolerance with state management
- ✅ Error Hierarchies: Comprehensive error types for HITL, providers, and MCP
- ✅ Graceful Degradation: Intelligent fallback strategies
- ✅ Retry Logic: Configurable retry with exponential backoff (sketched below)
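The retry strategy above, sketched as a generic wrapper. NeuroLink applies equivalent logic internally; this is illustrative, not the library's API:

```typescript
// Generic exponential backoff: waits 1s, 2s, 4s... between attempts.
async function withRetry<T>(fn: () => Promise<T>, maxAttempts = 3): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (attempt < maxAttempts - 1) {
        await new Promise((resolve) => setTimeout(resolve, 2 ** attempt * 1000));
      }
    }
  }
  throw lastError; // all attempts exhausted
}
```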
📖 Performance Optimization Guide - Complete optimization strategies
## Advanced Integrations
Integration | Description |
---|---|
LiteLLM Integration | Access 100+ models from all major providers via LiteLLM routing with unified interface. |
SageMaker Integration | Deploy and call custom endpoints directly from NeuroLink CLI/SDK with full control. |
Mem0 Integration | Persistent semantic memory with vector store support for long-term conversations. |
Enterprise Proxy | Configure outbound policies and compliance posture for corporate environments. |
Configuration Management | Manage environments, regions, and credentials safely across deployments. |
## Advanced Features
Feature | Description |
---|---|
Factory Pattern Architecture | Unified provider interface with automatic fallbacks and type-safe implementations. |
Conversation Memory | Deep dive into memory management, Redis integration, and Mem0 support. |
Custom Middleware | Build request/response hooks for logging, filtering, and custom processing. |
Performance Optimization | Caching, connection pooling, and latency optimization strategies. |
Telemetry & Observability | OpenTelemetry integration for distributed tracing and monitoring. |
Testing Guide | Provider-agnostic testing, mocking, and quality assurance strategies. |
Analytics & Evaluation | Usage tracking, cost monitoring, and quality scoring for AI responses. |
Streaming | Real-time token streaming with provider-specific optimizations. |
## See Also
- Getting Started - Quick start and installation
- CLI Reference - Command-line interface documentation
- SDK Reference - TypeScript API documentation
- Enterprise Guides - Production deployment patterns
- Tutorials - Step-by-step implementation guides
- Examples - Real-world code samples