# NeuroLink Configuration Guide
**IMPLEMENTATION STATUS: COMPLETE (2025-01-07)**

Generate function migration completed; configuration examples updated:

- All code examples now show `generate()` with the new `input` object as the primary call style
- Legacy `generate()` calls (using `prompt`) preserved for reference
- Factory pattern configuration benefits documented
- Zero configuration changes required for migration

**Migration Note**: Configuration remains identical for both the new and legacy `generate()` call styles. All existing configurations continue working unchanged.

**Version**: v1.8.0 | **Last Updated**: January 9, 2025
## Overview
This guide covers all configuration options for NeuroLink, including AI provider setup, dynamic model configuration, MCP integration, and environment configuration.
### Basic Usage Examples
```typescript
import { NeuroLink } from "@juspay/neurolink";

const neurolink = new NeuroLink();

// NEW: Primary method (recommended)
const result = await neurolink.generate({
  input: { text: "Configure AI providers" },
  provider: "google-ai",
  temperature: 0.7,
});

// LEGACY: Still fully supported
const legacyResult = await neurolink.generate({
  prompt: "Configure AI providers",
  provider: "google-ai",
  temperature: 0.7,
});
```
## AI Provider Configuration

### Environment Variables

NeuroLink supports multiple AI providers. Set up one or more API keys:
```bash
# Google AI Studio (Recommended - Free tier available)
export GOOGLE_AI_API_KEY="AIza-your-google-ai-api-key"

# OpenAI
export OPENAI_API_KEY="sk-your-openai-api-key"

# Anthropic
export ANTHROPIC_API_KEY="sk-ant-your-anthropic-api-key"

# Azure OpenAI
export AZURE_OPENAI_API_KEY="your-azure-key"
export AZURE_OPENAI_ENDPOINT="https://your-resource.openai.azure.com/"

# AWS Bedrock
export AWS_ACCESS_KEY_ID="your-access-key"
export AWS_SECRET_ACCESS_KEY="your-secret-key"
export AWS_REGION="us-east-1"

# Hugging Face
export HUGGING_FACE_API_KEY="hf_your-hugging-face-token"

# Mistral AI
export MISTRAL_API_KEY="your-mistral-api-key"
```
### .env File Configuration

Create a `.env` file in your project root:
```bash
# .env file - automatically loaded by NeuroLink
GOOGLE_AI_API_KEY=AIza-your-google-ai-api-key
OPENAI_API_KEY=sk-your-openai-api-key
ANTHROPIC_API_KEY=sk-ant-your-anthropic-api-key

# Optional: Provider preferences
NEUROLINK_PREFERRED_PROVIDER=google-ai
NEUROLINK_DEBUG=false
```
### Provider Selection Priority

NeuroLink automatically selects the best available provider:
1. **Google AI Studio** (if `GOOGLE_AI_API_KEY` is set)
2. **OpenAI** (if `OPENAI_API_KEY` is set)
3. **Anthropic** (if `ANTHROPIC_API_KEY` is set)
4. Other providers in order of availability
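The first-match priority scan described above can be sketched as follows. This is an illustrative sketch of the selection order only, not NeuroLink's actual source; `selectProvider` and `PROVIDER_PRIORITY` are hypothetical names:

```typescript
// Illustrative sketch of the priority order above — not NeuroLink's real code.
const PROVIDER_PRIORITY: Array<[name: string, envKey: string]> = [
  ["google-ai", "GOOGLE_AI_API_KEY"],
  ["openai", "OPENAI_API_KEY"],
  ["anthropic", "ANTHROPIC_API_KEY"],
];

function selectProvider(
  env: Record<string, string | undefined>,
): string | undefined {
  // The first provider whose API key is present wins.
  for (const [name, envKey] of PROVIDER_PRIORITY) {
    if (env[envKey]) return name;
  }
  return undefined; // no provider configured
}
```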
Force a specific provider:

```bash
# CLI
npx neurolink generate "Hello" --provider openai
```

```typescript
// SDK
const provider = createAIProvider("openai");
```
## Dynamic Model Configuration (v1.8.0+)

### Overview
The dynamic model system enables intelligent model selection, cost optimization, and runtime model configuration without code changes.
### Environment Variables
```bash
# Dynamic Model System Configuration
export MODEL_SERVER_URL="http://localhost:3001"   # Model config server URL
export MODEL_CONFIG_PATH="./config/models.json"   # Model configuration file
export ENABLE_DYNAMIC_MODELS="true"               # Enable dynamic models
export DEFAULT_MODEL_PREFERENCE="quality"         # 'cost', 'speed', or 'quality'
export FALLBACK_MODEL="gpt-4o-mini"               # Fallback when preferred unavailable
```
### Model Configuration Server

Start the model configuration server to enable dynamic model features:
```bash
# Start the model server (provides REST API for model configs)
npm run start:model-server

# Server provides endpoints at http://localhost:3001:
# GET /models - List all models
# GET /models/search?capability=vision - Search by capability
# GET /models/provider/anthropic - Get provider models
# GET /models/resolve/claude-latest - Resolve aliases
```
### Model Configuration File

Create or modify `config/models.json` to define available models:
```json
{
  "models": [
    {
      "id": "claude-3-5-sonnet",
      "name": "Claude 3.5 Sonnet",
      "provider": "anthropic",
      "pricing": { "input": 0.003, "output": 0.015 },
      "capabilities": ["functionCalling", "vision", "code"],
      "contextWindow": 200000,
      "deprecated": false,
      "aliases": ["claude-latest", "best-coding"]
    }
  ],
  "aliases": {
    "claude-latest": "claude-3-5-sonnet",
    "fastest": "gpt-4o-mini",
    "cheapest": "claude-3-haiku"
  }
}
```
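Alias resolution over this file shape can be sketched as a single lookup followed by an id check. This is a minimal sketch of the mechanism implied by the `aliases` map above, not the registry's actual implementation; `resolveModel` is a hypothetical name:

```typescript
// Hypothetical resolver mirroring the "aliases" section of config/models.json.
type ModelsFile = {
  models: { id: string }[];
  aliases: Record<string, string>;
};

function resolveModel(name: string, config: ModelsFile): string | undefined {
  // Follow an alias at most once: aliases map friendly names to model ids.
  const id = config.aliases[name] ?? name;
  return config.models.some((m) => m.id === id) ? id : undefined;
}
```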
### Dynamic Model Usage

#### CLI Usage
```bash
# Use model aliases for convenience
npx neurolink generate "Write code" --model best-coding

# Capability-based selection
npx neurolink generate "Describe image" --capability vision --optimize-cost

# Search and discover models
npx neurolink models search --capability functionCalling --max-price 0.001
npx neurolink models list
npx neurolink models best --use-case coding
```
#### SDK Usage
```typescript
import { AIProviderFactory, DynamicModelRegistry } from "@juspay/neurolink";

const factory = new AIProviderFactory();
const registry = new DynamicModelRegistry();

// Use aliases for easy access
const provider = await factory.createProvider({
  provider: "anthropic",
  model: "claude-latest", // Auto-resolves to latest Claude
});

// Capability-based selection
const visionProvider = await factory.createProvider({
  provider: "auto",
  capability: "vision", // Automatically selects best vision model
  optimizeFor: "cost", // Prefer cost-effective options
});

// Find optimal model for specific needs
const bestModel = await registry.findBestModel({
  capability: "code",
  maxPrice: 0.005, // Max $0.005 per 1K tokens
  provider: "anthropic", // Prefer Anthropic models
});
```
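The "find optimal model" step amounts to filtering on capability, price ceiling, and provider, then taking the cheapest candidate. The sketch below illustrates that selection logic under those assumptions; it is not the `DynamicModelRegistry` implementation, and the `ModelInfo` shape simply mirrors the `models.json` entries shown earlier:

```typescript
// Illustrative selection logic only — not the actual registry code.
interface ModelInfo {
  id: string;
  provider: string;
  pricing: { input: number; output: number };
  capabilities: string[];
}

function pickBestModel(
  models: ModelInfo[],
  opts: { capability: string; maxPrice: number; provider?: string },
): ModelInfo | undefined {
  const candidates = models.filter(
    (m) =>
      m.capabilities.includes(opts.capability) &&
      m.pricing.input <= opts.maxPrice &&
      (!opts.provider || m.provider === opts.provider),
  );
  // Cheapest qualifying model wins (sorted by input-token price).
  return candidates.sort((a, b) => a.pricing.input - b.pricing.input)[0];
}
```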
### Benefits

- **Runtime Updates**: Add new models without code deployment
- **Smart Selection**: Automatic model selection based on capabilities
- **Cost Optimization**: Choose models based on price constraints
- **Easy Aliases**: Use friendly names like "claude-latest" and "fastest"
- **Provider Agnostic**: Unified interface across all AI providers
## MCP Configuration (v1.7.1)

### Built-in Tools Configuration

Built-in tools are automatically available in v1.7.1:
```json
{
  "builtInTools": {
    "enabled": true,
    "tools": ["time", "utilities", "registry", "configuration", "validation"]
  }
}
```
Test built-in tools:
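The command snippet was lost here in extraction; the validation section later in this guide tests built-in tools with the same debug invocation, so it was likely along these lines (requires the NeuroLink CLI):

```bash
npx neurolink generate "What time is it?" --debug
```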
### External MCP Server Configuration

External servers are auto-discovered from all major AI tools:

#### Auto-Discovery Locations
**macOS:**

```
~/Library/Application Support/Claude/
~/Library/Application Support/Code/User/
~/.cursor/
~/.codeium/windsurf/
```

**Linux:**

**Windows:**
### Manual MCP Configuration

Create `.mcp-config.json` in your project root:
```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/"],
      "transport": "stdio"
    }
  }
}
```
### MCP Discovery Commands
```bash
# Discover all external servers
npx neurolink mcp discover --format table

# Export discovery results
npx neurolink mcp discover --format json > discovered-servers.json

# Test discovery
npx neurolink mcp discover --format yaml
```
## CLI Configuration

### Global CLI Options
```bash
# Debug mode
export NEUROLINK_DEBUG=true

# Preferred provider
export NEUROLINK_PREFERRED_PROVIDER=google-ai

# Custom timeout
export NEUROLINK_TIMEOUT=30000
```
### Command-line Options
```bash
# Provider selection
npx neurolink generate "Hello" --provider openai

# Debug output
npx neurolink generate "Hello" --debug

# Temperature control
npx neurolink generate "Hello" --temperature 0.7

# Token limits
npx neurolink generate "Hello" --max-tokens 1000

# Disable tools
npx neurolink generate "Hello" --disable-tools
```
## Development Configuration

### TypeScript Configuration

For TypeScript projects, add to your `tsconfig.json`:
```json
{
  "compilerOptions": {
    "moduleResolution": "node",
    "allowSyntheticDefaultImports": true,
    "esModuleInterop": true,
    "strict": true
  },
  "include": ["src/**/*", "node_modules/@juspay/neurolink/dist/**/*"]
}
```
### Package.json Scripts

Add useful scripts to your `package.json`:
```json
{
  "scripts": {
    "neurolink:status": "npx neurolink status --verbose",
    "neurolink:test": "npx neurolink generate 'Test message'",
    "neurolink:mcp-discover": "npx neurolink mcp discover --format table",
    "neurolink:mcp-test": "npx neurolink generate 'What time is it?' --debug"
  }
}
```
### Environment Setup Script

Create `setup-neurolink.sh`:
```bash
#!/bin/bash
echo "NeuroLink Environment Setup"

# Check Node.js version
if ! command -v node &> /dev/null; then
  echo "Node.js not found. Please install Node.js v18+"
  exit 1
fi

NODE_VERSION=$(node -v | cut -d'v' -f2 | cut -d'.' -f1)
if [ "$NODE_VERSION" -lt 18 ]; then
  echo "Node.js v18+ required. Current version: $(node -v)"
  exit 1
fi

# Install NeuroLink
echo "Installing NeuroLink..."
npm install @juspay/neurolink

# Create .env template
if [ ! -f .env ]; then
  echo "Creating .env template..."
  cat > .env << EOF
# NeuroLink Configuration
# Set at least one API key:

# Google AI Studio (Free tier available)
GOOGLE_AI_API_KEY=AIza-your-google-ai-api-key

# OpenAI (Paid service)
# OPENAI_API_KEY=sk-your-openai-api-key

# Optional settings
NEUROLINK_DEBUG=false
NEUROLINK_PREFERRED_PROVIDER=google-ai
EOF
  echo "Created .env template. Please add your API keys."
else
  echo ".env file already exists"
fi

# Test installation
echo "Testing installation..."
if npx neurolink status > /dev/null 2>&1; then
  echo "NeuroLink installed successfully"

  # Test MCP discovery
  echo "Testing MCP discovery..."
  SERVERS=$(npx neurolink mcp discover --format json 2>/dev/null | jq '.servers | length' 2>/dev/null || echo "0")
  echo "Discovered $SERVERS external MCP servers"

  echo ""
  echo "Setup complete! Next steps:"
  echo "1. Add your API key to .env file"
  echo "2. Test: npx neurolink generate 'Hello'"
  echo "3. Test MCP tools: npx neurolink generate 'What time is it?' --debug"
else
  echo "Installation test failed"
  exit 1
fi
```
## Advanced Configuration

### Custom Provider Configuration
```typescript
import { createAIProvider } from "@juspay/neurolink";

// Custom provider settings
const provider = createAIProvider("openai", {
  apiKey: process.env.OPENAI_API_KEY,
  baseURL: "https://api.openai.com/v1",
  timeout: 30000,
  retries: 3,
});
```
### Tool Configuration
```typescript
// Enable/disable tools
const result = await provider.generate({
  prompt: "Hello",
  tools: {
    enabled: true,
    allowedTools: ["time", "utilities"],
    maxToolCalls: 5,
  },
});
```
### Logging Configuration
```bash
# Enable detailed logging
export NEUROLINK_DEBUG=true
export NEUROLINK_LOG_LEVEL=verbose

# Custom log format
export NEUROLINK_LOG_FORMAT=json
```
## Security Configuration

### API Key Security
```bash
# Use environment variables (not hardcoded)
export GOOGLE_AI_API_KEY="$(cat ~/.secrets/google-ai-key)"

# Use .env files (add to .gitignore)
echo ".env" >> .gitignore
```
### Tool Security
```json
{
  "toolSecurity": {
    "allowedDomains": ["api.example.com"],
    "blockedTools": ["dangerous-tool"],
    "requireConfirmation": true
  }
}
```
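Enforcement of a policy with this shape reduces to two checks: block-list first, then domain allow-list for network-bound calls. The sketch below is a hypothetical illustration of that logic (`isToolCallAllowed` is not a NeuroLink API):

```typescript
// Hypothetical enforcement sketch for the "toolSecurity" policy shape above.
interface ToolSecurityPolicy {
  allowedDomains: string[];
  blockedTools: string[];
  requireConfirmation: boolean;
}

function isToolCallAllowed(
  tool: string,
  domain: string | null, // null for tools that make no network calls
  policy: ToolSecurityPolicy,
): boolean {
  if (policy.blockedTools.includes(tool)) return false;
  // Network-bound tools must target an allow-listed domain.
  if (domain !== null && !policy.allowedDomains.includes(domain)) return false;
  return true;
}
```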
## Testing Configuration

### Test Environment Setup
```bash
# Test environment
export NEUROLINK_ENV=test
export NEUROLINK_DEBUG=true

# Mock providers for testing
export NEUROLINK_MOCK_PROVIDERS=true
```
### Validation Commands
```bash
# Validate configuration
npx neurolink status --verbose

# Test built-in tools (v1.7.1)
npx neurolink generate "What time is it?" --debug

# Test external discovery
npx neurolink mcp discover --format table

# Full system test
npm run build && npm run test:run -- test/mcp-comprehensive.test.ts
```
## Configuration Examples
### Minimal Setup (Google AI)
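The snippet for this section was lost in extraction; given the `.env` examples earlier in this guide and the tip at the end of this page, the minimal setup is most likely a single key:

```bash
# .env - minimal setup
GOOGLE_AI_API_KEY=AIza-your-google-ai-api-key
```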
### Multi-Provider Setup
```bash
GOOGLE_AI_API_KEY=AIza-your-google-key
OPENAI_API_KEY=sk-your-openai-key
ANTHROPIC_API_KEY=sk-ant-your-anthropic-key
NEUROLINK_PREFERRED_PROVIDER=google-ai
```
### Development Setup
```bash
NEUROLINK_DEBUG=true
NEUROLINK_LOG_LEVEL=verbose
NEUROLINK_TIMEOUT=60000
NEUROLINK_MOCK_PROVIDERS=false
```
**Tip**: For most users, setting `GOOGLE_AI_API_KEY` is sufficient to get started with NeuroLink and test all MCP functionality!