Frequently Asked Questions¶
Common questions and answers about NeuroLink usage, configuration, and troubleshooting.
🚀 Getting Started¶
Q: What is NeuroLink?¶
A: NeuroLink is an enterprise AI development platform that provides unified access to multiple AI providers (OpenAI, Google AI, Anthropic, AWS Bedrock, etc.) through a single SDK and CLI. It includes built-in tools, analytics, evaluation capabilities, and supports the Model Context Protocol (MCP) for extended functionality.
Q: Which AI providers does NeuroLink support?¶
A: NeuroLink supports 9+ AI providers:
- OpenAI (GPT-4, GPT-4o, GPT-3.5-turbo)
- Google AI Studio (Gemini models)
- Google Vertex AI (Gemini, Claude via Vertex)
- Anthropic (Claude 3.5 Sonnet, Haiku, Opus)
- AWS Bedrock (Claude, Titan models)
- Azure OpenAI (GPT models)
- Hugging Face (Open source models)
- Ollama (Local AI models)
- Mistral AI (Mistral models)
Q: Do I need to install anything?¶
A: No installation required! You can use NeuroLink directly with npx:
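For example, a one-off generation using the gen command shown throughout this FAQ (the prompt is just an illustration):
# Run the CLI without installing anything
npx @juspay/neurolink gen "Write a haiku about the ocean"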
For frequent use, you can install globally: npm install -g @juspay/neurolink
🔧 Configuration¶
Q: How do I set up API keys?¶
A: Create a .env file in your project directory:
# .env file
OPENAI_API_KEY="sk-your-openai-key"
GOOGLE_AI_API_KEY="AIza-your-google-ai-key"
ANTHROPIC_API_KEY="sk-ant-your-anthropic-key"
# ... other providers
NeuroLink automatically loads these environment variables.
Q: Can I use NeuroLink behind a corporate proxy?¶
A: Yes! NeuroLink automatically detects and uses corporate proxy settings:
export HTTPS_PROXY="http://proxy.company.com:8080"
export HTTP_PROXY="http://proxy.company.com:8080"
export NO_PROXY="localhost,127.0.0.1,.company.com"
No additional configuration needed.
Q: How do I configure multiple environments (dev/staging/prod)?¶
A: Use environment-specific .env files:
# .env.development
NEUROLINK_LOG_LEVEL="debug"
NEUROLINK_CACHE_ENABLED="false"
# .env.production
NEUROLINK_LOG_LEVEL="warn"
NEUROLINK_CACHE_ENABLED="true"
NEUROLINK_ANALYTICS_ENABLED="true"
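NeuroLink itself reads .env as noted above; how an environment-specific file gets loaded is up to your tooling. One common pattern (using the third-party dotenv-cli package, shown here as an illustration rather than a NeuroLink feature):
# Load a specific environment file before running a command
npx dotenv-cli -e .env.production -- npx @juspay/neurolink gen "Your prompt"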
🎯 Usage¶
Q: What's the difference between CLI and SDK?¶
A:
| Feature | CLI | SDK |
| --- | --- | --- |
| Best for | Scripts, automation, testing | Applications, integration |
| Installation | None required (npx) | npm install required |
| Output | Text, JSON | Native JavaScript objects |
| Batch processing | Built-in batch command | Manual implementation |
| Learning curve | Low | Medium |
Q: How do I choose the best provider for my use case?¶
A: NeuroLink can auto-select the best provider, or you can choose based on:
- Speed: Google AI (fastest responses)
- Coding: Anthropic Claude (best for code analysis)
- Creative: OpenAI (best for creative content)
- Cost: Google AI Studio (free tier available)
- Enterprise: AWS Bedrock or Azure OpenAI
# Auto-selection
npx @juspay/neurolink gen "Your prompt" --provider auto
# Specific provider
npx @juspay/neurolink gen "Your prompt" --provider google-ai
Q: Can I use multiple providers in the same application?¶
A: Yes! You can specify different providers for different requests:
import { NeuroLink } from "@juspay/neurolink";
const neurolink = new NeuroLink();
// Use different providers for different tasks
const code = await neurolink.generate({
  input: { text: "Write a Python function" },
  provider: "anthropic",
});

const creative = await neurolink.generate({
  input: { text: "Write a poem" },
  provider: "openai",
});
🔍 Troubleshooting¶
Q: Why am I getting "API key not found" errors?¶
A: Common solutions:
- Check that the .env file exists and is in the correct directory
- Verify the file format: no spaces around = signs (see the example after this list)
- Check file permissions: the .env file should be readable
- Verify the key format: keys should start with provider-specific prefixes
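For reference, a correct entry versus the spacing mistake mentioned above (the key value is a placeholder):
# Correct - no spaces around the = sign
OPENAI_API_KEY="sk-your-openai-key"
# Incorrect - spaces around = commonly cause "API key not found" errors
OPENAI_API_KEY = "sk-your-openai-key"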
Q: Provider status shows "Authentication failed" - what should I do?¶
A:
- Verify API key is correct and hasn't expired
- Check account status - ensure billing is set up if required
- Test the API key manually (see the example after this list)
- Check regional restrictions - some providers have geographic limitations
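One way to test a key by hand is a direct request to the provider; for example, OpenAI's models endpoint (adapt the URL and header for other providers):
# Returns a JSON list of models if the key is valid
curl https://api.openai.com/v1/models \
  -H "Authorization: Bearer $OPENAI_API_KEY"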
Q: AWS Bedrock shows "Not Authorized" - how do I fix this?¶
A: AWS Bedrock requires additional setup:
- Request model access in AWS Bedrock console
- Use the full inference profile ARN for Anthropic models (see the example after this list)
- Verify IAM permissions include AmazonBedrockFullAccess
- Check the AWS region - Bedrock isn't available in all regions
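For illustration, an inference profile ARN has the following shape (the region, account ID, and model segment here are placeholders - copy the exact ARN from your AWS Bedrock console):
# Example shape only - not a real ARN
arn:aws:bedrock:us-east-1:123456789012:inference-profile/us.anthropic.claude-3-5-sonnet-20241022-v2:0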
Q: Google Vertex AI authentication issues?¶
A: Vertex AI supports multiple authentication methods:
# Method 1: Service account file
GOOGLE_APPLICATION_CREDENTIALS="/path/to/service-account.json"
# Method 2: Individual environment variables
GOOGLE_AUTH_CLIENT_EMAIL="service-account@project.iam.gserviceaccount.com"
GOOGLE_AUTH_PRIVATE_KEY="-----BEGIN PRIVATE KEY-----..."
# Required for both methods
GOOGLE_VERTEX_PROJECT="your-gcp-project-id"
GOOGLE_VERTEX_LOCATION="us-central1"
Q: Why are my requests timing out?¶
A: Try these solutions:
- Increase the request timeout
- Check network connectivity
- Reduce max tokens for faster responses
- Switch to faster provider (Google AI is typically fastest)
Q: How do I handle rate limits?¶
A:
- Use batch processing with delays between requests
- Switch providers when rate limited
- Implement exponential backoff in your applications (see the sketch after this list)
- Upgrade API plan for higher limits
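A minimal backoff sketch using the SDK calls and the RATE_LIMIT_EXCEEDED error code shown elsewhere in this FAQ (retry counts and delays are arbitrary choices, not NeuroLink defaults):
import { NeuroLink } from "@juspay/neurolink";

const neurolink = new NeuroLink();

// Retry a generate call with exponential backoff when rate limited
async function generateWithBackoff(text: string, maxRetries = 5) {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    try {
      return await neurolink.generate({ input: { text } });
    } catch (error) {
      const rateLimited = (error as any).code === "RATE_LIMIT_EXCEEDED";
      if (!rateLimited || attempt === maxRetries - 1) {
        throw error; // not a rate limit, or retries exhausted
      }
      const delayMs = 1000 * 2 ** attempt; // 1s, 2s, 4s, ...
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
  throw new Error("Retries exhausted");
}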
🚀 Advanced Features¶
Q: What are analytics and evaluation features?¶
A:
- Analytics: Track usage metrics, costs, and performance
- Evaluation: AI-powered quality scoring of responses
# Enable analytics
npx @juspay/neurolink gen "prompt" --enable-analytics
# Enable evaluation
npx @juspay/neurolink gen "prompt" --enable-evaluation
# Both together
npx @juspay/neurolink gen "prompt" --enable-analytics --enable-evaluation
Q: What is MCP integration?¶
A: Model Context Protocol (MCP) allows NeuroLink to use external tools like file systems, databases, and APIs. NeuroLink includes built-in tools and can discover MCP servers from other AI applications.
# List discovered MCP servers
npx @juspay/neurolink mcp list
# Test built-in tools
npx @juspay/neurolink gen "What time is it?" --debug
Q: How do I use streaming responses?¶
A:
# CLI streaming
npx @juspay/neurolink stream "Tell me a story"
// SDK streaming
const stream = await neurolink.stream({
  input: { text: "Tell me a story" },
});

for await (const chunk of stream) {
  console.log(chunk.content);
}
🏢 Enterprise Usage¶
Q: Is NeuroLink suitable for enterprise use?¶
A: Yes! NeuroLink is designed for enterprise use with:
- Corporate proxy support
- Multiple authentication methods
- Audit logging and analytics
- Provider fallback and reliability
- Comprehensive error handling
- Security best practices
Q: How do I deploy NeuroLink in production?¶
A: Best practices:
- Use environment variables for configuration
- Implement secret management (AWS Secrets Manager, Azure Key Vault)
- Enable analytics for monitoring
- Set up provider fallbacks (see the sketch after this list)
- Configure appropriate timeouts
- Monitor provider health
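A minimal fallback sketch using the provider option shown earlier (the provider order is illustrative, not a recommendation):
import { NeuroLink } from "@juspay/neurolink";

const neurolink = new NeuroLink();

// Try providers in order until one succeeds
async function generateWithFallback(text: string) {
  const providers = ["google-ai", "openai", "anthropic"]; // illustrative order
  let lastError: unknown;
  for (const provider of providers) {
    try {
      return await neurolink.generate({ input: { text }, provider });
    } catch (error) {
      lastError = error; // remember the failure and try the next provider
    }
  }
  throw lastError;
}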
Q: Can I use NeuroLink in CI/CD pipelines?¶
A: Absolutely! Common use cases:
# Generate documentation
npx @juspay/neurolink gen "Create API docs" > docs/api.md
# Code review
npx @juspay/neurolink gen "Review this code for issues" --provider anthropic
# Release notes
npx @juspay/neurolink gen "Generate release notes from git log"
Q: How do I track costs across teams?¶
A: Use analytics with context:
npx @juspay/neurolink gen "prompt" \
--enable-analytics \
--context '{"team":"backend","project":"api","user":"dev123"}'
🔧 Development¶
Q: How do I integrate NeuroLink with React?¶
A:
import { NeuroLink } from "@juspay/neurolink";
import { useState } from "react";
// Create the client once, outside the component, so it isn't recreated on every render
const neurolink = new NeuroLink();

function AIComponent() {
  const [response, setResponse] = useState("");

  const generate = async () => {
    const result = await neurolink.generate({
      input: { text: "Hello AI" },
    });
    setResponse(result.content);
  };

  return (
    <div>
      <button onClick={generate}>Generate</button>
      <p>{response}</p>
    </div>
  );
}
Q: How do I handle errors properly?¶
A:
try {
  const result = await neurolink.generate({
    input: { text: "Your prompt" },
  });
  console.log(result.content);
} catch (error) {
  if (error.code === "RATE_LIMIT_EXCEEDED") {
    // Handle rate limiting
  } else if (error.code === "AUTHENTICATION_FAILED") {
    // Handle auth issues
  } else {
    // Handle other errors
  }
}
Q: Can I create custom tools?¶
A: Yes! NeuroLink supports custom MCP servers:
# Add custom MCP server
npx @juspay/neurolink mcp add myserver "python /path/to/server.py"
# Test custom server
npx @juspay/neurolink mcp test myserver
💰 Pricing and Costs¶
Q: How much does NeuroLink cost?¶
A: NeuroLink itself is free! You only pay for the AI provider usage (OpenAI, Google AI, etc.). NeuroLink helps optimize costs by:
- Auto-selecting cheapest suitable providers
- Analytics to track spending
- Batch processing for efficiency
- Built-in rate limiting
Q: Which provider is most cost-effective?¶
A: Generally:
- Google AI Studio - Free tier available
- Google Vertex AI - Competitive pricing
- OpenAI GPT-4o-mini - Good balance of cost/performance
- Anthropic Claude Haiku - Fast and affordable
Use npx @juspay/neurolink models best --use-case cheapest to find the most cost-effective option.
Q: How can I monitor and control costs?¶
A:
- Enable analytics to track usage and costs
- Set provider limits in your AI provider dashboards
- Use cheaper models for non-critical tasks
- Implement caching for repeated requests (see the sketch after this list)
- Monitor with evaluation to ensure quality
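A minimal in-memory cache sketch for repeated prompts (a plain Map keyed on the prompt text; cache sizing and invalidation are application choices, not NeuroLink features):
import { NeuroLink } from "@juspay/neurolink";

const neurolink = new NeuroLink();
const cache = new Map<string, string>();

// Return a cached response when the same prompt was generated before
async function generateCached(text: string): Promise<string> {
  const cached = cache.get(text);
  if (cached !== undefined) {
    return cached;
  }
  const result = await neurolink.generate({ input: { text } });
  cache.set(text, result.content);
  return result.content;
}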
🆘 Getting Help¶
Q: Where can I get help?¶
A:
- Documentation: Comprehensive guides and API reference
- GitHub Issues: Report bugs and request features
- Troubleshooting Guide: Common issues and solutions
- Examples: Practical usage patterns
Q: How do I report a bug?¶
A:
- Check existing issues on GitHub
- Include reproduction steps
- Provide environment details:
  - Node.js version
  - NeuroLink version
  - Operating system
  - Error messages
- Share configuration (without API keys!)
Q: How do I request a new feature?¶
A:
- Search existing feature requests
- Open GitHub issue with "enhancement" label
- Describe use case and expected behavior
- Provide examples of how the feature would be used
Q: Can I contribute to NeuroLink?¶
A: Yes! We welcome contributions:
- Read the contributing guide
- Start with good first issues
- Follow code style guidelines
- Include tests and documentation
- Submit pull request
🔄 Migration and Updates¶
Q: How do I update NeuroLink?¶
A:
# For global installation
npm update -g @juspay/neurolink
# For project installation
npm update @juspay/neurolink
# Check version
npx @juspay/neurolink --version
Q: Are there breaking changes between versions?¶
A: NeuroLink follows semantic versioning:
- Patch updates (1.0.1): Bug fixes, no breaking changes
- Minor updates (1.1.0): New features, backward compatible
- Major updates (2.0.0): Breaking changes, migration guide provided
Q: How do I migrate from other AI libraries?¶
A: NeuroLink provides simple migration paths:
// From OpenAI SDK
import OpenAI from "openai";
const openai = new OpenAI();
// To NeuroLink
import { NeuroLink } from "@juspay/neurolink";
const neurolink = new NeuroLink();
// Similar API, enhanced features
const result = await neurolink.generate({
  input: { text: "Your prompt" },
  provider: "openai", // Optional, can use any provider
});
📚 Related Documentation¶
- Quick Start Guide - Get started in 2 minutes
- Installation Guide - Detailed setup instructions
- Troubleshooting Guide - Common issues and solutions
- CLI Commands - Complete CLI reference
- API Reference - SDK documentation