# 🚨 NeuroLink Troubleshooting Guide

✅ IMPLEMENTATION STATUS: COMPLETE (2025-01-07)

Generate Function Migration completed – troubleshooting updated for the new primary method:

- ✅ Added troubleshooting for the `generate()` function
- ✅ Migration guidance for common issues
- ✅ Legacy `generate()` troubleshooting preserved
- ✅ Factory pattern error handling documented

Migration Note: Most issues apply to both the new `generate()` API and the legacy `generate()` API. Use the new `generate()` examples when troubleshooting.

Version: v7.47.0 | Last Updated: September 26, 2025
## 📋 Overview

This guide helps diagnose and resolve common issues with NeuroLink, including AI provider connectivity, MCP integration, CLI usage problems, and the new generate function migration.
## 🆕 New in v7.47 – Quick Fixes

| Symptom | Resolution |
| --- | --- |
| `Image not found` when using `--image` | Provide an absolute path or run the command from the directory containing the asset. URLs must be HTTPS. |
| `Evaluation model not configured` | Set `NEUROLINK_EVALUATION_PROVIDER`/`NEUROLINK_EVALUATION_MODEL`, or disable `--enableEvaluation` until credentials are added. |
| `Redis connection failed` in loop mode | Export `REDIS_URL` before running `neurolink loop`, or start the session with `--no-auto-redis`. |
| Model not available in region | Confirm the model supports the requested region and update `AWS_REGION`/`GOOGLE_VERTEX_LOCATION` accordingly. |
| CLI exits after error inside loop | Upgrade to `@juspay/neurolink@>=7.47.0` and restart the loop; newer builds catch errors without exiting. |
## 🚀 Q4 2025 Features – Common Issues

### Human-in-the-Loop (HITL)

| Issue | Solution |
| --- | --- |
| Tool executes without asking permission | Add `requiresConfirmation: true` to the tool definition → See HITL Guide |
| Confirmation dialog doesn't appear | Handle the `USER_CONFIRMATION_REQUIRED` error in your UI → See HITL Guide |
| Permission flag not resetting | Call `setUserConfirmation(false)` after tool execution → See HITL Guide |
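A minimal sketch of the pattern these rows describe. The `requiresConfirmation` flag, the `USER_CONFIRMATION_REQUIRED` error, and `setUserConfirmation()` come from the table above; the surrounding tool shape and the error's `code` field are illustrative assumptions, so confirm the exact API against the HITL Guide.

```typescript
// Sketch only: tool shape and error field are assumptions, not the confirmed API.
const deleteFileTool = {
  name: "delete_file",
  description: "Delete a file from the workspace",
  requiresConfirmation: true, // HITL: ask the user before executing
  execute: async (args: { path: string }) => {
    /* ...dangerous operation... */
  },
};

try {
  const result = await neurolink.generate({
    input: { text: "Delete old log files" },
  });
} catch (err: any) {
  if (err?.code === "USER_CONFIRMATION_REQUIRED") {
    // Surface a confirmation dialog in your UI, then resume:
    // neurolink.setUserConfirmation(true);
  }
  // Reset after the tool runs: neurolink.setUserConfirmation(false);
}
```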
### Guardrails Middleware

| Issue | Solution |
| --- | --- |
| Content not being filtered | Ensure `preset: "security"` is set in the middleware config → See Guardrails Guide |
| Too many false positives | Review the bad word list and remove common words → See Guardrails Guide |
| Model-based filter is slow | Switch to `gpt-4o-mini` for faster filtering → See Guardrails Guide |
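A hedged configuration sketch tying these rows together. The `preset: "security"` and `gpt-4o-mini` values come from the table; the constructor and middleware config shape are assumptions, so check the Guardrails Guide for the exact structure.

```typescript
// Sketch only: exact config shape may differ; see the Guardrails Guide.
const neurolink = new NeuroLink({
  middleware: {
    guardrails: {
      preset: "security",   // enables content filtering (row 1)
      model: "gpt-4o-mini", // faster model-based filtering (row 3)
    },
  },
});
```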
### Redis Conversation Export

| Issue | Solution |
| --- | --- |
| Export returns empty history | Verify the Redis connection and that the session ID exists → See Conversation History Guide |
| `exportConversationHistory` method not found | Ensure `conversationMemory.store: "redis"` is configured → See Conversation History Guide |
| Missing metadata in export | Set `includeMetadata: true` in the export options → See Conversation History Guide |
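Putting the three rows together, an export call might look like the sketch below. `exportConversationHistory` and `includeMetadata` come from the table; the `sessionId` option name is an assumption, so verify it against the Conversation History Guide.

```typescript
// Assumes conversationMemory.store: "redis" is configured (row 2).
const history = await neurolink.exportConversationHistory({
  sessionId: "session123", // must exist in Redis (row 1); option name assumed
  includeMetadata: true,   // required for metadata in the export (row 3)
});
```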
## 🎯 Generate Function Migration Issues

### Migration Questions

Q: Should I update my existing code to use the new `generate()` API?

A: Optional. Your existing legacy `generate()` code continues working unchanged. Prefer the new `generate()` API for new projects.

Q: What's the difference between the new `generate()` and the legacy `generate()`?

A: The new `generate()` has a more extensible interface for future multi-modal features. Both produce identical results for text generation today.

Q: I see deprecation warnings with the legacy `generate()`.

A: These are informational only. The legacy API remains supported. To remove the warnings, migrate to the new `generate()` API.
### Migration Examples

```typescript
// ✅ NEW: Recommended usage
const result = await neurolink.generate({
  input: { text: "Your prompt" },
  provider: "google-ai",
});

// 🔄 LEGACY: Still fully supported
const legacyResult = await neurolink.generate({
  prompt: "Your prompt",
  provider: "google-ai",
});
```
### CLI Migration

```bash
# ✅ NEW: Options-based API
npx @juspay/neurolink generate --prompt "Your prompt" --provider openai

# 🔄 LEGACY: Positional arguments (still works, shows a deprecation warning)
npx @juspay/neurolink generate "Your prompt" --provider openai
```
## 🔧 MCP Integration Issues

### ✅ Built-in Tools Not Working

Status: ✅ RESOLVED in v1.7.1

Previous Issue: The time tool and other built-in tools were not loading due to circular dependencies.

Solution Applied:

```bash
# Fixed in v1.7.1 – built-in tools now work
node dist/cli/index.js generate "What time is it?" --debug
# Should return: "The current time is [current date and time]"
```

If you still have issues, follow the recovery steps under "Manual Recovery" later in this guide (ensure v1.7.1+, reinstall dependencies, rebuild).
## 🗂️ Configuration Management Issues (NEW v3.0)

### Config Update Failures

Symptoms: Config updates fail with validation errors or backup issues

Solutions:

```bash
# Check config validation
npx @juspay/neurolink config validate

# Check the backup system
ls -la .neurolink.backups/

# Create a manual backup
npx @juspay/neurolink config backup --reason "manual-backup"

# Restore from backup
npx @juspay/neurolink config restore --backup latest
```
### Backup System Issues

Symptoms: Backups not created or corrupted

Solutions:

```bash
# Verify backup directory permissions
ls -la .neurolink.backups/

# Check backup integrity
npx @juspay/neurolink config verify-backups

# Clean up corrupted backups
npx @juspay/neurolink config cleanup --verify

# Reset the backup system
rm -rf .neurolink.backups/
mkdir .neurolink.backups/
```
### Provider Configuration Issues

Symptoms: Providers not loading or failing validation

Solutions:

```bash
# Test an individual provider
npx @juspay/neurolink test-provider google

# Check provider status
npx @juspay/neurolink status

# Reset a provider configuration
npx @juspay/neurolink config reset-provider google

# Validate environment variables
npx @juspay/neurolink env check
```
## 🔧 TypeScript Compilation Issues (NEW v3.0)

### Build Failures

Symptoms: `pnpm run build:cli` fails with TypeScript errors

Common Errors & Solutions:

```typescript
// ERROR: Argument of type 'unknown' is not assignable to parameter of type 'string'
// SOLUTION: Coerce to a string
const value = String(unknownValue || "default");

// ERROR: Property 'success' does not exist on type 'unknown'
// SOLUTION: Cast to the expected type
const result = response as ToolResult;
if (result.success) {
  /* ... */
}

// ERROR: Interface compatibility issues
// SOLUTION: Guard optional methods before calling
if (registry.executeTool) {
  const result = await registry.executeTool("tool", args, context);
}
```
Build Validation:

```bash
# Check TypeScript compilation
npx tsc --noEmit --project tsconfig.cli.json

# Full CLI build
pnpm run build:cli

# List the files included in the compilation
npx tsc --listFiles --project tsconfig.cli.json
```
### Interface Compatibility Issues

Symptoms: Type errors when using the new interfaces

Solutions:

```typescript
// Use optional chaining for new methods
registry.registerServer?.("server", config, context);

// Type casting for unknown returns
const result = (await registry.executeTool("tool", args)) as ToolResult;

// Handle both legacy and new interfaces
if ("registerServer" in registry) {
  await registry.registerServer("server", config, context);
} else {
  registry.register_server("server", config); // legacy snake_case API
}
```
## ⚡ Performance Issues (NEW v3.0)

### Slow Tool Execution

Symptoms: Tool execution taking longer than expected (>1ms target)

Solutions:

```bash
# Enable performance monitoring
NEUROLINK_PERFORMANCE_MONITORING=true

# Check execution statistics
npx @juspay/neurolink stats

# Optimize cache settings
NEUROLINK_CACHE_ENABLED=true
NEUROLINK_CACHE_TTL=300

# Reduce the timeout for faster failures
NEUROLINK_DEFAULT_TIMEOUT=10000
```
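To confirm whether a specific tool call is the bottleneck, a quick timing wrapper helps. This sketch uses Node's standard `console.time`; the `registry.executeTool` call follows the interface examples later in this guide.

```typescript
// Time a single tool call and compare against the ~1ms target above.
console.time("tool-execution");
const result = await registry.executeTool("tool", args, context);
console.timeEnd("tool-execution"); // prints e.g. "tool-execution: 0.8ms"
```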
### Pipeline Performance

Symptoms: Sequential pipeline execution slower than the ~22ms target

Solutions:

```typescript
// Use parallel execution where possible
const results = await Promise.all([
  registry.executeTool("tool1", args1, context),
  registry.executeTool("tool2", args2, context),
]);

// Enable caching for repeated operations
const cachedContext: ExecutionContext = {
  cacheOptions: {
    enabled: true,
    ttl: 300,
    key: "operation-cache",
  },
};

// Use fallback options for reliability
const resilientContext: ExecutionContext = {
  fallbackOptions: {
    enabled: true,
    maxRetries: 2,
    providers: ["openai", "anthropic"],
  },
};
```
## 🔄 Interface Migration Issues (NEW v3.0)

### Property Name Errors

Symptoms: `Property 'session_id' does not exist` type errors

Solutions:

```typescript
// OLD (snake_case) – causes errors
const context = {
  session_id: "session123",
  user_id: "user456",
  ai_provider: "google",
};

// NEW (camelCase) – correct
const context: ExecutionContext = {
  sessionId: "session123",
  userId: "user456",
  aiProvider: "google",
};
```
### Method Call Issues

Symptoms: `Cannot call undefined method` runtime errors

Solutions:

```typescript
// WRONG: A direct call may fail if the method is absent
registry.executeTool("tool", args);

// CORRECT: Use optional chaining
registry.executeTool?.("tool", args, context);

// ALTERNATIVE: Check that the method exists
if (registry.executeTool) {
  const result = await registry.executeTool("tool", args, context);
}
```
### Generic Type Issues

Symptoms: `Type 'unknown' is not assignable` errors

Solutions:

```typescript
// WRONG: Unknown return type
const result = await registry.executeTool("tool", args);

// CORRECT: Use generics
const typedResult = await registry.executeTool<MyResultType>("tool", args, context);

// ALTERNATIVE: Type assertion
const assertedResult = (await registry.executeTool("tool", args)) as MyResultType;
```
## 🛡️ Error Recovery (NEW v3.0)

### Automatic Recovery

Config Auto-Restore:

```bash
# Check whether auto-restore triggered
grep "Config restored" ~/.neurolink/logs/config.log

# Verify the restored config
npx @juspay/neurolink config validate

# Manual recovery if needed
npx @juspay/neurolink config restore --backup latest
```

Provider Fallback:

```typescript
// Configure automatic fallback
const context: ExecutionContext = {
  fallbackOptions: {
    enabled: true,
    providers: ["google-ai", "openai", "anthropic"],
    maxRetries: 3,
    retryDelay: 1000,
  },
};
```
### Manual Recovery

Reset to Defaults:

```bash
# Reset all configuration
npx @juspay/neurolink config reset --confirm

# Reset a specific provider
npx @juspay/neurolink config reset-provider google

# Restore from a specific backup
npx @juspay/neurolink config restore --backup neurolink-config-2025-01-07T10-30-00.js
```

If you still have issues:

- Ensure you're using v1.7.1 or later: `npm list @juspay/neurolink`
- Clear node modules and reinstall: `rm -rf node_modules && npm install`
- Rebuild the project: `npm run build`
## 🔍 External MCP Server Discovery Issues

Symptom: No external MCP servers found during discovery

Diagnosis:

```bash
# Check whether discovery is working
npx @juspay/neurolink mcp discover --format table
# Should show 58+ discovered servers

# Check discovery with debug info
npx @juspay/neurolink mcp discover --format json | jq '.servers | length'
# Should return a number > 50
```

Solutions:

- No Servers Found: check whether you have AI tools installed (VS Code, Claude, Cursor, etc.):

  ```bash
  ls -la ~/Library/Application\ Support/Claude/
  ls -la ~/.config/Code/User/
  ls -la ~/.cursor/
  ```

- Partial Discovery: check for configuration file issues:

  ```bash
  npx @juspay/neurolink mcp discover --format json > discovery.json
  # Review discovery.json for parsing errors
  ```

- Discovery Errors: re-run discovery with debug output, as shown below.
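For discovery errors, re-running with debug output usually surfaces the failing configuration file (both the environment variable and the command come from this guide):

```bash
# Surface parsing failures with debug output
export NEUROLINK_DEBUG=true
npx @juspay/neurolink mcp discover --format table
```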
## 🚧 External MCP Server Activation Issues

Status: 🚧 In Development – external servers are discovered but not yet activated

Current Behavior: Servers show as discovered but cannot be executed directly

Expected in the Next Version (v1.8.0):

```bash
# Coming soon: direct tool execution
npx @juspay/neurolink mcp exec filesystem read_file --params '{"path": "index.md"}'
```

Current Workaround: Use built-in tools while external activation is developed.
## 🔌 LiteLLM Provider Issues

### LiteLLM Proxy Server Not Available

Symptom: `LiteLLM proxy server not available. Please start the LiteLLM proxy server at http://localhost:4000`

Diagnosis:

```bash
# Check whether the LiteLLM proxy is running
curl http://localhost:4000/health

# Check whether the process is running
ps aux | grep litellm
```

Solutions:

- Start the LiteLLM Proxy Server:

  ```bash
  # Install LiteLLM
  pip install litellm

  # Start the proxy server
  litellm --port 4000
  # The server should start and show available models
  ```

- Verify Environment Variables:

  ```bash
  # Check configuration
  echo $LITELLM_BASE_URL  # Should be http://localhost:4000
  echo $LITELLM_API_KEY   # Should be sk-anything or the configured value
  echo $LITELLM_MODEL     # Optional default model
  ```

- Test Proxy Connectivity:

  ```bash
  # Test the health endpoint
  curl http://localhost:4000/health

  # Check available models
  curl http://localhost:4000/models

  # Test a basic completion
  curl -X POST http://localhost:4000/v1/completions \
    -H "Content-Type: application/json" \
    -d '{"model": "openai/gpt-4o-mini", "prompt": "Hello", "max_tokens": 5}'
  ```
### LiteLLM Model Format Issues

Symptom: `Model not found` or `Invalid model format` errors

Diagnosis: compare the model identifier against the proxy's `/models` endpoint (see the connectivity checks above).

Solutions:

- Use the Correct Model Format:

  ```bash
  # Correct format: provider/model-name
  npx @juspay/neurolink generate "Hello" --provider litellm --model "openai/gpt-4o-mini"
  npx @juspay/neurolink generate "Hello" --provider litellm --model "anthropic/claude-3-5-sonnet"
  npx @juspay/neurolink generate "Hello" --provider litellm --model "google/gemini-2.0-flash"
  ```

- Popular Model Formats:

  ```typescript
  // OpenAI models
  "openai/gpt-4o";
  "openai/gpt-4o-mini";
  "openai/gpt-4";

  // Anthropic models
  "anthropic/claude-3-5-sonnet";
  "anthropic/claude-3-haiku";

  // Google models
  "google/gemini-2.0-flash";
  "vertex_ai/gemini-pro";

  // Mistral models
  "mistral/mistral-large";
  "mistral/mixtral-8x7b";
  ```

- Check the LiteLLM Configuration:

  ```yaml
  # litellm_config.yaml
  model_list:
    - model_name: openai/gpt-4o
      litellm_params:
        model: gpt-4o
        api_key: os.environ/OPENAI_API_KEY
    - model_name: anthropic/claude-3-5-sonnet
      litellm_params:
        model: claude-3-5-sonnet-20241022
        api_key: os.environ/ANTHROPIC_API_KEY
  ```
### LiteLLM API Key Configuration Issues

Symptom: Authentication errors when using specific models through LiteLLM

Diagnosis:

```bash
# Check whether the LiteLLM proxy has access to the underlying provider API keys
curl -X POST http://localhost:4000/v1/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "openai/gpt-4o-mini", "prompt": "test", "max_tokens": 5}'
```

Solutions:

- Configure Provider API Keys for LiteLLM:

  ```bash
  # Set the underlying provider API keys that LiteLLM will use
  export OPENAI_API_KEY="sk-your-openai-key"
  export ANTHROPIC_API_KEY="sk-ant-your-anthropic-key"
  export GOOGLE_AI_API_KEY="AIza-your-google-key"

  # Then start the LiteLLM proxy
  litellm --port 4000
  ```

- Use a LiteLLM Configuration File: see the `litellm_config.yaml` example above, which maps each model to its provider API key via `os.environ/...`.

- Set the NeuroLink LiteLLM Variables:

  ```bash
  # NeuroLink LiteLLM configuration
  export LITELLM_BASE_URL="http://localhost:4000"
  export LITELLM_API_KEY="sk-anything"  # Can be any value for a local proxy
  ```
### LiteLLM Connection Timeout Issues

Symptom: Requests to the LiteLLM proxy timing out

Diagnosis:

```bash
# Test the proxy response time
time curl http://localhost:4000/health
# Also check the proxy logs for performance issues
```

Solutions:

- Increase Timeout Values:

  ```bash
  # Set a longer timeout for LiteLLM requests
  export LITELLM_TIMEOUT=60000  # 60 seconds

  # Test with a longer timeout
  npx @juspay/neurolink generate "Complex reasoning task" \
    --provider litellm \
    --timeout 60s
  ```

- Optimize the LiteLLM configuration: tune timeouts and retries in your LiteLLM configuration file (see the LiteLLM documentation).

- Check system resources on the proxy host, as shown below.
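For the resource check, standard host tools are enough (nothing NeuroLink-specific; adjust for your OS):

```bash
# CPU/memory snapshot (use `top -l 1` on macOS, `top -bn1` on Linux)
top -bn1 | head -n 10
free -h                 # Linux memory usage
ps aux | grep litellm   # confirm the proxy process and its resource share
```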
### LiteLLM Provider Selection Issues

Symptom: LiteLLM not included in auto-provider selection

Diagnosis:

```bash
# Check whether LiteLLM is available
npx @juspay/neurolink status --verbose | grep litellm

# Test LiteLLM-specific generation
npx @juspay/neurolink generate "Hello" --provider litellm --debug
```

Solutions:

- Ensure the LiteLLM Service is Running:

  ```bash
  # Check proxy health before using auto-selection
  curl http://localhost:4000/health

  # If healthy, LiteLLM should be included in auto-selection
  npx @juspay/neurolink generate "Hello" --debug
  ```

- Force the LiteLLM Provider:

  ```bash
  # Explicitly use LiteLLM when auto-selection fails
  npx @juspay/neurolink generate "Hello" --provider litellm
  ```

- Check Provider Priority:

  ```typescript
  // In your code, you can set provider preferences
  const provider = await AIProviderFactory.createProvider("litellm");

  // Or use with fallback
  const { primary, fallback } = AIProviderFactory.createProviderWithFallback(
    "litellm",
    "openai",
  );
  ```
### LiteLLM Debugging

Enable Debug Mode:

```bash
# Enable NeuroLink debug output
export NEUROLINK_DEBUG=true

# Test LiteLLM with debug info
npx @juspay/neurolink generate "Hello" --provider litellm --debug

# Enable the LiteLLM proxy's debug mode
litellm --port 4000 --debug
```

Check LiteLLM Logs:

```bash
# The LiteLLM proxy prints request/response logs to the terminal
# where you started `litellm --port 4000`

# Check curl responses for detailed error info
curl -v http://localhost:4000/health
```

Common LiteLLM Error Messages:

- `ECONNREFUSED`: LiteLLM proxy not running
- `Model not found`: invalid model format, or the model is not configured
- `Authentication failed`: underlying provider API keys not set
- `Timeout`: proxy taking too long to respond
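When embedding NeuroLink, a small preflight check can turn these runtime errors into a clear startup failure. This sketch combines the `/health` endpoint and the factory API shown above; it assumes Node 18+ for the global `fetch`.

```typescript
// Verify the proxy is reachable before creating the LiteLLM provider.
const health = await fetch("http://localhost:4000/health");
if (!health.ok) {
  throw new Error("LiteLLM proxy not reachable; start it with `litellm --port 4000`");
}
const provider = await AIProviderFactory.createProvider("litellm");
```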
## 🤖 AI Provider Issues

### Provider Authentication Errors

Symptom: "Authentication failed" or "Invalid API key" errors

Diagnosis: run `npx @juspay/neurolink status --verbose` to see which providers are failing.

Solutions:

- OpenAI Issues:

  ```bash
  # Set the API key
  export OPENAI_API_KEY="sk-your-openai-api-key"

  # Test the connection
  npx @juspay/neurolink generate "Hello" --provider openai
  ```

- Google AI Studio Issues:

  ```bash
  # Set the API key (recommended for the free tier)
  export GOOGLE_AI_API_KEY="AIza-your-google-ai-api-key"

  # Test the connection
  npx @juspay/neurolink generate "Hello" --provider google-ai
  ```

- Multiple Provider Setup:

  ```bash
  # Create a .env file
  cat > .env << EOF
  OPENAI_API_KEY=sk-your-openai-key
  GOOGLE_AI_API_KEY=AIza-your-google-key
  ANTHROPIC_API_KEY=sk-ant-your-anthropic-key
  EOF

  # Test auto-selection
  npx @juspay/neurolink generate "Hello"
  ```
### Provider Selection Issues

Symptom: Wrong provider selected or fallback not working

Diagnosis:

```bash
# Check available providers
npx @juspay/neurolink status

# Test a specific provider
npx @juspay/neurolink generate "Hello" --provider google-ai --debug
```

Solutions:

- Force a Specific Provider: pass `--provider` explicitly, as shown below.
- Check the Fallback Logic: confirm that a second provider is configured and answers when the first is unavailable, as shown below.
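For example (the flags are the same ones used throughout this guide; unsetting a key is just one way to exercise the fallback path, so restore it afterwards):

```bash
# Force a specific provider instead of auto-selection
npx @juspay/neurolink generate "Hello" --provider google-ai

# Exercise the fallback: temporarily remove the preferred provider's key,
# then confirm another configured provider answers
unset GOOGLE_AI_API_KEY
npx @juspay/neurolink generate "Hello" --debug
```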
## 🖥️ CLI Issues

### Command Not Found

Symptom: `neurolink: command not found`

Solutions (examples below):

- Use NPX (recommended): no installation required
- Global installation: puts `neurolink` on your PATH
- Local project usage: install as a project dependency
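The three options look like this (standard npm workflows; the global and local installs assume a default npm setup):

```bash
# Using NPX (recommended): no installation required
npx @juspay/neurolink generate "Hello"

# Global installation: puts `neurolink` on your PATH
npm install -g @juspay/neurolink
neurolink generate "Hello"

# Local project usage: install as a dependency, run via npx
npm install @juspay/neurolink
npx neurolink generate "Hello"
```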
### Build Issues

Symptom: CLI commands failing or TypeScript errors

Diagnosis: run the checks below; fixes follow under "Build Issue Solutions" later in this section.
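A reasonable first pass (these mirror the TypeScript compilation checks earlier in this guide; `--help` is assumed here as a basic smoke test):

```bash
# Check TypeScript compilation without emitting output
npx tsc --noEmit

# Rebuild and confirm the CLI responds
npm run build
node dist/cli/index.js --help
```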
### Model Parameter Not Working

Symptom: The CLI `--model` parameter is ignored; the default model is always used

Example Issue:

```bash
# The command specifies a model, but the output shows the default model being used
node dist/cli/index.js generate "test" --provider google-ai --model gemini-2.5-flash
# Output shows: modelName: 'gemini-2.5-pro' (default instead of the specified model)
```

Status: ✅ FIXED in the latest version

Solution: Update to the latest version, where the model parameter fix has been applied.

Verification:

```bash
# Test that the model parameter works correctly
node dist/cli/index.js generate "what is deepest you can think?" --provider google-ai --model gemini-2.5-flash --debug
# Should show: modelName: 'gemini-2.5-flash' in the debug output
```

Available Models for Google AI:

- `gemini-2.5-flash` – fast, efficient responses
- `gemini-2.5-pro` – comprehensive, detailed responses
Build Issue Solutions (both shown below):

- Clean Build: remove the previous build output and rebuild
- Dependency Issues: reinstall node modules from scratch
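Sketches for both (the reinstall commands mirror the recovery steps earlier in this guide):

```bash
# Clean build: drop previous output and rebuild
rm -rf dist
npm run build

# Dependency issues: reinstall from scratch
rm -rf node_modules
npm install
npm run build
```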
## 🧪 Testing and Validation

### Comprehensive System Test

Run this test suite to validate that everything is working:

```bash
# 1. Build the system
npm run build

# 2. Test built-in tools
echo "Testing built-in tools..."
node dist/cli/index.js generate "What time is it?" --debug

# 3. Test tool discovery
echo "Testing tool discovery..."
node dist/cli/index.js generate "What tools do you have access to?" --debug

# 4. Test external server discovery
echo "Testing external server discovery..."
npx @juspay/neurolink mcp discover --format table

# 5. Test AI provider
echo "Testing AI provider..."
npx @juspay/neurolink status --verbose

# 6. Run comprehensive tests
echo "Running comprehensive tests..."
npm run test:run -- test/mcp-comprehensive.test.ts
```

Expected Results:

- ✅ Build: successful compilation
- ✅ Built-in tools: time tool returns the current time
- ✅ Tool discovery: lists 5+ built-in tools
- ✅ External discovery: shows 58+ discovered servers
- ✅ AI provider: at least one provider available
- ✅ Tests: all MCP foundation tests pass
### Debug Mode

Enable detailed logging for troubleshooting:

```bash
# Enable debug mode
export NEUROLINK_DEBUG=true

# Run commands with debug output
npx @juspay/neurolink generate "Hello" --debug
npx @juspay/neurolink mcp discover --format table
npx @juspay/neurolink status --verbose
```
## 📋 System Requirements

### Minimum Requirements

- Node.js: v18+ (recommended: v20+)
- NPM: v8+
- TypeScript: v5+ (for development)
- Operating System: macOS, Linux, or Windows

### Recommended Setup

```bash
# Check versions
node --version  # Should be v18+
npm --version   # Should be v8+

# For development
npx tsc --version  # Should be v5+
```
## 📞 Getting Help

### Quick Diagnostics

```bash
# System status
npx @juspay/neurolink status --verbose

# MCP status
npx @juspay/neurolink mcp discover --format table

# Debug output
export NEUROLINK_DEBUG=true
npx @juspay/neurolink generate "Test" --debug
```

### Report Issues

When reporting issues, please include:

- System Information: OS, Node.js version (`node --version`), and NeuroLink version (`npm list @juspay/neurolink`)
- Debug Output: the output of the failing command run with `--debug` and `NEUROLINK_DEBUG=true`
- Error Logs: full error messages and stack traces
- Steps to Reproduce: the exact commands that cause the issue

### Community Support

- GitHub Issues: https://github.com/juspay/neurolink/issues
- Documentation: https://github.com/juspay/neurolink/docs
## 🏢 Enterprise Proxy Issues

### Proxy Not Working

Symptoms: Connection errors when `HTTPS_PROXY` is set

Diagnosis:

```bash
# Check the proxy environment variables
echo $HTTPS_PROXY
echo $HTTP_PROXY

# Test proxy connectivity
curl -I --proxy $HTTPS_PROXY https://api.openai.com
```

Solutions:

- Verify the proxy format:

  ```bash
  # Correct format
  export HTTPS_PROXY="http://proxy.company.com:8080"
  # Not https:// – use http:// even for HTTPS_PROXY
  ```

- Check authentication:

  ```bash
  # URL-encode special characters
  export HTTPS_PROXY="http://user%40domain.com:pass%3Aword@proxy:8080"
  ```

- Test a bypass, as shown below.
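A bypass test isolates whether the proxy itself is the problem (restore your variables afterwards; `NO_PROXY` is the standard exemption convention):

```bash
# Temporarily bypass the proxy entirely
unset HTTPS_PROXY HTTP_PROXY
npx @juspay/neurolink generate "Hello" --provider openai

# Or exempt specific hosts only
export NO_PROXY="localhost,127.0.0.1,internal.company.com"
```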
### Corporate Firewall Blocking

Symptoms: Network timeouts or SSL certificate errors

Solutions:

1. Contact your IT team to allowlist:

   - `generativelanguage.googleapis.com` (Google AI)
   - `api.anthropic.com` (Anthropic)
   - `api.openai.com` (OpenAI)
   - `bedrock.amazonaws.com` (Bedrock)
   - `aiplatform.googleapis.com` (Vertex AI)

2. Check SSL verification, as shown below.
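To test SSL verification, Node's standard environment variables can isolate a corporate MITM certificate. The first command disables TLS verification and is for diagnosis only, never production:

```bash
# Diagnostic only: disable Node.js TLS verification to test for a
# corporate MITM certificate
NODE_TLS_REJECT_UNAUTHORIZED=0 npx @juspay/neurolink generate "test" --provider openai

# Preferred fix: point Node.js at your corporate CA bundle
export NODE_EXTRA_CA_CERTS="/path/to/corporate-ca.pem"
```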
### Debug Proxy Connection

```bash
# Enable detailed proxy logging
export DEBUG=neurolink:proxy
npx @juspay/neurolink generate "test proxy" --debug
```

For detailed proxy setup → See the Enterprise & Proxy Setup Guide
## 🚀 SageMaker Provider Issues

### Common SageMaker Errors

#### "Endpoint not found" Error

Symptom:

```
Error: The endpoint 'my-endpoint' was not found.
```

Solutions:

1. Check that the endpoint exists in the SageMaker console
2. Verify the endpoint is in 'InService' status
3. Check that the AWS region matches the endpoint region

#### "Access denied" Error

Symptom:

```
AccessDeniedException: User: arn:aws:iam::123456789012:user/myuser is not authorized
```

Solutions:

1. Add SageMaker invoke permissions:

   ```json
   {
     "Version": "2012-10-17",
     "Statement": [
       {
         "Effect": "Allow",
         "Action": ["sagemaker:InvokeEndpoint"],
         "Resource": "arn:aws:sagemaker:*:*:endpoint/*"
       }
     ]
   }
   ```

2. Check that your AWS credentials are valid:

   ```bash
   aws sts get-caller-identity
   ```

#### "Model not loading" Error

Symptom:

```
ModelError: The model is not ready to serve requests
```

Solutions:

1. Check the endpoint status:

   ```bash
   npx @juspay/neurolink sagemaker status
   ```

2. Monitor CloudWatch logs:

   ```bash
   aws logs describe-log-groups --log-group-name-prefix /aws/sagemaker/Endpoints
   ```

3. Wait for the endpoint to reach 'InService' status
### SageMaker Configuration Issues

#### Invalid AWS Credentials

```bash
# Check the configuration
npx @juspay/neurolink sagemaker config

# Set the required variables
export AWS_ACCESS_KEY_ID="your-access-key"
export AWS_SECRET_ACCESS_KEY="your-secret-key"
export AWS_REGION="us-east-1"
export SAGEMAKER_DEFAULT_ENDPOINT="your-endpoint-name"
```

#### Timeout Issues

```bash
# Increase the timeout for large models
export SAGEMAKER_TIMEOUT="60000"  # 60 seconds

# Use in the CLI
npx @juspay/neurolink generate "complex task" --provider sagemaker --timeout 60s
```

### SageMaker Debug Mode

```bash
# Enable debug output
export NEUROLINK_DEBUG=true
npx @juspay/neurolink generate "test" --provider sagemaker --debug

# SageMaker-specific debugging
export SAGEMAKER_DEBUG=true
npx @juspay/neurolink sagemaker status --verbose
```
### SageMaker CLI Commands

```bash
# Check endpoint health
npx @juspay/neurolink sagemaker status

# Validate configuration
npx @juspay/neurolink sagemaker validate

# Test a specific endpoint
npx @juspay/neurolink sagemaker test my-endpoint

# Performance benchmark
npx @juspay/neurolink sagemaker benchmark my-endpoint

# List available endpoints (requires the AWS CLI)
npx @juspay/neurolink sagemaker list-endpoints
```
## 📚 Additional Resources

- MCP Integration Guide – complete MCP setup and usage
- CLI Guide – comprehensive CLI documentation
- API Reference – complete API documentation
- Configuration Guide – environment and setup guide

💡 Most issues are resolved by ensuring you're using v1.7.1+ and running `npm run build` after installation.