AI Provider Guides¶
Complete setup guides for all supported AI providers.
🆓 Free Tier Providers¶
Start with zero cost using these free-tier options:
Hugging Face¶
100,000+ open-source models
- ✅ Free inference API
- 🌍 Largest model collection
- 🔓 Fully open source
- 📊 Models by task: chat, classification, NER, summarization
Google AI Studio¶
Gemini models with a generous free tier
- ✅ 1,500 requests/day free
- ⚡ Fast Gemini 2.0 Flash
- 🎯 15 requests/minute
- 💰 Pay-as-you-go option
🏢 Enterprise Providers¶
Production-grade providers for enterprise deployments:
Azure OpenAI¶
Enterprise AI with Microsoft Azure
- 🔒 SOC2, HIPAA, ISO 27001 compliant
- 🌍 Multi-region deployment (30+ regions)
- 🛡️ Private endpoints with VNet
- 💼 Enterprise SLAs
Google Vertex AI¶
Google Cloud ML platform
- ☁️ GCP integration
- 🔐 IAM, VPC, service accounts
- 🌏 Global deployment
- 🎯 Gemini, PaLM, Codey models
AWS Bedrock¶
Serverless AI on AWS
- 📦 13 foundation models (Claude, Llama, Mistral)
- 🔐 IAM, VPC integration
- 🌍 Multi-region (us-east-1, eu-west-1, ap-southeast-1)
- 💰 Pay-per-use pricing
🌍 Compliance-Focused¶
Providers with specific compliance certifications:
Mistral AI¶
European AI with GDPR compliance
- 🇪🇺 EU data residency
- ✅ GDPR compliant by default
- 🔓 Open source models
- 💰 Cost-effective
🔌 Aggregators & Proxies¶
Access multiple providers through unified interfaces:
OpenAI Compatible¶
OpenRouter, vLLM, LocalAI, and more
- 🌐 100+ models through OpenRouter
- 💻 Local deployment with vLLM
- 🔓 Self-hosted with LocalAI
- 🔄 Drop-in OpenAI replacement
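Because these services expose the same `/chat/completions` wire format as OpenAI, switching between them usually comes down to swapping the base URL. A minimal sketch (the model name is illustrative; `https://openrouter.ai/api/v1` is OpenRouter's documented base URL):

```typescript
// Call any OpenAI-compatible endpoint (OpenRouter, vLLM, LocalAI)
// by pointing the same request shape at a different base URL.
interface ChatRequest {
  model: string;
  messages: { role: string; content: string }[];
}

function buildChatRequest(model: string, prompt: string): ChatRequest {
  return { model, messages: [{ role: "user", content: prompt }] };
}

async function chat(baseURL: string, apiKey: string, req: ChatRequest) {
  const res = await fetch(`${baseURL}/chat/completions`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify(req),
  });
  return res.json();
}

// Same code, different backends:
// chat("https://openrouter.ai/api/v1", key, buildChatRequest("meta-llama/llama-3-8b-instruct", "Hi"))
// chat("http://localhost:8000/v1", key, buildChatRequest("my-local-model", "Hi"))
```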
LiteLLM¶
100+ providers through proxy
- 🔄 Unified API for 100+ providers
- 📊 Load balancing and fallbacks
- 💰 Cost tracking
- 🎯 Model routing
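To make the routing idea concrete, here is an illustrative sketch of what a proxy like LiteLLM does internally (hypothetical logic, not LiteLLM's actual code): rotate requests round-robin across providers, skipping any marked unhealthy.

```typescript
// Round-robin router with health-aware skipping.
// Returns undefined only when every provider is unhealthy.
function makeRouter(providers: string[], unhealthy: Set<string> = new Set()) {
  let i = 0;
  return function next(): string | undefined {
    for (let tries = 0; tries < providers.length; tries++) {
      const p = providers[i % providers.length];
      i++;
      if (!unhealthy.has(p)) return p;
    }
    return undefined; // all providers down
  };
}

// With "anthropic" marked unhealthy, requests alternate
// between the remaining healthy providers.
const next = makeRouter(["openai", "anthropic", "vertex"], new Set(["anthropic"]));
```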
Quick Comparison¶
| Provider | Free Tier | Enterprise | GDPR | Latency | Best For |
|---|---|---|---|---|---|
| Hugging Face | ✅ | ❌ | ✅ | Medium | Open source, experimentation |
| Google AI | ✅ | ✅ | ✅ | Low | Free tier, Gemini |
| Mistral AI | ❌ | ✅ | ✅ | Low | EU compliance, cost |
| OpenAI Compatible | Varies | ✅ | Varies | Varies | Flexibility, local deployment |
| LiteLLM | ❌ | ✅ | Varies | Low | Multi-provider, unified API |
| Azure OpenAI | ❌ | ✅ | ✅ | Low | Enterprise, Microsoft ecosystem |
| Vertex AI | ❌ | ✅ | ✅ | Low | Enterprise, GCP ecosystem |
| AWS Bedrock | ❌ | ✅ | ✅ | Low | Enterprise, AWS ecosystem |
Setup Strategies¶
Strategy 1: Free Tier First (Recommended for Development)¶
```typescript
const ai = new NeuroLink({
  providers: [
    {
      name: "google-ai",
      priority: 1,
      config: { apiKey: process.env.GOOGLE_AI_KEY },
      quotas: { daily: 1500 },
    },
    {
      name: "openai",
      priority: 2,
      config: { apiKey: process.env.OPENAI_API_KEY },
    },
  ],
  failoverConfig: { enabled: true, fallbackOnQuota: true },
});

const result = await ai.generate({
  input: { text: "Hello world" },
});
```
```bash
# Set up environment variables
export GOOGLE_AI_KEY="your-key"
export OPENAI_API_KEY="your-key"

# Use with automatic failover
npx @juspay/neurolink generate "Hello world" \
  --provider google-ai
```
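The behavior `fallbackOnQuota` implies can be sketched in plain TypeScript (assumed semantics, not the library's implementation): track usage per provider and fall through to the next priority once a daily quota is exhausted.

```typescript
// Quota-aware provider selection: pick the first provider, in priority
// order, that still has daily quota remaining.
interface ProviderState {
  name: string;
  dailyQuota: number; // Infinity for pay-as-you-go providers
  used: number;
}

function pickProvider(providers: ProviderState[]): ProviderState | undefined {
  return providers.find((p) => p.used < p.dailyQuota);
}

const providers: ProviderState[] = [
  { name: "google-ai", dailyQuota: 1500, used: 1500 }, // free tier exhausted
  { name: "openai", dailyQuota: Infinity, used: 42 },
];
// Once google-ai's 1,500 requests/day are used, selection
// falls through to openai.
```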
Strategy 2: Multi-Region Enterprise¶
```typescript
const ai = new NeuroLink({
  providers: [
    {
      name: "azure-us",
      region: "us-east",
      config: {
        /* Azure US */
      },
    },
    {
      name: "azure-eu",
      region: "eu-west",
      config: {
        /* Azure EU */
      },
    },
    {
      name: "bedrock-us",
      region: "us-east",
      config: {
        /* Bedrock US */
      },
    },
  ],
  loadBalancing: "latency-based",
});
```
Strategy 3: GDPR Compliance¶
```typescript
const ai = new NeuroLink({
  providers: [
    {
      name: "mistral",
      priority: 1,
      config: { apiKey: process.env.MISTRAL_API_KEY },
    },
    {
      name: "azure-eu",
      priority: 2,
      config: {
        /* Azure EU region */
      },
    },
  ],
  compliance: {
    framework: "GDPR",
    dataResidency: "EU",
  },
});
```
Next Steps¶
- Choose a provider based on your requirements (free tier, compliance, region)
- Follow the setup guide to get your API key
- Configure NeuroLink with the provider
- Test the integration with a simple request
- Add failover for production reliability
Related Documentation¶
- Multi-Provider Failover - High availability patterns
- Cost Optimization - Reduce costs by 80-95%
- Compliance & Security - GDPR, SOC2, HIPAA
- Load Balancing - Distribution strategies