
AI Provider Guides

Complete setup guides for all supported AI providers.


🆓 Free Tier Providers

Start with zero cost using these free-tier options:

Hugging Face

100,000+ open-source models

  • ✅ Free inference API
  • 🌍 Largest model collection
  • 🔓 Fully open source
  • 📊 Models by task: chat, classification, NER, summarization

Setup Guide →
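As a sketch of what wiring Hugging Face into NeuroLink might look like — the provider name `huggingface` and the `HUGGINGFACE_API_KEY` variable are illustrative assumptions, not confirmed identifiers; the setup guide has the exact values:

```typescript
import { NeuroLink } from "@juspay/neurolink";

// Assumed provider name and env var -- verify against the setup guide.
const ai = new NeuroLink({
  providers: [
    {
      name: "huggingface",
      config: { apiKey: process.env.HUGGINGFACE_API_KEY },
    },
  ],
});

const result = await ai.generate({
  input: { text: "Summarize: NeuroLink supports many providers." },
});
```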

Google AI Studio

Gemini models with generous free tier

  • ✅ 1,500 requests/day free
  • ⚡ Fast Gemini 2.0 Flash
  • 🎯 15 requests/minute
  • 💰 Pay-as-you-go option

Setup Guide →
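The per-minute and per-day caps above can also be respected client-side with a small sliding-window counter. The tracker below is an illustrative sketch, not part of NeuroLink (which tracks quotas through its own `quotas` option):

```typescript
// Minimal client-side tracker for Google AI Studio's free-tier limits
// (1,500 requests/day, 15 requests/minute, per the bullets above).
// The class and method names are illustrative, not part of any SDK.
class FreeTierTracker {
  private timestamps: number[] = [];

  constructor(
    private perMinute = 15,
    private perDay = 1500,
  ) {}

  /** Returns true if a request may be sent now, and records it. */
  tryAcquire(now: number = Date.now()): boolean {
    const oneMinuteAgo = now - 60_000;
    const oneDayAgo = now - 86_400_000;
    // Drop timestamps older than a day; they no longer count in either window.
    this.timestamps = this.timestamps.filter((t) => t > oneDayAgo);
    const lastMinute = this.timestamps.filter((t) => t > oneMinuteAgo).length;
    if (lastMinute >= this.perMinute || this.timestamps.length >= this.perDay) {
      return false;
    }
    this.timestamps.push(now);
    return true;
  }
}

const tracker = new FreeTierTracker();
// 15 requests in the same minute succeed; the 16th is rejected.
for (let i = 0; i < 15; i++) tracker.tryAcquire(1_000 + i);
console.log(tracker.tryAcquire(1_020)); // false: minute window full
console.log(tracker.tryAcquire(70_000)); // true: minute window has rolled over
```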


🏢 Enterprise Providers

Production-grade providers for enterprise deployments:

Azure OpenAI

Enterprise AI with Microsoft Azure

  • 🔒 SOC2, HIPAA, ISO 27001 compliant
  • 🌍 Multi-region deployment (30+ regions)
  • 🛡️ Private endpoints with VNet
  • 💼 Enterprise SLAs

Setup Guide →
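A rough configuration sketch — the field names below are assumptions for illustration, since Azure OpenAI addresses models by resource endpoint plus deployment name; confirm the real shape in the setup guide:

```typescript
import { NeuroLink } from "@juspay/neurolink";

// All config field names below are illustrative assumptions.
// Azure OpenAI identifies a model by your resource endpoint and
// the deployment name you created, not by a bare model ID.
const ai = new NeuroLink({
  providers: [
    {
      name: "azure",
      config: {
        apiKey: process.env.AZURE_OPENAI_API_KEY,
        endpoint: "https://your-resource.openai.azure.com",
        deployment: "your-deployment-name",
      },
    },
  ],
});
```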

Google Vertex AI

Google Cloud ML platform

  • ☁️ GCP integration
  • 🔐 IAM, VPC, service accounts
  • 🌏 Global deployment
  • 🎯 Gemini, PaLM, Codey models

Setup Guide →

AWS Bedrock

Serverless AI on AWS

  • 📦 13 foundation models (Claude, Llama, Mistral)
  • 🔐 IAM, VPC integration
  • 🌍 Multi-region (us-east-1, eu-west-1, ap-southeast-1)
  • 💰 Pay-per-use pricing

Setup Guide →


🌍 Compliance-Focused

Providers with specific compliance certifications:

Mistral AI

European AI with GDPR compliance

  • 🇪🇺 EU data residency
  • ✅ GDPR compliant by default
  • 🔓 Open source models
  • 💰 Cost-effective

Setup Guide →


🔌 Aggregators & Proxies

Access multiple providers through unified interfaces:

OpenAI Compatible

OpenRouter, vLLM, LocalAI, and more

  • 🌐 100+ models through OpenRouter
  • 💻 Local deployment with vLLM
  • 🔓 Self-hosted with LocalAI
  • 🔄 Drop-in OpenAI replacement

Setup Guide →
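Because these services speak the OpenAI wire format, configuration typically comes down to pointing a base URL at the endpoint. The provider name and `baseURL` field below are assumptions for illustration, with OpenRouter standing in for any compatible endpoint (vLLM, LocalAI, and so on):

```typescript
import { NeuroLink } from "@juspay/neurolink";

// Assumed field names -- an OpenRouter key and base URL stand in
// for any OpenAI-compatible endpoint.
const ai = new NeuroLink({
  providers: [
    {
      name: "openai-compatible",
      config: {
        apiKey: process.env.OPENROUTER_API_KEY,
        baseURL: "https://openrouter.ai/api/v1",
      },
    },
  ],
});
```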

LiteLLM

100+ providers through proxy

  • 🔄 Unified API for 100+ providers
  • 📊 Load balancing and fallbacks
  • 💰 Cost tracking
  • 🎯 Model routing

Setup Guide →
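Since the LiteLLM proxy also exposes an OpenAI-style endpoint, a sketch of the client side could look like this — the provider name, `baseURL` field, and local proxy URL are assumptions; load balancing, fallbacks, and routing happen on the proxy itself:

```typescript
import { NeuroLink } from "@juspay/neurolink";

// Assumed config shape: the LiteLLM proxy is reached like any
// OpenAI-compatible endpoint, here running locally on port 4000.
const ai = new NeuroLink({
  providers: [
    {
      name: "litellm",
      config: {
        apiKey: process.env.LITELLM_MASTER_KEY,
        baseURL: "http://localhost:4000",
      },
    },
  ],
});
```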


Quick Comparison

| Provider          | Free Tier | Enterprise | GDPR   | Latency | Best For                      |
| ----------------- | --------- | ---------- | ------ | ------- | ----------------------------- |
| Hugging Face      | ✅        | —          | —      | Medium  | Open source, experimentation  |
| Google AI         | ✅        | —          | —      | Low     | Free tier, Gemini             |
| Mistral AI        | —         | —          | ✅     | Low     | EU compliance, cost           |
| OpenAI Compatible | Varies    | Varies     | Varies | Varies  | Flexibility, local deployment |
| LiteLLM           | Varies    | —          | —      | Low     | Multi-provider, unified API   |
| Azure OpenAI      | —         | ✅         | —      | Low     | Enterprise, Microsoft ecosystem |
| Vertex AI         | —         | ✅         | —      | Low     | Enterprise, GCP ecosystem     |
| AWS Bedrock       | —         | ✅         | —      | Low     | Enterprise, AWS ecosystem     |

Setup Strategies

Strategy 1: Free Tier with Failover

```typescript
import { NeuroLink } from "@juspay/neurolink";

const ai = new NeuroLink({
  providers: [
    {
      name: "google-ai",
      priority: 1,
      config: { apiKey: process.env.GOOGLE_AI_KEY },
      quotas: { daily: 1500 },
    },
    {
      name: "openai",
      priority: 2,
      config: { apiKey: process.env.OPENAI_API_KEY },
    },
  ],
  failoverConfig: { enabled: true, fallbackOnQuota: true },
});

const result = await ai.generate({
  input: { text: "Hello world" },
});
```

```bash
# Set up environment variables
export GOOGLE_AI_KEY="your-key"
export OPENAI_API_KEY="your-key"

# Use with automatic failover
npx @juspay/neurolink generate "Hello world" \
  --provider google-ai
```

Strategy 2: Multi-Region Enterprise

```typescript
const ai = new NeuroLink({
  providers: [
    {
      name: "azure-us",
      region: "us-east",
      config: {
        /* Azure US */
      },
    },
    {
      name: "azure-eu",
      region: "eu-west",
      config: {
        /* Azure EU */
      },
    },
    {
      name: "bedrock-us",
      region: "us-east",
      config: {
        /* Bedrock US */
      },
    },
  ],
  loadBalancing: "latency-based",
});
```

Strategy 3: GDPR Compliance

```typescript
const ai = new NeuroLink({
  providers: [
    {
      name: "mistral",
      priority: 1,
      config: { apiKey: process.env.MISTRAL_API_KEY },
    },
    {
      name: "azure-eu",
      priority: 2,
      config: {
        /* Azure EU region */
      },
    },
  ],
  compliance: {
    framework: "GDPR",
    dataResidency: "EU",
  },
});
```

Next Steps

  1. Choose a provider based on your requirements (free tier, compliance, region)
  2. Follow the setup guide to get your API key
  3. Configure NeuroLink with the provider
  4. Test the integration with a simple request
  5. Add failover for production reliability