100% Self-Hosted & Open Source

AI Backends Open Source

A simple AI API server that supports multiple models and providers.
Works with projects built with AI app builders.

Quick Start

terminal
$ git clone https://github.com/donvito/ai-backend.git
$ cd ai-backend
$ bun install
$ bun run dev
Server running at: http://localhost:3000
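
With the server running, you can try an endpoint directly from the terminal. The sketch below calls /api/summarize using the request shape shown later under Unified Configuration; individual endpoints may accept additional fields, so treat it as illustrative.

terminal
$ curl -X POST http://localhost:3000/api/summarize \
    -H "Content-Type: application/json" \
    -d '{"config": {"provider": "openai", "model": "gpt-4"}, "text": "Your input text here..."}'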

Available Endpoints

Powerful AI capabilities accessible through simple REST API endpoints

Text Summarization

/api/summarize

Extract key insights and create concise summaries from long text content

Language Translation

/api/translate

Translate text between multiple languages with high accuracy and context awareness
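
A translation request could look like the sketch below. The targetLanguage field is an illustrative assumption, not a documented parameter; check the endpoint's schema for the exact name.

json request
{
  "config": {
    "provider": "openai",
    "model": "gpt-4"
  },
  "text": "Bonjour, comment allez-vous ?",
  "targetLanguage": "en"
}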

Sentiment Analysis

/api/sentiment

Analyze emotional tone and sentiment in text content with detailed confidence scores
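
Sentiment requests can follow the same config-plus-text shape; the response sketch (label plus confidence score) is an assumption based on the description above, not a documented schema.

json request
{
  "config": { "provider": "anthropic", "model": "claude-3-5-sonnet" },
  "text": "The new release is fantastic, and setup took two minutes."
}

json response (illustrative)
{
  "sentiment": "positive",
  "confidence": 0.94
}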

Keyword Extraction

/api/keywords

Extract important keywords and phrases from text with relevance scoring

Email Reply Generation

/api/email-reply

Generate contextual and professional email responses based on conversation history

Image Analysis

/api/describeImage

Analyze and describe images with detailed visual understanding and context
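
Image analysis takes an image rather than plain text; the image field below (shown as a URL, though base64 data may also be supported) is an assumption for illustration.

json request
{
  "config": { "provider": "openai", "model": "gpt-4" },
  "image": "https://example.com/photo.jpg"
}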

Text Q&A

/api/ask-text

Answer questions based on provided text context using LLM comprehension
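
Text Q&A takes both a question and the context to answer from; the question and context field names below are illustrative assumptions.

json request
{
  "config": { "provider": "ollama", "model": "llama3.2" },
  "question": "What is the refund window?",
  "context": "Orders can be returned within 30 days of delivery for a full refund."
}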

Highlights Extraction

/api/highlights

Identify the most important highlights and key points in your text content

Project Planner

/api/project-planner

Generate structured project plans with tasks, timelines, and dependencies for complex projects

Meeting Notes

/api/meeting-notes

Extract structured meeting notes with attendees, decisions, action items, and summaries

Outline Generation

/api/outline

Create structured outlines from text with customizable depth, style, and optional intro/conclusion sections
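
An outline request could expose its options roughly as below; the option names (depth, style, includeIntro, includeConclusion) are guesses based on the description above, not a documented schema.

json request
{
  "config": { "provider": "openai", "model": "gpt-4" },
  "text": "Your source text here...",
  "depth": 2,
  "style": "numbered",
  "includeIntro": true,
  "includeConclusion": false
}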

PDF Summarizer

/api/pdf-summarizer

Extract and summarize content from PDF documents using AI with support for streaming responses
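
The PDF summarizer takes a document rather than inline text and can stream its output; the pdfUrl and stream fields below are illustrative assumptions.

json request
{
  "config": { "provider": "anthropic", "model": "claude-3-5-sonnet" },
  "pdfUrl": "https://example.com/report.pdf",
  "stream": true
}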

Supported LLM Providers

Choose from multiple AI providers with unified API access. Switch between providers without changing your code.

OpenAI

GPT Models

GPT-4 and GPT-5 with structured outputs & vision.

Function calling
Vision API

Ollama

Self-Hosted

Run open-source models locally with complete privacy.

100% private
JSON mode

Anthropic

Claude Models

Advanced reasoning AI with safety focus and large context windows.

200K+ context
Safety-focused

OpenRouter

Multiple Models

Access 480+ AI models through a single unified API.

480+ models
OpenAI-compatible

LMStudio

Local Desktop

Run LLMs locally with user-friendly desktop interface.

Desktop GUI
Supports HuggingFace models and MLX format

Vercel AI Gateway

Multiple Models

Unified OpenAI-compatible gateway with routing, caching, and observability.

One endpoint, all your models
Eliminate overhead, ship faster
Intelligent failover, increased uptime

Unified Configuration

Switch between any provider with a simple configuration change. No code refactoring required.

Request Configuration

json request
{
  "config": {
    "provider": "openai",
    "model": "gpt-4"
  },
  "text": "Your input text here..."
}

Provider Examples

"provider": "openai"
Models: gpt-4, gpt-3.5-turbo
"provider": "ollama"
Models: llama3.2, mistral, codellama
"provider": "anthropic"
Models: claude-3-5-sonnet, claude-3-haiku
"provider": "openrouter"
Models: any supported model
"provider": "lmstudio"
Models: any locally loaded model
"provider": "aigateway"
Models: any model, routed to its upstream provider via the gateway
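
For example, pointing the summarization request above at a local Ollama model only changes the config block; the rest of the request stays the same.

json request
{
  "config": {
    "provider": "ollama",
    "model": "llama3.2"
  },
  "text": "Your input text here..."
}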