Chapter 3

Getting started

Welcome to LocalAI! This section covers everything you need to know after installation to start using LocalAI effectively.

Tip

Haven’t installed LocalAI yet?

See the Installation guide to install LocalAI first. Docker is the recommended installation method for most users.

What’s in This Section


Quickstart

LocalAI is a free, open-source alternative to OpenAI (and Anthropic, etc.), functioning as a drop-in replacement REST API for local inference. It allows you to run LLMs, generate images, and produce audio, all locally or on-premises on consumer-grade hardware, with support for multiple model families and architectures.

Tip

Security considerations

If you are exposing LocalAI remotely, protect the API endpoints adequately, for example behind an authenticating reverse proxy, or run LocalAI with the API_KEY environment variable set to gate access with an API key. Note that an API key grants full access to all features (there is no role separation), so treat it like an admin credential.

Quickstart

This guide assumes you have already installed LocalAI. If you haven’t installed it yet, see the Installation guide first.

Starting LocalAI

Once installed, start LocalAI. For Docker installations:

docker run -p 8080:8080 --name local-ai -ti localai/localai:latest

The API will be available at http://localhost:8080.

Downloading models on start

When starting LocalAI (either via Docker or via the CLI), you can pass a list of models as arguments; they are installed automatically before the API starts. For example:

local-ai run llama-3.2-1b-instruct:q4_k_m
local-ai run huggingface://TheBloke/phi-2-GGUF/phi-2.Q8_0.gguf
local-ai run ollama://gemma:2b
local-ai run https://gist.githubusercontent.com/.../phi-2.yaml
local-ai run oci://localai/phi-2:latest
Tip

Automatic Backend Detection: When you install models from the gallery or YAML files, LocalAI automatically detects your system’s GPU capabilities (NVIDIA, AMD, Intel) and downloads the appropriate backend. For advanced configuration options, see GPU Acceleration.

For a full list of options, you can run LocalAI with --help or refer to the Linux Installation guide for installer configuration options.

Using LocalAI and the full stack with LocalAGI

LocalAI is part of the Local family stack, along with LocalAGI and LocalRecall.

LocalAGI is a powerful, self-hostable AI Agent platform designed for maximum privacy and flexibility, which encompasses and uses the full software stack. It provides a complete drop-in replacement for OpenAI’s Responses APIs with advanced agentic capabilities, running entirely locally on consumer-grade hardware (CPU and GPU).

Quick Start

git clone https://github.com/mudler/LocalAGI
cd LocalAGI

# CPU setup
docker compose up

# NVIDIA GPU setup
docker compose -f docker-compose.nvidia.yaml up

# Intel GPU setup
docker compose -f docker-compose.intel.yaml up

# Start with a custom model
MODEL_NAME=gemma-3-12b-it docker compose up

# NVIDIA GPU setup with custom multimodal and image models
MODEL_NAME=gemma-3-12b-it \
MULTIMODAL_MODEL=minicpm-v-4_5 \
IMAGE_MODEL=flux.1-dev-ggml \
docker compose -f docker-compose.nvidia.yaml up

Key Features

  • Privacy-Focused: All processing happens locally, ensuring your data never leaves your machine
  • Flexible Deployment: Supports CPU, NVIDIA GPU, and Intel GPU configurations
  • Multiple Model Support: Compatible with various models from Hugging Face and other sources
  • Web Interface: User-friendly chat interface for interacting with AI agents
  • Advanced Capabilities: Supports multimodal models, image generation, and more
  • Docker Integration: Easy deployment using Docker Compose

Environment Variables

You can customize your LocalAGI setup using the following environment variables:

  • MODEL_NAME: Specify the model to use (e.g., gemma-3-12b-it)
  • MULTIMODAL_MODEL: Set a custom multimodal model
  • IMAGE_MODEL: Configure an image generation model

For more advanced configuration and API documentation, visit the LocalAGI GitHub repository.

What’s Next?

There is much more to explore with LocalAI! You can run any model from Hugging Face, generate video, and even clone voices. For a comprehensive overview, check out the features section.

Explore additional resources and community contributions:

Setting Up Models

This section covers everything you need to know about installing and configuring models in LocalAI. You’ll learn multiple methods to get models running.

Prerequisites

  • LocalAI installed and running (see Quickstart if you haven’t set it up yet)
  • Basic understanding of command line usage

Method 1: Install from the Model Gallery

The Model Gallery is the simplest way to install models. It provides pre-configured models ready to use.

Via WebUI

  1. Open the LocalAI WebUI at http://localhost:8080
  2. Navigate to the “Models” tab
  3. Browse available models
  4. Click “Install” on any model you want
  5. Wait for installation to complete

For more details, refer to the Gallery Documentation.

Via CLI

# List available models
local-ai models list

# Install a specific model
local-ai models install llama-3.2-1b-instruct:q4_k_m

# Start LocalAI with a model from the gallery
local-ai run llama-3.2-1b-instruct:q4_k_m

To run models available in the LocalAI gallery, you can use the model name as the URI. For example, to run LocalAI with the Hermes model, execute:

local-ai run hermes-2-theta-llama-3-8b

To install only the model, use:

local-ai models install hermes-2-theta-llama-3-8b

Note: The galleries available in LocalAI can be customized to point to a different URL or a local directory. For more information on how to setup your own gallery, see the Gallery Documentation.

Browse Online

Visit models.localai.io to browse all available models in your browser.

Method 1.5: Import Models via WebUI

The WebUI provides a powerful model import interface that supports both simple and advanced configuration:

Simple Import Mode

  1. Open the LocalAI WebUI at http://localhost:8080
  2. Click “Import Model”
  3. Enter the model URI (e.g., https://huggingface.co/Qwen/Qwen3-VL-8B-Instruct-GGUF)
  4. Optionally configure preferences:
    • Backend selection
    • Model name
    • Description
    • Quantizations
    • Embeddings support
    • Custom preferences
  5. Click “Import Model” to start the import process

Advanced Import Mode

For full control over model configuration:

  1. In the WebUI, click “Import Model”
  2. Toggle to “Advanced Mode”
  3. Edit the YAML configuration directly in the code editor
  4. Use the “Validate” button to check your configuration
  5. Click “Create” or “Update” to save

The advanced editor includes:

  • Syntax highlighting
  • YAML validation
  • Format and copy tools
  • Full configuration options

This is especially useful for:

  • Custom model configurations
  • Fine-tuning model parameters
  • Setting up complex model setups
  • Editing existing model configurations

Method 2: Installing from Hugging Face

LocalAI can directly install models from Hugging Face:

# Install and run a model from Hugging Face
local-ai run huggingface://TheBloke/phi-2-GGUF

The format is: huggingface://<repository>/<model-file> (the model file is optional)

Examples

local-ai run huggingface://TheBloke/phi-2-GGUF/phi-2.Q8_0.gguf

Method 3: Installing from OCI Registries

Ollama Registry

local-ai run ollama://gemma:2b

Standard OCI Registry

local-ai run oci://localai/phi-2:latest

Run Models via URI

To run models via URI, specify a URI to a model file or a configuration file when starting LocalAI. Valid syntax includes:

  • file://path/to/model (absolute path to a file within your models directory)
  • huggingface://repository_id/model_file (e.g., huggingface://TheBloke/phi-2-GGUF/phi-2.Q8_0.gguf)
  • From OCIs: oci://container_image:tag, ollama://model_id:tag
  • From configuration files: https://gist.githubusercontent.com/.../phi-2.yaml
Note

When using file:// URLs, the path must point to a file within your models directory (specified by MODELS_PATH). Files outside this directory are rejected for security reasons.

Configuration files can be used to customize the model defaults and settings. For advanced configurations, refer to the Customize Models section.

Examples

local-ai run huggingface://TheBloke/phi-2-GGUF/phi-2.Q8_0.gguf
local-ai run ollama://gemma:2b
local-ai run https://gist.githubusercontent.com/.../phi-2.yaml
local-ai run oci://localai/phi-2:latest

Method 4: Manual Installation

For full control, you can manually download and configure models.

Step 1: Download a Model

Download a GGUF model file. Popular sources:

Example:

mkdir -p models

wget https://huggingface.co/TheBloke/phi-2-GGUF/resolve/main/phi-2.Q4_K_M.gguf \
  -O models/phi-2.Q4_K_M.gguf

Step 2: Create a Configuration File (Optional)

Create a YAML file to configure the model:

# models/phi-2.yaml
name: phi-2
parameters:
  model: phi-2.Q4_K_M.gguf
  temperature: 0.7
context_size: 2048
threads: 4
backend: llama-cpp

Customize model defaults and settings with a configuration file. For advanced configurations, refer to the Advanced Documentation.
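Steps 1 and 2 can also be scripted; a minimal sketch that writes the same example configuration shown above (the file and model names match that example, and the model file itself still needs to be downloaded as in Step 1):

```shell
# Sketch: create the models directory and write the example configuration.
mkdir -p models
cat > models/phi-2.yaml <<'EOF'
name: phi-2
parameters:
  model: phi-2.Q4_K_M.gguf
  temperature: 0.7
context_size: 2048
threads: 4
backend: llama-cpp
EOF
echo "wrote models/phi-2.yaml"
```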

Step 3: Run LocalAI

Choose one of the following methods to run LocalAI.

With Docker:

mkdir models

cp your-model.gguf models/

docker run -p 8080:8080 -v $PWD/models:/models -ti --rm quay.io/go-skynet/local-ai:latest --models-path /models --context-size 700 --threads 4

curl http://localhost:8080/v1/completions -H "Content-Type: application/json" -d '{
     "model": "your-model.gguf",
     "prompt": "A long time ago in a galaxy far, far away",
     "temperature": 0.7
   }'
Tip

Other Docker Images:

For other Docker images, please refer to the table in the container images section.

Example:

mkdir models

wget https://huggingface.co/TheBloke/Luna-AI-Llama2-Uncensored-GGUF/resolve/main/luna-ai-llama2-uncensored.Q4_0.gguf -O models/luna-ai-llama2

cp -rf prompt-templates/getting_started.tmpl models/luna-ai-llama2.tmpl

docker run -p 8080:8080 -v $PWD/models:/models -ti --rm quay.io/go-skynet/local-ai:latest --models-path /models --context-size 700 --threads 4

curl http://localhost:8080/v1/models

curl http://localhost:8080/v1/chat/completions -H "Content-Type: application/json" -d '{
     "model": "luna-ai-llama2",
     "messages": [{"role": "user", "content": "How are you?"}],
     "temperature": 0.9
   }'
Note
  • If running on Apple Silicon (ARM), running via Docker is not recommended due to emulation. Follow the build instructions to use Metal acceleration for full GPU support.
  • If you are running on an Intel (x86_64) Mac, you can use Docker; building from source brings no additional gain.
With docker compose:

git clone https://github.com/go-skynet/LocalAI

cd LocalAI

cp your-model.gguf models/

docker compose up -d --pull always

curl http://localhost:8080/v1/models

curl http://localhost:8080/v1/completions -H "Content-Type: application/json" -d '{
     "model": "your-model.gguf",
     "prompt": "A long time ago in a galaxy far, far away",
     "temperature": 0.7
   }'
Tip

Other Docker Images:

For other Docker images, please refer to the table in Getting Started.

Note: If you are on Windows, ensure the project is on the Linux filesystem to avoid slow model loading. For more information, see the Microsoft Docs.

For Kubernetes deployment, see the Kubernetes installation guide.

LocalAI binary releases are available on GitHub.

# With binary
local-ai --models-path ./models
Tip

If installing on macOS, you might encounter a message saying:

“local-ai-git-Darwin-arm64” (or the name you gave the binary) can’t be opened because Apple cannot check it for malicious software.

Hit OK, then go to Settings > Privacy & Security > Security and look for the message:

“local-ai-git-Darwin-arm64” was blocked from use because it is not from an identified developer.

Press “Allow Anyway.”

For instructions on building LocalAI from source, see the Build from Source guide.

GPU Acceleration

For instructions on GPU acceleration, visit the GPU Acceleration page.

For more model configurations, visit the Examples Section.

Understanding Model Files

File Formats

  • GGUF: Modern format, recommended for most use cases
  • GGML: Older format, still supported but deprecated

Quantization Levels

Models come in different quantization levels (quality vs. size trade-off):

| Quantization | Size | Quality | Use Case |
|---|---|---|---|
| Q8_0 | Largest | Highest | Best quality, requires more RAM |
| Q6_K | Large | Very High | High quality |
| Q4_K_M | Medium | High | Balanced (recommended) |
| Q4_K_S | Small | Medium | Lower RAM usage |
| Q2_K | Smallest | Lower | Minimal RAM, lower quality |

Choosing the Right Model

Consider:

  • RAM available: Larger models need more RAM
  • Use case: Different models excel at different tasks
  • Speed: Smaller quantizations are faster
  • Quality: Higher quantizations produce better output
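As a rough heuristic, available RAM can drive the quantization choice automatically; a sketch (the thresholds are illustrative rules of thumb, not official guidance, and /proc/meminfo is Linux-only):

```shell
# Rough sketch: pick a quantization level from available memory.
# Thresholds are illustrative rules of thumb, not official guidance.
AVAIL_GB=$(awk '/MemAvailable/ {printf "%d", $2/1024/1024}' /proc/meminfo 2>/dev/null)
AVAIL_GB=${AVAIL_GB:-0}
if   [ "$AVAIL_GB" -ge 12 ]; then CHOICE="Q8_0 or Q6_K"
elif [ "$AVAIL_GB" -ge 6 ];  then CHOICE="Q4_K_M"
else                              CHOICE="Q4_K_S or Q2_K"
fi
echo "suggested quantization: $CHOICE"
```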

Model Configuration

Basic Configuration

Create a YAML file in your models directory:

name: my-model
parameters:
  model: model.gguf
  temperature: 0.7
  top_p: 0.9
context_size: 2048
threads: 4
backend: llama-cpp

Advanced Configuration

See the Model Configuration guide for all available options.

Managing Models

List Installed Models

# Via API
curl http://localhost:8080/v1/models

# Via CLI
local-ai models list

Remove Models

Simply delete the model file and configuration from your models directory:

rm models/model-name.gguf
rm models/model-name.yaml  # if exists

Troubleshooting

Model Not Loading

  1. Check backend: Ensure the required backend is installed

    local-ai backends list
    local-ai backends install llama-cpp  # if needed
  2. Check logs: Enable debug mode

    DEBUG=true local-ai
  3. Verify file: Ensure the model file is not corrupted

Out of Memory

  • Use a smaller quantization (Q4_K_S or Q2_K)
  • Reduce context_size in configuration
  • Close other applications to free RAM

Wrong Backend

Check the Compatibility Table to ensure you’re using the correct backend for your model.

Best Practices

  1. Start small: Begin with smaller models to test your setup
  2. Use quantized models: Q4_K_M is a good balance for most use cases
  3. Organize models: Keep your models directory organized
  4. Backup configurations: Save your YAML configurations
  5. Monitor resources: Watch RAM and disk usage

Try it out

Once LocalAI is installed, you can start it using Docker, the CLI, or the systemd service.

By default, the LocalAI WebUI is accessible at http://localhost:8080. You can also use third-party projects to interact with LocalAI as you would use OpenAI (see also Integrations).

After installation, install new models by navigating the model gallery, or by using the local-ai CLI.

Tip

To install models with the WebUI, see the Models section. With the CLI you can list the models with local-ai models list and install them with local-ai models install <model-name>.

You can also run models manually by copying files into the models directory.

You can test the API endpoints using curl; a few examples are listed below. The models referred to here (gpt-4, gpt-4-vision-preview, tts-1, whisper-1) are the default models that come with the AIO images; you can also use any other model you have installed.

Text Generation

Creates a model response for the given chat conversation. OpenAI documentation.

curl http://localhost:8080/v1/chat/completions \
      -H "Content-Type: application/json" \
      -d '{ "model": "gpt-4", "messages": [{"role": "user", "content": "How are you doing?"}], "temperature": 0.1 }'

GPT Vision

Understand images.

curl http://localhost:8080/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{
        "model": "gpt-4-vision-preview",
        "messages": [
          {
            "role": "user",
            "content": [
              {"type": "text", "text": "What is in the image?"},
              {
                "type": "image_url",
                "image_url": {
                  "url": "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg"
                }
              }
            ]
          }
        ],
        "temperature": 0.9
      }'

Function calling

Call functions

curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4",
    "messages": [
      {
        "role": "user",
        "content": "What is the weather like in Boston?"
      }
    ],
    "tools": [
      {
        "type": "function",
        "function": {
          "name": "get_current_weather",
          "description": "Get the current weather in a given location",
          "parameters": {
            "type": "object",
            "properties": {
              "location": {
                "type": "string",
                "description": "The city and state, e.g. San Francisco, CA"
              },
              "unit": {
                "type": "string",
                "enum": ["celsius", "fahrenheit"]
              }
            },
            "required": ["location"]
          }
        }
      }
    ],
    "tool_choice": "auto"
  }'
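When the model decides to call the function, the reply carries a tool_calls entry in the OpenAI response shape; a jq sketch for extracting the call (the sample reply below is illustrative, not captured from a live server; requires jq):

```shell
# Hypothetical assistant reply containing a tool call, in the OpenAI response shape
REPLY='{"choices":[{"message":{"tool_calls":[{"type":"function","function":{"name":"get_current_weather","arguments":"{\"location\":\"Boston, MA\"}"}}]}}]}'
TOOL_NAME=$(echo "$REPLY" | jq -r '.choices[0].message.tool_calls[0].function.name')
TOOL_ARGS=$(echo "$REPLY" | jq -r '.choices[0].message.tool_calls[0].function.arguments')
echo "call $TOOL_NAME with $TOOL_ARGS"
```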

Anthropic Messages API

LocalAI supports the Anthropic Messages API for Claude-compatible models. Anthropic documentation.

curl http://localhost:8080/v1/messages \
  -H "Content-Type: application/json" \
  -H "anthropic-version: 2023-06-01" \
  -d '{
    "model": "gpt-4",
    "max_tokens": 1024,
    "messages": [
      {"role": "user", "content": "How are you doing?"}
    ],
    "temperature": 0.7
  }'

Open Responses API

LocalAI supports the Open Responses API specification with support for background processing, streaming, and advanced features. Open Responses documentation.

curl http://localhost:8080/v1/responses \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4",
    "input": "Say this is a test!",
    "max_output_tokens": 1024,
    "temperature": 0.7
  }'

For background processing:

curl http://localhost:8080/v1/responses \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4",
    "input": "Generate a long story",
    "max_output_tokens": 4096,
    "background": true
  }'

Then retrieve the response:

curl http://localhost:8080/v1/responses/<response_id>
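The submit call returns JSON that includes the response id used for polling; a jq sketch for wiring the two calls together (the sample submit reply below is illustrative, not from a live server; requires jq):

```shell
# Hypothetical reply from the background submit call
SUBMIT_REPLY='{"id":"resp_abc123","object":"response","status":"in_progress"}'
RESPONSE_ID=$(echo "$SUBMIT_REPLY" | jq -r '.id')
# Poll this URL until the response is complete
echo "http://localhost:8080/v1/responses/$RESPONSE_ID"
```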

Image Generation

Creates an image given a prompt. OpenAI documentation.

curl http://localhost:8080/v1/images/generations \
      -H "Content-Type: application/json" -d '{
          "prompt": "A cute baby sea otter",
          "size": "256x256"
        }'

Text to speech

Generates audio from the input text. OpenAI documentation.

curl http://localhost:8080/v1/audio/speech \
  -H "Content-Type: application/json" \
  -d '{
    "model": "tts-1",
    "input": "The quick brown fox jumped over the lazy dog.",
    "voice": "alloy"
  }' \
  --output speech.mp3

Audio Transcription

Transcribes audio into the input language. OpenAI Documentation.

Download first a sample to transcribe:

wget --quiet --show-progress -O gb1.ogg https://upload.wikimedia.org/wikipedia/commons/1/1f/George_W_Bush_Columbia_FINAL.ogg 

Send the example audio file to the transcriptions endpoint:

curl http://localhost:8080/v1/audio/transcriptions \
    -H "Content-Type: multipart/form-data" \
    -F file="@$PWD/gb1.ogg" -F model="whisper-1"

Embeddings Generation

Get a vector representation of a given input that can be easily consumed by machine learning models and algorithms. OpenAI Embeddings.

curl http://localhost:8080/embeddings \
    -X POST -H "Content-Type: application/json" \
    -d '{ 
        "input": "Your text string goes here", 
        "model": "text-embedding-ada-002"
      }'
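The reply follows the OpenAI embeddings response shape; a jq sketch for pulling out the vector (the sample reply below is illustrative, not from a live server; requires jq):

```shell
# Hypothetical embeddings reply in the OpenAI response shape
SAMPLE='{"object":"list","model":"text-embedding-ada-002","data":[{"object":"embedding","index":0,"embedding":[0.01,-0.02,0.03]}]}'
DIM=$(echo "$SAMPLE" | jq '.data[0].embedding | length')
echo "vector dimension: $DIM"
```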
Tip

Don’t use the model file name as model in the request unless you want to handle the prompt template yourself.

Use model names as you would with OpenAI, as in the examples above, for instance gpt-4-vision-preview or gpt-4.

Customizing the Model

To customize the prompt template or the default settings of a model, use a configuration file. This file must adhere to the LocalAI YAML configuration standards; for comprehensive syntax details, refer to the advanced documentation. The configuration file can be hosted remotely (for example, in a GitHub Gist) or stored on the local filesystem.

LocalAI can be started using either its container image or binary, with a command that includes URLs of model config files or uses a shorthand format (like huggingface:// or github://), which is then expanded into complete URLs.

Model configuration files can be passed on the command line or set via the MODELS environment variable. For instance:

local-ai github://owner/repo/file.yaml@branch

MODELS="github://owner/repo/file.yaml@branch,github://owner/repo/file.yaml@branch" local-ai

Here’s an example to initiate the phi-2 model:

docker run -p 8080:8080 localai/localai:v3.12.1 https://gist.githubusercontent.com/mudler/ad601a0488b497b69ec549150d9edd18/raw/a8a8869ef1bb7e3830bf5c0bae29a0cce991ff8d/phi-2.yaml

You can also check all the embedded models configurations here.

Tip

The model configurations used in the quickstart are accessible here: https://github.com/mudler/LocalAI/tree/master/embedded/models. Contributions are welcome; please feel free to submit a Pull Request.

The phi-2 model configuration from the quickstart is expanded from https://github.com/mudler/LocalAI/blob/master/examples/configurations/phi-2.yaml.

Example: Customizing the Prompt Template

To modify the prompt template, create a Github gist or a Pastebin file, and copy the content from https://github.com/mudler/LocalAI/blob/master/examples/configurations/phi-2.yaml. Alter the fields as needed:

name: phi-2
context_size: 2048
f16: true
threads: 11
gpu_layers: 90
mmap: true
parameters:
  # Reference any HF model or a local file here
  model: huggingface://TheBloke/phi-2-GGUF/phi-2.Q8_0.gguf
  temperature: 0.2
  top_k: 40
  top_p: 0.95
template:
  
  chat: &template |
    Instruct: {{.Input}}
    Output:
  # Modify the prompt template here ^^^ as per your requirements
  completion: *template

Then, launch LocalAI using your gist’s URL:

## Important! Substitute with your gist's URL!
docker run -p 8080:8080 localai/localai:v3.12.1 https://gist.githubusercontent.com/xxxx/phi-2.yaml

Next Steps

Build LocalAI from source

Building LocalAI from source is an installation method that allows you to compile LocalAI yourself, which is useful for custom configurations, development, or when you need specific build options.

For complete build instructions, see the Build from Source documentation in the Installation section.

Run with container images

LocalAI provides a variety of images to support different environments. These images are available on quay.io and Docker Hub.

All-in-One images come with a pre-configured set of models and backends; standard images do not have any models pre-configured or installed.

For GPU acceleration on Nvidia graphics cards, use the Nvidia/CUDA images; if you don’t have a GPU, use the CPU images. If you have an AMD GPU or Apple Silicon, see the build section.

Tip

Available Images Types:

  • Images ending with -core are smaller images without pre-downloaded Python dependencies. Use these images if you plan to use the llama.cpp, stablediffusion-ncn, or rwkv backends; if you are not sure which one to use, do not use these images.
  • Images containing the aio tag are all-in-one images with all the features enabled, and come with an opinionated set of configurations.

Prerequisites

Before you begin, ensure you have a container engine installed if you are not using the binaries. Suitable options include Docker or Podman. For installation instructions, refer to the following guides:

Tip

Hardware Requirements: The hardware requirements for LocalAI vary based on the model size and quantization method used. For performance benchmarks with different backends, such as llama.cpp, visit this link. The rwkv backend is noted for its lower resource consumption.

Standard container images

Standard container images do not have pre-installed models. Use these if you want to configure models manually.

CPU images:

| Description | Quay | Docker Hub |
|---|---|---|
| Latest images from the branch (development) | quay.io/go-skynet/local-ai:master | localai/localai:master |
| Latest tag | quay.io/go-skynet/local-ai:latest | localai/localai:latest |
| Versioned image | quay.io/go-skynet/local-ai:v3.12.1 | localai/localai:v3.12.1 |

Nvidia GPU (CUDA 12) images:

| Description | Quay | Docker Hub |
|---|---|---|
| Latest images from the branch (development) | quay.io/go-skynet/local-ai:master-gpu-nvidia-cuda-12 | localai/localai:master-gpu-nvidia-cuda-12 |
| Latest tag | quay.io/go-skynet/local-ai:latest-gpu-nvidia-cuda-12 | localai/localai:latest-gpu-nvidia-cuda-12 |
| Versioned image | quay.io/go-skynet/local-ai:v3.12.1-gpu-nvidia-cuda-12 | localai/localai:v3.12.1-gpu-nvidia-cuda-12 |

Nvidia GPU (CUDA 13) images:

| Description | Quay | Docker Hub |
|---|---|---|
| Latest images from the branch (development) | quay.io/go-skynet/local-ai:master-gpu-nvidia-cuda-13 | localai/localai:master-gpu-nvidia-cuda-13 |
| Latest tag | quay.io/go-skynet/local-ai:latest-gpu-nvidia-cuda-13 | localai/localai:latest-gpu-nvidia-cuda-13 |
| Versioned image | quay.io/go-skynet/local-ai:v3.12.1-gpu-nvidia-cuda-13 | localai/localai:v3.12.1-gpu-nvidia-cuda-13 |

Intel GPU images:

| Description | Quay | Docker Hub |
|---|---|---|
| Latest images from the branch (development) | quay.io/go-skynet/local-ai:master-gpu-intel | localai/localai:master-gpu-intel |
| Latest tag | quay.io/go-skynet/local-ai:latest-gpu-intel | localai/localai:latest-gpu-intel |
| Versioned image | quay.io/go-skynet/local-ai:v3.12.1-gpu-intel | localai/localai:v3.12.1-gpu-intel |

AMD GPU (hipblas) images:

| Description | Quay | Docker Hub |
|---|---|---|
| Latest images from the branch (development) | quay.io/go-skynet/local-ai:master-gpu-hipblas | localai/localai:master-gpu-hipblas |
| Latest tag | quay.io/go-skynet/local-ai:latest-gpu-hipblas | localai/localai:latest-gpu-hipblas |
| Versioned image | quay.io/go-skynet/local-ai:v3.12.1-gpu-hipblas | localai/localai:v3.12.1-gpu-hipblas |

Vulkan images:

| Description | Quay | Docker Hub |
|---|---|---|
| Latest images from the branch (development) | quay.io/go-skynet/local-ai:master-vulkan | localai/localai:master-vulkan |
| Latest tag | quay.io/go-skynet/local-ai:latest-gpu-vulkan | localai/localai:latest-gpu-vulkan |
| Versioned image | quay.io/go-skynet/local-ai:v3.12.1-vulkan | localai/localai:v3.12.1-vulkan |

These images are compatible with Nvidia ARM64 devices with CUDA 12, such as the Jetson Nano, Jetson Xavier NX, and Jetson AGX Orin. For more information, see the Nvidia L4T guide.

| Description | Quay | Docker Hub |
|---|---|---|
| Latest images from the branch (development) | quay.io/go-skynet/local-ai:master-nvidia-l4t-arm64 | localai/localai:master-nvidia-l4t-arm64 |
| Latest tag | quay.io/go-skynet/local-ai:latest-nvidia-l4t-arm64 | localai/localai:latest-nvidia-l4t-arm64 |
| Versioned image | quay.io/go-skynet/local-ai:v3.12.1-nvidia-l4t-arm64 | localai/localai:v3.12.1-nvidia-l4t-arm64 |

These images are compatible with Nvidia ARM64 devices with CUDA 13, such as the Nvidia DGX Spark. For more information, see the Nvidia L4T guide.

| Description | Quay | Docker Hub |
|---|---|---|
| Latest images from the branch (development) | quay.io/go-skynet/local-ai:master-nvidia-l4t-arm64-cuda-13 | localai/localai:master-nvidia-l4t-arm64-cuda-13 |
| Latest tag | quay.io/go-skynet/local-ai:latest-nvidia-l4t-arm64-cuda-13 | localai/localai:latest-nvidia-l4t-arm64-cuda-13 |
| Versioned image | quay.io/go-skynet/local-ai:v3.12.1-nvidia-l4t-arm64-cuda-13 | localai/localai:v3.12.1-nvidia-l4t-arm64-cuda-13 |

All-in-one images

All-In-One images come pre-configured with a set of models and backends to fully leverage almost the entire LocalAI feature set. These images are available for both CPU and GPU environments, are designed to be easy to use, and require no configuration. Model configurations can be found here, separated by size.

In the AIO images, models are configured with the names of OpenAI models; however, they are actually backed by open-source models, as shown in the table below:

| Category | Model name | Real model (CPU) | Real model (GPU) |
|---|---|---|---|
| Text Generation | gpt-4 | phi-2 | hermes-2-pro-mistral |
| Multimodal Vision | gpt-4-vision-preview | bakllava | llava-1.6-mistral |
| Image Generation | stablediffusion | stablediffusion | dreamshaper-8 |
| Speech to Text | whisper-1 | whisper with the whisper-base model | <= same |
| Text to Speech | tts-1 | en-us-amy-low.onnx from rhasspy/piper | <= same |
| Embeddings | text-embedding-ada-002 | all-MiniLM-L6-v2 in Q4 | all-MiniLM-L6-v2 |

Usage

Select the image (CPU or GPU) and start the container with Docker:

docker run -p 8080:8080 --name local-ai -ti localai/localai:latest-aio-cpu

LocalAI will automatically download all the required models, and the API will be available at localhost:8080.

Or with a docker-compose file:

version: "3.9"
services:
  api:
    image: localai/localai:latest-aio-cpu
    # For a specific version:
    # image: localai/localai:v3.12.1-aio-cpu
    # For Nvidia GPUs uncomment one of the following (cuda12 or cuda13):
    # image: localai/localai:v3.12.1-aio-gpu-nvidia-cuda-12
    # image: localai/localai:v3.12.1-aio-gpu-nvidia-cuda-13
    # image: localai/localai:latest-aio-gpu-nvidia-cuda-12
    # image: localai/localai:latest-aio-gpu-nvidia-cuda-13
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8080/readyz"]
      interval: 1m
      timeout: 20m
      retries: 5
    ports:
      - 8080:8080
    environment:
      - DEBUG=true
      # ...
    volumes:
      - ./models:/models:cached
    # uncomment the following piece if running with Nvidia GPUs
    # deploy:
    #   resources:
    #     reservations:
    #       devices:
    #         - driver: nvidia
    #           count: 1
    #           capabilities: [gpu]
Tip

Models caching: The AIO image will download the needed models on the first run if not already present and store those in /models inside the container. The AIO models will be automatically updated with new versions of AIO images.

You can change the directory inside the container by specifying a MODELS_PATH environment variable (or --models-path).

If you want to use a named model or a local directory, you can mount it as a volume to /models:

docker run -p 8080:8080 --name local-ai -ti -v $PWD/models:/models localai/localai:latest-aio-cpu

or associate a volume:

docker volume create localai-models
docker run -p 8080:8080 --name local-ai -ti -v localai-models:/models localai/localai:latest-aio-cpu

Available AIO images

| Description | Quay | Docker Hub |
|---|---|---|
| Latest images for CPU | quay.io/go-skynet/local-ai:latest-aio-cpu | localai/localai:latest-aio-cpu |
| Versioned image (e.g. for CPU) | quay.io/go-skynet/local-ai:v3.12.1-aio-cpu | localai/localai:v3.12.1-aio-cpu |
| Latest images for Nvidia GPU (CUDA 12) | quay.io/go-skynet/local-ai:latest-aio-gpu-nvidia-cuda-12 | localai/localai:latest-aio-gpu-nvidia-cuda-12 |
| Latest images for Nvidia GPU (CUDA 13) | quay.io/go-skynet/local-ai:latest-aio-gpu-nvidia-cuda-13 | localai/localai:latest-aio-gpu-nvidia-cuda-13 |
| Latest images for AMD GPU | quay.io/go-skynet/local-ai:latest-aio-gpu-hipblas | localai/localai:latest-aio-gpu-hipblas |
| Latest images for Intel GPU | quay.io/go-skynet/local-ai:latest-aio-gpu-intel | localai/localai:latest-aio-gpu-intel |

Available environment variables

The AIO images inherit the same environment variables as the base images and the LocalAI environment (which you can inspect by running --help). In addition, they support extra environment variables available only in the container image:

| Variable | Default | Description |
|---|---|---|
| PROFILE | Auto-detected | The size of the model to use. Available: cpu, gpu-8g |
| MODELS | Auto-detected | A list of model YAML configuration file URIs/URLs (see also running models) |
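For example, the profile can be pinned explicitly in the compose file shown earlier instead of relying on auto-detection; a sketch (PROFILE values per the table above):

```yaml
# Sketch: set AIO environment variables explicitly in docker-compose
services:
  api:
    image: localai/localai:latest-aio-cpu
    ports:
      - 8080:8080
    environment:
      - PROFILE=cpu
```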

See Also

Run with Kubernetes

For installing LocalAI in Kubernetes, the deployment file from the examples can be used and customized as preferred:

kubectl apply -f https://raw.githubusercontent.com/mudler/LocalAI-examples/refs/heads/main/kubernetes/deployment.yaml

For Nvidia GPUs:

kubectl apply -f https://raw.githubusercontent.com/mudler/LocalAI-examples/refs/heads/main/kubernetes/deployment-nvidia.yaml

Alternatively, the helm chart can be used as well:

helm repo add go-skynet https://go-skynet.github.io/helm-charts/
helm repo update
helm show values go-skynet/local-ai > values.yaml


helm install local-ai go-skynet/local-ai -f values.yaml

Troubleshooting

This guide covers common issues you may encounter when using LocalAI, organized by category. For each issue, diagnostic steps and solutions are provided.

Quick Diagnostics

Before diving into specific issues, run these commands to gather diagnostic information:

# Check LocalAI is running and responsive
curl http://localhost:8080/readyz

# List loaded models
curl http://localhost:8080/v1/models

# Check LocalAI version
local-ai --version

# Enable debug logging for detailed output
DEBUG=true local-ai run
# or
local-ai run --log-level=debug

For Docker deployments:

# View container logs
docker logs local-ai

# Check container status
docker ps -a | grep local-ai

# Test GPU access (NVIDIA)
docker run --rm --gpus all nvidia/cuda:12.8.0-base-ubuntu24.04 nvidia-smi

Installation Issues

Binary Won’t Execute on Linux

Symptoms: Permission denied or “cannot execute binary file” errors.

Solution:

chmod +x local-ai-*
./local-ai-Linux-x86_64 run

If you see “cannot execute binary file: Exec format error”, you downloaded the wrong architecture. Verify with:

uname -m
# x86_64 → download the x86_64 binary
# aarch64 → download the arm64 binary

macOS: Application Is Quarantined

Symptoms: macOS blocks LocalAI from running because the DMG is not signed by Apple.

Solution: See GitHub issue #6268 for quarantine bypass instructions. This is tracked for resolution in issue #6244.

Model Loading Problems

Model Not Found

Symptoms: API returns 404 or "model not found" error.

Diagnostic steps:

  1. Check the model exists in your models directory:

    ls -la /path/to/models/
  2. Verify your models path is correct:

    # Check what path LocalAI is using
    local-ai run --models-path /path/to/models --log-level=debug
  3. Confirm the model name matches your request:

    # List available models
    curl http://localhost:8080/v1/models | jq '.data[].id'

Model Fails to Load (Backend Error)

Symptoms: Model is found but fails to load, with backend errors in the logs.

Common causes and fixes:

  • Wrong backend: Ensure the backend in your model YAML matches the model format. GGUF models use llama-cpp, diffusion models use diffusers, etc. See the compatibility table for details.
  • Backend not installed: Check installed backends:
    local-ai backends list
    # Install a missing backend:
    local-ai backends install llama-cpp
  • Corrupt model file: Re-download the model. Partial downloads or disk errors can corrupt files.
  • Wrong model format: LocalAI uses GGUF format for llama.cpp models. Older GGML format is deprecated.
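A quick way to catch a truncated or corrupt download: valid GGUF files begin with the four-byte magic "GGUF". A minimal sketch (the helper name and demo file are illustrative):

```python
import os
import tempfile

def looks_like_gguf(path):
    """Return True if the file starts with the GGUF magic bytes."""
    with open(path, "rb") as f:
        return f.read(4) == b"GGUF"

# Demo on a stand-in file; a real check would point at your models directory
fd, demo = tempfile.mkstemp(suffix=".gguf")
with os.fdopen(fd, "wb") as f:
    f.write(b"GGUF" + b"\x00" * 16)  # fake header, for illustration only
print(looks_like_gguf(demo))  # → True; a corrupt file would fail this check
os.remove(demo)
```

This only checks the magic bytes, not the full file integrity; when in doubt, re-download and compare checksums if the model source publishes them.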

Model Configuration Issues

Symptoms: Model loads but produces unexpected results or errors during inference.

Check your model YAML configuration:

# Example model config
name: my-model
backend: llama-cpp
parameters:
  model: my-model.gguf  # Relative to models directory
context_size: 2048
threads: 4  # Should match physical CPU cores

Common mistakes:

  • model path must be relative to the models directory, not an absolute path
  • threads set higher than physical CPU cores causes contention
  • context_size too large for available RAM causes OOM errors
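Since threads should match physical cores (not logical/hyperthreaded ones), here is a best-effort sketch for counting them on Linux; the helper name is illustrative and it falls back to the logical count where /proc/cpuinfo lacks topology info:

```python
import os

def physical_cores():
    """Best-effort physical core count on Linux via /proc/cpuinfo.

    Falls back to the logical CPU count if topology info is unavailable.
    """
    try:
        cores, current = set(), {}
        with open("/proc/cpuinfo") as f:
            for line in f:
                if ":" in line:
                    key, _, val = line.partition(":")
                    current[key.strip()] = val.strip()
                elif not line.strip():  # blank line ends one processor block
                    if "physical id" in current and "core id" in current:
                        cores.add((current["physical id"], current["core id"]))
                    current = {}
        if cores:
            return len(cores)
    except OSError:
        pass
    return os.cpu_count() or 1

print(physical_cores())  # use this value for threads: in your model YAML
```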

GPU and Memory Issues

GPU Not Detected

NVIDIA (CUDA):

# Verify CUDA is available
nvidia-smi

# For Docker, verify GPU passthrough
docker run --rm --gpus all nvidia/cuda:12.8.0-base-ubuntu24.04 nvidia-smi

When working correctly, LocalAI logs should show: ggml_init_cublas: found X CUDA devices.

Ensure you are using a CUDA-enabled container image (tags containing cuda11, cuda12, or cuda13). CPU-only images cannot use NVIDIA GPUs.

AMD (ROCm):

# Verify ROCm installation
rocminfo

# Docker requires device passthrough
docker run --device=/dev/kfd --device=/dev/dri --group-add=video ...

If your GPU is not in the default target list, open an issue on GitHub. Supported targets include: gfx900, gfx906, gfx908, gfx90a, gfx940, gfx941, gfx942, gfx1030, gfx1031, gfx1100, gfx1101.

Intel (SYCL):

# Docker requires device passthrough
docker run --device /dev/dri ...

Use container images with gpu-intel in the tag. Known issue: SYCL hangs when mmap: true is set — disable it in your model config:

mmap: false

Overriding backend auto-detection:

If LocalAI picks the wrong GPU backend, override it:

LOCALAI_FORCE_META_BACKEND_CAPABILITY=nvidia local-ai run
# Options: default, nvidia, amd, intel

Out of Memory (OOM)

Symptoms: Model loading fails or the process is killed by the OS.

Solutions:

  1. Use smaller quantizations: Q4_K_S or Q2_K use significantly less memory than Q8_0 or Q6_K
  2. Reduce context size: Lower context_size in your model YAML
  3. Enable low VRAM mode: Add low_vram: true to your model config
  4. Limit active models: Only keep one model loaded at a time:
    local-ai run --max-active-backends=1
  5. Enable idle watchdog: Automatically unload unused models:
    local-ai run --enable-watchdog-idle --watchdog-idle-timeout=10m
  6. Manually unload a model:
    curl -X POST http://localhost:8080/backend/shutdown \
      -H "Content-Type: application/json" \
      -d '{"model": "model-name"}'
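Reducing context size helps because the KV cache grows linearly with it. The back-of-envelope formula below is a generic transformer approximation, not LocalAI internals, and the model dimensions are illustrative:

```python
def kv_cache_bytes(n_layers, context, n_kv_heads, head_dim, bytes_per_elem=2):
    """Rough KV-cache size: 2 tensors (K and V) per layer, per position (f16)."""
    return 2 * n_layers * context * n_kv_heads * head_dim * bytes_per_elem

# Illustrative 7B-class model: 32 layers, 32 KV heads, head dimension 128
full = kv_cache_bytes(32, 4096, 32, 128)  # at 4096 context
half = kv_cache_bytes(32, 2048, 32, 128)  # halving context halves the cache
print(full / 2**30, half / 2**30)  # → 2.0 1.0 (GiB)
```

Note that models using grouped-query attention have far fewer KV heads, which shrinks this figure considerably; the point is only that context_size is a first-order knob for memory use.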

Models Stay Loaded and Consume Memory

By default, models remain loaded in memory after first use. This can exhaust VRAM when switching between models.

Configure LRU eviction:

# Keep at most 2 models loaded; evict least recently used
local-ai run --max-active-backends=2

Configure watchdog auto-unload:

local-ai run \
  --enable-watchdog-idle --watchdog-idle-timeout=15m \
  --enable-watchdog-busy --watchdog-busy-timeout=5m

These can also be set via environment variables (LOCALAI_WATCHDOG_IDLE=true, LOCALAI_WATCHDOG_IDLE_TIMEOUT=15m) or in the Web UI under Settings → Watchdog Settings.

See the VRAM Management guide for more details.

API Connection Problems

Connection Refused

Symptoms: curl: (7) Failed to connect to localhost port 8080: Connection refused

Diagnostic steps:

  1. Verify LocalAI is running:

    # Direct install
    ps aux | grep local-ai
    
    # Docker
    docker ps | grep local-ai
  2. Check the bind address and port:

    # Default is :8080. Override with:
    local-ai run --address=0.0.0.0:8080
    # or
    LOCALAI_ADDRESS=":8080" local-ai run
  3. Check for port conflicts:

    ss -tlnp | grep 8080

Authentication Errors (401)

Symptoms: 401 Unauthorized response.

If API key authentication is enabled (LOCALAI_API_KEY or --api-keys), include the key in your requests:

curl http://localhost:8080/v1/models \
  -H "Authorization: Bearer YOUR_API_KEY"

Keys can also be passed via x-api-key or xi-api-key headers.

Request Errors (400/422)

Symptoms: 400 Bad Request or 422 Unprocessable Entity.

Common causes:

  • Malformed JSON in request body
  • Missing required fields (e.g., model or messages)
  • Invalid parameter values (e.g., negative top_n for reranking)
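As a reference point, a minimal body that satisfies the required chat-completion fields can be built and serialized like this (a sketch; the model name is a placeholder):

```python
import json

# Minimal well-formed body for /v1/chat/completions
body = {
    "model": "my-model",  # must match a model listed by /v1/models
    "messages": [{"role": "user", "content": "Hello"}],
}

# Serializing with json.dumps avoids the most common 400 cause: malformed JSON
payload = json.dumps(body)
print(payload)
```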

Enable debug logging to see the full request/response:

DEBUG=true local-ai run

See the API Errors reference for a complete list of error codes and their meanings.

Performance Issues

Slow Inference

Diagnostic steps:

  1. Enable debug mode to see inference timing:

    DEBUG=true local-ai run
  2. Use streaming to measure time-to-first-token:

    curl http://localhost:8080/v1/chat/completions \
      -H "Content-Type: application/json" \
      -d '{"model": "my-model", "messages": [{"role": "user", "content": "Hello"}], "stream": true}'

Common causes and fixes:

  • Model on HDD: Move models to an SSD. If stuck with HDD, disable memory mapping (mmap: false) to load the model entirely into RAM.
  • Thread overbooking: Set --threads to match your physical CPU core count (not logical/hyperthreaded count).
  • Default sampling: LocalAI uses mirostat sampling by default, which produces better quality output but is slower. Disable it for benchmarking:
    # In model config
    mirostat: 0
  • No GPU offloading: Ensure gpu_layers is set in your model config to offload layers to GPU:
    gpu_layers: 99  # Offload all layers
  • Context size too large: Larger context sizes require more memory and slow down inference. Use the smallest context size that meets your needs.
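Putting these fixes together, a model YAML tuned for throughput might look like the following sketch (all values are illustrative and should be adapted to your hardware):

```yaml
name: my-model
backend: llama-cpp
parameters:
  model: my-model.gguf  # relative to the models directory
context_size: 4096      # smallest size that meets your needs
threads: 8              # physical cores, not logical
gpu_layers: 99          # offload all layers to GPU
mirostat: 0             # disable mirostat sampling for raw speed
```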

High Memory Usage

  • Use quantized models (Q4_K_M is a good balance of quality and size)
  • Reduce context_size
  • Enable low_vram: true in model config
  • Disable mmlock (memory locking) if it’s enabled
  • Set --max-active-backends=1 to keep only one model in memory

Docker-Specific Problems

Container Fails to Start

Diagnostic steps:

# Check container logs
docker logs local-ai

# Check if port is already in use
ss -tlnp | grep 8080

# Verify the image exists
docker images | grep localai

GPU Not Available Inside Container

NVIDIA:

# Ensure nvidia-container-toolkit is installed, then:
docker run --gpus all ...

AMD:

docker run --device=/dev/kfd --device=/dev/dri --group-add=video ...

Intel:

docker run --device /dev/dri ...

Health Checks Failing

Add a health check to your Docker Compose configuration:

services:
  local-ai:
    image: localai/localai:latest
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8080/readyz"]
      interval: 30s
      timeout: 10s
      retries: 3

Models Not Persisted Between Restarts

Mount a volume for your models directory:

services:
  local-ai:
    volumes:
      - ./models:/build/models:cached

Network and P2P Issues

P2P Workers Not Discovered

Symptoms: Distributed inference setup but workers are not found.

Key requirements:

  • Use --net host or network_mode: host in Docker
  • Share the same P2P token across all nodes

Debug P2P connectivity:

LOCALAI_P2P_LOGLEVEL=debug \
LOCALAI_P2P_LIB_LOGLEVEL=debug \
LOCALAI_P2P_ENABLE_LIMITS=true \
LOCALAI_P2P_TOKEN="<TOKEN>" \
local-ai run

If DHT is causing issues, try disabling it to use local mDNS discovery instead:

LOCALAI_P2P_DISABLE_DHT=true local-ai run

P2P Limitations

  • Only a single model is currently supported for distributed inference
  • Workers must be detected before inference starts — you cannot add workers mid-inference
  • Workers mode supports llama-cpp compatible models only

See the Distributed Inferencing guide for full setup instructions.

Still Having Issues?

If your issue isn’t covered here:

  1. Search existing issues: Check the GitHub Issues for similar problems
  2. Enable debug logging: Run with DEBUG=true or --log-level=debug and include the logs when reporting
  3. Open a new issue: Include your OS, hardware (CPU/GPU), LocalAI version, model being used, full error logs, and steps to reproduce
  4. Community help: Join the LocalAI Discord for community support