
🔧 Installation Guide

You can install ChainForge locally or try it out on the web. This document concerns local installation.

Installing ChainForge on your local machine provides the benefit of being able to:

  • load API keys from environment variables,
  • run Python evaluator nodes, and
  • query self-hosted (Ollama) models.

If you are a developer looking to run ChainForge from source to modify or extend it, see the For Developers page.

🚀 Installation

Step 0. Prerequisites 🐍

System Requirements

  • Python 3.10+: ChainForge requires Python 3.10 or later. Check your version by running python --version in the terminal.

Installation links:

  • If you don't have Python installed, download it from python.org
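If you're unsure whether your interpreter qualifies, a quick programmatic check works too (a minimal sketch; run it with the same Python you'll use to install ChainForge):

```python
import sys

def meets_requirement(min_version=(3, 10)):
    """Return True if the running interpreter satisfies ChainForge's minimum."""
    return sys.version_info[:2] >= min_version

if __name__ == "__main__":
    ok = meets_requirement()
    print(f"Python {sys.version.split()[0]} - {'OK' if ok else 'too old, need 3.10+'}")
```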

Step 1. Install on your machine 💻

Choose your preferred installation method based on your needs and experience level:

Option 1: Install with uv (recommended)

Why uv? It is 10-100x faster than pip and provides excellent dependency management!

1. Install uv (if not already present)

curl -LsSf https://astral.sh/uv/install.sh | sh

2. Install ChainForge as a uv tool

uv tool install chainforge

Option 2: Install with pip

1. Create a new directory and cd into it

2. Create a virtual environment (highly recommended for RAG features)

python -m venv venv 
source venv/bin/activate

3. Install ChainForge via pip

pip install chainforge 

Option 3: Install with pyenv-virtualenv

Perfect for managing multiple Python versions. If you need to install Python 3.10+ or want precise version control, use pyenv-virtualenv:

1. Install specific Python version

pyenv install 3.10.11

2. Create virtual environment

pyenv virtualenv 3.10.11 chainforge-venv

3. Activate environment

pyenv activate chainforge-venv

4. Install ChainForge

pip install chainforge

5. To deactivate later

pyenv deactivate

Step 2. Running ChainForge Server 🖥️

Basic server setup for simple use

chainforge serve

Your server will be available at localhost:8000

Advanced setup with security and custom cache directory.

1. Set up cache directory environment variable

# Add to your shell profile (.bashrc, .zshrc, etc.)
export CHAINFORGE_CACHE_DIR=<path_to_a_directory> # e.g. "$HOME/chainforge-cache"

# Create the directory
mkdir -p "$CHAINFORGE_CACHE_DIR"

2. Run

chainforge serve --dir "$CHAINFORGE_CACHE_DIR" --secure all  # run with `--help` for more explanation

📖 Details about the command-line options of the serve subcommand

Available flags and options:

  • -h, --help: Show help message and exit
  • --port [PORT]: The port to run the server on. Defaults to 8000
  • --host [HOST]: The host to run the server on. Defaults to 'localhost'
  • --dir DIR: Set a custom directory for saving flows and autosaving. By default, ChainForge uses the user data location suggested by the platformdirs module. Should be an absolute path
  • --secure {off,settings,all}: Encrypt locally stored files with a password

🔐 Security Encryption Modes:

  • off (default) = No encryption
  • settings = Only encrypt the settings file (that may contain API keys entered via the UI)
  • all = Encrypt all files (flows, settings, favorites, etc.)

🔑 Password Management

  • You must provide a password at every startup when using encryption
  • Save your password somewhere safe - it's not stored anywhere
  • If you lose your password, you cannot access your files

📤 Export Behavior

Clicking the 'Export' button in the UI will still export a non-encrypted flow, so you can share files normally. This setting only affects local storage.

🌐 Browser Compatibility

If you'd like to run ChainForge on a different hostname and port, specify --host and --port.

For instance: chainforge serve --host 0.0.0.0 --port 3400

ChainForge currently supports Chrome, Firefox, Edge, and Brave browsers. For other browser support, please open an Issue on our GitHub.

🎉 Server Started!

Your ChainForge server is now running! The interface will be available in your browser with all features enabled.
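If you want to confirm the server is reachable from a script (for example, before automating flows), a stdlib-only check is enough. This is a sketch; the URL assumes the default host and port from above:

```python
from urllib.request import urlopen
from urllib.error import URLError

def is_server_up(url="http://localhost:8000", timeout=2.0):
    """Return True if something answers HTTP at the given URL."""
    try:
        with urlopen(url, timeout=timeout) as resp:
            return resp.status < 500
    except (URLError, OSError):
        return False

if __name__ == "__main__":
    print("ChainForge reachable:", is_server_up())
```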

Step 3. Get and set API keys for model providers 🔑

Required for LLM Access

Though you can run ChainForge without any keys, you can't do anything meaningful without the ability to call an LLM!

Currently supported model providers:

  • OpenAI models, including GPT-4 and o3 (all variants and function calling)
  • Anthropic models (Claude-3.5, etc)
  • Google Gemini and PaLM2 (chat and text bison models)
  • DeepSeek models
  • 🤗 HuggingFace models (via the HuggingFace Inference and Inference Endpoints API)
  • AlephAlpha models
  • Microsoft Azure OpenAI Endpoints
  • Amazon Bedrock Endpoints
  • Together.ai models (LLaMA2, Mistral, etc)
  • (Locally run) models hosted via Ollama
  • ...and any other provider through custom provider scripts!

🔗 Ollama Setup

For Ollama: install Ollama, download the models you want, and run ollama serve in a console. Then add Ollama in the "Model" section of Prompt or Chat Nodes and set the model name appropriately.
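Once ollama serve is running, Ollama exposes an HTTP API on localhost:11434 that ChainForge talks to. As a sanity check you can query it yourself; this sketch assumes the default port and that a model named llama3 has been pulled (swap in whichever model you downloaded):

```python
import json
from urllib.request import Request, urlopen

OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model, prompt):
    """Build the JSON payload Ollama's /api/generate endpoint expects."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask_ollama(model, prompt):
    """Send a single non-streaming generation request and return the text."""
    payload = json.dumps(build_request(model, prompt)).encode()
    req = Request(OLLAMA_URL, data=payload,
                  headers={"Content-Type": "application/json"})
    with urlopen(req, timeout=60) as resp:
        return json.load(resp)["response"]

if __name__ == "__main__":
    print(ask_ollama("llama3", "Say hello in five words."))
```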

🔐 How to Set API keys for specific model providers

Two simple steps:

1. 🔑 Get an API key

2. ⚙️ Set the API key in ChainForge

Input your API keys manually via the Settings button in the top-right corner of ChainForge.

Set them as environment variables to avoid re-entering keys every time:

Supported environment variable names:

  • OpenAI: OPENAI_API_KEY (for setup guidance, see section 3 of the OpenAI Best Practices Guide)
  • OpenAI Base URL: OPENAI_BASE_URL (optional, for custom endpoints)
  • HuggingFace: HUGGINGFACE_API_KEY
  • Anthropic: ANTHROPIC_API_KEY
  • Google (Gemini or PaLM2): PALM_API_KEY
  • Together.ai: TOGETHER_API_KEY
  • Amazon Bedrock: AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_REGION, and optionally AWS_SESSION_TOKEN. For Amazon Bedrock setup, see the Supported Models page
  • AlephAlpha: ALEPH_ALPHA_API_KEY
  • DeepSeek: DEEPSEEK_API_KEY
  • Azure OpenAI: AZURE_OPENAI_KEY and AZURE_OPENAI_ENDPOINT (for examples, see the Azure OpenAI Documentation)

Example for macOS/Linux:

# Add to your shell profile (.bashrc, .zshrc, etc.)
echo "export OPENAI_API_KEY='your-api-key-here'" >> ~/.zshrc
source ~/.zshrc
echo $OPENAI_API_KEY  # Verify it's set

🔄 Restart Terminal

Reopen your terminal after setting environment variables! Environment variables are loaded when the terminal starts, so you need a fresh session before running chainforge serve.
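As a sanity check that your shell actually exported the keys, you can list which of the variable names above are set (a sketch; it only reports presence and never prints key values):

```python
import os

KEY_VARS = [
    "OPENAI_API_KEY", "HUGGINGFACE_API_KEY", "ANTHROPIC_API_KEY",
    "PALM_API_KEY", "TOGETHER_API_KEY", "ALEPH_ALPHA_API_KEY",
    "DEEPSEEK_API_KEY", "AZURE_OPENAI_KEY",
]

def set_keys(env=os.environ):
    """Return the provider key variables that are present and non-empty."""
    return [name for name in KEY_VARS if env.get(name)]

if __name__ == "__main__":
    found = set_keys()
    print(f"Found {len(found)} key(s): {', '.join(found) or 'none'}")
```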

Step 4. Check out Examples! 🎯

🚀 You're Ready!

Click Example Flows to get a sense of what ChainForge is capable of. A popular choice is ground truth evaluations, which use Tabular Data nodes.

🔍 What to Try First

  • Compare different prompts across multiple LLMs
  • Evaluate response quality with built-in metrics
  • Explore multimodal capabilities with image and text inputs
  • Test prompt robustness against variations

🎉 Congratulations! You now have ChainForge running locally with all the power of visual prompt engineering at your fingertips!