github.com/www-zaq-ai/zaq

Your AI brain,
open & sovereign.

ZAQ is an open-source, self-hosted AI knowledge platform built with Elixir and Phoenix. Deploy on your infrastructure. Keep full control of your data.

v0.6.4 · Elixir / OTP · Phoenix LiveView · PostgreSQL + pgvector · Docker

Up and running in seconds

The local installer bootstraps everything automatically — Docker, database, secrets, and the app.

macOS / Unix
Docker Desktop
No Elixir or PostgreSQL needed
1. Paste this in your terminal and press Enter
$ mkdir zaq && cd zaq && curl -O https://raw.githubusercontent.com/www-zaq-ai/zaq/refs/heads/main/zaq-local.sh && chmod +x zaq-local.sh && ./zaq-local.sh

Creates a zaq/ folder, downloads the installer, and starts the full stack in Docker.
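If you prefer to see what the one-liner does, it unrolls into the same four commands:

```shell
# Same commands as the one-liner above, one per line.
mkdir zaq && cd zaq    # working folder for the stack
curl -O https://raw.githubusercontent.com/www-zaq-ai/zaq/refs/heads/main/zaq-local.sh
chmod +x zaq-local.sh  # make the installer executable
./zaq-local.sh         # bootstraps Docker, database, secrets, and the app
```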

2. Open ZAQ in your browser
3. Connect your LLM and upload documents
Go to /bo/system-config to configure your LLM endpoint and embedding model, then start ingesting documents.

Useful commands

Stop displaying logs (app keeps running): Ctrl+C
Stop the application: docker compose stop
Restart / display logs again: ./zaq-local.sh
Update to the latest release: docker compose pull

To update, cd into your zaq/ folder, run docker compose pull, then restart with ./zaq-local.sh. Your current version is shown at the bottom of the left sidebar.

Architecture

A single Elixir/OTP application composed of five internal services, each running under its own supervision tree.

Engine

Central orchestrator. Sessions, ontology, API routing, and conversation workflows.

Sessions · Ontology · API

Agent

AI layer. RAG retrieval, LLM interaction, classifier, and knowledge gap detection.

RAG · LLM · Classifier

Ingestion

Document processing pipeline. Chunking, embedding generation, PGVector writes, PDF-to-markdown.

Chunking · Embeddings · PDF

Channels

Multi-channel communication adapter. Mattermost today — Slack and Email planned.

Mattermost · Slack* · Email* (*planned)

Back Office

Admin panel built with Phoenix LiveView. Ontology management, document management, system config.

LiveView · Admin · Config

Data Layer

Primary datastore

PostgreSQL + pgvector — sessions, chat history, ontology, documents, embeddings, and configuration in a single Zaq.Repo.

Customer LLM

On-premise, customer-provided. Connected via a configurable endpoint set from Back Office. No data leaves your infrastructure.

Container Images

Published to GitHub Container Registry on every release.

ghcr.io/www-zaq-ai/zaq (GitHub Container Registry)

latest: latest stable release
v0.6.4: current release
0.6.4: same release, without the v prefix
0.6: minor version tag
0: major version tag
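As a sketch of how those tags relate, expanding a release version into the full tag set (the loop is illustrative, not the actual release tooling):

```shell
# Expand a release version into the image tags listed above
# (illustrative sketch; not ZAQ's actual release script).
version="0.6.4"
major="${version%%.*}"   # "0"
minor="${version%.*}"    # "0.6"
for tag in latest "v${version}" "${version}" "${minor}" "${major}"; do
  echo "ghcr.io/www-zaq-ai/zaq:${tag}"
done
```

Pinning to a minor tag, e.g. docker pull ghcr.io/www-zaq-ai/zaq:0.6, picks up patch releases without jumping minor versions.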

Flexible Deployment

Run everything on a single node or distribute services across multiple nodes using Erlang distribution.

Single Node

Run all five services on one machine. Recommended for most deployments.

$ ./zaq-local.sh

Role-Based

Enable specific services per node. Scale AI services independently of the back office.

$ ROLES=agent,ingestion mix phx.server

Multi-Node

Distribute across nodes using Erlang distribution. Peer discovery is automatic.

$ ROLES=engine,bo iex --sname bo@host
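A minimal sketch of how a launcher script might interpret the ROLES variable before starting services (the default value and the loop body are assumptions for illustration, not ZAQ's actual startup code):

```shell
# Split ROLES into individual services; default to all five when unset
# (hypothetical default; the real launcher may behave differently).
ROLES="${ROLES:-engine,agent,ingestion,channels,bo}"
echo "$ROLES" | tr ',' '\n' | while read -r role; do
  echo "enabling service: $role"
done
```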

Contributing

We follow Conventional Commits and a trunk-based flow. All contributions are welcome.

1. Fork the repository
gh repo fork www-zaq-ai/zaq

2. Create a feature branch
git checkout -b feature/my-feature

3. Commit with Conventional Commits
git commit -m "feat(scope): description"

4. Run precommit checks
mix precommit

5. Push and open a Pull Request
git push origin feature/my-feature

Use Conventional Commits for your PR title

Examples: feat(agent): add streaming responses · fix(ingestion): handle empty PDF

Run mix precommit before submitting. Releases are automated via release-please.
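For a quick local check that a commit message or PR title matches the Conventional Commits shape, something like this works (the type list is a simplified subset, not the project's full configuration):

```shell
# Validate a message against a simplified Conventional Commits pattern
# (type list is illustrative, not exhaustive).
msg="feat(agent): add streaming responses"
if echo "$msg" | grep -Eq '^(feat|fix|docs|chore|refactor|test)(\([a-z-]+\))?(!)?: .+'; then
  echo "ok: $msg"
else
  echo "invalid: $msg"
fi
```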

Ready to run your own
knowledge brain?

Deploy ZAQ on your infrastructure today — or book a demo to see what the managed version can do.