You want AI features in Plane: issue summaries, description generation, sprint planning. But you can't send your project data to OpenAI or Anthropic.
Local LLM integration via Ollama. Issue summarization, description generation, sprint planning suggestions, and label auto-suggestions, all running on your hardware. Zero data leaves your server.
Install Ollama and pull a model (e.g., llama3.2)
Deploy the Plane AI service
API endpoints provide summaries, descriptions, plans, and label suggestions
All inference runs locally, with no external API calls; a quick verification sketch follows these steps
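Before deploying the service, it's worth sanity-checking step 1. The sketch below uses Ollama's standard REST API on its default port (11434) to confirm the model is pulled and responding; the model name is just the example from above:

```python
# Verify Ollama is running and the example model from step 1 is pulled.
# Uses Ollama's documented REST API on its default port.
import requests

OLLAMA = "http://localhost:11434"
MODEL = "llama3.2"

# GET /api/tags lists locally pulled models
models = requests.get(f"{OLLAMA}/api/tags", timeout=10).json().get("models", [])
assert any(m["name"].startswith(MODEL) for m in models), f"{MODEL} is not pulled"

# POST /api/generate runs a one-shot completion; stream=False returns one JSON object
resp = requests.post(
    f"{OLLAMA}/api/generate",
    json={"model": MODEL, "prompt": "Reply with one word: ready", "stream": False},
    timeout=120,
)
print(resp.json()["response"])
```

If both calls succeed, the service will have a working Ollama backend to talk to once deployed.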
Here's what it looks like in action:
Summarize long issues and their comment threads into concise overviews; an example request is sketched after this list.
Give it a title, get a structured issue description with acceptance criteria.
Feed it a list of issues, get prioritization and risk assessment suggestions.
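As a rough sketch of what a request could look like, here's a summarization call in Python. The service address, route, and payload fields below are illustrative assumptions rather than the exact API contract; description generation and sprint planning calls would follow the same shape:

```python
# Illustrative request to the AI service's summarization endpoint.
# The URL, route, and JSON field names are assumptions for this sketch,
# not the service's documented contract.
import requests

AI_SERVICE = "http://localhost:8000"  # hypothetical address of the deployed service

issue = {
    "title": "Checkout page times out under load",
    "description": "Users report 504s when the cart holds more than 20 items.",
    "comments": [
        "Reproduced on staging with 25 items.",
        "Latency spike traced to the pricing service.",
    ],
}

# All inference happens locally via Ollama; nothing leaves the server.
resp = requests.post(f"{AI_SERVICE}/summarize", json=issue, timeout=120)
resp.raise_for_status()
print(resp.json()["summary"])
```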
All data stays on your server. No OpenAI, no Anthropic, no cloud AI APIs.
Full source code, Dockerfile, docker-compose.yml, README, email support
⚠️ Requirements: Plane 0.14+ (self-hosted), Ollama with at least one model pulled, GPU recommended
Llama 3.2 works well for most tasks. Larger models give better results but need more hardware.
A GPU is recommended but not required. CPU inference works but is slower.
No subscription. One-time purchase, run on your hardware.
Questions? Email [email protected] · 14-day money-back guarantee