OpenWork is built on OpenCode primitives, so it supports everything you can configure in opencode.json, such as adding a model. However, we recommend adding it to /path-to-your-workspace/.config/opencode/opencode.json rather than modifying the global ~/.config/opencode/opencode.json.
Inside /path-to-your-workspace/.config/opencode/opencode.json:
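A minimal provider entry for the Ollama example below might look like this. This is a sketch assuming OpenCode's OpenAI-compatible provider shape; the `npm` package, `baseURL`, and model key shown here are assumptions you should adapt to your install:

```json
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "ollama": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "Ollama (local)",
      "options": {
        "baseURL": "http://localhost:11434/v1"
      },
      "models": {
        "qwen3:8b": {
          "name": "Qwen3 8B"
        }
      }
    }
  }
}
```

The model key (`qwen3:8b`) should match the model name you pulled with Ollama, and `baseURL` points at Ollama's local OpenAI-compatible endpoint.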
Functional example: Ollama (Qwen3 8B)
This setup uses a local Ollama instance running qwen3:8b.
Run `ollama pull qwen3:8b`, then make sure the Ollama server is reachable at http://localhost:11434 (start it with `ollama serve` if it is not already running).
Open the desktop app, change the provider name, and voila! You have a fully local LLM running on your machine.
For teams, you can manage the provider in OpenWork Cloud under LLM Providers, then import it into each workspace from Settings -> Cloud.