ollama mini edits (#1408)

* ollama mini edits

* ollama llama
This commit is contained in:
Jan Carbonell
2026-04-08 16:57:02 -07:00
committed by GitHub
parent b3fdcd868f
commit 453995b5f1
3 changed files with 22 additions and 13 deletions


@@ -5,6 +5,10 @@ description: "Create a custom provider definition in OpenWork Cloud and import i
Use this when the provider is not in the catalog. The Cloud dashboard accepts a models.dev-style JSON definition, stores the shared credential once, and then lets desktop workspaces import it.
<Frame>
![Custom LLM provider detail in OpenWork Cloud](/images/cloud-custom-llm-provider-detail.png)
</Frame>
## Create the custom provider
1. Open `LLM Providers`.
@@ -57,9 +61,21 @@ The JSON must include `id`, `name`, `npm`, `env`, `doc`, and `models`. `api` is
}
```
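The dashboard rejects definitions that are missing required fields. As a quick local sanity check before pasting a definition — purely illustrative, not part of OpenWork — a short script can confirm the required keys are present:

```python
import json

# Required models.dev-style keys per the docs above; `api` is optional
REQUIRED_KEYS = {"id", "name", "npm", "env", "doc", "models"}

def missing_keys(definition: str) -> set[str]:
    """Return the required keys absent from a JSON provider definition."""
    data = json.loads(definition)
    return REQUIRED_KEYS - data.keys()

# Example: a definition missing `doc` and `models`
incomplete = '{"id": "my-provider", "name": "My Provider", "npm": "pkg", "env": []}'
print(sorted(missing_keys(incomplete)))  # -> ['doc', 'models']
```

An empty result means the definition at least has the right top-level shape; the dashboard still validates the field contents.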
### Functional example: Ollama (Qwen3 8B)
This sets up a local [Ollama](https://docs.ollama.com/api/openai-compatibility) instance running `qwen3:8b`.
Ollama's OpenAI-compatible endpoint requires an API key but ignores the value, so paste anything (for example `ollama`) into the `API key / credential` field when creating the provider.
<Frame>
![Config to add Ollama as a custom provider](/images/ollama-custom-provider.png)
</Frame>
```json
{
@@ -101,21 +117,14 @@ This wires up a local [Ollama](https://docs.ollama.com/api/openai-compatibility)
}
```
Because the endpoint is `localhost`, each teammate who imports this provider into their desktop app needs their own Ollama running locally — the shared credential only covers the JSON definition, not the runtime.
<Frame>
![Ollama added as a provider](/images/ollama-added-as-provider.png)
</Frame>
## Import it into the desktop app
Pull the model first with `ollama pull qwen3:8b`, then make sure the Ollama server is reachable at `http://localhost:11434` by running `ollama serve`.
1. Open `Settings -> Cloud`.
2. Choose the correct `Active org`.
3. Under `Cloud providers`, click `Import`.
4. Reload the workspace when OpenWork asks.
Import it into your desktop app, change the provider name, and voilà! You have a fully local LLM running on your machine.
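To confirm the local endpoint is working, you can hit Ollama's OpenAI-compatible API directly. A minimal sketch using only the Python standard library — the payload is the standard OpenAI chat-completions shape, and the dummy `ollama` bearer token reflects that Ollama ignores the key's value:

```python
import json
import urllib.request

BASE_URL = "http://localhost:11434/v1"  # Ollama's OpenAI-compatible endpoint

def chat_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a chat-completions request; Ollama accepts any API key value."""
    payload = {"model": model, "messages": [{"role": "user", "content": prompt}]}
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json", "Authorization": "Bearer ollama"},
        method="POST",
    )

req = chat_request("qwen3:8b", "Say hello in one word.")
# Only send this with `ollama serve` listening locally:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

If the request succeeds, the same endpoint will work for the imported provider, since OpenWork talks to it through the same OpenAI-compatible interface.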
## When to use this instead of a desktop-only provider
We recommend a Cloud-based provider when the setup is meant to be shared across an org or team. If the provider is only for one workspace, keep it local with [Adding a custom LLM provider/model](/how-to-connect-a-custom-provider).
