mirror of
https://github.com/different-ai/openwork
synced 2026-04-25 17:15:34 +02:00
@@ -5,6 +5,10 @@ description: "Create a custom provider definition in OpenWork Cloud and import i
Use this when the provider is not in the catalog. The Cloud dashboard accepts a models.dev-style JSON definition, stores the shared credential once, and then lets desktop workspaces import it.
<Frame>

</Frame>
## Create the custom provider
1. Open `LLM Providers`.
@@ -57,9 +61,21 @@ The JSON must include `id`, `name`, `npm`, `env`, `doc`, and `models`. `api` is
}
```
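A minimal definition with those fields might look like the sketch below. This assumes the models.dev-style layout the dashboard accepts; every value here is an illustrative placeholder, not a catalog entry:

```json
{
  "id": "my-provider",
  "name": "My Provider",
  "npm": "@ai-sdk/openai-compatible",
  "env": ["MY_PROVIDER_API_KEY"],
  "doc": "https://example.com/docs",
  "api": "https://api.example.com/v1",
  "models": {
    "my-model": { "name": "My Model" }
  }
}
```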
## Import it into the desktop app
1. Open `Settings -> Cloud`.
2. Choose the correct `Active org`.
3. Under `Cloud providers`, click `Import`.
4. Reload the workspace when OpenWork asks.
### Functional example: Ollama (Qwen3 8B)
This wires up a local [Ollama](https://docs.ollama.com/api/openai-compatibility) instance running `qwen3:8b`. Pull the model first with `ollama pull qwen3:8b`, then start the server with `ollama serve` and make sure it is reachable at `http://localhost:11434`. Ollama's OpenAI-compatible endpoint requires an API key but ignores its value, so paste anything (for example `ollama`) into the `API key / credential` field when creating the provider.
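To illustrate why any API key works here, this is a rough sketch of the request a client sends to Ollama's OpenAI-compatible endpoint. The helper below is hypothetical, not OpenWork's actual code:

```python
import json

def build_chat_request(base_url: str, model: str, prompt: str, api_key: str):
    """Build the URL, headers, and body for an OpenAI-compatible chat call."""
    url = f"{base_url.rstrip('/')}/v1/chat/completions"
    headers = {
        # Ollama requires the Authorization header but ignores the token value.
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    })
    return url, headers, body

url, headers, body = build_chat_request(
    "http://localhost:11434", "qwen3:8b", "Hello", "ollama"
)
print(url)  # -> http://localhost:11434/v1/chat/completions
```

The header must be present, but Ollama does not check the token, which is why a throwaway value like `ollama` is enough.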
<Frame>

</Frame>
```json
{
@@ -101,21 +117,14 @@ This wires up a local [Ollama](https://docs.ollama.com/api/openai-compatibility)
}
```
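Before pasting a definition into the dashboard, a quick local sanity check against the required fields listed earlier can save a round trip. This helper is a sketch for illustration, not part of OpenWork:

```python
import json

# Required top-level keys per the schema described above; "api" is optional.
REQUIRED_FIELDS = {"id", "name", "npm", "env", "doc", "models"}

def missing_fields(definition_json: str) -> list[str]:
    """Return the required top-level keys absent from a provider definition."""
    definition = json.loads(definition_json)
    return sorted(REQUIRED_FIELDS - definition.keys())

print(missing_fields('{"id": "ollama", "name": "Ollama"}'))
# -> ['doc', 'env', 'models', 'npm']
```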
Because the endpoint is `localhost`, each teammate who imports this provider into their desktop app needs their own Ollama running locally — the shared credential only covers the JSON definition, not the runtime.
<Frame>


</Frame>
Import the provider into your desktop app, adjust the provider name if you like, and voilà: a fully local LLM running on your machine.
## When to use this instead of a desktop-only provider
Use the Cloud flow when the provider should be shared across an org or team. If the provider is only for one workspace, keep it local with [Adding a custom LLM provider/model](/how-to-connect-a-custom-provider).

BIN packages/docs/images/ollama-added-as-provider.png (new file, 591 KiB)
BIN packages/docs/images/ollama-custom-provider.png (new file, 356 KiB)