Mirror of https://github.com/Mintplex-Labs/anything-llm, synced 2026-04-25 17:15:37 +02:00
feat: support setting maxConcurrentChunks for Generic OpenAI embedder (#2655)
* Exposes the `maxConcurrentChunks` parameter for the generic OpenAI embedder through configuration. This allows setting a batch size for endpoints which don't support the default of 500.
* Updates the new field in the UI and ensures the value is retrieved with the proper type and format.

Co-authored-by: timothycarambat <rambat1010@gmail.com>
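For context, a minimal sketch of the kind of batching this setting controls. The function name, request path, and model placeholder below are illustrative assumptions, not the project's actual implementation; only the environment variable names come from the diff further down.

// Illustrative sketch (TypeScript): group text chunks into batches of at most
// maxConcurrentChunks and send each batch to an OpenAI-compatible embeddings endpoint.
// The "/embeddings" path and the model name are placeholders, not confirmed values.
async function embedTextChunks(
  texts: string[],
  maxConcurrentChunks = 500, // configurable via GENERIC_OPEN_AI_EMBEDDING_MAX_CONCURRENT_CHUNKS
): Promise<number[][]> {
  const embeddings: number[][] = [];
  for (let i = 0; i < texts.length; i += maxConcurrentChunks) {
    const batch = texts.slice(i, i + maxConcurrentChunks);
    const res = await fetch(`${process.env.EMBEDDING_BASE_PATH}/embeddings`, {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${process.env.GENERIC_OPEN_AI_EMBEDDING_API_KEY}`,
      },
      body: JSON.stringify({ model: "text-embedding-model", input: batch }),
    });
    const payload = await res.json();
    for (const item of payload.data) embeddings.push(item.embedding);
  }
  return embeddings;
}

Lowering the value (for example to 100) keeps each request under a provider's per-request input limit, at the cost of more round trips.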
@@ -164,6 +164,7 @@ GID='1000'
 # EMBEDDING_MODEL_MAX_CHUNK_LENGTH=8192
 # EMBEDDING_BASE_PATH='http://127.0.0.1:4000'
 # GENERIC_OPEN_AI_EMBEDDING_API_KEY='sk-123abc'
+# GENERIC_OPEN_AI_EMBEDDING_MAX_CONCURRENT_CHUNKS=500

 ###########################################
 ######## Vector Database Selection ########
@@ -299,4 +300,4 @@ GID='1000'

 # Enable simple SSO passthrough to pre-authenticate users from a third party service.
 # See https://docs.anythingllm.com/configuration#simple-sso-passthrough for more information.
-# SIMPLE_SSO_ENABLED=1
+# SIMPLE_SSO_ENABLED=1
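A hedged note on how the new variable is likely consumed: a sketch assuming the embedder reads the value from the environment and falls back to the previous hard-coded batch size of 500 when it is unset or invalid. The function name is illustrative and this is not the project's actual code.

// Illustrative sketch: read the batch size from the environment and fall back
// to 500 when the value is missing or not a positive integer.
function resolveMaxConcurrentChunks(): number {
  const raw = process.env.GENERIC_OPEN_AI_EMBEDDING_MAX_CONCURRENT_CHUNKS;
  const parsed = Number.parseInt(raw ?? "", 10);
  return Number.isInteger(parsed) && parsed > 0 ? parsed : 500;
}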