feat: support setting maxConcurrentChunks for Generic OpenAI embedder (#2655)

* Exposes the `maxConcurrentChunks` parameter for the Generic OpenAI embedder through configuration. This allows setting a batch size for endpoints that do not support the default of 500 chunks per request.
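
A minimal sketch of how such a setting might be read, assuming the env var name from the diff below and the default of 500 described above; the getter name and the validation/fallback logic are illustrative, not the project's actual code.

```ts
// Hypothetical getter for the new setting. The env var name matches the
// diff below; the fallback behavior is an assumption.
const DEFAULT_MAX_CONCURRENT_CHUNKS = 500;

export function getMaxConcurrentChunks(): number {
  const raw = process.env.GENERIC_OPEN_AI_EMBEDDING_MAX_CONCURRENT_CHUNKS;
  const parsed = Number(raw);
  // Fall back to the default when the value is unset, non-numeric,
  // or not a positive integer.
  if (!raw || !Number.isInteger(parsed) || parsed <= 0) {
    return DEFAULT_MAX_CONCURRENT_CHUNKS;
  }
  return parsed;
}
```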

* Update the new field in the new UI;
add a getter to ensure proper type and format

---------

Co-authored-by: timothycarambat <rambat1010@gmail.com>
Author: hdelossantos
Date: 2024-11-21 14:29:44 -05:00
Committed by: GitHub
Co-authored-by: timothycarambat <rambat1010@gmail.com>
Parent: b66017d024
Commit: 304796ec59
6 changed files with 69 additions and 4 deletions


@@ -164,6 +164,7 @@ GID='1000'
 # EMBEDDING_MODEL_MAX_CHUNK_LENGTH=8192
 # EMBEDDING_BASE_PATH='http://127.0.0.1:4000'
 # GENERIC_OPEN_AI_EMBEDDING_API_KEY='sk-123abc'
+# GENERIC_OPEN_AI_EMBEDDING_MAX_CONCURRENT_CHUNKS=500
 ###########################################
 ######## Vector Database Selection ########
@@ -299,4 +300,4 @@ GID='1000'
 # Enable simple SSO passthrough to pre-authenticate users from a third party service.
 # See https://docs.anythingllm.com/configuration#simple-sso-passthrough for more information.
-# SIMPLE_SSO_ENABLED=1
+# SIMPLE_SSO_ENABLED=1
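
For context, a hedged sketch of how the configured batch size might drive embedding requests; `embedBatch` and the surrounding function names are illustrative stand-ins, not the project's actual API.

```ts
// Illustrative only: split text chunks into batches of at most the
// configured size and send each batch to the OpenAI-compatible
// /embeddings endpoint via a stand-in embedBatch() call.
declare function embedBatch(batch: string[]): Promise<number[][]>;

async function embedTextChunks(chunks: string[]): Promise<number[][]> {
  const batchSize = getMaxConcurrentChunks(); // getter sketched above
  const results: number[][] = [];
  for (let i = 0; i < chunks.length; i += batchSize) {
    const batch = chunks.slice(i, i + batchSize);
    results.push(...(await embedBatch(batch)));
  }
  return results;
}
```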