mirror of https://github.com/Mintplex-Labs/anything-llm (synced 2026-04-25 17:15:37 +02:00)
Microsoft Foundry Local LLM provider & agent provider (#4435)
* Add Microsoft Foundry Local LLM and agent providers.

* Minor change to fix the early stop token and overloading of the context window: always use the user-defined window _unless_ it is larger than the model's real context window. Cache the context windows when we can get them from the API (Foundry Local 0.7.*+). Unload the model forcefully on model change to prevent resource hogging.

* Add back the token preference, since some models have very large windows and can crash a machine; normalize cases.

---------

Co-authored-by: Timothy Carambat <rambat1010@gmail.com>
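The clamping behavior described above can be sketched roughly as follows. This is an illustrative sketch, not the PR's actual code: the names `clampContextWindow` and `modelWindowCache`, the fallback value, and the example model id are all assumptions for demonstration.

```javascript
// Illustrative sketch of the commit's context-window rule: honor the
// user-defined token limit unless it exceeds the model's real context
// window as reported (and cached) from the Foundry Local API.
// All identifiers here are hypothetical, not the PR's actual names.
const modelWindowCache = new Map(); // modelId -> window reported by the API

function clampContextWindow(userDefinedLimit, modelId, fallback = 4096) {
  // Fall back to a conservative default when the API never reported a window.
  const realWindow = modelWindowCache.get(modelId) ?? fallback;
  const userLimit = Number(userDefinedLimit);
  if (!userDefinedLimit || Number.isNaN(userLimit)) return realWindow;
  // User preference wins only when it fits inside the real window.
  return Math.min(userLimit, realWindow);
}

// e.g. once the API (Foundry Local 0.7.*+) reports a window, cache it:
modelWindowCache.set("phi-3.5-mini", 8192);
console.log(clampContextWindow(32000, "phi-3.5-mini")); // capped to 8192
console.log(clampContextWindow(2048, "phi-3.5-mini"));  // user value kept: 2048
```

Caching the reported window avoids re-querying the API on every request, while the user preference remains a hard ceiling against models whose windows are large enough to exhaust machine memory.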
@@ -569,6 +569,11 @@ const SystemSettings = {
       GenericOpenAiKey: !!process.env.GENERIC_OPEN_AI_API_KEY,
       GenericOpenAiMaxTokens: process.env.GENERIC_OPEN_AI_MAX_TOKENS,
+
+      // Foundry Keys
+      FoundryBasePath: process.env.FOUNDRY_BASE_PATH,
+      FoundryModelPref: process.env.FOUNDRY_MODEL_PREF,
+      FoundryModelTokenLimit: process.env.FOUNDRY_MODEL_TOKEN_LIMIT,

       AwsBedrockLLMConnectionMethod:
         process.env.AWS_BEDROCK_LLM_CONNECTION_METHOD || "iam",
       AwsBedrockLLMAccessKeyId: !!process.env.AWS_BEDROCK_LLM_ACCESS_KEY_ID,