Coordinated Disclosure Timeline
- 2025-05-07: Reported via Private Vulnerabilty Reporting (PVR) on GitHub: GHSA-7hpg-6pc7-cx86
- 2025-05-07: Vulnerability is fixed
Summary
If AnythingLLM is configured to use Ollama with an authentication token, this token could be exposed in plain text to unauthenticated users at the /api/setup-complete
endpoint.
Project
AnythingLLM
Tested Version
Details
Ollama token leak in systemSettings.js
(GHSL-2025-056
)
AnythingLLM has an endpoint /api/setup-complete
that does not require any credentials to use even if the main AnythingLLM is protected with authentication. This endpoint reveals some system information about the instance, but masks most of the sensitive values. At the same time, if AnythingLLM is set up to use Ollama with an authentication token, this token is not masked because of the error on line 475.
Proof of concept
curl localhost:3001/api/setup-complete | grep OllamaLLMAuthToken
Vulnerable code location
Impact
Ollama token leakage on AnythingLLM grants complete access to the Ollama instance. Since Ollama offers an API for configuring the models, a potential attacker could modify the model’s template or system prompt to change the model’s behavior. This would enable attackers to hijack conversations of other users, invoke any tools or MCP servers utilized by AnythingLLM, and potentially access documents uploaded to AnythingLLM by other users.
CWE
- CWE-200: Exposure of Sensitive Information to an Unauthorized Actor
Credit
This issue was discovered and reported by GHSL team member @artsploit (Michael Stepankin).
Contact
You can contact the GHSL team at securitylab@github.com
, please include a reference to GHSL-2025-056
in any communication regarding this issue.
Links
https://github.com/Mintplex-Labs/anything-llm/security/advisories/GHSA-7hpg-6pc7-cx86