April 6, 2023

GHSL-2023-005: GPU memory accessed after it's freed

Man Yue Mo

Coordinated Disclosure Timeline


GPU memory can be accessed after it is freed



Tested Version

Pixel 6
Device fingerprint: google/oriole/oriole:13/TQ1A.230105.002/9325679:user/release-keys
Patch level: January 2023
Android 13


Use-after-free in JIT memory of the Pixel 6 branch of the Arm Mali driver (GHSL-2023-005)

This vulnerability affects the Pixel 6 and Pixel 6 Pro. It appears to exist only in the Pixel branch of the Arm Mali GPU driver and does not affect the upstream Arm Mali driver in general.

In the January update, various patches from the r40 version of the Arm Mali driver were applied to the Pixel firmware, addressing security issues that were fixed in r40. However, the following change in the function kbase_mem_commit in mali_kbase_mem_linux.c was not applied:

@@ -2262,10 +2258,13 @@ int kbase_mem_commit(struct kbase_context *kctx, u64 gpu_addr, u64 new_pages)

        if (atomic_read(&reg->cpu_alloc->kernel_mappings) > 0)
                goto out_unlock;
        if (reg->flags & KBASE_REG_DONT_NEED)
                goto out_unlock;

+       if (reg->flags & KBASE_REG_NO_USER_FREE)
+               goto out_unlock;

Seeing that other security-related patches from r40 were applied, I assume this omission is either intentional or an oversight. Unfortunately, this change is necessary to fix a use-after-free bug that can be exploited to gain root on a Pixel 6 (Pro), leaving these devices vulnerable to an issue that is already fixed upstream.

When JIT memory is allocated using the kbase_jit_allocate method, it first tries to find a memory region in the jit_pool_head free list to fulfil the request (1a and 1b below):

struct kbase_va_region *kbase_jit_allocate(struct kbase_context *kctx,
		const struct base_jit_alloc_info *info,
		bool ignore_pressure_limit)
{
	...
	if (info->usage_id != 0)
		/* First scan for an allocation with the same usage ID */
		reg = find_reasonable_region(info, &kctx->jit_pool_head, false);  //<---- 1a

	if (!reg)
		/* No allocation with the same usage ID, or usage IDs not in
		 * use. Search for an allocation we can reuse.
		 */
		reg = find_reasonable_region(info, &kctx->jit_pool_head, true);   //<----- 1b

	if (reg) {
		...
		/* kbase_jit_grow() can release & reacquire 'kctx->reg_lock',
		 * so any state protected by that lock might need to be
		 * re-evaluated if more code is added here in future.
		 */
		ret = kbase_jit_grow(kctx, info, reg, prealloc_sas,
				     mmu_sync_info);                              //<------ 2

If a region is found, kbase_jit_grow (2) is entered, which, as the comment suggests, can release the kctx->reg_lock (3. below):

static int kbase_jit_grow(struct kbase_context *kctx,
			  const struct base_jit_alloc_info *info,
			  struct kbase_va_region *reg,
			  struct kbase_sub_alloc **prealloc_sas,
			  enum kbase_caller_mmu_sync_info mmu_sync_info)
{
	...
	if (!kbase_mem_evictable_unmake(reg->gpu_alloc))        //<-------- 4.
		goto update_failed;

	old_size = reg->gpu_alloc->nents;
	/* Allocate some more pages */
	delta = info->commit_pages - reg->gpu_alloc->nents;     //<--------- 5.
	pages_required = delta;
	...
	while (kbase_mem_pool_size(pool) < pages_required) {
		int pool_delta = pages_required - kbase_mem_pool_size(pool);
		int ret;

		kbase_gpu_vm_unlock(kctx);                      //<---------- 3.
		ret = kbase_mem_pool_grow(pool, pool_delta);
		...
	}
	...
	gpu_pages = kbase_alloc_phy_pages_helper_locked(reg->gpu_alloc, pool,   //<------ 6.
			delta, &prealloc_sas[0]);
	...
	ret = kbase_mem_grow_gpu_mapping(kctx, reg, info->commit_pages,         //<------ 7.
					 old_size, mmu_sync_info);
	...
	return ret;

At (4) in the above, the KBASE_REG_DONT_NEED flag is removed from reg. This means that, without the aforementioned change, which prevents the size of a region with the KBASE_REG_NO_USER_FREE flag from changing in kbase_mem_commit, the region can be shrunk by another thread calling kbase_mem_commit while the kctx->reg_lock is dropped at (3).

This then causes an inconsistency in the size of the region, as (6) and (7) use delta and old_size, which are assumed to be unchanged throughout the execution of kbase_jit_grow. In particular, by shrinking the region to size zero while the lock is dropped at (3), the new pages allocated at (6) will be placed at the start of the region, while the GPU mapping created at (7) will be created in the middle of the region, at the offset where the region used to end.

For example, suppose the start address of reg is start and its initial size when it enters kbase_jit_grow is old_size, so its backing store initially ends at start + old_size. After the region is shrunk to size zero, the delta newly allocated pages are inserted at the start of the page array in reg->gpu_alloc, while the new GPU mappings are created between start + old_size and start + old_size + delta.

This means that the first old_size pages in the region remain unmapped, while the GPU addresses between start + delta and start + delta + old_size are backed by null pages.
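The interleaving above can be sketched with a small model. All names here (region_model, jit_grow_model) are hypothetical and only illustrate the arithmetic; they are not driver APIs:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical model (not driver code) of the state kbase_jit_grow
 * leaves behind when the region is shrunk to zero pages while
 * kctx->reg_lock is dropped at (3). Offsets are in units of pages. */
struct region_model {
	size_t nents;       /* pages currently in the backing array       */
	size_t pages_start; /* region offset where new pages were placed  */
	size_t map_start;   /* first region offset covered by the mapping */
	size_t map_end;     /* one past the last mapped region offset     */
};

static void jit_grow_model(struct region_model *reg, size_t commit_pages,
			   int racing_shrink)
{
	size_t old_size = reg->nents;              /* snapshot, as at (5) */
	size_t delta = commit_pages - old_size;

	if (racing_shrink)
		reg->nents = 0;  /* kbase_mem_commit(..., 0) on another thread */

	/* (6): new pages are appended at index reg->nents, now 0 */
	reg->pages_start = reg->nents;
	reg->nents += delta;

	/* (7): the mapping is still created starting at old_size */
	reg->map_start = old_size;
	reg->map_end = old_size + delta;
}
```

With old_size = 512 and commit_pages = 1024, the backing pages land at region offsets 0..511 while the mapping covers offsets 512..1023, reproducing the gap described above.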

This can be abused, for example, to create a corrupted region whose start is not mapped, which in turn allows access to already freed memory, as follows.

When a region is freed, kbase_mmu_teardown_pages is called to remove the gpu mappings to its backing pages, before the backing pages themselves are freed.

int kbase_mmu_teardown_pages(struct kbase_device *kbdev,
	struct kbase_mmu_table *mmut, u64 vpfn, size_t nr, int as_nr)
{
	...
	while (nr) {
		...
		for (level = MIDGARD_MMU_TOPLEVEL;
				level <= MIDGARD_MMU_BOTTOMLEVEL; level++) {
			phys_addr_t next_pgd;

			index = (vpfn >> ((3 - level) * 9)) & 0x1FF;
			page = kmap(p);
			if (mmu_mode->ate_is_valid(page[index], level))
				break; /* keep the mapping */
			else if (!mmu_mode->pte_is_valid(page[index], level)) {
				/* nothing here, advance */       //<-------------- 1.
				switch (level) {
				case MIDGARD_MMU_LEVEL(0):
					count = 134217728;
					break;
				case MIDGARD_MMU_LEVEL(1):
					count = 262144;
					break;
				case MIDGARD_MMU_LEVEL(2):
					count = 512;
					break;
				case MIDGARD_MMU_LEVEL(3):
					count = 1;
					break;
				}
				if (count > nr)
					count = nr;
				goto next;
			}
When kbase_mmu_teardown_pages removes the GPU mappings, if it encounters an invalid page table entry (1. in the above), it skips the number of GPU addresses covered by that entry. For example, if a level 2 entry is invalid, then 512 addresses are skipped, which corresponds to the size of a level 2 entry. However, the skip is counted from the current address, which implicitly assumes the region starts at a mapped, entry-aligned address. When the start of a region is unmapped, the skip can overshoot and leave mappings in place. For example, consider a region with N pages (N > 512), whose starting point is not aligned to 512 pages, such that:

  1. Its start address start satisfies start % (512 * 0x1000) = offset * 0x1000, and its first (512 - offset) pages are unmapped.
  2. The offset pages (offset * 0x1000 bytes of address space) immediately before start are also unmapped. Together with 1., this means the level 2 page table entry covering these 512 pages is invalid.

In this case, when the region is freed and kbase_mmu_teardown_pages runs to remove the mappings, it finds that the level 2 page table entry for start is invalid and skips 512 entries. As a result, the addresses between start + (512 - offset) * 0x1000 and start + 512 * 0x1000 remain mapped while their backing pages are freed along with the region. A region satisfying both conditions 1. and 2. can be created using the bug described above. By freeing such a region, the GPU retains access to already freed memory, which can then be exploited to gain root on the device.
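Under conditions 1. and 2., the overshoot can be computed directly. The helper below is hypothetical and works in units of 4K pages rather than bytes:

```c
#include <assert.h>

/* Sketch (hypothetical helper, not driver code): the region's start
 * page frame start_pfn sits `offset` pages into a 512-page level-2
 * block, and the first (512 - offset) pages of the region are
 * unmapped. Teardown sees an invalid level-2 entry at start_pfn and
 * skips a full 512 pages, overshooting the next block boundary by
 * `offset` pages; those pages stay mapped after their backing is freed. */
static unsigned long leaked_pages(unsigned long start_pfn)
{
	unsigned long offset = start_pfn % 512;

	/* invalid level-2 entry: advance by a whole 512-page block */
	unsigned long vpfn_after_skip = start_pfn + 512;

	/* mapped pages of the region begin at the next 512-page boundary */
	unsigned long first_mapped = start_pfn + (512 - offset);

	return vpfn_after_skip > first_mapped ?
	       vpfn_after_skip - first_mapped : 0; /* = offset */
}
```

For an aligned region (offset = 0) the skip lands exactly on the block boundary and nothing is leaked; any misalignment leaves offset mapped pages pointing at freed memory.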


This can be exploited to gain root and arbitrary kernel code execution from an untrusted app.


This issue was discovered and reported by GHSL team member @m-y-mo (Man Yue Mo).


You can contact the GHSL team at ; please include a reference to GHSL-2023-005 in any communication regarding this issue.