
Conversation

@kernel-patches-daemon-bpf

Pull request for series with
subject: Remove task and cgroup local storage percpu counters
version: 3
url: https://patchwork.kernel.org/project/netdevbpf/list/?series=1034704

@kernel-patches-daemon-bpf
Author

Upstream branch: ec439c3
series: https://patchwork.kernel.org/project/netdevbpf/list/?series=1034704
version: 3

@kernel-patches-review-bot

commit 8ba48be3912399547ff846e66dbe976e09dad8e3
Author: Amery Hung <ameryhung@gmail.com>

bpf: Convert bpf_selem_link_map to failable

This patch prepares for the rqspinlock conversion by changing
bpf_selem_link_map() to return int instead of void. Currently the
function always returns 0 with no functional change.

> diff --git a/net/core/bpf_sk_storage.c b/net/core/bpf_sk_storage.c
> index 850dd736ccd1..4f8e917f49d9 100644
> --- a/net/core/bpf_sk_storage.c
> +++ b/net/core/bpf_sk_storage.c
> @@ -191,7 +191,9 @@ int bpf_sk_storage_clone(const struct sock *sk, struct sock *newsk)
>  		}
>
>  		if (new_sk_storage) {
> -			bpf_selem_link_map(smap, copy_selem);
> +			ret = bpf_selem_link_map(smap, copy_selem);
> +			if (ret)
> +				goto out;
>  			bpf_selem_link_storage_nolock(new_sk_storage, copy_selem);
>  		} else {
>  			ret = bpf_local_storage_alloc(newsk, smap, copy_selem, GFP_ATOMIC);

When bpf_selem_link_map() fails, is there missing cleanup here? Looking
at the else branch below (bpf_local_storage_alloc failure), the error
handling includes:

  - bpf_selem_free(copy_selem, true)
  - atomic_sub(smap->elem_size, &newsk->sk_omem_alloc)
  - bpf_map_put(map)

The new error path for bpf_selem_link_map() failure goes directly to the
out label without any of this cleanup. At this point in the code:

  - copy_selem was allocated by bpf_sk_storage_clone_elem() and charged
    via bpf_selem_alloc()->mem_charge()
  - map reference was taken via bpf_map_inc_not_zero()
  - copy_selem has not yet been linked to new_sk_storage (that happens
    in bpf_selem_link_storage_nolock() after the failing call)

The comment at the out label says the caller is responsible for cleanup
via bpf_sk_storage_free(), but since copy_selem was never linked to the
storage list, bpf_local_storage_destroy() will not find it.

Currently this is not triggerable since bpf_selem_link_map() always
returns 0, but commit 615477732ee8 ("bpf: Change local_storage->lock and
b->lock to rqspinlock") later in this series makes the function actually
return errors.



AI reviewed your patch. Please fix the bug or reply by email explaining why it is not a bug.
See: https://github.com/kernel-patches/vmtest/blob/master/ci/claude/README.md

In-Reply-To-Subject: bpf: Convert bpf_selem_link_map to failable
CI run summary: https://github.com/kernel-patches/bpf/actions/runs/20346662777

@kernel-patches-review-bot

commit 88fc7349625f8f5f10aba236ad6005c2b7f05227
Author: Amery Hung <ameryhung@gmail.com>

bpf: Convert bpf_selem_unlink to failable

This commit converts bpf_selem_unlink() from void to int return type
as preparation for switching to rqspinlock. The commit message states
"It still always succeeds and returns 0" and "No functional change."

> diff --git a/kernel/bpf/bpf_local_storage.c b/kernel/bpf/bpf_local_storage.c
> index 0e3fa5fbaaf3..fa629a180e9e 100644
> --- a/kernel/bpf/bpf_local_storage.c
> +++ b/kernel/bpf/bpf_local_storage.c
> @@ -367,7 +367,7 @@ static void bpf_selem_link_map_nolock(struct bpf_local_storage_map *smap,
>  	hlist_add_head_rcu(&selem->map_node, &b->list);
>  }
>
> -void bpf_selem_unlink(struct bpf_local_storage_elem *selem, bool reuse_now)
> +int bpf_selem_unlink(struct bpf_local_storage_elem *selem, bool reuse_now)
>  {
>  	struct bpf_local_storage *local_storage;
>  	bool free_local_storage = false;
>  	HLIST_HEAD(selem_free_list);
>  	unsigned long flags;
>  	int err;

The err variable is declared here but not initialized.

>  	if (unlikely(!selem_linked_to_storage_lockless(selem)))
>  		/* selem has already been unlinked from sk */
> -		return;
> +		return 0;

[ ... ]

>  	raw_spin_lock_irqsave(&local_storage->lock, flags);
>  	if (likely(selem_linked_to_storage(selem))) {

If this condition is false, the if-block is skipped and err is never
assigned. There is a race window between the lockless check above and
this locked check where another CPU could unlink the selem.

>  		err = bpf_selem_unlink_map(selem);
>  		if (err)
>  			goto out;
>
>  		free_local_storage = bpf_selem_unlink_storage_nolock(
>  			local_storage, selem, &selem_free_list);
>  	}

[ ... ]

> +	return err;
>  }

Could this return an uninitialized value when selem_linked_to_storage()
returns false? The commit message says "always succeeds and returns 0"
but err is not initialized to 0.

I note that this appears to be addressed by a later commit in this
series (615477732ee8 "bpf: Change local_storage->lock and b->lock to
rqspinlock"), where the lock acquisition assigns err before the
condition check. Should err be initialized to 0 here for correctness
in the intermediate state?



AI reviewed your patch. Please fix the bug or reply by email explaining why it is not a bug.
See: https://github.com/kernel-patches/vmtest/blob/master/ci/claude/README.md

In-Reply-To-Subject: bpf: Convert bpf_selem_unlink to failable
CI run summary: https://github.com/kernel-patches/bpf/actions/runs/20346662777

@kernel-patches-review-bot

commit e94f5418920de11eb2ecc2ac9ba09afad1770cd4
Author: Amery Hung <ameryhung@gmail.com>

bpf: Convert bpf_selem_unlink_map to failable

This commit prepares for changing bpf_local_storage_map_bucket::lock to
rqspinlock by converting bpf_selem_unlink_map() to return int. It also
introduces _nolock variants and modifies bpf_local_storage_update() to
hold bucket locks during the 3-step update operation.

> diff --git a/kernel/bpf/bpf_local_storage.c b/kernel/bpf/bpf_local_storage.c
> index e2fe6c32822b..4e3f227fd634 100644
> --- a/kernel/bpf/bpf_local_storage.c
> +++ b/kernel/bpf/bpf_local_storage.c

[ ... ]

> @@ -488,9 +516,10 @@ bpf_local_storage_update(void *owner, struct bpf_local_storage_map *smap,
>  {
>  	struct bpf_local_storage_data *old_sdata = NULL;
>  	struct bpf_local_storage_elem *alloc_selem, *selem = NULL;
> +	struct bpf_local_storage_map_bucket *b, *old_b = NULL;
> +	unsigned long flags, b_flags, old_b_flags;
>  	struct bpf_local_storage *local_storage;
>  	HLIST_HEAD(old_selem_free_list);
> -	unsigned long flags;
>  	int err;

[ ... ]

> @@ -574,20 +603,37 @@ bpf_local_storage_update(void *owner, struct bpf_local_storage_map *smap,
>  		goto unlock;
>  	}
>
> +	b = select_bucket(smap, selem);
> +
> +	if (old_sdata) {
> +		old_b = select_bucket(smap, SELEM(old_sdata));
> +		old_b = old_b == b ? NULL : old_b;
> +	}
> +
> +	raw_spin_lock_irqsave(&b->lock, b_flags);
> +
> +	if (old_b)
> +		raw_spin_lock_irqsave(&old_b->lock, old_b_flags);
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Could this introduce an ABBA deadlock between two bucket locks? If two
threads concurrently update storage on the same map with different owners,
their new and old selems could hash to different buckets with reversed
order:

  Thread 1: new_selem -> bucket_A, old_selem -> bucket_B
            acquires b->lock (A), then waits for old_b->lock (B)

  Thread 2: new_selem -> bucket_B, old_selem -> bucket_A
            acquires b->lock (B), then waits for old_b->lock (A)

Since the locks are regular raw_spinlocks (not rqspinlocks yet), they
will block indefinitely in this scenario.

Note: I see that commit 615477732ee8 ("bpf: Change local_storage->lock
and b->lock to rqspinlock") later in this series changes to rqspinlock
which has deadlock detection and can return -EDEADLK instead of blocking.
However, in the current commit the locks are still raw_spinlocks.

> +
>  	alloc_selem = NULL;
>  	/* First, link the new selem to the map */
> -	bpf_selem_link_map(smap, selem);
> +	bpf_selem_link_map_nolock(smap, selem, b);
>
>  	/* Second, link (and publish) the new selem to local_storage */
>  	bpf_selem_link_storage_nolock(local_storage, selem);
>
>  	/* Third, remove old selem, SELEM(old_sdata) */
>  	if (old_sdata) {
> -		bpf_selem_unlink_map(SELEM(old_sdata));
> +		bpf_selem_unlink_map_nolock(SELEM(old_sdata));
>  		bpf_selem_unlink_storage_nolock(local_storage, SELEM(old_sdata),
>  						&old_selem_free_list);
>  	}
>
> +	if (old_b)
> +		raw_spin_unlock_irqrestore(&old_b->lock, old_b_flags);
> +
> +	raw_spin_unlock_irqrestore(&b->lock, b_flags);
> +
>  unlock:
>  	raw_spin_unlock_irqrestore(&local_storage->lock, flags);



AI reviewed your patch. Please fix the bug or reply by email explaining why it is not a bug.
See: https://github.com/kernel-patches/vmtest/blob/master/ci/claude/README.md

In-Reply-To-Subject: bpf: Convert bpf_selem_unlink_map to failable
CI run summary: https://github.com/kernel-patches/bpf/actions/runs/20346662777

@kernel-patches-review-bot

commit 615477732ee820965d38b2c135c37bd39bb14f71
Author: Amery Hung <ameryhung@gmail.com>

bpf: Change local_storage->lock and b->lock to rqspinlock

This commit converts raw_spinlock_t to rqspinlock_t for BPF local storage
locks. The rqspinlock can fail to acquire the lock, returning an error
instead of spinning forever.

The commit message notes: "In bpf_local_storage_destroy(), WARN_ON for now.
A later patch will handle this properly."

Note: These regressions are fixed later in the series by commit 1ddcc5cdf114
("bpf: Switch to bpf_selem_unlink_lockless in bpf_local_storage_{map_free,
destroy}").

> diff --git a/kernel/bpf/bpf_local_storage.c b/kernel/bpf/bpf_local_storage.c
> index fa629a180e9e..1d21ec11c80e 100644
> --- a/kernel/bpf/bpf_local_storage.c
> +++ b/kernel/bpf/bpf_local_storage.c

[ ... ]

> @@ -382,7 +390,10 @@ int bpf_selem_unlink(struct bpf_local_storage_elem *selem, bool reuse_now)
>  	local_storage = rcu_dereference_check(selem->local_storage,
>  					      bpf_rcu_lock_held());
>
> -	raw_spin_lock_irqsave(&local_storage->lock, flags);
> +	err = raw_res_spin_lock_irqsave(&local_storage->lock, flags);
> +	if (err)
> +		return err;

This change makes bpf_selem_unlink() failable, but bpf_local_storage_map_free()
calls it in a while loop that continues iterating while hlist_first_rcu()
returns elements:

    while ((selem = hlist_entry_safe(
            rcu_dereference_raw(hlist_first_rcu(&b->list)), ...))) {
        WARN_ON(bpf_selem_unlink(selem, true));
    }

If bpf_selem_unlink() fails (returns error), the selem is not removed from
b->list, so hlist_first_rcu() returns the same selem again. Would this cause
an infinite loop printing warnings until the system becomes unresponsive?

[ ... ]

> @@ -719,7 +743,7 @@ void bpf_local_storage_destroy(struct bpf_local_storage *local_storage)
>  	 * when unlinking elem from the local_storage->list and
>  	 * the map's bucket->list.
>  	 */
> -	raw_spin_lock_irqsave(&local_storage->lock, flags);
> +	WARN_ON(raw_res_spin_lock_irqsave(&local_storage->lock, flags));
>  	hlist_for_each_entry_safe(selem, n, &local_storage->list, snode) {
>  		/* Always unlink from map before unlinking from
>  		 * local_storage.
>  		 */
> -		bpf_selem_unlink_map(selem);
> +		WARN_ON(bpf_selem_unlink_map(selem));

If raw_res_spin_lock_irqsave() fails, WARN_ON() prints a warning but does
not prevent execution from continuing. The code then proceeds to iterate
local_storage->list without holding local_storage->lock, and eventually
calls raw_res_spin_unlock_irqrestore() at the end without having acquired
the lock. Does this cause issues with the rqspinlock held-lock tracking,
which unconditionally decrements its counter in res_spin_unlock()?

Additionally, if bpf_selem_unlink_map() fails, the selem remains linked to
the map's bucket list, but execution continues and
bpf_selem_unlink_storage_nolock() removes it from local_storage->list and
adds it to the free list. When the selem is later freed, would the map's
bucket still hold a dangling reference to it?

> @@ -734,7 +758,7 @@ void bpf_local_storage_destroy(struct bpf_local_storage *local_storage)
>  		free_storage = bpf_selem_unlink_storage_nolock(
>  			local_storage, selem, &free_selem_list);
>  	}
> -	raw_spin_unlock_irqrestore(&local_storage->lock, flags);
> +	raw_res_spin_unlock_irqrestore(&local_storage->lock, flags);

[ ... ]



AI reviewed your patch. Please fix the bug or reply by email explaining why it is not a bug.
See: https://github.com/kernel-patches/vmtest/blob/master/ci/claude/README.md

In-Reply-To-Subject: bpf: Change local_storage->lock and b->lock to rqspinlock
CI run summary: https://github.com/kernel-patches/bpf/actions/runs/20346662777

@kernel-patches-daemon-bpf
Author

Upstream branch: 3d60306
series: https://patchwork.kernel.org/project/netdevbpf/list/?series=1034704
version: 3

@kernel-patches-daemon-bpf
Author

Upstream branch: d2749ae
series: https://patchwork.kernel.org/project/netdevbpf/list/?series=1034704
version: 3

To prepare for changing bpf_local_storage_map_bucket::lock to rqspinlock,
convert bpf_selem_unlink_map() to failable. It still always succeeds and
returns 0 for now.

Since some operations updating local storage cannot fail in the middle,
open-code bpf_selem_unlink_map() to take the b->lock before the
operation. There are two such locations:

- bpf_local_storage_alloc()

  The first selem will be unlinked from smap if the cmpxchg of
  owner_storage_ptr fails, and that unlink should not fail. Therefore,
  hold b->lock when linking until allocation completes. Helpers that
  assume b->lock is held by callers are introduced:
  bpf_selem_link_map_nolock() and bpf_selem_unlink_map_nolock().

- bpf_local_storage_update()

  The three step update process: link_map(new_selem),
  link_storage(new_selem), and unlink_map(old_selem) should not fail in
  the middle.

In bpf_selem_unlink(), bpf_selem_unlink_map() and
bpf_selem_unlink_storage() should either all succeed or fail as a whole
instead of failing in the middle. So, return if unlink_map() failed.

In bpf_local_storage_destroy(), since it cannot deadlock with itself or
with bpf_local_storage_map_free(), with which it might be racing, retry
if bpf_selem_unlink_map() fails due to rqspinlock returning an error.

Signed-off-by: Amery Hung <ameryhung@gmail.com>
To prepare for changing bpf_local_storage_map_bucket::lock to rqspinlock,
convert bpf_selem_link_map() to failable. It still always succeeds and
returns 0 until the change happens. No functional change.

__must_check is added to the function declaration locally to make sure
all the callers are accounted for during the conversion.

Signed-off-by: Amery Hung <ameryhung@gmail.com>
To prepare for changing bpf_local_storage::lock to rqspinlock, open code
bpf_selem_unlink_storage() in the only caller, bpf_selem_unlink(), since
unlink_map and unlink_storage must be done together after all the
necessary locks are acquired.

Signed-off-by: Amery Hung <ameryhung@gmail.com>
To prepare changing both bpf_local_storage_map_bucket::lock and
bpf_local_storage::lock to rqspinlock, convert bpf_selem_unlink() to
failable. It still always succeeds and returns 0 until the change
happens. No functional change.

For bpf_local_storage_map_free(), WARN_ON() for now as no real error
will happen until we switch to rqspinlock.

__must_check is added to the function declaration locally to make sure
all callers are accounted for during the conversion.

Signed-off-by: Amery Hung <ameryhung@gmail.com>
Change bpf_local_storage::lock and bpf_local_storage_map_bucket::lock
from raw_spinlock_t to rqspinlock_t.

Finally, propagate errors from raw_res_spin_lock_irqsave() to syscall
return or BPF helper return.

In bpf_local_storage_destroy(), WARN_ON for now. A later patch will
handle this properly.

For __bpf_local_storage_map_cache(), instead of handling the error,
simply skip updating the cache.

Signed-off-by: Amery Hung <ameryhung@gmail.com>
The percpu counter in task local storage is no longer needed as the
underlying bpf_local_storage can now handle deadlock with the help of
rqspinlock. Remove the percpu counter and related migrate_{disable,
enable}.

Since the percpu counter is removed, merge back bpf_task_storage_get()
and bpf_task_storage_get_recur(). This will allow the bpf syscalls and
helpers to run concurrently on the same CPU, removing the spurious
-EBUSY error. bpf_task_storage_get(..., F_CREATE) will now always
succeed given enough free memory, unless called recursively.

Signed-off-by: Amery Hung <ameryhung@gmail.com>
The percpu counter in cgroup local storage is no longer needed as the
underlying bpf_local_storage can now handle deadlock with the help of
rqspinlock. Remove the percpu counter and related migrate_{disable,
enable}.

Signed-off-by: Amery Hung <ameryhung@gmail.com>
Percpu locks have been removed from cgroup and task local storage. Now
that no local storage uses a percpu variable as a lock to prevent
recursion, there is no need to pass one to bpf_local_storage_map_free().
Remove the argument from the function.

Signed-off-by: Amery Hung <ameryhung@gmail.com>
A later patch will introduce bpf_selem_unlink_lockless() to handle
rqspinlock errors. bpf_selem_unlink_lockless() will allow an selem
to be partially unlinked from map or local storage. Therefore,
bpf_selem_free() needs to be decoupled from map and local storage
as SDATA(selem)->smap or selem->local_storage may be NULL.
Decoupling from local storage was already done when local storage
migrated from the BPF memory allocator to kmalloc_nolock(). This patch
prepares to decouple from the map.

Currently, map is still needed in bpf_selem_free() to:

  1. Uncharge memory
    a. map->ops->map_local_storage_uncharge
    b. map->elem_size
  2. Infer how memory should be freed
    a. map->use_kmalloc_nolock
  3. Free special fields
    a. map->record

The dependency of 1.a will be addressed by a later patch by returning
the amount of memory to uncharge directly to the owner who calls
bpf_local_storage_destroy().

The dependency of 3.a will be addressed by a later patch by freeing
special fields under b->lock, when the map is still alive.

This patch handles 1.b and 2.a by simply saving the information in
bpf_local_storage_elem.

Signed-off-by: Amery Hung <ameryhung@gmail.com>
Introduce bpf_selem_unlink_lockless() to properly handle errors returned
from rqspinlock in bpf_local_storage_map_free() and
bpf_local_storage_destroy(), where the operation must succeed.

The idea of bpf_selem_unlink_lockless() is to allow an selem to be
partially unlinked and to use a refcount to determine when and by whom
the selem can be freed. An selem is initially fully linked to a map and
a local storage, and therefore selem->link_cnt is set to 2. Under normal
circumstances, bpf_selem_unlink_lockless() will be able to grab the
locks and unlink an selem from the map and the local storage in
sequence, just like bpf_selem_unlink(), and then add it to a local
to-free list provided by the caller. However, if any of the lock
attempts fails, it will only clear SDATA(selem)->smap or
selem->local_storage, depending on the caller, and decrement link_cnt to
signal that the corresponding data structure holding a reference to the
selem is gone. Then, only when both map and local storage are gone can
an selem be freed, by the last caller that turns link_cnt to 0.

To make sure bpf_obj_free_fields() is done only once and when map is
still present, it is called when unlinking an selem from b->list under
b->lock.

To make sure uncharging memory is only done once and when owner is still
present, only unlink selem from local_storage->list in
bpf_local_storage_destroy() and return the amount of memory to uncharge
to the caller (i.e., owner) since the map associated with an selem may
already be gone and map->ops->map_local_storage_uncharge can no longer
be referenced.

Finally, accesses of selem, SDATA(selem)->smap and selem->local_storage
are racy. Callers will protect these fields with RCU.

Co-developed-by: Martin KaFai Lau <martin.lau@kernel.org>
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
Signed-off-by: Amery Hung <ameryhung@gmail.com>
…ee, destroy}

Take care of rqspinlock error in bpf_local_storage_{map_free, destroy}()
properly by switching to bpf_selem_unlink_lockless().

Pass reuse_now == false when calling bpf_selem_free_list() since both
callers iterate lists of selem without lock. An selem can only be freed
after an RCU grace period.

Similarly, SDATA(selem)->smap and selem->local_storage need to be
protected by RCU as well, since one caller can update these fields
while the other reads them at the same time. Pass reuse_now == false
when calling bpf_local_storage_free(). The local storage map is
already protected as bpf_local_storage_map_free() waits for an RCU grace
period after iterating b->list and before freeing itself.

Co-developed-by: Martin KaFai Lau <martin.lau@kernel.org>
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
Signed-off-by: Amery Hung <ameryhung@gmail.com>
Check sk_omem_alloc when the caller of bpf_local_storage_destroy()
returns. bpf_local_storage_destroy() now returns the amount of memory to
uncharge to the caller instead of uncharging it directly. Therefore, in
sk_storage_omem_uncharge, check sk_omem_alloc when bpf_sk_storage_free()
returns instead of when bpf_local_storage_destroy() returns.

Signed-off-by: Amery Hung <ameryhung@gmail.com>
Update the expected result of the selftest as the recursion handling of
task local storage syscalls and helpers has been relaxed. Now that the
percpu counter is removed, the task local storage helpers
bpf_task_storage_get() and bpf_task_storage_delete() can run on the same
CPU at the same time unless they would cause a deadlock.

Note that since there is no percpu counter preventing recursion in
task local storage helpers, bpf_trampoline now catches the recursion
of on_update as reported by recursion_misses.

on_enter: tp_btf/sys_enter
on_update: fentry/bpf_local_storage_update

           Old behavior                         New behavior
           ____________                         ____________
on_enter                             on_enter
  bpf_task_storage_get(&map_a)         bpf_task_storage_get(&map_a)
    bpf_task_storage_trylock succeed     bpf_local_storage_update(&map_a)
    bpf_local_storage_update(&map_a)

    on_update                            on_update
      bpf_task_storage_get(&map_a)         bpf_task_storage_get(&map_a)
        bpf_task_storage_trylock fail        on_update::misses++ (1)
        return NULL                        create and return map_a::ptr

                                           map_a::ptr += 1 (1)

                                           bpf_task_storage_delete(&map_a)
                                             return 0

      bpf_task_storage_get(&map_b)         bpf_task_storage_get(&map_b)
        bpf_task_storage_trylock fail        on_update::misses++ (2)
        return NULL                        create and return map_b::ptr

                                           map_b::ptr += 1 (1)

    create and return map_a::ptr         create and return map_a::ptr
  map_a::ptr = 200                     map_a::ptr = 200

  bpf_task_storage_get(&map_b)         bpf_task_storage_get(&map_b)
    bpf_task_storage_trylock succeed     lockless lookup succeed
    bpf_local_storage_update(&map_b)     return map_b::ptr

    on_update
      bpf_task_storage_get(&map_a)
        bpf_task_storage_trylock fail
        lockless lookup succeed
        return map_a::ptr

      map_a::ptr += 1 (201)

      bpf_task_storage_delete(&map_a)
        bpf_task_storage_trylock fail
        return -EBUSY
      nr_del_errs++ (1)

      bpf_task_storage_get(&map_b)
        bpf_task_storage_trylock fail
        return NULL

    create and return ptr

  map_b::ptr = 100

Expected result:

map_a::ptr = 201                          map_a::ptr = 200
map_b::ptr = 100                          map_b::ptr = 1
nr_del_err = 1                            nr_del_err = 0
on_update::recursion_misses = 0           on_update::recursion_misses = 2
on_enter::recursion_misses = 0           on_enter::recursion_misses = 0

Signed-off-by: Amery Hung <ameryhung@gmail.com>
@kernel-patches-daemon-bpf
Author

Upstream branch: f785a31
series: https://patchwork.kernel.org/project/netdevbpf/list/?series=1034704
version: 3

Adjust the error code we are checking against, as bpf_task_storage_get()
now returns -EDEADLK or -ETIMEDOUT when a deadlock happens.

Signed-off-by: Amery Hung <ameryhung@gmail.com>
Remove a test in test_maps that checks whether updating the percpu
counter in the task local storage map is preemption safe, as the percpu
counter is now removed.

Signed-off-by: Amery Hung <ameryhung@gmail.com>
bpf_cgrp_storage_busy has been removed. Use bpf_bprintf_nest_level
instead. This percpu variable is also in the bpf subsystem so that
if it is removed in the future, BPF-CI will catch this type of CI-
breaking change.

Signed-off-by: Amery Hung <ameryhung@gmail.com>