Commit 3e4cdad

Sebastian Andrzej Siewior authored and akpm00 committed
ucount: replace get_ucounts_or_wrap() with atomic_inc_not_zero()
get_ucounts_or_wrap() increments the counter and if the counter is negative then it decrements it again in order to reset the previous increment. This statement can be replaced with atomic_inc_not_zero() to only increment the counter if it is not yet 0. This simplifies the get function because the put (if the get failed) can be removed. atomic_inc_not_zero() is implement as a cmpxchg() loop which can be repeated several times if another get/put is performed in parallel. This will be optimized later. Increment the reference counter only if not yet dropped to zero. Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Sebastian Andrzej Siewior <[email protected]> Reviewed-by: Paul E. McKenney <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Boqun Feng <[email protected]> Cc: Joel Fernandes <[email protected]> Cc: Josh Triplett <[email protected]> Cc: Lai jiangshan <[email protected]> Cc: Mathieu Desnoyers <[email protected]> Cc: Mengen Sun <[email protected]> Cc: Steven Rostedt <[email protected]> Cc: "Uladzislau Rezki (Sony)" <[email protected]> Cc: YueHong Wu <[email protected]> Cc: Zqiang <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
1 parent 35ef5f4 commit 3e4cdad

File tree

1 file changed: +6 −18 lines changed

kernel/ucount.c (+6 −18)

@@ -146,25 +146,16 @@ static void hlist_add_ucounts(struct ucounts *ucounts)
 	spin_unlock_irq(&ucounts_lock);
 }
 
-static inline bool get_ucounts_or_wrap(struct ucounts *ucounts)
-{
-	/* Returns true on a successful get, false if the count wraps. */
-	return !atomic_add_negative(1, &ucounts->count);
-}
-
 struct ucounts *get_ucounts(struct ucounts *ucounts)
 {
-	if (!get_ucounts_or_wrap(ucounts)) {
-		put_ucounts(ucounts);
-		ucounts = NULL;
-	}
-	return ucounts;
+	if (atomic_inc_not_zero(&ucounts->count))
+		return ucounts;
+	return NULL;
 }
 
 struct ucounts *alloc_ucounts(struct user_namespace *ns, kuid_t uid)
 {
 	struct hlist_head *hashent = ucounts_hashentry(ns, uid);
-	bool wrapped;
 	struct ucounts *ucounts, *new = NULL;
 
 	spin_lock_irq(&ucounts_lock);
@@ -189,14 +180,11 @@ struct ucounts *alloc_ucounts(struct user_namespace *ns, kuid_t uid)
 			return new;
 		}
 	}
-
-	wrapped = !get_ucounts_or_wrap(ucounts);
+	if (!atomic_inc_not_zero(&ucounts->count))
+		ucounts = NULL;
 	spin_unlock_irq(&ucounts_lock);
 	kfree(new);
-	if (wrapped) {
-		put_ucounts(ucounts);
-		return NULL;
-	}
+
 	return ucounts;
 }