author:    Marco Elver <elver@google.com>	2024-11-04 16:43:09 +0100
committer: Peter Zijlstra <peterz@infradead.org>	2024-11-05 12:55:35 +0100
commit:    183ec5f26b2fc97a4a9871865bfe9b33c41fddb2
tree:      edd0f5c701d061b60f9df5c75df48b4a29ce9955 /rust/helpers
parent:    seqlock, treewide: Switch to non-raw seqcount_latch interface

kcsan, seqlock: Fix incorrect assumption in read_seqbegin()
During testing of the preceding changes, I noticed that in some cases,
current->kcsan_ctx.in_flat_atomic remained true until task exit. This is
obviously wrong, because _all_ accesses for the given task will be
treated as atomic, resulting in false negatives, i.e. missed data races.
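
For context, KCSAN skips race checking for any access made while the task's
flat-atomic flag is set, so a flag that is stuck true silences reporting for
the whole task. A minimal sketch of that decision follows; the helper name is
illustrative and this is not the exact kernel/kcsan/core.c code:

	/*
	 * Illustrative sketch only: while ctx->in_flat_atomic (struct kcsan_ctx,
	 * <linux/kcsan.h>) stays true, every access looks atomic to KCSAN and is
	 * never checked for data races.
	 */
	static bool access_is_atomic_sketch(struct kcsan_ctx *ctx)
	{
		if (ctx->in_flat_atomic)	/* stuck true => all accesses "atomic" */
			return true;
		return false;
	}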
Debugging led to fs/dcache.c, where we can see this usage of seqlock:
	struct dentry *d_lookup(const struct dentry *parent, const struct qstr *name)
	{
		struct dentry *dentry;
		unsigned seq;

		do {
			seq = read_seqbegin(&rename_lock);
			dentry = __d_lookup(parent, name);
			if (dentry)
				break;
		} while (read_seqretry(&rename_lock, seq));
		[...]
As can be seen, read_seqretry() is never called if dentry != NULL;
consequently, current->kcsan_ctx.in_flat_atomic will never be reset to
false by read_seqretry().
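
For reference, the pre-patch annotations in include/linux/seqlock.h looked
roughly like this (a simplified sketch, not the verbatim kernel source):

	static inline unsigned read_seqbegin(const seqlock_t *sl)
	{
		unsigned ret = read_seqcount_begin(&sl->seqcount);

		kcsan_atomic_next(0);		/* non-raw usage, assume closing read_seqretry() */
		kcsan_flat_atomic_begin();	/* sets current->kcsan_ctx.in_flat_atomic */
		return ret;
	}

	static inline unsigned read_seqretry(const seqlock_t *sl, unsigned start)
	{
		kcsan_flat_atomic_end();	/* never reached if the reader breaks out early */
		return read_seqcount_retry(&sl->seqcount, start);
	}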
Give up on the wrong "assume closing read_seqretry()" assumption, and instead
rely on the annotations already present in read_seqcount_begin()/read_seqcount_retry().
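
With that assumption dropped, the seqlock wrappers no longer bracket the read
side with flat-atomic annotations; roughly (again a sketch, not verbatim),
they reduce to:

	static inline unsigned read_seqbegin(const seqlock_t *sl)
	{
		/* read_seqcount_begin() already issues the bounded atomic-region
		 * annotation (kcsan_atomic_next(KCSAN_SEQLOCK_REGION_MAX)). */
		return read_seqcount_begin(&sl->seqcount);
	}

	static inline unsigned read_seqretry(const seqlock_t *sl, unsigned start)
	{
		/* read_seqcount_retry() already ends it via kcsan_atomic_next(0). */
		return read_seqcount_retry(&sl->seqcount, start);
	}

Because the annotation in read_seqcount_begin() is bounded, it expires on its
own even if the reader never calls read_seqretry(), so an early break out of
the retry loop can no longer leave the task in a permanently "atomic" state.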
Fixes: 88ecd153be95 ("seqlock, kcsan: Add annotations for KCSAN")
Signed-off-by: Marco Elver <elver@google.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/r/20241104161910.780003-6-elver@google.com