Lines matching "avoid" (all hits are in kernel/kcsan/core.c):
70 * will avoid:
98 * Add NUM_SLOTS-1 entries to account for overflow; this helps avoid having to
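The two hits above both concern the watchpoint array layout. Below is a minimal sketch of the idea, assuming the NUM_SLOTS/KCSAN_CHECK_ADJACENT names used by the runtime; the values and the exact declaration are assumptions, not verbatim source.

#include <linux/atomic.h>

/* Also probe one slot to the left and right of an address's own slot. */
#define KCSAN_CHECK_ADJACENT 1
#define NUM_SLOTS (1 + 2 * KCSAN_CHECK_ADJACENT)

/*
 * Sizing the array with NUM_SLOTS-1 extra trailing entries means an access
 * whose primary slot is the last "real" slot can still probe its adjacent
 * slots without any wrap-around or bounds check in the fast-path.
 */
static atomic_long_t watchpoints[CONFIG_KCSAN_NUM_WATCHPOINTS + NUM_SLOTS - 1];

Trading a few always-empty trailing entries for branch-free slot indexing is the trade-off the line-70 comment enumerates.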
105 * per-CPU counter to avoid excessive contention.
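The contention point is the skip counter decremented on every instrumented access. A sketch of the per-CPU variant, assuming the kcsan_skip name from the surrounding comments:

#include <linux/percpu.h>

/* Accesses left to skip before the next watchpoint is set up. */
static DEFINE_PER_CPU(long, kcsan_skip);

static __always_inline bool should_watch_fastpath(void)
{
	/*
	 * A per-CPU decrement touches only a local cache line; a single
	 * global atomic counter would bounce between all CPUs on every
	 * instrumented access.
	 */
	return this_cpu_dec_return(kcsan_skip) < 0;
}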
201 * In interrupts, use raw_cpu_ptr to avoid unnecessary checks that would in get_ctx()
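The choice here is raw_cpu_ptr() over this_cpu_ptr(): the latter's preemption/debug checks are redundant in interrupt context and can themselves warn inside uaccess regions. A sketch, with kcsan_cpu_ctx as an assumed name for the per-CPU context:

#include <linux/percpu.h>
#include <linux/sched.h>

static DEFINE_PER_CPU(struct kcsan_ctx, kcsan_cpu_ctx);

static __always_inline struct kcsan_ctx *get_ctx(void)
{
	/* In task context use the task's ctx; otherwise the CPU's. */
	return in_task() ? &current->kcsan_ctx : raw_cpu_ptr(&kcsan_cpu_ctx);
}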
283 * via reset_kcsan_skip() to avoid underflow. in should_watch()
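Since the fast-path only ever decrements kcsan_skip, any slow-path entry must rewrite it outright, or a long run without reaching the slow-path would underflow the counter. Continuing the per-CPU counter sketch above, and assuming a kcsan_skip_watch parameter (the optional randomization is omitted):

static long kcsan_skip_watch = CONFIG_KCSAN_SKIP_WATCH;

static __always_inline void reset_kcsan_skip(void)
{
	/* Rewrite, never decrement: this bounds the counter from below. */
	this_cpu_write(kcsan_skip, kcsan_skip_watch);
}

The hit at line 531 below is the same rule stated at the other call site: every slow-path entry resets the counter.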
416 * To avoid nested interrupts or scheduler (which share kcsan_ctx) in set_reorder_access()
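Because interrupts and the scheduler share the same kcsan_ctx, they could observe reorder_access with only some fields updated. A sketch of how the plain stores can be guarded, assuming a disable_scoped counter in the context; field names follow the comments in this listing, the details are assumed:

#include <linux/compiler.h>

static __always_inline void
set_reorder_access(struct kcsan_ctx *ctx, const volatile void *ptr,
		   size_t size, int type, unsigned long ip)
{
	struct kcsan_scoped_access *reorder_access = ctx->reorder_access;

	if (!reorder_access)
		return;

	/*
	 * Suppress scoped-access checking in this context while the fields
	 * below are in flux; barrier() keeps the compiler from moving the
	 * stores outside the guarded region.
	 */
	ctx->disable_scoped++;
	barrier();
	reorder_access->ptr  = ptr;
	reorder_access->size = size;
	reorder_access->type = type | KCSAN_ACCESS_SCOPED;
	reorder_access->ip   = ip;
	barrier();
	ctx->disable_scoped--;
}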
459 * possible -- avoid unnecessarily complex code until consumed. in kcsan_found_watchpoint()
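Keeping the window small is why consumption can be a single relaxed cmpxchg against the encoded value read earlier, with all heavier work deferred until after it succeeds. A sketch; CONSUMED_WATCHPOINT is an assumed sentinel (the real one lives in the encoding header):

#include <linux/atomic.h>

#define CONSUMED_WATCHPOINT 1	/* assumed sentinel value */

static __always_inline bool
try_consume_watchpoint(atomic_long_t *watchpoint, long encoded_watchpoint)
{
	/*
	 * Exactly one racing consumer can win this cmpxchg; only the winner
	 * goes on to report, so relaxed ordering suffices.
	 */
	return atomic_long_try_cmpxchg_relaxed(watchpoint, &encoded_watchpoint,
					       CONSUMED_WATCHPOINT);
}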
466 * The access_mask check relies on value-change comparison. To avoid in kcsan_found_watchpoint()
487 * avoid erroneously triggering reports if the context is disabled. in kcsan_found_watchpoint()
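When an access_mask is set (bit-granular ASSERT instrumentation), only changes under the mask count as a value change, which is why a disabled context must not fall through to reporting. A sketch of the masked comparison these two hits refer to (names assumed):

#include <linux/types.h>

static bool is_value_change(u64 old, u64 new, u64 access_mask)
{
	u64 diff = old ^ new;

	/*
	 * With a mask set, changes outside the masked bits are not of
	 * interest to the caller and must not trigger a report.
	 */
	if (access_mask)
		diff &= access_mask;

	return diff != 0;
}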
531 * Always reset kcsan_skip counter in slow-path to avoid underflow; see in kcsan_setup_watchpoint()
553 * therefore we need to take care of 2 cases to avoid false positives: in kcsan_setup_watchpoint()
555 * 1. Races of the reordered access with interrupts. To avoid, if in kcsan_setup_watchpoint()
557 * 2. Avoid races of scoped accesses from nested interrupts (below). in kcsan_setup_watchpoint()
563 * Avoid races of scoped accesses from nested interrupts (or scheduler). in kcsan_setup_watchpoint()
567 * To avoid, disable scoped access checking. in kcsan_setup_watchpoint()
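Case 2 boils down to suppressing scoped-access matching in this context for as long as the watchpoint is armed, so that a nested interrupt sharing the kcsan_ctx cannot set up a duplicate watchpoint for the same scoped access and report a false positive against it. A sketch of the guarded window; delay_and_check_value_change() is a hypothetical stand-in for the armed-watchpoint window:

static void watchpoint_window(struct kcsan_ctx *ctx)
{
	/* Nested interrupts share ctx: keep them out of scoped checking. */
	ctx->disable_scoped++;

	delay_and_check_value_change();	/* hypothetical: watchpoint armed here */

	ctx->disable_scoped--;
}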
722 * Avoid user_access_save in fast-path: find_watchpoint is safe without in check_access()
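find_watchpoint() only compares the encoded address against the watchpoint slots and never dereferences ptr, so the fast-path needs no user_access_save(); the save/restore pair can be deferred to the rare found-watchpoint path. A sketch of that split (helper signatures assumed):

#include <linux/uaccess.h>

static __always_inline void check_access(const volatile void *ptr, size_t size,
					 int type, unsigned long ip)
{
	long encoded;
	atomic_long_t *watchpoint =
		find_watchpoint((unsigned long)ptr, size,
				!(type & KCSAN_ACCESS_WRITE), &encoded);

	if (unlikely(watchpoint != NULL)) {
		/* Slow-path only: reporting may touch user-access state. */
		unsigned long flags = user_access_save();

		kcsan_found_watchpoint(ptr, size, type, ip, watchpoint, encoded);
		user_access_restore(flags);
	}
	/* Common case: no match, and nothing was saved or restored. */
}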
934 * in the fast-path (to avoid a READ_ONCE() and potential in kcsan_end_scoped_access()
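The trick referenced here is using the list head's ->prev pointer as an "any scoped accesses?" flag: the fast-path then tests one plain pointer instead of calling list_empty() on concurrently modified list internals, which would require a READ_ONCE() and could warn in uaccess regions. A sketch of both ends (structure assumed):

#include <linux/list.h>

static void end_scoped_access(struct kcsan_ctx *ctx,
			      struct kcsan_scoped_access *sa)
{
	list_del(&sa->list);
	if (list_empty(&ctx->scoped_accesses))
		/* Fast-path "empty" signal; see the check below. */
		ctx->scoped_accesses.prev = NULL;
}

static __always_inline void fastpath_hook(struct kcsan_ctx *ctx)
{
	/* Plain pointer test; no READ_ONCE() on list internals needed. */
	if (unlikely(ctx->scoped_accesses.prev))
		kcsan_check_scoped_accesses();
}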