Lines Matching full:we
76 * The decoder _should_ fail nicely if we pass it a short buffer. in mpx_insn_decode()
77 * But, let's not depend on that implementation detail. If we in mpx_insn_decode()
85 * copy_from_user() tries to get as many bytes as we could see in in mpx_insn_decode()
86 * the largest possible instruction. If the instruction we are in mpx_insn_decode()
87 * after is shorter than that _and_ we attempt to copy from in mpx_insn_decode()
88 * something unreadable, we might get a short read. This is OK in mpx_insn_decode()
90 * instruction. Check to see if we got a partial instruction. in mpx_insn_decode()
97 * We only _really_ need to decode bndcl/bndcn/bndcu in mpx_insn_decode()
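The short-read handling the matched comments describe can be sketched in user-space C. The helper name and MAX_INSN_SIZE constant below are illustrative assumptions, not the kernel's actual identifiers; the point is only the check that the bytes actually copied cover the whole decoded instruction:

```c
#include <assert.h>
#include <stddef.h>

#define MAX_INSN_SIZE 15 /* longest possible x86 instruction */

/*
 * Sketch: we try to copy up to MAX_INSN_SIZE bytes starting at the
 * faulting IP.  A short read from copy_from_user() is fine as long
 * as the bytes we did get cover the entire decoded instruction;
 * otherwise we decoded a partial instruction and must bail.
 */
static int insn_fully_copied(size_t nr_copied, size_t insn_length)
{
	return insn_length <= nr_copied;
}
```

A 4-byte instruction decoded from a 15-byte copy is fine; the same instruction decoded from a 2-byte short read is not.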
117 * Userspace could have, by the time we get here, written
118 * anything it wants into the instructions. We cannot
135 * We know at this point that we are only dealing with in mpx_fault_info()
168 * We were not able to extract an address from the instruction, in mpx_fault_info()
191 * only accessible if we first do an xsave. in mpx_get_bounds_dir()
226 * directory here means that we do not have to do xsave in the in mpx_enable_management()
227 * unmap path; we can just use mm->context.bd_addr instead. in mpx_enable_management()
269 * the pointer that we pass to it to figure out how much in mpx_cmpxchg_bd_entry()
270 * data to cmpxchg. We have to be careful here not to in mpx_cmpxchg_bd_entry()
271 * pass a pointer to a 64-bit data type when we only want in mpx_cmpxchg_bd_entry()
318 * we may race with another CPU instantiating the same table. in allocate_bt()
322 * This can fault, but that's OK because we do not hold in allocate_bt()
333 * for faults, *not* if the cmpxchg itself fails. Now we must in allocate_bt()
337 * We expected an empty 'expected_old_val', but instead found in allocate_bt()
338 * an apparently valid entry. Assume we raced with another in allocate_bt()
346 * We found a non-empty bd_entry but it did not have the in allocate_bt()
366 * the directory, a #BR is generated and we get here in order to
388 * entry via BNDSTATUS, so we don't have to go look it up. in do_mpx_bt_fault()
392 * Make sure the directory entry is within where we think in do_mpx_bt_fault()
427 * 0 means we failed to fault in and get anything, in mpx_resolve_fault()
452 * are ignored by the hardware, so we do the same. in mpx_bd_entry_to_bt_addr()
463 * We only want to do a 4-byte get_user() on 32-bit. Otherwise,
464 * we might run off the end of the bounds table if we are on
512 * If we could not resolve the fault, consider it in get_bt_addr()
525 * *OR* be completely empty. If we see a !valid entry *and* some in get_bt_addr()
526 * data in the address field, we know something is wrong. This in get_bt_addr()
532 * Do we have a completely zeroed bt entry? That is OK. It in get_bt_addr()
573 * We know the size of the table into which we are in mpx_get_bt_entry_offset_bytes()
574 * indexing, and we have eliminated all the low bits in mpx_get_bt_entry_offset_bytes()
577 * Mask out all the high bits which we do not need in mpx_get_bt_entry_offset_bytes()
584 * We now have an entry offset in terms of *entries* in in mpx_get_bt_entry_offset_bytes()
585 * the table. We need to scale it back up to bytes. in mpx_get_bt_entry_offset_bytes()
595 * Note, we need a long long because 4GB doesn't fit in
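The 4GB remark is easy to demonstrate: on an ILP32 build `unsigned long` is 32 bits, so a 2^32-byte quantity truncates, while `unsigned long long` is guaranteed at least 64 bits by the C standard:

```c
#include <assert.h>

/*
 * 4GB == 1ULL << 32.  This value does not fit in a 32-bit
 * unsigned long, which is why the kernel comment insists on
 * a long long for the table-size arithmetic.
 */
static unsigned long long four_gb(void)
{
	return 1ULL << 32;
}
```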
633 * if we 'end' on a boundary, the offset will be 0 which in zap_bt_entries_mapping()
634 * is not what we want. Back it up a byte to get the in zap_bt_entries_mapping()
635 * last bt entry. Then once we have the entry itself, in zap_bt_entries_mapping()
658 * be split. So we need to look across the entire 'start -> end' in zap_bt_entries_mapping()
665 * We followed a bounds directory entry down in zap_bt_entries_mapping()
666 * here. If we find a non-MPX VMA, that's bad, in zap_bt_entries_mapping()
687 * There are several ways to derive the bd offsets. We in mpx_get_bd_entry_offset()
689 * 1. We know the size of the virtual address space in mpx_get_bd_entry_offset()
690 * 2. We know the number of entries in a bounds table in mpx_get_bd_entry_offset()
691 * 3. We know that each entry covers a fixed amount of in mpx_get_bd_entry_offset()
693 * So, we can just divide the virtual address by the in mpx_get_bd_entry_offset()
712 * The two return calls above are exact copies. If we in mpx_get_bd_entry_offset()
714 * realize that we're doing a power-of-2 divide and use in mpx_get_bd_entry_offset()
715 * shifts. It uses a real divide. If we put them up in mpx_get_bd_entry_offset()
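The derivation in the matched lines (known virtual address space size, known entries per table, fixed coverage per entry, so offset = vaddr divided by coverage) can be written out with the 32-bit MPX layout. The constants below are my reading of the 32-bit format (a 2^20-entry directory of 4-byte entries over a 4GB address space, so each entry covers 4KB); treat them as illustrative assumptions. Because everything is a power of two, the divides can compile down to shifts, which is exactly the compiler behavior the comment discusses:

```c
#include <assert.h>
#include <stdint.h>

#define VIRT_SPACE     (1ULL << 32) /* assumed: 4GB 32-bit address space */
#define BD_NR_ENTRIES  (1ULL << 20) /* assumed: directory entry count */
#define BD_ENTRY_BYTES 4ULL         /* assumed: bytes per directory entry */

/*
 * Each directory entry covers a fixed slice of the address space,
 * so the byte offset into the directory is:
 *   (vaddr / bytes-covered-per-entry) * entry-size
 */
static uint64_t bd_entry_offset(uint64_t vaddr)
{
	uint64_t virt_per_entry = VIRT_SPACE / BD_NR_ENTRIES; /* 4KB */

	return (vaddr / virt_per_entry) * BD_ENTRY_BYTES;
}
```

For unsigned operands, `vaddr / 4096` and `vaddr >> 12` are identical; the compiler only emits the shift form when the divisor is a visible compile-time constant.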
740 * If we could not resolve the fault, consider it in unmap_entire_bt()
752 * That is OK, since we were both trying to do in unmap_entire_bt()
759 * entry. We hold mmap_sem for read or write in unmap_entire_bt()
767 * Note, we are likely being called under do_munmap() already. To in unmap_entire_bt()
781 * bounds table that we are unmapping. in try_unmap_single_bt()
789 * We already unlinked the VMAs from the mm's rbtree so 'start' in try_unmap_single_bt()
797 * Although theoretically possible, we do not allow bounds in try_unmap_single_bt()
799 * If we count them as neighbors here, we may end up with in try_unmap_single_bt()
800 * lots of tables even though we have no actual table in try_unmap_single_bt()
808 * We know 'start' and 'end' lie within an area controlled in try_unmap_single_bt()
811 * then we can "expand" the area we are unmapping to possibly in try_unmap_single_bt()
837 * We are unmapping an entire table. Either because the in try_unmap_single_bt()
862 * move it back so we only deal with a single one in mpx_unmap_tables()
901 * (start->end), we will not continue follow-up work. This in mpx_notify_unmap()
904 * helps ensure that we do not have an exploitable stack overflow. in mpx_notify_unmap()
930 * Requested len is larger than the whole area we're allowed to map in. in mpx_unmapped_area_check()