
Searched full:we (Results 1 – 25 of 9368) sorted by relevance


/Linux-v5.4/arch/powerpc/mm/nohash/
tlb_low_64e.S:97 /* We need _PAGE_PRESENT and _PAGE_ACCESSED set */
99 /* We do the user/kernel test for the PID here along with the RW test
101 /* We pre-test some combination of permissions to avoid double
104 * We move the ESR:ST bit into the position of _PAGE_BAP_SW in the PTE
109 * writeable, we will take a new fault later, but that should be
112 * We also move ESR_ST in _PAGE_DIRTY position
115 * MAS1 is preset for all we need except for TID that needs to
137 * We are entered with:
185 /* Now we build the MAS:
228 /* We need to check if it was an instruction miss */
[all …]
/Linux-v5.4/drivers/md/bcache/
journal.h:9 * never spans two buckets. This means (not implemented yet) we can resize the
15 * We also keep some things in the journal header that are logically part of the
20 * rewritten when we want to move/wear level the main journal.
22 * Currently, we don't journal BTREE_REPLACE operations - this will hopefully be
25 * moving gc we work around it by flushing the btree to disk before updating the
35 * We track this by maintaining a refcount for every open journal entry, in a
38 * zero, we pop it off - thus, the size of the fifo tells us the number of open
41 * We take a refcount on a journal entry when we add some keys to a journal
42 * entry that we're going to insert (held by struct btree_op), and then when we
43 * insert those keys into the btree the btree write we're setting up takes a
[all …]
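The bcache journal.h excerpt above describes tracking open journal entries with a FIFO of refcounts: an entry is popped off the front only once its refcount drops to zero, so the FIFO length equals the number of open entries. A minimal userspace sketch of that scheme (all names and the fixed FIFO size are illustrative, not bcache's):

```c
#include <stddef.h>

#define FIFO_SIZE 8

struct journal_entry { int refcount; };

struct journal_fifo {
	struct journal_entry entries[FIFO_SIZE];
	size_t front, back;	/* back is one past the newest open entry */
};

/* The FIFO length tells us the number of open journal entries. */
static size_t fifo_open_count(const struct journal_fifo *f)
{
	return f->back - f->front;
}

/* Take a reference when keys are added to an open entry. */
static void journal_get(struct journal_fifo *f, size_t idx)
{
	f->entries[idx % FIFO_SIZE].refcount++;
}

/* Drop a reference; pop fully released entries off the front. */
static void journal_put(struct journal_fifo *f, size_t idx)
{
	f->entries[idx % FIFO_SIZE].refcount--;
	while (f->front < f->back &&
	       f->entries[f->front % FIFO_SIZE].refcount == 0)
		f->front++;
}
```

Note that, as in the excerpt, entries are only reclaimed in order from the front: a later entry reaching refcount zero stays open until everything ahead of it has drained.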
bset.h:17 * We use two different functions for validating bkeys, bch_ptr_invalid and
27 * them on disk, just unnecessary work - so we filter them out when resorting
30 * We can't filter out stale keys when we're resorting, because garbage
32 * unless we're rewriting the btree node those stale keys still exist on disk.
34 * We also implement functions here for removing some number of sectors from the
44 * There could be many of them on disk, but we never allow there to be more than
45 * 4 in memory - we lazily resort as needed.
47 * We implement code here for creating and maintaining auxiliary search trees
48 * (described below) for searching an individual bset, and on top of that we
62 * Since keys are variable length, we can't use a binary search on a bset - we
[all …]
/Linux-v5.4/net/ipv4/
tcp_vegas.c:15 * o We do not change the loss detection or recovery mechanisms of
19 * only every-other RTT during slow start, we increase during
22 * we use the rate at which ACKs come back as the "actual"
24 * o To speed convergence to the right rate, we set the cwnd
25 * to achieve the right ("actual") rate when we exit slow start.
26 * o To filter out the noise caused by delayed ACKs, we use the
55 /* There are several situations when we must "re-start" Vegas:
60 * o when we send a packet and there is no outstanding
63 * In these circumstances we cannot do a Vegas calculation at the
64 * end of the first RTT, because any calculation we do is using
[all …]
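The tcp_vegas.c excerpt alludes to the published Vegas congestion-avoidance rule: estimate how many extra segments are queued in the network from the gap between the base RTT and the measured RTT, and nudge cwnd to keep that number between two thresholds. A hedged sketch of that rule in segments (this illustrates the textbook algorithm, not the kernel's implementation; `alpha`/`beta` values are the commonly cited defaults):

```c
/* Returns the adjusted congestion window, in segments. */
unsigned int vegas_update_cwnd(unsigned int cwnd,
			       unsigned int base_rtt_us,
			       unsigned int rtt_us)
{
	const unsigned int alpha = 2, beta = 4;
	unsigned int diff;

	if (rtt_us <= base_rtt_us)
		return cwnd + 1;	/* no queueing observed: grow */

	/* diff = cwnd * (rtt - base_rtt) / rtt: extra segments queued */
	diff = cwnd * (rtt_us - base_rtt_us) / rtt_us;

	if (diff < alpha)
		return cwnd + 1;	/* too little queued: grow linearly */
	if (diff > beta)
		return cwnd - 1;	/* too much queued: back off */
	return cwnd;			/* in the sweet spot: hold steady */
}
```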
/Linux-v5.4/fs/xfs/
xfs_log_cil.c:24 * recover, so we don't allow failure here. Also, we allocate in a context that
25 * we don't want to be issuing transactions from, so we need to tell the
28 * We don't reserve any space for the ticket - we are going to steal whatever
29 * space we require from transactions as they commit. To ensure we reserve all
30 * the space required, we need to set the current reservation of the ticket to
31 * zero so that we know to steal the initial transaction overhead from the
44 * set the current reservation to zero so we know to steal the basic in xlog_cil_ticket_alloc()
52 * After the first stage of log recovery is done, we know where the head and
53 * tail of the log are. We need this log initialisation done before we can
56 * Here we allocate a log ticket to track space usage during a CIL push. This
[all …]
xfs_log_priv.h:67 * By covering, we mean changing the h_tail_lsn in the last on-disk
76 * might include space beyond the EOF. So if we just push the EOF a
84 * system is idle. We need two dummy transactions because the h_tail_lsn
96 * we are done covering previous transactions.
97 * NEED -- logging has occurred and we need a dummy transaction
99 * DONE -- we were in the NEED state and have committed a dummy
101 * NEED2 -- we detected that a dummy transaction has gone to the
103 * DONE2 -- we committed a dummy transaction when in the NEED2 state.
105 * There are two places where we switch states:
107 * 1.) In xfs_sync, when we detect an idle log and are in NEED or NEED2.
[all …]
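The xfs_log_priv.h excerpt describes log covering as a small state machine driven from two places: committing a dummy transaction when the log is idle, and observing a dummy transaction reach disk. A sketch of that machine as we read the excerpt (state names follow the comment; the transition functions and an initial IDLE state are our interpretation, not the kernel code):

```c
enum cover_state {
	COVER_IDLE,	/* nothing to cover */
	COVER_NEED,	/* logging has occurred; first dummy needed */
	COVER_DONE,	/* first dummy committed */
	COVER_NEED2,	/* first dummy reached disk; second needed */
	COVER_DONE2,	/* second dummy committed */
};

/* Idle-log detection: commit a dummy transaction if one is needed. */
enum cover_state cover_on_idle(enum cover_state s)
{
	switch (s) {
	case COVER_NEED:  return COVER_DONE;	/* committed first dummy */
	case COVER_NEED2: return COVER_DONE2;	/* committed second dummy */
	default:          return s;
	}
}

/* A committed dummy transaction has gone to the on-disk log. */
enum cover_state cover_on_dummy_on_disk(enum cover_state s)
{
	switch (s) {
	case COVER_DONE:  return COVER_NEED2;	/* need the second dummy */
	case COVER_DONE2: return COVER_IDLE;	/* covering complete */
	default:          return s;
	}
}
```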
/Linux-v5.4/fs/btrfs/
delalloc-space.c:31 /* Make sure we have enough space to handle the data first */ in btrfs_alloc_data_chunk_ondemand()
39 * If we don't have enough free bytes in this space then we need in btrfs_alloc_data_chunk_ondemand()
50 * It is ugly that we don't call nolock join in btrfs_alloc_data_chunk_ondemand()
52 * But it is safe because we only do the data space in btrfs_alloc_data_chunk_ondemand()
79 * If we don't have enough pinned space to deal with this in btrfs_alloc_data_chunk_ondemand()
113 * more space is released. We don't need to in btrfs_alloc_data_chunk_ondemand()
163 * Called if we need to clear a data reservation for this inode
167 * which we can't sleep and is sure it won't affect qgroup reserved space.
188 * Called if we need to clear a data reservation for this inode
210 * @inode - the inode we need to release from.
[all …]
space-info.c:23 * after adding space to the filesystem, we need to clear the full flags
186 * If we have dup, raid1 or raid10 then only half of the free in can_overcommit()
188 * doesn't include the parity drive, so we don't have to in can_overcommit()
195 * If we aren't flushing all things, let us overcommit up to in can_overcommit()
196 * 1/2th of the space. If we can flush, don't let us overcommit in can_overcommit()
210 * This is for space we already have accounted in space_info->bytes_may_use, so
211 * basically when we're returning space from block_rsv's.
325 * We needn't worry the filesystem going from r/w to r/o though in btrfs_writeback_inodes_sb_nr()
326 * we don't acquire ->s_umount mutex, because the filesystem in btrfs_writeback_inodes_sb_nr()
368 /* Calc the number of the pages we need flush for space reservation */ in shrink_delalloc()
[all …]
/Linux-v5.4/drivers/misc/vmw_vmci/
vmci_route.c:33 * which comes from the VMX, so we know it is coming from a in vmci_route()
36 * To avoid inconsistencies, test these once. We will test in vmci_route()
37 * them again when we do the actual send to ensure that we do in vmci_route()
49 * If this message already came from a guest then we in vmci_route()
57 * We must be acting as a guest in order to send to in vmci_route()
63 /* And we cannot send if the source is the host context. */ in vmci_route()
71 * then they probably mean ANY, in which case we in vmci_route()
87 * If it is not from a guest but we are acting as a in vmci_route()
88 * guest, then we need to send it down to the host. in vmci_route()
89 * Note that if we are also acting as a host then this in vmci_route()
[all …]
/Linux-v5.4/Documentation/filesystems/
xfs-delayed-logging-design.txt:25 That is, if we have a sequence of changes A through to F, and the object was
26 written to disk after change D, we would see in the log the following series
91 relogging technique XFS uses is that we can be relogging changed objects
92 multiple times before they are committed to disk in the log buffers. If we
98 contains all the changes from the previous changes. In other words, we have one
100 wasting space. When we are doing repeated operations on the same set of
103 log would greatly reduce the amount of metadata we write to the log, and this
110 formatting the changes in a transaction to the log buffer. Hence we cannot avoid
113 Delayed logging is the name we've given to keeping and tracking transactional
163 changes to the log buffers, we need to ensure that the object we are formatting
[all …]
/Linux-v5.4/arch/powerpc/kernel/
machine_kexec_64.c:45 * Since we use the kernel fault handlers and paging code to in default_machine_kexec_prepare()
46 * handle the virtual mode, we must make sure no destination in default_machine_kexec_prepare()
53 /* We also should not overwrite the tce tables */ in default_machine_kexec_prepare()
83 * We rely on kexec_load to create a list that properly in copy_segments()
85 * We will still crash if the list is wrong, but at least in copy_segments()
117 * After this call we may not use anything allocated in dynamic in kexec_copy_flush()
125 * we need to clear the icache for all dest pages sometime, in kexec_copy_flush()
142 mb(); /* make sure our irqs are disabled before we say they are */ in kexec_smp_down()
149 * Now every CPU has IRQs off, we can clear out any pending in kexec_smp_down()
165 /* Make sure each CPU has at least made it to the state we need. in kexec_prepare_cpus_wait()
[all …]
/Linux-v5.4/fs/xfs/scrub/
bitmap.c:90 * @bitmap as the list of blocks that are not accounted for, which we assume
120 * Now that we've sorted both lists, we iterate bitmap once, rolling in xfs_bitmap_disunion()
121 * forward through sub and/or bitmap as necessary until we find an in xfs_bitmap_disunion()
122 * overlap or reach the end of either list. We do not reset lp to the in xfs_bitmap_disunion()
123 * head of bitmap nor do we reset sub_br to the head of sub. The in xfs_bitmap_disunion()
124 * list traversal is similar to merge sort, but we're deleting in xfs_bitmap_disunion()
125 * instead. In this manner we avoid O(n^2) operations. in xfs_bitmap_disunion()
134 * Advance sub_br and/or br until we find a pair that in xfs_bitmap_disunion()
135 * intersect or we run out of extents. in xfs_bitmap_disunion()
147 /* trim sub_br to fit the extent we have */ in xfs_bitmap_disunion()
[all …]
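The bitmap.c excerpt describes subtracting one sorted extent list from another in a single merge-sort-like pass, avoiding O(n^2) rescans. An illustrative userspace version over simple [start, start+len) extents (names and the flat-array representation are ours; the kernel operates on its own list structures):

```c
#include <stddef.h>

struct extent { long start, len; };

/*
 * Write (bm minus sub) into out[]; both inputs are sorted and
 * non-overlapping. Returns the number of output extents. As in the
 * excerpt, we roll forward through sub without resetting it, so the
 * traversal stays near-linear.
 */
size_t extent_disunion(const struct extent *bm, size_t nbm,
		       const struct extent *sub, size_t nsub,
		       struct extent *out)
{
	size_t n = 0, j = 0;

	for (size_t i = 0; i < nbm; i++) {
		long start = bm[i].start, end = bm[i].start + bm[i].len;

		/* skip sub extents that end before this extent begins */
		while (j < nsub && sub[j].start + sub[j].len <= start)
			j++;

		/* carve each overlapping sub extent out of [start, end) */
		for (size_t k = j; k < nsub && sub[k].start < end; k++) {
			long s = sub[k].start, e = sub[k].start + sub[k].len;

			if (s > start)
				out[n++] = (struct extent){ start, s - start };
			if (e > start)
				start = e;
		}
		if (start < end)
			out[n++] = (struct extent){ start, end - start };
	}
	return n;
}
```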
repair.c:57 * scrub so that we can tell userspace if we fixed the problem. in xrep_attempt()
70 * We tried harder but still couldn't grab all the resources in xrep_attempt()
71 * we needed to fix it. The corruption has not been fixed, in xrep_attempt()
81 * Complain about unfixable problems in the filesystem. We don't log
98 * Repair probe -- userspace uses this to probe if we're willing to repair a
123 /* Keep the AG header buffers locked so we can keep going. */ in xrep_roll_ag_trans()
132 * Roll the transaction. We still own the buffer and the buffer lock in xrep_roll_ag_trans()
135 * kernel. If it succeeds, we join them to the new transaction and in xrep_roll_ag_trans()
155 * reservation can be critical, and we must have enough space (factoring
170 * Figure out how many blocks to reserve for an AG repair. We calculate the
[all …]
fscounters.c:24 * The basics of filesystem summary counter checking are that we iterate the
27 * Then we compare what we computed against the in-core counters.
30 * While we /could/ freeze the filesystem and scramble around the AGs counting
31 * the free blocks, in practice we prefer not to do that for a scan because
32 * freezing is costly. To get around this, we added a per-cpu counter of the
33 * delalloc reservations so that we can rotor around the AGs relatively
34 * quickly, and we allow the counts to be slightly off because we're not taking
35 * any locks while we do this.
37 * So the first thing we do is warm up the buffer cache in the setup routine by
40 * structures as quickly as it can. We snapshot the percpu counters before and
[all …]
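The fscounters.c excerpt explains that because nothing is locked during the scan, the computed totals are allowed to drift from the in-core counters by roughly the amount of in-flight (delalloc) reservations observed around the scan. A toy version of that tolerance check (function name, signature, and the max-of-snapshots slack rule are all our assumptions, not the kernel's logic):

```c
/* Nonzero if the computed total is within the allowed slack of the
 * in-core counter, given reservation snapshots before/after the scan. */
int fscounters_within_tolerance(long computed, long incore,
				long inflight_before, long inflight_after)
{
	long slack = inflight_before > inflight_after ?
		     inflight_before : inflight_after;
	long delta = computed > incore ? computed - incore
				       : incore - computed;
	return delta <= slack;
}
```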
/Linux-v5.4/arch/x86/mm/
mpx.c:76 * The decoder _should_ fail nicely if we pass it a short buffer. in mpx_insn_decode()
77 * But, let's not depend on that implementation detail. If we in mpx_insn_decode()
85 * copy_from_user() tries to get as many bytes as we could see in in mpx_insn_decode()
86 * the largest possible instruction. If the instruction we are in mpx_insn_decode()
87 * after is shorter than that _and_ we attempt to copy from in mpx_insn_decode()
88 * something unreadable, we might get a short read. This is OK in mpx_insn_decode()
90 * instruction. Check to see if we got a partial instruction. in mpx_insn_decode()
97 * We only _really_ need to decode bndcl/bndcn/bndcu in mpx_insn_decode()
117 * Userspace could have, by the time we get here, written
118 * anything it wants in to the instructions. We can not
[all …]
/Linux-v5.4/drivers/usb/dwc2/
hcd_queue.c:61 /* If we get a NAK, wait this long before retrying */
150 * @num_bits: The number of bits we need per period we want to reserve
152 * @interval: How often we need to be scheduled for the reservation this
156 * the interval or we return failure right away.
157 * @only_one_period: Normally we'll allow picking a start anywhere within the
158 * first interval, since we can still make all repetition
160 * here then we'll return failure if we can't fit within
163 * The idea here is that we want to schedule time for repeating events that all
168 * To keep things "simple", we'll represent our schedule with a bitmap that
170 * but does mean that we need to handle things specially (and non-ideally) if
[all …]
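The hcd_queue.c excerpt describes scheduling a repeating reservation in a bitmap: pick a start offset within the first interval such that `num_bits` consecutive slots are free in every repetition across the whole map. A toy version of that search (the fixed 32-slot map, a byte-per-slot busy array, and the requirement that `interval` divides the map size are simplifications of ours):

```c
#define SCHED_SLOTS 32

/*
 * Find an offset in [0, interval) where num_bits consecutive slots are
 * free in every repetition of the interval. Returns the offset, or -1
 * if nothing fits. busy[] has SCHED_SLOTS entries; interval must divide
 * SCHED_SLOTS in this sketch.
 */
int sched_find_offset(const unsigned char *busy, int interval, int num_bits)
{
	for (int off = 0; off + num_bits <= interval; off++) {
		int ok = 1;

		/* the reservation repeats every `interval` slots */
		for (int rep = 0; rep < SCHED_SLOTS && ok; rep += interval)
			for (int b = 0; b < num_bits; b++)
				if (busy[rep + off + b]) {
					ok = 0;
					break;
				}
		if (ok)
			return off;
	}
	return -1;
}
```

This mirrors the excerpt's `only_one_period` idea implicitly: the search is confined to the first interval so every repetition lands on the same relative slots.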
/Linux-v5.4/arch/ia64/lib/
copy_user.S:8 * the boundary. When reading from user space we must catch
9 * faults on loads. When writing to user space we must catch
11 * we don't need to worry about overlapping regions.
27 * - handle the case where we have more than 16 bytes and the alignment
39 #define COPY_BREAK 16 // we do byte copy below (must be >=16)
111 // Now we do the byte by byte loop with software pipeline
128 // At this point we know we have more than 16 bytes to copy
133 // The basic idea is that we copy byte-by-byte at the head so
134 // that we can reach 8-byte alignment for both src1 and dst1.
153 // Optimization. If dst1 is 8-byte aligned (quite common), we don't need
[all …]
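The copy_user.S excerpt describes copying byte-by-byte at the head so that the bulk of the copy runs at 8-byte alignment. A plain userspace illustration of that head/body/tail strategy (the real routine also aligns the source, uses software pipelining, and recovers from faults; this sketch does none of that):

```c
#include <stdint.h>
#include <string.h>

void *copy_aligned(void *dst, const void *src, size_t len)
{
	unsigned char *d = dst;
	const unsigned char *s = src;

	/* head: byte copies until the destination is 8-byte aligned */
	while (len && ((uintptr_t)d & 7)) {
		*d++ = *s++;
		len--;
	}
	/* body: 8 bytes at a time */
	while (len >= 8) {
		uint64_t tmp;

		memcpy(&tmp, s, 8);	/* memcpy avoids unaligned-load UB */
		memcpy(d, &tmp, 8);
		d += 8;
		s += 8;
		len -= 8;
	}
	/* tail: remaining bytes */
	while (len--)
		*d++ = *s++;
	return dst;
}
```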
strlen.S:31 // so we need to do a few extra checks at the beginning because the
32 // string may not be 8-byte aligned. In this case we load the 8byte
35 // We use speculative loads and software pipelining to hide memory
36 // latency and do read ahead safely. This way we defer any exception.
38 // Because we don't want the kernel to be relying on particular
39 // settings of the DCR register, we provide recovery code in case
41 // only normal loads. If we still get a fault then we generate a
42 // kernel panic. Otherwise we return the strlen as usual.
50 // It should be noted that we execute recovery code only when we need
51 // to use the data that has been speculatively loaded: we don't execute
[all …]
/Linux-v5.4/arch/x86/entry/
entry_64.S:76 * We need to change the IDT table before calling TRACE_IRQS_ON/OFF to
149 * We do not frame this tiny irq-off block with TRACE_IRQS_OFF/ON,
177 TRACE_IRQS_IRETQ /* we're about to change IF */
180 * Try to use SYSRET instead of IRET if we're returning to
181 * a completely clean 64-bit userspace context. If we're not,
222 * restore RF properly. If the slowpath sets it for whatever reason, we
247 * We win! This label is here just for ease of understanding
266 * We are on the trampoline stack. All regs except RDI are live.
267 * We can do future final exit work right here.
329 * rax: prev task we switched from
[all …]
/Linux-v5.4/kernel/irq/
spurious.c:26 * We wait here for a poller to finish.
28 * If the poll runs on this CPU, then we yell loudly and return
32 * We wait until the poller is done and then recheck disabled and
33 * action (about to be disabled). Only if it's still active, we return
85 * All handlers must agree on IRQF_SHARED, so we test just the in try_one_irq()
208 * We need to take desc->lock here. note_interrupt() is called in __report_bad_irq()
209 * w/o desc->lock held, but IRQ_PROGRESS set. We might race in __report_bad_irq()
243 /* We didn't actually handle the IRQ - see if it was misrouted? */ in try_misrouted_irq()
248 * But for 'irqfixup == 2' we also do it for handled interrupts if in try_misrouted_irq()
259 * Since we don't get the descriptor lock, "action" can in try_misrouted_irq()
[all …]
/Linux-v5.4/arch/arm64/kvm/hyp/
tlb.c:28 * For CPUs that are affected by ARM erratum 1165522, we in __tlb_switch_to_guest_vhe()
30 * point. Since we do not want to force a full load of the in __tlb_switch_to_guest_vhe()
31 * vcpu state, we prevent the EL1 page-table walker to in __tlb_switch_to_guest_vhe()
33 * in the TCR_EL1 register. We also need to prevent it to in __tlb_switch_to_guest_vhe()
34 * allocate IPA->PA walks, so we enable the S1 MMU... in __tlb_switch_to_guest_vhe()
45 * With VHE enabled, we have HCR_EL2.{E2H,TGE} = {1,1}, and in __tlb_switch_to_guest_vhe()
47 * guest TLBs (EL1/EL0), we need to change one of these two in __tlb_switch_to_guest_vhe()
52 * as we need to make sure both stages of translation are in in __tlb_switch_to_guest_vhe()
83 * We're done with the TLB operation, let's restore the host's in __tlb_switch_to_host_vhe()
125 * We could do so much better if we had the VA as well. in __kvm_tlb_flush_vmid_ipa()
[all …]
/Linux-v5.4/drivers/gpu/drm/i915/
i915_request.c:66 * We could extend the life of a context to beyond that of all in i915_fence_get_timeline_name()
68 * or we just give them a false name. Since in i915_fence_get_timeline_name()
105 * freed when the slab cache itself is freed, and so we would get in i915_fence_release()
157 * In the future, perhaps when we have an active time-slicing scheduler, in __notify_execute_cb()
160 * quite hairy, we have to carefully rollback the fence and do a in __notify_execute_cb()
204 * engine lock. The simple ploy we use is to take the lock then in remove_from_engine()
235 * We know the GPU must have read the request to have in i915_request_retire()
240 * Note this requires that we are always called in request in i915_request_retire()
252 * As the ->retire() may free the node, we decouple it first and in i915_request_retire()
259 * we may spend an inordinate amount of time simply handling in i915_request_retire()
[all …]
/Linux-v5.4/fs/jbd2/
transaction.c:70 * have an existing running transaction: we only make a new transaction
71 * once we have started to commit the old one).
74 * The journal MUST be locked. We don't perform atomic mallocs on the
75 * new transaction and we can't block without protecting against other
121 * unless debugging is enabled, we no longer update t_max_wait, which
180 * We don't call jbd2_might_wait_for_commit() here as there's no in wait_transaction_switching()
196 * Wait until we can add credits for handle to the running transaction. Called
198 * transaction. Returns 1 if we had to wait, j_state_lock is dropped, and
220 * potential buffers requested by this operation, we need to in add_transaction_credits()
227 * then start to commit it: we can then go back and in add_transaction_credits()
[all …]
/Linux-v5.4/arch/openrisc/mm/
fault.c:58 * We fault-in kernel-space virtual memory on-demand. The in do_page_fault()
61 * NOTE! We MUST NOT take any locks for this case. We may in do_page_fault()
67 * mappings we don't have to walk all processes pgdirs and in do_page_fault()
68 * add the high mappings all at once. Instead we do it as they in do_page_fault()
81 /* If exceptions were enabled, we can reenable them here */ in do_page_fault()
99 * If we're in an interrupt or have no user in do_page_fault()
100 * context, we must not take the fault.. in do_page_fault()
122 * we get page-aligned addresses so we can only check in do_page_fault()
123 * if we're within a page from usp, but that might be in do_page_fault()
133 * Ok, we have a good vm_area for this memory access, so in do_page_fault()
[all …]
/Linux-v5.4/drivers/net/wimax/i2400m/
netdev.c:12 * We fake being an ethernet device to simplify the support from user
19 * in what we get from the device). This is a known drawback and
23 * TX error handling is tricky; because we have to FIFO/queue the
24 * buffers for transmission (as the hardware likes it aggregated), we
26 * transmitted, we have long forgotten about it. So we just don't care
29 * Note that when the device is in idle mode with the basestation, we
31 * and possible user space interaction. Thus, we defer to a workqueue
32 * to do all that. By default, we only queue a single packet and drop
75 * station. We add 1sec for good measure. */
93 /* Make sure we wait until init is complete... */ in i2400m_open()
[all …]
