
Searched full:we (Results 1 – 25 of 7723) sorted by relevance


/Linux-v5.15/fs/btrfs/
space-info.c
22 * 1) space_info. This is the ultimate arbiter of how much space we can use.
25 * reservations we care about total_bytes - SUM(space_info->bytes_) when
30 * metadata reservation we have. You can see the comment in the block_rsv
34 * 3) btrfs_calc*_size. These are the worst case calculations we used based
35 * on the number of items we will want to modify. We have one for changing
36 * items, and one for inserting new items. Generally we use these helpers to
42 * We call into either btrfs_reserve_data_bytes() or
43 * btrfs_reserve_metadata_bytes(), depending on which we're looking for, with
44 * num_bytes we want to reserve.
61 * Assume we are unable to simply make the reservation because we do not have
[all …]
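
The space-info.c excerpt describes space_info as the ultimate arbiter: a reservation can only succeed while the requested bytes fit under total_bytes minus the SUM(space_info->bytes_*) counters. A minimal sketch of that check, with hypothetical names (space_info_sketch, can_reserve) standing in for the real btrfs structures:

    #include <linux/types.h>

    struct space_info_sketch {
            u64 total_bytes;        /* capacity governed by this space_info */
            u64 bytes_used;         /* already allocated on disk */
            u64 bytes_reserved;     /* held for in-flight allocations */
            u64 bytes_pinned;       /* freed but not yet reusable */
            u64 bytes_may_use;      /* speculative reservations */
    };

    /* Does num_bytes fit under the remaining headroom? */
    static bool can_reserve(const struct space_info_sketch *si, u64 num_bytes)
    {
            u64 committed = si->bytes_used + si->bytes_reserved +
                            si->bytes_pinned + si->bytes_may_use;

            return committed + num_bytes <= si->total_bytes;
    }
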
delalloc-space.c
23 * We call into btrfs_reserve_data_bytes() for the user request bytes that
24 * they wish to write. We make this reservation and add it to
25 * space_info->bytes_may_use. We set EXTENT_DELALLOC on the inode io_tree
27 * make a real allocation if we are pre-allocating or doing O_DIRECT.
30 * At writepages()/prealloc/O_DIRECT time we will call into
31 * btrfs_reserve_extent() for some part or all of this range of bytes. We
35 * may allocate a smaller on disk extent than we previously reserved.
46 * This is the simplest case, we haven't completed our operation and we know
47 * how much we reserved, we can simply call
60 * We keep track of two things on a per inode basis
[all …]
locking.h
18 * We are limited in number of subclasses by MAX_LOCKDEP_SUBCLASSES, which at
19 * the time of this patch is 8, which is how many we use. Keep this in mind if
26 * When we COW a block we are holding the lock on the original block,
28 * when we lock the newly allocated COW'd block. Handle this by having
34 * Oftentimes we need to lock adjacent nodes on the same level while
35 * still holding the lock on the original node we searched to, such as
38 * Because of this we need to indicate to lockdep that this is
46 * When splitting we will be holding a lock on the left/right node when
47 * we need to cow that node, thus we need a new set of subclasses for
54 * When splitting we may push nodes to the left or right, but still use
[all …]
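
The locking.h excerpt is about lockdep subclasses: an original block and its COW copy share one lock class, so taking the copy's lock while holding the original's looks like a self-deadlock unless the copy is annotated with a different subclass. A minimal sketch of the idea using the generic mutex_lock_nested() annotation (the subclass value is illustrative, not btrfs's actual scheme):

    #include <linux/mutex.h>

    #define SUBCLASS_COW 1  /* must stay below MAX_LOCKDEP_SUBCLASSES (8) */

    static void lock_orig_then_cow(struct mutex *orig, struct mutex *cow_copy)
    {
            mutex_lock(orig);                          /* default subclass 0 */
            mutex_lock_nested(cow_copy, SUBCLASS_COW); /* same class, new subclass */
    }
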
/Linux-v5.15/arch/powerpc/mm/nohash/
tlb_low_64e.S
95 /* We need _PAGE_PRESENT and _PAGE_ACCESSED set */
97 /* We do the user/kernel test for the PID here along with the RW test
99 /* We pre-test some combination of permissions to avoid double
102 * We move the ESR:ST bit into the position of _PAGE_BAP_SW in the PTE
107 * writeable, we will take a new fault later, but that should be
110 * We also move ESR_ST in _PAGE_DIRTY position
113 * MAS1 is preset for all we need except for TID that needs to
134 * We are entered with:
182 /* Now we build the MAS:
224 /* We need to check if it was an instruction miss */
[all …]
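
The tlb_low_64e.S excerpt (lines 102-110) folds the permission check into one compare by shifting the ESR "store" bit into the PTE position of the software-write bit. The same idea in C rather than the file's assembly, with bit positions invented purely for illustration:

    #define ESR_ST          (1u << 23)      /* illustrative positions only */
    #define _PAGE_PRESENT   (1u << 0)
    #define _PAGE_ACCESSED  (1u << 1)
    #define _PAGE_BAP_SW    (1u << 4)

    static int pte_permits(unsigned int pte, unsigned int esr)
    {
            unsigned int need = _PAGE_PRESENT | _PAGE_ACCESSED;

            if (esr & ESR_ST)               /* a store also needs write */
                    need |= _PAGE_BAP_SW;

            return (pte & need) == need;    /* one test for all required bits */
    }
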
/Linux-v5.15/drivers/md/bcache/
journal.h
9 * never spans two buckets. This means (not implemented yet) we can resize the
15 * We also keep some things in the journal header that are logically part of the
20 * rewritten when we want to move/wear level the main journal.
22 * Currently, we don't journal BTREE_REPLACE operations - this will hopefully be
25 * moving gc we work around it by flushing the btree to disk before updating the
35 * We track this by maintaining a refcount for every open journal entry, in a
38 * zero, we pop it off - thus, the size of the fifo tells us the number of open
41 * We take a refcount on a journal entry when we add some keys to a journal
42 * entry that we're going to insert (held by struct btree_op), and then when we
43 * insert those keys into the btree the btree write we're setting up takes a
[all …]
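
The journal.h excerpt keeps one refcount per open journal entry in a FIFO; only the oldest entry is ever popped, so the FIFO's length equals the number of entries still open. A minimal sketch under those assumptions (names hypothetical, not bcache's):

    #define OPEN_MAX 64

    struct open_entry_fifo {
            unsigned int front, back;       /* back - front == open entries */
            unsigned int refs[OPEN_MAX];    /* one refcount per open entry */
    };

    static void journal_entry_put(struct open_entry_fifo *f, unsigned int idx)
    {
            f->refs[idx % OPEN_MAX]--;
            /* Only the oldest entry can close; later zeros wait their turn. */
            while (f->front != f->back && f->refs[f->front % OPEN_MAX] == 0)
                    f->front++;
    }
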
bset.h
17 * We use two different functions for validating bkeys, bch_ptr_invalid and
27 * them on disk, just unnecessary work - so we filter them out when resorting
30 * We can't filter out stale keys when we're resorting, because garbage
32 * unless we're rewriting the btree node those stale keys still exist on disk.
34 * We also implement functions here for removing some number of sectors from the
44 * There could be many of them on disk, but we never allow there to be more than
45 * 4 in memory - we lazily resort as needed.
47 * We implement code here for creating and maintaining auxiliary search trees
48 * (described below) for searching an individual bset, and on top of that we
62 * Since keys are variable length, we can't use a binary search on a bset - we
[all …]
/Linux-v5.15/fs/xfs/
xfs_log_cil.c
24 * recover, so we don't allow failure here. Also, we allocate in a context that
25 * we don't want to be issuing transactions from, so we need to tell the
28 * We don't reserve any space for the ticket - we are going to steal whatever
29 * space we require from transactions as they commit. To ensure we reserve all
30 * the space required, we need to set the current reservation of the ticket to
31 * zero so that we know to steal the initial transaction overhead from the
43 * set the current reservation to zero so we know to steal the basic in xlog_cil_ticket_alloc()
79 * After the first stage of log recovery is done, we know where the head and
80 * tail of the log are. We need this log initialisation done before we can
83 * Here we allocate a log ticket to track space usage during a CIL push. This
[all …]
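
The xfs_log_cil.c excerpt says the CIL push ticket is created with a current reservation of zero and steals the space it needs from each transaction as it commits. A toy sketch of that accounting (fields and names invented for illustration):

    struct ticket_sketch {
            unsigned int curr_res;  /* bytes of log space currently held */
    };

    /* Move reservation from a committing transaction to the CIL ticket. */
    static void cil_steal(struct ticket_sketch *cil_ticket,
                          struct ticket_sketch *trans_ticket,
                          unsigned int bytes)
    {
            trans_ticket->curr_res -= bytes;
            cil_ticket->curr_res += bytes;
    }
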
xfs_log_priv.h
72 * By covering, we mean changing the h_tail_lsn in the last on-disk
81 * might include space beyond the EOF. So if we just push the EOF a
89 * system is idle. We need two dummy transactions because the h_tail_lsn
101 * we are done covering previous transactions.
102 * NEED -- logging has occurred and we need a dummy transaction
104 * DONE -- we were in the NEED state and have committed a dummy
106 * NEED2 -- we detected that a dummy transaction has gone to the
108 * DONE2 -- we committed a dummy transaction when in the NEED2 state.
110 * There are two places where we switch states:
112 * 1.) In xfs_sync, when we detect an idle log and are in NEED or NEED2.
[all …]
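
The xfs_log_priv.h excerpt walks a small covering state machine: two dummy transactions take the log NEED -> DONE -> NEED2 -> DONE2 while the system stays idle. A compact sketch of the two transition points the comment mentions, with state names assumed from the text:

    enum cover_state {
            COVER_IDLE, COVER_NEED, COVER_DONE, COVER_NEED2, COVER_DONE2,
    };

    /* A dummy transaction was committed. */
    static enum cover_state cover_on_dummy_commit(enum cover_state s)
    {
            switch (s) {
            case COVER_NEED:  return COVER_DONE;   /* first dummy */
            case COVER_NEED2: return COVER_DONE2;  /* second dummy */
            default:          return s;
            }
    }

    /* The committed dummy was observed in the on-disk log. */
    static enum cover_state cover_on_dummy_on_disk(enum cover_state s)
    {
            return s == COVER_DONE ? COVER_NEED2 : s;
    }
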
xfs_trans_ail.c
26 * Called with the ail lock held, but we don't want to assert fail with it
27 * held otherwise we'll lock everything up and won't be able to debug the
28 * cause. Hence we sample and check the state under the AIL lock and return if
29 * everything is fine, otherwise we drop the lock and run the ASSERT checks.
110 * We need the AIL lock in order to get a coherent read of the lsn of the last
191 * When the traversal is complete, we need to remove the cursor from the list
206 * freed object. We set the low bit of the cursor item pointer so we can
289 * Splice the log item list into the AIL at the given LSN. We splice to the
307 * provided. If not, or if the one we got is not valid, in xfs_ail_splice()
315 * If a cursor is provided, we know we're processing the AIL in xfs_ail_splice()
[all …]
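
Line 206 of the xfs_trans_ail.c excerpt marks a cursor whose target was freed by setting the low bit of the item pointer. Since list items are at least pointer-aligned, the low bit is always zero in a valid address and can carry the flag; a generic sketch of the technique:

    #include <stdint.h>

    /* Tag the pointer: the item this cursor referenced has been freed. */
    static inline void *cursor_mark_freed(void *item)
    {
            return (void *)((uintptr_t)item | 1);
    }

    static inline int cursor_item_freed(const void *item)
    {
            return (uintptr_t)item & 1;
    }
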
/Linux-v5.15/net/ipv4/
tcp_vegas.c
15 * o We do not change the loss detection or recovery mechanisms of
19 * only every-other RTT during slow start, we increase during
22 * we use the rate at which ACKs come back as the "actual"
24 * o To speed convergence to the right rate, we set the cwnd
25 * to achieve the right ("actual") rate when we exit slow start.
26 * o To filter out the noise caused by delayed ACKs, we use the
55 /* There are several situations when we must "re-start" Vegas:
60 * o when we send a packet and there is no outstanding
63 * In these circumstances we cannot do a Vegas calculation at the
64 * end of the first RTT, because any calculation we do is using
[all …]
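
The tcp_vegas.c excerpt compares the expected rate (cwnd over the minimum RTT) with the actual rate measured from returning ACKs, and sizes cwnd so only a few packets sit queued in the network. A simplified sketch of that control law, with the alpha/beta thresholds of the Vegas scheme (names illustrative):

    /* Estimated packets queued in network: cwnd * (rtt - base_rtt) / rtt. */
    static unsigned int vegas_next_cwnd(unsigned int cwnd,
                                        unsigned int base_rtt_us,
                                        unsigned int rtt_us,
                                        unsigned int alpha, unsigned int beta)
    {
            unsigned int diff = cwnd * (rtt_us - base_rtt_us) / rtt_us;

            if (diff < alpha)
                    return cwnd + 1;        /* little queueing: probe for more */
            if (diff > beta)
                    return cwnd - 1;        /* queue building: back off */
            return cwnd;                    /* within [alpha, beta]: hold */
    }
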
/Linux-v5.15/arch/powerpc/kexec/
core_64.c
45 * Since we use the kernel fault handlers and paging code to in default_machine_kexec_prepare()
46 * handle the virtual mode, we must make sure no destination in default_machine_kexec_prepare()
53 /* We also should not overwrite the tce tables */ in default_machine_kexec_prepare()
86 * We rely on kexec_load to create a list that properly in copy_segments()
88 * We will still crash if the list is wrong, but at least in copy_segments()
121 * After this call we may not use anything allocated in dynamic in kexec_copy_flush()
129 * we need to clear the icache for all dest pages sometime, in kexec_copy_flush()
146 mb(); /* make sure our irqs are disabled before we say they are */ in kexec_smp_down()
153 * Now every CPU has IRQs off, we can clear out any pending in kexec_smp_down()
171 /* Make sure each CPU has at least made it to the state we need. in kexec_prepare_cpus_wait()
[all …]
/Linux-v5.15/drivers/misc/vmw_vmci/
vmci_route.c
33 * which comes from the VMX, so we know it is coming from a in vmci_route()
36 * To avoid inconsistencies, test these once. We will test in vmci_route()
37 * them again when we do the actual send to ensure that we do in vmci_route()
49 * If this message already came from a guest then we in vmci_route()
57 * We must be acting as a guest in order to send to in vmci_route()
63 /* And we cannot send if the source is the host context. */ in vmci_route()
71 * then they probably mean ANY, in which case we in vmci_route()
87 * If it is not from a guest but we are acting as a in vmci_route()
88 * guest, then we need to send it down to the host. in vmci_route()
89 * Note that if we are also acting as a host then this in vmci_route()
[all …]
/Linux-v5.15/drivers/gpu/drm/i915/
i915_request.c
69 * We could extend the life of a context to beyond that of all in i915_fence_get_timeline_name()
71 * or we just give them a false name. Since in i915_fence_get_timeline_name()
120 * freed when the slab cache itself is freed, and so we would get in i915_fence_release()
234 * is-banned?, or we know the request is already inflight. in i915_request_active_engine()
236 * Note that rq->engine is unstable, and so we double in i915_request_active_engine()
237 * check that we have acquired the lock on the final engine. in i915_request_active_engine()
320 * We know the GPU must have read the request to have in i915_request_retire()
325 * Note this requires that we are always called in request in i915_request_retire()
331 /* Poison before we release our space in the ring */ in i915_request_retire()
345 * We only loosely track inflight requests across preemption, in i915_request_retire()
[all …]
/Linux-v5.15/fs/xfs/scrub/
bitmap.c
90 * @bitmap as the list of blocks that are not accounted for, which we assume
120 * Now that we've sorted both lists, we iterate bitmap once, rolling in xbitmap_disunion()
121 * forward through sub and/or bitmap as necessary until we find an in xbitmap_disunion()
122 * overlap or reach the end of either list. We do not reset lp to the in xbitmap_disunion()
123 * head of bitmap nor do we reset sub_br to the head of sub. The in xbitmap_disunion()
124 * list traversal is similar to merge sort, but we're deleting in xbitmap_disunion()
125 * instead. In this manner we avoid O(n^2) operations. in xbitmap_disunion()
134 * Advance sub_br and/or br until we find a pair that in xbitmap_disunion()
135 * intersect or we run out of extents. in xbitmap_disunion()
147 /* trim sub_br to fit the extent we have */ in xbitmap_disunion()
[all …]
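
The bitmap.c excerpt leans on both extent lists being sorted: two cursors only ever move forward, never resetting to the head, which is what keeps the subtraction out of O(n^2). A simplified sketch of that two-cursor pass (the real code trims or splits the overlapping extent; here an overlap simply drops it):

    struct extent { unsigned long long start, len; };

    static void disunion_pass(struct extent *bmap, int nb,
                              const struct extent *sub, int ns)
    {
            int i = 0, j = 0;

            while (i < nb && j < ns) {
                    unsigned long long bmap_end = bmap[i].start + bmap[i].len;
                    unsigned long long sub_end = sub[j].start + sub[j].len;

                    if (bmap_end <= sub[j].start)
                            i++;    /* bmap extent wholly before sub */
                    else if (sub_end <= bmap[i].start)
                            j++;    /* sub extent wholly before bmap */
                    else
                            bmap[i++].len = 0;  /* overlap: drop (simplified) */
            }
    }
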
repair.c
57 * scrub so that we can tell userspace if we fixed the problem. in xrep_attempt()
70 * We tried harder but still couldn't grab all the resources in xrep_attempt()
71 * we needed to fix it. The corruption has not been fixed, in xrep_attempt()
81 * Complain about unfixable problems in the filesystem. We don't log
98 * Repair probe -- userspace uses this to probe if we're willing to repair a
123 /* Keep the AG header buffers locked so we can keep going. */ in xrep_roll_ag_trans()
132 * Roll the transaction. We still own the buffer and the buffer lock in xrep_roll_ag_trans()
135 * kernel. If it succeeds, we join them to the new transaction and in xrep_roll_ag_trans()
155 * reservation can be critical, and we must have enough space (factoring
170 * Figure out how many blocks to reserve for an AG repair. We calculate the
[all …]
/Linux-v5.15/drivers/usb/dwc2/
hcd_queue.c
61 /* If we get a NAK, wait this long before retrying */
150 * @num_bits: The number of bits we need per period we want to reserve
152 * @interval: How often we need to be scheduled for the reservation this
156 * the interval or we return failure right away.
157 * @only_one_period: Normally we'll allow picking a start anywhere within the
158 * first interval, since we can still make all repetition
160 * here then we'll return failure if we can't fit within
163 * The idea here is that we want to schedule time for repeating events that all
168 * To keep things "simple", we'll represent our schedule with a bitmap that
170 * but does mean that we need to handle things specially (and non-ideally) if
[all …]
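
The hcd_queue.c excerpt searches a bitmap for num_bits that are free at every repetition of the interval, optionally confining the candidate start to the first period. A brute-force sketch of that search (the real scheduler also copes with splits and non-ideal intervals):

    static int bit_is_set(const unsigned char *map, int bit)
    {
            return map[bit / 8] & (1 << (bit % 8));
    }

    /* Return a start usable at start, start+interval, ..., or -1 if none. */
    static int find_periodic_slot(const unsigned char *map, int map_bits,
                                  int num_bits, int interval,
                                  int only_one_period)
    {
            int max_start = only_one_period ? interval : map_bits;
            int start, rep, bit;

            for (start = 0; start + num_bits <= max_start; start++) {
                    int free = 1;

                    for (rep = start; rep + num_bits <= map_bits && free;
                         rep += interval)
                            for (bit = rep; bit < rep + num_bits; bit++)
                                    if (bit_is_set(map, bit))
                                            free = 0;
                    if (free)
                            return start;
            }
            return -1;
    }
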
/Linux-v5.15/Documentation/filesystems/
xfs-delayed-logging-design.rst
28 That is, if we have a sequence of changes A through to F, and the object was
29 written to disk after change D, we would see in the log the following series
94 relogging technique XFS uses is that we can be relogging changed objects
95 multiple times before they are committed to disk in the log buffers. If we
101 contains all the changes from the previous changes. In other words, we have one
103 wasting space. When we are doing repeated operations on the same set of
106 log would greatly reduce the amount of metadata we write to the log, and this
113 formatting the changes in a transaction to the log buffer. Hence we cannot avoid
116 Delayed logging is the name we've given to keeping and tracking transactional
167 changes to the log buffers, we need to ensure that the object we are formatting
[all …]
/Linux-v5.15/arch/ia64/lib/
copy_user.S
8 * the boundary. When reading from user space we must catch
9 * faults on loads. When writing to user space we must catch
11 * we don't need to worry about overlapping regions.
27 * - handle the case where we have more than 16 bytes and the alignment
39 #define COPY_BREAK 16 // we do byte copy below (must be >=16)
111 // Now we do the byte by byte loop with software pipeline
128 // At this point we know we have more than 16 bytes to copy
133 // The basic idea is that we copy byte-by-byte at the head so
134 // that we can reach 8-byte alignment for both src1 and dst1.
153 // Optimization. If dst1 is 8-byte aligned (quite common), we don't need
[all …]
strlen.S
31 // so we need to do a few extra checks at the beginning because the
32 // string may not be 8-byte aligned. In this case we load the 8byte
35 // We use speculative loads and software pipelining to hide memory
36 // latency and do read ahead safely. This way we defer any exception.
38 // Because we don't want the kernel to be relying on particular
39 // settings of the DCR register, we provide recovery code in case
41 // only normal loads. If we still get a fault then we generate a
42 // kernel panic. Otherwise we return the strlen as usual.
50 // It should be noted that we execute recovery code only when we need
51 // to use the data that has been speculatively loaded: we don't execute
[all …]
/Linux-v5.15/fs/jbd2/
transaction.c
66 * Base amount of descriptor blocks we reserve for each transaction.
80 * Revoke descriptors are accounted separately so we need to reserve in jbd2_descriptor_blocks_per_trans()
92 * have an existing running transaction: we only make a new transaction
93 * once we have started to commit the old one).
96 * The journal MUST be locked. We don't perform atomic mallocs on the
97 * new transaction and we can't block without protecting against other
145 * unless debugging is enabled, we no longer update t_max_wait, which
206 * We don't call jbd2_might_wait_for_commit() here as there's no in wait_transaction_switching()
222 * Wait until we can add credits for handle to the running transaction. Called
224 * transaction. Returns 1 if we had to wait, j_state_lock is dropped, and
[all …]
/Linux-v5.15/kernel/irq/
spurious.c
26 * We wait here for a poller to finish.
28 * If the poll runs on this CPU, then we yell loudly and return
32 * We wait until the poller is done and then recheck disabled and
33 * action (about to be disabled). Only if it's still active, we return
86 * All handlers must agree on IRQF_SHARED, so we test just the in try_one_irq()
209 * We need to take desc->lock here. note_interrupt() is called in __report_bad_irq()
210 * w/o desc->lock held, but IRQ_PROGRESS set. We might race in __report_bad_irq()
244 /* We didn't actually handle the IRQ - see if it was misrouted? */ in try_misrouted_irq()
249 * But for 'irqfixup == 2' we also do it for handled interrupts if in try_misrouted_irq()
260 * Since we don't get the descriptor lock, "action" can in try_misrouted_irq()
[all …]
/Linux-v5.15/arch/arm64/kvm/hyp/nvhe/
tlb.c
24 * For CPUs that are affected by ARM 1319367, we need to in __tlb_switch_to_guest()
25 * avoid a host Stage-1 walk while we have the guest's in __tlb_switch_to_guest()
27 * We're guaranteed that the S1 MMU is enabled, so we can in __tlb_switch_to_guest()
39 * ensuring that we always have an ISB, but not two ISBs back in __tlb_switch_to_guest()
69 * We could do so much better if we had the VA as well. in __kvm_tlb_flush_vmid_ipa()
70 * Instead, we invalidate Stage-2 for this IPA, and the in __kvm_tlb_flush_vmid_ipa()
77 * We have to ensure completion of the invalidation at Stage-2, in __kvm_tlb_flush_vmid_ipa()
88 * If the host is running at EL1 and we have a VPIPT I-cache, in __kvm_tlb_flush_vmid_ipa()
89 * then we must perform I-cache maintenance at EL2 in order for in __kvm_tlb_flush_vmid_ipa()
91 * I-cache lines allocated with a different VMID, we don't need in __kvm_tlb_flush_vmid_ipa()
[all …]
/Linux-v5.15/drivers/gpu/drm/i915/gt/
intel_execlists_submission.c
24 * shouldn't we just need a set of those per engine command streamer? This is
35 * Regarding the creation of contexts, we have:
43 * like before) we need:
50 * more complex, because we don't know at creation time which engine is going
51 * to use them. To handle this, we have implemented a deferred creation of LR
55 * gets populated for a given engine once we receive an execbuffer. If later
56 * on we receive another execbuffer ioctl for the same context but a different
57 * engine, we allocate/populate a new ringbuffer and context backing object and
61 * only allowed with the render ring, we can allocate & populate them right
96 * we use a NULL second context) or the first two requests have unique IDs.
[all …]
/Linux-v5.15/arch/openrisc/mm/
fault.c
59 * We fault-in kernel-space virtual memory on-demand. The in do_page_fault()
62 * NOTE! We MUST NOT take any locks for this case. We may in do_page_fault()
68 * mappings we don't have to walk all processes pgdirs and in do_page_fault()
69 * add the high mappings all at once. Instead we do it as they in do_page_fault()
82 /* If exceptions were enabled, we can reenable them here */ in do_page_fault()
100 * If we're in an interrupt or have no user in do_page_fault()
101 * context, we must not take the fault.. in do_page_fault()
125 * we get page-aligned addresses so we can only check in do_page_fault()
126 * if we're within a page from usp, but that might be in do_page_fault()
136 * Ok, we have a good vm_area for this memory access, so in do_page_fault()
[all …]
/Linux-v5.15/Documentation/driver-api/thermal/
cpu-idle-cooling.rst
25 because of the OPP density, we can only choose an OPP with a power
35 If we can remove the static and the dynamic leakage for a specific
38 injection period, we can mitigate the temperature by modulating the
47 At a specific OPP, we can assume that injecting idle cycle on all CPUs
49 idle state target residency, we lead to dropping the static and the
69 We use a fixed duration of idle injection that gives an acceptable
132 - It is less than or equal to the latency we tolerate when the
134 user experience, reactivity vs performance trade-off we want. This
137 - It is greater than the idle state’s target residency we want to go
138 for thermal mitigation, otherwise we end up consuming more energy.
[all …]
