/* SPDX-License-Identifier: GPL-2.0 */
 * div_u64_rem - unsigned 64bit divide with 32bit divisor with remainder
 * @dividend: unsigned 64bit dividend
 * @divisor: unsigned 32bit divisor
 * @remainder: pointer to unsigned 32bit remainder
 *
 * This is commonly provided by 32bit archs to provide an optimized 64bit
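A userspace sketch of the div_u64_rem() semantics described above — the function name here is an illustrative stand-in, not the kernel's arch-optimized implementation, which on 32bit architectures avoids the libgcc 64/64 division path:

```c
#include <stdint.h>

/* Sketch of div_u64_rem() semantics: 64bit dividend, 32bit divisor,
 * quotient returned, 32bit remainder stored through the pointer. */
static inline uint64_t sketch_div_u64_rem(uint64_t dividend, uint32_t divisor,
					  uint32_t *remainder)
{
	/* The remainder of a 64/32 divide always fits in 32 bits. */
	*remainder = (uint32_t)(dividend % divisor);
	return dividend / divisor;
}
```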
 * div_s64_rem - signed 64bit divide with 32bit divisor with remainder
 * @dividend: signed 64bit dividend
 * @divisor: signed 32bit divisor
 * @remainder: pointer to signed 32bit remainder
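The signed variant can be sketched the same way (again a hypothetical stand-in name). C99 `/` truncates toward zero, so the remainder takes the sign of the dividend:

```c
#include <stdint.h>

/* Sketch of div_s64_rem() semantics: signed 64/32 divide with remainder.
 * C99 division truncates toward zero, so the remainder carries the
 * dividend's sign. */
static inline int64_t sketch_div_s64_rem(int64_t dividend, int32_t divisor,
					 int32_t *remainder)
{
	*remainder = (int32_t)(dividend % divisor);
	return dividend / divisor;
}
```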
 * div64_u64_rem - unsigned 64bit divide with 64bit divisor and remainder
 * @dividend: unsigned 64bit dividend
 * @divisor: unsigned 64bit divisor
 * @remainder: pointer to unsigned 64bit remainder
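For the full 64/64 divide with remainder, a hedged userspace sketch (illustrative name; on 64bit kernels this compiles to a single native divide, while 32bit kernels need a software implementation):

```c
#include <stdint.h>

/* Sketch of div64_u64_rem() semantics: 64bit dividend and divisor,
 * 64bit remainder stored through the pointer. */
static inline uint64_t sketch_div64_u64_rem(uint64_t dividend, uint64_t divisor,
					    uint64_t *remainder)
{
	*remainder = dividend % divisor;
	return dividend / divisor;
}
```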
 * div64_u64 - unsigned 64bit divide with 64bit divisor
 * @dividend: unsigned 64bit dividend
 * @divisor: unsigned 64bit divisor
 * div64_s64 - signed 64bit divide with 64bit divisor
 * @dividend: signed 64bit dividend
 * @divisor: signed 64bit divisor
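On 64bit architectures both of these reduce to the native division operator; a minimal sketch under that assumption (names are stand-ins for the kernel helpers):

```c
#include <stdint.h>

/* Sketches of div64_u64() and div64_s64() as they behave on a 64bit
 * arch, where the hardware divider handles the full-width operands. */
static inline uint64_t sketch_div64_u64(uint64_t dividend, uint64_t divisor)
{
	return dividend / divisor;
}

static inline int64_t sketch_div64_s64(int64_t dividend, int64_t divisor)
{
	return dividend / divisor;	/* truncates toward zero */
}
```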
#elif BITS_PER_LONG == 32
 * div_u64 - unsigned 64bit divide with 32bit divisor
 * @dividend: unsigned 64bit dividend
 * @divisor: unsigned 32bit divisor
 *
 * This is the most common 64bit divide and should be used if possible,
 * as many 32bit archs can optimize this variant better than a full 64bit
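The kernel's generic div_u64() is layered on div_u64_rem(), discarding the remainder. A userspace sketch of that layering (both names here are illustrative stand-ins):

```c
#include <stdint.h>

static inline uint64_t sketch_div_u64_rem32(uint64_t dividend, uint32_t divisor,
					    uint32_t *remainder)
{
	*remainder = (uint32_t)(dividend % divisor);
	return dividend / divisor;
}

/* div_u64() is the remainder-discarding wrapper around div_u64_rem(),
 * so 32bit archs only have to optimize one 64/32 primitive. */
static inline uint64_t sketch_div_u64(uint64_t dividend, uint32_t divisor)
{
	uint32_t remainder;

	return sketch_div_u64_rem32(dividend, divisor, &remainder);
}
```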
 * div_s64 - signed 64bit divide with 32bit divisor
 * @dividend: signed 64bit dividend
 * @divisor: signed 32bit divisor
	/* in __iter_div_u64_rem() */
	dividend -= divisor;
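That subtraction is the core of the iterative divide: division by repeated subtraction, which only makes sense when the quotient is known to be small. A sketch of the loop around it, minus any compiler barriers the in-kernel version may use to keep the loop from being collapsed into a real division:

```c
#include <stdint.h>

/* Iterative 64/32 divide by repeated subtraction, in the style of
 * __iter_div_u64_rem(): cheap when the quotient is expected to be
 * tiny, terrible otherwise. */
static inline uint32_t sketch_iter_div_u64_rem(uint64_t dividend, uint32_t divisor,
					       uint64_t *remainder)
{
	uint32_t ret = 0;

	while (dividend >= divisor) {
		dividend -= divisor;
		ret++;
	}
	*remainder = dividend;
	return ret;
}
```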
 * Many a GCC version messes this up and generates a 64x64 mult :-(
	/* in mul_u64_u32_shr() */
	ah = a >> 32;
	ret += mul_u32_u32(ah, mul) << (32 - shift);
	/* in mul_u64_u64_shr() */
	 * Each of these lines computes a 64-bit intermediate result into "c",
	 * starting at bits 32-95. The low 32-bits go into the result of the
	 * multiplication, the high 32-bits are carried into the next step.
	rh.l.low = c = (c >> 32) + rm.l.high + rn.l.high + rh.l.low;
	rh.l.high = (c >> 32) + rh.l.high;
	 * The 128-bit result of the multiplication is in rl.ll and rh.ll,
	return (rl.ll >> shift) | (rh.ll << (64 - shift));
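The open-coded 32x32 partial-product scheme above exists for compilers and architectures without a 128-bit integer type. Where `unsigned __int128` is available (a GCC/Clang extension), the same multiply-then-shift can be cross-checked directly — a sketch under that assumption, not the kernel's implementation:

```c
#include <stdint.h>

/* Cross-check of the mul_u64_u64_shr() result using the compiler's
 * unsigned __int128: full 64x64 -> 128 multiply, then right shift.
 * Valid for shift < 128; the kernel's open-coded version builds the
 * same 128-bit product from four 32x32 partial products. */
static inline uint64_t sketch_mul_u64_u64_shr(uint64_t a, uint64_t b,
					      unsigned int shift)
{
	unsigned __int128 prod = (unsigned __int128)a * b;

	return (uint64_t)(prod >> shift);
}
```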
	/* in mul_u64_u32_div() */
	/* Bits 32-63 of the result will be in rh.l.low. */
	/* Bits 0-31 of the result will be in rl.l.low. */
	({ u64 _tmp = (d); div64_u64((ll) + _tmp - 1, _tmp); })
 * DIV64_U64_ROUND_CLOSEST - unsigned 64bit divide with 64bit divisor rounded to nearest integer
 * @dividend: unsigned 64bit dividend
 * @divisor: unsigned 64bit divisor
 *
 * Divide unsigned 64bit dividend by unsigned 64bit divisor
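The round-up macro body shown earlier adds `divisor - 1` before dividing; rounding to nearest adds `divisor / 2` instead. Sketches of both as plain functions (the kernel defines them as macros so they work on arbitrary integer expressions; these names are stand-ins, and neither guards against overflow of the biased dividend):

```c
#include <stdint.h>

/* Sketch of DIV64_U64_ROUND_UP semantics: bias by (d - 1) so any
 * nonzero remainder bumps the quotient up. */
static inline uint64_t sketch_div64_u64_round_up(uint64_t ll, uint64_t d)
{
	return (ll + d - 1) / d;
}

/* Sketch of DIV64_U64_ROUND_CLOSEST semantics: bias by d/2 so the
 * truncating divide rounds to the nearest integer. */
static inline uint64_t sketch_div64_u64_round_closest(uint64_t ll, uint64_t d)
{
	return (ll + d / 2) / d;
}
```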