Lines Matching full:with

7  * not use this file except in compliance with the License.
58 // Number of channels processed in a block for DW Conv with Int8 weights (MVE)
61 // A layer with a lower number of channels than CH_IN_BLOCK_MVE will result in higher
62 // scratch buffer usage and a layer with a higher number of channels than CH_IN_BLOCK_MVE
66 // Number of channels processed in a block for DW Conv with Int4 weights (MVE)
81 // Only applicable for processors with MVE extension.
160 * @brief Converts the elements from a s8 vector to a s16 vector with an added offset
180 * @brief Converts the elements from a s8 vector to a s16 vector with an added offset
188 * No additional ordering is done, with the result that output elements are not in order.
190 * Note this is for processors with DSP extension only.
207 * function with constraint that in_channel equals out_channel.
208 * This is for processors with MVE extension.
221 * function with constraint that in_channel equals out_channel.
222 * This is for processors with DSP extension.
265 * @brief General Matrix-multiplication function with per-channel requantization.
301 …* @brief Matrix-multiplication function for convolution with per-channel requantization for 16 bit…
310 …* @param[in] bias_data pointer to struct with bias vector. The length of this vector is eq…
319 * with 2 columns from im2col and produces two elements/output_channel. The outputs are
335 * @brief General Vector by Matrix multiplication with requantization and storage of result.
346 …* @return The function performs matrix(row_base_ref) multiplication with vector(col_base_ref) …
372 …* @brief General Vector by Matrix multiplication with requantization, storage of result and int4 w…
384 …* @return The function performs matrix(row_base_ref) multiplication with vector(col_base_ref) …
410 …* @brief Matrix-multiplication with requantization & activation function for four rows and one col…
413 * For example, in a 1x1 conv scenario with a stride of 1.
437 * @brief General Matrix-multiplication function with per-channel requantization.
441 * - RHS is int8 packed with 2x int4
450 * @param[out] dst Pointer to the output matrix with "m" rows and "n" columns
484 * @brief General Matrix-multiplication function with per-channel requantization.
488 * - RHS is int8 packed with 2x int4
499 * @param[out] dst Pointer to the output matrix with "m" rows and "n" columns
533 * @brief General Matrix-multiplication function with per-channel requantization.
544 * @param[out] dst Pointer to the output matrix with "m" rows and "n" columns
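The briefs above all describe GEMM kernels with per-channel requantization. As a minimal reference sketch of what such a kernel computes (not the CMSIS-NN implementation, which is vectorized and uses saturating doubling-high-multiply instructions; all names and the `mult`/`shift` parameters here are illustrative):

```c
#include <stdint.h>

/* Illustrative scalar requantization: round((acc * mult) / 2^(31 - shift)).
   Assumes shift < 31 so the effective right shift is at least 1. */
static int32_t requant(int64_t acc, int32_t mult, int32_t shift)
{
    int64_t prod = acc * mult;
    int32_t t = 31 - shift;
    return (int32_t)((prod + ((int64_t)1 << (t - 1))) >> t);
}

/* Naive s8 GEMM with per-output-channel bias, multiplier and shift,
   saturating the result to s8. dst has m rows and n columns. */
void gemm_s8_per_channel(const int8_t *lhs, const int8_t *rhs, int8_t *dst,
                         int m, int n, int k, const int32_t *bias,
                         const int32_t *mult, const int32_t *shift)
{
    for (int i = 0; i < m; ++i) {
        for (int j = 0; j < n; ++j) {
            int64_t acc = bias ? bias[j] : 0;
            for (int p = 0; p < k; ++p) {
                acc += (int32_t)lhs[i * k + p] * rhs[p * n + j];
            }
            int32_t r = requant(acc, mult[j], shift[j]);
            if (r > 127) r = 127;   /* saturate to s8 range */
            if (r < -128) r = -128;
            dst[i * n + j] = (int8_t)r;
        }
    }
}
```

The per-channel (per-`j`) multiplier and shift are what distinguish these kernels from per-tensor requantization: each output channel carries its own fixed-point scale.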
580 …* @brief General Matrix-multiplication function with per-channel requantization and int16 input (L…
589 …* @param[in] bias_data Pointer to struct with bias vector. The length of this vector is …
592 * @param[out] dst Pointer to the output matrix with "m" rows and "n" columns
623 * @brief General Matrix-multiplication function with int8 input and int32 output.
632 * @param[out] dst Pointer to the output matrix with "m" rows and "n" columns
823 * @brief s8 Vector by Matrix (transposed) multiplication with s16 output
855 …* @brief Depthwise convolution of transposed rhs matrix with 4 lhs matrices. To be used in padded …
898 …* @brief Depthwise convolution of transposed rhs matrix with 4 lhs matrices. To be used in non-pad…
941 …* @brief Depthwise convolution of transposed rhs matrix with 4 lhs matrices. To be used in non-pad…
985 …* @brief Depthwise convolution of transposed rhs matrix with 4 lhs matrices. To be used in non-pad…
1022 …* @brief Row of s8 scalars multiplied with an s8 matrix and accumulated into a s32 rolling scratc…
1214 * @brief read and expand one s4 word into two s16 words with ordering.
1232 * @brief read and expand one s8 word into two s16 words with ordering.
1252 * @brief read and expand one s8 word into two s16 words with ordering and addition.
1270 * @brief read and expand two bytes into one word with ordering.
1280 * @brief read and expand two bytes into one word with ordering and addition.
1290 * @brief read and expand one s8 word into two s16 words with no additional ordering.
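The "read and expand" briefs above describe unpacking a 32-bit word of four s8 values into two words of sign-extended s16 values. A hedged sketch of the ordered variant (illustrative portable C, not the actual intrinsic sequence; on DSP cores the "no additional ordering" variant maps more directly to byte-pair instructions such as `__SXTB16`, which is why both variants exist):

```c
#include <stdint.h>

/* Expand one 32-bit word holding four s8 values into two 32-bit words,
   each holding two sign-extended s16 values, preserving byte order:
   bytes 0,1 land in *out_lo and bytes 2,3 in *out_hi. */
void expand_s8_to_s16_ordered(int32_t in, int32_t *out_lo, int32_t *out_hi)
{
    int16_t b0 = (int8_t)(in);        /* byte 0 (least significant) */
    int16_t b1 = (int8_t)(in >> 8);   /* byte 1 */
    int16_t b2 = (int8_t)(in >> 16);  /* byte 2 */
    int16_t b3 = (int8_t)(in >> 24);  /* byte 3 */

    *out_lo = (int32_t)((uint16_t)b0 | ((uint32_t)(uint16_t)b1 << 16));
    *out_hi = (int32_t)((uint16_t)b2 | ((uint32_t)(uint16_t)b3 << 16));
}
```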
1309 …* @brief Matrix-multiplication function for convolution with per-channel requantization and 4 bit …
1310 * @param[in] input_a pointer to operand A, int8 packed with 2x int4.
1326 * with 2 columns from im2col and produces two elements/output_channel. The outputs are
1342 * @brief Matrix-multiplication function for convolution with per-channel requantization.
1360 * with 2 columns from im2col and produces two elements/output_channel. The outputs are
1378 …* @brief Matrix-multiplication function for convolution with per-channel requantization, supportin…
1398 * with 2 columns from im2col and produces two elements/output_channel. The outputs are
1425 * @param[in] diff_min Minimum difference with max in row. Used to check if
1560 …* @details Essentially returns (val * multiplier)/(2 ^ shift) with different rounding depe…
1566 * If shift is positive, left shift 'val * multiplier' with shift
1567 …* If shift is negative, right shift 'val * multiplier' with abs(sh…
1572 …* Returns (val * multiplier) with rounding divided by (2 ^ shift) with roundi…
1574 * Returns (val * multiplier)/(2 ^ (31 - shift)) with rounding
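The single-rounding formulation above, `(val * multiplier)/(2 ^ (31 - shift))` with rounding, can be sketched in plain C as follows (a minimal illustration using 64-bit arithmetic; the actual kernels use saturating doubling-high-multiply and rounding-shift instructions, whose edge-case rounding can differ):

```c
#include <stdint.h>

/* Illustrative requantization: round((val * multiplier) / 2^(31 - shift)).
   Assumes shift < 31 so the effective right shift is at least 1. */
int32_t requantize_sketch(int32_t val, int32_t multiplier, int32_t shift)
{
    int64_t prod = (int64_t)val * multiplier;
    int32_t total_shift = 31 - shift;            /* effective right shift */
    int64_t round = (int64_t)1 << (total_shift - 1);
    return (int32_t)((prod + round) >> total_shift);
}
```

With `multiplier = 1 << 30` and `shift = 1`, the overall scale is `2^30 / 2^30 = 1`, so inputs pass through unchanged; smaller multipliers or shifts scale the accumulator down into the output range.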
1686 …* @return Returns (val * multiplier)/(2 ^ shift) with different rounding. See arm_nn_requ…
1709 * @brief Vector saturating doubling high multiply with predication returning high half.
1726 * @brief Vector rounding divide by power of two with predication.
1747 * @brief Requantize a given vector with predication.
1997 …* Multiplies a matrix by a "batched" vector (i.e. a matrix with a batch dimension composed by inpu…
2010 that the output is always stored with sequential batches.
2027 …* Multiplies a matrix by a "batched" vector (i.e. a matrix with a batch dimension composed by inpu…
2040 Note that the output is always stored with sequential batches.
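The "batched vector" notion above, with output always stored as sequential batches, can be sketched as follows (an illustrative plain-C reduction with no requantization, not the CMSIS-NN kernel; the name and s32 output are assumptions for the sketch):

```c
#include <stdint.h>

/* Multiply the same rows x cols matrix by `batches` input vectors of
   length `cols`, stored back to back in `vec`. The s32 accumulators for
   batch b are written contiguously at out[b * rows], i.e. the output is
   always stored with sequential batches. */
void matvec_batched_sketch(const int8_t *mat, const int8_t *vec, int32_t *out,
                           int32_t rows, int32_t cols, int32_t batches)
{
    for (int32_t b = 0; b < batches; ++b) {
        for (int32_t r = 0; r < rows; ++r) {
            int32_t acc = 0;
            for (int32_t c = 0; c < cols; ++c) {
                acc += (int32_t)mat[r * cols + c] * vec[b * cols + c];
            }
            out[b * rows + r] = acc;  /* output batches are contiguous */
        }
    }
}
```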
2056 * @brief s16 elementwise multiplication with s8 output
2066 * arm_nn_lstm_step_s8. Note that it is assumed that the input is stored with sequential batches.
2082 * @brief s16 elementwise multiplication with s16 output
2092 * arm_nn_lstm_step_s16. Note that it is assumed that the input is stored with sequential batches.