# CMSIS NN
The CMSIS-NN software library is a collection of efficient neural network kernels developed to maximize the
performance and minimize the memory footprint of neural networks on Arm Cortex-M processors.

## Supported Framework
The library follows the [int8](https://www.tensorflow.org/lite/performance/quantization_spec) and int16 quantization specification of TensorFlow Lite for Microcontrollers.
This means CMSIS-NN is bit-exact with the TensorFlow Lite reference kernels. In some cases the TFL and TFLM reference kernels are not bit-exact with each other; in that case CMSIS-NN follows the TFLM reference kernels. The unit test readme provides an [overview](https://github.com/ARM-software/CMSIS-NN/blob/main/Tests/UnitTest/README.md#tests-depending-on-tflm-interpreter).
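
For orientation, the int8 scheme of that specification maps real values to quantized values with a scale and a zero point. A minimal sketch of the relationship is below; the function names are illustrative only and are not part of the CMSIS-NN API, which works with fixed-point multipliers and shifts rather than floats.

```c
#include <stdint.h>

/* TFLite-style affine quantization: real = scale * (q - zero_point). */
static float dequantize_s8(int8_t q, float scale, int32_t zero_point)
{
    return scale * (float)((int32_t)q - zero_point);
}

/* Illustrative inverse mapping with simple rounding and saturation to int8. */
static int8_t quantize_s8(float real, float scale, int32_t zero_point)
{
    int32_t q = (int32_t)(real / scale + (real >= 0.0f ? 0.5f : -0.5f)) + zero_point;
    if (q < -128) q = -128;
    if (q > 127) q = 127;
    return (int8_t)q;
}
```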

## Branches and Tags
There is a single branch called 'main'.
Tags are created during a release. Two releases are planned per year. The releases can be found
[here](https://github.com/ARM-software/CMSIS-NN/releases).

## Current Operator Support
In general, optimizations are written for an architecture feature and fall into one of the following categories.
Based on the feature flags for a processor or architecture provided to the compiler, the right implementation is picked; a sketch of the typical compile-time selection is shown after the category descriptions below.
### Pure C
There is always a pure C implementation for an operator. This is used for processors like the Arm Cortex-M0 or Cortex-M3.
### DSP Extension
Processors with the DSP extension use Single Instruction Multiple Data (SIMD) instructions for optimization. Examples
are the Cortex-M4 or a Cortex-M33 configured with the optional DSP extension.

### MVE Extension
Processors with Arm Helium Technology use the Arm M-profile Vector Extension (MVE) instructions for optimization.
Examples are the Cortex-M55 or Cortex-M85 configured with MVE.
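
As a minimal sketch of how the right implementation is selected at compile time, assuming the CMSIS feature macros *ARM_MATH_MVEI* and *ARM_MATH_DSP* used throughout the CMSIS-NN sources:

```c
/* Illustrative compile-time dispatch; the actual kernels are organized
   per file, but the selection principle is the same. */
#if defined(ARM_MATH_MVEI)
    /* Helium path: vectorized with M-profile Vector Extension instructions. */
#elif defined(ARM_MATH_DSP)
    /* DSP extension path: packed SIMD instructions. */
#else
    /* Pure C path: runs on any Cortex-M, e.g. Cortex-M0 or Cortex-M3. */
#endif
```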

| Operator        | C<br>int8 | C<br>int16 | C<br>int4* | DSP<br>int8 | DSP<br>int16 | DSP<br>int4* | MVE<br>int8 | MVE<br>int16 | MVE<br>int4* |
| --------------- | --------- | ---------- | ---------- | ----------- | ------------ | ------------ | ----------- | ------------ | ------------ |
| Conv2D          | Yes       | Yes        | Yes        | Yes         | Yes          | Yes          | Yes         | Yes          | Yes          |
| DepthwiseConv2D | Yes       | Yes        | Yes        | Yes         | Yes          | Yes          | Yes         | Yes          | Yes          |
| TransposeConv2D | Yes       | No         | No         | Yes         | No           | No           | Yes         | No           | No           |
| Fully Connected | Yes       | Yes        | Yes        | Yes         | Yes          | Yes          | Yes         | Yes          | Yes          |
| Add             | Yes       | Yes        | N/A        | Yes         | Yes          | N/A          | Yes         | Yes          | N/A          |
| Mul             | Yes       | Yes        | N/A        | Yes         | Yes          | N/A          | Yes         | Yes          | N/A          |
| MaxPooling      | Yes       | Yes        | N/A        | Yes         | Yes          | N/A          | Yes         | Yes          | N/A          |
| AvgPooling      | Yes       | Yes        | N/A        | Yes         | Yes          | N/A          | Yes         | Yes          | N/A          |
| Softmax         | Yes       | Yes        | N/A        | Yes         | Yes          | N/A          | Yes         | No           | N/A          |
| LSTM            | Yes       | N/A        | No         | Yes         | N/A          | No           | Yes         | N/A          | No           |
| SVDF            | Yes       | No         | No         | Yes         | No           | No           | Yes         | No           | No           |

* int4 weights + int8 activations
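
The operators above are exposed as C functions declared in Include/arm_nnfunctions.h. A minimal, hedged sketch of calling one of them (int8 max pooling) is shown below; the tensor shapes and values are made up, and the exact prototype should be checked against the header.

```c
#include <stddef.h>
#include "arm_nnfunctions.h"

void run_max_pool_example(void)
{
    /* Illustrative NHWC input of 1x4x4x2, pooled with a 2x2 window, stride 2. */
    const int8_t input[1 * 4 * 4 * 2] = {0};
    int8_t output[1 * 2 * 2 * 2];

    cmsis_nn_context ctx = {.buf = NULL, .size = 0}; /* no scratch buffer needed here */
    cmsis_nn_pool_params pool_params = {
        .stride = {.w = 2, .h = 2},
        .padding = {.w = 0, .h = 0},
        .activation = {.min = -128, .max = 127},
    };
    cmsis_nn_dims input_dims = {.n = 1, .h = 4, .w = 4, .c = 2};
    cmsis_nn_dims filter_dims = {.h = 2, .w = 2};
    cmsis_nn_dims output_dims = {.n = 1, .h = 2, .w = 2, .c = 2};

    (void)arm_max_pool_s8(&ctx, &pool_params, &input_dims, input,
                          &filter_dims, &output_dims, output);
}
```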

## Contribution Guideline
First of all, thank you for contributing. Here are some guidelines and good-to-know information to get started.

### Coding Guideline
By default, follow the style used in the file. You will soon notice a pattern:
* Variable and function names are lower case with an underscore separator.
* Hungarian notation is not used (well, almost).
* If the variable names do not convey the intent, add comments.

### New Files
One function per file is the rule in most places. In those cases, the file name must match the function name. Connect
the function to an appropriate Doxygen group as well.

### Doxygen
Function prototypes must have a detailed comment header in Doxygen format. You can execute the Doxygen document generation
script in the Documentation/Doxygen folder to check that no errors are introduced.
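
A minimal sketch of the kind of header that is expected is shown below. The function itself is hypothetical, and the group tag (if any) depends on the Doxygen group the function is connected to; copy the exact layout from an existing kernel file.

```c
/**
 * @brief   Example int8 elementwise operation (illustrative prototype only).
 *
 * @param[in]  input   Pointer to the input data.
 * @param[out] output  Pointer to the output data.
 * @param[in]  size    Number of elements to process.
 *
 * @return  Status of the operation.
 */
arm_cmsis_nn_status arm_example_op_s8(const int8_t *input, int8_t *output, int32_t size);
```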

### Unit Tests
New unit tests are needed for any new features and bug fixes, and improvements have to be verified by unit tests. If you do
not have the means to execute the tests, you can still open the PR and comment that you need help in completing/executing
the unit tests.

### Version & Date
Each file has a version number and a date field that must be updated when making any change to that file. The versioning
follows the Semantic Versioning 2.0.0 format. For details check: https://semver.org/
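
As an illustration only, these fields typically sit in the file header comment; the exact layout varies between files, so copy it from the file you are editing rather than from this sketch.

```c
/*
 * Illustrative header fields; the date and revision values here are placeholders.
 *
 * $Date:        01 January 2025
 * $Revision:    V.1.2.0
 */
```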

## Building CMSIS-NN as a library
It is recommended to use the toolchain files from the [Arm Ethos-U Core Platform](https://review.mlplatform.org/admin/repos/ml/ethos-u/ethos-u-core-platform) project. These support TARGET_CPU, which is a required argument. Note that if TARGET_CPU is not specified, these toolchain files will pick a default. The format must be TARGET_CPU=cortex-mXX, see the examples below.

Here is an example:

```
cd </path/to/CMSIS_NN>
mkdir build
cd build
cmake .. -DCMAKE_TOOLCHAIN_FILE=</path/to/ethos-u-core-platform>/cmake/toolchain/arm-none-eabi-gcc.cmake -DTARGET_CPU=cortex-m55
make
```

Some more examples:

```
cmake .. -DCMAKE_TOOLCHAIN_FILE=</path/to/ethos-u-core-platform>/cmake/toolchain/armclang.cmake -DTARGET_CPU=cortex-m55
cmake .. -DCMAKE_TOOLCHAIN_FILE=</path/to/ethos-u-core-platform>/cmake/toolchain/arm-none-eabi-gcc.cmake -DTARGET_CPU=cortex-m7
cmake .. -DCMAKE_TOOLCHAIN_FILE=</path/to/ethos-u-core-platform>/cmake/toolchain/armclang.cmake -DTARGET_CPU=cortex-m3
```
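
The library can also be consumed directly from another CMake project. A minimal sketch, assuming the library target is named `cmsis-nn` and that `my_app` is your own executable target; check the project's CMakeLists.txt for the exact target name and whether the include path is already propagated:

```
add_subdirectory(</path/to/CMSIS_NN> cmsis-nn-build)
target_link_libraries(my_app PRIVATE cmsis-nn)
target_include_directories(my_app PRIVATE </path/to/CMSIS_NN>/Include)
```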

### Compiler Options
The default optimization level is set to -Ofast. This can be overridden on the CMake command line by using *-DCMSIS_OPTIMIZATION_LEVEL*. Please adjust it according to project needs,
but bear in mind that this can impact performance. At optimization level -O0, *ARM_MATH_AUTOVECTORIZE* needs to be defined for processors with Helium
Technology.
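
A hedged example of overriding the optimization level at configure time; the expected value format should be verified against the project's CMake files:

```
cmake .. -DCMAKE_TOOLCHAIN_FILE=</path/to/ethos-u-core-platform>/cmake/toolchain/arm-none-eabi-gcc.cmake -DTARGET_CPU=cortex-m55 -DCMSIS_OPTIMIZATION_LEVEL=-O2
```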

The compiler option *'-fomit-frame-pointer'* is enabled by default at -O and higher. When no optimization level is specified,
you may need to specify '-fomit-frame-pointer' explicitly.

With the compiler option *'-fno-builtin'*, the compiler does not use optimized implementations of e.g. memcpy and memset, which are heavily used by CMSIS-NN. This can significantly degrade performance, so the option should be avoided. The compiler option *'-ffreestanding'* should also be avoided, as it enables '-fno-builtin' implicitly.

Another option is to enable CMSIS_NN_USE_SINGLE_ROUNDING. This may affect the output. If it is enabled, the equivalent flag should be enabled in TFL/TFLM as well.
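
A hedged example of enabling it at configure time; the exact option name and mechanism should be checked against the project's CMake files:

```
cmake .. -DCMAKE_TOOLCHAIN_FILE=</path/to/ethos-u-core-platform>/cmake/toolchain/arm-none-eabi-gcc.cmake -DTARGET_CPU=cortex-m55 -DCMSIS_NN_USE_SINGLE_ROUNDING=ON
```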

### Supported Compilers
* CMSIS-NN is tested with Arm Compiler 6 and the Arm GNU Toolchain.
* The IAR compiler is not tested, and there can be compilation and/or performance issues.
* Compilation for a host is not supported out of the box. It should be possible to use the C implementations and compile for a host with a minor stubbing effort.

## Inclusive Language
This product conforms to Arm’s inclusive language policy and, to the best of our knowledge, does not contain any non-inclusive language. If you find something that concerns you, email terms@arm.com.

## Support / Contact

For any questions, or to reach the CMSIS-NN team, please create a new issue at https://github.com/ARM-software/CMSIS-NN/issues