# CMSIS NN
The CMSIS NN software library is a collection of efficient neural network kernels developed to maximize the performance and minimize the memory footprint of neural networks on Arm Cortex-M processors.

## Supported Framework
The library follows the [int8](https://www.tensorflow.org/lite/performance/quantization_spec) and int16 quantization specification of TensorFlow Lite for Microcontrollers (TFLM). This means CMSIS-NN is bit-exact with TensorFlow Lite (TFL) reference kernels. In some cases the TFL and TFLM reference kernels are not bit-exact with each other; in those cases CMSIS-NN follows the TFLM reference kernels. The unit test readme provides an [overview](https://github.com/ARM-software/CMSIS-NN/blob/main/Tests/UnitTest/README.md#tests-depending-on-tflm-interpreter).

## Branches and Tags
There is a single branch called 'main'. Tags are created for each release, and two releases are planned per year. The releases can be found [here](https://github.com/ARM-software/CMSIS-NN/releases).

## Current Operator Support
In general, optimizations are written for an architecture feature and fall into one of the following categories. The right implementation is picked based on the feature flags for a processor or architecture that are provided to the compiler.
### Pure C
There is always a pure C implementation for an operator. This is used for processors like Arm Cortex-M0 or Cortex-M3.
### DSP Extension
Processors with the DSP extension use Single Instruction Multiple Data (SIMD) instructions for optimization. Examples of processors here are Cortex-M4 or a Cortex-M33 configured with the optional DSP extension.

### MVE Extension
Processors with Arm Helium Technology use the Arm M-profile Vector Extension (MVE) instructions for optimization. Examples are Cortex-M55 or Cortex-M85 configured with MVE.

| Operator | C <br> int8 | C<br>int16 | C<br>int4* | DSP<br>int8 | DSP<br>int16 | DSP<br>int4* | MVE<br>int8 | MVE<br>int16 | MVE<br>int4* |
| --------------- | ----------- | ---------- |------------|-------------| -------------|--------------|-------------| -------------|--------------|
| Conv2D | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
| DepthwiseConv2D | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
| TransposeConv2D | Yes | No | No | Yes | No | No | Yes | No | No |
| Fully Connected | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
| Batch Matmul | Yes | Yes | No | Yes | Yes | No | Yes | Yes | No |
| Add | Yes | Yes | N/A | Yes | Yes | N/A | Yes | Yes | N/A |
| Minimum | Yes | No | N/A | No | No | N/A | Yes | No | N/A |
| Maximum | Yes | No | N/A | No | No | N/A | Yes | No | N/A |
| Mul | Yes | Yes | N/A | Yes | Yes | N/A | Yes | Yes | N/A |
| MaxPooling | Yes | Yes | N/A | Yes | Yes | N/A | Yes | Yes | N/A |
| AvgPooling | Yes | Yes | N/A | Yes | Yes | N/A | Yes | Yes | N/A |
| Softmax | Yes | Yes | N/A | Yes | Yes | N/A | Yes | No | N/A |
| LSTM | Yes | Yes | No | Yes | Yes | No | Yes | Yes | No |
| SVDF | Yes | No | No | Yes | No | No | Yes | No | No |
| Pad | Yes | No | N/A | No | No | N/A | Yes | No | N/A |
| Transpose | Yes | No | N/A | No | No | N/A | Yes | No | N/A |

\* int4 weights + int8 activations

## Contribution Guideline
First of all, thank you for the contribution. Here are some guidelines and good-to-know information to get started.

### Coding Guideline
By default, follow the style used in the file. You'll soon start noticing patterns like these:
* Variable and function names are lower case, with underscores as separators.
* Hungarian notation is not used. Well, almost.
* If the variable names don't convey the intent, add comments.

### New Files
Most files contain a single function. In those cases, the file name must match the function name. Connect the function to an appropriate Doxygen group as well.

### Doxygen
Function prototypes must have a detailed comment header in Doxygen format. You can execute the Doxygen document generation script in the Documentation/Doxygen folder to check that no errors are introduced.

### Unit Tests
New unit tests are needed for any new feature or bug fix, and improvements have to be verified by unit tests. If you do not have the means to execute the tests, you can still make the PR and comment that you need help completing or executing the unit tests.

### Version & Date
Each file has a version number and a date field that must be updated when making any change to that file. The versioning follows the Semantic Versioning 2.0.0 format. For details, check: https://semver.org/

## Building CMSIS-NN as a library
It is recommended to use the toolchain files from the [Arm Ethos-U Core Platform](https://review.mlplatform.org/admin/repos/ml/ethos-u/ethos-u-core-platform) project. They support TARGET_CPU, which is a required argument; note that if TARGET_CPU is not specified, these toolchain files will fall back to a default. The format must be TARGET_CPU=cortex-mXX, see the examples below.

Here is an example:

```
cd </path/to/CMSIS_NN>
mkdir build
cd build
cmake .. -DCMAKE_TOOLCHAIN_FILE=</path/to/ethos-u-core-platform>/cmake/toolchain/arm-none-eabi-gcc.cmake -DTARGET_CPU=cortex-m55
make
```

Some more examples:

```
cmake .. -DCMAKE_TOOLCHAIN_FILE=</path/to/ethos-u-core-platform>/cmake/toolchain/armclang.cmake -DTARGET_CPU=cortex-m55
cmake .. -DCMAKE_TOOLCHAIN_FILE=</path/to/ethos-u-core-platform>/cmake/toolchain/arm-none-eabi-gcc.cmake -DTARGET_CPU=cortex-m7
cmake .. -DCMAKE_TOOLCHAIN_FILE=</path/to/ethos-u-core-platform>/cmake/toolchain/armclang.cmake -DTARGET_CPU=cortex-m3
```

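When CMSIS-NN is consumed from a larger CMake project rather than built standalone, a minimal sketch could look like the following. This fragment is hypothetical and not from the CMSIS-NN repository; it assumes CMSIS-NN has been cloned into the project tree and that `cmsis-nn` is the library target name defined by the project's own CMakeLists.txt:

```
# Hypothetical application CMakeLists.txt fragment.
# Pull in CMSIS-NN as a subproject and link the resulting static library.
add_subdirectory(CMSIS-NN)

add_executable(my_app main.c)
target_link_libraries(my_app PRIVATE cmsis-nn)
```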
### Compiler Options
The default optimization level is set to Ofast. This can be overridden on the CMake command line by using <nobr>*"-DCMSIS_OPTIMIZATION_LEVEL"*</nobr>. Please change it according to project needs, bearing in mind that this can impact performance. At optimization level -O0, *ARM_MATH_AUTOVECTORIZE* needs to be defined for processors with Helium Technology.

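For example, assuming the GCC toolchain setup shown earlier, the optimization level could be overridden like this (the exact flag value is project-specific):

```
cmake .. -DCMAKE_TOOLCHAIN_FILE=</path/to/ethos-u-core-platform>/cmake/toolchain/arm-none-eabi-gcc.cmake -DTARGET_CPU=cortex-m55 -DCMSIS_OPTIMIZATION_LEVEL=-O2
```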
The compiler option *'-fomit-frame-pointer'* is enabled by default at -O and higher. When no optimization level is specified, you may need to specify '-fomit-frame-pointer' manually.

With the compiler option *'-fno-builtin'*, optimized implementations of e.g. memcpy and memset, which are heavily used by CMSIS-NN, are not utilized, and performance can degrade significantly. This option should therefore be avoided. The compiler option *'-ffreestanding'* should also be avoided, as it enables '-fno-builtin' implicitly.

Another option is to enable CMSIS_NN_USE_SINGLE_ROUNDING. This may affect the output, so if it is enabled, the equivalent flag should be enabled in TFL/TFLM as well.

For processors with the DSP extension, int4 and int8 convolutions make use of the restrict keyword for the output pointer. This can allow the compiler to make optimizations, but the actual performance result depends on the Arm(R) Cortex(R)-M processor, the compiler and the model. The optimization can be enabled by providing the compiler with a definition of OPTIONAL_RESTRICT_KEYWORD=__restrict. In general, Arm Cortex-M7 will benefit from this, and it is recommended to enable it for Cortex-M7. Arm Cortex-M4 and Cortex-M33 will generally not benefit from it, but it may still bring an uplift depending on the model and compiler.

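As a sketch, one way to provide this definition when configuring with the toolchain files shown earlier is through CMAKE_C_FLAGS; the exact mechanism depends on the build setup:

```
cmake .. -DCMAKE_TOOLCHAIN_FILE=</path/to/ethos-u-core-platform>/cmake/toolchain/arm-none-eabi-gcc.cmake -DTARGET_CPU=cortex-m7 -DCMAKE_C_FLAGS="-DOPTIONAL_RESTRICT_KEYWORD=__restrict"
```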
### Supported Compilers
* CMSIS-NN is tested with Arm Compiler 6 and the Arm GNU Toolchain.
* The IAR compiler is not tested, and there may be compilation and/or performance issues.
* Compilation for host is not supported out of the box. It should be possible to use the C implementation and compile for host with a minor stubbing effort.

## Inclusive Language
This product conforms to Arm’s inclusive language policy and, to the best of our knowledge, does not contain any non-inclusive language. If you find something that concerns you, email terms@arm.com.

## Support / Contact

For any questions, or to reach the CMSIS-NN team, please create a new issue at https://github.com/ARM-software/CMSIS-NN/issues.