## Introduction {#dsppp_intro}

### Dot product example

If you want to compute the dot product:

\f[

\langle scale*(\overrightarrow{a}+\overrightarrow{b}),\overrightarrow{c}*\overrightarrow{d}\rangle

\f]

with CMSIS-DSP, you would write:

```c
arm_add_f32(a,b,tmp1,NB);
arm_scale_f32(tmp1,scale,tmp2,NB);
arm_mult_f32(c,d,tmp3,NB);
arm_dot_prod_f32(tmp2,tmp3,NB,&r);
```

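For comparison, a self-contained version of that snippet could look like the sketch below. The buffer declarations, the value of `NB` and the wrapping function are illustrative assumptions, not part of the library:

```c
#include "arm_math.h"

#define NB 32

/* Inputs and scaling factor (values would come from the application). */
static float32_t a[NB], b[NB], c[NB], d[NB];
static float32_t scale = 0.5f;

/* Three temporary buffers have to be allocated and managed by hand. */
static float32_t tmp1[NB], tmp2[NB], tmp3[NB];

float32_t dot_example(void)
{
    float32_t r;

    arm_add_f32(a, b, tmp1, NB);          /* tmp1 = a + b        */
    arm_scale_f32(tmp1, scale, tmp2, NB); /* tmp2 = scale * tmp1 */
    arm_mult_f32(c, d, tmp3, NB);         /* tmp3 = c * d        */
    arm_dot_prod_f32(tmp2, tmp3, NB, &r); /* r = <tmp2, tmp3>    */

    return r;
}
```

Each of the four calls is a separate loop over `NB` elements, which is what the limitations below are about.
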
There are several limitations with this way of writing the code:

1. The code needs to be rewritten, and the `_f32` suffix changed, if the developer wants to use another datatype

2. Temporary buffers need to be allocated and managed (`tmp1`, `tmp2`, `tmp3`)

3. The four function calls are four different loops, which is bad for data locality and the caches. The computation is not done in one pass

4. Each loop contains only a few instructions. For instance, the `arm_add_f32` loop is just two loads, an add and a store. That does not give the compiler enough room to reorder instructions and improve the performance
With this new C++ template library, you can write:


```cpp
r = dot(scale*(a+b),c*d);
```

The code generated by this line computes the dot product in one pass, with all the operators (`+`, `*`) fused into the loop.
There are no temporary buffers anymore.

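For readers who want to see the one-liner in context, here is a minimal sketch of a complete function. The header paths and namespace (`dsppp/...`, `arm_cmsis_dsp`), as well as the value of `NB`, are assumptions and may need to be adapted to your installation of the library:

```cpp
// Sketch only: header paths and namespace are assumptions,
// adjust them to match your installation of the library.
#include <dsppp/arch.hpp>
#include <dsppp/vector.hpp>

using namespace arm_cmsis_dsp;

constexpr int NB = 32;

float32_t dot_example(const Vector<float32_t, NB> &a,
                      const Vector<float32_t, NB> &b,
                      const Vector<float32_t, NB> &c,
                      const Vector<float32_t, NB> &d,
                      float32_t scale)
{
    // The whole expression is evaluated in a single loop:
    // no temporary vector is created for scale*(a+b) or c*d.
    return dot(scale * (a + b), c * d);
}
```
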
### Vector operations

Let's look at another example:

\f[

\overrightarrow{d} = \overrightarrow{a} + \overrightarrow{b} * \overrightarrow{c}

\f]

With the C++ library, it can be written as:


```cpp
Vector<float32_t,NB> d = a + b * c;
```

Here again, all the vector operations (`+`, `*`) are done in one pass with a single loop, and there is no temporary buffer anymore.
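
Because the vector operators are templates, the same expression also addresses limitation 1 from the dot product example: there is no `_f32` suffix to edit when the element type changes. A minimal sketch, assuming the library's `Vector` template supports the element types you need in the same way it supports `float32_t`:

```cpp
// Generic helper: the same fused expression for any supported element type T.
// Which element types are supported is an assumption to check against the
// library documentation.
template<typename T, int NB>
Vector<T, NB> fused_madd(const Vector<T, NB> &a,
                         const Vector<T, NB> &b,
                         const Vector<T, NB> &c)
{
    // Still one loop and no temporary buffer, whatever T is.
    return a + b * c;
}
```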

If you're coming from C and don't know anything about C++ templates, we have a very quick introduction: @ref dsppp_template "The minimum you need to know about C++ templates to use this library".

You can also jump directly to an @ref dsppp_vector_example "example with vector operations".