# TEST FRAMEWORK

This framework is for our own internal use. We decided to release it but, at least in the short term, we won't provide any help or support for it.

## Summary

Here is a quick summary of how to get started with the framework the first time the repository is cloned.

First, you must use the same tag as the one used for your CMSIS Pack. Otherwise, the cloned sources may contain tests for functions which are not yet available in the official pack.

You can also look at the artifact for the commit: it contains a CMSIS Pack built from that commit.

You need `Python 3` and the following Python packages:

```
pip install pyparsing
pip install Colorama
```

Once you have cloned the right version and installed the Python packages, you need to generate some files.

**Generation of all C files needed to build the tests.**

The commands must be run from the `Testing` folder:

`createDefaultFolder.sh`

`python preprocess.py -f desc.txt`

`python preprocess.py -f desc_f16.txt -o Output_f16.pickle`

`python processTests.py -e`

`python processTests.py -e -f Output_f16.pickle`

**Now the test suite you want to run can be selected:**

`python processTests.py -e BasicTestsF32`

Each time you want to change the test suite to run, you need to execute this command again. There is no need to redo all the previous file-generation steps.

Note that if the test suite is part of the half-float (f16) tests, then you'll need to run instead:

`python processTests.py -f Output_f16.pickle -e BasicTestsF16`

**Building the test framework:**

In `Testing\cmsis_build` you can find some scripts:

* `buildsolution.sh` converts the solution file, using `csolution`, to generate the `.cprj` files for each target
* `build.sh` builds all the targets using the `cbuild` tool

The CMSIS build tools must be installed and configured. You may need to run the CMSIS build tools setup script before the previous steps:

`source /cmsistools/etc/setup`

and you may need to add the path to the `csolution` tool:

`export PATH=$PATH:/cmsistools/bin/linux64`

(If you are on Windows, use the `bin/windows64` folder.)

You may need to initialize the pack repository and install the needed packs:

`cpackget init`

`cpackget add -f test_packlist.txt`

The `test_packlist.txt` file is in the `Testing\cmsis_build` folder.

**You can then run the executable on Virtual Hardware.**

For instance, to run the tests on the virtual hardware for Corstone 300, if you have the Arm MDK installed on Windows:

`C:\Keil_v5\ARM\VHT\VHT_Corstone_SSE-300_Ethos-U55.exe -f configs\ARM_VHT_Corstone_300_config.txt Objects\test.Release+VHT-Corstone-300.axf > results.txt`

**Parsing the results:**

If you are still in the `cmsis_build` folder:

`python ../processResult.py -f ../Output.pickle -e -r results.txt`

## REQUIREMENTS

Requirements for the test framework.

### Test descriptions

#### R1 : The tests shall be described in a file
We need a single source of truth describing all the tests, which can be used
to generate code, format output, etc.

#### R2 : The test description should support a hierarchy
We have lots of tests. We need to be able to organize them in a
hierarchical way.

#### R3 : A test shall be uniquely identified
We need a way to identify each test uniquely, to ensure traceability and make it
possible to build a history of test and benchmark results.

#### R4 : The unique identifier shall not change when tests are added or removed.
It is important to keep traceability.

#### R5 : The test description shall list the test patterns and input patterns required by the tests

#### R6 : It shall be possible to parametrize the tests

For benchmarking, we may need to vary some dimensions of the tests (like the input length).
The tests may depend on several parameters (width, height, etc.).
We need to be able to specify how those parameters are varied.

#### R7 : It shall be possible to specify a subset of parameters (which could be empty) to compute a regression.
For instance, if our test depends on a vector size, we may want to compute a linear regression
to know how the performance depends on the vector size.

But our test may also depend on another parameter B which does not interest us for the regression. In that case, the regression formula should not take B into account, and we would get a separate regression formula for each value of the parameter B.

The parameters of the tests would be Vector Size and B, but the Summary parameters would contain only Vector Size.

#### R8 : The concept of a test suite shall be supported.
A test suite is a set of tests packaged with some data.

### Test execution

For the following requirements, we define the device under test (DUT) as the place where the function to test is executed. But the test itself (the check that the execution has been successful) could run on the DUT or on a host like a PC.


#### R9 : The memory should be cleaned between 2 tests
A test should start (as far as possible) in a clean state. There should be no interference between tests.

#### R10 : The test may be run on the DUT or on the host

#### R11 : Output of tested functions could be dumped or not

#### R12 : The tested function should not know where the patterns are or how to get them

#### R13 : Control of the tests could run on the DUT but could also be run on a host

#### R14 : Summary of test execution shall support several formats
(CSV, HTML, text, etc.)

#### R15 : One should not assume the test environment on the DUT has access to IOs.

## DESIGN PRINCIPLES

The design is a consequence of all the requirements.

### Test description

A test description file is defined with a specific syntax to support R1 to R8.

#### Hierarchical structure

    group Root {
        class = Root
        group DSP Test {
            class = DSPTest
            folder = DSP
            suite Basic Tests {
               class = BasicTests
               folder = BasicMaths

The tests are organized in a hierarchy. For each node of the hierarchy, a C++ class is specified.
The script processTests.py generates the C++ code for a group.
For a test suite, the script generates only a partial implementation, since a test suite contains tests and you need to add the tests themselves.

The patterns, test outputs and parameters also follow a hierarchical structure. But they do not need
to be organized in exactly the same way. So, the folder property of a node is optional.

A folder can be reused for different nodes. For instance, you may have a suite for testing and one for benchmarking, and both may use the same pattern folder, as in the sketch below.
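
A minimal sketch of this reuse (the suite names and contents are hypothetical, only the repeated folder property matters here):

    suite Basic Tests {
        class = BasicTests
        folder = BasicMaths
        ...
    }

    suite Basic Benchmarks {
        class = BasicBenchmarks
        folder = BasicMaths
        ...
    }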

A test suite is more complex than a group since it contains the description of the tests and related information.

#### Test suite

The simplest suite just contains functions:

    suite Basic Tests {
           class = BasicTests
           folder = BasicMaths

           Functions {
             Test arm_add_f32:test_add_f32
           }
    }

A function is described with some text followed by the name of the function in the C++ class.
The text is used when reporting the results of the tests.

The same function can be used for different tests in the suite. The tests will be different because of different input data or parameters.

A test requires input patterns, reference patterns and outputs (to be compared to the references).
Since the test must not know where the data is or how to get it, this information is provided in the test description file.

So, the test suite would be:

    suite Basic Tests {
           class = BasicTests
           folder = BasicMaths

           Pattern INPUT1_F32_ID : Input1_f32.txt
           Pattern INPUT2_F32_ID : Input2_f32.txt
           Pattern REF_ADD_F32_ID : Reference1_f32.txt
           Output  OUT_SAMPLES_F32_ID : Output

           Functions {
             Test arm_add_f32:test_add_f32
           }
    }

A pattern or output description is an ID (to be used in the code) followed by a filename.

The file is located in the folder defined by the folder properties of the enclosing groups / suite.

The root folder for pattern files and the root folder for output files are different.
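
For example, with the folder properties shown above (DSP for the group and BasicMaths for the suite), a pattern would typically be resolved to a path like the following (the root folder name is illustrative; it depends on how the framework is configured):

    <pattern root>/DSP/BasicMaths/Input1_f32.txt

and the dumped outputs follow the same DSP/BasicMaths layout under the output root.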

#### Benchmarks

A benchmark will often have to be run with different lengths for the input.
So we need a way to communicate arguments to a function.

We make the assumption that those arguments are integers.
In the benchmark results, we may want to generate a CSV (or any other format) with different columns for those arguments.

And we may want to compute a regression formula using only a subset of those arguments.

So, in the suite section, it is possible to add a parameter section describing all of this.

    suite Complex Tests {
            class = ComplexTests
            folder = ComplexMaths

            ParamList {
                A,B,C
                Summary A,B
                Names "Param A", "Param B"
                Formula "A*B"
            }

            Pattern INPUT1_F32_ID : Input1_f32.txt

In the above example, we declare that the functions of the suite use 3 parameters named A, B and C.
We declare that a regression formula will use only A and B. So, for each value of C, we will get a different
regression formula.

We list the names to use when formatting the output of benchmarks.
We define a regression formula using R syntax (we do not write "cycles ~ A*B" but only "A*B"). In R syntax, A*B expands to the main effects plus their interaction: A + B + A:B.

Once parameters have been described, we need a way to feed parameter values to a test.

There are 2 ways. The first way is a parameter file. The problem with a parameter file is that it may be big when it has to be included in the test (as a C array). So, we also have parameter generators. They are less flexible but sufficient for a lot of cases.

Parameter values, when specified with a file, are described with:

            Output  OUT_SAMPLES_F32_ID : Output
            Params PARAM1_ID : Params1.txt

They follow the outputs section and use a similar syntax.

When the parameters are specified with a generator, the syntax is:

    Params PARAM3_ID = {
                A = [1,3,5]
                B = [1,3,5]
                C = [1,3,5]
            }

This generator will compute the cartesian product of the 3 lists, giving 3 × 3 × 3 = 27 parameter combinations.
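
As an illustration (assuming the last parameter varies fastest, which is consistent with the parameter file example shown later), the first generated combinations would be:

    A=1 B=1 C=1
    A=1 B=1 C=3
    A=1 B=1 C=5
    A=1 B=3 C=1
    ...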

To use parameters with a function, the syntax is:

    Functions {
               Test A:testA -> PARAM3_ID
            } -> PARAM1_ID

PARAM1_ID is the default applied to all functions.
In this example, we decide to use PARAM3_ID for the testA function.

#### File formats
Pattern files have the following format:

    W
    128
    // 1.150898
    0x3f93509c
    ...

The first line is the word size (W, H or B).
The second line is the number of samples.
Then, for each sample, we have a human-readable representation of the value:
// 1.150898

and a hexadecimal representation:
0x3f93509c

Output files contain only the hexadecimal values.
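
The hexadecimal value is simply the IEEE-754 bit pattern of the sample. A small, self-contained sketch of the conversion (not part of the framework, just for illustration):

```cpp
#include <cstdint>
#include <cstring>
#include <cstdio>

// Reinterpret a 32-bit word from a pattern file as the float it encodes.
static float wordToFloat(uint32_t w)
{
    float f;
    std::memcpy(&f, &w, sizeof(f));
    return f;
}

int main()
{
    // 0x3f93509c is the bit pattern of the sample shown above.
    std::printf("%f\n", wordToFloat(0x3f93509cU)); // prints 1.150898
    return 0;
}
```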

Parameter files have the following format:

    81
    1
    1
    1
    1
    1
    3
    ...

The first line is the number of samples. Then the samples follow.

The number on the first line must be a multiple of the number of parameters. In the above example we have 3 parameters A, B and C, so the total number of samples must be a multiple of 3 since each run needs a value for every parameter (81 samples correspond to 27 runs).

#### disabled

Any node (Group, Suite or Function) can be disabled by using `disabled { ... }`.

A disabled group/suite/test is not executed (and, for groups and suites, its code is not generated).
Using disabled on a test makes it possible to disable a test without changing the test IDs of the following tests.
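
For example, a sketch of disabling a whole suite (the exact placement of the disabled block mirrors the node you want to disable):

    disabled {
        suite Basic Tests {
            class = BasicTests
            folder = BasicMaths
            ...
        }
    }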

### Memory manager
The memory manager comes from requirement R9.
Its API is defined by the virtual class Memory. An implementation, ArrayMemory, is provided which uses a buffer.
The details of the API are in Test.h.

A memory manager can provide new buffers, free all the already allocated buffers, and report a generation number which is incremented each time all buffers are released.
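
As a rough illustration only (the authoritative declarations are in Test.h; the names and signatures below are hypothetical), such an interface could look like:

```cpp
#include <cstddef>
#include <cstdint>

// Hypothetical sketch of a memory manager interface; the real one is
// declared in Test.h and may differ in names and signatures.
class Memory
{
public:
    virtual ~Memory() {}
    // Return a pointer to a new buffer of nbBytes bytes (or nullptr on failure).
    virtual char *NewBuffer(std::size_t nbBytes) = 0;
    // Release every buffer allocated so far and increment the generation number.
    virtual void FreeAll() = 0;
    // Generation number, used by patterns to detect stale buffers.
    virtual uint32_t generation() const = 0;
};
```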

#### Runner
According to R13, the tests may be controlled on the DUT or from an external host.
This is implemented with a Runner class. The only implementation provided is IORunner.

A Runner is just an implementation of the visitor pattern: a runner is applied to the tree of tests.
In the case of the IORunner, an IO mechanism and a memory manager must be provided.

The runner runs a test and, for benchmarks, measures the cycles.
Cycle measurement can be based on an internal counter or an external trace.
Generally, there is a calibration at the beginning of the Runner to estimate the overhead of the
cycle measurement itself. This overhead is then subtracted from the measurements.
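
The idea behind the calibration is sketched below (the counter function is a placeholder, not the framework's actual API; on the DUT it would typically read a hardware cycle counter):

```cpp
#include <cstdint>

// Placeholder for the platform's cycle counter; replace with the real
// counter on the DUT.
static inline uint32_t readCycleCounter()
{
    static uint32_t fake = 0;
    return fake += 10;
}

static uint32_t overhead = 0;

// Estimate the cost of the measurement itself.
static void calibrate()
{
    uint32_t start = readCycleCounter();
    uint32_t end   = readCycleCounter();
    overhead = end - start;
}

// Measure a function under test and remove the measurement overhead.
template<typename F>
uint32_t measure(F functionUnderTest)
{
    uint32_t start = readCycleCounter();
    functionUnderTest();
    uint32_t end = readCycleCounter();
    return (end - start) - overhead;
}
```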

#### IO
According to R12 and R15, tests do not know how to access patterns. This responsibility is implemented by IO, Pattern and PatternMgr.

IO is about loading patterns and dumping outputs. It is not about IO in general.
We provide 2 IO implementations: Semihosting and FPGA.

FPGA is for when you need to run the tests in a constrained environment where you only have stdout. The test inputs are embedded in C arrays, and the script processTests.py generates those C arrays.

Pattern is the interface to patterns and outputs from the test's point of view.
A pattern will return NULL when it still references a memory generation older than the current one.

PatternMgr is the link between IO and Memory and knows how to load a pattern and save it into memory.

#### Dump feature
According to R10 and R11, one must be able to disable the checks done on the DUT and dump the outputs so that the check itself can be done on the host.
When instantiating a runner, you can specify the running mode with an enum, for instance Testing::kTestAndDump.
There are 3 modes: test only, dump only, and test and dump.

In dump-only mode, assertions using reference patterns will fail, but the tests will be reported as passed (because we are only interested in the outputs).

But it means that no assertion using patterns should be placed in the middle of a test, or some part of the test may not be executed. Those assertions must be kept at the end.

#### processResult
For R14, we have a Python script which processes the test results and formats them into several possible formats like text, CSV or a Mathematica dataset.

## HOW TO RUN IT

### Needed packages

    pip install pyparsing
    pip install Colorama

If you want to compute summary statistics with regression:

    pip install statsmodels
    pip install numpy
    pip install pandas

If you want to run the script which launches all the tests on all possible configurations, then
you'll need yaml:

    pip install pyyaml

### Generate the test patterns in the Patterns folder

We have archived a lot of test patterns on GitHub, so this step is needed only if you write new test patterns.

    cd Testing
    python PatternGeneration\BasicMaths.py


### Generate the cpp, h and txt files from the desc.txt file

The first time the project is cloned from GitHub, you'll need to create some missing folders, as done
in the script `createDefaultFolder.sh`.

Those folders are used to contain the files generated by the scripts.

Once those folders have been created, you can use the following commands to create the generated C files.

    cd ..

    python preprocess.py -f desc.txt

This will create a file `Output.pickle` containing a Python object representing
the parsed data structure. This is done because parsing a big test description file is quite slow.

So, it only needs to be done once, or whenever you modify the test description file.

Then, the tests can be processed to configure the test environment with

    python processTests.py -f Output.pickle

or just

    python processTests.py

You can also use the -e option (for embedded). It will include all the patterns (for the selected tests) into a C array. It is the **preferred** method if you want to run on a board. In the examples below, we will
always use the -e option.

    python processTests.py -e

You can pass a C++ class name to specify that you want to generate tests only for a specific group or suite.

    python processTests.py -e BasicTests

You can add a test ID to specify that you want to run only a specific test of the suite:

    python processTests.py -e BasicTests 4

#### Important:

The very first time you configure the test framework, you'll need to generate the C files for all the tests.

The reason is that the build is not aware of the filtering and will include some source files even if they are not needed for a given test suite. So those files must at least be present for the compilation to proceed, and they therefore need to be generated at least once.

To generate all of them the first time, you can run (from the `Testing` folder):

`python preprocess.py -f desc.txt`

`python preprocess.py -f desc_f16.txt -o Output_f16.pickle`

`python processTests.py -e`

`python processTests.py -e -f Output_f16.pickle`

### Building and Running

You can use the [CMSIS build tools](https://github.com/Open-CMSIS-Pack/devtools) to build.

In `Testing\cmsis_build` you can find some scripts:

* `buildsolution.sh` converts the solution file, using `csolution`, to generate the `.cprj` files for each target
* `build.sh` builds all the targets using the `cbuild` tool

You can then run the executable on Virtual Hardware. For instance, to run the tests on the MPS2 Cortex-M55 FVP, if you have the Arm MDK installed on Windows:

`C:\Keil_v5\ARM\FVP\MPS2_Cortex-M\FVP_MPS2_Cortex-M55_MDK.exe ^
   -f configs/ARM_VHT_MPS2_M55_config.txt ^
   Objects\test.Release+FVP_M55.axf > results.txt`

### Parse the results

The results generated in the previous step can be processed with a Python script.

The `-f` option should be used to tell the script where to find the `Output.pickle` file if the script is not run from the `Testing` folder.

    python processResult.py -f Output.pickle -e -r result.txt

The -e option is needed if the -e mode was used with processTests.py, because the output has a different
format with or without the -e option.


Some cycles are displayed with the test status (passed or failed). **Don't trust** those cycles for a benchmark.

At this point they are only an indication. The timing code will have to be tested and validated.

### Generate summary statistics

Parsing the results may have generated some statistics in the FullBenchmark folder.

The script summaryBench.py can parse those results and compute the regression formulas.

    python summaryBench.py -r build\result.txt

The file result.txt must be placed inside the build folder for this script to work.
Indeed, this script uses the path to result.txt to also find the file currentConfig.csv which has
been created by the cmake command.

The Output.pickle file is used by default. It can be changed with the -f option.

The output of this script may look like:

    "ID","CATEGORY","Param C","Regression","MAX"
    1,"DSP:ComplexMaths",1,"225.3749999999999 + A * 0.7083333333333606 + B * 0.7083333333333641 + A*B * 1.3749999999999876",260

Each test is uniquely identified by the CATEGORY and test ID (its ID in the suite).
The MAX column is the maximum number of cycles measured over all values of A and B used for this benchmark.

### Other tools

To convert some benchmarks to an older format (the PARAMS must be compatible between all suites which are children of AGroup):

    python convertToOld.py -e AGroup

Output.pickle is used by default. It can be changed with the -f option.

To add results to an sqlite3 database:

    python addToDB.py -e AGroup

Output.pickle is used by default. It can be changed with the -f option.

The database must be created with createDb.sql before this script can be used.

### Semihosting or FPGA mode
The scripts processTests.py and processResult.py must be used with the additional option -e for the FPGA (embedded) mode.

`testmain.cpp`, in semihosting mode, must contain:

```cpp
Client::Semihosting io("../TestDesc.txt","../Patterns","../Output");
```

In FPGA (embedded) mode, this line must be replaced with:

```cpp
Client::FPGA io(testDesc,patterns);
```

testDesc and patterns are char* arrays generated by the script processTests.py, containing the description
of the tests to run and the test pattern samples to be used.

### Dumping outputs

To dump the output of the tests, the line

```cpp
Client::IORunner runner(&io,&mgr,Testing::kTestOnly);
```

must be replaced by

```cpp
Client::IORunner runner(&io,&mgr,Testing::kDumpOnly);
```

or

```cpp
Client::IORunner runner(&io,&mgr,Testing::kTestAndDump);
```

And of course, the test must contain a line to dump the outputs.

In dump-only mode, reference patterns are not loaded and the test assertions are "failing" but report passed.

So, if an assertion is placed in the middle of useful code, some part of the code will not execute.

As a consequence, if you intend to use the dump-only mode, you must ensure that all test assertions are at the
end of your test.

## testmain.cpp

To start the tests you need to:

* Allocate a memory manager
* Choose an IO (Semihosting or FPGA)
* Instantiate a pattern manager (linking IO and memory)
* Choose a test Runner (IORunner)
* Instantiate the root object which contains all the tests
* Apply the runner to the root object

This is done in testmain.cpp; a rough sketch is shown below.
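
The following sketch is only an approximation of how these pieces fit together (header names, the ArrayMemory and PatternMgr constructors, the Root constructor and the accept method are assumptions; refer to the real testmain.cpp for the authoritative version):

```cpp
// Hypothetical sketch of a semihosting testmain.cpp; names flagged below are assumptions.
#include "Test.h"          // assumed header names
#include "ArrayMemory.h"
#include "Semihosting.h"
#include "PatternMgr.h"
#include "IORunner.h"
#include "Root.h"

int main()
{
    // 1. Memory manager backed by a static buffer (the size is an arbitrary example).
    static char memoryBuffer[200 * 1024];
    Client::ArrayMemory memory(memoryBuffer, sizeof(memoryBuffer));   // assumed constructor

    // 2. IO: semihosting paths as shown earlier in this README.
    Client::Semihosting io("../TestDesc.txt", "../Patterns", "../Output");

    // 3. Pattern manager linking IO and memory.
    Client::PatternMgr mgr(&io, &memory);                             // assumed constructor

    // 4. Runner controlling the tests (test-only mode).
    Client::IORunner runner(&io, &mgr, Testing::kTestOnly);

    // 5. Root object containing all the generated tests (assumed constructor).
    Root root(1);

    // 6. Apply the runner (visitor) to the tree of tests (assumed method name).
    root.accept(&runner);

    return 0;
}
```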

## HOW TO ADD NEW TESTS

For a test suite MyClass, the scripts generate an include file MyClass_decl.h.

You should create another include file Include/MyClass.h and another cpp file Source/MyClass.cpp in the Testing folder.

MyClass.h should contain:

```cpp
 #include "Test.h"
 #include "Pattern.h"
 class MyClass:public Client::Suite
     {
         public:
             MyClass(Testing::testID_t id);
             void setUp(Testing::testID_t,std::vector<Testing::param_t>& params,Client::PatternMgr *mgr);
             void tearDown(Testing::testID_t,Client::PatternMgr *mgr);
         private:
             #include "MyClass_decl.h"

             // Definitions of the patterns you have in the test description file
             // for this test suite
             Client::Pattern<float32_t> input1;
             Client::Pattern<float32_t> input2;
             Client::LocalPattern<float32_t> output;
             // Reference patterns are not loaded when we are in dump mode
             Client::RefPattern<float32_t> ref;
     };
```

Then, you should provide an implementation of setUp, tearDown and of course your tests.

So, MyClass.cpp could be:

```cpp
 #include "MyClass.h"
 #include "Error.h"


     // Implementation of your test
     void MyClass::test_add_f32()
     {
         // Ptr to input patterns, references and output.
         // Input and references have been loaded in setUp
         const float32_t *inp1=input1.ptr();
         const float32_t *inp2=input2.ptr();
         float32_t *refp=ref.ptr();
         float32_t *outp=output.ptr();

         // Execution of the tests
         arm_add_f32(inp1,inp2,outp,input1.nbSamples());


         // Testing.
         // Warning : in case of benchmarking this will be taken into account in the
         // benchmark. So a benchmark should not contain tests.
         ASSERT_NEAR_EQ(ref,output,(float)1e-6);

     }
```

Warning: in the case of a benchmark, the xxx.ptr() function calls should be done in the setUp function because they have an overhead.

If you use a regression formula, this overhead will modify the intercept but the coefficient of the highest-degree
term should not be changed.

Then setUp should load the patterns:

```cpp
 void MyClass::setUp(Testing::testID_t id,std::vector<Testing::param_t>& params,Client::PatternMgr *mgr)
     {

        Testing::nbSamples_t nb=MAX_NB_SAMPLES;

        // We can load different patterns or lengths according to the test ID
        switch(id)
        {
         case MyClass::TEST_ADD_F32_1:
           nb = 3;
           ref.reload(MyClass::REF_ADD_F32_ID,mgr,nb);
           break;
         }

       input1.reload(MyClass::INPUT1_F32_ID,mgr,nb);
       input2.reload(MyClass::INPUT2_F32_ID,mgr,nb);

       output.create(input1.nbSamples(),MyClass::OUT_SAMPLES_F32_ID,mgr);
    }
```

In tearDown, we have to clean up after the test. There is no need to free the buffers since the memory manager will do it
automatically. But if other allocations were done outside of the memory manager, then the cleanup should be done here.

It is also here that you specify what you want to dump if you're in dump mode.

```cpp
    void MyClass::tearDown(Testing::testID_t id,Client::PatternMgr *mgr)
    {
        output.dump(mgr);
    }
```

## Benchmarks and database

### Creating and filling the databases

To add results to an sqlite3 database:

    python addToDB.py AGroup

Output.pickle is used by default. It can be changed with the -f option.

AGroup should be the class name of a group in desc.txt.

The suites in this group should be compatible and have the same parameters.

For instance, we have a BasicBenchmarks group in desc.txt.
This group contains the suites BasicMathsBenchmarksF32, BasicMathsBenchmarksQ31, BasicMathsBenchmarksQ15 and BasicMathsBenchmarksQ7.

Each suite defines the same parameter: NB.

If you use:

    python addToDB.py BasicBenchmarks

Output.pickle is used by default. It can be changed with the -f option.

A table BasicBenchmarks will be created, and the benchmark results for F32, Q31, Q15 and Q7 will be added to this table.

But, if you do:

    python addToDB.py BasicMathsBenchmarksF32

then a table BasicMathsBenchmarksF32 will be created, which is probably not what you want since the table contains a type column (f32, q31, q15, q7).

The script addToRegDB.py works on the same principle but uses the regression CSV to fill a regression database.

To create an empty database you can use (for the default database):

    sqlite3.exe bench.db < createDb.sql

And for the regression database:

    sqlite3.exe reg.db < createDb.sql

The Python scripts use bench.db and reg.db as the default names for the databases.

### Processing the database

The database schema (defined in createDb.sql) creates several columns for the fields which are common to a lot of rows, like core, compiler, compiler version, datatype, etc.

This makes it easier to change the name of this additional information, and it makes the database smaller.

But it means that, to display the tables in a format readable by the user, some joins are needed.

examples.sql and diff.sql show some examples.

examples.sql : how to do simple queries and joins with the configuration columns to get a readable format.

diff.sql : how to compute a performance ratio (max cycles and regression) based on a reference core (which could be extended to a reference configuration if needed).

## HOW TO EXTEND IT

## FLOAT16 support

With the Arm AC5 compiler, the \_\_fp16 type (float16_t in CMSIS-DSP) can't be used as an argument or return value of a function.

Pointers to \_\_fp16 arrays are allowed.

In CMSIS-DSP, we want to keep the possibility of having float16_t as an argument.

As a consequence:

* the functions using float16_t in the API won't be supported by the AC5 compiler.
* The corresponding float16_t tests are put in a different test description file, desc_f16.txt.
* Code for those float16_t tests is not built when the ac5.cmake toolchain is used.
* The BasicMath cmake has been modified to show how to avoid including float16 code
when building with the ac5.cmake toolchain.

In the current example, we assume that all float16_t code and tests are not supported by AC5, just to
show how the cmake must be modified.