              Thread-Metric RTOS Test Suite


1. Thread-Metric Test Suite

The Thread-Metric test suite consists of 8 distinct RTOS
tests that are designed to highlight commonly used aspects
of an RTOS. Each test measures the total number of RTOS events
that can be processed during a specific time interval. A 30
second time interval is recommended.

1.1. Basic Processing Test

This is the baseline test, consisting of a single thread. It
should execute the same on every operating system. Results
obtained with different RTOS products should be scaled relative
to the difference between their values for this baseline test.

1.2. Cooperative Scheduling Test

This test consists of 5 threads created at the same priority that
voluntarily release control to each other in a round-robin fashion.
Each thread will increment its run counter and then relinquish to
the next thread.  At the end of the test the counters will be verified
to make sure they are valid (should all be within 1 of the same
value). If valid, the numbers will be summed and presented as the
result of the cooperative scheduling test.

1.3. Preemptive Scheduling Test

This test consists of 5 threads that each have a unique priority.
In this test, all threads except the lowest priority thread are
left in a suspended state. The lowest priority thread will resume
the next highest priority thread.  That thread will resume the
next highest priority thread and so on until the highest priority
thread executes. Each thread will increment its run count and then
call thread suspend.  Eventually the processing will return to the
lowest priority thread, which is still in the middle of the thread
resume call. Once processing returns to the lowest priority thread,
it will increment its run counter and once again resume the next
highest priority thread - starting the whole process over once again.

1.4. Interrupt Processing Test

This test consists of a single thread. The thread will cause an
interrupt (typically implemented as a trap), which will result in
a call to the interrupt handler. The interrupt handler will
increment a counter and then post to a semaphore. After the
interrupt handler completes, processing returns to the test
thread that initiated the interrupt. The thread then retrieves
the semaphore set by the interrupt handler, increments a counter
and then generates another interrupt.

1.5. Interrupt Preemption Processing Test

This test is similar to the previous interrupt test. The key
difference is that the interrupt handler in this test resumes a
higher priority thread, which causes thread preemption.

1.6. Message Processing Test

This test consists of a thread sending a 16 byte message to a
queue and retrieving the same 16 byte message from the queue.
After the send/receive sequence is complete, the thread will
increment its run counter.

1.7. Synchronization Processing Test

This test consists of a thread getting a semaphore and then
immediately releasing it. After the get/put cycle completes,
the thread will increment its run counter.

1.8. Memory Allocation Test

This test consists of a thread allocating a 128-byte block and
releasing the same block. After the block is released, the thread
will increment its run counter.

2. Zephyr Modifications

A few minor modifications have been made to the Thread-Metric source
code to resolve issues found during porting.

2.1. tm_main() -> main()

Zephyr's version of this benchmark has modified the original tm_main()
to become main().

2.2. Thread entry points

Thread entry points used by Zephyr have a different function signature
than that used by the original Thread-Metric code. These functions
have been updated to match Zephyr's.

2.3. Reporting thread

Zephyr's version does not spawn a reporting thread. Instead it calls
the reporting function directly. This helps ensure that the test
operates correctly on QEMU platforms.

2.4. Directory structure

Each test has been converted to its own project. This has necessitated
some minor changes to the directory structure as compared to the
original version of this benchmark.

The source code to the Thread-Metric test suite is organized into
the following files:

         File                                           Meaning

tm_basic_processing_test.c                    Basic test for determining board
                                                 processing capabilities
tm_cooperative_scheduling_test.c              Cooperative scheduling test
tm_preemptive_scheduling_test.c               Preemptive scheduling test
tm_interrupt_processing_test.c                No-preemption interrupt processing
                                                 test
tm_interrupt_preemption_processing_test.c     Interrupt preemption processing
                                                 test
tm_message_processing_test.c                  Message exchange processing test
tm_synchronization_processing_test.c          Semaphore get/put processing test
tm_memory_allocation_test.c                   Basic memory allocation test
tm_porting_layer_zephyr.c                     Specific porting layer source
                                                 code for Zephyr

2.5. Test execution with Twister tool

When the test suite is executed by Twister, it takes parameters from the
testcase.yaml file, in particular:

    * check for the expected benchmark output at least three times to collect
      measurements from 3 consecutive intervals for each of the benchmark tests.

    * use a 300 sec. timeout on each benchmark test from this suite;
      this is expected to be at least twice as long as normally needed
      to collect measurements 3 times with 30 sec. intervals on most
      platforms except some simulators.

    * parse the benchmark output to extract measurements and detect errors
      when they occur, e.g. counters diverging from the average; Twister
      records this data in the twister.json and recording.csv report files
      for analysis.

For more details see the Twister testcase.yaml documentation and the
'harness_config:' parameters.


3. Porting

3.1. Porting Layer

The porting layer defined in tm_porting_layer_template.c contains
shell services for the generic RTOS services used by the actual tests. The
shell services provide the mapping between the tests and the underlying RTOS.
The following generic APIs are required to map any RTOS to the performance
measurement tests:


    void  tm_initialize(void (*test_initialization_function)(void));

    This function is typically called by the application from its
    main() function. It is responsible for providing all the RTOS
    initialization, calling the test initialization function as
    specified, and then starting the RTOS.

    int  tm_thread_create(int thread_id, int priority, void (*entry_function)(void));

    This function creates a thread of the specified priority, where 1 is
    the highest and 16 is the lowest.  If successful, TM_SUCCESS is
    returned. If an error occurs, TM_ERROR is returned. The created thread
    is not started.

    int  tm_thread_resume(int thread_id);

    This function resumes the previously created thread specified by
    thread_id. If successful, TM_SUCCESS is returned.

    int  tm_thread_suspend(int thread_id);

    This function suspends the previously created thread specified by
    thread_id. If successful, TM_SUCCESS is returned.

    void  tm_thread_relinquish(void);

    This function lets all other threads of the same priority execute
    before the calling thread runs again.

    void  tm_thread_sleep(int seconds);

    This function suspends the calling thread for the specified
    number of seconds.

    int  tm_queue_create(int queue_id);

    This function creates a queue with a capacity to hold at least
    one 16-byte message. If successful, TM_SUCCESS is returned.

    int  tm_queue_send(int queue_id, unsigned long *message_ptr);

    This function sends a message to the previously created queue.
    If successful, TM_SUCCESS is returned.

    int  tm_queue_receive(int queue_id, unsigned long *message_ptr);

    This function receives a message from the previously created
    queue. If successful, TM_SUCCESS is returned.

    int  tm_semaphore_create(int semaphore_id);

    This function creates a binary semaphore. If successful,
    TM_SUCCESS is returned.

    int  tm_semaphore_get(int semaphore_id);

    This function gets the previously created binary semaphore.
    If successful, TM_SUCCESS is returned.

    int  tm_semaphore_put(int semaphore_id);

    This function puts the previously created binary semaphore.
    If successful, TM_SUCCESS is returned.

    int  tm_memory_pool_create(int pool_id);

    This function creates a memory pool able to satisfy at least one
    128-byte block of memory. If successful, TM_SUCCESS is returned.

    int  tm_memory_pool_allocate(int pool_id, unsigned char **memory_ptr);

    This function allocates a 128-byte block of memory from the
    previously created memory pool. If successful, TM_SUCCESS
    is returned along with the pointer to the allocated memory
    in the "memory_ptr" variable.

    int  tm_memory_pool_deallocate(int pool_id, unsigned char *memory_ptr);

    This function releases the previously allocated 128-byte block
    of memory. If successful, TM_SUCCESS is returned.


3.2. Porting Requirements Checklist

The following requirements must be met in order to ensure fair benchmark
results are achieved on each RTOS performing the test:

    1. The time period should be 30 seconds. This will ensure the printf
       processing in the reporting thread is insignificant.

    *  Zephyr : Requirement met.

    2. The porting layer services are implemented inside of
       tm_porting_layer_[RTOS].c and NOT as macros.

    *  Zephyr : Requirement met.

    3. The tm_thread_sleep service is implemented by a 10ms RTOS
       periodic interrupt source.

    *  Zephyr : Requirement met. System tick rate = 100 Hz.

    4. Locking regions of the tests and/or the RTOS in cache is
       not allowed.

    *  Zephyr : Requirement met.

    5. The Interrupt Processing and Interrupt Preemption Processing tests
       require an instruction that generates an interrupt. Please refer
       to tm_porting_layer.h for an example implementation.

    *  Zephyr : Requirement met. See irq_offload().
270