
How to execute the run-time tests
---------------------------------


1. Install the tests (see HOW_TO_INSTALL).


2. To configure the required/desired set of test cases and the modes of
   execution, manually edit the settings file:

        aslts/bin/settings

   If necessary, tune the test suite to your current needs by setting the
   variables SETN and run4. These variables control the test suite options
   and are contained in this file:

        aslts/src/runtime/cntl/runmode.asl

3. Use the Do utility to run the specified set of tests in all specified
   execution modes. It supports automated logging of the results of test
   runs and allows results to be compared. See the comments for the Do
   utility within the Do script file (aslts/bin/Do).

   a) Set the following environment variables:

        ASL - path to iASL compiler: (example)

            > export ASL="c:/acpica/libraries/iasl.exe"

        acpiexec - path to acpiexec utility: (example)

            > export acpiexec="c:/acpica/libraries/acpiexec.exe"

        acpibin - path to acpibin utility: (example)

            > export acpibin="c:/acpica/libraries/acpibin.exe"

        ASLTSDIR - path to the aslts directory: (example)

            > export ASLTSDIR="c:/acpica/tests/aslts"
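
   Before invoking Do, it can save time to verify that these variables
   point at real executables. The following is a hypothetical pre-flight
   sketch, not part of ASLTS; the check_tools name is invented:

```shell
#!/bin/sh
# Hypothetical pre-flight check (not part of ASLTS): verify that the
# tools named by the step 3a environment variables exist and are
# executable before attempting a test run.
check_tools() {
    status=0
    for tool in "$ASL" "$acpiexec" "$acpibin"; do
        if [ ! -x "$tool" ]; then
            echo "missing or not executable: $tool" >&2
            status=1
        fi
    done
    return "$status"
}

# Example: warn early if anything is missing
check_tools || echo "fix the exports above before running Do" >&2
```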

   b) Add the following directory to the PATH variable:

        aslts/bin

   c) If necessary, convert all scripts in the aslts/bin directory to unix
   line endings:

        > d2u aslts/bin/*

   d) Execute "Do" with one of the following commands:
      (Use 'Do 1' to run all tests)

     0 - Compile and install AML tests
     1 - Execute all configured tests in all enabled modes
     2 - Compare the multi-results of two test runs
     3 - Print the names of all available test cases
     4 - Calculate the current state of all bugs and report the summary
         tables
     5 - Prepare bdemo summary files of one multi-results directory for all
         modes
     6 - Concatenate the bdemo summary files of two multi-results


4. If desired, any individual AML test can be generated from within its
   directory by running the iASL compiler on the MAIN.asl file for that test.
   For example:

        > iASL MAIN.asl


5. If desired, any individual AML test can be executed from the aslts/tmp/aml
   directory by invoking the AcpiExec utility with the name of the AML test
   and the batch execute option. For example:

        > cd aslts/tmp/aml
        > acpiexec -b "Execute MAIN" 20090320/nopt/32/arithmetic.aml

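   To exercise several AML tests in a row, the acpiexec invocation above
   can be wrapped in a loop. This is a hypothetical sketch, not part of
   ASLTS (the list_aml_commands helper is invented); it only prints the
   commands, so the real run stays under your control:

```shell
#!/bin/sh
# Hypothetical sketch (not part of ASLTS): print the acpiexec command
# line for every compiled .aml test in a given directory.  Replace
# "echo" with "$acpiexec" to actually execute each test.
list_aml_commands() {
    for aml in "$1"/*.aml; do
        [ -e "$aml" ] || continue    # empty directory: glob stays literal
        echo "$acpiexec" -b "Execute MAIN" "$aml"
    done
}

# Example (dry run), using the directory from the example above:
list_aml_commands "20090320/nopt/32"
```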

6. When all tests are executed in batch mode (Do 1), the individual test
   results are placed in the following directory structure:

    aslts/tmp/RESULTS/
        <date.time.acpica_version>/
           norm/                      // normal interpreter mode, no slack
               32/                    // 32-bit table execution
               64/                    // 64-bit table execution
           slack/                     // interpreter slack mode enabled
               32/                    // 32-bit table execution
               64/                    // 64-bit table execution
           summary                    // test execution summary

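   Since each results directory name begins with the run date, a plain
   lexical sort finds the most recent run. A hypothetical helper sketch
   (the latest_results name and the example date format are assumptions):

```shell
#!/bin/sh
# Hypothetical sketch (not part of ASLTS): pick the newest
# <date.time.acpica_version> directory under aslts/tmp/RESULTS.
# Names that begin with the run date (e.g. 20090320...) sort
# chronologically when sorted lexically.
latest_results() {
    [ -d "$1" ] || return 1
    ls "$1" | sort | tail -n 1
}

# Example:
latest_results "aslts/tmp/RESULTS"
```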

7. After completion, each AML test reports its status as one of the following:

        [PASS|FAIL|BLOCKED|SKIPPED]

   PASS    - Success; no errors encountered in the functionality of the
             product.
   FAIL    - The test encountered errors (improper functionality of the
             product).
   BLOCKED - The test was blocked (was not run). This status is used for
             tests which temporarily cause an abort or hang of execution
             due to errors in the product.
   SKIPPED - The test was skipped (was not run). This status is used in
             cases where the result of the test is undefined under the
             particular conditions.

8. How to evaluate the results of the run-time tests.

   A. Successful run.

   After the run is completed, the following summary lines are displayed
   by ASLTS:

   a) "Run time (in seconds): 0x0000000000000031"
   b) "The total number of exceptions handled: 0x0000000000000005"
   c) "TEST ACPICA: 64-bit : PASS"

   Line (a) shows the run time in seconds measured by the ASL Timer operator.

   Line (b) reports the number of exceptions which took place during the test
   execution.

   Line (c) reports the mode of the run and the summary status:
       Mode is either 32-bit or 64-bit
       Status is one of [PASS|FAIL|BLOCKED|SKIPPED]

   B. Failed run.

   a) "Run time (in seconds): 0x0000000000000031"
   b) "The total number of exceptions handled: 0x0000000000000005"
   c) "TEST ACPICA: 64-bit : FAIL : Errors # 0x0000000000000009"

      The number of errors (9 here) is reported as
      "Errors # 0x0000000000000009".

   C. Example error message:

      "---------- ERROR : 0x000000001903301A, 0x0000000000033017, m503"
      "TITLE            : Miscellaneous named object creation"
      "COLLECTION       : functional"
      "TEST CASE        : name"
      "TEST             : PCG0"
      "ERROR,    file   : package.asl"
      "          index  : 000000000000001A"
      "CHECKING, file   : package.asl"
      "          method : m123"
      "          index  : 0000000000000017"
      "(r):"
                          0x0000000000000025
      "(e):"
                          0x0000000000000027
      "---------- END."

   Explanations:

      0x000000001903301A,
      0x0000000000033017 - the two 32-bit words of the error opcode
                           (see "The layout of the error opcode" below)

      m503 - This is usually the name of the executing control method (which
             in turn is usually a conglomeration of subtests) or some brief
             diagnostic message explaining, designating, or naming the error.

      "TITLE            : The common intention of the test
      "COLLECTION       : Functional/complex/exceptions/..
      "TEST CASE        : The name of the test case (bfield, arithmetic, opackageel, ...)
      "TEST             : The name of the test (the smallest unit reported by the
                          diagnostics and supplied with the status line)
      "ERROR,    file   : The name of the file where the error reporting
                          function (err()) was invoked
      "          index  : Index of the error inside that file where err() was
                          invoked (each invocation of err() has a distinct index)
      "CHECKING, file   : The name of the file where the checking was initiated
      "          method : The name of the method that initiated the checking
      "          index  : Index of the checking inside the file "CHECKING, file"

      (r): - usually, the following value is the received one
      (e): - usually, the following value is the expected one


   D. The errors (currently 200 max) are summarized at the end of the test
      output as follows. Example from the "reference" test case:

      "========= ERRORS SUMMARY (max 200):"
      "reference, ref50.asl,      0000000000000003, ref50.asl, 0000000000000000, m22c"
      "reference, datastproc.asl, 000000000000000F, ref50.asl, 0000000000000001, m22c"
      "reference, ref50.asl,      0000000000000007, ref50.asl, 0000000000000000, m234"
      "reference, ref50.asl,      0000000000000007, ref50.asl, 0000000000000000, m234"
      "reference, datastproc.asl, 0000000000000001, ref50.asl, 0000000000000013, m365"
      "reference, datastproc.asl, 0000000000000001, ref50.asl, 0000000000000015, m365"
      "reference, datastproc.asl, 0000000000000001, ref50.asl, 0000000000000017, m365"
      "========= END."

   Explanations:

      "reference, datastproc.asl, 0000000000000001, ref50.asl, 0000000000000017, m365"

      reference        - The name of the test case
      datastproc.asl   - The name of the file where the error was revealed
                         and reported by invoking err(..,index,..)
      0000000000000001 - Index of the error inside that (datastproc.asl) file
      ref50.asl        - The name of the file where the checking was initiated
      0000000000000017 - Index of that checking inside that (ref50.asl) file
      m365             - Diagnostic message (usually, the name of the method
                         containing the conglomeration of tests)

      For more information, see the file aslts/TESTS.
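
      Because each summary line has a fixed comma-separated shape, the
      errors can also be tallied mechanically. A hypothetical sketch (the
      count_errors_per_file helper is invented, not part of ASLTS) that
      counts errors per reporting file:

```shell
#!/bin/sh
# Hypothetical sketch (not part of ASLTS): given an ERRORS SUMMARY
# block on stdin, count how many errors each reporting file (field 2
# of the comma-separated summary lines) produced.
count_errors_per_file() {
    awk -F', *' '
        NF >= 5 { gsub(/"/, "", $2); count[$2]++ }
        END     { for (f in count) print f, count[f] }
    '
}

# Example, using two of the summary lines shown above:
count_errors_per_file <<'EOF'
reference, ref50.asl,      0000000000000003, ref50.asl, 0000000000000000, m22c
reference, datastproc.asl, 000000000000000F, ref50.asl, 0000000000000001, m22c
EOF
```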


9. The layout of the error opcode (three 32-bit words)

   0xctfffeee
   0xmmzzzuuu
   0xnnnnnnnn

   c - Index of the test collection
   t - Index of the test inside the collection
   f - Absolute index of the file reporting the error
   e - Index of the error (inside the file)
   z - Absolute index of the file initiating the checking
   u - Index of the checking
   n - Name of the method initiating the checking
   m - Miscellaneous:
       1) for TCLD tests, the index of the relevant bug is stored (max 600)

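   The fields can be extracted with ordinary shifts and masks. A sketch
   decoding the two words from the example error message in section 8.C
   (0x1903301A and 0x00033017); the variable names are ad hoc:

```shell
#!/bin/sh
# Sketch: decode the first two 32-bit error opcode words according to
# the 0xctfffeee / 0xmmzzzuuu layout above.  The example values come
# from the error message in section 8.C.
w1=$(( 0x1903301A ))
w2=$(( 0x00033017 ))

c=$(( (w1 >> 28) & 0xF ))      # index of the test collection
t=$(( (w1 >> 24) & 0xF ))      # index of the test inside the collection
f=$(( (w1 >> 12) & 0xFFF ))    # absolute index of the reporting file
e=$((  w1        & 0xFFF ))    # error index: 0x01A, matching "index : ...001A"
m=$(( (w2 >> 24) & 0xFF ))     # miscellaneous (bug index for TCLD tests)
z=$(( (w2 >> 12) & 0xFFF ))    # absolute index of the checking file
u=$((  w2        & 0xFFF ))    # checking index: 0x017, matching "index : ...0017"

printf 'collection=%x test=%x file=%03x error=%03x\n' "$c" "$t" "$f" "$e"
printf 'misc=%02x checking-file=%03x checking-index=%03x\n' "$m" "$z" "$u"
```

   For these example words the decoded error index (0x01A) and checking
   index (0x017) agree with the indices shown in the section 8.C message.
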

How to use the ASL-compilation control test collection
======================================================

The tests that check the ability of the ASL Compiler to detect, report, and
reject incorrect ASL code are contained in this directory:

   aslts/src/compilation/collection

At present, no utility is provided to perform an automated run and
verification of these tests.

The tests contain ASL code with compile-time errors, so no output AML files
are expected. Instead, the ASL Compiler is expected to report Warning and
Error messages for the incorrect ASL code. When implemented, the verification
utility should parse the output of the ASL Compiler for these files to verify
the presence of the expected messages.