.. _bsim:

BabbleSim
#########

BabbleSim and Zephyr
********************

In the Zephyr project we use the `BabbleSim`_ simulator to test some of the Zephyr radio
protocols, including the BLE stack, 802.15.4, and some of the networking stack.

BabbleSim_ is a physical layer simulator, which in combination with the Zephyr
:ref:`bsim boards<bsim boards>`
can be used to simulate a network of BLE and 15.4 devices.
When we build Zephyr targeting a :ref:`bsim board<bsim boards>` we produce a Linux
executable, which includes the application, Zephyr OS, and models of the HW.

When there is radio activity, this Linux executable will connect to the BabbleSim Phy simulation
to simulate the radio channel.

In the BabbleSim documentation you can find more information on how to
`get <https://babblesim.github.io/fetching.html>`_ and
`build <https://babblesim.github.io/building.html>`_ the simulator.
In the :ref:`nrf52_bsim<nrf52_bsim>`, :ref:`nrf5340bsim<nrf5340bsim>`,
and :ref:`nrf54l15bsim<nrf54l15bsim>` boards' documentation
you can find more information about how to build Zephyr targeting these particular boards,
and a few examples.

Types of tests
**************

Tests without radio activity: bsim tests with twister
=====================================================

The :ref:`bsim boards<bsim boards>` can be used without radio activity, and in that case it is not
necessary to connect them to a physical layer simulation. Thanks to this, these target boards can
be used just like :ref:`native_sim<native_sim>` with :ref:`twister <twister_script>`
to run all standard Zephyr twister tests, but with models of a real SOC HW and its drivers.
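
For example, assuming an already set up Zephyr environment, one could run a standard kernel test
suite on the :ref:`nrf52_bsim<nrf52_bsim>` board like this (the test suite path is only an
illustrative choice):

.. code-block:: bash

   # Run a standard Zephyr test suite on the simulated nRF52 board.
   # No physical layer simulation is needed: these tests have no radio activity.
   ${ZEPHYR_BASE}/scripts/twister -p nrf52_bsim -T tests/kernel/semaphore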

Tests with radio activity
=========================

When there is radio activity, BabbleSim tests require at the very least a physical layer
simulation running, and most also require more than one simulated device. Due to this, these
tests are not built and run with twister, but with a dedicated set of test scripts.

These tests are kept in the :code:`tests/bsim/` folder. The ``compile.sh`` and ``run_parallel.sh``
scripts contained in that folder are used by the CI system to build the needed images and execute
these tests in batch.

See sections below for more information about how to build and run them, as well as the
conventions they follow.

There are two main sets of tests:
* Self-checking embedded applications/tests: in which some of the simulated devices' applications
  are built with checks that decide if the test passes or fails. These embedded application
  tests use the :ref:`bs_tests<bsim_boards_bs_tests>` system to report the pass or
  failure, and in many cases to build several tests into the same binary.

* Tests using the EDTT_ tool: in which an EDTT (python) test controls the embedded applications
  over an RPC mechanism, and decides if the test passes or not.
  Today these tests include a very significant subset of the BT qualification test suite.

More information about how different test types relate to BabbleSim and the bsim boards can be
found in the :ref:`bsim boards tests section<bsim_boards_tests>`.

Test coverage and BabbleSim
***************************

As the :ref:`nrf52_bsim<nrf52_bsim>`, :ref:`nrf5340bsim<nrf5340bsim>`, and
:ref:`nrf54l15bsim<nrf54l15bsim>` boards are based on the POSIX architecture, you can easily
collect test coverage information.

You can use the script :zephyr_file:`tests/bsim/generate_coverage_report.sh` to generate an html
coverage report from tests.

Check :ref:`the page on coverage generation <coverage_posix>` for more info.

.. _BabbleSim:
   https://BabbleSim.github.io

.. _EDTT:
   https://github.com/EDTTool/EDTT

Building and running the tests
******************************

See the :ref:`nrf52_bsim` page for setting up the simulator.

The scripts also expect a few environment variables to be set, such as ``BSIM_OUT_PATH`` and
``BSIM_COMPONENTS_PATH``.
For example, from Zephyr's root folder, you can run:

.. code-block:: bash

   # Build all the tests
   ${ZEPHYR_BASE}/tests/bsim/compile.sh

   # Run them (in parallel)
   RESULTS_FILE=${ZEPHYR_BASE}/myresults.xml \
      SEARCH_PATH=${ZEPHYR_BASE}/tests/bsim \
         ${ZEPHYR_BASE}/tests/bsim/run_parallel.sh

Or to build and run only a specific subset, e.g. host advertising tests:

.. code-block:: bash

   # Build the Bluetooth host advertising tests
   ${ZEPHYR_BASE}/tests/bsim/bluetooth/host/adv/compile.sh

   # Run them (in parallel)
   RESULTS_FILE=${ZEPHYR_BASE}/myresults.xml \
      SEARCH_PATH=${ZEPHYR_BASE}/tests/bsim/bluetooth/host/adv \
         ${ZEPHYR_BASE}/tests/bsim/run_parallel.sh

Check the ``run_parallel.sh`` help for more options and examples on how to use this script to run
the tests in batch.

After building the tests' required binaries you can run a test directly using its individual test
script.

For example you can build the required binaries for the networking tests with

.. code-block:: bash

   WORK_DIR=${ZEPHYR_BASE}/bsim_out ${ZEPHYR_BASE}/tests/bsim/net/compile.sh

and then directly run one of the tests:

.. code-block:: bash

   ${ZEPHYR_BASE}/tests/bsim/net/sockets/echo_test/tests_scripts/echo_test_802154.sh

Conventions
===========

Test code
---------

See the :zephyr_file:`Bluetooth sample test <tests/bsim/bluetooth/host/misc/sample_test/README.rst>`
for conventions that apply to test code.

Build scripts
-------------

The ``compile.sh`` build scripts simply build all the test and sample applications required by
the test scripts placed in the subfolders below.

These build scripts use the common ``compile.source``, which provides a ``compile`` function that
calls cmake and ninja with the provided application, configuration, and overlay files.

To speed up compilation for users interested only in a subset of tests, several compile scripts
exist in several subfolders, where the upper ones call into the lower ones.

Note that cmake and ninja are used directly instead of the ``west build`` wrapper, as west is not
required, and some Zephyr users do not use or have west but still use the build and test scripts.
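
As a sketch of their structure (the application paths below are illustrative examples, not a real
list), a ``compile.sh`` sources ``compile.source`` and then calls ``compile`` once per required
image:

.. code-block:: bash

   #!/usr/bin/env bash
   # Illustrative sketch of a compile.sh build script.
   set -ue
   : "${ZEPHYR_BASE:?ZEPHYR_BASE must be set to build the bsim tests}"

   source ${ZEPHYR_BASE}/tests/bsim/compile.source

   # Each invocation builds one application image in the background;
   # app= (and optionally conf_file=) parametrize the compile function.
   app=tests/bsim/bluetooth/host/adv/chain compile
   app=tests/bsim/bluetooth/host/adv/chain conf_file=prj_second.conf compile

   # Wait until all images have finished building
   wait_for_background_jobs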

Test scripts
------------

Please follow the existing conventions and do not design one-off bespoke runners (e.g. a python
script, or another shell abstraction).

The rationale is that it is easier and faster for the maintainers to perform tree-wide updates for
build system or compatibility changes if the tests are run in the same manner, with the same
variables, etc.

If you have a good idea for improving your test script, please make a PR changing *all* the test
scripts in order to benefit everyone and preserve homogeneity. You can of course discuss it first
in an RFC issue or on the BabbleSim Discord channel.

Scripts starting with an underscore (``_``) are not automatically discovered and run. They can
serve either as helper functions for the main script, or as local development utilities, e.g. for
building and running tests locally, debugging, etc.

Here are the conventions:

- Each test is defined by a shell script with the extension ``.sh``, in a subfolder called
  ``tests_scripts/``.
- It is recommended to run a single test per script file, as this allows for better
  parallelization of the runs in CI.
- Scripts expect that the binaries they require are already built. They should not compile
  binaries.
- Scripts will spawn the processes for every simulated device and the physical layer simulation.
- Scripts must return 0 to the invoking shell if the test passes, and non-zero if the test fails.
- Each test must have a unique simulation id, to enable running different tests in parallel.
- Neither the scripts nor the images should modify the workstation filesystem content beyond the
  ``${BSIM_OUT_PATH}/results/<simulation_id>/`` or ``/tmp/`` folders.
  That is, they should not leave stray files behind.
- Tests that require several consecutive simulations (e.g., if simulating a device pairing,
  powering off, and powering up afterwards as a new simulation) should use separate simulation
  ids for each simulation segment, ensuring that the radio activity of each segment can be
  inspected a posteriori.
- Avoid overly long tests. If a test takes over 20 seconds of runtime, consider if it is
  possible to split it into several separate tests.
- If a test takes over 5 seconds, set ``EXECUTE_TIMEOUT`` to a value that is at least 5 times
  bigger than the measured run-time.
- Do not set ``EXECUTE_TIMEOUT`` to a value lower than the default.
- Tests should not be overly verbose: less than a hundred lines of output are expected. Do make
  use of ``LOG_DBG()`` extensively, but don't enable the ``DBG`` log level by default.
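
Putting these conventions together, a minimal test script typically looks like the following
sketch (the image name and test identifiers are placeholders, not real binaries):

.. code-block:: bash

   #!/usr/bin/env bash
   # Illustrative sketch of a bsim test script following the conventions above.
   source ${ZEPHYR_BASE}/tests/bsim/sh_common.source

   # Unique simulation id, so this test can run in parallel with others
   simulation_id="sample_test_id"
   verbosity_level=2
   EXECUTE_TIMEOUT=30

   cd ${BSIM_OUT_PATH}/bin

   # Spawn one process per simulated device, plus the physical layer simulation
   Execute ./bs_nrf52_bsim_some_test_image -v=${verbosity_level} \
       -s=${simulation_id} -d=0 -testid=central
   Execute ./bs_nrf52_bsim_some_test_image -v=${verbosity_level} \
       -s=${simulation_id} -d=1 -testid=peripheral
   Execute ./bs_2G4_phy_v1 -v=${verbosity_level} -s=${simulation_id} \
       -D=2 -sim_length=10e6

   # Returns 0 to the invoking shell only if all spawned processes passed
   wait_for_background_jobs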