ACRN hypervisor
###############

Zephyr is capable of running as a guest under the x86 ACRN
hypervisor (see https://projectacrn.org/).  The process for getting
this to work is somewhat involved, however.

The ACRN hypervisor supports a hybrid scenario in which Zephyr runs in
so-called "pre-launched" mode.  This means Zephyr sits directly on the
ACRN hypervisor without involving the SOS VM.  This is the most
practical real-world scenario, because Zephyr's real-time and safety
capabilities can be assured without interference from other VMs.  The
following figure from ACRN's official documentation shows how a hybrid
scenario works:

.. figure:: ACRN-Hybrid.jpg
    :align: center
    :alt: ACRN Hybrid User Scenario
    :figclass: align-center

    ACRN Hybrid User Scenario

In this tutorial, we will show you how to build a minimal running
instance of Zephyr and the ACRN hypervisor to demonstrate that they
work together successfully.  To learn more about other ACRN features,
such as building and using the SOS VM or other guest VMs, please refer
to the ACRN Getting Started Guide:
https://projectacrn.github.io/latest/tutorials/using_hybrid_mode_on_nuc.html

Build your Zephyr App
*********************

First, build the Zephyr application you want to run in ACRN as you
normally would, selecting an appropriate board:

    .. code-block:: console

        west build -b acrn_ehl_crb samples/hello_world

In this tutorial, we will use the Intel Elkhart Lake Reference Board
(`EHL`_ CRB), since it is one of the suggested platforms for this type
of scenario.  Use ``acrn_ehl_crb`` as the target board parameter.

Note the kconfig output in ``build/zephyr/.config``; you will need to
refer to it when configuring ACRN later.

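If you just need the handful of values referenced later in this guide,
a quick grep is enough; ``CONFIG_MP_MAX_NUM_CPUS`` (used below when
assigning CPUs to the guest) is the main one:

    .. code-block:: console

        $ grep CONFIG_MP_MAX_NUM_CPUS build/zephyr/.config
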
The Zephyr build artifact you will need is ``build/zephyr/zephyr.bin``,
which is a raw memory image.  Unlike other x86 targets, you do not
want to use ``zephyr.elf``!

Configure and build ACRN
************************

First you need the source code; clone it from:

    .. code-block:: console

        git clone https://github.com/projectacrn/acrn-hypervisor

We suggest using version v2.5.1 or later of the ACRN hypervisor, as
later versions have better support for SMP in Zephyr.

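For example, to check out the suggested release after cloning
(``v2.5.1`` here is simply the version mentioned above; substitute a
newer tag if you prefer):

    .. code-block:: console

        cd acrn-hypervisor
        git checkout v2.5.1
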
Like Zephyr, ACRN favors build-time configuration management instead
of runtime probing or control.  Unlike Zephyr, ACRN uses single large
configuration files rather than small, easily merged configuration
elements like kconfig defconfig files or devicetree includes.  You
have to edit a big XML file to match your Zephyr configuration.
Choose an ACRN host config that matches your hardware ("ehl-crb-b" in
this case), then find the relevant file in
``misc/config_tools/data/<platform>/hybrid.xml``.

First, find the list of ``<vm>`` declarations.  Each has an ``id=``
attribute.  For testing Zephyr, you will want to make sure that the
Zephyr image is ID zero.  This allows you to launch ACRN with just one
VM image and avoids needlessly copying large Linux blobs into the boot
filesystem.  Under currently tested configurations, Zephyr will always
have a ``vm_type`` tag of ``SAFETY_VM``.

Configure Zephyr Memory Layout
==============================

Next, locate the load address of the Zephyr image and its entry point
address.  These have to be configured manually in ACRN.  Traditionally
Zephyr distributes itself as an ELF image from which these addresses
can be extracted automatically, but ACRN does not know how to do that;
it only knows how to load a single contiguous region of data into
memory and jump to a specific address.

Find the ``<os_config>`` tag under ``<vm id="0">``; it will look
something like this:

    .. code-block:: xml

        <os_config>
            <name>Zephyr</name>
            <kern_type>KERNEL_ZEPHYR</kern_type>
            <kern_mod>Zephyr_RawImage</kern_mod>
            <ramdisk_mod/>
            <bootargs></bootargs>
            <kern_load_addr>0x1000</kern_load_addr>
            <kern_entry_addr>0x1000</kern_entry_addr>
        </os_config>

The ``kern_load_addr`` tag must match the Zephyr ``LOCORE_BASE`` symbol
found in ``include/arch/x86/memory.ld``.  This is currently 0x1000 and
matches the default ACRN config.

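You can confirm the value in your copy of the Zephyr tree with a quick
search from ``ZEPHYR_BASE``; the exact form of the definition may
differ between Zephyr versions:

    .. code-block:: console

        $ grep LOCORE_BASE include/arch/x86/memory.ld
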
The ``kern_entry_addr`` tag must match the entry point in the built
``zephyr.elf`` file.  You can find this with binutils, for example:

    .. code-block:: console

        $ objdump -f build/zephyr/zephyr.elf

        build/zephyr/zephyr.elf:     file format elf64-x86-64
        architecture: i386:x86-64, flags 0x00000012:
        EXEC_P, HAS_SYMS
        start address 0x0000000000001000

By default this entry address is the same, at 0x1000.  This has not
always been true of all configurations, however, and it will likely
change in the future.

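If you prefer ``readelf`` over ``objdump``, it reports the same value;
this is just an alternative way to double-check what goes into
``kern_entry_addr``:

    .. code-block:: console

        $ readelf -h build/zephyr/zephyr.elf | grep 'Entry point'
          Entry point address:               0x1000
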
Configure Zephyr CPUs
=====================

Now you need to configure the CPU environment that ACRN presents to
the guest.  By default Zephyr builds in SMP mode, but ACRN's default
configuration gives it only one CPU.  Find the value of
``CONFIG_MP_MAX_NUM_CPUS`` in the Zephyr ``.config`` file, and give the
guest that many CPUs in the ``<cpu_affinity>`` tag.  For example:

    .. code-block:: xml

        <vm id="0">
            <vm_type>SAFETY_VM</vm_type>
            <name>ACRN PRE-LAUNCHED VM0</name>
            <guest_flags>
                <guest_flag>0</guest_flag>
            </guest_flags>
            <cpu_affinity>
                <pcpu_id>0</pcpu_id>
                <pcpu_id>1</pcpu_id>
            </cpu_affinity>
            ...
            <clos>
                <vcpu_clos>0</vcpu_clos>
                <vcpu_clos>0</vcpu_clos>
            </clos>
            ...
        </vm>

To use SMP, we have to set the ``pcpu_id`` values of VM0 to 0 and 1.
This configures ACRN to run Zephyr on CPU0 and CPU1.  The ACRN
hypervisor and the Zephyr application will not boot successfully
without this change.  If you plan to run Zephyr with only one CPU, you
can skip it.

Since Zephyr is using CPU0 and CPU1, we also have to change VM1's
configuration so it runs on CPU2 and CPU3.  If your ACRN setup has
additional VMs, you should change their configurations as well.

    .. code-block:: xml

        <vm id="1">
            <vm_type>SOS_VM</vm_type>
            <name>ACRN SOS VM</name>
            <guest_flags>
                <guest_flag>0</guest_flag>
            </guest_flags>
            <cpu_affinity>
                <pcpu_id>2</pcpu_id>
                <pcpu_id>3</pcpu_id>
            </cpu_affinity>
            <clos>
                <vcpu_clos>0</vcpu_clos>
                <vcpu_clos>0</vcpu_clos>
            </clos>
            ...
        </vm>

Note that these indexes are physical CPUs on the host.  When
configuring multiple guests, you probably don't want to overlap these
assignments with other guests.  But for testing Zephyr, simply using
CPUs 0 and 1 works fine.  (Note that ehl-crb-b has four physical CPUs,
so configuring all of 0-3 will work fine too, but that leaves no space
for other guests to have dedicated CPUs.)

Build ACRN
==========

Once configuration is complete, ACRN builds fairly cleanly:

    .. code-block:: console

        $ make -j BOARD=ehl-crb-b SCENARIO=hybrid

The only build artifact you need is the ACRN multiboot image in
``build/hypervisor/acrn.bin``.

Assemble EFI Boot Media
***********************

ACRN will boot on the hardware via the GNU GRUB bootloader, which is
itself launched from the EFI firmware.  These need to be configured
correctly.

Locate GRUB
===========

First, you will need a GRUB EFI binary that corresponds to your
hardware.  In many cases, a simple upstream build from source or a
copy from a friendly Linux distribution will work.  In some cases it
will not, however, and GRUB will need to be specially patched for
specific hardware.  Contact your hardware support team (pause for
laughter) for clear instructions for how to build a working GRUB.  In
practice you may just need to ask around and copy a binary from the
last test that worked for someone.

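As a starting point, assuming you are building GRUB from upstream
source, ``grub-mkimage`` can produce an EFI binary with the modules
this guide needs and a prefix pointing at the ``/efi/boot`` directory
used below.  Treat this as a sketch; whether the result actually boots
on your board is exactly the support question described above:

    .. code-block:: console

        $ grub-mkimage -O x86_64-efi -o bootx64.efi -p /efi/boot \
              part_msdos fat multiboot2 normal boot
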
Create EFI Boot Filesystem
==========================

Now attach your boot media (e.g. a USB stick on /dev/sdb, your
hardware may differ!) to a Linux system and create an EFI boot
partition (type code 0xEF) large enough to store your boot artifacts.
This command feeds the relevant commands to fdisk directly, but you
can type them yourself if you like:

    .. code-block:: console

        # for i in n p 1 "" "" t ef w; do echo $i; done | fdisk /dev/sdb
        ...
        <lots of fdisk output>

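Before creating the filesystem, you can sanity-check the new partition
table; the partition should appear with type ``ef`` (EFI):

    .. code-block:: console

        # fdisk -l /dev/sdb
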
Now create a FAT filesystem in the new partition and mount it:

    .. code-block:: console

        # mkfs.vfat -n ACRN_ZEPHYR /dev/sdb1
        # mkdir -p /mnt/acrn
        # mount /dev/sdb1 /mnt/acrn

Copy Images and Configure GRUB
==============================

ACRN does not have access to a runtime filesystem of its own.  It
receives its guest VMs (i.e. zephyr.bin) as GRUB "multiboot" modules,
which means we must rely on GRUB's filesystem driver.  The three
files (GRUB, ACRN, and Zephyr) all need to be copied into the
``/efi/boot`` directory of the boot media.  Note that GRUB must be
named ``bootx64.efi`` for the firmware to recognize it as the
bootloader:

    .. code-block:: console

        # mkdir -p /mnt/acrn/efi/boot
        # cp $PATH_TO_GRUB_BINARY /mnt/acrn/efi/boot/bootx64.efi
        # cp $ZEPHYR_BASE/build/zephyr/zephyr.bin /mnt/acrn/efi/boot/
        # cp $PATH_TO_ACRN/build/hypervisor/acrn.bin /mnt/acrn/efi/boot/

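A quick listing confirms that all three artifacts are in place:

    .. code-block:: console

        # ls /mnt/acrn/efi/boot
        acrn.bin  bootx64.efi  zephyr.bin
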
At boot, GRUB will load an ``efi/boot/grub.cfg`` file for its runtime
configuration instructions (a feature, ironically, that both ACRN and
Zephyr lack!).  This needs to load acrn.bin as the boot target and
pass it the zephyr.bin file as its first module (because Zephyr was
configured as ``<vm id="0">`` above).  This minimal configuration will
work fine for all but the weirdest hardware ("hd0" is virtually always
the boot filesystem from which GRUB loaded); there is no need to
fiddle with GRUB plugins, menus, or timeouts:

    .. code-block:: console

        # cat > /mnt/acrn/efi/boot/grub.cfg<<EOF
        set root='hd0,msdos1'
        multiboot2 /efi/boot/acrn.bin
        module2 /efi/boot/zephyr.bin Zephyr_RawImage
        boot
        EOF

Now the filesystem should be complete.  Unmount it and sync:

    .. code-block:: console

        # umount /dev/sdb1
        # sync

Boot ACRN
*********

If all goes well, booting your EFI media on the hardware will result
in a running ACRN, a running Zephyr (because by default Zephyr is
configured as a "pre-launched" VM), and a working ACRN command line on
the console.


You can see the Zephyr (vm 0) console output with the ``vm_console``
command:

    .. code-block:: console

        ACRN:\>vm_console 0

        ----- Entering VM 0 Shell -----
        *** Booting Zephyr OS build v2.6.0-rc1-324-g1a03783861ad  ***
        Hello World! acrn


.. _EHL: https://www.intel.com/content/www/us/en/products/docs/processors/embedded/enhanced-for-iot-platform-brief.html