#################################################
Physical attack mitigation in Trusted Firmware-M
#################################################

:Authors: Tamas Ban; David Hu
:Organization: Arm Limited
:Contact: tamas.ban@arm.com; david.hu@arm.com

************
Requirements
************
PSA Certified Level 3 Lightweight Protection Profile [1]_ requires protection
against physical attacks. This includes protection against manipulation of the
hardware and any data, undetected manipulation of memory contents, and physical
probing on the chip's surface. The RoT detects or prevents its operation outside
the normal operating conditions (such as voltage, clock frequency, temperature,
or external energy fields) where reliability and secure operation have not been
proven or tested.

.. note::

    Mitigation against a certain level of physical attacks is a mandatory
    requirement for PSA Level 3 certification.
    The :ref:`tf-m-against-physical-attacks` discussed below do not provide
    mitigation against all the physical attacks considered in scope for PSA L3
    certification. Please check the Protection Profile document for an
    exhaustive list of requirements.

****************
Physical attacks
****************
The goal of physical attacks is to alter the expected behavior of a circuit.
This can be achieved by changing the device's normal operating conditions to
untested operating conditions. As a result, a hazard might be triggered on the
circuit level, whose impact is unpredictable in advance but whose effect can be
observed. With frequent attempts, a weak point of the system could be identified
and the attacker could gain access to the entire device. There is a wide variety
of physical attacks; the following is not a comprehensive list, it just gives a
taste of the possibilities:

 - Inject a glitch into the device power supply or clock line.
 - Operate the device outside its temperature range: cool it down or warm it up.
 - Shoot the chip with an electromagnetic field. This can be done by passing
   current through a small coil close to the chip surface; no physical contact
   or modification of the PCB (soldering) is necessary.
 - Point a laser beam at the chip surface. It could flip bits in memory or a
   register, but precise knowledge of the chip layout and design is necessary.

The required equipment and cost of these attacks vary. There are commercial
products to perform such attacks. Furthermore, they are shipped with a scripting
environment, good documentation, and a lot of examples. In general, there are
plenty of videos, research papers and blogs about fault injection attacks. As a
result, the threshold for even a non-proficient attacker to successfully perform
such an attack gets lower over time.

*****************************************************************
Effects of physical attacks in hardware and in software execution
*****************************************************************
The change in the behavior of the hardware and software cannot be seen in
advance when performing a physical attack. On the circuit level they manifest
as bit faults. These bit faults can cause varied effects in the behavior of
the device micro-architecture:

 - The instruction decoding pipeline is flushed.
 - Instructions are altered during decoding.
 - Data is altered during fetching or storing.
 - Register contents, including the program counter, are altered.
 - Bits are flipped in registers or memory.

These phenomena occur at random and cannot be observed directly, but their
effect can be traced in the software execution. On the software level the
following can happen:

 - A few instructions are skipped. This can lead to taking a different branch
   than normal.
 - A corrupted CPU register or data fetch could alter the result of a comparison
   instruction, or change the value returned from a function.
 - A corrupted data store could alter the configuration of peripherals.
 - Very precise attacks with a laser can flip bits in any register or in memory.
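
As a hypothetical illustration of the first point, consider a boot decision
guarded by a single comparison. The function below is deliberately unhardened;
``verify_signature()``, ``jump_to_image()`` and ``panic()`` are made-up names
used only for this sketch:

.. code-block:: c

    extern int verify_signature(const void *image);
    extern void jump_to_image(const void *image);
    extern void panic(void);

    void boot(const void *image)
    {
        /* A single compare-and-branch protects the boot decision. Skipping
         * the conditional branch instruction, or the call to panic(), lets
         * execution fall through to jump_to_image() even though the
         * signature check failed. */
        if (verify_signature(image) != 0) {
            panic();
        }

        jump_to_image(image);
    }

The countermeasures described later in this document aim to ensure that no
single skipped instruction can flip such a security decision.
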

This is a complex domain. Faults are not well understood. Different fault models
exist, but all of them target a specific aspect of fault injection. One of the
most common and probably the most easily applicable fault model is the
instruction skip.

***********************************
Mitigation against physical attacks
***********************************
The applicability of these attacks highly depends on the device. Some
devices are more sensitive than others. Protection is possible at both the
hardware and the software level.

On the hardware level, there are chip design principles and system IPs that are
resistant to fault injection attacks. These can make it harder to perform a
successful attack, and as a result the chip might reset or erase sensitive
content. The device maker needs to consider what level of physical attack is in
scope and choose a SoC accordingly.

On top of hardware-level protection, a secondary protection layer can be
implemented in software. This approach is known as "defence in depth".

Neither hardware nor software level protection is perfect because both can be
bypassed. The combination of them provides the maximum level of protection.
However, even when both are in place, it is not certain that they provide 100%
protection against physical attacks. The best that can be achieved is to harden
the system to increase the cost of a successful attack (in terms of time and
equipment), thereby making it unprofitable to perform.

.. _phy-att-countermeasures:

Software countermeasures against physical attacks
=================================================
There are practical coding techniques which can be applied to harden software
against fault injection attacks. They significantly decrease the probability of
a successful attack:

 - Control flow monitor

   To catch malicious modification of the expected control flow. When an
   important portion of a program is executed, a flow monitor counter is
   incremented. The program moves to the next stage only if the accumulated
   flow monitor counter is equal to an expected value.

 - Default failure

   The return value variable should always contain a value indicating
   failure. Changing its value to success is done only in one protected
   flow (preferably protected by double checks).

 - Complex constant

   It is hard to change a memory region or register to a specific pre-defined
   value, but the usual boolean values (0 or 1) are much easier to manipulate.
   Therefore critical success and failure values should be complex constants
   instead of 0 and 1.

 - Redundant variables and condition checks

   To make attacks against branch conditions harder, it is recommended to
   check the relevant condition twice (it is better to have a random delay
   between the two comparisons).

 - Random delay

   Successful fault injection attacks require very precise timing. Adding
   random delay to the code execution makes the timing of an attack much
   harder.

 - Loop integrity check

   To avoid skipping critical loop iterations, which can weaken cryptographic
   algorithms, check the loop counter after the loop has executed to verify
   that it indeed has the expected value.

 - Duplicated execution

   Execute a critical step multiple times to prevent fault injection from
   skipping the step. To mitigate multiple consecutive fault injections, a
   random delay can be inserted between the duplicated executions.
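
As a hypothetical, plain C illustration (not TF-M code), the following sketch
combines several of the techniques above in a single check. All names are made
up for this example:

.. code-block:: c

    #include <stdbool.h>
    #include <stdint.h>

    /* Complex constants: a random bit fault is far less likely to produce
     * these values than a plain 0 or 1. */
    #define CHECK_PASSED        0x3CA5965AUL
    #define CHECK_FAILED        0xC35A69A5UL
    #define EXPECTED_FLOW_COUNT 1U

    extern bool verify_condition(void);  /* hypothetical critical check */
    extern void random_delay(void);      /* hypothetical entropy-based delay */

    uint32_t critical_step(void)
    {
        /* Default failure: the result starts as failure and is set to success
         * on exactly one protected path. 'volatile' discourages the compiler
         * from merging the redundant reads and checks. */
        volatile uint32_t result = CHECK_FAILED;
        volatile uint32_t flow_counter = 0;

        /* Redundant condition check with a random delay in between. */
        if (verify_condition()) {
            random_delay();
            if (verify_condition()) {
                flow_counter++;          /* control flow monitor */
                result = CHECK_PASSED;
            }
        }

        /* Double-check the decision and the control flow before reporting
         * success. */
        if ((result != CHECK_PASSED) || (flow_counter != EXPECTED_FLOW_COUNT)) {
            return CHECK_FAILED;
        }

        return result;
    }
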

These techniques should be applied in a thoughtful way. If they are applied
everywhere, they can result in messy code that makes maintenance harder. The
code must be analysed, and the sensitive parts and critical call paths must be
identified. Furthermore, these techniques increase the overall code size, which
might be an issue on constrained devices.

Currently, compilers do not provide any support to implement these
countermeasures automatically. On the contrary, they can eliminate the
protection code during optimization. As a result, the C level protection does
not add any guarantee about the final behavior of the system. The effectiveness
of these protections highly depends on the actual compiler and the optimization
level. The compiled assembly code must be visually inspected and tested to make
sure that the proper countermeasures are in place and perform as expected.
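
As a hypothetical illustration, a duplicated comparison on a plain local
variable is likely to be collapsed into a single check by the optimizer.
Qualifying the intermediate value as ``volatile`` forces the compiler to read
it twice, so the redundancy has a better chance of surviving optimization; the
generated assembly still has to be inspected to confirm it:

.. code-block:: c

    #include <stdbool.h>

    extern bool check(void);  /* hypothetical critical condition */

    int fragile(void)
    {
        bool ok = check();
        if (ok) {
            if (ok) {         /* the compiler is likely to remove this */
                return 1;
            }
        }
        return 0;
    }

    int hardened(void)
    {
        volatile bool ok = check();
        if (ok) {
            if (ok) {         /* two separate reads and compares remain */
                return 1;
            }
        }
        return 0;
    }
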

.. _phy-att-threat-model:

******************************************
TF-M Threat Model against physical attacks
******************************************

Physical attack target
======================
A malicious actor performs physical attacks against TF-M to retrieve assets
from the device. These assets can be sensitive data, credentials, or crypto
keys. These assets are protected in TF-M by proper isolation.

For example, a malicious actor can perform the following attacks:

 - Reopen the debug port or hinder its closure, then connect to the device
   with a debugger and dump memory.
 - Bypass secure boot to replace the authentic firmware with a malicious image.
   Then arbitrary memory can be read.
 - Assuming that secure boot cannot be bypassed, an attacker can try to
   hinder the setup of the memory isolation hardware by the TF-M
   :term:`Secure Partition Manager` (SPM) and manage to execute the non-secure
   image in secure state. Even if this is achieved, an exploitable
   vulnerability is still needed in the non-secure code which can be used to
   inject and execute arbitrary code to read the assets.
 - The device might contain an unsigned binary blob next to the official
   firmware. This can be any data, not necessarily code. If an attacker
   manages to replace this data with arbitrary content (e.g. a NOP slide
   leading to malicious code) then they can try to manipulate the program
   counter to jump to this area before the memory isolation is set up.

.. _attacker-capability:

Assumptions on attacker capability
==================================
It is assumed that the attacker has the following capabilities to perform
physical attacks against devices protected by TF-M.

 - Has physical access to the device.
 - Able to access external memory, read it and possibly tamper with it.
 - Able to load arbitrary candidate images for firmware upgrade.
 - Able to make the bootloader try to upgrade to the arbitrary image from the
   staging area.
 - Able to inject faults at the hardware level (voltage or power glitch, EM
   pulse, etc.) into the system.
 - Precise timing of fault injection is possible once or a few times, but in
   general, the more intervention a successful attack requires, the harder it
   is to succeed.

It is out of the scope of TF-M mitigation if an attacker is able to directly
tamper with or disclose the assets. It is assumed that an attacker has the
following technical limitations.

 - No knowledge of the image signing key. Not able to sign an arbitrary image.
 - Not able to directly access the chip through the debug port.
 - Not able to directly access internal memory.
 - No knowledge of the layout of the die or the memory arrangement of the
   secure code, so precise attacks against specific registers or memory
   addresses are out of scope.

Physical attack scenarios against TF-M
======================================
Based on the analysis above, a malicious actor may perform physical attacks
against critical operations in the :term:`SPE` workflow and critical modules in
TF-M, to indirectly gain unauthenticated access to assets.

Those critical operations and modules either directly access the assets or
protect the assets from disclosure. They can include:

 - Image validation in the bootloader
 - Isolation management in TF-M, including platform specific configuration
 - Cryptographic operations
 - TF-M Secure Storage operations
 - PSA client permission checks in TF-M

The detailed scenarios are discussed in the following sections.

Physical attacks against bootloader
-----------------------------------
Physical attacks may bypass secure image validation in the bootloader so that a
malicious image can be installed.

The countermeasures are bootloader specific and out of the scope of this
document. TF-M relies on MCUboot by default. MCUboot has already implemented
countermeasures against fault injection attacks [3]_.

.. _physical-attacks-spm:

Physical attacks against TF-M SPM
---------------------------------
TF-M SPM initializes and manages the isolation configuration. It also performs
permission checks against secure service requests from PSA clients.

Static isolation configuration
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
It is TF-M SPM's responsibility to build up the isolation during the
initialization phase. If this is missed or not done correctly, then it might be
possible for non-secure code to access some secure memory area, or an external
device might access assets in the device through a debug port.

Therefore, hindering the setup of the memory or peripheral isolation hardware
is an obvious candidate for physical attacks. The initialization phase has a
constant execution time (like the preceding boot-up stage), therefore the
timing of the attack is simpler, compared to cases when the secure and
non-secure runtime firmware have been up and running for a while and IRQs make
the timing unpredictable.

Some examples of attacking the isolation configuration are shown in the list
below.

 - Hinder the setting of security regions. Try to execute non-secure code as
   secure.
 - Manipulate the setting of secure regions; try to extend the non-secure
   regions to cover a memory area which is otherwise intended to be a secure
   area.
 - Hinder the setting of the isolation boundary. In this case, vulnerable ARoT
   code has access to all memory.
 - Manipulate the peripheral configuration to give non-secure code access to a
   peripheral which is intended to be secure.

PSA client permission checks
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TF-M SPM performs several permission checks against secure service requests
from a PSA client, such as:

 - Check whether the PSA client is a non-secure client or a secure client

   An NS client's PSA client ID is negative. An NS client is not allowed to
   directly access secure areas. A malicious actor can inject faults when TF-M
   SPM authenticates an NS client. It may manipulate TF-M into accepting it as
   a secure client and thereby allow the NS client to access assets.

 - Memory access checks

   TF-M SPM checks whether the request has the correct permission to access a
   secure memory area. A malicious actor can inject faults when TF-M SPM checks
   the memory access permission. It may skip critical check steps or corrupt
   the check result. Thereby a malicious service request may pass the TF-M
   memory access check and access assets which it is not allowed to.

The physical attacks mentioned above rely on a malicious NS application or a
vulnerable RoT service issuing a malicious secure service request to access the
assets. The malicious actor has to be aware of the exact timing of the handling
of the malicious request in TF-M SPM. The timing can be affected by other
clients and interrupts. This makes the attack more difficult than pure fault
injection.

Dynamic isolation boundary configuration
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Physical attacks may affect the isolation boundary setting during a TF-M
context switch, especially in Isolation Level 3. For example:

 - A fault injection may cause TF-M SPM to skip clearing the privileged state
   before switching to an ARoT service.
 - A fault injection may cause TF-M SPM to skip updating the MPU regions, so
   the next RoT service may access assets belonging to the previous one.

However, it is much more difficult to find out the exact timing of a TF-M
context switch, compared to the other TF-M SPM scenarios. It also requires a
vulnerable RoT service to access the assets after the fault injection.

Physical attacks against TF-M Crypto service
--------------------------------------------
Since crypto operations are done by the mbedTLS library, or by a custom crypto
accelerator engine and its related software driver stack, the analysis of
physical attacks against crypto operations is out of scope for this document.
However, in general the same requirements are applicable to the crypto service
in order to be compliant with PSA Level 3 certification. That is, it must be
resistant against physical attacks. So crypto software and hardware must be
hardened against side-channel and physical attacks.

Physical attacks against Secure Storage
---------------------------------------
Physical attacks against Internal Trusted Storage
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Based on the assumptions in :ref:`attacker-capability`, a malicious actor is
unable to directly retrieve assets via physical attacks against
:term:`Internal Trusted Storage` (ITS).

Instead, a malicious actor can inject faults into the isolation configuration
of the ITS area in TF-M SPM to gain access to assets stored in ITS.
Refer to :ref:`physical-attacks-spm` for details.

Physical attacks against Protected Storage
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Based on the assumptions in :ref:`attacker-capability`, a malicious actor may
be able to directly access the external storage device.
Therefore :term:`Protected Storage` (PS) shall enable encryption and
authentication by default to detect tampering with the content in the external
storage device.

A malicious actor can also inject faults into the isolation configuration of PS
and of the external storage device peripherals in TF-M SPM to gain access to
assets stored in PS. Refer to :ref:`physical-attacks-spm` for details.

It is out of the scope of TF-M to fully prevent malicious actors from directly
tampering with or retrieving content stored in external storage devices.

Physical attacks against platform specific implementation
----------------------------------------------------------
The platform specific implementation includes critical TF-M HAL
implementations. A malicious actor can perform physical attacks against those
platform specific implementations to bypass the countermeasures in the TF-M
common code.

Platform early initialization
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TF-M provides a HAL API for platforms to perform hardware initialization before
the SPM initialization starts.
The system integrator is responsible for implementing this API on a particular
SoC and hardening it against physical attacks:

.. code-block:: c

    enum tfm_hal_status_t tfm_hal_platform_init(void);

The API can perform several initializations of different modules. The system
integrator can even choose to harden some of these initialization functions
within this platform init API. One example is the debug access setting.

Debug access setting
********************
TF-M configures debug access according to the device lifecycle and the
accessible debug certificates. In general, TF-M locks down the debug port if
the device is in secure production state.
The system integrator can put these settings into an API and harden it against
physical attacks.

Platform specific isolation configuration
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TF-M SPM exposes HAL APIs for the static and dynamic isolation configuration.
The system integrator is responsible for implementing these APIs on a
particular SoC and hardening them against physical attacks.

.. code-block:: c

    enum tfm_hal_status_t tfm_hal_set_up_static_boundaries(void);
    enum tfm_hal_status_t tfm_hal_bind_boundary(const struct partition_load_info_t *p_ldinf,
                                                uintptr_t *p_boundary);

Memory access check
^^^^^^^^^^^^^^^^^^^
TF-M SPM exposes a HAL API for the platform specific memory access check. The
system integrator is responsible for implementing this API on a particular SoC
and hardening it against physical attacks.

.. code-block:: c

    enum tfm_hal_status_t tfm_hal_memory_check(uintptr_t boundary,
                                               uintptr_t base,
                                               size_t size,
                                               uint32_t access_type);
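
As a hypothetical sketch of how a platform might apply the default failure and
duplicated execution patterns inside such a HAL function, the example below
evaluates the permission twice and only grants access when both evaluations
agree. The helper ``platform_region_permits()`` is made up for this example,
the error code used is only illustrative, and a real implementation would also
use the FIH return types described in the next section when a fault injection
profile is enabled:

.. code-block:: c

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    #include "tfm_hal_defs.h"

    /* Hypothetical platform specific permission lookup. */
    extern bool platform_region_permits(uintptr_t boundary, uintptr_t base,
                                        size_t size, uint32_t access_type);

    enum tfm_hal_status_t tfm_hal_memory_check(uintptr_t boundary,
                                               uintptr_t base,
                                               size_t size,
                                               uint32_t access_type)
    {
        /* Default failure: only one protected path can turn this into
         * success. */
        volatile enum tfm_hal_status_t status = TFM_HAL_ERROR_GENERIC;

        /* Duplicated execution: evaluate the permission twice and require
         * both evaluations to agree before granting access. */
        volatile bool first = platform_region_permits(boundary, base,
                                                      size, access_type);
        volatile bool second = platform_region_permits(boundary, base,
                                                       size, access_type);

        if (first && second) {
            status = TFM_HAL_SUCCESS;
        }

        return status;
    }
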

.. _tf-m-against-physical-attacks:

*********************************************
TF-M countermeasures against physical attacks
*********************************************
This section proposes a design of software countermeasures against physical
attacks.

Fault injection hardening library
=================================
There is no open-source library which implements the generic mitigation
techniques listed in :ref:`phy-att-countermeasures`.
The TF-M project implements a portion of these techniques. The TF-M software
countermeasures are implemented as a small Fault Injection Hardening (FIH)
library in the TF-M code base. A similar library was first introduced and
tested in the MCUboot project (version 1.7.0) [2]_, on which TF-M relies.

The FIH library is located under ``lib/fih/`` in the TF-M code base.

The implementation of the different techniques is assigned to fault injection
protection profiles. Four profiles (OFF, LOW, MEDIUM, HIGH) were introduced to
better fit the device capabilities (memory size, TRNG availability) and the
protection requirements mandated by the device threat model. The fault
injection protection profile is configurable at compile time; the default value
is OFF.

Countermeasure profiles and the corresponding techniques are listed in the
table below.

+--------------------------------+-------------+----------------+--------------+------------------+
| Countermeasure                 | Profile LOW | Profile MEDIUM | Profile HIGH | Comments         |
+================================+=============+================+==============+==================+
| Control flow monitor           | Y           | Y              | Y            |                  |
+--------------------------------+-------------+----------------+--------------+------------------+
| Failure loop hardening         | Y           | Y              | Y            |                  |
+--------------------------------+-------------+----------------+--------------+------------------+
| Complex constant               |             | Y              | Y            |                  |
+--------------------------------+-------------+----------------+--------------+------------------+
| Redundant variables and checks |             | Y              | Y            |                  |
+--------------------------------+-------------+----------------+--------------+------------------+
| Random delay                   |             |                | Y            | Implemented, but |
|                                |             |                |              | depends on HW    |
|                                |             |                |              | capability       |
+--------------------------------+-------------+----------------+--------------+------------------+

Similar to MCUboot, four profiles are supported. The profile can be configured
at build time by setting (the default is OFF):

  ``-DTFM_FIH_PROFILE=<OFF, LOW, MEDIUM, HIGH>``

How to use the FIH library
==========================
Following the analysis in :ref:`phy-att-threat-model`, this section focuses on
integrating the FIH library into TF-M SPM to mitigate physical attacks.

 - Identify the critical function call paths which are mandatory for
   configuring isolation or debug access. Change their return types to
   ``FIH_RET_TYPE``, make them return with ``FIH_RET``, and call them with
   ``FIH_CALL`` (see the sketch after this list). These macros provide the
   extra checking functionality (control flow monitor, redundant checks and
   variables, random delay, complex constant) according to the profile
   settings. More details about their usage can be found in
   ``trusted-firmware-m/lib/fih/inc/fih.h``.

   Take the simplified TF-M SPM initialization flow as an example:

   .. code-block:: c

       main()
        |
        |--> tfm_core_init()
        |     |
        |     |--> tfm_hal_set_up_static_boundaries()
        |     |     |
        |     |     |--> platform specific isolation impl.
        |     |
        |     |--> tfm_hal_platform_init()
        |           |
        |           |--> platform specific init
        |
        |--> During each partition initialization
              |
              |--> tfm_hal_bind_boundary()
                    |
                    |--> platform specific peripheral isolation impl.

 - Consider making the important peripheral configuration register settings
   redundant and verifying that they match the expectations before continuing.

 - Implement an extra verification function which checks the critical hardware
   configuration before the secure code switches to non-secure. The proposed
   API for this purpose is:

   .. code-block:: c

     fih_int tfm_hal_verify_static_boundaries(void);

   This function is intended to be called just after the static boundaries are
   set up and is responsible for checking all critical hardware configurations.
   The goal is to catch anything that was missed and act according to the
   system policy. The introduction of one more checking point requires one more
   precisely timed intervention from the attacker. The system integrator is
   responsible for implementing this API on a particular SoC and hardening it
   against physical attacks, making sure that all platform dependent security
   features are properly configured.

 - The most powerful mitigation technique is to add random delay to the code
   execution. This makes the timing of an attack much harder. However, it
   requires an entropy source. It is recommended to use the ``HIGH`` profile
   when hardware support is available. The FIH library has a porting API layer
   to fetch random numbers:

   .. code-block:: c

     void fih_delay_init(void);
     uint8_t fih_delay_random(void);

 - Similar countermeasures can be implemented in critical steps of the platform
   specific implementation.

   Take the memory isolation settings on the AN521 platform as an example.
   The following hardware components are responsible for memory isolation in a
   SoC which is based on the SSE-200 subsystem.
   System integrators must examine the chip specific memory isolation solution,
   identify the key components and harden their configuration.
   This list just serves as an example here for easier understanding:

   - Implementation Defined Attribution Unit (IDAU): Implementation defined;
     it can be a static or dynamic configuration. It contains the default
     security access permissions of the memory map.
   - SAU: The main module in the CPU which determines the security settings of
     the memory.
   - :term:`MPC`: External module from the CPU point of view. It protects the
     non-security-aware memories from unauthenticated access. Having a
     properly configured MPC significantly increases the security of the
     system.
   - :term:`PPC`: External module from the CPU point of view. It protects the
     non-security-aware peripherals from unauthenticated access.
   - MPU: Protects memory from unprivileged access. ARoT code has only
     restricted access in the secure domain. This mitigates the risk that a
     vulnerable or malicious ARoT partition accesses device assets.

   The following AN521 specific isolation configuration functions shall be
   hardened against physical attacks:

   .. code-block:: c

     sau_and_idau_cfg()
     mpc_init_cfg()
     ppc_init_cfg()

   Some platform specific implementations rely on platform standard device
   driver libraries. It can become much more difficult to maintain the drivers
   if these standard libraries are modified with the FIH library. Instead, the
   platform specific implementation can apply duplicated execution and
   redundant variables/condition checks when calling the platform standard
   device driver libraries, according to the usage scenario.
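
As an illustration of the call pattern described in the first step, the sketch
below shows how a critical configuration function and its caller might be wired
up with the FIH macros. The function names are hypothetical, and the exact
macro and helper names (for example ``fih_not_eq``) should be checked against
the ``fih.h`` of the TF-M version in use:

.. code-block:: c

    #include <stdbool.h>

    #include "fih.h"   /* trusted-firmware-m/lib/fih/inc/fih.h */

    extern bool configure_isolation_hw(void);  /* hypothetical platform helper */
    extern void tfm_core_panic(void);          /* TF-M SPM panic handler */

    /* Hypothetical critical configuration step with a hardened return type. */
    static fih_int critical_isolation_cfg(void)
    {
        if (!configure_isolation_hw()) {
            FIH_RET(FIH_FAILURE);          /* default failure path */
        }

        FIH_RET(FIH_SUCCESS);
    }

    static void spm_init_step(void)
    {
        fih_int fih_rc = FIH_FAILURE;      /* default failure */

        /* FIH_CALL wraps the call with the profile-dependent countermeasures
         * (control flow monitor, redundant checks, random delay). */
        FIH_CALL(critical_isolation_cfg, fih_rc);

        if (fih_not_eq(fih_rc, FIH_SUCCESS)) {
            tfm_core_panic();              /* act according to system policy */
        }
    }
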

Impact on memory footprint
==========================
The addition of protection code against physical attacks increases the memory
footprint. The actual increase depends on the selected profile and on where the
mitigation code is added.

Attack experiment with SPM
==========================
The goal is to bypass the setting up of the memory isolation hardware with
simulated instruction skips in a fast model execution (FVP_MPS2_AEMv8M), in
order to execute the regular non-secure test code in secure state. This is done
by identifying the configuration steps which must be bypassed to make this
happen. The instruction skip simulation is achieved by breakpoints and manual
manipulation of the program counter. The following steps are done on the AN521
target, but they can be different on other targets:

 - Bypass the configuration of the isolation hardware: SAU, MPC.
 - Bypass the setting of the PSP limit register. Otherwise a stack overflow
   exception happens, because the secure PSP is overwritten by the address of
   the non-secure stack, and on this particular target the non-secure stack is
   at a lower address than the value in the secure PSP_LIMIT register.
 - Avoid the clearing of the least significant bit in the non-secure entry
   point, where BLXNS/BXNS jumps to the non-secure code. Having the least
   significant bit cleared indicates to the hardware to switch security state.

The previous steps are enough to execute the non-secure Reset_Handler() in
secure state. Usually, an RTOS is executing on the non-secure side. In order to
properly boot it up, further steps are needed:

 - Set the S_VTOR system register to point to the address of the NS vector
   table. Code is executed in secure state, therefore when an IRQ hits, the
   handler address is fetched from the table pointed to by the S_VTOR register.
   An RTOS usually does an SVC call at start-up. If S_VTOR is not modified then
   SPM's SVC handler will be executed.
 - TBC: RTX osKernelStart still failing.

The bottom line is that in order to execute the regular non-secure code in
secure state, the attacker needs to interfere with the execution flow in many
places. A successful attack can be made even harder by adding the described
mitigation techniques and some random delays.


*********
Reference
*********

.. [1] `PSA Certified Level 3 Lightweight Protection Profile <https://www.psacertified.org/app/uploads/2020/11/JSADEN009-PSA_Certified_Level_3_LW_PP-1.0-ALP02.pdf>`_

.. [2] `MCUboot project <https://github.com/mcu-tools/mcuboot/blob/master/boot/bootutil/include/bootutil/fault_injection_hardening.h>`_

.. [3] `MCUboot fault injection mitigation <https://www.trustedfirmware.org/docs/TF-M_fault_injection_mitigation.pdf>`_

--------------------------------

*Copyright (c) 2021-2022, Arm Limited. All rights reserved.*