
X-CUBE-SBSFU with third-party firmware

crwper
Senior

Hi all--

I am working on a project using the STM32WB55. I would like to distribute an "official" firmware, but also allow third-party firmware to be installed. The intent is that the official firmware will be open source, so users can make changes to the official firmware and upload their own firmware to the device.

The catch is that the official firmware will include private keys (not part of the open source distribution, obviously) used for signing its output, so that users can verify that the output was produced by the official firmware, as opposed to a third-party firmware. This means:

  1. The official firmware should be encrypted, and X-CUBE-SBSFU will need to have the ability to decrypt that firmware.
  2. Users should not have access (either through JTAG or through third-party firmware) to the keys required to decrypt the official firmware, or to the decrypted official firmware itself.

I think X-CUBE-SBSFU will allow me to do this with a little modification, but I wanted to get a second opinion in case there are security issues I'm overlooking.

The main question, in my mind, is whether X-CUBE-SBSFU security relies on only "official" user applications being installed. As near as I can tell, this is not the case, since several protections are in place by the time the user application is executed:

  1. RDP and IWDG prevent external attacks. I am planning to use RDP level 2, as recommended, so that JTAG will not have access to RAM and Flash, and option bytes cannot be changed.
  2. AES keys are stored in the M0 secure memory area, and are not accessible from the user application.
  3. WRP prevents the user application from modifying the Secure Engine and SBSFU.
  4. MPU is configured to prevent the user application from executing any code outside its own memory.

In addition, the single-slot implementation of X-CUBE-SBSFU doesn't allow partial firmware updates, so the "official" firmware should be completely erased before a third-party firmware is installed.

Can you think of anything I'm overlooking here?

Michael


Jocelyn RICARD
ST Employee

Hello Michael,

The SBSFU always checks the signature of installed firmware as well as any new firmware to be installed.

There is no passthrough.

One thing you could add in SBSFU is some specific information in the header indicating the origin of the firmware. If it is the official firmware, you go through the normal SBSFU path; if it is open-source firmware, you could use another set of keys.
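As a rough sketch of that header idea (the `fw_origin` field, its values, and the key-set numbering here are made up for illustration; they are not part of the SBSFU header format):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical origin tag added to the firmware header (not an SBSFU field) */
typedef enum {
    FW_ORIGIN_OFFICIAL    = 0xA5U,
    FW_ORIGIN_THIRD_PARTY = 0x5AU
} fw_origin_t;

typedef struct {
    uint32_t    magic;   /* existing header fields would go here */
    fw_origin_t origin;  /* extra field telling SBSFU which key set to use */
} fw_header_t;

/* Pick a verification key set based on the declared origin:
 * 0 = official signing key, 1 = shared/open key, -1 = reject */
static int select_key_set(const fw_header_t *hdr)
{
    switch (hdr->origin) {
    case FW_ORIGIN_OFFICIAL:    return 0;
    case FW_ORIGIN_THIRD_PARTY: return 1;
    default:                    return -1; /* unknown origin: reject image */
    }
}
```

SBSFU would then reject any image whose origin tag it does not recognize.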

Regarding the partial firmware update constraint, I don't really get your point. But erasing the official application before installing a new one would limit the risk of secrets being retrieved from it.

Best regards

Jocelyn

Thanks for your help, Jocelyn.

I was thinking of disabling the signature check on boot, since we don't need to ensure that only official firmware is installed in this application. At least, this is not a requirement for the application itself, but one thing I'd like to hear your thoughts on is whether this introduces any security concerns I should be aware of. If I understand correctly, the signature check is generally included for business reasons--e.g., to control which features are enabled on the hardware--rather than for security reasons. Would skipping the signature check (i.e., allowing unofficial firmware to be installed) introduce any serious security issues?

My thinking is that the security measures mentioned above (RDP, IWDG, WRP, MPU) can be used to prevent unofficial firmware from accessing private data (e.g., the encryption key for the official firmware).

With the signature check removed, a partial firmware update would introduce a security flaw, I think, since an attacker could replace only part of the official firmware, and use that replacement code to read out proprietary information (e.g., the key used to sign log files) contained in the official firmware. This is why I mentioned disabling partial firmware updates--because by erasing the whole firmware before installing third-party firmware, I think we eliminate this attack.
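To make the idea concrete, here is a minimal sketch of the install rule I have in mind, with a RAM buffer standing in for the flash slot (the names and sizes are made up; real code would erase flash pages rather than memset a buffer):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Erase the ENTIRE slot before programming a new image, so no code from the
 * previously installed (official) image can survive next to the new one.
 * Returns 0 on success, -1 if the image does not fit in the slot. */
static int install_image(uint8_t *slot, size_t slot_size,
                         const uint8_t *img, size_t img_size)
{
    if (img_size > slot_size) {
        return -1;                     /* image does not fit */
    }
    memset(slot, 0xFF, slot_size);     /* erase the whole slot first */
    memcpy(slot, img, img_size);       /* then program the new image */
    return 0;
}
```

The point is that a smaller third-party image can never leave fragments of the official firmware behind it in the slot.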

I've been thinking of making two changes in SBSFU:

  1. Remove explicit signature checks.
  2. Implement ECIES so that both the official firmware and unofficial firmwares have the option to encrypt their distributed binaries. The private asymmetric key would be stored in the bootloader, which I think will make it necessary to implement MPU privileged mode as is done in the STM32F4 SBSFU example application.

Again, the potential issue I see is that this opens up possibilities for an inside attack (i.e., from an installed unofficial firmware), but as near as I can tell, the available security tools allow us to protect against this issue.

Michael

Jocelyn RICARD
ST Employee

Hello Michael,

The principle of security in SBSFU is based on the root of trust and the chain of trust.

Root of trust is the hardware ensuring that secure boot will run first and will not be tampered. This is using hardware protection.

Then once you can trust the secureboot you can rely on the public key used to authenticate the firmware as well as the code executing this check.

In your case, what you want to secure is the official firmware. This firmware embeds some keys that are secret.

You want to remove the authentication of this firmware.

Now, just imagine that your keys have leaked. Then anyone can create an "official firmware", as it will not be authenticated.

So, authentication is really mandatory to ensure the official firmware is official.

Besides, storing private asymmetric keys in the bootloader with only MPU protection will not work in your context. The reason is that any application can change the MPU configuration: it is up to the application itself to set up the MPU.

I hope this clarifies some points

Best regards

Jocelyn

Ah, of course. You mention that "only MPU protection" won't do the trick--is there another protection strategy which might allow a private asymmetric key to be installed on the device, while still allowing third-party firmware?

There is one other idea I had, which does not meet this requirement, but would meet the basic requirement of allowing third-party firmware. Since we're using an STM32WB55, we can provision a private AES key in CKS on CPU2, and disable this key before the user application is executed. This is the same strategy used in the STM32WB55 examples, so the main change would be to add a flag to the firmware header which allows unencrypted firmware to be installed.

My only hesitation with this strategy is that it treats the official firmware and third-party firmwares quite differently--i.e., the official firmware can be encrypted, but third-party firmwares cannot. This is in contrast to ECIES, which would give third-party developers access to the same tools the official firmware has.

However, aside from that, my main concern is whether allowing third-party firmwares to be installed would compromise security at all. Since the key stored in CKS is disabled before the user application is called, I don't think it would be possible for third-party firmware to read or use that key. Can you think of any other attack I might be overlooking?

Thanks for all your help. This has been very enlightening.

Michael

Hi Michael,

you could use different keys in CKS: one dedicated to the official application and another that would be shared. But in that case the shared key is no longer a secret ...

You can also use CKS to store a key that encrypts your private key. Only SBSFU would be able to decrypt and use the private key. The CKS key would then be locked and the private key erased from RAM.
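One detail on the "erased from RAM" step: a plain memset on a buffer that is never read again may be optimized away by the compiler, so the wipe should go through a volatile pointer. A minimal sketch:

```c
#include <stddef.h>
#include <stdint.h>

/* Wipe sensitive material (e.g. the decrypted private key) in a way the
 * compiler cannot elide: each store goes through a volatile pointer. */
static void secure_wipe(void *buf, size_t len)
{
    volatile uint8_t *p = (volatile uint8_t *)buf;
    while (len--) {
        *p++ = 0U;
    }
}
```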

There is no way for the application to access CKS, so the user application will not be able to get it.

Best regards

Jocelyn

Thanks, Jocelyn.

The second idea sounds perfect. This would allow us to use ECIES, so that third parties have essentially the same security options as we do, while ensuring that those security options are, in fact, secure.

I appreciate your input. This is exactly the kind of information I was looking for when I made this post.

Michael

I've just run into a surprise with our bootloader, and I'm wondering if it might be related to any known issues. We are using CKS to store an AES key, and within the bootloader we use this to decrypt an encrypted ECC key. Following the examples given in the X-CUBE-SBSFU package, I use this function to lock the key before running either the user application or the standalone loader:

void SE_CRYPTO_Lock_CKS_Keys(void)
{
	/* Lock the key 1 */
	HAL_NVIC_EnableIRQ(IPCC_C1_RX_IRQn);
	HAL_NVIC_EnableIRQ(IPCC_C1_TX_IRQn);
	SHCI_C2_FUS_LockUsrKey(SBSFU_AES_KEY_IDX);
 
	/* Unload the keys by resetting AES1 using AES1RST bit in RCC_AHB2RSTR
	   As keys are locked they cannot be re-loaded */
	__HAL_RCC_AES1_FORCE_RESET();
	__HAL_RCC_AES1_RELEASE_RESET();
}

This is called from SE_LockRestrictServices, again as is done in the 1_Image example for the P-NUCLEO-WB55.Nucleo. However, in my user application I am able to load the same key and successfully use it to decrypt the ECC key.

I have verified that the function above is actually being called before the user application is executed.

I have also verified that if I lock the key in my user application, immediately before the test, the test fails as expected.

I'm wondering if this is a known issue.

I'm also wondering if this line from the CM0_DeInit function might be causing a problem:

  /* Allow the UserApp to fake a set C2BOOT as it has already been set */
  SHCI_C2_Reinit();

As I understand it, C2 isn't actually reinitialized when this is called, but is there any chance the CKS keys get unlocked when that function is called?

crwper
Senior

I've done some more testing, and I think I've verified my hypothesis above.

For this test, I built a simple application which works on the P-NUCLEO-WB55. I'm happy to share the full application if it will be helpful. The application tests three things:

  1. It verifies that AES-256 CTR is working correctly using a key stored in CKS.
  2. It verifies that if SHCI_C2_FUS_LockUsrKey is called, a subsequent call to SHCI_C2_FUS_LoadUsrKey fails.
  3. It shows that re-initializing C2 between these two calls with SHCI_C2_Reinit allows the key to be loaded afterward.

The body of the test looks like this:

void Test_Crypto(void)
{
	CRYP_HandleTypeDef hcryp1;
 
	uint32_t i;
	uint32_t j;
 
	SHCI_CmdStatus_t res;
 
	/* AES1 HW peripheral initialization */
	hcryp1.Instance = AES1;
 
	if (HAL_CRYP_DeInit(&hcryp1) != HAL_OK)
	{
		Error_Handler();
	}
 
	hcryp1.Init.DataType = CRYP_DATATYPE_8B;
	hcryp1.Init.DataWidthUnit = CRYP_DATAWIDTHUNIT_BYTE;
	hcryp1.Init.KeySize = CRYP_KEYSIZE_256B;
	hcryp1.Init.Algorithm = CRYP_AES_CTR;
	hcryp1.Init.pKey = NULL; /* Key will be provided by CKS service */
 
	for (i = 0U; i < 4U; i++)
	{
		j = 4 * i;
		if (i != 3U)
		{
			aes_iv[i] = random_bytes[j] << 24 | random_bytes[j + 1] << 16 | \
					random_bytes[j + 2] << 8 | random_bytes[j + 3];
		}
		else
		{
			aes_iv[i] = 0;
		}
	}
 
	hcryp1.Init.pInitVect = aes_iv;
 
	if (HAL_CRYP_Init(&hcryp1) != HAL_OK)
	{
		Error_Handler();
	}
 
#ifdef LOCK_USER_KEY
	res = SHCI_C2_FUS_LockUsrKey(CKS_KEY_INDEX);
	if (res != SHCI_Success)
	{
		Error_Handler();
	}
#endif
 
#ifdef C2_REINIT
	res = SHCI_C2_Reinit();
	if (res != SHCI_Success)
	{
		Error_Handler();
	}
 
	main_cpu2_ready = 0;
 
	MX_APPE_Init();
 
	while (!main_cpu2_ready)
	{
		MX_APPE_Process();
	}
#endif
 
	res = SHCI_C2_FUS_LoadUsrKey(CKS_KEY_INDEX);
	if (res != SHCI_Success)
	{
		Error_Handler();
	}
 
	if (HAL_CRYP_Encrypt(&hcryp1, (uint32_t *)random_bytes, sizeof(random_bytes),
			(uint32_t *)output, 1000) != HAL_OK)
	{
		Error_Handler();
	}
 
	/* Unload user key */
	if (SHCI_C2_FUS_UnloadUsrKey(CKS_KEY_INDEX) != SHCI_Success)
	{
		Error_Handler();
	}
 
	if (HAL_CRYP_DeInit(&hcryp1) != HAL_OK)
	{
		Error_Handler();
	}
 
	if (memcmp(output, encrypted_bytes, sizeof(encrypted_bytes)))
	{
		Error_Handler();
	}
}

If neither LOCK_USER_KEY nor C2_REINIT are defined, the test passes as expected.

If only LOCK_USER_KEY is defined, the test fails as expected.

If both LOCK_USER_KEY and C2_REINIT are defined, the test passes. Is this the expected behaviour?

Jocelyn RICARD
ST Employee

Hello @crwper,

thank you for raising this point.

I'm not able to tell you if this is normal or not.

I will check internally.

Best regards

Jocelyn