Slow TLS communication with NetXDuo STM32MP13

Clement7
Associate II

Hello,
I'm starting to port working code from the STM32F7 to the STM32MP13. On the F7, I was using FreeRTOS + LwIP + mbedTLS. On the STM32MP13, I switched to ThreadX + NetX Duo as it's the only supported stack. For my tests, I used the examples provided by STM32CubeMP13 and adapted them on the STM32MP13-DK board.

I ran a few tests on both architectures. The TLS connection is established much more quickly on the STM32MP13 than on the STM32F7, which is reassuring:

  • RSA certificate (2048 bits): 52 ms on the MP13 vs 1400 ms on the F7.
  • ECC certificate (secp384): 700 ms on the MP13 vs 4000 ms on the F7.

However, when it comes to data transfer after the handshake, I see a 10x ratio. For example, to transfer 340 Kb, the MP13 takes 10 s while the F7 takes 1.4 s. For the record, I've done data transfer tests without TLS, and in that case the MP13 is faster than the F7. So I deduce that the problem comes from TLS. I don't know whether it's the stack itself or whether I'm misconfiguring NetX Duo.

What are the ideal NetX Duo TLS settings for this kind of data transfer?

Is anyone experiencing this kind of problem?

Thank you

Best regards.

4 REPLIES
Clement7
Associate II

Hi again,

I solved my problem by disabling NX_SECURE_ENABLE_AEAD_CIPHER. The STM32MP13 is much faster without this option: the 340 Kb data transfer now takes 500 ms, versus 10 seconds before disabling it.
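For reference, here is a sketch of the change, assuming the option is defined in nx_user.h as in the CubeMP13 examples (it may instead come from the project's preprocessor defines):

/* nx_user.h (sketch): commenting out this option removes the AEAD
   (AES-GCM/CCM) cipher suites from the build and, with them, TLS 1.3. */

/* #define NX_SECURE_ENABLE_AEAD_CIPHER */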

 

So I can't use TLS 1.3 with optimal performance, since it only uses AEAD ciphers.

And I can't contact most TLS web servers, since they generally only accept AEAD cipher suites.


My question now is: why does this option add a 20x slowdown to my TLS communication? I suppose it comes from the AES-GCM processing, but I wouldn't have expected such slowness. I saw in this post: https://community.st.com/t5/stm32-mcus-security/x-cube-azure-h7-hw-cryptographic-acceleration/td-p/626407/page/2 that you will not provide hardware acceleration. Is there any solution to improve performance?

 

Thanks for your reply.

Best regards

Hello @Clement7 ,

I doubt the ratio is really 20x. The lack of hardware crypto acceleration for AES-GCM will certainly impact performance greatly, but not by a factor of 20, so I would like to see the method with which you are measuring this performance degradation, and against which nominal value.

The only solution for you to improve performance is to implement your own hardware-leveraging crypto driver to be used by NetX, which can be quite the journey, as mentioned in the thread you linked.
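To give an idea, here is a rough sketch of what such a driver entry could look like, assuming NetX Crypto's NX_CRYPTO_METHOD structure; the hw_aes_gcm_* functions are hypothetical wrappers around the hardware AES engine that you would have to write yourself, and the numeric fields should be copied from the existing software entry (crypto_method_aes_128_gcm_16 in nx_crypto_generic_ciphersuites.c):

#include "nx_crypto.h"

/* Hypothetical user-written wrappers around the hardware AES engine
   (not provided by ST); they must follow the callback prototypes
   declared in NX_CRYPTO_METHOD. */
extern UINT hw_aes_gcm_init(NX_CRYPTO_METHOD *method, UCHAR *key,
                            NX_CRYPTO_KEY_SIZE key_size_in_bits, VOID **handle,
                            VOID *crypto_metadata, ULONG crypto_metadata_size);
extern UINT hw_aes_gcm_cleanup(VOID *crypto_metadata);
extern UINT hw_aes_gcm_operation(UINT op, VOID *handle, NX_CRYPTO_METHOD *method,
                                 UCHAR *key, NX_CRYPTO_KEY_SIZE key_size_in_bits,
                                 UCHAR *input, ULONG input_length_in_byte,
                                 UCHAR *iv_ptr, UCHAR *output,
                                 ULONG output_length_in_byte,
                                 VOID *crypto_metadata, ULONG crypto_metadata_size,
                                 VOID *packet_ptr,
                                 VOID (*hw_process_callback)(VOID *, UINT));

/* Hardware-backed replacement for the software AES-128-GCM method.
   Reference it from the ciphersuite table passed to
   nx_secure_tls_session_create() instead of the software entry. */
NX_CRYPTO_METHOD crypto_method_hw_aes_128_gcm_16 =
{
    NX_CRYPTO_ENCRYPTION_AES_GCM_16,  /* Algorithm identifier                        */
    128,                              /* Key size in bits                            */
    32,                               /* IV size in bits (copy from the SW entry)    */
    128,                              /* ICV (authentication tag) size in bits       */
    16,                               /* Block size in bytes                         */
    0,                                /* Metadata area size (driver context, if any) */
    hw_aes_gcm_init,                  /* Key setup on the hardware engine            */
    hw_aes_gcm_cleanup,               /* Release the hardware context                */
    hw_aes_gcm_operation              /* Encrypt/decrypt one TLS record              */
};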

Going back to software solutions: if you envision using the MP13 with a Linux build, you will find a lot of very optimized and robust software taking care of TLS and networking in general.

 Regards

In order to give better visibility on the answered topics, please click on Accept as Solution on the reply which solved your issue or answered your question.

Hello @STea,

The method used to measure this is the following (a minimal sketch of the measurement loop is shown after the list):

- Create an HTTPS client and force the cipher suite used to establish a secure connection

- Create a local HTTPS server on my computer which hosts a REST API. The GET reply sends 31 MB of data to the client.

- Request this GET from the Azure client 

- Measure the time between the request and the full reply reception 
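Here is a minimal sketch of that measurement loop, assuming a NetX Duo Web HTTP client already created and bound to an IP instance, with HTTPS support enabled; the resource path, host string and TLS setup callback are placeholders:

#include <stdio.h>
#include "nx_web_http_client.h"

/* Hypothetical TLS setup callback defined elsewhere (loads the CA
   certificate, provides the metadata buffers, selects the ciphers). */
extern UINT tls_setup_callback(NX_WEB_HTTP_CLIENT *client_ptr,
                               NX_SECURE_TLS_SESSION *tls_session);

/* Time a full HTTPS GET: send the request, drain the response body,
   then report the elapsed time from the ThreadX tick counter. */
static UINT measure_https_get(NX_WEB_HTTP_CLIENT *client_ptr, NXD_ADDRESS *server_ip)
{
    NX_PACKET *recv_packet;
    ULONG total_bytes = 0;
    ULONG start_ticks = tx_time_get();

    UINT status = nx_web_http_client_get_secure_start(client_ptr, server_ip,
                                                      NX_WEB_HTTPS_SERVER_PORT,
                                                      "/data",        /* placeholder resource */
                                                      "192.168.1.10", /* placeholder host     */
                                                      NX_NULL, NX_NULL,
                                                      tls_setup_callback,
                                                      NX_WAIT_FOREVER);

    /* Read the response body packet by packet until the transfer completes. */
    while (status == NX_SUCCESS)
    {
        status = nx_web_http_client_response_body_get(client_ptr, &recv_packet,
                                                      NX_WAIT_FOREVER);
        if ((status == NX_SUCCESS) || (status == NX_WEB_HTTP_GET_DONE))
        {
            ULONG length;
            nx_packet_length_get(recv_packet, &length);
            total_bytes += length;
            nx_packet_release(recv_packet);
        }
        if (status == NX_WEB_HTTP_GET_DONE)
        {
            break;
        }
    }

    ULONG elapsed_ms = (tx_time_get() - start_ticks) * 1000UL / TX_TIMER_TICKS_PER_SECOND;
    printf("Received %lu bytes in %lu ms\r\n", total_bytes, elapsed_ms);

    return status;
}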

 

My Ethernet configuration is broadly the same across all my tests: Maximum Segment Size, TCP window, buffer sizes, etc. (illustrated below).
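As an illustration of the kind of parameters I keep aligned, here is a sketch; the function name and the sizes are placeholders, not the values I actually use:

#include "nx_web_http_client.h"

#define PACKET_PAYLOAD_SIZE   1536                       /* placeholder payload size   */
#define PACKET_POOL_AREA_SIZE (PACKET_PAYLOAD_SIZE * 32) /* placeholder pool size      */
#define TCP_WINDOW_SIZE       (8 * 1024)                 /* placeholder receive window */

static UCHAR pool_area[PACKET_POOL_AREA_SIZE];
static NX_PACKET_POOL packet_pool;
static NX_WEB_HTTP_CLIENT http_client;

void net_config_sketch(NX_IP *ip_ptr)
{
    /* Packet pool shared by the Ethernet driver and the TCP/TLS layers. */
    nx_packet_pool_create(&packet_pool, "Main pool",
                          PACKET_PAYLOAD_SIZE, pool_area, sizeof(pool_area));

    /* The HTTPS client takes the TCP receive window at creation time;
       the same value is used on each platform under test. */
    nx_web_http_client_create(&http_client, "HTTPS client", ip_ptr,
                              &packet_pool, TCP_WINDOW_SIZE);
}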

Here are my surprising results, especially those concerning the MP1 and H7 under the Azure stack.

[Image: benchmark results table (Clement7_0-1726126185945.png)]

 

I think I'm missing something on the MP1 side, but I don't know what.

Thanks for your reply,

Regards

Hello @Clement7 ,

There seems to be some sort of bottleneck on the MP1 on the Azure side, and I can't see why it is happening. The degradation between the Azure and non-Azure benchmarks is certainly due to the lack of hardware acceleration for symmetric crypto, but the result should be in the same range as the H7 series, for example. I recommend you check the Ethernet and Azure configuration, the clocks and the I/O speeds, and try to run both tests in similar software environments to see why this is happening.
Regards 

In order to give better visibility on the answered topics, please click on Accept as Solution on the reply which solved your issue or answered your question.