
Writing own drivers vs HAL for professional project

CDyer.1
Senior

Hi,

Firstly, I apologise if this is in the wrong section, as I'm asking more for technical advice than an answer to a specific technical question.

First a bit of background:

I'm a PhD student and the sole engineer on a three-year project that includes hardware design and embedded firmware programming. For the embedded side I've elected to use the STM32H7 line for its fast clock speeds and DSP capabilities, as I am going to be doing DSP-related work further down the line. I have spent the past two months working with the STM32F4 and learning the fundamentals, as it is a simpler device than the H7. My previous experience in embedded systems was 8-bit PIC as an undergraduate.

Issue:

For the first month with the STM32F4 I played around with STM32CubeMX for the clocks and UART, and HAL for the ADC peripheral, and got a fairly complicated system (read in from the ADC, do some stuff, output results over GPIO and over USART) up and running. Because I also want to learn the hardware itself, I spent the second month writing my own drivers from scratch and got GPIO with interrupts and SPI up and running. Then I moved to the H7, as I want to do some fairly intensive DSP work later on, and in two months I have only written GPIO drivers based on my previous STM32F4 work. It is taking me ages to understand the peripherals present in the H7, as they seem to be orders of magnitude more complicated than the F4 (the SPI is a good example of this).

If I go down the route of writing my own drivers, I also have to write drivers for the clock sources, the USART, the DMA, the ADC and the timers, and I may want to interface with a TFT display at some point (maybe TouchGFX?). My concern is that if I write my own drivers I won't have enough time before I get to the DSP part of the project, as there is just too much work to do (for someone who has to understand how each of the peripherals works first). My other concern, however, is that if I use HAL to get the job done quicker, I'm not learning enough about the system. I would like to have deep knowledge of the STM32 when this project is done, but my fear is that HAL will abstract away too much. The worst-case scenario for me is attempting to get a job after the project that requires register-level knowledge when I only have HAL knowledge.

I'd also like to note that I have no professionals here that I can go to for advice. If I get stuck on something, the project halts until I can find the solution.

Can anyone give me any advice on how to make this decision? Do I proceed with writing my own drivers, gaining a better understanding of the chip and more performance out of it, but run the risk of running out of time? Or do I just proceed with HAL, get everything up and running nice and quickly, and not worry about the details?

Any help or advice would be appreciated. I'm at the point in the project where I need to make this decision before I continue writing the drivers.

28 REPLIES

Unfortunately this is a commercial product (the PhD aspect has kind of been bolted on) and the final product is expected to operate constantly in an industrial environment for years. With the product it's replacing, some of the installations have been operating constantly for 20 years! While the process isn't quite as critical as your medical equipment example, the last thing I want is for some unforeseen bug in the HAL to bring the system crashing down. Not that my own code would be bug-free, but it would certainly be easier to fix any issues that may arise.

On your final point, that's my biggest concern. Writing a fully featured in-house library would be a project in and of itself, and realistically I have only a number of months to get to a baseline where I can start the DSP work. As I'm looking to implement a handful of peripherals and largely operate them in their simplest modes, I think it may be worth spending some time just writing basic drivers for what I need rather than writing something fully featured.

I know the ARM is the CPU and the STM32 is the MCU, containing the ARM core + ST's peripherals. I appreciate that, outside the ARM core, different vendors will vary the peripherals and how they are configured, but won't using HAL prevent me from gaining a more intimate knowledge of the peripherals, the basics of which can then be transferred to any system? SPI is an example: learning how to configure it correctly in HAL doesn't exactly give you the best understanding of how the MCU actually configures it. It would be extremely difficult to then go to NXP's LPC series and do the same thing, especially as they don't have a HAL equivalent (unless I'm mistaken); you'd have to do it at the low level. I have to admit that using HAL I got the fundamentals of the firmware done in about a month, which was great, whereas writing my own drivers I've barely gotten past toggling an LED with interrupts.

A lot of people are saying that the HAL isn't suitable for DSP or for time-critical or safety-critical applications; what are your thoughts on that? Just to clarify, this project should result in a final commercial product.

You might like the middle ground and use the LL libraries. They abstract away a little bit, not enough that your stuff comes out of the oven broken, but a little bit more than direct register access. Perhaps it's the compromise that'll satisfy?

DSP-wise, your efforts are going to be in CMSIS and possibly even optimizing beyond the compiler, say for SIMD. Heaven help you if you have to use the gcc assembler's funky mnemonics (or did they adopt ARM mnemonics? That would be cool). Depending on your timeline and budget, it may be worth setting up a dev environment you like with Visual Studio and Clang, or even paying for IAR.
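To give a feel for the CMSIS side, here's a minimal sketch of a block FIR filter. The function names are from the CMSIS-DSP library; the tap count, block size and coefficient values are placeholders picked purely for illustration:

#include "arm_math.h"   /* CMSIS-DSP header; link against the CMSIS-DSP library */

#define NUM_TAPS   16
#define BLOCK_SIZE 64

static float32_t fir_coeffs[NUM_TAPS];                  /* filter coefficients, to be filled in */
static float32_t fir_state[NUM_TAPS + BLOCK_SIZE - 1];  /* working state the library requires */
static arm_fir_instance_f32 fir;

void dsp_init(void)
{
    /* One-time setup of the FIR instance. */
    arm_fir_init_f32(&fir, NUM_TAPS, fir_coeffs, fir_state, BLOCK_SIZE);
}

void dsp_process(const float32_t *in, float32_t *out)
{
    /* Filter one block of samples. */
    arm_fir_f32(&fir, in, out, BLOCK_SIZE);
}

The library is already written to exploit the core's FPU and SIMD capabilities where it can, so hand-written assembly is usually only worth it for the last few percent.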

But the LL interface is as bare metal as programming the registers directly, most LL functions being one-liner inline functions reading or writing a single register, or a bitfield in it.

Consider the workflow.

Using the LL functions, you follow the instructions in the reference manual, but then, instead of accessing the register directly, you look up the LL function that does the same thing. Searching the headers, you find a #define creating an LL_whatever alias for it. Then you search for that alias to finally find the function that uses it.

Then your code has to be modified, either by someone else or by you a year later, when you have long forgotten what each line does. The documentation of the function is a Captain Obvious one-liner, so you have to wind it back, possibly through a chain of #defines, to the CMSIS register definition, which can then be looked up in the reference manual.

On the other hand, if you read 'to enable this feature on the ABC peripheral, set the GHI bits in the DEF register' in the reference manual, you can blindly write

ABC1->DEF |= ABC_DEF_GHI;

and the fellow who has to fix something there can go straight to the ABC chapter and look up the DEF register there.
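To make the contrast concrete with a real peripheral instead of the ABC/DEF placeholder, here's the same operation both ways, using an arbitrary pin (PA5); bit and function names are as they appear in the current CMSIS device and LL headers:

/* Direct register access: the GPIO chapter says writing BSy in BSRR sets output pin y. */
GPIOA->BSRR = GPIO_BSRR_BS5;

/* LL equivalent: a thin inline wrapper around the same BSRR write. */
LL_GPIO_SetOutputPin(GPIOA, LL_GPIO_PIN_5);

Both do exactly the same thing; the difference is which document the next person ends up reading to understand it.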

I don't think the LL is quite that bad, but your point about Captain Obvious one-liners is not wrong. The one-liners aren't mandatory either, although I take your point that they may be detrimental where they obscure the activity.

Two benefits of LL immediately come to mind. One, the configuration data structs help organise the code and aid in remembering, or at least force you to think about, all of the parameters involved in bringing up a peripheral; and two, the inits themselves aren't trivial one-liners. These are enough for me most of the time.
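For example, bringing up a pin through the LL init struct looks something like this (pin choice and parameters are arbitrary, and the AHB4 bus name assumes an H7):

/* Every parameter has to be stated, which is the "forces you to think" part. */
LL_GPIO_InitTypeDef gpio = {
    .Pin        = LL_GPIO_PIN_5,
    .Mode       = LL_GPIO_MODE_OUTPUT,
    .Speed      = LL_GPIO_SPEED_FREQ_LOW,
    .OutputType = LL_GPIO_OUTPUT_PUSHPULL,
    .Pull       = LL_GPIO_PULL_NO,
    .Alternate  = LL_GPIO_AF_0,
};

LL_AHB4_GRP1_EnableClock(LL_AHB4_GRP1_PERIPH_GPIOA);  /* GPIO ports sit on AHB4 on the H7 */
LL_GPIO_Init(GPIOA, &gpio);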

Whether or not it's worth it for a particular project is one of those judgment calls we have to make that justify calling our profession an art as well as a science.

@CDyer.1​ 

> the last thing I want is for some unforeseen bug in the HAL to bring the system crashing down.

There are a couple of missing volatile declarations in HAL just for this purpose.

@n2wx​

Mnemonics taken straight from ARM docs work in gcc. Getting the constraints right is not everybody's idea of fun though.
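For what it's worth, the constraint business looks roughly like this. Purely illustrative: CMSIS-Core already ships this as the __SMLAD intrinsic, so you'd rarely write it yourself.

#include <stdint.h>

/* SMLAD: dual 16-bit multiply-accumulate, available on Cortex-M4/M7 with the DSP extension. */
static inline int32_t my_smlad(uint32_t x, uint32_t y, int32_t acc)
{
    int32_t result;
    __asm volatile ("smlad %0, %1, %2, %3"
                    : "=r" (result)                  /* output: any core register */
                    : "r" (x), "r" (y), "r" (acc));  /* inputs: any core registers */
    return result;
}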

JoniS
Senior

Start with HAL and, when needed, write some parts yourself. At least you get a mostly working starting point, which you can use as a basis for your own code.

I.e. let HAL do the peripheral init and then write the code that handles, for example, SPI send and receive yourself. If you want, you can replace the init code later on with your own implementation once you know the rest of the peripheral driver works, which simplifies things a lot.
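A rough sketch of that split on an F4-class SPI (the H7 SPI uses TXDR/RXDR and TXP/RXP flags instead, so the loop would need adapting): CubeMX/HAL generates and calls the init, and the data path is hand-written at register level. hspi1 is the usual CubeMX-generated handle name, assumed here:

#include "stm32f4xx_hal.h"       /* device family assumed for illustration */

extern SPI_HandleTypeDef hspi1;  /* generated and initialised by CubeMX/HAL */

void spi1_start(void)
{
    /* HAL_SPI_Init() leaves the peripheral disabled; HAL normally sets SPE
       inside its own transmit/receive calls, so enable it once here instead. */
    __HAL_SPI_ENABLE(&hspi1);
}

uint8_t spi1_transfer(uint8_t tx)
{
    SPI_TypeDef *spi = hspi1.Instance;

    while (!(spi->SR & SPI_SR_TXE)) { }   /* wait until the TX buffer is empty */
    spi->DR = tx;                         /* start the transfer (8-bit frames assumed) */
    while (!(spi->SR & SPI_SR_RXNE)) { }  /* wait for the byte clocked in at the same time */
    return (uint8_t)spi->DR;              /* reading DR also clears RXNE */
}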

SPI is a good example. Suppose you already know the protocol but not the ST implementation. The STM32 SPI has an NSS bit and signal which is notorious for its peculiar design. You will, sooner or later, hit this feature, regardless of whether you are coming down from HAL or up from register level. And you may need to master NSS, so you read, debug and test a lot. This knowledge is pretty useless for another vendor's SPI. The same goes if you need to do advanced things like timer-triggered SPI transfers using DMA.
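The usual way people sidestep the NSS quirk (F4-class register and bit names shown here; on the H7 the same bits live in different registers) is software slave management plus an ordinary GPIO for the chip select:

SPI1->CR1 |= SPI_CR1_SSM | SPI_CR1_SSI;   /* master mode: ignore the NSS pin, hold internal NSS high */

/* ...and around each transfer, drive the chip select yourself (PA4 chosen arbitrarily): */
GPIOA->BSRR = GPIO_BSRR_BR4;              /* assert CS (drive PA4 low) */
/* transfer the bytes */
GPIOA->BSRR = GPIO_BSRR_BS4;              /* release CS */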

Time criticality may be an issue. HAL has a "design principle" of having only one handler (HAL_SPI_IRQHandler) for all SPI instances. Parts of the handler are generic, others are not. When you implement a completion callback, it typically starts with figuring out which SPI instance called it. This is overhead. I don't know the design rationale behind it; my guess is space efficiency, which is also important for some projects.
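In practice the pattern looks roughly like this (hspi1/hspi2 are the usual CubeMX-style handle names, assumed here):

extern SPI_HandleTypeDef hspi1, hspi2;

/* Each instance's interrupt vector just forwards to the one common HAL handler... */
void SPI1_IRQHandler(void) { HAL_SPI_IRQHandler(&hspi1); }
void SPI2_IRQHandler(void) { HAL_SPI_IRQHandler(&hspi2); }

/* ...and the single completion callback has to work out which instance fired. */
void HAL_SPI_TxRxCpltCallback(SPI_HandleTypeDef *hspi)
{
    if (hspi->Instance == SPI1) {
        /* SPI1-specific completion work */
    } else if (hspi->Instance == SPI2) {
        /* SPI2-specific completion work */
    }
}

That instance check on every completion is the overhead being referred to.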

Safety: Sorry, but I doubt that you can achieve any formal degree of safety (SIL, ISO 26262,...) with either approach. You need certified toolchains and libraries (like C runtime) for that, and you cannot be the one and only engineer.

There is a famous saying by Donald Knuth: "Premature optimization is the root of all evil."

And the Pareto principle: 80% of the runtime is spent in 20% of the code.

I like both of them.

I've got zero experience with the LL that ST provides, but I can see your point. The explanations for some of the HAL functions somehow tend to have less useful information in them than the function name itself. I'll take a look at what LL offers, as @n2wx is suggesting it helps in organising the code, if I understand correctly. But if it's barely different from programming the registers directly, then I'd rather stick to the reference manual as my literature of choice.

I've not actually looked too deeply into CMSIS, but I was under the impression that it was purely for the ARM stuff, not the vendor-specific stuff such as peripheral access, control, etc. Although I know I'll need it at some point for the DSP implementation.