Package management, why to copy all files?

JKapp.1
Associate III

I'm highly confused. Coming from Keil MDK, I really enjoyed the CMSIS-Pack approach to managing dependencies: an MCU- (and vendor-) independent and efficient approach that works on the command line and in various IDEs (Keil MDK, VS Code with plugin, Eclipse). See the Arm Keil Microcontroller Development Kit (MDK) Getting Started Guide, Arm Keil | CMSIS Packs, CMSIS: Introduction, and Arm Keil | Arm CMSIS.

The key advantage was that there's a global config file to define all dependencies (like CMSIS-Core vxxx, or BSP, etc.) at a specific version, and the package manager then downloads these packages on demand. There's no need to check all libraries/dependencies into source control or add them to the source tree at all - they are just linked automatically.
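To make that concrete, here is a minimal sketch of such a dependency file in the CMSIS-Toolbox csolution format; the pack names, versions, and paths below are illustrative, not taken from a real project:

```yaml
# csolution.yml - illustrative project manifest. The pack manager
# (e.g. cpackget) resolves and downloads the listed packs on demand,
# so none of the pack content needs to live in the source repository.
solution:
  compiler: GCC
  packs:
    - pack: ARM::CMSIS@5.9.0            # CMSIS-Core pinned to a version
    - pack: Keil::STM32F4xx_DFP@2.17.1  # device family pack (illustrative)
  projects:
    - project: ./app/app.cproject.yml   # only user code lives in the repo
```

Only this manifest and the user code are checked in; everything else is fetched and cached by the tooling.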

Now that ST's packages all use CMSIS, why do all the examples contain hundreds of MB of libraries that could be installed via a package manager? That feels like a huge step back.

Maybe I'm misunderstanding something: there's the "Embedded Software Packages" menu in CubeIDE, but I can't add packages to the current project, so how are they supposed to be used?

Thanks,

Jan

16 REPLIES
JKapp.1
Associate III

This has nothing to do with a debugger. I think we're not discussing the same thing here.

From my PoV, STM32CubeIDE is not suited for scalable, professional, CI/CD-driven development. And that's fine, I just wanted to make sure.

It might be useful as an IDE for debugging, but apparently not for package/project/configuration management.

oxc07w2
Associate III

There are different approaches, each with its own advantages and disadvantages. And whichever one you use, you get used to doing it that way. To give you a different perspective, from a hardware and software developer:

"Hundreds of MB" is not a reason for me, since we are no longer in the 90s. Any system update takes up more space than a library. The reference manual alone takes about 30...50 MB. I prefer efficiency, but this is not the place where that battle is won, not today.

For me it is important to be as independent as possible. Exaggerated: If the world gets nuked, there is no internet any more, I lost my system, but I found my backup, then I will be able to restore the system and build the binary. For this it is helpful to have a few zip files containing all libraries I need and a way to import them. Downloading everything slice-by-slice (automatically) is disadvantageous for this.


@JKapp.1 wrote:

The key advantage was that there's a global config file to define all dependencies (like CMSIS-Core vxxx, or BSP, etc.) at a specific version, and the package manager then downloads these packages on demand. There's no need to check all libraries/dependencies into source control or add them to the source tree at all - they are just linked automatically.


In contrast, I don't like it when everything is done automatically, because that often leads to me not noticing/understanding what exactly is happening. I want to know what dependencies exist.

There are also other ways. Another MCU vendor provides one huge file containing everything for every MCU on the market, including all examples and all third-party libs. You copy or link whatever you need. Huge, but not bad in my opinion.

JKapp.1
Associate III

Of course there are always multiple ways. But some follow best practices and some do not, or something in between. And no one says that the best-practice way is the easiest in the short term, but it's likely the better one in the mid-term ;)

Hundreds of MB are always something to avoid when it comes to version control and CI/CD. 5 MB is cloned instantly; 300 MB takes a minute or two. Now do this 100 times a day per project (yes, sure, there's some caching which might help), and do it for 10 projects - it's just not efficient.

The reference manual does not live in the SCM btw.

It's also about separation of concerns - if I clone a user application I expect only user code to be in that, no libraries.

I get the point about nuking the world, but then instead of pulling out my hard disk backup, I pull up the GitLab backup and have everything working out of the box, assuming I've created a Docker image for the CI/CD beforehand, which is highly desirable for reproducible builds anyway. I don't download my packages from the internet all the time - just once. Afterwards they live in the Docker image or in my local package registry (GitLab or Artifactory). But they don't belong in my code repo.
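As an illustration of that point, such a CI image can be a short Dockerfile that bakes the toolchain and a pinned package snapshot in once; the package paths and versions below are hypothetical:

```dockerfile
# Hypothetical CI image: toolchain and vendor packages are installed once
# at image build time, so firmware builds are reproducible and need no
# internet access afterwards.
FROM debian:bookworm-slim
RUN apt-get update && apt-get install -y --no-install-recommends \
        make cmake gcc-arm-none-eabi ca-certificates \
    && rm -rf /var/lib/apt/lists/*

# Copy a pinned snapshot of the firmware packages from the local
# package registry into the image (path and version are illustrative).
COPY packages/STM32Cube_FW_F4_V1.28.0/ /opt/packs/STM32Cube_FW_F4/
ENV STM32CUBE_PATH=/opt/packs/STM32Cube_FW_F4
```

The application repo then contains only user code; the CI job references the libraries via `STM32CUBE_PATH` inside the image.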

It feels really weird to debate the advantages of package management. I would be laughed at if I told the Python, Go, JavaScript, Julia, ... people to ship their applications with all required libs. So why is it so different in C?

Again - I do understand the simplicity of having everything zipped in the same repo. Fine. But this is not how modern SW development works at scale. For home use I don't care and would do it however the vendor's examples work; that's fine. But not in a professional context.

A random find, from 2022: 3 Tips for Embedded Software Configuration Management (Beningo Embedded Group)

Edit: missed that one:

> In contrast, I don't like it when everything is done automatically, because that often leads to me not noticing/understanding what exactly is happening. I want to know, what dependencies exist.

Exactly! And instead of looking at some random folders in a repo, why not look at one single dependency file? How do you think dependency/vulnerability scanners like GitLab's Dependency List work?

Pavel A.
Evangelist III

So we have two almost unrelated discussions here:

(1) You acknowledge that CubeIDE does not come with a decent package manager like Arm (Keil) has. That's fine if we compare their price tags and target audiences. So much for CubeIDE.

CubeIDE itself is just Eclipse CDT. It will happily build, edit and debug a suitable Eclipse project, generated by CubeMX or whatever else. CubeIDE comes with some 'team' components inherited from Eclipse. One can use these or use whatever they like.

(2) Are poor CubeIDE users doomed to keep everything in one version control repo? No, they are not. CubeMX (standalone or inside CubeIDE) and its Software Packs manager does not help with creating well-structured projects - but other means exist for this. Professionals know their tools.

For every STM32 MCU and eval board, ST offers a consistent package that lets you quickly build tests or PoC applications. No more, no less. It is good for users, good for marketing, good for support, and mostly free of 3rd-party licensing troubles. Is this also good for real application development? Of course it is not. ST does not want to compete with its software ecosystem partners. That's fine.

 

Maybe we are talking at cross purposes. You don't need to copy the libraries into your project, and you don't need to download the libraries for every project. If you start a project for an MCU you have never used before, CubeIDE downloads all libraries for this MCU, puts them into a local "repository", and you link (not copy) the files you need, referenced from the library folder. You can add the library folder to the search path list and simply include the files. When starting a new project for the same MCU family, all libraries are already there.
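The "link, don't copy" approach described above can be sketched in a plain Makefile fragment, where only user code lives in the project and the HAL sources are referenced from the shared local repository; the repository path and file list below are illustrative (the path shown is the default CubeMX repository location on Linux/macOS):

```makefile
# Illustrative Makefile fragment: the shared STM32Cube repository stays
# outside the project; we only reference it via include paths and vpath.
CUBE_REPO ?= $(HOME)/STM32Cube/Repository/STM32Cube_FW_F4_V1.28.0

CFLAGS += -I$(CUBE_REPO)/Drivers/STM32F4xx_HAL_Driver/Inc \
          -I$(CUBE_REPO)/Drivers/CMSIS/Device/ST/STM32F4xx/Include \
          -I$(CUBE_REPO)/Drivers/CMSIS/Include

# Tell make where to find the HAL sources without copying them into the repo.
vpath %.c $(CUBE_REPO)/Drivers/STM32F4xx_HAL_Driver/Src

HAL_SRCS = stm32f4xx_hal.c stm32f4xx_hal_gpio.c stm32f4xx_hal_rcc.c
```

Checking out the project on another machine then only requires that the same Cube firmware version is present at `CUBE_REPO`.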

The backup of a project only contains the user code, not the libraries, but the references to the library folders. After restoring the backup you have to check if the libraries are present in the right version.


@JKapp.1 wrote:

It feels really weird to discuss about the advantages of package management. I would be so laughed at if I started to tell the Python, Go, JavaScript, Julia, ... people to ship their applications with all required libs. So why is that so different in C?


It does not depend on the language, but on the complexity of the software, the performance of the hardware, and where it is used. Writing in C allows hardware-related development and efficient code at the price of more work (MCU). Writing in Python or Java allows you to write complex software with less work at the price of the performance of the result (PC, smartphone or big MCU).

If we look at a company whose only product is standalone software, then you are probably right. But companies that use MCUs don't only develop software; they also have to develop PCBs and cases, and think about packaging, procurement of components, and so on. Development of simple software is only a small part, and the factors you have listed don't really matter, especially if the software developer doesn't only write code. The IDE must work for those people. For bigger companies, where tens or hundreds of people only write code, there are other solutions like Keil MDK. As a result, there is a solution for everyone from hobbyists up to big international companies. If there were no CubeIDE, there would be no solution for hobbyists up to medium-sized companies. There are other MCU manufacturers with higher entry barriers, and they are less successful on the market or cover other markets.

Yes, I think that's a good summary. The examples by ST are just examples or PoCs. That's it. I think that's a good mindset, but people I spoke to recently (in my company) take these as best-practice examples; perhaps that's where all this discussion comes from.


@oxc07w2 wrote:

Maybe we are talking at cross purposes. You don't need to copy the libraries into your project, and you don't need to download the libraries for every project. If you start a project for an MCU you have never used before, CubeIDE downloads all libraries for this MCU, puts them into a local "repository", and you link (not copy) the files you need, referenced from the library folder. You can add the library folder to the search path list and simply include the files. When starting a new project for the same MCU family, all libraries are already there.

 

The backup of a project only contains the user code, not the libraries, but the references to the library folders. After restoring the backup you have to check if the libraries are present in the right version.


Yes, I know the local repository, and it's the right approach imo. My mistake was that I started with ST examples, e.g. STMicroelectronics/fp-sns-datalog2 (the FP-SNS-DATALOG2 function pack), where the linking mechanism is not used - everything is indeed copied into the repo, which led to this thread. Other examples look similar. They don't utilize the package installer and the local lib repository; they copy everything into the project and thus don't use the mechanism you're describing.

I get now that these are examples and PoCs, and that's fine for a quick start. But they do not follow SW development best practices and thus might lead to the erroneous assumption that this is the way to structure projects in the ST ecosystem.

The other gap I'm seeing is that there's no way to automatically link the packages from the local repository into the CubeIDE project while taking dependencies and fixed version numbers into account; I guess that's the whole point here.
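One common workaround for pinning library versions without copying them into the repo is to reference them as git submodules; the superproject then records an exact commit for each one. A hypothetical `.gitmodules` for an STM32 project might look like this (paths are illustrative; the repository URLs shown are the public upstream ones):

```ini
# .gitmodules - hypothetical example. The superproject additionally pins
# each submodule to an exact commit, so checked-out versions are
# reproducible without the library sources living in the repo itself.
[submodule "Drivers/STM32CubeF4"]
    path = Drivers/STM32CubeF4
    url = https://github.com/STMicroelectronics/STM32CubeF4.git
[submodule "Middlewares/FreeRTOS-Kernel"]
    path = Middlewares/FreeRTOS-Kernel
    url = https://github.com/FreeRTOS/FreeRTOS-Kernel.git
```

This doesn't give the dependency-resolution of a real package manager, but it does keep library code out of the application repo while keeping versions fixed and auditable.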

Thanks all for continuing to answer me!