2024-06-17 12:39 PM
Hi, I am working on a real-time audio DSP product using the STM32H7 with an AS4C4M16SA-7BCN SDRAM chip for long delay line memory. I am using the FMC controller with the settings in the attached photo:
The product processes an incoming audio stream in real time, so this is a very runtime-critical application. I have found that reads and writes to the delay memory in SDRAM are by far the biggest drag on overall performance.
Currently I just let the C++ compiler/linker place the delay line in SDRAM, declaring it as follows and accessing it like any other variable:
static float delay_line[48000 * 30] __attribute__((section(".sdram"))); // 48000 Hz sample rate * 30 seconds
I am wondering if there are any ways to optimize SDRAM reads and writes to get better performance, either through how I structure my code, or through settings in the CubeMX configurator.
In particular, would it be faster to do sequential reads from consecutive SDRAM locations into a buffer in on-chip memory, rather than accessing individual locations at random based on my code's behavior? Is there a vector-style function that can quickly copy a block of data from SDRAM to local memory? Would this approach be likely to provide a noticeable performance increase?
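For illustration, here is roughly the block-copy pattern I have in mind (just a sketch; BLOCK_SIZE, local_block and process_block are placeholder names I made up, and it ignores wrap-around at the end of the delay line):

#include <cstring>
#include <cstddef>

static float delay_line[48000 * 30] __attribute__((section(".sdram"))); // as declared above

constexpr std::size_t BLOCK_SIZE = 256;   // samples per processing block (placeholder size)
static float local_block[BLOCK_SIZE];     // buffer left in internal SRAM

void process_block(std::size_t read_pos)
{
    // One sequential burst read from SDRAM into internal memory
    std::memcpy(local_block, &delay_line[read_pos], BLOCK_SIZE * sizeof(float));

    // ... all per-sample work happens on local_block here ...

    // One sequential burst write back to SDRAM
    std::memcpy(&delay_line[read_pos], local_block, BLOCK_SIZE * sizeof(float));
}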
Please advise, thanks!
2024-07-12 12:24 AM - edited 2024-07-12 12:26 AM
Hi,
by chance I just saw a CPU that might be perfect for your project, in the same price range as the H7:
771-MIMXRT685SFVKB (Mouser)
- 4.5 MB of system SRAM accessible by both CPUs
- Arm Cortex-M33 processor, running at frequencies of up to 300 MHz
- Two coprocessors for the Cortex-M33: a hardware accelerator for fixed and floating point DSP functions (PowerQuad) and a Crypto/FFT engine (Casper).
- Cadence Xtensa HiFi4 Audio DSP processor, running at frequencies of up to 600 MHz.
- Hardware Floating Point Unit. Up to four single-precision IEEE floating point MACs per cycle.
So: enough RAM for your idea, a fast CPU, and an even faster DSP (about 4x the speed of the H7)...
It's made by NXP; afaik ST has nothing comparable to this.