2020-08-10 11:47 PM
Hello, I'm trying to get a DMA transfer to work.
This does work fine:
//INIT DMA
static void MX_DMA_Init(void)
{
  /* DMA controller clock enable */
  __HAL_RCC_DMA1_CLK_ENABLE();

  /* Configure DMA request hdma_memtomem_dma1_stream1 on DMA1_Stream1 */
  hdma_memtomem_dma1_stream1.Instance = DMA1_Stream1;
  hdma_memtomem_dma1_stream1.Init.Request = DMA_REQUEST_MEM2MEM;
  hdma_memtomem_dma1_stream1.Init.Direction = DMA_MEMORY_TO_MEMORY;
  hdma_memtomem_dma1_stream1.Init.PeriphInc = DMA_PINC_ENABLE;
  hdma_memtomem_dma1_stream1.Init.MemInc = DMA_MINC_ENABLE;
  hdma_memtomem_dma1_stream1.Init.PeriphDataAlignment = DMA_PDATAALIGN_WORD;
  hdma_memtomem_dma1_stream1.Init.MemDataAlignment = DMA_MDATAALIGN_WORD;
  hdma_memtomem_dma1_stream1.Init.Mode = DMA_NORMAL;
  hdma_memtomem_dma1_stream1.Init.Priority = DMA_PRIORITY_LOW;
  hdma_memtomem_dma1_stream1.Init.FIFOMode = DMA_FIFOMODE_ENABLE;
  hdma_memtomem_dma1_stream1.Init.FIFOThreshold = DMA_FIFO_THRESHOLD_FULL;
  hdma_memtomem_dma1_stream1.Init.MemBurst = DMA_MBURST_SINGLE;
  hdma_memtomem_dma1_stream1.Init.PeriphBurst = DMA_PBURST_SINGLE;
  if (HAL_DMA_Init(&hdma_memtomem_dma1_stream1) != HAL_OK)
  {
    Error_Handler();
  }
}
//START DMA
if (HAL_DMA_Start(&hdma_memtomem_dma1_stream1, (uint32_t)tx_buffer, (uint32_t)d3buffer, RX_LENGTH) != HAL_OK)
{
  Error_Handler();
}
Yet, this does not work:
//INIT DMA
/* DMA controller clock enable */
__HAL_RCC_BDMA_CLK_ENABLE();

/* Configure DMA request hdma_memtomem_bdma1_channel3 on BDMA1_Channel3 */
hdma_memtomem_bdma1_channel3.Instance = BDMA1_Channel3;
hdma_memtomem_bdma1_channel3.Init.Direction = DMA_MEMORY_TO_MEMORY;
hdma_memtomem_bdma1_channel3.Init.PeriphInc = DMA_PINC_ENABLE;
hdma_memtomem_bdma1_channel3.Init.MemInc = DMA_MINC_ENABLE;
hdma_memtomem_bdma1_channel3.Init.PeriphDataAlignment = DMA_PDATAALIGN_WORD;
hdma_memtomem_bdma1_channel3.Init.MemDataAlignment = DMA_MDATAALIGN_WORD;
hdma_memtomem_bdma1_channel3.Init.Mode = DMA_NORMAL;
hdma_memtomem_bdma1_channel3.Init.Priority = DMA_PRIORITY_LOW;
if (HAL_DMA_Init(&hdma_memtomem_bdma1_channel3) != HAL_OK)
{
  Error_Handler();
}
//START DMA
if (HAL_DMA_Start(&hdma_memtomem_bdma1_channel3, (uint32_t)tx_buffer, (uint32_t)d3buffer, RX_LENGTH) != HAL_OK)
{
  Error_Handler();
}
What might be the reason for this? What is the fundamental difference between these DMA units (stream vs channel)?
I'm using an STM32H7B3 microcontroller. d3buffer is allocated as follows:
uint32_t __attribute__((section(".RAM_D3"))) d3buffer[16] = {0};
Linker:
MEMORY
{
...
RAM_D3 (xrw) : ORIGIN = 0x38000000, LENGTH = 128K
...
}

.RAM_D3 :
{
  . = ALIGN(4);
  _sRAM_D3 = .;  /* create a global symbol at RAM_D3 data start */
  *(.RAM_D3)     /* place objects declared with section(".RAM_D3") here */
  . = ALIGN(4);
  _eRAM_D3 = .;  /* create a global symbol at RAM_D3 data end */
} >RAM_D3
2020-08-11 05:24 AM
BDMA1 doesn't have access to SRAM3.
There is some useful information in the reference manual about these peripherals.
A BDMA "channel" is functionally equivalent to a DMA "stream". The latter uses the DMAMUX to map a stream to different request generators. In families without a DMAMUX, a DMA stream had to be mapped by selecting one of eight or so channels.
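Since a given DMA unit can only reach certain RAM regions through the bus matrix, one defensive pattern is to verify buffer addresses against that unit's accessible regions before starting a transfer. The sketch below is a minimal, hypothetical illustration of that check; the region values used here are placeholders, not the real bus-matrix limits for the H7B3 (take those from the reference manual's interconnect table for your exact part).

```c
#include <stdbool.h>
#include <stdint.h>

/* A memory region a particular DMA unit is allowed to access.
 * NOTE: the base/size values you populate this with are an assumption
 * to be filled in from the reference manual, not HAL-provided data. */
typedef struct {
    uint32_t base;
    uint32_t size;
} mem_region_t;

/* Return true if the whole buffer [addr, addr + len) lies inside
 * one of the given accessible regions. */
static bool addr_in_regions(uint32_t addr, uint32_t len,
                            const mem_region_t *regions, int count)
{
    for (int i = 0; i < count; i++) {
        uint32_t start = regions[i].base;
        uint32_t end   = start + regions[i].size;
        if (addr >= start && len <= end - addr)
            return true;
    }
    return false;
}
```

In application code you would call addr_in_regions() on both the source and destination (e.g. on (uint32_t)tx_buffer and (uint32_t)d3buffer with the byte length of the transfer) before HAL_DMA_Start, and fail loudly if either buffer is outside the unit's reach, rather than debugging a transfer that silently never completes.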
2020-08-12 02:18 AM
Thank you! So I need to use the DMAMUX (manually) when using BDMA? I will try that. But if they are equivalent, I might as well stick with the DMA stream, which works fine.