
USB_Audio -> I2S DAC Corrupted packet on buffer roll back

PCamp.2
Associate III

I am using the latest STM32CubeIDE and an STM32F411CE.

I used the CubeMX wizard to enable a USB Audio class device set to 48000 samples/s.

I configured an I2S peripheral (I2S2) with the master clock output enabled.

It plays whether I start the circular DMA in the AUDIO_Init function or in the AUDIO_CMD_START case within AUDIO_AudioCmd_FS().
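For reference, this is roughly where I kick it off in usbd_audio_if.c (a sketch only: hi2s2 is the handle name in my project, and the size conversion assumes 16-bit data):

/* Sketch of my usbd_audio_if.c - hi2s2 is my I2S handle; names may differ
   in your generated project.  size arrives in bytes, HAL wants 16-bit words. */
extern I2S_HandleTypeDef hi2s2;

static int8_t AUDIO_AudioCmd_FS(uint8_t *pbuf, uint32_t size, uint8_t cmd)
{
  switch (cmd)
  {
    case AUDIO_CMD_START:
      HAL_I2S_Transmit_DMA(&hi2s2, (uint16_t *)pbuf, (uint16_t)(size / 2U));
      break;
    case AUDIO_CMD_PLAY:
      break;
  }
  return (USBD_OK);
}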

However there is a corrupt packet at the end or start of the buffer.

I looked into the USBD_AUDIO_Sync() method and saw that it expands or contracts the buffer by 4 bytes. I have tried to use it: disabling circular mode and using the DMA complete callback to call USBD_AUDIO_Sync(), which then restarts the stream with the adjusted buffer size.

STILL there is a corrupted packet in the buffer.

What am I missing?

I could consider it plausible that the audio sync is trying to shorten the buffer but that my DMA runs straight on past it for some reason. However, I am currently relaunching the non-circular I2S DMA transfer with exactly the size that the sync call handed me.
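Roughly what I'm doing now (a sketch: hi2s2 is my handle, and audio_buf/audio_size are variables from my project, not from the generated code):

/* Non-circular DMA, restarted on every completion.  audio_buf/audio_size
   are my own bookkeeping for the current transfer, adjusted by the sync. */
extern USBD_HandleTypeDef hUsbDeviceFS;
extern I2S_HandleTypeDef  hi2s2;

static uint16_t *audio_buf;    /* points into the class's playback buffer   */
static uint16_t  audio_size;   /* current transfer size, adjusted by "sync" */

void HAL_I2S_TxHalfCpltCallback(I2S_HandleTypeDef *hi2s)
{
  USBD_AUDIO_Sync(&hUsbDeviceFS, AUDIO_OFFSET_HALF);
}

void HAL_I2S_TxCpltCallback(I2S_HandleTypeDef *hi2s)
{
  USBD_AUDIO_Sync(&hUsbDeviceFS, AUDIO_OFFSET_FULL);
  /* relaunch with whatever size the sync logic handed back */
  HAL_I2S_Transmit_DMA(&hi2s2, audio_buf, audio_size);
}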

Also... those USB_DEVICE libraries are not easy to deal with. Eclipse disowns them and will not resolve references in them; all the structs show up as anonymous types, for example.

Where is the documentation? The st.wiki.com site has a list of functions and nothing about how to use them. The older F0xx guides are out of date.

I can find exceedingly long technical papers teaching me ALL about how fancy the USB LL layer is and how USB works in extraordinary detail, but nowhere can I find a "user guide" to the libraries.

The example linked to here in other threads is for a very specific STM32 Discovery board which includes a codec, and most of the actual code is bespoke to that device. That is NOT a reference implementation for the library. It's full of BSP macros, and you will only have that BSP if you bought that specific Discovery board.

PCamp.2
Associate III

I believe the issue is described here.

https://community.st.com/s/question/0D50X00009XkYqISAV/audio-streaming-clock-sync

I haven't read it all yet. However I have suspicions around the function:

usbd_audio.c : void USBD_AUDIO_Sync(USBD_HandleTypeDef *pdev, AUDIO_OffsetTypeDef offset)

I don't think it's doing what it thinks it's doing. For example, in the scenario where it identifies that the read pointer has caught up with the write pointer, it tries to increase the buffer size by 4 bytes... the aim being that the consumer will have to read an extra 4 bytes.

That's great, but BufferSize is a local variable and it is simply thrown away in the AUDIO_OFFSET_HALF case.

The only time anything is actioned is in the AUDIO_OFFSET_FULL case, by which point it's a bit late: the buffer has already overrun. It is reset back to 0 and the stream returns to normal... until the next over/underrun.
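To illustrate, this is a simplified paraphrase of how I read the function; it is NOT the actual ST source, just the shape of the problem:

/* simplified paraphrase of the structure I'm describing - not the library
   source, just the shape of the bug as I read it */
void USBD_AUDIO_Sync(USBD_HandleTypeDef *pdev, AUDIO_OffsetTypeDef offset)
{
  uint32_t BufferSize = AUDIO_TOTAL_BUF_SIZE;   /* local variable */

  if (offset == AUDIO_OFFSET_HALF)
  {
    /* grows/shrinks BufferSize by 4 bytes here... and then the function
       returns, so the local change is simply thrown away */
    BufferSize += 4U;
  }

  if (offset == AUDIO_OFFSET_FULL)
  {
    /* only this branch actually does anything - but by now the write
       pointer has already wrapped and the corrupted packet has played */
  }

  (void)pdev;
  (void)BufferSize;
}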

Audibly, this is a squelching distortion every N seconds.

I tried to force-set the DMA NDTR register in the half-complete callback to add the 4 bytes, but... it didn't seem to work. I don't think you can write that register with any effect while the transfer is running.
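For reference, this is the sort of thing I tried (DMA1 Stream4 is what SPI2/I2S2 TX uses in my setup), and the F4 reference manual does indeed say NDTR can only be written while the stream is disabled:

/* what I tried in the half-transfer callback - NDTR is read-only while the
   stream's EN bit is set, so this silently does nothing mid-transfer */
void HAL_I2S_TxHalfCpltCallback(I2S_HandleTypeDef *hi2s)
{
  (void)hi2s;
  DMA1_Stream4->NDTR += 2U;   /* +2 half-words = +4 bytes: no effect while running */
}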

Piranha
Chief II

I also looked at the USBD_AUDIO_Sync() function. First, if one is expanding an audio buffer to add an additional frame (one sample for each channel), then that frame must be written with some sane value; in the simplest case one should at least duplicate the previous frame. Second, the buffer size is AUDIO_TOTAL_BUF_SIZE and that is the size which they are expanding. Therefore the playback engine will run past the end of the buffer and play data from the next members of the structure and/or padding bytes.
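A minimal sketch of the kind of fix-up meant here (the names are invented, 16-bit stereo assumed, so one frame = two samples):

/* when the buffer is grown by one frame, fill that frame with a copy of the
   last real frame instead of leaving stale/undefined bytes */
static void expand_by_one_frame(int16_t *buf, uint32_t *len_samples)
{
  uint32_t n = *len_samples;
  buf[n]     = buf[n - 2];   /* duplicate previous left sample  */
  buf[n + 1] = buf[n - 1];   /* duplicate previous right sample */
  *len_samples = n + 2;      /* caller must ensure the array has room */
}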

Edited by moderation team to adhere to community guidelines.

PCamp.2
Associate III

I also think their buffer expand/contract is backwards in at least one of the clauses. For example, when they determine the read pointer is AHEAD of the write pointer, i.e. a buffer underrun is UNDERWAY, it contracts the buffer rather than expanding it.

I tested this by making its BufferSize static. If their algorithm was working it wouldn't matter: when they contract or expand the buffer it should address the overrun and start it on a path towards underrun, at which point they would reverse the buffer size change again and it would modulate around a stable point.

It doesn't. It's unstable. As soon as the code triggers a buffer modification it just tangents off and ends up with either a really, really long buffer and chopped-up, distorted audio slowing in speed, or... the pitch rises until it stops with a buffer size of 0.
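Roughly the behaviour I expected to see (a sketch with invented names, one stereo frame = 4 bytes): measure the fill level and nudge the next transfer size towards keeping the buffer around half-full.

/* sketch only - nudge the next transfer so the fill level oscillates
   around half-full instead of running away */
static uint16_t next_transfer_size(uint32_t wr_ptr, uint32_t rd_ptr,
                                   uint32_t buf_size, uint16_t nominal)
{
  uint32_t fill = (wr_ptr - rd_ptr + buf_size) % buf_size;

  if (fill > buf_size / 2U)
    return (uint16_t)(nominal + 4U);   /* producer ahead: consume a frame more */
  if (fill < buf_size / 2U)
    return (uint16_t)(nominal - 4U);   /* producer behind: consume a frame less */
  return nominal;
}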

In my project I do want USB->I2S endpoints. Starting there was probably a mistake, as it forces me to deal with non-synchronised clocks: USB vs. I2S. After this experience I instead went off in search of a hardware USB->I2S bridge. It actually looks like a better option for basic 48 kHz 16-bit audio; say the PCM2706 (or even the fakes).

The larger project will have many front ends: USB, line in/out, headphone amp and BT audio endpoints. I intend to have a single sensible I2S master clock set to an actual audio-compatible frequency, not 49.5234235 MHz like the STM32, removing the buffer sync issue completely.

STM32s will still be used for merging streams (mixing) and routing streams between outputs. I'm not making a digital "mixer"; I am making a digital audio switch/router box.

PCamp.2
Associate III

I'm going to try setting the I2S clock PLL multiplier in the DMA complete callback instead.

Not sure if you can modify the PLL settings once the clock is running, but I can try.
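The sequence I had in mind, roughly (a conceptual sketch only; the N value is a placeholder). The PLLI2S has to be stopped before its multiplier can be rewritten, which is exactly why this will glitch the I2S clock while it relocks:

/* conceptual sketch - N = 192 is a placeholder, not a recommendation */
RCC->CR &= ~RCC_CR_PLLI2SON;                    /* stop PLLI2S            */
while (RCC->CR & RCC_CR_PLLI2SRDY) { }          /* wait until it is off   */

uint32_t cfg = RCC->PLLI2SCFGR;
cfg &= ~RCC_PLLI2SCFGR_PLLI2SN;                 /* clear the N multiplier */
cfg |= (192U << 6) & RCC_PLLI2SCFGR_PLLI2SN;    /* write new N            */
RCC->PLLI2SCFGR = cfg;

RCC->CR |= RCC_CR_PLLI2SON;                     /* restart PLLI2S         */
while (!(RCC->CR & RCC_CR_PLLI2SRDY)) { }       /* wait for lock          */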

EDIT: Actually, that isn't a solution either. Thinking forward to when my project has multiple I2S devices, modifying the master clock to please the USB drivers is a very, very bad option! It would just move the same kinds of problems onto every slave of that clock. The USB interface has to sync the frames.

AScha.3
Chief II

>their buffer expand/contract< is needed to sync, if you have two unsynchronised sources.

I had the same idea... as a first "idiot" workaround.

but I expected a reference example from STM to do it better... Oooo.

the way to go is: isochronous audio transfer!

this is: the target (the USB sound device) has the master clock and sends it as master to the DACs, or whatever. And to fill its input buffer, it tells the host (PC) "give me the next data block".

so no crazy "generate samples or leave holes" syncing is needed.

maybe this helps (I didn't try it...):

https://github.com/dmitrystu/libusb_stm32

and you have to use ISOCHRONOUS mode !!
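rough idea of how the "give me data at my rate" part usually works (not tested, just a sketch): with an asynchronous isochronous OUT endpoint the device reports its real rate back to the host on a small feedback IN endpoint. For full-speed UAC1 that is samples-per-frame in 10.14 fixed point, sent as 3 bytes:

#include <stdint.h>

/* build the 3-byte full-speed feedback value from the device's actual rate */
static void make_feedback(uint32_t sample_rate_hz, uint8_t fb[3])
{
  /* samples per 1 ms frame in 10.14: (rate / 1000) * 2^14 */
  uint32_t val = (uint32_t)(((uint64_t)sample_rate_hz << 14) / 1000U);
  fb[0] = (uint8_t)(val & 0xFFU);
  fb[1] = (uint8_t)((val >> 8) & 0xFFU);
  fb[2] = (uint8_t)((val >> 16) & 0xFFU);
}
/* e.g. 48000 Hz -> 48.0 * 2^14 = 0x0C0000 */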

PCamp.2
Associate III

Thanks for the tip on ISOCHRONOUS and getting the host to respond to my clock. That would be the better approach.

On that code repo however....

As a professional software engineer in my day job (enterprise Java, previously C++), that code you posted leads me to ask, very seriously, where the non-minimized, non-obfuscated developer version is. It literally looks like it has been written never to be read by a human again.

For example, check this masterpiece of obfuscation.

https://github.com/dmitrystu/libusb_stm32/blob/master/demo/cdc_startup.c

The "user code" isn't much better either.

https://github.com/dmitrystu/libusb_stm32/blob/master/demo/cdc_loop.c

Is that meant to be an example of how to use it? Jeez.

Looks like code written by nerds to impress nerds. I'd sack them or ask them to leave the team.

/rant.

AScha.3
Chief II

hmm, > code written by nerds to impress nerds <

is this better : -> device-> audio...

https://github.com/hathach/tinyusb

btw, show me how "good code" looks, an example please (just for me, to see what you like).

PCamp.2
Associate III

It's certainly better than the previous example, but it's just another black-box library. Either it works and I learn nothing, or it doesn't work and I'm not interested in wasting more time on it. If it works now, I commit to using it, and later it turns out to be worse in more ways, I end up binning the entire project.

Sticking with the HAL stuff and just writing around the bugs is probably much, much faster. That IS the point, yes? The point of a HAL and other libraries is to make applications rapid to develop. That makes the platform comfortable both for hobbyists and especially for professionals. Your boss will not take kindly to you spending 5 weeks in the datasheet to demonstrate your programming prowess when the junior engineer beside you has it done in 2 days with the HAL.

At least the HAL libraries understand that code is for HUMANS; binary is for computers. So writing things in (preferably) English and not acronym/operator soup is a good start.

The thing is, the HAL's coding "style" is fine in the vast majority of places. It has bugs and some of the approaches are annoying, but functionality and style are different things.

Compare that with a chained combination of three-letter acronyms like _ACR, mathematical logic operators like | and &, bit shifts with "magic numbers", and finally a few logical &&s and ||s: a single line of code with about 40 different operations mixed up in it. Great until it breaks. It's literally written in code; the code is the acronyms. It's like inventing a new language where you strip out all the stuff that allows people to read it and understand what it does without the documentation, which is the opposite of self-describing code written for humans.
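A made-up illustration of what I mean (the register and values are picked arbitrarily; the point is the style contrast, not the specific peripheral):

/* the "code golf" version: */
FLASH->ACR = (FLASH->ACR & ~0xFU) | 3U | (1U << 8) | (1U << 9) | (1U << 10);

/* vs. the version a colleague can still read in three weeks: */
uint32_t acr = FLASH->ACR;
acr &= ~FLASH_ACR_LATENCY;          /* clear old wait-state setting      */
acr |=  FLASH_ACR_LATENCY_3WS;      /* 3 wait states for 100 MHz @ 3.3 V */
acr |=  FLASH_ACR_PRFTEN            /* prefetch on                       */
     |  FLASH_ACR_ICEN              /* instruction cache on              */
     |  FLASH_ACR_DCEN;             /* data cache on                     */
FLASH->ACR = acr;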

Others will hate my code. I even add entire variables and functions I don't strictly need so that it reads and explains itself well. Besides, 99.9% of the "don't do that, it wastes x, y, z" objections have been nonsense since about 1990 and modern optimising C compilers.

That style of "compressed", "code golf" coding has no place outside nerd clubs. I'm not even joking: if code like that were submitted in a pull request at work it would be outright rejected and the person told to completely scrap it and start again. It's absolutely useless to anyone. If it works, great, but nobody is going to know how or why in 3 weeks. It's not like you can just read it out loud to yourself and understand it. You'd need to sit down with a pen, paper, spreadsheet and the datasheet for 2 or 3 weeks to have a clue what it's doing. It's pointless and will cost more time than it would cost to have it rewritten by a different engineer. An engineer writing code like that would not survive long in team projects.

It's also historically significant, because when C and similar languages were in their infancy a megabyte of hard drive space would cost you about £100,000. So making your variable names as short as possible and compressing your code "text" as much as possible was the way. Nobody cared about code being readable; just write it once and get it out the door.

This led to more than a decade of issues across the whole software industry, problems we are STILL solving today. It seems the embedded folks have a lot to learn: as their platform expands and becomes more and more generic, and code starts to hang around beyond one or two projects, you can no longer get away with that old, brittle, obfuscated way of programming.

A good little test: if you find yourself writing comments explaining your code, STOP! Stop right there. Delete that bit of code and rewrite it so you don't need the comment.

AScha.3
Chief II

hmm, I almost agree.

but still, you didn't show me some lines of your code... 🙂

for me, I need comments, like here, in an IR-remote decoder:

[screenshot of the commented IR decoder code] It's just helping me to understand what it should do if I see it some years later.

and I think it's the same for other people who see it now.

good comments replace a separate "user manual"; comments like that are what I expect.
