How do I convert SD card (digital) data to RGB pixel format?

Predd.11
Associate II

I interfaced the STM32 LTDC (24-bit RGB) with HSYNC, VSYNC and DE as inputs to an LVDS transmitter. The input data to the LTDC comes from an SD card, but how do I convert the SD card data to RGB pixel format?

pacman
Associate III

By using a converter.

-That's the best answer I can give with the information I got.

If you have data written onto a SD card, the SD card probably holds some kind of file system.

If not, you're storing the raw data on a block-level.

If there is a file system on the card, it's maybe LittleFS, but likely some kind of FAT if the file was stored by a digital camera, camcorder or computer.

If the file was stored using a digital camera, camcorder or PC, you'd need to be able to read the files, which means you'd need a file system driver before you can read the file.

... Say you've read the file, now all you need to do is to decode it and convert it.

Depending on whether it's a single picture, a PDF file, vector graphics, a movie or something else, you'd have to find the file format specifications first.

-In some cases, it would be a much better approach to read the data on the SD-card using a computer, then convert it on the computer in advance - to a format that you've designed and optimized for highest possible speed and least 'work' by the microcontroller, then you store this new file on the SD-card instead of the original.

Having said all the above, RGB is easy. It's one pixel at a time, usually starting at the top/left corner, going right, then wrapping to the next line on the left when reaching the far right, and so on, until the bottom of the display is reached.

RGB pixels can be of several different formats, most common are RGB888 and RGB565, where each number represents the number of bits used for the particular component (R=Red component, G=Green component, B=Blue component).

Thus for RGB565, it would be ...

5 red bits, 6 green bits, 5 blue bits.

Looking like this in binary form:

%RRRRRGGGGGGBBBBB

(most significant bit on the far left, least on the far right).

Thus you'd quickly see that there are 16 bits = 2 bytes = one halfword.
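A minimal helper (my sketch, not from the thread) that packs 8-bit R, G, B components into that RGB565 layout by dropping the low bits of each component:

```c
#include <stdint.h>

/* Pack 8-bit R, G, B into one RGB565 halfword: %RRRRRGGGGGGBBBBB.
 * The low 3 bits of red/blue and low 2 bits of green are discarded. */
uint16_t rgb888_to_rgb565(uint8_t r, uint8_t g, uint8_t b)
{
    return (uint16_t)(((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3));
}
```

For example, pure blue (0, 0, 255) becomes 0x001F and white (255, 255, 255) becomes 0xFFFF.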

If you provide some more information, like what kind of data you want to show on your LCD/TFT display (e.g. where it comes from and whether it's photo, video or other kind of image data) and what bit depth you want to show it at, you may get more precise answers.

-If you've never worked with drawing low-level graphics before, I recommend that you try out some basic things, like filling the display with blue pixels (e.g. only blue color), changing the pixel value and filling the screen again, etc.

void pbox(int16_t x, int16_t y, uint16_t w, uint16_t h, uint16_t pixel)
{
  uint16_t *frameBuffer = (uint16_t *)FRAMEBUFFER_ADDRESS; /* supply your framebuffer base address */
  uint32_t displayWidth = 1024; /* pixels per line */
  int16_t xpos, ypos;
 
  for(ypos = y; ypos < y + h; ypos++){
    for(xpos = x; xpos < x + w; xpos++){
      /* frameBuffer is a uint16_t pointer (RGB565), so index in pixels, not bytes */
      frameBuffer[ypos * displayWidth + xpos] = pixel;
    }
  }
}
#define BLUE ((1 << 5) - 1)
pbox(7, 9, 837,413, BLUE);

-You need to supply the address of the frameBuffer (which I don't know).

(The above isn't production-quality, just a few basics).

You'll likely want to use some API calls; especially if you're going to decode JPEG (which STM32H7 has a hardware CODEC for).

Note: Some movie formats use JPEG for video.

Thanks for the information.

Here the video/audio/image data is going to be received by a WiFi module (ATWINC3400) and stored on an SD card interfaced with the STM32H743II. From the SD card, the video data is given as input to the LVDS transmitter.

Here my job is to convert the video data to 24-bit RGB pixel format; this RGB data is given as input to the LVDS transmitter to produce the differential signals.

How do I convert it to pixel format?

Still, you will need to know the format of the data you receive from the WiFi module.

Video, audio and image is not just one format, you know. 😉

-There are thousands of different formats. To name a few, just for uncompressed audio, there's ...

8-bit PCM, 16-bit PCM, 24-bit PCM, even 32-bit PCM.

Now these 4 different types can be played back at any frequency from 1 Hz to 384kHz (also outside that range, but just to keep this post short, we'll limit it a little).

Images are much more complicated, more pixels need to be processed per second than audio samples. For a 1024x768 screen resolution of 8 bits per pixel, you have 768 KByte to process per frame, and if you have 60 frames per second, that's 45 MByte/sec. Compare this to audio, where a good quality audio is 24-bit, 8 channels, 192kHz = 24 * 8 * 192000 / 8 = only about 4.4MByte/sec.

You'll need to find out where you're receiving the video/audio/image data from. If you're free to choose a format, I suggest you choose one that uses as little bandwidth as possible while being as easy to decode as possible as well.

In case there's a format that uses JPEG, this will probably be your favorite choice, because then you can use the built-in JPEG decoder in the STM32.

-On the other hand, if someone demands you to be able to decode a specific video/image/audio format, then things are a lot more difficult. In that case, I'd go and look at open-source repositories (for instance at GitHub) to figure out how to decode it.

-Eg. ffmpeg and mplayer might be a place to look, but it's not easy to learn the inner workings of these formats.

My favorite choice would be to create your own format, then on a computer or server, convert a standard video/image/audio stream into this format, and send it to the microcontroller.

This allows you to control the bandwidth and also to design a format that is fairly easy to decode.

It's not too difficult to convert from YUV to RGB; YUV uses fewer bytes per pixel and is still acceptable quality in most cases.
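For illustration, here is a common integer approximation of the BT.601 full-range YCbCr-to-RGB conversion (the variant JPEG uses); the coefficients are scaled by 256, and the function names are mine:

```c
#include <stdint.h>

static uint8_t clamp_u8(int v)
{
    return v < 0 ? 0 : (v > 255 ? 255 : (uint8_t)v);
}

/* Convert one YCbCr pixel (BT.601 full range) to RGB888.
 * Coefficients: 1.402*256 = 359, 0.344*256 = 88, 0.714*256 = 183, 1.772*256 = 454. */
void ycbcr_to_rgb888(uint8_t y, uint8_t cb, uint8_t cr,
                     uint8_t *r, uint8_t *g, uint8_t *b)
{
    int c = (int)y;
    int d = (int)cb - 128;
    int e = (int)cr - 128;

    *r = clamp_u8(c + ((359 * e) >> 8));
    *g = clamp_u8(c - ((88 * d + 183 * e) >> 8));
    *b = clamp_u8(c + ((454 * d) >> 8));
}
```

A useful sanity check: Y=128 with Cb=Cr=128 (neutral chroma) must come out as mid-gray (128, 128, 128).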

If you need to compress, you'll need a compression-format that decompresses quickly. LZ4 is a good example on such a format, but LZ4 might not be suitable for video compression as it doesn't compress that kind of data particularly well.

Note: Photos and real-world video does not compress well with LZ4, but cartoon movies would compress quite nicely, because there's often not so many colors per frame.

It might also be a good idea to split things up in 'tasks'.

1: Receive some data from WiFi (we "don't know what it is").

2: Figure out what kind of data it is; determine whether it's video, audio or still-image data.

3: Decode data to raw uncompressed RGB data, while keeping in mind that *every* bit of the input data can be invalid. Do not assume that you receive perfect data; assume that you receive only junk and handle every possible case. If you do not do this, your code could crash.

4: Copy the decoded raw image to the framebuffer while converting each pixel. If you're lucky, you can use the DMA2D for this.

5: Repeat.
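Step 4 in software form (a sketch of mine; the DMA2D can do this same RGB888-to-RGB565 pixel format conversion in hardware):

```c
#include <stdint.h>

/* Copy a decoded RGB888 image (3 bytes per pixel, row-major) into an
 * RGB565 framebuffer, converting each pixel on the way. */
void copy_rgb888_to_rgb565(const uint8_t *src, uint16_t *dst,
                           uint32_t width, uint32_t height)
{
    for (uint32_t i = 0; i < width * height; i++) {
        uint8_t r = src[3 * i + 0];
        uint8_t g = src[3 * i + 1];
        uint8_t b = src[3 * i + 2];
        dst[i] = (uint16_t)(((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3));
    }
}
```

The per-pixel loop is the slow path; on the STM32H7 you'd offload exactly this operation to the DMA2D when you can.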

If you're receiving both audio and image/video, then audio must have the highest priority.

The reason is that we hate disturbances in audio more than we hate disturbances in image/video.

You never know when a disturbance in video is just an 'intended effect', but your ears are not as forgiving and will not accept imperfections in audio. =)

-Yes, the most important part of a good movie is that the sound is perfect.

So in short: Focus on getting yourself a 'raw' RGB image (whether it's video or still-image), when you've got that, the rest of the way should be fairly easy.

Use or consult an image conversion library to see how to convert between different graphics formats and access individual pixels.

There are plenty of PC-based libraries available in source code.

pacman
Associate III

As Ozone says, using a conversion library will get you far with little work.

-As a starting point, you could take an image-processing application like Photoshop and convert an image into an easy uncompressed format like "bitmap", then transmit that image to your device, which will save it onto the SD-card.

When saving is completed, the device can read the SD card and convert the image pixel by pixel.

For movies, you could do the same thing, but I don't know of any 'official' raw uncompressed video formats.

-Still, you could probably use QuickTime to convert to a format that you can work with like "convert movie to image sequence". Again, uncompressed formats are huge and will take a long time to transfer over WiFi.

If you're thinking about transferring real-time, WiFi might not be the best choice. It looks like the module can stream 72Mbit/sec (which is 9 MByte/sec at most), so you can't send huge uncompressed videos in real-time.

Hmmm, what got me wondering: the OP just dropped some general questions, which do not speak of much mental investment in this topic.

I suspect he would like an "almost-ready-to-build" solution from you ...

Even if the H7 has more resources than the F families, I would still not skip estimating the bandwidth/performance limits of the device to be developed.
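As a rough back-of-the-envelope formula (my numbers, just to show the method), uncompressed video bandwidth is width × height × bytes-per-pixel × frames-per-second:

```c
#include <stdint.h>

/* Uncompressed video bandwidth in bytes per second. */
uint32_t video_bandwidth_bytes(uint32_t w, uint32_t h,
                               uint32_t bytesPerPixel, uint32_t fps)
{
    return w * h * bytesPerPixel * fps;
}
```

For a 480x272 display at RGB565 (2 bytes/pixel) and 30 fps this gives about 7.8 MByte/sec, already close to the 9 MByte/sec WiFi ceiling mentioned earlier, which is why some form of compression is hard to avoid.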

Well, everyone has to start somewhere. Yes, there's a lot of things to consider.

I think this project might be possible to pull off, but it will require some work and some workarounds. If the OP succeeds, then he'll probably be able to get someone else started one day. Yes, I agree, it's also important to check that the display resolution and bit depth aren't too high compared to what the peripherals can transfer. If the display is 480x272, then there would likely be no problem.