2025-09-27 4:10 AM
Hello ST Team and Community,
I'm working with the STM32MP257F-DK board and exploring the provided AI demo examples. I noticed that most of the camera-related demos (especially those involving image inference or real-time processing) are designed around libcamera as the default source, as indicated in code snippets like:
parser.add_argument("--camera_src", default="LIBCAMERA", help="use V4L2SRC for MP1x and LIBCAMERA for MP2x")
While I understand that libcamera is appropriate for MIPI-CSI cameras (which MP2x supports), I'm currently using a USB UVC camera, which libcamera does not support.
From what I know, USB webcams work well with V4L2 but are ignored by libcamera due to its pipeline design.
Are there any official plans or guidance for supporting USB cameras on STM32MP2x using V4L2, especially in AI demos?
Why is V4L2 support not more visible or included as an option in the MP2x examples, considering the popularity of USB webcams for quick prototyping?
Would ST consider providing dual-path examples (libcamera and V4L2) for broader compatibility?
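To illustrate the kind of dual-path selection I have in mind, here is a minimal sketch in Python. The pipeline strings, caps, and default device path are my own illustrative assumptions, not the actual demo code:

```python
def build_source(camera_src: str, device: str = "/dev/video0") -> str:
    """Return a GStreamer source description for the chosen backend.

    camera_src: "V4L2SRC" for USB UVC cameras (MP1x or a UVC cam on MP2x),
    "LIBCAMERA" for MIPI-CSI cameras on MP2x. The caps and device path
    below are placeholders, not the values the official demos use.
    """
    backend = camera_src.upper()
    if backend == "V4L2SRC":
        # A USB UVC camera exposes a plain V4L2 video node.
        return f"v4l2src device={device} ! video/x-raw,width=640,height=480"
    if backend == "LIBCAMERA":
        # libcamerasrc drives the MIPI-CSI / DCMIPP capture path.
        return "libcamerasrc ! video/x-raw,width=640,height=480"
    raise ValueError(f"unknown camera_src: {camera_src}")
```

With something like this, the existing `--camera_src` argument would cover both camera types without any patching on the user's side.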
I believe many developers would benefit from having clearer USB camera support, especially when MIPI modules are not readily available.
Any advice, roadmap updates, or workarounds would be greatly appreciated.
Thanks in advance!
2025-09-29 6:09 AM - edited 2025-09-29 6:15 AM
Hello @Duc ,
You should still find V4L2-based demonstration examples for the MP1 series in the X-LINUX-AI Expansion Package. As far as I know, the AI team split their examples into a libcamera-based dual pipe for MP2 and a V4L2-based single pipe for MP1.
Then, he who can do more can do less: a UVC camera is just a simplified case of the V4L2-based CSI examples. You do not have to configure the DCMIPP and so on; you just have to pick up the video device created for your UVC camera.
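As an illustration of "picking up the video device", something like the sketch below could find the UVC node among the `/dev/video*` entries. The sysfs `name` files are standard V4L2; the name-matching heuristic and the `dcmipp` name shown in the usage note are just examples, not official code:

```python
import glob
import os

def find_uvc_node(names_by_node=None):
    """Return the first /dev/video* node whose name suggests a UVC camera.

    names_by_node: optional {node: name} mapping, useful for testing off
    the target. When None, the function reads the driver-reported name
    from /sys/class/video4linux/video*/name on the board.
    """
    if names_by_node is None:
        names_by_node = {}
        for path in sorted(glob.glob("/sys/class/video4linux/video*")):
            node = "/dev/" + os.path.basename(path)
            try:
                with open(os.path.join(path, "name")) as f:
                    names_by_node[node] = f.read().strip()
            except OSError:
                continue
    for node, name in sorted(names_by_node.items()):
        # UVC webcams typically report a name containing "UVC" or the
        # product string, whereas DCMIPP pipe nodes report dcmipp_* names.
        if "uvc" in name.lower():
            return node
    return None
```

On a board with both a CSI camera and a USB webcam plugged in, this would skip the DCMIPP capture nodes and hand back only the UVC one, which you can then pass to v4l2src.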
Today, there is no plan to port all the applications to both MIPI-CSI and UVC cameras, but you can adapt them on your side based on the provided examples.
Kind regards,
Erwan.
2025-09-30 7:38 AM
Hi @Erwan SZYMANSKI,
Yeah, I’m aware that I can use my board with V4L2. However, the main point I was trying to raise is that the demo provided doesn’t cover all aspects, which leaves customers like me feeling somewhat dissatisfied. Nevertheless, I appreciate your reply and thank you for that.
2025-10-01 10:49 PM
@Duc ,
Understood, thank you for your feedback. As one of the main features used for AI is the dual DCMIPP pipes (one HW pipe for display, one HW pipe for the NN), the AI team preferred to focus on this CSI use case to take advantage of this MP2-specific feature.
I will let them know your feedback on it.
Kind regards,
Erwan.
2025-10-02 6:30 AM
Hi @Erwan SZYMANSKI ,
Thanks for your response and for considering the feedback.
I understand that the dual DCMIPP pipe architecture, with one pipe dedicated to display and the other to neural network inference, is a standout feature of the MP2, and I agree that it's a strong point for AI-related use cases. However, I also believe that there's nothing fundamentally preventing the MP2 from working with UVC cameras. The hardware is solid, and the demo concepts are great, so logically, there shouldn't be a reason why it can't support UVC cameras as well, right?
When I first received the MP2, I tried running the demo with a UVC camera, which is quite common in development setups, and even something as basic as camera preview didn’t work. It took me a while to realize the issue, and even more time to go in and patch the code myself just to get it running.
While I can certainly fix and adapt the software if needed, I believe it’s worth considering that from a completeness standpoint, the software should support what the hardware is clearly capable of. Otherwise, it feels like a gap, especially from a user’s perspective.
Also, I imagine there will be other customers who will want to use UVC cameras simply because of their availability and popularity. It would be unfortunate if they too had to spend extra time modifying the code just to get basic functionality working, time that could be better spent exploring the actual capabilities of the platform.
So again, thank you for hearing me out. I really hope the team will consider adding more flexible camera support in future releases. I think it would make the platform more robust and developer-friendly overall.
Duc Le