2024-10-20 05:26 PM
Hi,
I am using a VD55G1 sensor with the P-board to interface with a Raspberry Pi 5, on camera port #0. I downloaded STSW-IMG506_V4L2, STSW-IMG506_LIPA, and STSW-IMG506_LCAM, followed the steps to install the V4L2 driver from the "Linux Start Guide for ST BrightSense image sensors" document, updated the device tree to include pcb4189_vd55g1_cam0.dts, and added it to config.txt as well. I upgraded the system to the latest libcamera package and extracted both the LIPA and LCAM .deb files, but neither rpicam-hello nor libcamera-hello detects the camera. I also found that the libcamera version gets messed up after extracting the LIPA and LCAM libraries (see the attached image showing the version before and after installing the packages).
Debugging steps:
I did not find any camera device under /dev/, even though I connected the P-board to the Pi with an FPC cable and attached the promodule with the VD55G1 sensor.
I tried updating the device tree with pcb4280_vd55g1_cam0.dts instead, but the behaviour is the same.
I switched to camera port #1 and tried pcb4280_vd55g1_cam1.dts and pcb4189_vd55g1_cam1.dts, but still no change.
I made sure the libraries are the arm64 builds and that the camera is connected exactly as described in ST's documents.
The Linux user guide says to download the libraries from st.com, but I wanted to confirm whether the drivers have been updated for the Raspberry Pi 5.
2024-10-21 04:02 PM
Hello,
The procedure seems good. Indeed, our drivers have been updated to work on RPI 5.
To facilitate debugging, could you please reboot the board and share the log generated by the dmesg command? This will help us determine whether the issue originates from the DTS file or the driver.
Thank you, Megane.
2024-10-29 04:24 PM
Hi,
I figured out that the cable pinout was interpreted differently in the P-board datasheet (or maybe the cable I got was inverted), and I was also installing both the IPA and LIPA packages, which was corrupting the library. I did a fresh install of the drivers with only the IPA, verified that the libcamera version stays at v0.3.2, and can now access the camera using rpicam-hello and the libcamera tools.
Furthermore, I would like to access the CSI TX outputs (Data_p, Data_n, and the corresponding clock signals) directly from the P-module as inputs to my CSI-2 receiver block on an FPGA, but I am unsure about the configuration required to stream the data.
I found the "How to integrate and configure the VD55G1 sensor" manual, which describes the software states and how to move through them to reach the Streaming state by modifying the "register values from UI". Does UI here mean the register configuration from the I2C EVK GS software (as shown in the attached screenshot)?
I was considering interfacing the Raspberry Pi with the P-board while simultaneously probing the data and clock signals, so I can read the raw CSI inputs and decode the information on the FPGA with minimal development effort for the FPGA interfacing. Do you have any preferred implementation strategies to streamline fetching the raw CSI-2 data?
Thanks for your support!
2024-10-30 04:35 PM
Hi,
Good to know that it is working!
UI stands for User Interface, which denotes a type of memory accessible to the host. So yes, these are the same registers that you can change with the EVK GS software, and they are described in the user manual. The code examples we give are written in ST internal notation, but you can use your platform's I2C commands together with the sensor's register addresses to interact with the sensor.
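For illustration, here is a minimal sketch of what such host-side access could look like on a Linux platform, assuming the Python smbus2 package. The bus number, device address, and register addresses/values below are placeholders only; please take the real ones from the VD55G1 register map in the user manual:

```python
# Minimal sketch of host-side register access to the sensor over I2C.
# Assumes the smbus2 Python package; all addresses/values are placeholders,
# take the real ones from the VD55G1 register map in the user manual.
from smbus2 import SMBus, i2c_msg

I2C_BUS = 1           # /dev/i2c-1 on a Raspberry Pi (placeholder)
SENSOR_ADDR = 0x10    # 7-bit I2C address of the sensor (placeholder)

def write_reg(bus, reg, value):
    """Write one byte to a 16-bit register address (address sent big-endian)."""
    msg = i2c_msg.write(SENSOR_ADDR, [reg >> 8, reg & 0xFF, value & 0xFF])
    bus.i2c_rdwr(msg)

def read_reg(bus, reg):
    """Read one byte back from a 16-bit register address."""
    addr = i2c_msg.write(SENSOR_ADDR, [reg >> 8, reg & 0xFF])
    data = i2c_msg.read(SENSOR_ADDR, 1)
    bus.i2c_rdwr(addr, data)
    return list(data)[0]

with SMBus(I2C_BUS) as bus:
    # Hypothetical example: send the "start streaming" command, then read back
    # the system FSM state register to confirm the sensor reached Streaming.
    STREAM_CMD_REG = 0x0000   # placeholder register address
    SYSTEM_FSM_REG = 0x0001   # placeholder register address
    write_reg(bus, STREAM_CMD_REG, 0x01)
    print("FSM state:", hex(read_reg(bus, SYSTEM_FSM_REG)))
```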
Regarding your last question: Are you planning to use a scope? If it helps, a standard board we use is the U96, which incorporates an FPGA.
2024-10-31 04:20 PM - edited 2024-11-01 09:47 AM
Hi,
I used the software to configure the registers (basic operations such as switching between Streaming and Standby, etc.) and it works fine.
In order to test my CSI receiver on the FPGA, I would first like to analyze the output on the Data_p and Data_n signals for at least one complete frame, including the header information, so I can align with the MIPI D-PHY documents. That will help me understand the nature of the signals (e.g., the endianness and other packet information) and fine-tune my CSI receiver code.
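For reference, my current understanding of the long-packet framing comes from the generic MIPI CSI-2 specification rather than from ST documentation; the sketch below is how I expect to decode the header bytes (the data-type codes are the standard RAW values, and the ECC byte is not validated here):

```python
# Sketch of decoding a CSI-2 long-packet header from already-deserialized bytes,
# based on the generic MIPI CSI-2 packet layout (Data Identifier, Word Count, ECC).

DATA_TYPES = {0x2A: "RAW8", 0x2B: "RAW10", 0x2C: "RAW12"}  # standard CSI-2 codes (subset)

def parse_long_packet_header(header: bytes):
    """Parse the 4-byte packet header: Data Identifier, 16-bit Word Count, ECC."""
    di, wc_lsb, wc_msb, ecc = header[:4]
    virtual_channel = (di >> 6) & 0x3       # VC in the two MSBs of the Data Identifier
    data_type = di & 0x3F                   # data type in the six LSBs
    word_count = wc_lsb | (wc_msb << 8)     # payload length in bytes, little-endian
    return {
        "virtual_channel": virtual_channel,
        "data_type": DATA_TYPES.get(data_type, hex(data_type)),
        "word_count": word_count,
        "ecc": ecc,                          # ECC not checked in this sketch
    }

# Example: a RAW10 line header for a 640-pixel line (640 * 10 / 8 = 800 bytes = 0x0320)
print(parse_long_packet_header(bytes([0x2B, 0x20, 0x03, 0x00])))
```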
To achieve this, I need to start capturing signals exactly when the device enters streaming mode for synchronization. However, I'm unsure if the Raspberry Pi can log the raw CSI input signals before they are processed by the Linux kernel's CSI block, or if the EVK development board is better suited for this task.
Using a scope would be helpful (I'm planning to get a logic analyzer soon), but how do I synchronise the signal logging with the exact start of the capture, especially if I'm using the Python code from the Linux startup guide to start the capture on the Raspberry Pi?
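For context, the Raspberry Pi side of my capture is roughly the sketch below; I am paraphrasing with Picamera2 here, so the actual code from the startup guide may differ, but the point is that streaming effectively begins at start(), which is the instant I would need to align the analyzer trigger with:

```python
# Rough sketch of the Python capture on the Raspberry Pi (paraphrased with
# Picamera2; the code shipped in the Linux startup guide may differ).
from picamera2 import Picamera2

picam2 = Picamera2()
config = picam2.create_still_configuration(raw={})  # also request the raw Bayer stream
picam2.configure(config)

picam2.start()                       # <-- sensor starts streaming here
request = picam2.capture_request()   # grab one frame
raw = request.make_array("raw")      # raw Bayer data as a numpy array
print("raw frame shape:", raw.shape)
request.release()
picam2.stop()
```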
I also tried searching for the U96, but couldn't find any relevant results.
Thanks!
2024-11-04 04:53 PM
Hi,
I don't think you will be able to log the raw CSI input signals with the Raspberry Pi or with a logic analyzer; you would need a scope with a MIPI CSI analyzer, which should work fine. The sensor is compliant with the MIPI CSI-2 interface.
To ensure the exact timing of the capture start, you can strobe the start-of-frame signal: use a GPIO from the board and set the corresponding register to VSYNC. For example, to use GPIO_0, set the GPIO_0_CTRL register to VSYNC_OUT_MODE1 (which indicates the start of image readout). You can find more details in the user manual. Let me know if you need more explanation on this.
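As a rough sketch (again with placeholder I2C address, register address, and mode value; the real GPIO_0_CTRL definition is in the user manual), the host-side write could look like this:

```python
# Sketch: route the start-of-readout strobe (VSYNC) to GPIO_0 so it can trigger
# the scope/analyzer. All addresses and values below are placeholders; use the
# GPIO_0_CTRL description from the VD55G1 user manual for the real ones.
from smbus2 import SMBus, i2c_msg

SENSOR_ADDR = 0x10          # placeholder 7-bit I2C address of the sensor
GPIO_0_CTRL_REG = 0x0000    # placeholder register address of GPIO_0_CTRL
VSYNC_OUT_MODE1 = 0x00      # placeholder value selecting VSYNC_OUT_MODE1

with SMBus(1) as bus:
    # One-byte write to a 16-bit register address: [addr MSB, addr LSB, value]
    payload = [GPIO_0_CTRL_REG >> 8, GPIO_0_CTRL_REG & 0xFF, VSYNC_OUT_MODE1]
    bus.i2c_rdwr(i2c_msg.write(SENSOR_ADDR, payload))
```

GPIO_0 can then be wired to the trigger input of your analyzer so that the capture starts exactly at the frame readout.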
Here is a link to information about the U96 board: link.
Regards, Megane.