STSW-IMG035 Gesture and Hand Posture

emilywengster
Associate II

Hello, 

I've previously posted asking about gesture and hand posture output, and I've finally got an output from both models. The problem is that the method of getting an output differs between gesture and hand posture. For gesture, I'm able to use the GestureEVK software and get some results, but for hand posture I can only get a result through a local terminal/PuTTY. Is it possible to integrate both processes so that they run in one program that recognizes both gesture and posture?

Or are there already methods of having gesture and hand posture running in one program?

Any ideas or hints would be useful. Thank you!

Emily

ACCEPTED SOLUTION
labussiy
ST Employee

Hello,

Motion Gesture and Hand Posture are two different solutions, which is why the API used to get the outputs is not the same.

 

Yes, it is possible to merge both solutions. It has been done internally at ST for some demos, and it will be published on st.com in the coming months (Motion Gesture + Hand Posture + Smart Presence Detection).

This is something you can do on your side, and I can give you a few hints:

  • Both solutions use the same VL53L8CX driver (ULD), which makes the merge easier.
  • You can start from the "motion gesture" C project and add the Hand Posture files to it.
  • Add Network_Init(&App_Config) after gesture_library_init_configure(); (see the initialization sketch after the next code snippet).
  • After getting the ranging data, call the hand posture functions:

App_Config.RangingData = RangingData;

/* Pre-process data */
Network_Preprocess(&App_Config);

/* Run inference */
Network_Inference(&App_Config);

/* Post-process data */
Network_Postprocess(&App_Config);
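To make the init step concrete, here is a minimal sketch of the initialization order. The vl53l8cx_* calls follow the ULD naming and Dev stands for your ULD device handle; both are assumptions to check against the sensor-init code already in the motion gesture project:

/* Minimal init-order sketch (Dev = assumed VL53L8CX_Configuration handle) */
vl53l8cx_init(&Dev);               /* ULD sensor init, already in the project */
gesture_library_init_configure();  /* existing motion gesture init            */
Network_Init(&App_Config);         /* added: hand posture network init        */
vl53l8cx_start_ranging(&Dev);      /* start ranging before the main loop      */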

 

Then you will have to merge both outputs; for example, you can use this kind of code:

/* Write the AI output into the gesture structure for the GUI EVK */
if (evk_label_table[(int)(App_Config.AI_Data.handposture_label)] != 0) {
    gest_predictor.gesture.ready = 1;
    gest_predictor.gesture.label = evk_label_table[(int)(App_Config.AI_Data.handposture_label)];
} else {
    gest_predictor.gesture.ready = 0;
    gest_predictor.gesture.label = 0;
    hold_timer = sensor_data.timestamp_ms;
}
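For this merge to work, evk_label_table must map each hand posture class index to the label code the GestureEVK GUI expects, with 0 meaning the posture is not reported. The table below is purely illustrative; the class order and EVK label codes are assumptions to replace with the values from your model and EVK build:

#include <stdint.h>

/* Illustrative mapping only: index = hand posture class ID from the model,
 * value = GestureEVK label code (0 = not forwarded to the GUI).
 * All class names and codes below are hypothetical. */
static const uint8_t evk_label_table[] = {
    0,  /* class 0: none / background */
    10, /* class 1: e.g. flat hand    */
    11, /* class 2: e.g. fist         */
    12, /* class 3: e.g. thumb up     */
};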

That's just some hints; I hope they help you. One last sketch below shows how everything could fit together in the main loop.
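Here is that sketch. The vl53l8cx_* calls follow the ULD naming, and gesture_library_run() is only a placeholder for the per-frame motion gesture call that already exists in the project; adapt both to your code:

/* Per-frame sketch: one ranging result feeds both pipelines.
 * Dev, RangingData, App_Config and gest_predictor are assumed to be
 * declared as in the motion gesture project. */
uint8_t is_ready = 0;
vl53l8cx_check_data_ready(&Dev, &is_ready);
if (is_ready) {
    vl53l8cx_get_ranging_data(&Dev, &RangingData);

    /* Existing motion gesture processing (placeholder name) */
    gesture_library_run(&RangingData, &gest_predictor);

    /* Added hand posture processing (same calls as above) */
    App_Config.RangingData = RangingData;
    Network_Preprocess(&App_Config);
    Network_Inference(&App_Config);
    Network_Postprocess(&App_Config);

    /* Finally, merge the outputs as in the label-table snippet above */
}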

Yann

 

 

