[AMB82-mini] Object detection: osd2enc_queue / isp2osd_queue receive queue timeout or fail

Hi everyone.

I want to detect a person and, when one is detected, send a message to another Arduino Nano.

Here is my code.

#include "NNObjectDetection.h"
#include "VideoStream.h"
#include "ObjectClassList.h"

#define CHANNELNN 3  // Video channel for object detection

#define NNWIDTH  576  // Low resolution setting width for object detection
#define NNHEIGHT 320  // Low resolution setting height for object detection

VideoSetting configNN(NNWIDTH, NNHEIGHT, 10, VIDEO_RGB, 0);
NNObjectDetection ObjDet;

unsigned long lastDetectedTime = 0;  // Time when 'person' object was last detected
bool isPersonDetected = false;  // 'Person' object detection state

void setup() {
    Serial.begin(115200);  // Start serial communication

    // Configure video channel for the camera
    Camera.configVideoChannel(CHANNELNN, configNN);
    Camera.videoInit();  // Initialize the camera

    // Set up object detection
    ObjDet.configVideo(configNN);
    ObjDet.modelSelect(OBJECT_DETECTION, DEFAULT_YOLOV4TINY, NA_MODEL, NA_MODEL);  // Select object detection model
    ObjDet.begin();  // Start object detection

    // Start video channel for object detection
    Camera.channelBegin(CHANNELNN);
}

void loop() {
    std::vector<ObjectDetectionResult> results = ObjDet.getResult();  // Get object detection results
    bool detected = false;

    if (ObjDet.getResultCount() > 0) {
        for (uint32_t i = 0; i < ObjDet.getResultCount(); i++) {
            ObjectDetectionResult item = results[i];
            // Filter only 'person' object (index 0), check 'filter' value
            if (item.type() == 0 && itemList[0].filter == 1) {
                detected = true;  // Person object detected
                break;
            }
        }
    }

    if (detected) {
        if (!isPersonDetected) {  // Start detection of person object
            isPersonDetected = true;
            lastDetectedTime = millis();  // Save current time
        } else if (millis() - lastDetectedTime >= 1000) {  // If detected continuously for over 1 second
            Serial.println("Person detected continuously for 1 second");
            isPersonDetected = false;  // Reset state
        }
    } else {
        if (isPersonDetected) {
            Serial.println("Person detection stopped");  // Person detection stopped
            isPersonDetected = false;  // Reset detection state
        }
    }

    delay(100);  // Delay to refresh results
}

I modified the code in 'ObjectClassList.h' slightly so that only people are recognized; my change looks roughly like the snippet below.
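Essentially, I just left the filter value at 1 for the "person" entry (index 0) and set it to 0 for every other class, so the list in the header looks something like this (only an excerpt, entry order as in the stock header):

// ObjectClassList.h (excerpt): filter = 1 keeps a class, filter = 0 makes it ignored
ObjectDetectionItem itemList[80] = {
    {0, "person",  1},    // the only class I keep enabled
    {1, "bicycle", 0},
    {2, "car",     0},
    // ... all remaining classes set to 0 ...
};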
I’m not sure what’s wrong with my code, but this is what the Serial Monitor displays:

[VOE]pack_v 0x0000 0x0002 cus_v 0 iq_id 0 dn 0 day 0 night 1 other 2 offset 224 length 16436 iq_size 32716

[VOE]Ver 0x0001000c Fast3A Cnt AE 1 AWB 1 period AE 5 AWB 1 delay 0 0

[VOE][isp_mod_sensor_init][1581]Errhdr_mode 0 is over sensor driver num 1

[VOE]hdr_mode 0 sensor driver num 1

[VOE]fps: 1880018620, pclk: 0, hts: 1077805056

[VOE]fps max 30.000000 min 5.000000

[VOE]exposure_step 29.629629

[VOE]change sensor mode => 1920x1080@30.000000fps - 'linear'

[VOE]min_fps 5.000000, max_fps 30.000000, exp_step 29.629629

[VOE]md ver 0x6d640100

[VOE]ae ver 0x61650200

[VOE]awb ver 0x77620100

[VOE]cur_hdr_mode = 0

[VOE]VOE MEM Size = 4867 Used= 3550 KB Free= 1317 KB (1348768)

[VOE]stream 4 buffer 0: 0x70564700 size 552960

[VOE]stream 4 buffer 1: 0x705eb800 size 552960

[VOE]ch 4 not support OSD

[VOE]RGB3 576x320 1/10

[VOE]dynamic set fps 0 -> 10 ok

[VOE]sensor power on

[VOE]status == 1718

hal_voe_send2voe too long 127627 cmd 0x00000206 p1 0x00000000 p2 0x00000004

[VOE]release s4 isp buffer 0

[VOE][WARN]useless release s4 slot0 status 0x00000000

[VOE]release s4 isp buffer 1

[VOE][WARN]useless release s4 slot1 status 0x00000000

[VOE] osd2enc_queue receive queue timeout or fail (1)

[VOE] isp2osd_queue receive queue timeout or fail (1)

I've already searched related posts on this forum, but none of them helped.

The connection with the MIPI camera seems to be fine.

I hope someone can help me.

I updated to version 4.0.8, hoping that would solve the problem.
Then a different problem appeared.

This is what is now displayed on the Serial Monitor:

== RAM Start ==
Build @ 13:37:33, Jul 25 2024

$8735b>[video_voe_presetting] fps:30  w:1920  h:1080   

fwin(1),enc_en(0),IQ_OFFSET = 0x17b20

 fwin(1),enc_en(0),SENSOR_OFFSET = 0x2fb60

sensor id 1 iq_data 17b20 sensor_data 2fb60

NN IRQ default priority : 0, set to 9


VIPLite Drv version 1.12.0


set yolo confidence thresh to 0.500000


set yolo NMS thresh to 0.300000


Deploy YOLOv4t


input 0 dim 416 416 3 1, data format=2, quant_format=2, scale=0.003922, zero_point=0


ouput 0 dim 13 13 255 1, data format=2, scale=0.137945, zero_point=178


ouput 1 dim 26 26 255 1, data format=2, scale=0.131652, zero_point=193


---------------------------------
input count 1, output count 2


input param 0


	data_format  2


	memory_type  0


	num_of_dims  4


	quant_format 2


	quant_data  , scale=0.003922, zero_point=0


	sizes        416 416 3 1 0 0 


output param 0


	data_format  2


	memory_type  0


	num_of_dims  4


	quant_format 2


	quant_data  , scale=0.137945, zero_point=178


	sizes        13 13 255 1 0 0 


output param 1


	data_format  2


	memory_type  0


	num_of_dims  4


	quant_format 2


	quant_data  , scale=0.131652, zero_point=193


	sizes        26 26 255 1 0 0 
--------------------------------- 


hal_voe_ready 0x0 0xbf1208 

 read fcs_status 0x000000bf

[video_init] uvcd iq is null, use default.

[video_init] uvcd SNR is null, use default.

IQ:FW size (98342)

sensor:date 2024/4/10 version:RTL8735B_VOE_1.4.8.0

sensor:FW size (5432)

sensor timestamp: 2024/04/10

iq timestamp: 2023/05/15 14:48:54

ISP:1 ENC:1 H265:1 NN:1

hal_voe_ready 0x0 0xbf1208 

voe   :RTL8735B_VOE_1.5.5.0 

sensor:RTL8735B_VOE_1.4.8.0 

hal   :RTL8735B_VOE_1.5.5.0 

load time sensor:74us iq:1274us itcm:0us dtcm:0us ddr:0us ddr2:0us

inputRateNumer[10] < outputRateNumer[30], Set inputRateNumer --> outputRateNumer 

[video_pre_init_procedure] START
[VOE]ext_in = 0 sync = 0

[VOE][Ini set0]init dn 0 hdr 0 mirrorflip 0xf0

[VOE]g_init_fps: 10


[VOE]md init success


[VOE]algo ver 7d4fcf4 

[VOE]pack_v 0x0000 0x0002 cus_v 0 iq_id 0 dn 0 day 0 night 1 other 2 offset 224 length 16436 iq_size 32716 

[VOE]Ver 0x0001000c Fast3A Cnt AE 1 AWB 1 period AE 5 AWB 1 delay 0 0 

[VOE]hdr_mode 0 sensor driver num 1 

[VOE]fps: 279873, pclk: 0, hts: 1077805056

[VOE]fps max 30.000000 min 5.000000 

[VOE]exposure_step 29.629629

[VOE]change sensor mode => 1920x1080@30.000000fps - 'linear' 

[VOE]min_fps 5.000000, max_fps 30.000000, exp_step 29.629629 dyn_fps 10.000000

[VOE]md ver 0x6d640100 

[VOE]ae ver 0x61650200 

[VOE]awb ver 0x77620100 

[VOE]cur_hdr_mode = 0

[VOE]VOE MEM Size =  4867 Used=  3552 KB Free= 1315 KB (1346784)

[VOE]stream 4  buffer 0: 0x7056f600 size 552960 

[VOE]stream 4  buffer 1: 0x705f6700 size 552960 

[VOE]RGB3 576x320 1/10

[VOE]dynamic set fps 0 -> 10 ok

[VOE]sensor power on
[VOE]early mirror/flip 0 0

[VOE]status == 1718
[VOE]encIn allocation fail
hal_voe_send2voe too long 128594 cmd 0x00000206 p1 0x00000000 p2 0x00000004

VOE command 0x206 fail ret 0x1

VOE_OPEN_CMD command fail
hal_video_open fail



[VID Err]Please check sensor id first,the id is 1

[VOE]release s4 isp buffer 0 

[VOE][WARN]useless release s4 slot0 status 0x00000000 

[VOE]release[VOE]frame_ s4 isp buffend: sensorer 1 

[VOE] didn't ini[WARN]uselestialize dons release s4e !


 slot1 status 0x00000000 

[VOE] isp2osd receive timeout, open CH( / / /4)

[VOE] isp2osd receive timeout, open CH( / / /4)

I would really appreciate any help with solving this problem.

Hello @Monet,

If you compare your code with the example Neural Network – Object Detection – Realtek IoT/Wi-Fi MCU Solutions, you can see that several elements are missing, for example the following:

// Configure StreamIO object to stream data from RGB video channel to object detection

videoStreamerNN.registerInput(Camera.getStream(CHANNELNN));
videoStreamerNN.setStackSize();
videoStreamerNN.setTaskPriority();
videoStreamerNN.registerOutput(ObjDet);
if (videoStreamerNN.begin() != 0) {
    Serial.println("StreamIO link start failed");
}

Without this, the controller doesn’t know where to direct what the camera is capturing. Based on my understanding, the flow would be as follows:

  1. The camera acquires the information.
  2. StreamIO links the video channel to a consumer, for example the neural network, which processes the frames and produces results.
  3. Then those results are evaluated by the logic in your loop().

Another point is that you should instantiate the StreamIO object before setup(), in the following way:

StreamIO videoStreamerNN(1, 1);

Then you must decide whether you want to transmit the video so you can view it, or store it in memory; either option requires additional steps. I hope this helps.
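Putting everything together, here is a rough sketch of how your globals and setup() could look with the StreamIO link added (based on the official example and your posted code; I have not tested this exact combination, so treat it as a starting point):

#include "StreamIO.h"
#include "NNObjectDetection.h"
#include "VideoStream.h"
#include "ObjectClassList.h"

#define CHANNELNN 3   // RGB video channel used for object detection
#define NNWIDTH   576
#define NNHEIGHT  320

VideoSetting configNN(NNWIDTH, NNHEIGHT, 10, VIDEO_RGB, 0);
NNObjectDetection ObjDet;
StreamIO videoStreamerNN(1, 1);    // 1 input (RGB channel), 1 output (ObjDet)

void setup() {
    Serial.begin(115200);

    // Configure and initialize the camera
    Camera.configVideoChannel(CHANNELNN, configNN);
    Camera.videoInit();

    // Set up the object detection model
    ObjDet.configVideo(configNN);
    ObjDet.modelSelect(OBJECT_DETECTION, DEFAULT_YOLOV4TINY, NA_MODEL, NA_MODEL);
    ObjDet.begin();

    // Link the RGB video channel to the object detection task
    videoStreamerNN.registerInput(Camera.getStream(CHANNELNN));
    videoStreamerNN.setStackSize();
    videoStreamerNN.setTaskPriority();
    videoStreamerNN.registerOutput(ObjDet);
    if (videoStreamerNN.begin() != 0) {
        Serial.println("StreamIO link start failed");
    }

    // Start the video channel only after the StreamIO link has been set up
    Camera.channelBegin(CHANNELNN);
}

Your loop() can stay as it is. Once detection works, you can replace the Serial.println() with a write to a spare hardware UART (for example Serial1, if it is free on your board) to send the message to your Arduino Nano.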

Best regards.