I want to detect a person, and if a person is detected, then send a message to another Arduino Nano.
Here is my code.
#include "NNObjectDetection.h"
#include "VideoStream.h"
#include "ObjectClassList.h"
#define CHANNELNN 3 // Video channel for object detection
#define NNWIDTH 576 // Low resolution setting width for object detection
#define NNHEIGHT 320 // Low resolution setting height for object detection
VideoSetting configNN(NNWIDTH, NNHEIGHT, 10, VIDEO_RGB, 0);
NNObjectDetection ObjDet;
unsigned long lastDetectedTime = 0; // Time when 'person' object was last detected
bool isPersonDetected = false; // 'Person' object detection state
void setup() {
    Serial.begin(115200); // Start serial communication

    // Configure video channel for the camera
    Camera.configVideoChannel(CHANNELNN, configNN);
    Camera.videoInit(); // Initialize the camera

    // Set up object detection
    ObjDet.configVideo(configNN);
    ObjDet.modelSelect(OBJECT_DETECTION, DEFAULT_YOLOV4TINY, NA_MODEL, NA_MODEL); // Select object detection model
    ObjDet.begin(); // Start object detection

    // Start video channel for object detection
    Camera.channelBegin(CHANNELNN);
}

void loop() {
    std::vector<ObjectDetectionResult> results = ObjDet.getResult(); // Get object detection results
    bool detected = false;

    if (ObjDet.getResultCount() > 0) {
        for (uint32_t i = 0; i < ObjDet.getResultCount(); i++) {
            ObjectDetectionResult item = results[i];
            // Filter only 'person' object (index 0), check 'filter' value
            if (item.type() == 0 && itemList[0].filter == 1) {
                detected = true; // Person object detected
                break;
            }
        }
    }

    if (detected) {
        if (!isPersonDetected) { // Start detection of person object
            isPersonDetected = true;
            lastDetectedTime = millis(); // Save current time
        } else if (millis() - lastDetectedTime >= 1000) { // If detected continuously for over 1 second
            Serial.println("Person detected continuously for 1 second");
            isPersonDetected = false; // Reset state
        }
    } else {
        if (isPersonDetected) {
            Serial.println("Person detection stopped"); // Person detection stopped
            isPersonDetected = false; // Reset detection state
        }
    }

    delay(100); // Delay to refresh results
}
I modified the code in 'ObjectClassList.h' slightly so that it only recognizes people.
I’m not sure what’s wrong with my code, but this is what the Serial Monitor displays:
[VOE]pack_v 0x0000 0x0002 cus_v 0 iq_id 0 dn 0 day 0 night 1 other 2 offset 224 length 16436 iq_size 32716
[VOE]Ver 0x0001000c Fast3A Cnt AE 1 AWB 1 period AE 5 AWB 1 delay 0 0
[VOE][isp_mod_sensor_init][1581]Errhdr_mode 0 is over sensor driver num 1
[VOE]hdr_mode 0 sensor driver num 1
[VOE]fps: 1880018620, pclk: 0, hts: 1077805056
[VOE]fps max 30.000000 min 5.000000
[VOE]exposure_step 29.629629
[VOE]change sensor mode => 1920x1080@30.000000fps - 'linear'
[VOE]min_fps 5.000000, max_fps 30.000000, exp_step 29.629629
[VOE]md ver 0x6d640100
[VOE]ae ver 0x61650200
[VOE]awb ver 0x77620100
[VOE]cur_hdr_mode = 0
[VOE]VOE MEM Size = 4867 Used= 3550 KB Free= 1317 KB (1348768)
[VOE]stream 4 buffer 0: 0x70564700 size 552960
[VOE]stream 4 buffer 1: 0x705eb800 size 552960
[VOE]ch 4 not support OSD
[VOE]RGB3 576x320 1/10
[VOE]dynamic set fps 0 -> 10 ok
[VOE]sensor power on
[VOE]status == 1718
hal_voe_send2voe too long 127627 cmd 0x00000206 p1 0x00000000 p2 0x00000004
[VOE]release s4 isp buffer 0
[VOE][WARN]useless release s4 slot0 status 0x00000000
[VOE]release s4 isp buffer 1
[VOE][WARN]useless release s4 slot1 status 0x00000000
[VOE] osd2enc_queue receive queue timeout or fail (1)
[VOE] isp2osd_queue receive queue timeout or fail (1)
I’ve previously looked up articles related to this on the forum, but they couldn’t help me.
The connection with the MIPI camera seems to be well established.
It looks like your setup() is missing the StreamIO link that feeds the RGB video channel into the object detection module. In the library's object detection examples, setup() contains something like this:

// Configure StreamIO object to stream data from RGB video channel to object detection
videoStreamerNN.registerInput(Camera.getStream(CHANNELNN));
videoStreamerNN.setStackSize();
videoStreamerNN.setTaskPriority();
videoStreamerNN.registerOutput(ObjDet);
if (videoStreamerNN.begin() != 0) {
    Serial.println("StreamIO link start failed");
}
Without this, the controller doesn't know where to direct what the camera is capturing. As I understand it, the flow is as follows:
1. The camera acquires the frames.
2. The StreamIO class directs them to the different channels, for example to the neural network, which produces the detection results.
3. Those results are then evaluated in your loop().
Another point: you should instantiate the StreamIO class as a global, before setup(), like this:
StreamIO videoStreamerNN(1, 1);
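Putting those two changes into your sketch, the globals and setup() would look roughly like this. I've kept your names and settings and only added the StreamIO pieces from the library's object detection example, so treat it as a sketch and double-check it against your SDK version:

#include "NNObjectDetection.h"
#include "VideoStream.h"
#include "StreamIO.h"
#include "ObjectClassList.h"

#define CHANNELNN 3 // Video channel for object detection
#define NNWIDTH 576
#define NNHEIGHT 320

VideoSetting configNN(NNWIDTH, NNHEIGHT, 10, VIDEO_RGB, 0);
NNObjectDetection ObjDet;
StreamIO videoStreamerNN(1, 1); // 1 input (RGB channel) -> 1 output (object detection)

void setup() {
    Serial.begin(115200);

    // Configure and start the camera
    Camera.configVideoChannel(CHANNELNN, configNN);
    Camera.videoInit();

    // Set up object detection
    ObjDet.configVideo(configNN);
    ObjDet.modelSelect(OBJECT_DETECTION, DEFAULT_YOLOV4TINY, NA_MODEL, NA_MODEL);
    ObjDet.begin();

    // Link the RGB video channel to the object detection module
    videoStreamerNN.registerInput(Camera.getStream(CHANNELNN));
    videoStreamerNN.setStackSize();
    videoStreamerNN.setTaskPriority();
    videoStreamerNN.registerOutput(ObjDet);
    if (videoStreamerNN.begin() != 0) {
        Serial.println("StreamIO link start failed");
    }

    // Start the video channel only after the link is in place
    Camera.channelBegin(CHANNELNN);
}

Your loop() can stay exactly as it is.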
Then you must decide what you want to do with the video itself: transmit it so you can view it on another device, or store it, both of which require additional steps; a rough streaming sketch follows, along with an example of notifying your second Nano.
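If you want to view the video, the usual route is an RTSP stream on a separate, higher-resolution channel. Below is a minimal sketch along the lines of the library's RTSP examples; the WiFi credentials are placeholders, and the exact class names should be checked against the examples bundled with your board package:

#include "WiFi.h"
#include "VideoStream.h"
#include "StreamIO.h"
#include "RTSP.h"

#define CHANNEL 0 // separate high-resolution channel for viewing

VideoSetting config(VIDEO_FHD, 30, VIDEO_H264, 0); // 1080p H.264 for the RTSP stream
RTSP rtsp;
StreamIO videoStreamer(1, 1); // camera channel -> RTSP

char ssid[] = "yourNetwork"; // placeholder WiFi credentials
char pass[] = "yourPassword";

void setup() {
    Serial.begin(115200);

    // RTSP is viewed over the network (for example in VLC), so connect to WiFi first
    while (WiFi.begin(ssid, pass) != WL_CONNECTED) {
        delay(2000);
    }

    Camera.configVideoChannel(CHANNEL, config);
    Camera.videoInit();

    rtsp.configVideo(config);
    rtsp.begin();

    // Same StreamIO idea as above: link the camera channel to the RTSP module
    videoStreamer.registerInput(Camera.getStream(CHANNEL));
    videoStreamer.registerOutput(rtsp);
    if (videoStreamer.begin() != 0) {
        Serial.println("StreamIO link start failed");
    }
    Camera.channelBegin(CHANNEL);
}

void loop() {
    delay(1000);
}

This channel can run alongside CHANNELNN in the same sketch, so the object detection link is unaffected.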
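As for your original goal of notifying the other Arduino Nano: once the one-second condition in your loop() fires, the simplest option is to send a byte over a spare UART. This assumes your board exposes a second hardware UART as Serial1 (check the pinout, since UART names and pins differ between Ameba boards), wired AMB82 TX to Nano RX with a shared GND; also keep in mind the camera board is 3.3 V and the Nano is 5 V, so only the direction shown here is safe without a level shifter.

// --- Camera board side (changes to your existing sketch) ---
// In setup(), next to Serial.begin(115200):
Serial1.begin(9600); // assumed spare UART towards the Nano

// In loop(), in the branch where the one-second condition is met:
Serial.println("Person detected continuously for 1 second");
Serial1.write('P'); // single-byte "person detected" notification for the Nano
isPersonDetected = false; // Reset state

On the Nano, read that byte at the same baud rate and react to it:

// --- Arduino Nano side ---
void setup() {
    Serial.begin(9600); // same baud rate as the link from the camera board
}

void loop() {
    if (Serial.available() > 0 && Serial.read() == 'P') {
        // A person was detected for one second on the camera board: react here
    }
}

I hope this helps.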