The Core Issue: I converted a custom-trained YOLOv7-tiny model to the .nb (Network Binary) format for the Amb82-mini (AmebaPro2) IoT device. Both the local and online conversion pipelines complete without errors, yet the model outputs no bounding boxes during inference on the device.
1. YOLO Training Procedure
I trained the model using the following configurations:
- Model Architecture: YOLOv7-tiny.
- Custom Config: yolov7-tiny-custom.yaml with 2 classes (nc: 2).
- Hyperparameters: hyp.scratch.tiny.yaml.
- Training Command: python train.py --weights yolov7-tiny.pt --cfg cfg/training/yolov7-tiny-custom.yaml --img 640 640 ...
- Result: Obtained best.pt.
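For reference, the 2-class setup also implies a YOLOv7 data config whose nc matches the custom cfg. This is a hypothetical sketch (paths and class names are placeholders, not from my actual setup):

```shell
# Hypothetical data config for 2-class YOLOv7 training; paths and class
# names are placeholders, but nc must match yolov7-tiny-custom.yaml.
mkdir -p data
cat > data/custom.yaml <<'EOF'
train: ./datasets/custom/images/train
val: ./datasets/custom/images/val
nc: 2
names: ['class0', 'class1']
EOF
cat data/custom.yaml
```

If nc here disagreed with the training or deploy cfg, the output tensor decode could silently fail later in the pipeline.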
2. Online Conversion Steps
I tested the official online conversion service with these steps:
- Reparameterization: Used reparam_yolov7-tiny.py (with yolov7-tiny-deploy.yaml) on best.pt to generate a simplified .pt file (nc is correct).
- Packaging: Compressed the simplified .pt into a .zip archive.
- Uploading: Uploaded to the service with the following settings:
  - Model Type: YOLOv7-TINY-Pytorch.
  - Input Size: 640x640.
  - Quantization: UINT8.
  - Calibration: Provided 10 training images.
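The packaging step above can be scripted; this is a minimal sketch (the filename best_deploy.pt is an assumption, and the touch only creates a stand-in for the real weights file):

```shell
# Wrap the reparameterized .pt in a .zip for upload to the conversion service.
# "best_deploy.pt" is an assumed filename; substitute your own.
python3 - <<'EOF'
import pathlib, zipfile
pathlib.Path("best_deploy.pt").touch()  # placeholder standing in for the real weights
with zipfile.ZipFile("best_deploy.zip", "w", zipfile.ZIP_DEFLATED) as z:
    z.write("best_deploy.pt")
print(zipfile.ZipFile("best_deploy.zip").namelist())
EOF
```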
3. Local Conversion Procedure
This is the full workflow I followed using the local Acuity Toolkit (v6.18.8):
- Reparameterization (Pre-export): Before exporting to ONNX, I processed best.pt with the custom script reparam_yolov7-tiny.py and yolov7-tiny-deploy.yaml (with nc: 2 correctly set) to merge the weights.
- ONNX Export: Exported the reparameterized model using python export.py --weights best_deploy.pt --grid --simplify.
- Acuity Pipeline (Import → Preprocess → Quantize → Export): Ran custom scripts (01 to 04) to generate the final .nb file.
[Custom Scripts]
Below are the scripts I used for the conversion, which are not provided by the official toolkit by default:
Model Import (01_import_yolov7.sh)
#!/bin/bash
MODEL_NAME="best"
ONNX_FILE="best.onnx"
INPUT_NAME="images"
INPUT_SHAPE="3,640,640"
OUTPUT_NAME="output"
PEGASUS_SCRIPT="$HOME/verisilicon_conv_tool/acuity-toolkit-whl-6.18.8/bin/pegasus.py"
if [ ! -f "$PEGASUS_SCRIPT" ]; then
echo "cannot find pegasus.py!"
exit 1
fi
python $PEGASUS_SCRIPT import onnx \
--model $ONNX_FILE \
--output-model ${MODEL_NAME}.json \
--output-data ${MODEL_NAME}.data \
--inputs $INPUT_NAME \
--input-size-list $INPUT_SHAPE \
--outputs "$OUTPUT_NAME"
Preprocessing (02_preprocess.sh)
#!/bin/bash
MODEL_JSON="best.json"
META_FILE="best_inputmeta.yml"
DATA_LIST="dataset.txt"
PEGASUS_SCRIPT="$HOME/verisilicon_conv_tool/acuity-toolkit-whl-6.18.8/bin/pegasus.py"
if [ ! -f "$PEGASUS_SCRIPT" ]; then
echo "cannot find pegasus.py!"
exit 1
fi
python $PEGASUS_SCRIPT generate inputmeta \
--model $MODEL_JSON \
--input-meta-output $META_FILE
if [ "$(uname)" == "Darwin" ]; then
sed -i '' "s|source_file:|source_file: $DATA_LIST|g" $META_FILE
else
sed -i "s|source_file:|source_file: $DATA_LIST|g" $META_FILE
fi
if [ "$(uname)" == "Darwin" ]; then
sed -i '' 's/scale: 1.0/scale: 0.00392156862/g' $META_FILE
sed -i '' 's/scale: 1/scale: 0.00392156862/g' $META_FILE
else
sed -i 's/scale: 1.0/scale: 0.00392156862/g' $META_FILE
sed -i 's/scale: 1/scale: 0.00392156862/g' $META_FILE
fi
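The two sed substitutions in 02_preprocess.sh are meant to point the inputmeta at the calibration list and set the normalization scale to 1/255 (0.00392156862). A minimal demonstration on a mock file (the yml fragment below is illustrative only, not the exact layout pegasus.py generates):

```shell
# Mock inputmeta fragment; the real file generated by pegasus.py has more fields.
cat > mock_inputmeta.yml <<'EOF'
input_meta:
  databases:
    - ports:
        - lid: images_0
          source_file:
          scale: 1.0
EOF
# Same substitutions as in 02_preprocess.sh (GNU sed form)
sed -i "s|source_file:|source_file: dataset.txt|g" mock_inputmeta.yml
sed -i "s|scale: 1.0|scale: 0.00392156862|g" mock_inputmeta.yml
cat mock_inputmeta.yml
```

Note that dataset.txt must list the calibration image paths one per line, and 0.00392156862 ≈ 1/255, i.e. the edit maps 0-255 pixel values into the 0-1 range the network was trained on.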
Quantization (03_quantize.sh)
#!/bin/bash
MODEL_NAME="best"
JSON_FILE="${MODEL_NAME}.json"
DATA_FILE="${MODEL_NAME}.data"
META_FILE="${MODEL_NAME}_inputmeta.yml"
PEGASUS_SCRIPT="$HOME/verisilicon_conv_tool/acuity-toolkit-whl-6.18.8/bin/pegasus.py"
python $PEGASUS_SCRIPT quantize \
--model $JSON_FILE \
--model-data $DATA_FILE \
--with-input-meta $META_FILE \
--rebuild \
--model-quantize "${MODEL_NAME}.quantize" \
--quantizer asymmetric_affine \
--qtype uint8
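For context, the asymmetric_affine/uint8 combination above maps each real value x to q = clamp(round(x/scale) + zero_point, 0, 255). A quick illustration of that formula (a generic sketch, not Acuity's internal code; the quantize helper and the scale/zero_point values are assumptions for the example):

```shell
python3 - <<'EOF'
# Asymmetric affine uint8 quantization: q = clamp(round(x/scale) + zero_point, 0, 255)
def quantize(x, scale, zero_point):
    q = round(x / scale) + zero_point
    return max(0, min(255, q))

# With inputs normalized to [0, 1], scale = 1/255 and zero_point = 0
scale, zp = 1 / 255, 0
print([quantize(x, scale, zp) for x in (0.0, 0.5, 1.0)])
EOF
```

If the calibration images are unrepresentative (only 10 are provided here), the per-tensor scale and zero_point can be poorly chosen, which is one plausible way a quantized detector ends up producing no boxes.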
NBG Export (04_export.sh)
#!/bin/bash
MODEL_NAME="best"
QUANTIZE_FILE="${MODEL_NAME}.quantize"
META_FILE="${MODEL_NAME}_inputmeta.yml"
DATA_FILE="${MODEL_NAME}.data"
JSON_FILE="${MODEL_NAME}.json"
EXPORT_DIR="export_out"
SDK_ROOT="$HOME/verisilicon_conv_tool/acuity-toolkit-whl-6.18.8/sdk/VivanteIDE5.8.1.1/cmdtools"
VIV_SDK_ROOT="$SDK_ROOT"
export VIVANTE_SDK_DIR="$SDK_ROOT/vsimulator"
export AQROOT="$SDK_ROOT"
# Clean the output directory
rm -rf $EXPORT_DIR
mkdir $EXPORT_DIR
echo "Target SDK Path: $VIV_SDK_ROOT"
python $HOME/verisilicon_conv_tool/acuity-toolkit-whl-6.18.8/bin/pegasus.py export ovxlib \
--model $JSON_FILE \
--model-data $DATA_FILE \
--model-quantize $QUANTIZE_FILE \
--with-input-meta $META_FILE \
--dtype quantized \
--output-path $EXPORT_DIR \
--pack-nbg-unify \
--optimize VIP8000NANONI_PID0XAD \
--target-ide-project linux64 \
--viv-sdk $VIV_SDK_ROOT
[Summary]
Despite both conversion methods (local and online) completing without errors, inference on the Amb82-mini still produces no bounding boxes.