Amebapro2 AI Model Conversion - Issue

Hi,

https://www.amebaiot.com/en/amebapro2-ai-convert-model/

I’m trying to convert a trained YOLOv7-tiny model (darknet) to an NB file, but I get an error saying it didn’t work:

Error: Using your cfg and weights files can not export binary file, you can see more details on the attached zip file.

application.zip (26,7 KB)

Some parts of the export log:

2024-05-26 00:49:40.484591: W tensorflow/stream_executor/platform/default/dso_loader.cc:59] Could not load dynamic library 'libcudart.so.10.1'; dlerror: libcudart.so.10.1: cannot open shared object file: No such file or directory

2024-05-26 00:49:43.791105: W tensorflow/stream_executor/cuda/cuda_driver.cc:312] failed call to cuInit: UNKNOWN ERROR (303)
2024-05-26 00:49:43.791130: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:156] kernel driver does not appear to be running on this host (5fec6c973be1): /proc/driver/nvidia/version does not exist

Can you please check the attached zip/log (the one that came with your notification email) and let me know what the problem is?

I also tried to convert a model that I had already converted successfully in December. It worked fine back then, but now I’m getting the same error with it as well, so something must be wrong with your conversion service.

Thanks for your help

Hi,

Can I check if you selected YOLOv7-TINY-Pytorch? Please use YOLO-TINY for darknet object detection models.

Thank you.

Yes, of course, I selected YOLO-TINY for the conversion.

BTW: I just tried it again. Same issue.

application.zip (26,7 KB)

Hi @moschtrain,

Can you give it another try? Did any of your conversions succeed yesterday?

Thank you.

Hi Kelvin,

Yes, I tried it yesterday with INT16 conversion and it worked (INT8 conversion was still failing, as described above). Unfortunately, the frame rate on the AMB82-mini was too slow (~1-2 fps).

Today I tried the INT8 conversion again and it worked too. The frame rate is OK now. BTW: no changes to my weights and cfg files… Did you fix something on the backend?
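For context, here is a minimal sketch of what full-integer (INT8) post-training quantization looks like in TensorFlow Lite. This is only an analogy: the NB converter’s internal toolchain is not public, and the SavedModel path and 416×416 input size below are assumptions. It shows why INT8 needs a calibration (representative dataset) step that wider formats do not, which is one reason an INT8 conversion can fail while an INT16 conversion of the same model succeeds:

```python
import numpy as np
import tensorflow as tf

# Assumption: a SavedModel exported from the darknet cfg/weights
# (the path is a placeholder for illustration).
converter = tf.lite.TFLiteConverter.from_saved_model("yolov7_tiny_savedmodel")

def representative_dataset():
    # INT8 calibration: the converter needs sample inputs to estimate
    # activation ranges. Random data is used here only to keep the
    # sketch self-contained; real calibration should use images drawn
    # from the training distribution.
    for _ in range(100):
        yield [np.random.rand(1, 416, 416, 3).astype(np.float32)]

converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Require every op to take the INT8 path; conversion then raises an
# error instead of silently falling back to float kernels.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_model = converter.convert()
with open("yolov7_tiny_int8.tflite", "wb") as f:
    f.write(tflite_model)
```

The payoff of the extra calibration step is speed: 8-bit weights and activations mean roughly half the memory traffic of INT16, which is consistent with the frame-rate difference observed on the AMB82-mini.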

Thanks for your help

Hi @moschtrain,

My colleague has troubleshot the issue and fixed it on the server side.

Thanks for your feedback, which let us know this issue existed.