HEF was compiled for Hailo8L device, while the device itself is Hailo8. #18237
Describe the problem you are having

Currently the inference speed is ~11 ms, but it should be better, since the Hailo8 is 26 TOPS versus the Hailo8L's 13 TOPS. It appears the YOLOv6n model was compiled for the Hailo8L and is therefore capping performance on the Hailo8. Would it not be better the other way around?

Version

0.16.0-1fa7ce5

Frigate config file

```yaml
detectors:
  hailo8l:
    type: hailo8l
    device: PCIe

model:
  width: 320
  height: 320
  input_tensor: nhwc
  input_pixel_format: rgb
  input_dtype: int
  model_type: yolo-generic
```

docker-compose file or Docker CLI command

```yaml
devices:
  - /dev/hailo0:/dev/hailo0
```

Relevant Frigate log output

```
2025-05-15 02:00:45.278765960 [HailoRT] [warning] HEF was compiled for Hailo8L device, while the device itself is Hailo8. This will result in lower performance.
```

Install method

Docker Compose

Object Detector

Other

Screenshots of the Frigate UI's System metrics pages

Any other information that may be helpful

No response
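As a rough sanity check on that expectation: if inference latency scaled inversely with raw compute, which it only loosely does, the Hailo8 should land near half the Hailo8L-capped figure. A back-of-envelope sketch:

```python
# Back-of-envelope only: latency rarely scales linearly with TOPS,
# but this gives an order-of-magnitude expectation.
observed_ms = 11.0    # reported inference time with the Hailo8L-compiled HEF
tops_ratio = 13 / 26  # Hailo8L : Hailo8 raw compute
print(f"expected: ~{observed_ms * tops_ratio:.1f} ms")  # ~5.5 ms
```

That lines up with the ~6 ms times reported later in this thread.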
Replies:
- That's only a couple ms slower than the best time we've seen, though the code is supposed to detect the correct hardware.
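A minimal way to check both sides of that is sketched below, under the assumption that HailoRT's `hailortcli` tool is available in the container and that Frigate caches models under /config/model_cache (the default mount; adjust to your setup):

```python
# Minimal sketch (not part of Frigate): compare the architecture the
# Hailo device reports with the HEF(s) Frigate has cached. Assumes the
# HailoRT CLI (`hailortcli`) is on PATH and that cached models live
# under /config/model_cache -- both are assumptions about your setup.
import glob
import subprocess

# Ask the device firmware what it identifies as (HAILO8 vs HAILO8L).
identify = subprocess.run(
    ["hailortcli", "fw-control", "identify"],
    capture_output=True, text=True, check=True,
)
print(identify.stdout)

# Inspect each cached HEF; parse-hef reports details about the compiled model.
for hef in glob.glob("/config/model_cache/**/*.hef", recursive=True):
    info = subprocess.run(
        ["hailortcli", "parse-hef", hef],
        capture_output=True, text=True, check=True,
    )
    print(hef)
    print(info.stdout)
```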
- The code that the Hailo team has implemented already tries to pull the correct model. You could try deleting it in config/model_cache so it redownloads.
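A minimal sketch of that step, assuming the container's config directory is mounted at /config (adjust the path to your deployment):

```python
# Minimal sketch: remove Frigate's cached Hailo model so the correct
# variant is fetched on the next start. The /config mount point is an
# assumption; adjust it to match your deployment.
import shutil
from pathlib import Path

cache = Path("/config/model_cache")
if cache.exists():
    shutil.rmtree(cache)
    print(f"Removed {cache}; restart Frigate to trigger a re-download.")
else:
    print(f"{cache} not found; check where your config volume is mounted.")
```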
- Just chiming in to say thank you @NickM-27 for the answer. Deleting the model cache removed the warning message and brought the inference times into the 6 ms range, which is very impressive. What confused me initially is that to get Frigate to initialize I had to use `type: hailo8l` instead of `type: hailo8`, which I had assumed was the right value.