[Support Dev]: License Plate model not working after running the latest dev version #20156
### Describe the problem you are having

LPR is not working after updating to the latest dev version (a7bbca5); I get the following error. Face Recognition works fine.

### Version

### What browser(s) are you using?

No response

### Frigate config file

```yaml
detectors:
  onnx:
    type: onnx

model:
  model_type: rfdetr
  width: 320
  height: 320
  input_tensor: nchw
  input_dtype: float
  path: /config/rfdetr-Medium.onnx

detect:
  fps: 5
  enabled: true
  max_disappeared: 25
  stationary:
    interval: 50
    threshold: 50

lpr:
  enabled: true
  device: GPU
  model_size: small
  min_area: 600
  detection_threshold: 0.50
  recognition_threshold: 0.94
  enhancement: 2
  min_plate_length: 3
  match_distance: 1
```

### Relevant Frigate log output

```
2025-09-21 23:45:51.048977951 [E:onnxruntime:, sequential_executor.cc:572 ExecuteKernel] Non-zero status code returned while running NonMaxSuppression node. Name:'/end2end/NonMaxSuppression' Status Message: std::bad_alloc: cudaErrorStreamCaptureUnsupported: operation not permitted when stream is capturing
Error running YOLOv9 license plate detection model: Error in execution: /onnxruntime_src/onnxruntime/core/providers/cuda/cuda_call.cc:129 std::conditional_t<THRW, void, onnxruntime::common::Status> onnxruntime::CudaCall(ERRTYPE, const char*, const char*, SUCCTYPE, const char*, const char*, int) [with ERRTYPE = cudaError; bool THRW = true; SUCCTYPE = cudaError; std::conditional_t<THRW, void, common::Status> = void] /onnxruntime_src/onnxruntime/core/providers/cuda/cuda_call.cc:121 std::conditional_t<THRW, void, onnxruntime::common::Status> onnxruntime::CudaCall(ERRTYPE, const char*, const char*, SUCCTYPE, const char*, const char*, int) [with ERRTYPE = cudaError; bool THRW = true; SUCCTYPE = cudaError; std::conditional_t<THRW, void, common::Status> = void] CUDA failure 901: operation failed due to a previous error during capture ; GPU=0 ; hostname=ubuntu-frigate ; file=/onnxruntime_src/onnxruntime/core/providers/cuda/cuda_graph.cc ; line=61 ; expr=cudaStreamEndCapture(stream_, &graph);
```

### Relevant go2rtc log output

N/A

### FFprobe output from your camera

N/A

### Frigate stats

No response

### Install method

Docker Compose

### docker-compose file or Docker CLI command

```yaml
services:
  frigate:
    container_name: frigate
    privileged: true # this may not be necessary for all setups
    image: ghcr.io/blakeblackshear/frigate:a7bbca5-tensorrt
    shm_size: "3g"
    restart: unless-stopped
    devices:
      - /dev/bus/usb:/dev/bus/usb
    runtime: nvidia
    volumes:
      - "/root/frigate/config/:/config/"
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
      - type: tmpfs
        target: /tmp/cache
        tmpfs:
          size: 2000000000
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
          memory: 12288M
          cpus: '10.00'
        limits:
          memory: 13312M
          cpus: '13.00'
    ports:
      - '5000:5000'
      - '8971:8971'
      - '8081:8080'
      - "8554:8554" # RTSP Feeds
      - "8555:8555"
    environment:
      NVIDIA_DRIVER_CAPABILITIES: 'all'
      USE_FP16: false
    network_mode: host
```

### Object Detector

TensorRT

### Network connection

Wired

### Camera make and model

Dahua, Hikvision

### Screenshots of the Frigate UI's System metrics pages

No response

### Any other information that may be helpful

No response
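Some context on the error above (an editor's aside, not part of the original report): `NonMaxSuppression` is a data-dependent op; how many boxes survive depends on the scores and overlaps of that particular frame, so the kernel allocates its output at run time, and run-time allocation is not permitted while a CUDA stream is being captured into a graph (hence `cudaErrorStreamCaptureUnsupported` with `std::bad_alloc`). A minimal NumPy sketch of greedy NMS illustrates the data-dependent output size; this is illustrative only, not Frigate's or ONNX Runtime's implementation:

```python
import numpy as np

def nms(boxes, scores, iou_threshold=0.5):
    """Greedy non-maximum suppression over [x1, y1, x2, y2] boxes.

    The number of surviving boxes depends on the input data, which is
    why ops like this must allocate memory at run time -- exactly what
    CUDA stream capture forbids.
    """
    order = np.argsort(scores)[::-1]  # highest score first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        # Intersection of the top box with every remaining box.
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        areas = (boxes[order[1:], 2] - boxes[order[1:], 0]) * \
                (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + areas - inter)
        # Drop boxes that overlap the kept box too much.
        order = order[1:][iou <= iou_threshold]
    return keep
```

For example, two heavily overlapping boxes plus one distant box yields two survivors; the output length cannot be known before the kernel runs.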
Answered by NickM-27, Sep 21, 2025

Replies: 1 comment, 3 replies
Thanks, I'll take a look
Fixed in #20159
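For anyone who hits the same error before picking up the fix: a hedged workaround sketch (my assumption based on the stack trace, not something stated in this thread) is to keep ONNX Runtime's CUDA graph capture disabled for the session running the plate model. `enable_cuda_graph` is a documented option of ONNX Runtime's CUDA execution provider; at `"0"`, data-dependent ops such as `NonMaxSuppression` run on a normal stream instead of a capturing one:

```python
# Assumption: the failure comes from ONNX Runtime's CUDA-graph capture path.
# Provider options for the CUDA execution provider; "enable_cuda_graph": "0"
# disables graph capture so run-time allocations are allowed.
providers = [
    ("CUDAExecutionProvider", {"device_id": 0, "enable_cuda_graph": "0"}),
    "CPUExecutionProvider",  # fallback provider
]

# This list would be passed to a session like so (model path is hypothetical):
#   import onnxruntime as ort
#   sess = ort.InferenceSession("plate_detector.onnx", providers=providers)
```

Note that Frigate builds its own sessions internally, so this is only relevant if you are loading the model yourself through ONNX Runtime; otherwise the linked PR is the real fix.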