
Title:
VEHICLE OCCUPANCY DETECTION SYSTEM
Document Type and Number:
WIPO Patent Application WO/2023/028714
Kind Code:
A1
Abstract:
A system for detecting occupancy of a vehicle travelling in a direction of travel along a road. The system includes a roadside imaging device positioned on a roadside, and a first roadside light emitter, and a roadside vehicle detector. A processor is configured to receive a signal from the roadside vehicle detector, command the first roadside light emitter to emit light according to a first pattern for a first duration, command the roadside imaging device to capture images of the side of the vehicle, and compute a vehicle occupancy, in each of the captured images by determining one or more regions of interest in each of the captured images, and determining a number of visible occupants in the one or more regions of interest. The processor determines a most likely number of occupants based on each determined vehicle occupancy.

Inventors:
ALI KARIM (CA)
WANG ZHIJIE (CA)
ZORLU SEYDI (CA)
MOHTAT ARASH (CA)
BECKER CARLOS (CH)
Application Number:
PCT/CA2022/051330
Publication Date:
March 09, 2023
Filing Date:
September 02, 2022
Assignee:
INVISION AI INC (CA)
International Classes:
G08G1/01; G01S17/86; G07B15/06; G08G1/052
Domestic Patent References:
WO2008099146A1 (2008-08-21)
Foreign References:
US20200234067A1 (2020-07-23)
US20130093891A1 (2013-04-18)
US20180157922A1 (2018-06-07)
US20150294144A1 (2015-10-15)
US20080175438A1 (2008-07-24)
US11308316B1 (2022-04-19)
Attorney, Agent or Firm:
NORTON ROSE FULBRIGHT CANADA LLP (CA)
Claims:
WHAT IS CLAIMED IS:

1. A system for detecting occupancy of a vehicle travelling in an expected direction of travel along a road, the system comprising: a first roadside imaging device positioned on a roadside, having a first field of view of the road, the first field of view incident on a side of the vehicle when the vehicle is on the road within the first field of view; a first roadside light emitter emitting light towards vehicles in the first field of view; a roadside vehicle detector; a processor, in communication with a memory, configured to: receive a signal from the roadside vehicle detector indicating that the vehicle is within the first field of view or proximate, relative to the expected direction of vehicle travel, to the first field of view; command the first roadside light emitter to emit light according to a first pattern for a first duration; command the first roadside imaging device to capture one or more images of the side of the vehicle according to a second pattern associated with the first pattern, during a second duration associated with the first duration; receive the captured images of the side of the vehicle from the first roadside imaging device; compute a vehicle occupancy of the vehicle by, in each of the captured images: determining one or more regions of interest of the vehicle in each of the captured images; determining the vehicle occupancy as a number of visible occupants in the one or more regions of interest; and determining a most likely number of occupants based on each determined vehicle occupancy; and transmit the vehicle occupancy to a monitoring system.

2. The system of claim 1, wherein: the first roadside imaging device is positioned to extract data for different perspectives of occupants as the vehicle travels horizontally across the field of view; and each of the images captured by the first roadside imaging device includes different perspectives of the side of the vehicle.

3. The system of claim 2, wherein the processor is configured to compute a first yaw angle relative to a horizontal axis perpendicular to the expected direction of vehicle travel, wherein the images captured by the first roadside imaging device include the different perspectives of the side of the vehicle based on the first yaw angle.

4. The system of claim 1, wherein the processor, to compute the vehicle occupancy of the vehicle, is configured to: discard uninteresting regions of the plurality of captured images to generate subsets of the plurality of captured images; and determine the number of visible occupants based on determining one or more regions of interest of the vehicle in the respective subset of the plurality of captured images.

5. The system of claim 1, wherein the first roadside imaging device, the first roadside light emitter, and the vehicle detector are attached to a mobile roadside structure.

6. The system of claim 1, further comprising: a second roadside imaging device, above the first roadside imaging device, the second roadside imaging device having a second field of view of a second lane of the road, the second lane being further from the first roadside imaging device than a first lane of the road, the second field of view incident on a side of a further vehicle when the further vehicle is in the second lane within the second field of view; a second roadside light emitter adjacent to the road and emitting light towards vehicles in the second field of view; wherein the processor is further configured to: receive another signal from the vehicle detector indicating that the further vehicle is within or proximate, relative to the expected direction of vehicle travel, to the second field of view; command the second roadside light emitter to emit light according to a third pattern for a third duration; command the second roadside imaging device to capture additional images of the side of the further vehicle according to a fourth pattern associated with the third pattern, during a fourth duration associated with the third duration; receive the additional captured images of the side of the further vehicle from the second roadside imaging device; compute a vehicle occupancy of the further vehicle by, in each of the additional captured images: determining one or more regions of interest of the further vehicle in each of the additional captured images; determining the vehicle occupancy of the further vehicle as a number of visible occupants of the further vehicle in the one or more regions of interest of the further vehicle; and determining a most likely number of occupants of the further vehicle based on each determined vehicle occupancy of the further vehicle; and transmit the vehicle occupancy of the further vehicle to the monitoring system.

7. The system of claim 6, wherein the first field of view and the second field of view overlap, and the processor is further configured to: determine the one or more regions of interest of the vehicle in the one or more additional captured images; determine a further number of visible occupants of the vehicle in the one or more additional captured images in the one or more regions of interest; and determine the most likely number of occupants of the vehicle based on each determined vehicle occupancy and each determined further number of visible occupants.

8. The system of claim 1, wherein the processor is further configured to: monitor, over time, a plurality of signals from the roadside vehicle detector to determine an expected vehicle speed of the vehicle; and adjust one or more parameters of the first roadside imaging device or the first light emitter into a determined optimal configuration for capturing vehicles travelling the expected vehicle speed.

9. The system of claim 1, wherein the processor is further configured to: monitor, over time, a plurality of signals from the roadside vehicle detector to determine an expected vehicle speed of the vehicle; and determine the first pattern and the first time window based on the expected vehicle speed.

10. The system of claim 1, further comprising: a sensor for detecting ambient conditions; wherein the processor is further configured to: receive ambient condition information from the sensor; determine an optimal configuration for the imaging device based on the received ambient condition; and transmit a further command signal to the imaging device to capture images according to the optimal configuration.

11. The system of claim 1, wherein the light emitter is an LED emitting infrared or near infrared light, the first pattern is 120 pulses per second, and the regions of interest are a rear side window and a front side window.


12. A method for detecting occupancy of a vehicle travelling in an expected direction of travel along a road, the method comprising: receiving a signal indicating that the vehicle is within or proximate, relative to the expected direction of vehicle travel, to a first field of view of a first roadside imaging device; commanding a first roadside light emitter to emit light according to a first pattern for a first duration; commanding the first roadside imaging device to capture images of a side of the vehicle according to a second pattern associated with the first pattern, during a second duration associated with the first duration; receiving the captured images of the side of the vehicle from the first roadside imaging device; computing a vehicle occupancy of the vehicle by, in each of the captured images: determining one or more regions of interest of the side of the vehicle in each of the captured images; determining the vehicle occupancy in the one or more regions of interest; and determining a most likely number of occupants based on each determined vehicle occupancy; and transmitting the most likely number of occupants to a monitoring system.

13. The method of claim 12, further comprising: discarding uninteresting regions of the plurality of captured images to generate subsets of the plurality of captured images; and determining the number of visible occupants based on determining one or more regions of interest of the vehicle in the respective subset of the plurality of captured images.

14. The method of claim 12, wherein the one or more regions of interest include at least one of a rear side window and a front side window.


15. The method of claim 12, wherein each of the captured images includes the side of the vehicle at different perspectives based on a yaw angle which encourages image variation.

16. The method of claim 12, the method further comprising: commanding a second roadside imaging device to capture additional images of the side of the vehicle from a second field of view according to a fourth pattern associated with the first pattern, for a fourth duration associated with the first duration; receiving the additional captured images of the side of the vehicle from the second roadside imaging device; wherein computing the vehicle occupancy of the vehicle further comprises, for each of the additional captured images: determining one or more additional regions of interest of the vehicle; determining the vehicle occupancy of the vehicle in the additional one or more regions of interest of the vehicle; and determining the most likely number of occupants of the vehicle based on each of the number of visible occupants and the further number of visible occupants; and transmitting the vehicle occupancy of the vehicle to the monitoring system.

17. The method of claim 16, the method further comprising: receiving a signal indicating that a further vehicle is within or proximate, relative to the expected direction of vehicle travel, to the second field of view; commanding a second roadside light emitter to emit light according to a third pattern for a third duration; commanding the second roadside imaging device to capture additional images of a side of the further vehicle according to a fourth pattern associated with the third pattern, during a fourth duration associated with the third duration; receiving the additional captured images of the side of the further vehicle from the second roadside imaging device; computing a vehicle occupancy of the further vehicle by, in each of the additional captured images: determining one or more further regions of interest of a side of the further vehicle in each of the additional captured images; determining the further vehicle occupancy as a number of visible occupants in the one or more further regions of interest; and determining a most likely number of occupants of the further vehicle based on each determined further vehicle occupancy; and transmitting the most likely number of occupants of the further vehicle to the monitoring system.

18. The method of claim 12 further comprising computing a correction parameter and providing visual guidance using augmented reality avatars on a display device.

19. The method of claim 12, the method further comprising: monitoring, over time, a plurality of signals from the roadside vehicle detector to determine an expected vehicle speed of the vehicle; and adjusting one or more parameters of the first roadside imaging device or the first light emitter into a determined adjusted configuration for capturing vehicles travelling the expected vehicle speed.

20. The method of claim 12, the method further comprising: monitoring, over time, a plurality of signals from the roadside vehicle detector to determine an expected vehicle speed of the vehicle; and determining the first pattern and the first time window based on the expected vehicle speed.

21. A system for detecting vehicle occupancy, the system comprising: a first roadside imaging device having a first field of view; a first roadside light emitter emitting light in the first field of view; a roadside vehicle detector; a processor, in communication with a memory, configured to: receive a signal from the roadside vehicle detector; command the first roadside light emitter to emit light according to a first pattern for a first duration; command the first roadside imaging device to capture one or more images according to a second pattern associated with the first pattern, during a second duration associated with the first duration; receive the captured images from the first roadside imaging device; compute a vehicle occupancy by, in each of the captured images: determining one or more regions of interest in each of the captured images; determining the vehicle occupancy based on the one or more regions of interest; and determining a most likely number of occupants based on each determined vehicle occupancy; and transmit the vehicle occupancy to a monitoring system or store the vehicle occupancy in memory.

22. The system of claim 21, wherein: the first roadside imaging device is positioned to extract data for different perspectives across the field of view; and at least some of the images captured by the first roadside imaging device include the different perspectives.


23. The system of claim 22, wherein the processor is configured to compute a first yaw angle relative to a horizontal axis perpendicular to an expected direction, wherein the images captured by the first roadside imaging device include the different perspectives based on the first yaw angle.

24. The system of claim 21, wherein the processor, to compute the vehicle occupancy, is configured to: discard uninteresting regions of the plurality of captured images to generate subsets of the plurality of captured images; and determine a number of visible occupants based on determining one or more regions of interest in the respective subset of the plurality of captured images.

25. The system of claim 21, wherein the first roadside imaging device, the first roadside light emitter, and the vehicle detector are attached to a mobile roadside structure.

26. The system of claim 21, further comprising: a second roadside imaging device, above the first roadside imaging device, the second roadside imaging device having a second field of view; a second roadside light emitter emitting light in the second field of view; wherein the processor is further configured to: receive another signal from the vehicle detector; command the second roadside light emitter to emit light according to a third pattern for a third duration; command the second roadside imaging device to capture additional images according to a fourth pattern associated with the third pattern, during a fourth duration associated with the third duration; receive the additional captured images from the second roadside imaging device; compute another vehicle occupancy by, in each of the additional captured images: determining one or more regions of interest in each of the additional captured images; determining the vehicle occupancy using the one or more regions of interest; and determining a most likely number of occupants based on each determined vehicle occupancy; and transmit the vehicle occupancy to the monitoring system.

27. The system of claim 26, wherein the first field of view and the second field of view overlap, and the processor is further configured to: determine the one or more regions of interest in the one or more additional captured images; determine a further number of visible occupants in the one or more additional captured images in the one or more regions of interest; and determine the most likely number of occupants based on each determined vehicle occupancy and each determined further number of visible occupants.

28. The system of claim 21, wherein the processor is further configured to: monitor, over time, a plurality of signals from the roadside vehicle detector to determine an expected vehicle speed; and adjust one or more parameters of the first roadside imaging device or the first light emitter into a determined optimal configuration for capturing images based on the expected vehicle speed.

29. The system of claim 21, wherein the processor is further configured to: monitor, over time, a plurality of signals from the roadside vehicle detector to determine an expected vehicle speed; and determine the first pattern and the first time window based on the expected vehicle speed.

30. The system of claim 21, further comprising: a sensor for detecting ambient conditions; wherein the processor is further configured to: receive ambient condition information from the sensor; determine an optimal configuration for the imaging device based on the received ambient condition; and transmit a further command signal to the imaging device to capture images according to the optimal configuration.

31. The system of claim 21, wherein the light emitter is an LED emitting infrared or near infrared light, and the first pattern is 120 pulses per second.

32. A method for detecting vehicle occupancy, the method comprising: receiving a signal from a detector based on a first field of view of a first roadside imaging device; commanding a first roadside light emitter to emit light according to a first pattern for a first duration; commanding the first roadside imaging device to capture images according to a second pattern associated with the first pattern, during a second duration associated with the first duration; receiving the captured images from the first roadside imaging device; computing a vehicle occupancy by, in each of the captured images: determining one or more regions of interest in each of the captured images; determining the vehicle occupancy in the one or more regions of interest; and determining a most likely number of occupants based on each determined vehicle occupancy; and transmitting the most likely number of occupants to a monitoring system or storing the vehicle occupancy in memory.

33. The method of claim 32, further comprising: discarding uninteresting regions of the plurality of captured images to generate subsets of the plurality of captured images; and determining the number of occupants based on determining one or more regions of interest in the respective subset of the plurality of captured images.

34. The method of claim 32, wherein the one or more regions of interest include at least one of a rear side window and a front side window.

35. The method of claim 32, wherein each of the captured images includes different perspectives based on a yaw angle which encourages image variation.

36. The method of claim 32, the method further comprising: commanding a second roadside imaging device to capture additional images from a second field of view according to a fourth pattern associated with the first pattern, for a fourth duration associated with the first duration; receiving the additional captured images from the second roadside imaging device; wherein computing the vehicle occupancy further comprises, for each of the additional captured images: determining one or more additional regions of interest of the vehicle; determining the vehicle occupancy in the additional one or more regions of interest; and determining the most likely number of occupants based on each of the number of visible occupants and the further number of visible occupants; and transmitting the vehicle occupancy to the monitoring system.

37. The method of claim 36, the method further comprising: receiving a signal from the detector based on the second field of view; commanding a second roadside light emitter to emit light according to a third pattern for a third duration; commanding the second roadside imaging device to capture additional images according to a fourth pattern associated with the third pattern, during a fourth duration associated with the third duration; receiving the additional captured images from the second roadside imaging device; computing a further vehicle occupancy by, in each of the additional captured images: determining one or more further regions of interest in each of the additional captured images; determining the further vehicle occupancy based on the one or more further regions of interest; and determining a most likely number of occupants based on each determined further vehicle occupancy; and transmitting the most likely number of occupants to the monitoring system.

38. The method of claim 32 further comprising computing a correction parameter and providing visual guidance using augmented reality avatars on a display device.

39. The method of claim 32, the method further comprising: monitoring, over time, a plurality of signals from the roadside vehicle detector to determine an expected vehicle speed; and adjusting one or more parameters of the first roadside imaging device or the first light emitter into a determined adjusted configuration for capturing images based on the expected vehicle speed.

40. The method of claim 32, the method further comprising: monitoring, over time, a plurality of signals from the roadside vehicle detector to determine an expected vehicle speed; and determining the first pattern and the first time window based on the expected vehicle speed.

41. A system for detecting vehicle occupancy, the system comprising: a first roadside imaging device having a first field of view; a first roadside light emitter emitting light in the first field of view; a processor, in communication with a memory, configured to: command the first roadside light emitter to emit light according to a first pattern for a first duration; capture, using the first roadside imaging device, one or more images according to a second pattern associated with the first pattern, during a second duration associated with the first duration; receive the captured images from the first roadside imaging device; compute a vehicle occupancy by, in each of the captured images: determining one or more regions of interest in each of the captured images; determining the vehicle occupancy based on the one or more regions of interest; and determining a most likely number of occupants based on each determined vehicle occupancy; and transmit the vehicle occupancy to a monitoring system or store the vehicle occupancy in memory.

42. The system of claim 41, wherein the processor is further configured to detect the vehicle in one or more images of the captured images from the first roadside imaging device.


43. The system of claim 42, wherein detecting the vehicle in one or more images of the captured images comprises: detecting a first image of the captured images where the vehicle is at a first position in the first field of view; detecting a second image of the captured images where the vehicle is at a second position in the first field of view; and generating a series of images of the vehicle using one or more of the first image, zero or more images captured between the first and second images, and the second image.

44. The system of claim 43, wherein generating a series of images of the vehicle comprises generating a series of uniformly distanced images of the vehicle.

45. The system of claim 41, further comprising a roadside vehicle detector; wherein the processor is further configured to: receive a signal from the roadside vehicle detector; and adjust one or more parameters of the first roadside imaging device or the first light emitter based on the signal from the roadside vehicle detector.

46. The system of claim 41, wherein: the first roadside imaging device is positioned to extract data for different perspectives across the field of view; and at least some of the images captured by the first roadside imaging device include the different perspectives.

47. The system of claim 46, wherein the processor is configured to compute a first yaw angle relative to a horizontal axis perpendicular to an expected direction, wherein the images captured by the first roadside imaging device include the different perspectives based on the first yaw angle.


48. The system of claim 41, wherein the processor, to compute the vehicle occupancy, is configured to: discard uninteresting regions of the plurality of captured images to generate subsets of the plurality of captured images; and determine a number of visible occupants based on determining one or more regions of interest in the respective subset of the plurality of captured images.

49. The system of claim 41, wherein the first roadside imaging device, the first roadside light emitter, and the vehicle detector are attached to a mobile roadside structure.

50. The system of claim 41, further comprising: a second roadside imaging device, above the first roadside imaging device, the second roadside imaging device having a second field of view; a second roadside light emitter emitting light in the second field of view; wherein the processor is further configured to: command the second roadside light emitter to emit light according to a third pattern for a third duration; capture, using the second roadside imaging device, additional images according to a fourth pattern associated with the third pattern, during a fourth duration associated with the third duration; receive the additional captured images from the second roadside imaging device; compute another vehicle occupancy by, in each of the additional captured images: determining one or more regions of interest in each of the additional captured images; determining the vehicle occupancy using the one or more regions of interest; and determining a most likely number of occupants based on each determined vehicle occupancy; and transmit the vehicle occupancy to the monitoring system.

51. The system of claim 50, wherein the first field of view and the second field of view overlap, and the processor is further configured to: determine the one or more regions of interest in the one or more additional captured images; determine a further number of visible occupants in the one or more additional captured images in the one or more regions of interest; and determine the most likely number of occupants based on each determined vehicle occupancy and each determined further number of visible occupants.

52. The system of claim 45, wherein the processor is further configured to: monitor, over time, a plurality of signals from the roadside vehicle detector to determine an expected vehicle speed; and adjust one or more parameters of the first roadside imaging device or the first light emitter into a determined optimal configuration for capturing images based on the expected vehicle speed.

53. The system of claim 41, wherein the processor is further configured to: monitor, over time, a plurality of signals from the roadside vehicle detector to determine an expected vehicle speed; and determine the first pattern and the first time window based on the expected vehicle speed.

54. The system of claim 41, further comprising: a sensor for detecting ambient conditions; wherein the processor is further configured to: receive ambient condition information from the sensor; determine an optimal configuration for the imaging device based on the received ambient condition; and transmit a further command signal to the imaging device to capture images according to the optimal configuration.

55. The system of claim 41, wherein the light emitter is an LED emitting infrared or near infrared light, and the first pattern is 120 pulses per second.

56. The system of claim 41, wherein the processor is further configured to anonymize the captured images.

57. A method for detecting vehicle occupancy, the method comprising: commanding a first roadside light emitter to emit light according to a first pattern for a first duration; capturing, using a first roadside imaging device, images according to a second pattern associated with the first pattern, during a second duration associated with the first duration; receiving the captured images from the first roadside imaging device; computing a vehicle occupancy by, in each of the captured images: determining one or more regions of interest in each of the captured images; determining the vehicle occupancy in the one or more regions of interest; and determining a most likely number of occupants based on each determined vehicle occupancy; and transmitting the most likely number of occupants to a monitoring system or storing the vehicle occupancy in memory.

58. The method of claim 57, further comprising detecting the vehicle in one or more images of the captured images from the first roadside imaging device.


59. The method of claim 58, wherein detecting the vehicle in one or more images of the captured images comprises: detecting a first image of the captured images where the vehicle is at a first position in the first field of view; detecting a second image of the captured images where the vehicle is at a second position in the first field of view; and generating a series of images of the vehicle using one or more of the first image, zero or more images captured between the first and second images, and the second image.

60. The method of claim 59, wherein generating a series of images of the vehicle comprises generating a series of uniformly distanced images of the vehicle.

61. The method of claim 57, further comprising: receiving a signal from a roadside vehicle detector indicating that the vehicle is within or proximate, relative to an expected direction of vehicle travel, to a first field of view of the first roadside imaging device; and adjusting one or more parameters of the first roadside imaging device or the first light emitter based on the signal.

62. The method of claim 57, further comprising: discarding uninteresting regions of the plurality of captured images to generate subsets of the plurality of captured images; and determining the number of occupants based on determining one or more regions of interest in the respective subset of the plurality of captured images.

63. The method of claim 57, wherein the one or more regions of interest include at least one of a rear side window and a front side window.

64. The method of claim 57, wherein each of the captured images includes different perspectives based on a yaw angle which encourages image variation.

65. The method of claim 57, the method further comprising: capturing, using a second roadside imaging device, additional images from a second field of view according to a fourth pattern associated with the first pattern, for a fourth duration associated with the first duration; receiving the additional captured images from the second roadside imaging device; wherein computing the vehicle occupancy further comprises, for each of the additional captured images: determining one or more additional regions of interest of the vehicle; determining the vehicle occupancy in the additional one or more regions of interest; and determining the most likely number of occupants based on each of the number of visible occupants and the further number of visible occupants; and transmitting the vehicle occupancy to the monitoring system.

66. The method of claim 57, the method further comprising: commanding a second roadside light emitter to emit light according to a third pattern for a third duration; capturing, using the second roadside imaging device, additional images according to a fourth pattern associated with the third pattern, during a fourth duration associated with the third duration; receiving the additional captured images from the second roadside imaging device; computing a further vehicle occupancy by, in each of the additional captured images: determining one or more further regions of interest in each of the additional captured images; determining the further vehicle occupancy based on the one or more further regions of interest; and determining a most likely number of occupants based on each determined further vehicle occupancy; and transmitting the most likely number of occupants to the monitoring system.

67. The method of claim 57 further comprising computing a correction parameter and providing visual guidance using augmented reality avatars on a display device.

68. The method of claim 61, the method further comprising: monitoring, over time, a plurality of signals from the roadside vehicle detector to determine an expected vehicle speed; and adjusting one or more parameters of the first roadside imaging device or the first light emitter into a determined adjusted configuration for capturing images based on the expected vehicle speed.

69. The method of claim 61, the method further comprising: monitoring, over time, a plurality of signals from the roadside vehicle detector to determine an expected vehicle speed; and determining the first pattern and the first time window based on the expected vehicle speed.

70. The method of claim 57, the method further comprising anonymizing the captured images.


Description:
VEHICLE OCCUPANCY DETECTION SYSTEM

CROSS-REFERENCE

[0001] This application claims all benefit including priority to U.S. Patent Application 17/465681, filed September 2, 2021, and entitled “ROAD SIDE VEHICLE OCCUPANCY DETECTION SYSTEM”, the entire contents of which are hereby incorporated by reference.

FIELD

[0002] The improvements generally relate to the field of vehicle occupancy detection systems, and more specifically to automated vehicle occupancy detection systems and methods.

INTRODUCTION

[0003] Determining vehicle occupancy typically requires a physical human presence, such as police or other policy enforcement personnel, to accurately determine the occupancy of a vehicle.

[0004] Automated vehicle occupancy detection suffers from limited accuracy and potential latency issues, and its accuracy degrades further under the varied road conditions likely to occur during operation. Traditional automated vehicle occupancy systems can also be expensive to implement, function poorly when moved, or be unreliable and difficult to repair or re-calibrate.

[0005] Automated vehicle occupancy detection systems which are more accurate, faster, more reliable, easier to move or install, more robust, or require less calibration are desirable.

SUMMARY

[0006] In accordance with one aspect, there is provided a system for detecting occupancy of a vehicle travelling in an expected direction of travel along a road. The system involves a first roadside imaging device positioned on a roadside, having a first field of view of the road, the first field of view incident on a side of the vehicle when the vehicle is on the road within the first field of view; a first roadside light emitter emitting light towards vehicles in the first field of view; a roadside vehicle detector; a processor, in communication with a memory, configured to: receive a signal from the roadside vehicle detector indicating that the vehicle is within the first field of view or proximate, relative to the expected direction of vehicle travel, to the first field of view; command the first roadside light emitter to emit light according to a first pattern for a first duration; command the first roadside imaging device to capture one or more images of the side of the vehicle according to a second pattern associated with the first pattern, during a second duration associated with the first duration; receive the captured images of the side of the vehicle from the first roadside imaging device; compute a vehicle occupancy of the vehicle by, in each of the captured images: determining one or more regions of interest of the vehicle in each of the captured images; determining the vehicle occupancy as a number of visible occupants in the one or more regions of interest; and determining a most likely number of occupants based on each determined vehicle occupancy; and transmit the vehicle occupancy to a monitoring system.
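
To make this flow concrete, the following sketch walks through the claimed steps in code. It is an illustration only, not the patented implementation: `detector`, `emitter`, `camera`, `roi_model`, and `monitor` are hypothetical components, and majority voting is just one plausible way of "determining a most likely number of occupants".

```python
from collections import Counter

def detect_vehicle_occupancy(detector, emitter, camera, roi_model, monitor,
                             pattern_hz=120.0, duration_s=0.5):
    """Minimal sketch of the claimed flow: trigger on detection, pulse the
    emitter, capture synchronized side images, count occupants per image,
    and report the most likely count."""
    # 1. Wait for the roadside detector to report a vehicle entering
    #    (or approaching) the imaging device's field of view.
    detector.wait_for_vehicle()

    # 2. Command the emitter to flash according to a first pattern for a
    #    first duration, and the camera to capture according to an
    #    associated second pattern/duration (here: one frame per pulse).
    emitter.emit(rate_hz=pattern_hz, duration_s=duration_s)
    images = camera.capture(rate_hz=pattern_hz, duration_s=duration_s)

    # 3. Per captured image: find regions of interest (e.g. side windows)
    #    and count the occupants visible in them.
    per_image_counts = []
    for image in images:
        rois = roi_model.find_regions_of_interest(image)
        per_image_counts.append(sum(roi_model.count_visible_occupants(r)
                                    for r in rois))

    # 4. Aggregate the per-image estimates into a most likely occupancy,
    #    here by simple majority vote over the image-level counts.
    most_likely = Counter(per_image_counts).most_common(1)[0][0]

    # 5. Report the result to the monitoring system.
    monitor.transmit(occupancy=most_likely)
    return most_likely
```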

[0007] In some embodiments, the first roadside imaging device is positioned to extract data for different perspectives of occupants as the vehicle travels horizontally across the field of view; and each of the images captured by the first roadside imaging device includes different perspectives of the side of the vehicle.

[0008] In some embodiments, the processor is configured to compute a first yaw angle relative to a horizontal axis perpendicular to the expected direction of vehicle travel, wherein the images captured by the first roadside imaging device include the different perspectives of the side of the vehicle based on the first yaw angle.
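
As a geometric illustration of this yaw computation, the sketch below measures a camera boresight against the horizontal axis perpendicular to the expected direction of travel. The positions, aim point, and sign convention are assumptions for illustration; the patent does not prescribe a formula.

```python
import math

def camera_yaw_deg(camera_xy, aim_xy, travel_dir_xy):
    """Yaw of the camera's optical axis measured against the horizontal
    axis perpendicular to the expected direction of travel."""
    # Unit vector along the expected direction of travel.
    tx, ty = travel_dir_xy
    norm = math.hypot(tx, ty)
    tx, ty = tx / norm, ty / norm
    # Horizontal axis perpendicular to travel (travel rotated by 90 degrees).
    px, py = -ty, tx
    # Camera boresight: from camera position toward its aim point.
    bx, by = aim_xy[0] - camera_xy[0], aim_xy[1] - camera_xy[1]
    # Signed angle between the boresight and the perpendicular axis.
    return math.degrees(math.atan2(bx * py - by * px, bx * px + by * py))

# A camera aimed slightly "upstream" sees the vehicle side at changing
# perspectives as the vehicle crosses the field of view:
print(camera_yaw_deg((0, 0), (3, 10), (1, 0)))  # ~16.7 degrees
```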

[0009] In some embodiments, the processor, to compute the vehicle occupancy of the vehicle, is configured to: discard uninteresting regions of the plurality of captured images to generate subsets of the plurality of captured images; and determine the number of visible occupants based on determining one or more regions of interest of the vehicle in the respective subset of the plurality of captured images.
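
A minimal sketch of this discard-and-subset step, assuming numpy-style images and a hypothetical `roi_model` that returns bounding boxes; only the crops around detected regions of interest are retained for occupant counting.

```python
def crop_to_interesting_regions(images, roi_model, min_roi_area=1.0):
    """Keep only the crops around detected regions of interest, yielding
    per-image subsets for the occupant counter to run on."""
    subsets = []
    for image in images:
        crops = []
        for (x0, y0, x1, y1) in roi_model.find_regions_of_interest(image):
            if (x1 - x0) * (y1 - y0) >= min_roi_area:  # drop degenerate boxes
                crops.append(image[y0:y1, x0:x1])      # numpy-style slicing
        subsets.append(crops)  # everything outside the crops is discarded
    return subsets
```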

[0010] In some embodiments, the first roadside imaging device, the first roadside light emitter, and the vehicle detector are attached to a mobile roadside structure.

[0011] In some embodiments, the system has a second roadside imaging device, above the first roadside imaging device, the second roadside imaging device having a second field of view of a second lane of the road, the second lane being further from the first roadside imaging device than a first lane of the road, the second field of view incident on a side of a further vehicle when the further vehicle is in the second lane within the second field of view; a second roadside light emitter adjacent to the road and emitting light towards vehicles in the second field of view; wherein the processor is further configured to: receive another signal from the vehicle detector indicating that the further vehicle is within or proximate, relative to the expected direction of vehicle travel, to the second field of view; command the second roadside light emitter to emit light according to a third pattern for a third duration; command the second roadside imaging device to capture additional images of the side of the further vehicle according to a fourth pattern associated with the third pattern, during a fourth duration associated with the third duration; receive the additional captured images of the side of the further vehicle from the second roadside imaging device; compute a vehicle occupancy of the further vehicle by, in each of the additional captured images: determining one or more regions of interest of the further vehicle in each of the additional captured images; determining the vehicle occupancy of the further vehicle as a number of visible occupants of the further vehicle in the one or more regions of interest of the further vehicle; and determining a most likely number of occupants of the further vehicle based on each determined vehicle occupancy of the further vehicle; and transmit the vehicle occupancy of the further vehicle to the monitoring system.

[0012] In some embodiments, the first field of view and the second field of view overlap, and the processor is further configured to: determine the one or more regions of interest of the vehicle in the one or more additional captured images; determine a further number of visible occupants of the vehicle in the one or more additional captured images in the one or more regions of interest; and determine the most likely number of occupants of the vehicle based on each determined vehicle occupancy and each determined further number of visible occupants.
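
One plausible reading of this overlapping-field-of-view case is that the per-image counts from both imaging devices jointly vote on a single estimate. The plain majority vote below is an assumption, not the only aggregation the claim would cover.

```python
from collections import Counter

def fuse_occupancy_estimates(primary_counts, secondary_counts):
    """Combine per-image occupant counts from two imaging devices whose
    fields of view overlap, by majority vote across all images."""
    votes = Counter(primary_counts) + Counter(secondary_counts)
    return votes.most_common(1)[0][0]

# e.g. camera 1 saw [2, 2, 3] occupants, camera 2 saw [2, 2]  ->  2
print(fuse_occupancy_estimates([2, 2, 3], [2, 2]))
```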

[0013] In some embodiments, the processor is further configured to: monitor, over time, a plurality of signals from the roadside vehicle detector to determine an expected vehicle speed of the vehicle; and adjust one or more parameters of the first roadside imaging device or the first light emitter into a determined optimal configuration for capturing vehicles travelling the expected vehicle speed.

[0014] In some embodiments, the processor is further configured to: monitor, over time, a plurality of signals from the roadside vehicle detector to determine an expected vehicle speed of the vehicle; and determine the first pattern and the first time window based on the expected vehicle speed.
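
The sketch below illustrates both of the speed-related embodiments: estimating an expected speed from timestamped detector signals, and deriving a pulse pattern and duration from it. The two-gate detector geometry, the field-of-view width, and the frame budget are all assumptions made for illustration.

```python
def expected_speed_mps(detections, gate_distance_m=2.0):
    """Estimate expected vehicle speed from timestamped detector signals,
    assumed here to be pairs of crossing times at two gates a known
    distance apart (the patent leaves the detector technology open)."""
    dts = [b - a for a, b in detections]  # crossing-time deltas in seconds
    return gate_distance_m * len(dts) / sum(dts)

def pattern_for_speed(speed_mps, fov_width_m=6.0, frames_wanted=12):
    """Pick a first duration that keeps the vehicle in view and a pulse
    rate that yields the desired number of synchronized frames."""
    duration_s = fov_width_m / max(speed_mps, 1.0)  # time spent in the FOV
    rate_hz = frames_wanted / duration_s
    return rate_hz, duration_s

rate, dur = pattern_for_speed(expected_speed_mps([(0.00, 0.08), (5.0, 5.09)]))
print(f"{rate:.0f} pulses/s for {dur:.2f} s")
```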

[0015] In some embodiments, the system has a sensor for detecting ambient conditions; wherein the processor is further configured to: receive ambient condition information from the sensor; determine an optimal configuration for the imaging device based on the received ambient condition; and transmit a further command signal to the imaging device to capture images according to the optimal configuration.
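
A sketch of how sensed ambient conditions might map to a camera configuration. The thresholds and parameter names here are invented for illustration; the patent only claims that some optimal configuration is determined and sent to the imaging device.

```python
def configuration_for_ambient(lux, visibility_m):
    """Map sensed ambient conditions to illustrative camera settings."""
    config = {"gain": 1.0, "exposure_us": 200, "ir_boost": False}
    if lux < 50:                 # night: longer exposure, IR illumination
        config.update(exposure_us=800, ir_boost=True)
    if visibility_m < 100:       # fog/rain: raise gain to keep contrast
        config["gain"] = 2.0
    return config

print(configuration_for_ambient(lux=20, visibility_m=300))
# {'gain': 1.0, 'exposure_us': 800, 'ir_boost': True}
```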

[0016] In some embodiments, the light emitter is an LED emitting infrared or near infrared light, the first pattern is 120 pulses per second, and the regions of interest are a rear side window and a front side window.
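
To show what a "second pattern associated with the first pattern" could look like at 120 pulses per second, the sketch below generates pulse timestamps and camera exposure windows centred on each pulse. Centring is an assumption; the patent only requires the two patterns to be associated.

```python
def pulse_and_exposure_schedule(rate_hz=120.0, duration_s=0.25,
                                exposure_us=300):
    """Timestamps for an emitter pulsing at `rate_hz` for `duration_s`,
    with one camera exposure window straddling each pulse."""
    period = 1.0 / rate_hz
    pulses = [i * period for i in range(int(duration_s * rate_hz))]
    half_window_s = exposure_us * 5e-7  # half the exposure, in seconds
    exposures = [(t - half_window_s, t + half_window_s) for t in pulses]
    return pulses, exposures

pulses, exposures = pulse_and_exposure_schedule()
print(len(pulses), "pulses; first exposure window:", exposures[0])
```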

[0017] In accordance with another aspect, there is provided a method for detecting occupancy of a vehicle travelling in an expected direction of travel along a road. The method involves receiving a signal indicating that the vehicle is within or proximate, relative to the expected direction of vehicle travel, to a first field of view of a first roadside imaging device; commanding a first roadside light emitter to emit light according to a first pattern for a first duration; commanding the first roadside imaging device to capture images of a side of the vehicle according to a second pattern associated with the first pattern, during a second duration associated with the first duration; receiving the captured images of the side of the vehicle from the first roadside imaging device; computing a vehicle occupancy of the vehicle by, in each of the captured images: determining one or more regions of interest of the side of the vehicle in each of the captured images; determining the vehicle occupancy in the one or more regions of interest; and determining a most likely number of occupants based on each determined vehicle occupancy; and transmitting the most likely number of occupants to a monitoring system.

[0018] In some embodiments, the method involves discarding uninteresting regions of the plurality of captured images to generate subsets of the plurality of captured images; and determining the number of visible occupants based on determining one or more regions of interest of the vehicle in the respective subset of the plurality of captured images.

[0019] In some embodiments, the one or more regions of interest include at least one of a rear side window and a front side window.

[0020] In some embodiments, each of the captured images includes the side of the vehicle at different perspectives based on a yaw angle which encourages image variation.

[0021] In some embodiments, the method involves commanding a second roadside imaging device to capture additional images of the side of the vehicle from a second field of view according to a fourth pattern associated with the first pattern, for a fourth duration associated with the first duration; receiving the additional captured images of the side of the vehicle from the second roadside imaging device; wherein computing the vehicle occupancy of the vehicle further comprises, for each of the additional captured images: determining one or more additional regions of interest of the vehicle; determining the vehicle occupancy of the vehicle in the additional one or more regions of interest of the vehicle; and determining the most likely number of occupants of the vehicle based on each of the number of visible occupants and the further number of visible occupants; and transmitting the vehicle occupancy of the vehicle to the monitoring system.

[0022] In some embodiments, the method involves receiving a signal indicating that a further vehicle is within or proximate, relative to the expected direction of vehicle travel, to the second field of view; commanding a second roadside light emitter to emit light according to a third pattern for a third duration; commanding the second roadside imaging device to capture additional images of a side of the further vehicle according to a fourth pattern associated with the third pattern, during a fourth duration associated with the third duration; receiving the additional captured images of the side of the further vehicle from the second roadside imaging device; computing a vehicle occupancy of the further vehicle by, in each of the additional captured images: determining one or more further regions of interest of a side of the further vehicle in each of the additional captured images; determining the further vehicle occupancy as a number of visible occupants in the one or more further regions of interest; and determining a most likely number of occupants of the further vehicle based on each determined further vehicle occupancy; and transmitting the most likely number of occupants of the further vehicle to the monitoring system.

[0023] In some embodiments, the method involves computing a correction parameter and providing visual guidance using augmented reality avatars on a display device.

[0024] In some embodiments, the method involves monitoring, over time, a plurality of signals from the roadside vehicle detector to determine an expected vehicle speed of the vehicle; and adjusting one or more parameters of the first roadside imaging device or the first light emitter into a determined adjusted configuration for capturing vehicles travelling the expected vehicle speed.

[0025] In some embodiments, the method involves monitoring, over time, a plurality of signals from the roadside vehicle detector to determine an expected vehicle speed of the vehicle; and determining the first pattern and the first time window based on the expected vehicle speed.

[0026] In accordance with another aspect, there is provided a system for detecting vehicle occupancy. The system involves a first roadside imaging device having a first field of view; a first roadside light emitter emitting light in the first field of view; a roadside vehicle detector; a processor, in communication with a memory, configured to: receive a signal from the roadside vehicle detector; command the first roadside light emitter to emit light according to a first pattern for a first duration; command the first roadside imaging device to capture one or more images according to a second pattern associated with the first pattern, during a second duration associated with the first duration; receive the captured images from the first roadside imaging device; compute a vehicle occupancy by, in each of the captured images: determining one or more regions of interest in each of the captured images; determining the vehicle occupancy based on the one or more regions of interest; and determining a most likely number of occupants based on each determined vehicle occupancy; and transmit the vehicle occupancy to a monitoring system or store the vehicle occupancy in memory.

[0027] In some embodiments, the first roadside imaging device is positioned to extract data for different perspectives across the field of view; and at least some of the images captured by the first roadside imaging device include the different perspectives.

[0028] In some embodiments, the processor is configured to compute a first yaw angle relative to a horizontal axis perpendicular to an expected direction, wherein the images captured by the first roadside imaging device include the different perspectives based on the first yaw angle.

[0029] In some embodiments, the processor, to compute the vehicle occupancy, is configured to: discard uninteresting regions of the plurality of captured images to generate subsets of the plurality of captured images; and determine a number of visible occupants based on determining one or more regions of interest in the respective subset of the plurality of captured images.

[0030] In some embodiments, the first roadside imaging device, the first roadside light emitter, and the vehicle detector are attached to a mobile roadside structure.

[0031] In some embodiments, the system has a second roadside imaging device, above the first roadside imaging device, the second roadside imaging device having a second field of view and a second roadside light emitter emitting light in the second field of view. The processor is further configured to: receive another signal from the vehicle detector; command the second roadside light emitter to emit light according to a third pattern for a third duration; command the second roadside imaging device to capture additional images according to a fourth pattern associated with the third pattern, during a fourth duration associated with the third duration; receive the additional captured images from the second roadside imaging device; compute another vehicle occupancy by, in each of the additional captured images: determining one or more regions of interest in each of the additional captured images; determining the vehicle occupancy using the one or more regions of interest; and determining a most likely number of occupants based on each determined vehicle occupancy; and transmit the vehicle occupancy to the monitoring system.

[0032] In some embodiments, the first field of view and the second field of view overlap, and the processor is further configured to: determine the one or more regions of interest in the one or more additional captured images; determine a further number of visible occupants in the one or more additional captured images in the one or more regions of interest; and determine the most likely number of occupants based on each determined vehicle occupancy and each determined further number of visible occupants.

[0033] In some embodiments, the processor is further configured to: monitor, over time, a plurality of signals from the roadside vehicle detector to determine an expected vehicle speed; and adjust one or more parameters of the first roadside imaging device or the first light emitter into a determined optimal configuration for capturing images based on the expected vehicle speed.

[0034] In some embodiments, the processor is further configured to: monitor, over time, a plurality of signals from the roadside vehicle detector to determine an expected vehicle speed; and determine the first pattern and the first time window based on the expected vehicle speed.

[0035] In some embodiments, the system involves a sensor for detecting ambient conditions. The processor is further configured to: receive ambient condition information from the sensor; determine an optimal configuration for the imaging device based on the received ambient condition; and transmit a further command signal to the imaging device to capture images according to the optimal configuration.

[0036] In some embodiments, the light emitter is an LED emitting infrared or near infrared light, and the first pattern is 120 pulses per second.

[0037] In accordance with another aspect there is provided a method for detecting vehicle occupancy. The method involves receiving a signal from a detector based on a first field of view of a first roadside imaging device; commanding a first roadside light emitter to emit light according to a first pattern for a first duration; commanding the first roadside imaging device to capture images according to a second pattern associated with the first pattern, during a second duration associated with the first duration; receiving the captured images from the first roadside imaging device; computing a vehicle occupancy by, in each of the captured images: determining one or more regions of interest in each of the captured images; determining the vehicle occupancy in the one or more regions of interest; and determining a most likely number of occupants based on each determined vehicle occupancy; and transmitting the most likely number of occupants to a monitoring system or storing the vehicle occupancy in memory.

[0038] In some embodiments, the method involves discarding uninteresting regions of the plurality of captured images to generate subsets of the plurality of captured images; and determining the number of occupants based on determining one or more regions of interest in the respective subset of the plurality of captured images.

[0039] In some embodiments, the one or more regions of interest include at least one of a rear side window and a front side window.

[0040] In some embodiments, each of the captured images includes different perspectives based on a yaw angle which encourages image variation.

[0041] In some embodiments, the method involves: commanding a second roadside imaging device to capture additional images from a second field of view according to a fourth pattern associated with the first pattern, for a fourth duration associated with the first duration; receiving the additional captured images from the second roadside imaging device; wherein computing the vehicle occupancy further comprises, for each of the additional captured images: determining one or more additional regions of interest of the vehicle; determining the vehicle occupancy in the additional one or more regions of interest; and determining the most likely number of occupants based on each of the number of visible occupants and the further number of visible occupants; and transmitting the vehicle occupancy to the monitoring system.

[0042] In some embodiments, the method involves: receiving a signal from the detector based on the second field of view; commanding a second roadside light emitter to emit light according to a third pattern for a third duration; commanding the second roadside imaging device to capture additional images according to a fourth pattern associated with the third pattern, during a fourth duration associated with the third duration; receiving the additional captured images from the second roadside imaging device; computing a further vehicle occupancy by, in each of the additional captured images: determining one or more further regions of interest in each of the additional captured images; determining the further vehicle occupancy based on the one or more further regions of interest; and determining a most likely number of occupants based on each determined further vehicle occupancy; and transmitting the most likely number of occupants to the monitoring system.

[0043] In some embodiments, the method involves computing a correction parameter and providing visual guidance using augmented reality avatars on a display device.

[0044] In some embodiments, the method involves monitoring, over time, a plurality of signals from the roadside vehicle detector to determine an expected vehicle speed; and adjusting one or more parameters of the first roadside imaging device or the first light emitter into a determined adjusted configuration for capturing images based on the expected vehicle speed.

[0045] In some embodiments, the method involves: monitoring, over time, a plurality of signals from the roadside vehicle detector to determine an expected vehicle speed; and determining the first pattern and the first time window based on the expected vehicle speed.

[0046] In accordance with another aspect there is provided a system for detecting vehicle occupancy. The system involves a first roadside imaging device having a first field of view; a first roadside light emitter emitting light in the first field of view; a processor, in communication with a memory, configured to: command the first roadside light emitter to emit light according to a first pattern for a first duration; capture, using the first roadside imaging device, one or more images according to a second pattern associated with the first pattern, during a second duration associated with the first duration; receive the captured images from the first roadside imaging device; compute a vehicle occupancy by, in each of the captured images: determining one or more regions of interest in each of the captured images; determining the vehicle occupancy based on the one or more regions of interest; and determining a most likely number of occupants based on each determined vehicle occupancy; and transmit the vehicle occupancy to a monitoring system or store the vehicle occupancy in memory.

[0047] In some embodiments, the processor is further configured to detect the vehicle in one or more images of the captured images from the first roadside imaging device.

[0048] In some embodiments, detecting the vehicle in one or more images of the captured images involves: detecting a first image of the captured images where the vehicle is at a first position in the first field of view; detecting a second image of the captured images where the vehicle is at a second position in the first field of view; and generating a series of images of the vehicle using one or more of the first image, zero or more images captured between the first and second images, and the second image.

[0049] In some embodiments, the generating a series of images of the vehicle comprises generating a series of uniformly distanced images of the vehicle.

[0050] In some embodiments, the system involves a roadside vehicle detector. The processor is further configured to receive a signal from the roadside vehicle detector; and adjust one or more parameters of the first roadside imaging device or the first light emitter based on the signal from the roadside vehicle detector.

[0051] In some embodiments, the first roadside imaging device is positioned to extract data for different perspectives across the field of view; and at least some of the images captured by the first roadside imaging device include the different perspectives.

[0052] In some embodiments, the processor is configured to compute a yaw angle relative to a horizontal axis perpendicular to an expected direction, wherein the images captured by the first roadside imaging device include the different perspectives based on the yaw angle.

[0053] In some embodiments, the processor, to compute the vehicle occupancy, is configured to: discard uninteresting regions of the plurality of captured images to generate subsets of the plurality of captured images; and determine a number of visible occupants based on determining one or more regions of interest in the respective subset of the plurality of captured images.

[0054] In some embodiments, the first roadside imaging device, the first roadside light emitter, and the vehicle detector are attached to a mobile roadside structure.

[0055] In some embodiments, the system involves a second roadside imaging device, above the first roadside imaging device, the second roadside imaging device having a second field of view and a second roadside light emitter emitting light in the second field of view. The processor is further configured to: command the second roadside light emitter to emit light according to a third pattern for a third duration; capture, using the second roadside imaging device, additional images according to a fourth pattern associated with the third pattern, during a fourth duration associated with the third duration; receive the additional captured images from the second roadside imaging device; compute another vehicle occupancy by, in each of the additional captured images: determining one or more regions of interest in each of the additional captured images; determining the vehicle occupancy using the one or more regions of interest; and determining a most likely number of occupants based on each determined vehicle occupancy of the further vehicle; and transmit the vehicle occupancy to the monitoring system.

[0056] In some embodiments, the first field of view and the second field of view overlap, and the processor is further configured to: determine the one or more regions of interest in the one or more additional captured images; determine a further number of visible occupants in the one or more additional captured images in the one or more regions of interest; and determine the most likely number of occupants based on each determined vehicle occupancy and each determined further number of visible occupants.

[0057] In some embodiments, the processor is further configured to: monitor, over time, a plurality of signals from the roadside vehicle detector to determine an expected vehicle speed; and adjust one or more parameters of the first roadside imaging device or the first light emitter into a determined optimal configuration for capturing images based on the expected vehicle speed.

[0058] In some embodiments, the processor is further configured to: monitor, over time, a plurality of signals from the roadside vehicle detector to determine an expected vehicle speed; and determine the first pattern and the first time window based on the expected vehicle speed.

[0059] In some embodiments, the system involves a sensor for detecting ambient conditions. The processor is further configured to: receive ambient condition information from the sensor; determine an optimal configuration for the imaging device based on the received ambient condition; and transmit a further command signal to the imaging device to capture images according to the optimal configuration.

[0060] In some embodiments, the light emitter is an LED emitting infrared or near infrared light, and the first pattern is 120 pulses per second.

[0061] In some embodiments, the processor is further configured to anonymize the captured images.

[0062] In accordance with another aspect, there is provided a method for detecting vehicle occupancy. The method involves commanding a first roadside light emitter to emit light according to a first pattern for a first duration; capturing, using the first roadside imaging device, images according to a second pattern associated with the first pattern, during a second duration associated with the first duration; receiving the captured images from the first roadside imaging device; computing a vehicle occupancy by, in each of the captured images: determining one or more regions of interest in each of the captured images; determining the vehicle occupancy in the one or more regions of interest; and determining a most likely number of occupants based on each determined vehicle occupancy; and transmitting the most likely number of occupants to a monitoring system or storing the vehicle occupancy in memory.

[0063] In some embodiments, the method involves detecting the vehicle in one or more images of the captured images from the first roadside imaging device.

[0064] In some embodiments, the detecting the vehicle in one or more images of the captured images involves: detecting a first image of the captured images where the vehicle is at a first position in the first field of view; detecting a second image of the captured images where the vehicle is at a second position in the first field of view; and generating a series of images of the vehicle using one or more of the first image, zero or more images captured between the first and second images, and the second image.

[0065] In some embodiments, the generating a series of images of the vehicle involves generating a series of uniformly distanced images of the vehicle.

[0066] In some embodiments, the method involves receiving a signal from a roadside vehicle detector indicating that the vehicle is within or proximate, relative to the expected direction of vehicle travel, to a first field of view of a first roadside imaging device; and adjusting one or more parameters of the first roadside imaging device or the first light emitter based on the signal.

[0067] In some embodiments, the method involves discarding uninteresting regions of the plurality of captured images to generate subsets of the plurality of captured images; and determining the number of occupants based on determining one or more regions of interest in the respective subset of the plurality of captured images.

[0068] In some embodiments, the one or more regions of interest include at least one of a rear side window and a front side window.

[0069] In some embodiments, each of the captured images includes different perspectives based on a yaw angle which encourages image variation.

[0070] In some embodiments, the method involves capturing, using a second roadside imaging device, additional images from a second field of view according to a fourth pattern associated with the first pattern, for a fourth duration associated with the first duration; receiving the additional captured images from the second roadside imaging device; wherein computing the vehicle occupancy further comprises, for each of the additional captured images: determining one or more additional regions of interest of the vehicle; determining the vehicle occupancy in the additional one or more regions of interest; and determining the most likely number of occupants based on each of the number of visible occupants and the further number of visible occupants; and transmitting the vehicle occupancy to the monitoring system.

[0071] In some embodiments, the method involves commanding a second roadside light emitter to emit light according to a third pattern for a third duration; capturing, using the second roadside imaging device, additional images according to a fourth pattern associated with the third pattern, during a fourth duration associated with the third duration; receiving the additional captured images from the second roadside imaging device; computing a further vehicle occupancy by, in each of the additional captured images: determining one or more further regions of interest in each of the additional captured images; determining the further vehicle occupancy based on the one or more further regions of interest; and determining a most likely number of occupants based on each determined further vehicle occupancy; and transmitting the most likely number of occupants to the monitoring system.

[0072] In some embodiments, the method involves computing a correction parameter and providing visual guidance using augmented reality avatars on a display device.

[0073] In some embodiments, the method involves monitoring, over time, a plurality of signals from the roadside vehicle detector to determine an expected vehicle speed; and adjusting one or more parameters of the first roadside imaging device or the first light emitter into a determined adjusted configuration for capturing images based on the expected vehicle speed.

[0074] In some embodiments, the method involves monitoring, over time, a plurality of signals from the roadside vehicle detector to determine an expected vehicle speed; and determining the first pattern and the first time window based on the expected vehicle speed.

[0075] In some embodiments, the method involves anonymizing the captured images.

[0076] Many further features and combinations thereof concerning embodiments described herein will appear to those skilled in the art following a reading of the instant disclosure.

DESCRIPTION OF THE FIGURES

[0077] In the figures,

[0078] FIG. 1 is a network diagram of a system for vehicle occupancy detection, in accordance with example embodiments;

[0079] FIG. 2 is an example schematic diagram of a system for vehicle occupancy detection, in accordance with example embodiments;

[0080] FIG. 3 is another example schematic diagram of a system for vehicle occupancy detection, in accordance with example embodiments;

[0081] FIG. 4 is a flowchart of an example method for configuring a system for vehicle occupancy detection, in accordance with example embodiments;

[0082] FIG. 5A, is a further example schematic diagram of a system for vehicle occupancy detection, in accordance with example embodiments;

[0083] FIG. 5B is a perspective view of an example system for vehicle occupancy detection, in accordance with example embodiments;

[0084] FIG. 5C is a perspective view of the system of FIG. 5B including another imaging device, in accordance with example embodiments;

[0085] FIG. 5D shows a photograph of the system 514 of FIG. 5B, in accordance with example embodiments;

[0086] FIG. 6A is a top view of an example system for vehicle occupancy detection with a vehicle in a first position, in accordance with example embodiments;

[0087] FIG. 6B is a top view of an example system for vehicle occupancy detection with a vehicle in a second position, in accordance with example embodiments;

[0088] FIG. 6C is a top view of an example system for vehicle occupancy detection with a vehicle in a third position, in accordance with example embodiments;

[0089] FIG. 6D is a rear view of the example system for vehicle occupancy detection of FIG. 6A, in accordance with example embodiments;

[0090] FIG. 7 is a perspective view of a further example system for vehicle occupancy detection, in accordance with example embodiments;

[0091] FIG. 8 is a flowchart of an example method for vehicle occupancy detection, in accordance with example embodiments;

[0092] FIG. 9 is a flowchart of an example method to complete step 812 of FIG. 8 for detecting occupants in images, in accordance with example embodiments;

[0093] FIGS. 10A to 10G are each an image of a vehicle with various regions of interest shown, in accordance with example embodiments;

[0094] FIG. 11 is an example report interface for viewing vehicle occupancy, according to example embodiments;

[0095] FIG. 12 is an architecture diagram of the system of FIG. 1, according to example embodiments;

[0096] FIG. 13 is an example reviewing interface for vehicle occupancy detection validation, according to example embodiments; and

[0097] FIG. 14 is an example schematic diagram of a computing device, in accordance with an embodiment.

[0098] FIG. 15A is an example diagram of vehicle weaving.

[0099] FIG. 15B is an example schematic diagram of a vehicle detector sensor for detecting vehicle weaving, in accordance with an embodiment.

[00100] FIG. 16 is an example schematic diagram of a cloud server and road side units in accordance with an embodiment.

[00101] FIG. 17 is an example schematic diagram of a cloud server and road side units in accordance with an embodiment.

[00102] FIG. 18 is a network diagram of another system for vehicle occupancy detection, in accordance with other example embodiments.

[00103] FIG. 19 illustrates an example vehicle next to an example camera’s field of view, according to some embodiments.

[00104] FIG. 20A illustrates an example vehicle passing through an example camera’s field of view at a first moment, according to some embodiments.

[00105] FIG. 20B illustrates an example vehicle passing through an example camera’s field of view at a second moment, according to some embodiments.

[00106] FIG. 21 is a flowchart of another example method for vehicle occupancy detection, in accordance with example embodiments.

[00107] FIG. 22 is an architecture diagram of the system of FIG. 18, according to example embodiments.

[00108] FIG. 23 illustrates an image that has undergone an anonymizing, privacy-preserving process, according to some embodiments.

DETAILED DESCRIPTION

[00109] Embodiments described herein provide computer vision based Vehicle Occupancy Detection (VOD) systems and methods. The described systems can include a road-side unit (iRSU) with an imaging device that captures successive images of vehicles moving along a lane on a highway or road. The road-side unit is configured to capture a plurality of images of one or more vehicles travelling along a lane of a road. The road side unit can capture images from a fixed perspective. The successive images are, for each vehicle, analyzed to determine a likely vehicle occupancy. The system may achieve high accuracy, in some instances above 95% performance, despite processing images received from a fixed perspective imaging device on the side of the road. Because multiple images are captured from a fixed perspective, and because the images are captured from a roadside position, the images are consistent between installations, which may allow for more robust training and portable occupancy detection approaches that are adaptable to a variety of operating environments. The use of the multiple images being captured from a fixed roadside position also allows the system to generate a robust estimation of the vehicle occupancy without the need for expensive or overhead systems that are difficult to install. The roadside system may require fewer parts, have lower maintenance costs, and be easier to deploy.

[00110] Furthermore, and as a result of the unit being roadside, the system may operate with enhanced privacy without the need to transmit data remotely. In some embodiments, the described system may permit rapid set-up or installation, e.g., in less than 1 hour per site, without the need for further post-installation site-specific training or tuning. The described system may further be a stationary system, or the system can be a mobile system capable of being reinstalled or configured for various sites.

[00111] In example embodiments, the system has a roadside unit with a light detection and ranging (LIDAR) unit. In example embodiments, the roadside unit can determine whether a vehicle is in an upstream portion of the lane, and can trigger an infrared light emitter to illuminate vehicle occupants through a region of interest of the vehicle (e.g., a windshield). The infrared light emitter may overcome, at least to some degree, window tint and sun-related over or under exposure. For example, a difficult lighting condition tends to arise in the daytime due to interference from the sun, washing out the images. The system is capable of adjusting the imaging device parameters (e.g., number of pictures taken for each vehicle, camera exposure time, camera frame rate) and the infrared light emitter parameters (e.g., illumination intensity) based on measured ambient conditions (e.g., measured with ambient environmental sensors attached to the system, or retrieved from a network) in order to maximize the quality of the image acquisition, which in turn leads to higher overall accuracy. In example embodiments, the roadside unit can detect a vehicle and trigger one or more adjustments of the imaging device parameters.
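By way of a non-limiting illustration, the following Python sketch shows one possible mapping from a measured ambient condition to capture parameters; the specific thresholds, exposure times, frame rates and illumination levels are assumptions of this sketch, chosen only to exhibit the kind of adjustment described above.

    def adjust_capture_parameters(ambient_lux, is_daytime):
        """Map measured ambient light to capture settings (illustrative
        values only; none of these numbers appear in the disclosure)."""
        if is_daytime and ambient_lux > 50_000:
            # Strong sun tends to wash out images: shorten exposure and
            # raise infrared illumination intensity to compete with it.
            return {"exposure_us": 80, "frame_rate_fps": 90, "ir_power_pct": 100}
        if ambient_lux > 1_000:
            return {"exposure_us": 150, "frame_rate_fps": 90, "ir_power_pct": 70}
        # Night: a longer exposure and lower illumination power suffice.
        return {"exposure_us": 300, "frame_rate_fps": 60, "ir_power_pct": 40}

    print(adjust_capture_parameters(ambient_lux=60_000, is_daytime=True))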

[00112] The system further comprises an infrared camera to capture images of the vehicle (and vehicle occupants), the captured images capturing at least some of the light emitted by the infrared light emitter and reflected by the vehicle or vehicle occupants. Optionally, the system can include a second imaging device (and corresponding infrared illumination source) to capture vehicle occupancy in a further lane of the road or highway (e.g., to detect occupancy in a second lane of a highway). Optionally, the system may include an imaging device for capturing vehicle license plates (LPR).

[00113] In some embodiments, the light emitters and/or imaging devices can continuously emit light and capture images, respectively (for example in a patterned manner) and the system may be capable of detecting vehicles within the continuous series of images. In some embodiments, a roadside light detection and ranging (LIDAR) unit can be used to detect that a vehicle is incoming and adjust parameters of the light emitters and/or imaging device (e.g., to provide higher illumination power or capture images more frequently).

[00114] In example embodiments, a processor running VOD software determines a front and a rear occupancy of the vehicle from the images, and securely transfers images and metadata to a tolling system. In example embodiments, the processor may determine or identify violating vehicles’ license plates, and further transmit the identified license plates to the tolling system. The system can have rules for defining parameters for violations, for example.

[00115] The proposed system may be able to operate, unattended, under all weather conditions. Operation of the system may be transparent to road users, as the roadside unit does not visually distract the driver, or impede a line of sight needed by the driver to navigate traffic.

[00116] To maintain privacy, vehicle or occupant data can be filtered and removed so that it is not retained by the system or transmitted to a cloud-based back-end service: only data related to suspected violations can be securely uploaded to the tolling systems before local deletion. License plate recognition is implemented by software of the system, with installation, maintenance and consistent configuration at every site.

[00117] The system may operate with a single imaging device adjacent to the road, as compared to a plurality of imaging devices proximate to the road, or overhead imaging devices and so forth. The single imaging device system requires taking successive high-speed images as the vehicle passes the field of view of the imaging device. Subsequently, the images are analyzed to determine both front and rear occupancy. This solution has the advantage of simplifying the system, introducing less redundancy, which in turn improves accuracy and significantly reduces the overall cost of the system.

[00118] The described system may be relatively simple to adjust or implement in a variety of environments as a result of the relatively fixed geometry associated with the single configuration. In training the system prior to installation, a plurality of images are captured with the single imaging device in a calibration unit from multiple sites, each site experiencing a variety of weather phenomena (e.g., rain, snow, etc.), and capturing a variety of vehicle occupant behaviors (e.g., turned head, wearing face coverings, etc.). Training the system can comprise labelling the training images, and subsequently training a machine learning data model with the captured training images. The machine learning model learns to detect vehicle occupancy based on the training images. As the training images are captured in an environment which is replicated during installation, namely the single image camera geometry, and as the single camera implementation requires far fewer variables to adjust relative to a multi-camera implementation (e.g., only the pitch and yaw of a single camera to adjust, which does not require complicated tuning to permit inter-camera image integration), the system may replicate the trained machine learning model on each system operating in a new or different site, without the need for extensive retraining or tuning of the machine learning model. Alternatively stated, the trained machine learning model may be robust and portable to various sites, in part as a result of the consistency of the sites (e.g., all roads have lanes), and in part as a result of the complexity-reducing configuration of the system (e.g., the system uses a single imaging device which captures multiple images).
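A minimal sketch of such a pre-installation training step is shown below, assuming a PyTorch convolutional classifier over labelled infrared crops; the framework, architecture, crop size and hyperparameters are all assumptions of this sketch, and random tensors stand in for the labelled multi-site training images described above.

    import torch
    import torch.nn as nn
    from torch.utils.data import DataLoader, TensorDataset

    # Stand-ins for labelled training crops gathered from multiple sites:
    # 64x64 single-channel (infrared) images; labels = visible occupants
    # (0..3 here, an assumed label range).
    images = torch.randn(256, 1, 64, 64)
    labels = torch.randint(0, 4, (256,))
    loader = DataLoader(TensorDataset(images, labels), batch_size=32, shuffle=True)

    # A deliberately small classifier, for illustration only.
    model = nn.Sequential(
        nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Flatten(), nn.Linear(32 * 16 * 16, 4),
    )
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    for epoch in range(3):  # short run for illustration
        for x, y in loader:
            opt.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            opt.step()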

[00119] In an illustrative example, the training process may determine recommended configurations of the system, including a height that is relative to the road, relative distance (horizontal and vertical) between the system components (e.g., any of the imaging device, illumination device, and LIDAR), and a pitch, yaw and roll of the system components. In example embodiments, the recommended configuration may include an error parameter, an amount to which the described parameters may be incorrect while maintaining an adequate performance level. For example, the system may permit geometry variations of up to 15 cm for the height and x-y coordinates of the system components, and up to 10 degrees of variation in the pitch, yaw and roll of the system components.

[00120] Continuing the example, once on site, the system receives different measurements: (1) a distance to the target lane (e.g., a distance from the intended mounting location of the unit to the centerline of the lane), (2) a width of the target lane, and (3) a height reflective of the difference in height of the ground between the unit mounting location and the target lane (e.g., in case there is a slope from the target lane to the mounting location).

[00121] During installation, three measurements are entered into a user interface to the system, and a processor of the system computes a required geometry of each component of the system to implement the trained machine learning model with an acceptable degree of accuracy. Because the geometry for any given site is typically a small variation relative to the training geometries (e.g., most road lane widths and heights are relatively fixed), the computed geometry generally falls within the error tolerances described above. As a final step, the imaging device parameters (e.g., a camera zoom, gain, etc.) can be adjusted to the recommended level provided by the automated program and locked into position, and the system can then begin operation.
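By way of a non-limiting illustration, the following Python sketch derives an aiming geometry from the three site measurements of paragraph [00120]; the assumed mounting height, seated occupant height, and the particular formulas are assumptions of this sketch, as the disclosure states only that a processor computes the required geometry.

    import math

    def component_geometry(dist_to_centerline_m, lane_width_m, height_diff_m,
                           mount_height_m=2.0, occupant_height_m=1.2):
        """Derive an aiming geometry from the three site measurements.
        mount_height_m and occupant_height_m are illustrative assumptions."""
        # Vertical offset from the camera to the expected occupant height,
        # accounting for any slope between the mount and the lane.
        dz = (height_diff_m + occupant_height_m) - mount_height_m
        pitch_deg = math.degrees(math.atan2(dz, dist_to_centerline_m))
        # Horizontal field of view needed to cover the full lane width
        # at the centerline distance.
        hfov_deg = 2 * math.degrees(
            math.atan2(lane_width_m / 2.0, dist_to_centerline_m))
        return {"pitch_deg": round(pitch_deg, 2),
                "min_hfov_deg": round(hfov_deg, 2)}

    # Example site: 4 m to the lane centerline, 3.6 m lane, 0.3 m rise.
    print(component_geometry(4.0, 3.6, 0.3))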

[00122] In a diagnostic mode, live images are displayed on a diagnostic computer (laptop) that can be securely connected to the system. If no further adjustment is necessary, the system can begin its operation.

[00123] The system may be able to support the capture of high-quality images in rapid succession from closely following vehicles travelling in excess of 200 km/h. The system uses efficient deep neural network software to achieve accuracy and handle challenging conditions such as poor weather (e.g., heavy rain, heavy snow, fog) and difficult situations (e.g., children in car seats, heavy tint, etc.).

[00124] In example embodiments, the system may be installed on either a roadside post or gantry-mounted. The system may also be deployed in a secure mobile trailer, or other mobile unit, that may be quickly moved from one location to the next for rapid ad hoc applications. Each component of the system could also be separately mounted on the mobile unit, providing the system high installation flexibility.

[00125] In example embodiments, the system can be updated remotely by a control system to enable further training and tuning of the machine learning model. In example embodiments including multiple systems (e.g., multiple roadside units), each system may be updated by the control system.

[00126] Reference will now be made to the figures.

[00127] FIG. 1 is a network diagram of a system 100 for vehicle occupancy detection, in accordance with example embodiments.

[00128] The system 100 includes a computing device 102, a light emitter(s) 116, imaging device(s) 118 for detecting vehicle occupancy in a first lane on a road and, optionally, second light emitter(s) 124, an ambient condition sensor 130, and second imaging device(s) 126. In some embodiments, system 100 has a vehicle detector(s) 114 to detect vehicle(s). In some embodiments, the second light emitter(s) 124, and the second imaging device(s) 126 may be used to detect vehicle occupancy in a second lane, or to detect a vehicle license plate. The system 100 may require fewer parts and lower maintenance costs as compared to systems which use multiple imaging devices to determine vehicle occupancy in a single lane.

[00129] Vehicle detector(s) 114 can be various devices which are capable of detecting the presence of a vehicle at various distances. For example, the vehicle detector(s) 114 may be a laser-based system for detecting vehicles, such as a Light Detection and Ranging system (LiDAR), which both emits light and detects the presence of return light in response to the emitted light reflecting off of a vehicle. According to some embodiments, for example, the vehicle detector(s) 114 may be a radio wave based system, such as Radio Detection and Ranging (RADAR), or a mechanical, micro-electromechanical system (MEMS), solid-state or hybrid LiDAR unit.

[00130] Vehicle detector(s) 114 may include multiple instances of devices capable of detecting the presence of the vehicle. For example, the vehicle detector(s) 114 may include two separate LiDAR units, which allows for greater robustness of the system as there is more return light to be analyzed.

[00131] Vehicle detector(s) 114 may be configured to detect one or more vehicles at various distances. For example, a LiDAR vehicle detector(s) 114 can be configured to ignore any readings representative of objects more than 10 m away. In some embodiments, for example, where the vehicle detector(s) 114 include multiple devices, each of the multiple devices can be configured to detect vehicles at different distances. Alternatively, the multiple devices may be used redundantly to detect vehicles at a single distance from the vehicle detector(s) 114.

[00132] The vehicle detector(s) 114 may be modified or augmented to withstand ambient conditions. For example, the vehicle detector(s) 114 may be weatherproofed with various materials, such as plastic covering or coatings, to protect against rain, snow, dust, insects and so forth.

[00133] The light emitter(s) 116 can include various devices capable of emitting specific ranges of light at specific frequencies (i.e., patterns) for specific durations. For example, the light emitter(s) 116 can be a strobe light configured to emit a white light at a specific frequency based on strobe light cool down limitations.

[00134] In an illustrative example, light emitter(s) 116 includes an infrared light-emitting diode (LED) configured to emit infrared light in the range of 750 nm to 1300 nm, or as another example, a range of 850 nm +/- 10 nm. Advantageously, infrared light emitter(s) 116 may be able to illuminate the inside of a vehicle, overcoming window tint and sunlight exposure.

[00135] Continuing the example, the infrared LED light emitter(s) 116 may be able to overcome cool down limitations of strobe lights, and burst infrared light at a rate of 120 pulses a second. Various frequencies of pulsing are contemplated. In some embodiments, for example, the infrared LED light emitter(s) 116 may be configured to, or remotely controlled to, dynamically change the pattern of light emission. For example, the infrared LED light emitter(s) 116 may pulse at different frequencies in response to being controlled, based on the detected speed of a vehicle (e.g., light may be emitted faster in response to a higher vehicle speed being detected).
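A minimal sketch of such speed-dependent pulse control is given below; the reference speed, scaling rule and hardware ceiling are assumptions of this sketch, the disclosure stating only that light may be emitted faster for faster vehicles.

    def pulse_rate_for_speed(speed_kmh, base_rate_hz=120):
        """Scale the emitter pulse rate with detected vehicle speed so a
        faster vehicle is still imaged at enough positions across the
        field of view. Scaling rule and 240 Hz ceiling are assumptions."""
        reference_kmh = 100.0
        rate = base_rate_hz * max(1.0, speed_kmh / reference_kmh)
        return min(int(rate), 240)

    print(pulse_rate_for_speed(80))   # -> 120 (base rate)
    print(pulse_rate_for_speed(200))  # -> 240 (clamped to the ceiling)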

[00136] Varying types and amounts of light emitter(s) 116 may be used in system 100. For example, in the shown embodiment in FIG. 2, the light emitter(s) 116 includes the strobe light emitter(s) 116-1. In the embodiment shown in FIG. 3, the light emitter(s) 116 includes the first and second occupant light emitter(s) 116-1.

[00137] Imaging device(s) 118 (hereinafter referred to as passenger imaging devices) can include any type of imaging device capable of capturing the light emitted by the light emitter(s) 116. For example, where the light emitter(s) 116 is an infrared light emitter, the imaging device(s) 118 is an infrared imaging device. In some embodiments, for example, the imaging device(s) 118 may be adapted for the specific frequency of light being emitted by light emitter(s) 116. The imaging device(s) 118 may be a high speed imaging device, capable of taking successive images within a short period of time.

[00138] The imaging device(s) 118 may be configured (as described herein) to capture one or more images of one or more vehicles within their field of view, when the vehicles are detected by the vehicle detector(s) 114. The imaging device(s) 118 may be configured to capture multiple images (i.e., a plurality of images) upon receiving a control command to start capturing images. For example, the imaging device(s) 118 may, in response to receiving a control command, capture 5 successive images at 90 frames per second (FPS).

[00139] The imaging device(s) 118 are positioned relative to the road to capture images of at least the side of the vehicle. In example embodiments, the imaging device(s) 118 are positioned relative to the road to capture images of various combinations of the front of the vehicle, the side of the vehicle, the rear of the vehicle, and so forth. For example, the imaging device(s) 118 may capture one image of the front and side of the vehicle, three images of the side of the vehicle, and one image of the rear and side of the vehicle.

[00140] Optionally, the system 100 may include the second light emitter(s) 124, and the second imaging device(s) 126, similar to light emitter(s) 116 and imaging device(s) 118. The second light emitter(s) 124, and the second imaging device(s) 126 may be positioned relative to the road similar to the light emitter(s) 116 and the imaging device(s) 118 but directed to capture one or more images of the rear of the vehicle to include a license plate of the detected vehicle. In example embodiments, the second light emitter(s) 124, and the second imaging device(s) 126 are similar to light emitter(s) 116 and imaging device(s) 118, positioned relative to the road to capture images of the side of a further vehicle travelling in a second lane.

[00141] Optionally, the system 100 may include the ambient condition sensor 130, which detects ambient conditions such as sunlight intensity, temperature, moisture, humidity, rainfall, snowfall, and so on. In example embodiments, the ambient condition sensor 130 includes a variety of sensors for various ambient conditions.

[00142] Referring now to computing device 102, the computing device 102 may be configured to communicate command signals to the vehicle detector(s) 114, the light emitter(s) 116, the imaging device(s) 118, the second light emitter(s) 124, second imaging device(s) 126, and the ambient condition sensor 130, and to receive captured images from the imaging device(s) 118 and second imaging device(s) 126 and detected conditions from the ambient condition sensor 130. The computing device 102 may be configured to operate with the Linux operating system.

[00143] In example embodiments, the computing device 102 is in a housing (not shown), and is also used to transmit power to one or more of the vehicle detector(s) 114, the light emitter(s) 116, the imaging device(s) 118, the second light emitter(s) 124, and the second imaging device(s) 126. For example, the computing device 102 may power the imaging device(s) 118.

[00144] The computing device 102 may include various combinations of a vehicle detector controller 104, a light emitter controller 106, an imaging device controller 108, an occupant detector 110, a database(s) 112, and, optionally, a second imaging device controller 122.

[00145] The vehicle detector controller 104 can be configured to control the vehicle detector(s) 114 through a series of command signals. The command signals are interpretable by the vehicle detector(s) 114, and can include instructions to control various operating features of the vehicle detector(s) 114. For example, the command signals may adjust a threshold indicative of detection of a vehicle (e.g., certainty rate must be over 90%) used by the vehicle detector(s) 114 to determine whether a vehicle is detected. The command signals may control the distance to which the vehicle detector(s) 114 operate (e.g., vehicles that are more than 10 m away from the vehicle detector(s) 114 will be ignored), the frequency and timing of operation of the vehicle detector(s) 114 (e.g., pulse light at a first frequency to detect a vehicle), and so forth.

[00146] In a non-limiting example embodiment, the vehicle detector controller 104 may transmit configuration characteristics to the vehicle detector(s) 114, allowing an operator to change the operation of the vehicle detector(s) 114 through the use of the computing device 102. For example, where the vehicle detector(s) 114 is mounted at a first height, the vehicle detector controller 104 may transmit a calibration parameter to adjust detection of vehicles by the vehicle detector(s) 114 based on the first height. Continuing the example, the vehicle detector(s) 114 may be configured to expect vehicles at the detection distance to be near or close to the top of a field of view of the vehicle detector(s) 114.

[00147] In some embodiments, for example, the vehicle detector controller 104 may transmit a command signal to the vehicle detector(s) 114 to detect the speed of a vehicle. The vehicle detector(s) 114 may, in response, provide the vehicle detector controller 104 with two detections of the same car at different instances in time, allowing for the speed to be interpolated. In example embodiments, the vehicle detector(s) 114 is continuously monitoring detected vehicles in its field of view, and directly computes the speed of the detected vehicles and relays the same to the vehicle detector controller 104.
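By way of a non-limiting illustration, interpolating the speed from two detections of the same vehicle could be implemented as follows; the positional inputs are assumed to be along-road distances reported by the detector.

    def interpolate_speed(pos1_m, t1_s, pos2_m, t2_s):
        """Estimate vehicle speed from two detections of the same vehicle
        at different instants, as described above. Positions are assumed
        to be distances along the road, in metres."""
        if t2_s == t1_s:
            raise ValueError("detections must be at different instants")
        speed_ms = abs(pos2_m - pos1_m) / (t2_s - t1_s)
        return speed_ms * 3.6  # convert m/s to km/h

    # Vehicle observed 10 m apart, 0.36 s between detections -> 100 km/h.
    print(round(interpolate_speed(0.0, 0.00, 10.0, 0.36), 1))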

[00148] The light emitter controller 106 is configured to control the light emitter(s) 116 through a series of command signals. The command signals are interpretable by the light emitter(s) 116, and can include command signals to control the type of light emitted (e.g., emitted light should be 800 nm), command signals to control the power used by the light emitter(s) 116 (e.g., increase or decrease the intensity of the emitted light), the frequency and timing of operation of the light emitter(s) 116 (e.g., pulse light at a first frequency, for a first duration, etc.), and so on.

[00149] In non-limiting example embodiments, the light emitter controller 106 may transmit configuration characteristics to the light emitter(s) 116, allowing an operator to change the operation of the light emitter(s) 116 through the use of the computing device 102. For example, where the light emitter(s) 116 is capable of adjusting a field of view (e.g., such as being able to rotate around a first axis), the light emitter controller 106 may transmit a command signal to adjust the field of view (e.g., a command signal to swivel the light emitter(s) 116) of the light emitter(s) 116.

[00150] The imaging device controller 108 is configured to control how the imaging device(s) 118 capture one or more images through one or more command signals. The command signals are interpretable by the imaging device(s) 118, and can include command signals to control a frequency of capturing images (e.g., capture two images per second) or the timing of operation of the imaging device(s) 118 (e.g., capture images at this time, where the detected vehicle is expected to be within the imaging device(s) 118 field of view, for this duration), alter or adjust the operating focal distance (e.g., focus is directed towards the area between lanes within a road), the exposure settings (e.g., the aperture, ISO and shutter speed settings), and so forth.

[00151] Each of the vehicle detector controller 104, the light emitter controller 106, and the imaging device controller 108 may be configured to transmit command signals to the respective devices dynamically (e.g., in real time), at intervals, upon configuration, or some combination of the aforementioned. For example, the vehicle detector controller 104 may transmit command signals to the vehicle detector(s) 114 upon powering on of the system 100, and subsequently transmit command signals to adjust a distance detection dynamically in response to changing ambient conditions.

[00152] The occupant detector 110 is configured to receive the plurality of images from the imaging device(s) 118 (and possibly second imaging device 126) and determine a number of occupants in the detected vehicle (alternatively referred to as a vehicle occupancy). The occupant detector 110 may be a machine learning model trained to determine the vehicle occupancy based on roadside images.

[00153] The occupant detector 110 may further output the determined vehicle occupancy upon determination. For example, the occupant detector 110 may output the determined vehicle occupancy to a tolling system, which tolls individuals based on the number of occupants in a car. In a non-limiting embodiment, the occupant detector 110 may output the determined vehicle occupancy and the determined lane to a vehicle tolling system, which tolls individuals based on whether a vehicle occupancy complies with the occupancy requirements for the specific lane.
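A minimal sketch of the lane-compliance check performed by such a tolling system is shown below; the lane identifiers and minimum-occupancy rules are assumptions of this sketch, as the actual rules would be supplied by the tolling system.

    # Assumed lane rules: minimum occupants required per lane type.
    LANE_MINIMUMS = {"hov2": 2, "hov3": 3, "general": 1}

    def check_compliance(occupancy, lane_id):
        """Decide whether a determined occupancy meets the minimum for
        the lane the vehicle was detected in. Unknown lanes default to
        a minimum of one occupant (an assumption of this sketch)."""
        minimum = LANE_MINIMUMS.get(lane_id, 1)
        return occupancy >= minimum

    # One visible occupant in an HOV-2 lane -> potential violation.
    print(check_compliance(occupancy=1, lane_id="hov2"))  # -> False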

[00154] The occupant detector 110 may also coordinate or control the vehicle detector controller 104, the light emitter controller 106, and the imaging device controller 108. For example, the occupant detector 110 may make determinations as to the relative offsets between the operation of light emitter(s) 116 and the imaging device(s) 118, and relay the required offsets to the light emitter controller 106 and the imaging device controller 108, respectively.

[00155] According to example embodiments, the vehicle detector controller 104, the light emitter controller 106, the imaging device controller 108 and the second imaging device controller 122 may be located within the respective unit being controlled, and not within the computing device 102. For example, the vehicle detector controller 104 may be integrated within the vehicle detector(s) 114 and pre-configured with operational settings.

[00156] Continuing the example, some or all of the vehicle detector controller 104, the light emitter controller 106, the imaging device controller 108 and the second imaging device controller 122 may be interconnected with one another and relay command signals between each other. For example, the vehicle detector controller 104 which is integrated within the vehicle detector(s) 114 may receive command signals from the occupant detector 110, and relay the command signals to a light emitter controller 106 within a light emitter(s) 116 directly.

[00157] Optionally, the computing device 102 may include a sensor health monitor (not shown), which monitors the relative health of the system's sensors. For example, the sensor health monitor may detect a decrease in performance of the vehicle detector(s) 114 based on usage, and so forth.

[00158] In response to determining sensor deterioration, the sensor health monitor may be configured to provide a calibration parameter to any one of the components of the system 100. For example, in response to determining light emitter(s) 116 deterioration, the sensor health monitor may instruct the light emitter controller 106, occupant detector 110, or imaging device controller 108 to adjust the operation of the respective components.

[00159] The computing device 102 and the vehicle detector(s) 114, light emitter(s) 116, imaging device(s) 118, second light emitter(s) 124, the ambient condition sensor 130, and the second imaging device(s) 126 are interconnected (e.g., transmit or receive command signals) by way of the communication network 120. Communication network 120 may include a packet-switched network portion, a circuit-switched network portion, or a combination thereof. Communication network 120 may include wired links, wireless links such as radio-frequency links or satellite links, or a combination thereof. Communication network 120 may include wired access points and wireless access points. Portions of communication network 120 could be, for example, an IPv4, IPv6, X.25, IPX or similar network. Portions of network 120 could be, for example, a GSM, GPRS, 3G, LTE or similar wireless network. Communication network 120 may include or be connected to the Internet. When communication network 120 is a public network such as the public Internet, it may be secured as a virtual private network.

[00160] In embodiments where the communication network 120 includes wired links, the wired links, similar to the vehicle detector(s) 114, may be weather-proofed with coatings or coverings.

[00161] The system 100 may be a mobile system for vehicle detection. For example, the system 100 may be capable of being disassembled and moved to another location along a road. In some embodiments, various components of the system 100 may be relocated. Alternatively, the system 100 may be stationary, and fixed to a fixture.

[00162] The system 100 may be configured to receive (via query or push mechanism) one or more updated operating parameters via the communication network 120. For example, the system 100 may receive new parameters for calibrating the occupant detector 110, or the imaging device controller 108, and so forth.

[00163] FIG. 18 is a network diagram of another system 1800 for vehicle occupancy detection, in accordance with other example embodiments.

[00164] The system 1800 includes a computing device 1802, a light emitter(s) 116, imaging device(s) 118 for detecting vehicle occupancy in a first lane on a road and, optionally, second light emitter(s) 124, an ambient condition sensor 130, and second imaging device(s) 126. The second light emitter(s) 124, and the second imaging device(s) 126 may be used to detect vehicle occupancy in a second lane, or to detect a vehicle license plate. The computing device 1802 and light emitter(s) 116, imaging device(s) 118, second light emitter(s) 124, the ambient condition sensor 130, and the second imaging device(s) 126 are interconnected (e.g., transmit or receive command signals) by way of the communication network 120. The system 1800 may require fewer parts and lower maintenance costs as compared to systems which use multiple imaging devices to determine vehicle occupancy in a single lane. The system 1800 is composed of the same components as system 100 except that it does not include vehicle detector(s) 114.

[00165] Referring now to computing device 1802, the computing device 1802 may be configured to communicate command signals to the light emitter(s) 116, the imaging device(s) 118, the second light emitter(s) 124, second imaging device(s) 126, and the ambient condition sensor 130, and to receive captured images from the imaging device(s) 118 and second imaging device(s) 126 and detected conditions from the ambient condition sensor 130. The computing device 1802 may be configured to operate with the Linux operating system. Computing device 1802 functions substantially similarly to computing device 102, and it further comprises vehicle image detector 1804.

[00166] The vehicle image detector 1804 may be configured to process images received from the imaging device(s) 118 to detect vehicles in the images after image collection. In some embodiments, the light emitter(s) 116 and the imaging device(s) 118 can be configured to continuously emit light and capture images (or do so in continuous patterns). In such embodiments, vehicle detection may be carried out by a different component. In some embodiments, vehicle detection may be carried out by computing device 1802 using vehicle image detector 1804.

[00167] In some exemplary embodiments, vehicle detection can be carried out by the vehicle image detector 1804 consuming inputs (queued-up images) from the imaging device(s) 118 to determine which set of n images in the queue represent the best images for determining a most likely number of occupants. In such embodiments, the imaging device(s) 118 and the light emitter(s) 116 operate continuously, capturing images successively at a high frame rate (i.e., they do not wait for a triggering command from a detector or the computing device 1802). The vehicle image detector 1804 then runs retrospectively on the images that are queued and looks for n images where the vehicle is detected in the most favorable locations in the horizontal field of view of the camera.

[00168] These most favorable locations can be interpreted as the optimal locations that offer the best visibility of occupants, in terms of consistent lighting from the illuminators and maximum perspective change, minimizing obstruction of passengers by other passengers and by window columns.

[00169] FIG. 19 illustrates an example vehicle next to an example camera’s field of view, according to some embodiments.

[00170] For example, in some embodiments, a camera may have a field of view 1900 with a horizontal field of view 1902 of H = 7.5m. Such a camera may also include two illuminators placed so that their illumination cones 1904A and 1904B project two circles of radius H/8, symmetrically off-center by H/16. The illumination cones overlap at illumination region 1906 to generate a region of higher illumination.

[00171] A vehicle 1908 may have a width 1910 of 2*H/3 = 5 m and can approximately be divided into four equal portions 1912 comprising the nose to front window portion 1912A, the front window length portion 1912B, the rear window length portion 1912C, and the end of rear window to end of vehicle portion 1912D.

[00172] FIG. 20A and FIG. 20B illustrate an example vehicle passing through an example camera’s field of view at different moments corresponding to an Image 1 and an Image n respectively, according to some embodiments.

[00173] In FIG. 20A, the center of the front window of vehicle 1908 is in the middle of illumination cone 1904A. In FIG. 20B, the center of the rear window of vehicle 1908 is in the middle of illumination cone 1904B. The in-between images (i.e., Images 2 to n-1) are spaced linearly across the field of view. This method can provide a series of images wherein, in all images, at least one window is illuminated. In some images, one window will fall in the illumination region 1906, which in this configuration offers the best illumination. In this way, all windows can be illuminated well and the vehicle has travelled the maximum distance while still illuminated, creating good conditions for changes of perspective and occupancy counting.

[00174] In these examples, the imaging device(s) 118 can take successive shots at a fast frame rate, and retrospectively go back in the queue of images, run a vehicle detection algorithm using vehicle image detector 1804, find, for example, the bounding box of the vehicle, and locate all the images in which the right of the bounding box (e.g., the nose of the vehicle) is between 0.625 and 1.042 from the left of the image once normalized for the image coordinates. Out of these N images, n uniformly distant images can be selected. With a fast frame rate, N is always greater than or equal to n. With n=5, the front window may be situated within the illumination region 1906 (i.e., a good illumination zone according to this exemplary configuration) in Image 2, and the rear window within illumination region 1906 in Image 4.
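By way of a non-limiting illustration, the retrospective selection described above could be sketched in Python as follows, assuming each queued frame is paired with the normalized right edge of its detected vehicle bounding box; the data structure is an assumption of this sketch.

    def select_images(detections, n=5, lo=0.625, hi=1.042):
        """Pick n roughly uniformly spaced frames from the queue, keeping
        only frames whose normalized bounding-box right edge falls in
        [lo, hi], per the window described above.

        `detections` is a list of (frame, bbox_right_norm) pairs in
        capture order (an assumed structure)."""
        eligible = [f for f, right in detections if lo <= right <= hi]
        N = len(eligible)
        if N < n:
            return eligible  # fast frame rates should ensure N >= n
        # Indices spread uniformly across the eligible range.
        idx = [round(i * (N - 1) / (n - 1)) for i in range(n)]
        return [eligible[i] for i in idx]

    # Frames tagged with the normalized right edge of the vehicle bbox.
    queue = [(f, 0.55 + 0.06 * f) for f in range(10)]
    print(select_images(queue))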

[00175] In other embodiments, different vehicle positions can be used to identify Image 1 and Image n respectively. In some embodiments different calculations may be used for different vehicle types (e.g., dependent on window position). In some embodiments the image may be normalized. In some embodiments, the image may be processed with raw length values. In some embodiments, vehicle image detector 1804 may be configured to determine one or more of the best (e.g., passenger locations best illuminated) images and subsequently provide those to the occupant detector.

[00176] In some embodiments, occupant detector 110 and vehicle image detector 1804 may communicate back and forth to ascertain the number of occupants in the vehicle using the fewest (or otherwise computationally expedient) number of images processed by occupant detector 110. For example, vehicle image detector 1804 may provide occupant detector 110 with the “best” image of the series. If occupant detector 110 can determine the number of occupants in the vehicle, then the system continues through the rest of its process. If, however, a probability threshold is not achieved from the first image, occupant detector 110 may request additional captured images from vehicle image detector 1804 until the probability threshold is satisfied or all the captured images have been provided.
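A minimal sketch of this exchange is given below, assuming the occupant detector exposes a callback returning a count and a probability; the callback shape and the 0.9 threshold are assumptions of this sketch.

    def estimate_occupancy(ranked_images, classify, threshold=0.9):
        """Query the detector on the best-ranked image first and fall
        back to further images only while confidence stays below the
        threshold. `classify` returns (count, probability)."""
        best_count, best_prob = None, 0.0
        for image in ranked_images:  # best image first
            count, prob = classify(image)
            if prob > best_prob:
                best_count, best_prob = count, prob
            if best_prob >= threshold:
                break  # confident enough; skip the remaining images
        return best_count, best_prob

    # Toy detector: confidence improves with each extra image consumed.
    fake = iter([(2, 0.6), (2, 0.85), (2, 0.93)])
    print(estimate_occupancy([1, 2, 3], lambda _: next(fake)))  # (2, 0.93)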

[00177] Taking successive shots at a fast frame rate may tax light emitter(s) 116 since they need to flash in sync with the camera (imaging device 118) shutter. Flashing at a lower power and increasing the power when a vehicle is detected in the image can be used to reduce the load/fatigue. In some embodiments, the system includes a near real-time vehicle detector that detects the vehicle before the nose of the vehicle reaches a normalized position. In such embodiments, the light emitter(s) 116 can be commanded to increase power so as to take maximally illuminated shots of the vehicle before the vehicle gets to the ideal position. Once the vehicle is no longer detected (meaning it has left the field of view), the illumination power can be reduced to the low level. The low illumination power is set at a level sufficient for general vehicle detection, while the high level must be sufficient to penetrate the windows and illuminate the passengers sitting in the car.
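By way of a non-limiting illustration, the two-level illumination scheme could be sketched as follows; the concrete low and high power percentages are assumptions of this sketch.

    class IlluminatorPowerControl:
        """Two-level power scheme: flash at low power for general vehicle
        detection and boost while a vehicle is in view. The percentages
        are illustrative assumptions."""

        LOW_PCT = 30    # assumed: enough to detect a vehicle in the image
        HIGH_PCT = 100  # assumed: enough to penetrate windows

        def __init__(self):
            self.power_pct = self.LOW_PCT

        def on_frame(self, vehicle_detected):
            # Boost as soon as a vehicle is seen; drop back to the low
            # level once it leaves the field of view.
            self.power_pct = self.HIGH_PCT if vehicle_detected else self.LOW_PCT
            return self.power_pct

    ctrl = IlluminatorPowerControl()
    print([ctrl.on_frame(d) for d in (False, True, True, False)])
    # -> [30, 100, 100, 30]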

[00178] In some embodiments, a vehicle detector can be used to increase the frequency at which light emitter 116 and imaging device(s) 118 emits light and/or captures images respectively to produce a higher time resolved series of images. Other parameters of light emitter 116 and/or imaging device(s) 118 are conceived of as being modified when a vehicle detector detects an incoming vehicle.

[00179] FIG. 2 is an example schematic diagram 200 of the system of FIG. 1 for vehicle occupancy detection, in accordance with example embodiments.

[00180] In the shown example embodiment, the system includes the computing device 102, a laser sensor vehicle detector(s) 114, a strobe light emitter(s) 116-1, a passenger camera imaging device(s) 118, a strobe light second light emitter 124, and a plate camera second imaging device 126.

[00181] The example implementation further includes a network switch 230, and a power supply 232 (shown as universal AC/DC power supply). The network switch 230 may be used by the computing device 102 to transmit command signals, and the network switch 230 may use packet switching to receive command signals from the computing device 102 and forward said command signals to the destination component.

[00182] The power supply 232 may be one or more devices of various types capable of powering the network switch 230, computing device 102, the laser sensor vehicle detector(s) 114, the strobe light emitter(s) 116-1, the passenger camera imaging device(s) 118, the strobe light second light emitter 124, and the plate camera second imaging device 126. In some embodiments, for example, the power supply 232 includes multiple power supply units (not shown). For example, the power supply 232 may include various types of batteries. The power supply 232 may also be connected to an AC power source, such as a power line in road infrastructure. According to example embodiments, the power supply accepts 80-264 VAC, derating output power 10% below 90 VAC and 20% below 85 VAC; 350 & 1000 W models accept 85-264 VAC, derating output power 10% below 90 VAC.

[00183] In example embodiments, the power supply 232 includes the ability to convert received power into power as required by the components of the system. For example, the power supply 232 may include a universal AC/DC power supply, capable of converting stored or received AC power into DC power, and providing the DC power as required by the components of the implementation 200.

[00184] Other configurations, for example, those that do not include laser sensor vehicle detector(s) 114, that correspond to the system of FIG. 18, are also conceived.

[00185] FIG. 3 is another example schematic diagram 300 of the system for vehicle occupancy detection of FIG. 1, in accordance with example embodiments. In FIG. 3, the power source 302 is a power supply line incorporated into road infrastructure, such as the power supply line which provides power to roadside signage.

[00186] Power supply 304, similar to power supply 232, may be configured to convert the power received from the source 302 into a form usable by the components of the system 100. For example, the power supply 304 may provide power to the computing device 102, which may in turn include additional electronics for providing power to one or more of the LiDAR vehicle detector(s) 114, the light emitter(s) 116-1, the imaging device(s) 118, the second light emitter 124, and the second imaging device 126.

[00187] In the shown embodiment, the computing device 102 is used as a conduit to provide power to the light emitter(s) 116-1, and the second light emitter 124.

[00188] According to some embodiments, for example, in the diagram 300, the computing device 102 sends power and command signals to the vehicle detector(s) 114, the imaging device(s) 118, and the second imaging device 126, which include command signals for the light emitter(s) 116-1 and second light emitter 124. Upon receipt of the command signals, the vehicle detector(s) 114, the imaging device(s) 118, and the second imaging device 126 determine which command signals are intended for the light emitter(s) 116-1 and second light emitter 124, and relay the same to the respective devices. Alternatively stated, the command signals transmitted by computing device 102 may be intended to be relayed, via the imaging devices, to the light emitters.

[00189] In non-limiting example embodiments, the computing device 102 is configured to provide command signals to the vehicle detector(s) 114, the imaging device(s) 118, and the second imaging device 126, which in turn determine or generate command signals for the respective light emitter(s) 116-1 and second light emitter 124.

[00190] The vehicle detector(s) 114, the imaging device(s) 118, and the second imaging device 126 may include onboard computing devices, which implement the functions of the vehicle detector controller 104, and the light emitter controller 106, and receive command signals from the occupant detector 110.

[00191] According to example embodiments, the below table shows an example configuration of the system 100:

[00192] Other configurations, for example those that do not include vehicle detector(s) 114 and which correspond to the system of FIG. 18, are also contemplated.

[00193] FIG. 4 shows an example of a method 400 for configuring a vehicle detection system.

[00194] At step 402, a distance from the target mounting location to a location of a target lane is determined.

[00195] The target mounting location may be determined in part by the road infrastructure available in a particular location. For example, the system for vehicle occupancy detection 100 may be installed on a roadside post or other roadside fixture. In example embodiments, the target mounting location is based on a desired location of a mobile gantry to which the system for vehicle occupancy detection 100 is attached. In some embodiments, for example, the target mounting location is based on the traffic observed on the target lane, or the nature of the target lane (e.g., the target location is placed near a high occupancy vehicle (HOV) lane).

[00196] A target lane can include one or more lanes of road which are expected to have vehicle traffic and where vehicle occupancy is desired to be determined. For example, the target lane may be a lane of a highway. The location of the target lane is a location where a vehicle occupancy is desired to be determined. For example, the location of the target lane may be a location where a high occupancy vehicle (HOV) designated lane begins.

[00197] At step 404, a width of the target lane is determined. The width of the target lane may be determined by manually measuring the lane's width. In example embodiments, the width of the target lane is determined by taking an image of the target lane and processing the image to determine a width.

[00198] At step 406, a height of the ground at the target mounting location and a height of the location of the target lane are determined. For example, where the target lane slopes, and the location of the target lane is uphill of the target mounting location, the difference between the relative heights at the target mounting location and the location of the target lane is determined.

[00199] At step 408, a preferred mounting geometry is determined. In example embodiments, the preferred mounting geometry is determined in reference to a base mounting geometry.

[00200] Referring now to FIG. 5A, a diagram of an example configuration 500 of the system 100, mounted according to a preferred mounting geometry, is shown. In example embodiments, the configuration 500 is a preferred mounting geometry; alternatively, the configuration 500 may be a base mounting geometry.

[00201] Configuration 500 includes the computing device 102, the LiDAR vehicle detector(s) 114, the passenger light emitter(s) 116-1, the passenger camera imaging device(s) 118, the plate camera second light emitter 124, and the plate camera second imaging device 126 of system 100 connected to a mounting device 506. The mounting device 506 may have a variety of geometries, and be made of a variety of materials which allow for mounting of devices of system 100. For example, the shown mounting device 506 is shaped at least in part as having a support member 506-1, a first attachment member 506-2, a second attachment member 506-3, and a third attachment member 506-4. Some or all parts of the mounting device 506 may be a mobile gantry, or a roadside fixture. For example, the mounting device 506 may include as support member 506-1 the roadside fixture, and the first attachment member 506-2 and the second attachment member 506-3 may be metal supports passing through the roadside fixture. In another non-limiting embodiment, the system 100 may also be deployed in a secure mobile trailer (not shown) that may be quickly moved from one location to the next for rapid ad hoc applications. In example embodiments, the mounting device 506 includes lockable casters for attaching to the various constituent elements or the ground 504.

[00202] Network and power interface 502 carries out the functions of the network switch 230 and the power supply 232 of FIG. 2. Network and power interface 502 is similarly connected to mounting device 506.

[00203] In the shown embodiment, the LiDAR vehicle detector(s) 114 is shown as being connected to the third attachment member 506-4, 0.55[m] above the ground 504.

[00204] The passenger light emitter(s) 116-1 and the passenger camera imaging device(s) 118 are shown as being connected to the first attachment member 506-2, 1.45[m] above the ground 504. The passenger light emitters 116-1 are separated by a horizontal distance of 0.5[m] from center to center along the first attachment member 506-2. One passenger light emitter 116-1 is shown as being 0.4[m] horizontally distant from a first end of the first attachment member 506-2, while the other is shown as being 0.9[m] horizontally distant from the first end of the first attachment member 506-2.

[00205] Plate camera second light emitter 124, and plate camera second imaging device 126 are connected to the second attachment member 506-3 at a distance of 2[m] above the ground.

[00206] In example embodiments, installing the system 100 may be rapid as a result of the modularity and the smaller number of parts of system 100, as compared to multi-imaging device systems. For example, in some embodiments the system 100 may be deployed in less than one hour.

[00207] FIG. 5B is a perspective view of an example system for vehicle occupancy detection, in accordance with example embodiments.

[00208] The example system 514 shown in FIG. 5B has a configuration of the LiDAR vehicle detector(s) 114, the passenger light emitter(s) 116-1, and the passenger camera imaging device(s) 118 similar to configuration 500, in that the light emitter(s) 116-1 are upstream of the passenger camera imaging device(s) 118, and the LiDAR vehicle detector(s) 114 is below said components.

[00209] The vehicle detector may be upstream of the imaging device if the vehicle detector produces only a one-dimensional lateral measurement of the distance between the station and the vehicle. Examples of such sensors are 1D laser range finders and under-road-pavement sensors (fiber optics, etc.) that record passage of the vehicle at a certain lateral distance from the station, without giving information about the longitudinal position of the vehicle. Other vehicle detectors, such as 3D LiDARs, do not necessarily need to be placed upstream of the imaging device relative to the traffic direction. Such sensors can be placed in different locations, as their physical and geometrical placement plays a less important role relative to the role of the 3D perception process that detects and tracks the vehicle and triggers image capture by the imaging device.

[00210] In addition, system 514 includes a concrete pad platform 512, to which mounting device 506 is attached, securing the system in a particular location. The platform 512 is shown to be about 2 meters long and 1 meter wide. The height of the platform 512 can be anywhere between 0 and 60 centimeters from the road surface (compared to the lane of interest). The platform 512 is preferably parallel to the lane of interest and has a flat and level surface 516.

[00211] In example embodiments, the platform 512 is a heavy steel structure, or other structure capable of fixing the system 514 to a particular roadside location (i.e., a ground surface adjacent to lanes of a road, which can include a shoulder, a service lane, median strip, or otherwise). A roadside location can also include road portions not directly adjacent to the road segments used for vehicle travel. For example, a roadside location includes, in example embodiments, a road verge next to a road shoulder.

[00212] A small electrical cabinet (not shown in the picture) can be installed on surface 516 between the two posts of the structure 506 or elsewhere. According to some embodiments, the cabinet receives power from road infrastructure.

[00213] FIG. 5C is a perspective view of the system 514 of FIG. 5B including another imaging device, in accordance with example embodiments.

[00214] In the shown embodiment, the vehicle occupancy detection system 514 is connected with, or controls, an imaging assembly 516 including an imaging device and LIDAR unit mounted above the road. The imaging device and the LIDAR are pointed towards the road so that the imaging device captures images of the license plate of a vehicle as it drives in the direction of traffic. The imaging assembly 516 may, for example, be located 10 to 14 meters upstream of the vehicle occupancy detection system 514, and can be installed on light poles or other roadside fixtures which overhang traffic.

[00215] FIG. 5D shows a photograph of the system 514 of FIG. 5B, in accordance with example embodiments.

[00216] FIG. 6A is a top view of an example system 602 for vehicle occupancy detection, in accordance with example embodiments.

[00217] Vehicle 604 is shown travelling in an expected direction of vehicle travel 606 (hereinafter referred to as direction 606), upstream of the vehicle detector(s) 114 and the imaging device(s) 118 in this example embodiment. Direction 606 is shown as being parallel to the lane marker 620, in accordance with a typical vehicle direction travelling along a lane.

[00218] Imaging device(s) 118 is shown having a horizontal field of view (e.g., defined by edges 608A and 608B), and is pointed in a direction CA defined by a yaw angle (ψ) of 15 degrees from an axis 610 perpendicular to lane marker 620. The yaw angle may be configured depending on the expected speed of the vehicle travelling along the road. For example, the yaw angle may be decreased where traffic is expected to be slower, ensuring consistency between installations which have high traffic and low traffic. In example embodiments, the yaw angle of the imaging device(s) 118 is fixed, and the patterns during which the light emitter (not shown in FIG. 6A) and imaging device(s) 118 emit light and capture light, respectively, are varied. The field of view of imaging device(s) 118 may depend on the installation environment or intended use, and in example embodiments, the field of view is 30 degrees. The yaw angle may increase the accuracy of the vehicle detection system by forcing images to contain certain perspectives; when multiple consecutive images of vehicle 604 travelling at different speeds are captured at said perspectives, this contributes to all front and rear occupants of the vehicle 604 being visible in the captured images.

[00219] In some embodiments, the processor can compute the yaw angle. In some embodiments, there are multiple images of the vehicle so that occupants are seen from different perspectives as the vehicle travels horizontally across the field of view, which can be reflected in data extracted from the images. A camera may have a large horizontal field of view, and the system may be able to achieve a good amount of change of perspective by taking multiple successive images as the vehicle travels from one end of the horizontal field of view to the other, even with a zero yaw angle. However, having a large field of view may not always be possible or favorable for some other reasons. Using a nonzero yaw angle may accentuate the change of perspective within a limited horizontal motion of the vehicle in the field of view. Accordingly, the system computes data corresponding to the change of perspective, and, in some embodiments, uses the yaw angle as an example metric.

[00220] The vehicle detector(s) 114 is pointed in the direction of axis LA, which is perpendicular to direction 606. The vehicle detector(s) 114 may, similar to imaging device(s) 118, be positioned with a yaw angle towards incoming traffic. Various positions of the vehicle detector(s) 114 relative to imaging device(s) 118 are contemplated. The vehicle detector(s) 114 may be relatively close to the imaging device(s) 118, or in example embodiments, the vehicle detector(s) 114 may be much further upstream of the imaging device(s) 118 to account for the expected speed of the incoming traffic. In some embodiments, the vehicle detector(s) 114 are not upstream relative to the traffic, and are in other locations.

[00221] Both the imaging device(s) 118 and the vehicle detector(s) 114 are located a distance 618 from an expected position of the vehicle. The distance 618 may be determined based on the geometry of the lane which is being observed. For example, lanes vary in width, the system 602 may be located further from the road in certain installations, and so forth. In example embodiments, the distance is determined by L = ρ cos α, where L is the distance 618 in a direction defined by axis 610, and where ρ and α are shown in FIG. 6D. In example embodiments, the distance 618 is 5 meters. In example embodiments, the distance from the imaging device(s) 118 to the expected position of region of interest 614 (e.g., the distance from the imaging device(s) 118 to the middle of the lane) may be different from the distance from the vehicle detector(s) 114 to the expected position of region of interest 614 (e.g., the distance from the vehicle detector(s) 114 to the middle of the lane).
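
By way of a non-limiting illustration, the lateral-distance relationship above can be evaluated as in the following Python sketch; the function name and the example slant range are assumptions for illustration only.

```python
import math

def lateral_distance(rho_m: float, alpha_deg: float) -> float:
    """Lateral distance L (along axis 610) from a slant range rho measured
    along axis LA and a pitch angle alpha, per L = rho * cos(alpha)."""
    return rho_m * math.cos(math.radians(alpha_deg))

# Example: a slant range of about 5.02 m at a 5.25-degree pitch yields
# roughly the 5 m distance 618 described above (values assumed).
print(round(lateral_distance(5.02, 5.25), 2))  # -> 5.0
```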

[00222] Once the desired distance 618 is determined, the system 602 may be fixed with this distance (e.g., secured to a concrete pad). Similarly, the distance 616, in direction 606, between the imaging device(s) 118 and the vehicle detector(s) 114 may be fixed after installation. In example embodiments, the distance 616 is 30 centimeters.

[00223] In FIG. 6A, the region of interest 614 is shown in part in the field of view of imaging device(s) 118, and the region of interest 612 is not. Imaging device(s) 118 does not capture an image of vehicle 604 in FIG. 6A as the vehicle 604 has not been detected by the vehicle detector(s) 114.

[00224] In FIG. 6B, vehicle 604 has advanced in the direction 606 and the region of interest 614 is shown directly in the line of sight of imaging device axis CA. When the vehicle 604 is detected, the imaging device(s) 118 can be activated or controlled (via hardware or software) to take one or multiple images of the vehicle 604 in this instance.

[00225] In FIG. 6C, the region of interest 612 is prominently in the line of sight of imaging device axis CA. Imaging device(s) 118 may be configured to capture an image at this instant. A region of interest is in a field of view when it reflects light towards imaging device(s) 118 in a direction such that the light is captured by the particular configuration of imaging device(s) 118. For example, a region of interest may not be in the field of view of imaging device(s) 118 when light reflected from it cannot travel through vehicle 604 and be captured by imaging device(s) 118.

[00226] FIG. 6D is a rear view of the example system 602 for vehicle occupancy detection of FIG. 6A. In contrast to imaging device(s) 118, the vehicle detector(s) 114 is shown as having a pitch angle α relative to axis 626, at a height h_L above the ground 628. Vehicle detector(s) 114 is aimed in direction LA towards a point 624 horizontally further from the vehicle detector(s) 114 relative to an expected position of vehicle 604. In example embodiments, where ground 628 is flat, the pitch angle α is 5.25 degrees and the point 624 may be approximately 10 meters away from vehicle detector(s) 114. In this way, vehicles travelling closer to the vehicle detector(s) 114, for example at a distance ρ along axis LA, will interfere with light travelling along axis LA and reflect light to vehicle detector(s) 114. In example embodiments, the vehicle detector(s) 114 is positioned such that the height h_L is larger than some vehicles' wheel wells, providing a more accurate reading of whether a vehicle is passing by. The height h_L is approximately 90 centimetres according to some embodiments.

[00227] Imaging device(s) 118 is positioned at a height h_C above the ground 628. In example embodiments, the height h_C is 145 centimeters, which may correspond to the expected height of an average car above the ground 628. Imaging device(s) 118 has a line of sight CA which is parallel to the ground 628 in this example embodiment; however, imaging device(s) 118 may have various pitch positions.
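
As a non-limiting illustration of the FIG. 6D geometry, the following sketch recovers the approximate location of point 624 from the detector height h_L and pitch angle α described above; the function name and the flat-ground assumption are illustrative only.

```python
import math

def ground_intersect_distance(h_l_m: float, alpha_deg: float) -> float:
    """Horizontal distance at which a detector mounted h_L above flat
    ground, pitched down by alpha, intersects the ground (point 624)."""
    return h_l_m / math.tan(math.radians(alpha_deg))

# With h_L = 0.9 m and alpha = 5.25 degrees, the beam meets the ground
# about 9.8 m away, consistent with the "approximately 10 meters" above.
print(round(ground_intersect_distance(0.9, 5.25), 1))  # -> 9.8
```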

[00228] Referring again to step 408 in FIG. 4, the preferred mounting geometry may be determined in reference to the base geometry 500. For example, the preferred mounting geometry may include maximal variation for each of the constituent elements of a base geometry. Continuing the example, the preferred mounting geometry may be constrained to have constituent elements placed within 20 cm of the configuration 500, and have orientations (e.g., pitch, yaw, and roll) within 10° of the configuration 500. Advantageously, determining a preferred mounting geometry based on a base mounting geometry may allow for a larger variation of configurations which provide accurate results, reducing the need for meticulous calibration of the constituent elements of system 100 or 1800.

[00229] At step 410, the preferred geometry is output. The output preferred geometry may be displayed on a display, allowing for a visual reference for a technician to mount the vehicle occupancy detection system. The output preferred geometry may be a geometry which enables the imaging device(s) 118 to capture more than one image of the detected vehicle. The system can store the output preferred geometry in memory, or transmit the output preferred geometry to another system or component.

[00230] At step 412, the zoom of the imaging device(s) 118 may be adjusted in accordance with the output preferred geometry. For example, the computing device 102 or 1802 may be engaged to monitor whether the installation satisfies the output preferred geometry. Continuing the example, where the computing device 102 determines that an imaging device(s) 118’s zoom is not satisfactory, the display may display a notification including instructions required to adjust the imaging device(s) 118’s zoom.

[00231] According to some example embodiments, step 412 includes the system 100 or 1800 operating in a diagnostic mode for a period of time until the system determines that the installation satisfies the output preferred geometry. For example, the output preferred geometry may be provided to the occupant detector 110, which determines whether the preferred geometry has been complied with after installation. In example embodiments, the output preferred geometry includes an indicator of the imaging device(s) 118’s zoom, which may be continually monitored.

[00232] The system 100 or 1800 may be modular, and the constituent elements may be attached to a mounting device (e.g., mounting device 506) separately, allowing for rapid deployment and set up.

[00233] The system 100 or 1800, once mounted, may not require further training of the occupant detector 110 in order to detect occupants. Alternatively stated, the occupant detector 110 may be pre-trained to work with the preferred mounting geometry without additional training or adjustments to the machine learning model stored thereon.

[00234] FIG. 7 is a perspective view of a further example system 702 for vehicle occupancy detection, in accordance with example embodiments. System 702, similar to system 514, includes light emitter(s) 116-1, and imaging device(s) 118-1 mounted on top of a gantry connected to a concrete pad 512. The light emitter(s) 116-1, and imaging device(s) 118-1 are positioned approximately 1.5 meters above ground to see overtop of the concrete barrier 710. In example embodiments, the system 702 is preferably positioned between approximately 4 to 8 meters from lane 620, measured perpendicular relative to the direction of travel, on a road portion adjacent to the nearest lane 620 and alternatively referred to as a roadside.

[00235] System 702 further includes light emitter(s) 116-3 and 116-4, and imaging device(s) 118-2 for vehicle occupancy detection, positioned a further distance above the ground 714 relative to light emitter(s) 116-1 and imaging device(s) 118-1. Whereas light emitter(s) 116-1 and imaging device(s) 118-1 are positioned to capture images of vehicles travelling in the first lane FL of traffic (e.g., based on their height above the ground and their pitch), the light emitter(s) 116-3 and 116-4 and imaging device(s) 118-2 are positioned (e.g., based on their height above the ground 714, and their pitch) to capture images of vehicles travelling in the second lane SL of traffic. For example, in the shown embodiment, the light emitter(s) 116-3 and 116-4 and imaging device(s) 118-2 are positioned approximately 2 meters above the ground 714, and imaging device(s) 118-2 is pitched downward approximately 10 degrees, with a yaw angle of 15 degrees.

[00236] Vehicle detector(s) 114 is positioned above both light emitter(s) 116-1 and imaging device(s) 118-1, and light emitter(s) 116-3 and 116-4 and imaging device(s) 118-2. In example embodiments, a plurality of vehicle detectors 114 are used to determine, respectively, whether a vehicle is passing in each respective lane. In example embodiments, a plurality of vehicle detectors 114 are used to detect passing vehicles in either lane. In example embodiments, the vehicle detectors 114 are capable of scanning a wide horizontal field of view (e.g., at least 120 degrees) and a reasonable vertical field of view (e.g., at least 30 degrees).

[00237] In the shown embodiment, the vehicle detector(s) 114 is positioned with a yaw angle such that it is able to detect vehicles relative to the system 702 (e.g., vehicles 604 and 708). For example, the vehicle detector(s) 114 may detect vehicles approximately 15 to 20 meters before they are in the field of view of the respective imaging devices. In example embodiments including a plurality of vehicle detectors 114, each vehicle detector may be respectively positioned to detect vehicles at different distances.

[00238] The vehicle detectors may be 2D or 3D LIDAR units capable of capturing multiple readings of distances in multiple directions. For example, vehicle detector(s) 114 in the shown embodiment emits a plurality of light beams 704 (e.g., infrared light) towards both the first lane FL and the second lane SL. A potential advantage of using a 2D or 3D LIDAR vehicle detector(s) 114 that is capable of capturing a point cloud from moving vehicles, compared to a single laser beam range measurement, is increased robustness to dust particles and precipitation. While measurements from a single laser beam can easily get contaminated by noise, an entire point cloud of measurements from a vehicle is statistically more robust to noise. Also, since the vehicles (e.g., vehicles 604 and 708) can be detected before they are within the field of view of an imaging device, the more robust detection of passing vehicles may provide for more precise adjustments of the pattern between the light emission, image capture and vehicle detection. In example embodiments, the more precise estimation allows for detecting the vehicles at a greater distance from the system 702, and allows greater filtering windows (i.e., the use of larger windows of time between detection of the vehicle and capturing an image of the vehicle) without risking detecting the car too late.
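
The robustness argument above can be illustrated with a minimal sketch: a single contaminated beam reading may be misleading, while a statistic computed over an entire point cloud is not. The data and names below are assumptions for illustration, not the system's actual perception software.

```python
import numpy as np

def robust_range(point_ranges_m: np.ndarray) -> float:
    """Estimate the range to a passing vehicle from many LiDAR returns.
    The median of a point cloud is statistically robust to outliers caused
    by dust particles or precipitation, unlike a single beam reading."""
    return float(np.median(point_ranges_m))

# A single contaminated beam might report 0.4 m (e.g., a raindrop); the
# median over the full point cloud still reads near the true distance.
cloud = np.concatenate([np.random.normal(5.0, 0.05, 200), [0.4, 0.6]])
print(round(robust_range(cloud), 2))  # ~5.0
```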

[00239] Imaging devices 118-3 and 118-4 may be used to capture images of the front and rear license plates. For example, in the shown embodiment, imaging device(s) 118-3 is at a yaw angle which points in the direction 606 to capture rear license plates.

[00240] FIG. 8 is a flowchart of an example of method 800 for vehicle occupancy detection, in accordance with example embodiments.

[00241] Method 800 may be implemented by the occupant detector 110, for example, or by a remote computing device.

[00242] At step 802, a detection signal is received from the vehicle detector(s) 114. In example embodiments, the detection signal includes a detected speed of the detected vehicle.

[00243] In example embodiments, as a result of the geometry of the installation of system 514 (e.g., the yaw angle of imaging device(s) 118, the horizontal field of view of the camera (e.g., defined by the edges 608A and 608B), the distance between the imaging device(s) 118 and the vehicle detector(s) 114, etc., shown in FIG. 6A), the system ensures that when the vehicle 604 is detected, the imaging device(s) 118 is triggered instantly, and the entire vehicle 604 is within the camera's horizontal field of view (e.g., FIG. 6B).

[00244] At step 804, a command signal is transmitted to the light emitter(s) 116 to emit light according to a first pattern for a first time window. In example embodiments, the first pattern is determined by the speed of the vehicle as detected by the vehicle detector 114. According to some embodiments, for example, the first pattern is a preconfigured frequency based on the configuration of the system 100. Continuing the example, the preconfigured frequency may be based on the detection distance, the latency associated with vehicle detection, and the operating frequency of the imaging device(s) 118.

[00245] In an illustrative embodiment, for traffic where vehicles are expected to be travelling at speeds around 80-140 km/h, the frequency can be 90 pulses per second. This frequency can provide five images of sufficient quality of passing vehicles.
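
As a rough, non-limiting illustration of how the pulse frequency relates to the number of images obtained, the following sketch estimates the frames captured while a vehicle crosses the horizontal field of view; all values and names are assumptions for illustration, and the yaw angle is ignored for simplicity.

```python
import math

def frames_in_view(speed_kmh: float, fov_deg: float, distance_m: float,
                   vehicle_len_m: float, fps: float) -> int:
    """Rough count of frames captured while a vehicle crosses the
    horizontal field of view (edges 608A/608B)."""
    fov_width_m = 2.0 * distance_m * math.tan(math.radians(fov_deg / 2.0))
    crossing_s = (fov_width_m + vehicle_len_m) / (speed_kmh / 3.6)
    return int(crossing_s * fps)

# At 140 km/h, with a 30-degree field of view at 5 m and a ~4.7 m vehicle,
# 90 pulses per second yield on the order of 17 frames, of which a subset
# (e.g., the 5 best-framed shots) may be of sufficient quality.
print(frames_in_view(140, 30, 5, 4.7, 90))  # -> 17
```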

[00246] In example embodiments, once the vehicle 604 is detected, the vehicle position, direction of travel and speed are tracked using a tracking approach, such as a Kalman filter. The estimation of position and speed of the vehicle 604 can then be used to trigger, for example, the license plate imaging device(s) 118-3 of FIG. 7 (e.g., when the vehicle 604 is about 10-14 meters upstream of system 702), and then trigger the imaging devices 118-1 and 118-2 multiple times at optimal places to take multiple shots for occupancy counting. The tracking approach keeps tracking the vehicle until it passes the system, and when it is 10-14 meters away it triggers the imaging device(s) 118-4 to capture images of the rear license plate if necessary. The detection and tracking of cars in multiple lanes (at least lanes 1 and 2, and possibly the shoulder lane or lane 3) can happen simultaneously in a perception software system which can be implemented by the computing device 102.
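
The tracking approach described above could, as one non-limiting realization, use a constant-velocity Kalman filter; the minimal sketch below tracks longitudinal position and speed, and estimates the time remaining until a trigger point (e.g., 10-14 meters upstream) is reached. All parameter values and names are assumptions for illustration.

```python
import numpy as np

class ConstantVelocityKF:
    """Minimal 1D constant-velocity Kalman filter tracking a vehicle's
    longitudinal position x (m) and speed v (m/s) along the lane."""

    def __init__(self, x0: float, v0: float, dt: float):
        self.x = np.array([x0, v0], dtype=float)    # state [position, speed]
        self.P = np.diag([4.0, 25.0])               # initial uncertainty
        self.F = np.array([[1.0, dt], [0.0, 1.0]])  # constant-velocity model
        self.Q = np.diag([0.05, 0.5])               # process noise
        self.H = np.array([[1.0, 0.0]])             # position-only measurement
        self.R = np.array([[0.25]])                 # measurement noise

    def predict(self) -> np.ndarray:
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x

    def update(self, z: float) -> np.ndarray:
        y = z - self.H @ self.x                     # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)    # Kalman gain
        self.x = self.x + (K @ y).ravel()
        self.P = (np.eye(2) - K @ self.H) @ self.P
        return self.x

def seconds_until(kf: ConstantVelocityKF, trigger_x_m: float) -> float:
    """Time until the tracked vehicle reaches a trigger position."""
    x, v = kf.x
    return (trigger_x_m - x) / v if v > 0 else float("inf")
```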

[00247] In example embodiments, the occupant detector 110 may receive ambient condition information from the ambient condition sensor 130, and determine an optimal configuration for the light emitter(s) 116 based on the received ambient condition. The optimal configuration is then transmitted along with the control signals. For example, based on received ambient conditions, the occupant detector 110 may determine that the light emitter(s) 116 intensity should be increased, and transmit control signals reflecting same.

[00248] The first time window may be, similar to the first pattern, dynamic or preconfigured.

[00249] At step 806, the command signal is transmitted to the imaging device(s) 118 to capture images according to a second pattern associated with the first pattern, for a second time window associated with the first time window. The second pattern is associated with the first pattern of the light emitter(s) 116 so that the imaging device(s) 118 captures the light emitted by the light emitter(s) 116 (e.g., the imaging device(s) 118 captures images after light has been emitted). In example embodiments, the second pattern may be based on the latency associated with the light emitter(s) 116 emitting light, and the latency associated with the command signal reaching the imaging device(s) 118. In a non-limiting example embodiment, the imaging device(s) 118 may be configured to capture successive high-speed snapshots (e.g., 5 images) of the detected vehicle as it passes the system 100.
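
By way of a non-limiting illustration, the association between the first (emission) pattern and the second (capture) pattern might be expressed as a latency-compensated schedule, so that each exposure begins once light has actually been emitted; the latency values below are assumptions for illustration.

```python
def capture_schedule(strobe_times_s, emitter_latency_s, command_latency_s):
    """Derive capture times (second pattern) from strobe command times
    (first pattern), compensating for the emitter's latency and for the
    time the capture command takes to reach the imaging device."""
    return [t + emitter_latency_s - command_latency_s for t in strobe_times_s]

# 90 pulses per second -> a strobe roughly every 11.1 ms; the exposures
# track the pulses with an assumed 2 ms emitter and 1 ms command latency.
strobes = [i / 90.0 for i in range(5)]
print(capture_schedule(strobes, emitter_latency_s=0.002, command_latency_s=0.001))
```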

[00250] In example embodiments, the transmitted command signal includes configuration signals for adjusting the imaging device(s) 118’s acquisition parameters (e.g., the number of pictures taken for each vehicle, imaging device exposure time, imaging device frame rate, gain settings, focal length, etc.) based on the ambient conditions in order to maximize the quality of the image acquisition, which may in turn lead to higher overall accuracy. The ambient conditions may be received by the computing device 102 from the ambient condition sensor 130, from an external ambient condition data service provider, or otherwise. For example, at higher detected vehicle speeds, the imaging device(s) 118 may capture images at a greater frequency (i.e., a higher FPS) or imaging device(s) 118 may begin capturing images more rapidly in response to a vehicle being detected (e.g., using a shorter filtering algorithm window). Conversely, at lower detected vehicle speeds, imaging device(s) 118 may reduce the frequency of image capture (i.e., the FPS) or increase the algorithm window size.

[00251] In example embodiments, the command signal transmitted to the imaging device(s) 118 is configured to avoid lens distortion effects associated with the vehicle 604 being too close to the margins of the captured images (e.g., too close to edges 608A and 608B). The system 100 may compute a speed of travel, and align the command signal to capture images of the vehicle without the region of interest being within a threshold of the edges 608A and 608B. For example, based on an average vehicle length of 4.5 to 5 meters, the system 100 can estimate the speed of traffic as multiple cars pass by the vehicle detector(s) 114 (e.g., each car passage will register similar to an inverted square pulse, and the time-length of the pulse, assuming a nominal length of vehicles, can be used to estimate the speed of traffic, as sketched below). Alternatively, the system 100 can be integrated with other systems such as toll bridges that monitor traffic and estimate vehicle speeds, and system 100 may estimate a likely position of the vehicle based on said data to adjust the duration and frequency of operation of the imaging device(s) 118 (e.g., the filtering window length and image acquisition speed (FPS)).

[00252] The second time window may be, similar to the first time window, dynamic or preconfigured.
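
The pulse-based speed estimate referenced above can be expressed in a few lines; the nominal vehicle length and pulse duration are assumptions for illustration.

```python
def estimate_speed_kmh(pulse_duration_s: float,
                       nominal_vehicle_len_m: float = 4.75) -> float:
    """Estimate traffic speed from the time-length of the 'inverted square
    pulse' a passing vehicle produces on the vehicle detector, assuming a
    nominal vehicle length of 4.5-5 m as described above."""
    return nominal_vehicle_len_m / pulse_duration_s * 3.6

# A 0.12 s occlusion of the detector beam implies roughly 140 km/h.
print(round(estimate_speed_kmh(0.12)))  # -> ~142
```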

[00253] Optionally, at steps 808 and step 810, the second light emitter 124 and the second imaging device 126 may be configured to, respectively, emit light and capture images associated with the rear of the detected vehicle. The images associated with the rear of the detected vehicle may be analyzed to determine a license plate number of the detected vehicle.

[00254] At step 812, the occupant detector 110 receives the captured images from the imaging devices, and determines a vehicle occupancy.

[00255] In some embodiments, the occupant detector 110 determines a first region of interest of the vehicle in each of the plurality of captured images, and a second region of interest of the vehicle, and determines the number of visible occupants in each region of interest as the vehicle occupancy. For example, the occupant detector 110 may be trained to detect occupants based on expected positions within the detected vehicle (e.g., it is more interested in the location of an occupant above or near to a seat, as opposed to spaces between seats). Continuing the example, the occupant detector 110 may then use as a vehicle occupancy the maximum number of determined occupants for each window (e.g., whether 1, 2 or 3 occupants are visible in the window), combining the results from multiple images into a single result (e.g., a simple max_per_window_acrossallimages approach).

[00256] The occupant detector 110 may determine the vehicle occupancy in part based on determining a rear and a front occupancy. For example, the occupant detector 110 may separately determine the number of occupants visible in each seating row of a detected vehicle. This approach may have the advantage of simplifying the system 100, introducing redundancy, and in turn improving accuracy and reducing the overall cost of the system.

[00257] According to example embodiments, the vehicle occupancy may be determined as the most likely number of occupants based on each of the respective numbers of visible occupants in each image processed. For example, where the occupant detector 110 determines differing numbers of occupants in each of the images of the detected vehicle, it may be configured to determine the vehicle occupancy as the most commonly occurring number of occupants across images, or the number of occupants detected in the images which are side views of the vehicles, and so forth.

[00258] In further example embodiments, the occupant detector 110 uses an occupant model to determine the number of vehicle occupants. For example, the occupant detector 110 may be trained to fit an occupant model to determine a most likely model which fits all images associated with the detected vehicle.

[00259] The occupant detector 110 may normalize the images prior to processing same. For example, the occupant detector 110 may normalize the images so that the vehicle is the same size in each image, or normalize the images so that the effect of ambient conditions is consistent across images (e.g., images with strong glare may be filtered to reduce glare). The images which have normalized vehicles may be normalized based on occupant detector 110 determining respective normalization parameters based on the vehicle speed and the first pattern and the first time window.

[00260] In some embodiments, for example, the occupant detector 110 may generate and use a normalized vehicle model populated with each of the plurality of images processed with the respective normalization parameters to normalize the vehicle across the captured plurality of images.
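
One simple, non-limiting form of the size normalization described above is to rescale each crop so the detected vehicle has a common pixel height; the use of OpenCV here, and all names, are assumptions for illustration.

```python
import cv2  # OpenCV, used here only as one convenient way to resize crops

def normalize_vehicle_crops(crops, boxes, target_height_px: int = 256):
    """Rescale each vehicle crop so the vehicle appears the same size in
    every image. `boxes` holds (x, y, w, h) vehicle bounding boxes, one
    per crop, in pixel coordinates."""
    normalized = []
    for crop, (_, _, _, h) in zip(crops, boxes):
        scale = target_height_px / float(h)  # per-image scale factor
        normalized.append(cv2.resize(crop, None, fx=scale, fy=scale))
    return normalized
```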

[00261] The occupant detector 110 may be configured to discard images which do not include the detected vehicle (e.g., false positive triggers). For example, where the vehicle detector(s) 114 detects a vehicle, a separate vehicle detector (not shown), such as a machine learning model configured to detect vehicles in images, as opposed to using LiDAR, may determine whether the captured images include a car (as opposed to an animal, etc.).

[00262] The occupant detector 110 may detect a vehicle occupancy of a detected vehicle by normalizing two or more detected vehicles in the plurality of images relative to one another. For example, where five successive images include multiple detected vehicles, the occupant detector 110 may be configured to enlarge the portions of the images with the respective detected vehicles so that they are the same size.

[00263] According to some embodiments, for example, the occupant detector 110 may further normalize images with respect to the ambient conditions. For example, where the direction of the sun is detected (e.g., via the direction of sunlight intensity), the images wherein the vehicle is incident with more powerful sunlight may be filtered to mimic conditions in other images where the sunlight is weaker.

[00264] Optionally, at steps 814 and 816, the occupant detector 110 may be configured to respectively generate and transmit a report. In example embodiments, the report is generated and transmitted in response to determining a vehicle occupancy outside of a threshold. The report may, for example, be generated for and transmitted to a tolling agency (not shown), which tolls vehicles in response to the occupancy detection outside of the threshold.

[00265] In example embodiments, the threshold is based on a determination of whether a vehicle is in violation of existing vehicle occupancy law over a confidence interval. For example, where the detected vehicle is in a high occupancy vehicle (HOV) lane which requires more than 3 occupants, the threshold may be whether the system 100 is more than 90% confident that there are more than three occupants. The confidence interval is discussed further below.

[00266] The report may include a date, time of day, estimated speed, vehicle type, lateral position of vehicle in lane, front occupancy, rear occupancy, overall occupancy, front occupancy confidence, rear occupancy confidence, overall occupancy confidence, and a license plate of the vehicle (detected from the images captured by the second imaging device 126), all of which may be extracted from the images received by the occupant detector 110. The report may include health monitoring information on all of the sensors and hardware components of system 100.

[00267] The report may be stored in a standard SQL database, allowing for interfacing with well-known application program interfaces (APIs) to query the data and generate any desired report via SQL queries.
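
As a non-limiting illustration of querying such a database, the following sketch uses an assumed, hypothetical schema; the document does not specify table or column names.

```python
import sqlite3

# Hypothetical schema for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE reports (
    date TEXT, time_of_day TEXT, license_plate TEXT,
    overall_occupancy INTEGER, overall_occupancy_confidence REAL)""")
conn.execute("INSERT INTO reports VALUES ('2022-09-02', '08:15', 'ABC123', 1, 0.97)")

# Example report: high-confidence detections of fewer than 3 occupants.
for row in conn.execute(
        """SELECT date, time_of_day, license_plate, overall_occupancy
           FROM reports
           WHERE overall_occupancy < 3
             AND overall_occupancy_confidence > 0.9
           ORDER BY date, time_of_day"""):
    print(row)
conn.close()
```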

[00268] The report may include the one or more captured images and metadata associated with the one or more captured images of the detected vehicle, and bounding boxes representing the detected occupants.

[00269] In example embodiments, the report describes, for example, detected vehicles weaving between lanes (e.g., where a calculated speed and position of the vehicle is outside of the expected lane markers) and/or stunt driving (e.g., erratic behaviour of the region of interest, such as high speeds, dangerous proximity to other vehicles, etc.), documenting such unlawful driving behavior for purposes of law enforcement.

[00270] According to some embodiments, the report may include information gleaned from monitoring traffic over multiple lanes over different hours of the day, different days of the week and different seasons, and extracting useful information and statistics about road usage. For example, the report may provide comprehensive statistics about which lanes are most dangerous, which lanes appear to have potholes (e.g., consistent weaving of lanes in a particular location), driving characteristics and how they change in response to the environment (e.g., tracking the performance of a snow removal contractor over time) and so forth.

[00271] FIGS. 15A and 15B show an example of vehicle weaving and an anti-weaving feature of the system 100.

[00272] As shown in FIG. 15A, the system 100 can be made to detect if a vehicle passing in front of it is trying to weave away into farther lanes in an attempt to avoid having its occupancy counted. The anti-weaving feature of the system 100 is useful to ensure usage of the entire system for HOV (high occupancy vehicle) lane and HOT (high occupancy toll) lane use cases.

[00273] As shown in FIG. 15B, an example implementation can use the same vehicle detector sensor of the system 100, such as, for example, a 3D LiDAR sensor. The example in FIG. 15B shows 3D visualizations observed by the vehicle detector sensor that can be utilized to observe the trajectory of the vehicle motion on the road and detect if a lane change is occurring in the lateral direction before and after the station longitudinally. As an alternative to 3D LiDAR, in the event a station is equipped with only a 1D laser range finder or any other vehicle detector with limited capabilities, for example, the system 100 can have an additional camera installed higher up, with a large field of view, to observe this vehicle motion.

[00274] The system can compute a trajectory of the vehicle motion on the road and detect if a lane change is occurring in the lateral direction before and after the station longitudinally. The system can use 3D LiDAR, or add an additional camera installed higher up with a large field of view, to observe this vehicle motion, as sketched below.
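
A minimal, non-limiting sketch of such a lateral-trajectory check follows; the lane width, margin and names are assumptions for illustration.

```python
def detect_lane_change(lateral_offsets_m, lane_width_m=3.5, margin_m=0.5):
    """Flag a lane change from a tracked vehicle's lateral offsets (metres
    from the centre of the monitored lane), sampled before and after the
    station. A sustained offset beyond half a lane width plus a margin
    suggests the vehicle weaved away, e.g., to avoid an occupancy count."""
    threshold = lane_width_m / 2.0 + margin_m
    return any(abs(y) > threshold for y in lateral_offsets_m)

# A trajectory drifting from the lane centre out past the lane boundary:
print(detect_lane_change([0.1, 0.3, 0.9, 1.8, 2.6]))  # -> True
```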

[00275] In some embodiments, there is provided a system for detecting occupancy of a vehicle travelling in an expected direction of travel along a road. The system has a first roadside imaging device positioned on a roadside, having a first field of view of the road, the first field of view incident on a side of the vehicle when the vehicle is on the road within the first field of view. The system has a first roadside light emitter emitting light towards vehicles in the first field of view. The system has a roadside vehicle detector. The system has a processor, in communication with a memory, configured to: receive a signal from the roadside vehicle detector indicating that the vehicle is within or proximate, relative to the expected direction of vehicle travel, to the first field of view; command the first roadside light emitter to emit light according to a first pattern for a first duration; command the first roadside imaging device to capture images of the side of the vehicle according to a second pattern associated with the first pattern, during a second duration associated with the first duration; receive the captured images of the side of the vehicle from the first roadside imaging device; compute a vehicle occupancy of the vehicle by, in each of the captured images: determining one or more regions of interest of the vehicle in each of the captured images; determining the vehicle occupancy as a number of visible occupants in the one or more regions of interest; and determining a most likely number of occupants based on each determined vehicle occupancy. The system can transmit the vehicle occupancy to a monitoring system.

[00276] FIG. 21 is a flowchart of another example of method 2100 for vehicle occupancy detection, in accordance with example embodiments.

[00277] Method 2100 may be implemented by the occupant detector 110, for example, or by a remote computing device. Method 2100 is substantially similar to method 800, except that it determines vehicle images from continuously captured images rather than detecting a vehicle and triggering light emission and image capture. It may still include a vehicle detector 114 for other purposes, such as to trigger greater illumination by light emitter(s) 116 or a higher image capture frequency by imaging device(s) 118 (and changes to the patterns of light emission and image capture required by such modifications).

[00278] At step 2104, a command signal is continuously transmitted to the light emitter(s) 116 to continuously emit light according to a first pattern for a first time window. According to some embodiments, for example, the first pattern is a preconfigured frequency based on the configuration of the system 1800. Continuing the example, the preconfigured frequency may be based on the operating frequency of the imaging device(s) 118.

[00279] In an illustrative embodiment, for traffic where vehicles are expected to be travelling at speeds around 80-140 km/h, the frequency can be 90 pulses per second. This frequency can provide five images of sufficient quality of passing vehicles.

[00280] In example embodiments, the occupant detector 110 may receive ambient condition information from the ambient condition sensor 130, and determine an optimal configuration for the light emitter(s) 116 based on the received ambient condition. The optimal configuration is then transmitted along with the control signals. For example, based on received ambient conditions, the occupant detector 110 may determine that the light emitter(s) 116 intensity should be increased, and transmit control signals reflecting same.

[00281] The first time window may be, similar to the first pattern, dynamic or preconfigured.

[00282] At step 2106, the continuous command signal is transmitted to the imaging device(s) 118 to capture images according to a second pattern associated with the first pattern, for a second time window associated with the first time window. The second pattern is associated with the first pattern of the light emitter(s) 116 so that the imaging device(s) 118 captures the light emitted by the light emitter(s) 116 (e.g., the imaging device(s) 118 captures images after light has been emitted). In example embodiments, the second pattern may be based on the latency associated with the light emitter(s) 116 emitting light, and the latency associated with the command signal reaching the imaging device(s) 118. In a non-limiting example embodiment, the imaging device(s) 118 may be configured to capture successive high-speed snapshots (e.g., 5 images) of the detected vehicle as it passes the system 1800.

[00283] In example embodiments, the transmitted command signal includes configuration signals for adjusting the imaging device(s) 118’s acquisition parameters (e.g., the number of pictures taken for each vehicle, imaging device exposure time, imaging device frame rate, gain settings, focal length, etc.) based on the ambient conditions in order to maximize the quality of the image acquisition, which may in turn lead to higher overall accuracy. The ambient conditions may be received by the computing device 1802 from the ambient condition sensor 130, from an external ambient condition data service provider, or otherwise.

[00284] The second time window may be, similar to the first time window, dynamic or preconfigured.

[00285] Optionally, at steps 2108 and step 2110, the second light emitter 124 and the second imaging device 126 may be configured to, respectively, emit light and capture images. The images may be analyzed to determine a license plate number of subsequently detected vehicles.

[00286] At step 2111, the vehicle image detector 1804 receives the continuous images from imaging device(s) 118 and processes them to detect vehicles within a plurality of images. For example, vehicle image detector 1804 may be configured to detect a series of images that contains the same vehicle (i.e., from when a vehicle enters the field of view of imaging device(s) 118 to when that vehicle exits the field of view). It may do this by detecting a bounding box of a vehicle and determining when it is within a normalized range of the left of the field of view, as sketched below. In some embodiments, vehicle image detector 1804 may pass some or all of the plurality of images with the vehicle identified to occupant detector 110. The vehicle image detector 1804 may further be configured to detect which of the series of images corresponds to the most favourable images to detect occupancy in the front and rear windows, and pass those images along for occupancy detection.
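
By way of a non-limiting illustration, grouping continuously captured frames into one series per vehicle might look like the following sketch; `detect_box` is a hypothetical detector and the exit criterion is an assumption for illustration.

```python
def split_vehicle_series(frames, detect_box, exit_fraction: float = 0.9):
    """Group continuously captured frames into one series per vehicle.
    `detect_box(frame)` returns a normalized (x, y, w, h) vehicle bounding
    box or None. A series ends when the box's left edge passes
    `exit_fraction` of the field of view."""
    series, current = [], []
    for frame in frames:
        box = detect_box(frame)
        if box is None:
            continue  # no vehicle visible in this frame
        current.append(frame)
        x, _, _, _ = box
        if x > exit_fraction:  # vehicle is leaving the field of view
            series.append(current)
            current = []
    if current:
        series.append(current)
    return series
```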

[00287] At step 2112, the occupant detector 110 receives the captured images from the imaging devices, and determines a vehicle occupancy.

[00288] In some embodiments, the occupant detector 110 determines a first region of interest of the vehicle in each of the plurality of captured images, and a second region of interest of the vehicle, and determines the number of visible occupants in each region of interest as the vehicle occupancy. For example, the occupant detector 110 may be trained to detect occupants based on expected positions within the detected vehicle (e.g., it is more interested in the location of an occupant above or near to a seat, as opposed to spaces between seats). Continuing the example, the occupant detector 110 may then use as a vehicle occupancy the maximum number of determined occupants for each window (e.g., whether 1, 2 or 3 occupants are visible in the window), combining the results from multiple images into a single result (e.g., a simple max_per_window_acrossallimages approach).

[00289] The occupant detector 110 may determine the vehicle occupancy in part based on determining a rear and a front occupancy. For example, the occupant detector 110 may separately determine the number of occupants visible in each seating row of a detected vehicle. This approach may have the advantage of simplifying the system 1800, introducing redundancy, and in turn improving accuracy and reducing the overall cost of the system.

[00290] According to example embodiments, the vehicle occupancy may be determined as the most likely number of occupants based on each of the respective numbers of visible occupants in each image processed. For example, where the occupant detector 110 determines differing numbers of occupants in each of the images of the detected vehicle, it may be configured to determine the vehicle occupancy as the most commonly occurring number of occupants across images, or the number of occupants detected in the images which are side views of the vehicles, and so forth.

[00291] In further example embodiments, the occupant detector 110 uses an occupant model to determine the number of vehicle occupants. For example, the occupant detector 110 may be trained to fit an occupant model to determine a most likely model which fits all images associated with the detected vehicle.

[00292] The occupant detector 110 may normalize the images prior to processing same. For example, the occupant detector 110 may normalize the images so that the vehicle is the same size in each image, or normalize the images so that the effect of ambient conditions is consistent across images (e.g., images with strong glare may be filtered to reduce glare). The images which have normalized vehicles may be normalized based on occupant detector 110 determining respective normalization parameters based on the vehicle speed and the first pattern and the first time window.

[00293] In some embodiments, for example, the occupant detector 110 may generate and use a normalized vehicle model populated with each of the plurality of images processed with the respective normalization parameters to normalize the vehicle across the captured plurality of images.

[00294] Optionally, at steps 2114 and 2116, the occupant detector 110 may be configured to respectively generate and transmit a report. In example embodiments, the report is generated and transmitted in response to determining a vehicle occupancy outside of a threshold. The report may for example be generated for and transmitted to a tolling agency (not shown), which tolls vehicles in response to the occupancy detection outside of the threshold.

[00295] In example embodiments, the threshold is based on a determination of whether a vehicle is in violation of existing vehicle occupancy law over a confidence interval. For example, where the detected vehicle is in a high occupancy vehicle (HOV) lane which requires more than 3 occupants, the threshold may be whether the system 100 is more than 90% confident that there are more than three occupants. The confidence interval is discussed further below.

[00296] The report may include a date, time of day, estimated speed, vehicle type, lateral position of vehicle in lane, front occupancy, rear occupancy, overall occupancy, front occupancy confidence, rear occupancy confidence, overall occupancy confidence, and a license plate of the vehicle (detected from the images captured by the second imaging device 126), all of which may be extracted from the images received by the occupant detector 110. The report may include health monitoring information on all of the sensors and hardware components of system 1800.

[00297] The report may be stored in a standard SQL database, allowing for interfacing with well-known application program interfaces (APIs) to query the data and generate any desired report via SQL queries.

[00298] The report may include the one or more captured images and metadata associated with the one or more captured images of the detected vehicle, and bounding boxes representing the detected occupants.

[00299] In example embodiments, the report describes, for example, detected vehicles weaving between lanes (e.g., where a calculated speed and position of the vehicle is outside of the expected lane markers) and/or stunt driving (e.g., erratic behaviour of the region of interest, such as high speeds, dangerous proximity to other vehicles, etc.), documenting such unlawful driving behavior for purposes of law enforcement.

[00300] According to some embodiments, the report may include information gleaned from monitoring traffic over multiple lanes over different hours of the day, different days of the week and different seasons, and extracting useful information and statistics about road usage. For example, the report may provide comprehensive statistics about which lanes are most dangerous, which lanes appear to have potholes (e.g., consistent weaving of lanes in a particular location), driving characteristics and how they change in response to the environment (e.g., tracking the performance of a snow removal contractor over time) and so forth.

[00301] FIG. 9 is a flowchart of an example method to complete step 812 of FIG. 8 or 2112 of FIG. 21 for detecting occupants in images, in accordance with example embodiments.

[00302] At block 902, images are received, for example by the occupant detector 110. In some embodiments a single image may be received; in other embodiments, two or more images may be received.

[00303] Capturing more than one image can enable the system to extract more information from multiple images, and can help avoid obstruction of occupants by the vehicle window frames or obstruction of farther-sitting occupants by closer-sitting occupants. However, in some cases even one image can be sufficient. Multiple images may achieve higher performance and robustness, but capturing and processing a single image may also provide sufficient data in some embodiments.

[00304] At block 904, each image is processed to determine the pixels associated with a window of a vehicle. For example, the occupant detector 110 may implement an SST detector, trained to identify a vehicle in an image. Where no vehicle is detected, the occupant detector 110 may record this as an instance of no occupants.

[00305] At block 906, a region of interest is determined for each image. In example embodiments, this block is performed simultaneously with block 904. In example embodiments, one or more regions of interest are identified, such as a front and rear side window (e.g., regions of interest 612 and 614). Where a region of interest is not detected, the image is discarded, or the occupant detector 110 may record this as an instance of no occupants.

[00306] The license plate recognition can be done on the front side or the rear side, or both sides. Lighting conditions and the country/province of operation (e.g., depending on whether a front plate is required on vehicles) are factors to consider.

[00307] Optionally, at block 908, the images are cropped so that only pixels in the region of interest are used for occupant detection. In example embodiments, this may allow for a more efficient occupant detector capable of running on legacy systems with limited computing resources.

[00308] At block 910, the cropped image(s) are processed with the occupant detector 110 using a classifier to identify a number of occupants within the region of interest. For example, the classifier may be a single shot classifier SST trained to identify individuals in pixels.

[00309] At block 912, the vehicle occupancy is determined based on the classified number of individuals identified in block 910. For example, the occupant detector 110 may average the number of occupants identified.

[00310] In example embodiments, where the region of interest includes a front and a rear side window, the occupant detector 110 is configured to (1) determine the number of individuals present in the rear and front side windows, and (2) average, over the plurality of images, the number of detected occupants in each of the rear and the front side windows. Continuing the example, if there are five images, and the following numbers of occupants are detected in the rear side window in successive images: 2, 3, 2, 1, 2, the occupant detector 110 may determine that there are 2 occupants in the rear of the vehicle. A similar process may be carried out for the front side window. In example embodiments, where the region of interest includes a front and a rear side window, the occupant detector 110 is configured to count the number of occupants identified in each image for each of the front side window and the rear side window, and determines the vehicle occupancy as the sum of (1) the maximum number of detected individuals in the front side window, and (2) the maximum number of detected individuals in the rear side window.
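
The two aggregation strategies described above can be sketched as follows, reproducing the worked example (rear-window counts of 2, 3, 2, 1, 2 yielding 2 occupants); the function names are assumptions for illustration.

```python
from statistics import mean

def occupancy_by_average(per_window_counts):
    """Average the per-image counts for one window and round, as in the
    example above: [2, 3, 2, 1, 2] -> 2 occupants in that window."""
    return round(mean(per_window_counts))

def occupancy_by_max(front_counts, rear_counts):
    """Alternative strategy: sum of the maximum count seen in the front
    side window and the maximum count seen in the rear side window."""
    return max(front_counts) + max(rear_counts)

print(occupancy_by_average([2, 3, 2, 1, 2]))                # -> 2
print(occupancy_by_max([1, 1, 1, 0, 1], [2, 3, 2, 1, 2]))   # -> 1 + 3 = 4
```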

[00311] FIGs. 10A to 10G each show an image of a vehicle with various regions of interest shown, in accordance with example embodiments.

[00312] FIG. 10A shows an example visual representation wherein bounding boxes have been accurately associated with four occupants in a single vehicle. FIGs. 10B - 10D show example visual representations of multiple individuals being identified in multiple vehicles across multiple lanes. FIG. 10E includes bounding boxes identifying an occupant despite tinted windows. FIG. 10F shows an example visual representation wherein four individuals have been accurately identified in the first detected vehicle, including an individual whose head is turned away from the imaging device. FIG. 10G shows an example visual representation wherein bounding boxes have been accurately associated with an occupant, and have correctly not identified an animal as an occupant.

[00313] In example embodiments, the report generated by the occupant detector 110 may include historical information about vehicle occupancy as determined by the system 100 or 1800. For example, in the shown visual representation of FIG. 11, the occupant detector 110 outputs a report which includes an interactive chart representing the average total number of occupants detected over a period of time. Advantageously, such reports may be used to determine road capacity, road usage, and changing traveller compositions over time. In example embodiments, the occupant detector 110 outputs various report information into an interactive visual interface. For example, the occupant detector 110 may output detection rates for storage or transmission.

[00314] The system 100 or 1800 may be capable of detecting vehicle occupancy with significantly higher accuracy than can be achieved by human observation.

[00315] In some embodiments, for example, once the report is generated and transmitted to the tolling authority, or other third party, the system 100 or 1800 deletes all local storage of the plurality of images associated with the occupancy detection.

[00316] In example embodiments, the system 100 or 1800 may include one or more privacy features to prevent the imaging data from being inappropriately accessed. The computing device 102 or 1802 may be configured to process the image with the occupant detector 110 locally, to avoid transmitting sensitive image data. The system 100 or 1800 may store data (e.g., on database 112) on hard drives that are encrypted, in addition to encrypting every file (image or otherwise) associated with the operation of the system 100 or 1800. In some embodiments, the computing device 102 or 1802 may be configured to, prior to saving any image, detect faces in the images and blur beyond recognition any detected faces. Any transmission of data originating from within system 100 or 1800 (e.g., command signals, images, etc.) may be encrypted prior to transmission, and any stored data within the system 100 or 1800 may be configured to be deleted after a data retention deadline passes.

[00317] FIG. 23 illustrates an image that has undergone an anonymized, privacy-preserving process, according to some embodiments.

[00318] The system 100 and/or 1800 may deliver only privacy-preserving anonymized images. In such embodiments, system 100 and/or 1800 may use non-invertible processed images that strip personally identifiable information from the images, such as colour and texture of the skin, as well as obfuscating aspect ratio and vertical-to-horizontal proportions. For example, FIG. 23 shows initial image 2302 converted to anonymized image 2304. Anonymized image 2304 has had the colour and texture of the skin obfuscated, as well as distortions applied to the aspect ratio.

[00319] In some embodiments, such anonymized images can be used throughout the life cycle of the invention (i.e., for training, tuning, deployment, and auditing), enabling the system 100 and/or 1800 to work as intended and to be improved via further data collection and labeling of images for supervised learning, without exposing personally identifiable information. Such anonymization can be implemented at no further computational cost using deep convolutional networks: one or a few filters from the first few convolutional blocks can be chosen, the weights of those filters can be frozen, the weights of the other filters in those layers can be forced to zero, and only subsequent layer weights can be kept free for further training.
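
By way of a non-limiting illustration, a minimal sketch of this filter selection and freezing, assuming a PyTorch convolutional layer (the disclosure does not name a framework; the kept filter indices below are illustrative), may be:

```python
import torch
import torch.nn as nn

def freeze_anonymizing_filters(conv: nn.Conv2d, keep: list[int]) -> nn.Conv2d:
    """Keep only the chosen filters of an early convolutional layer:
    zero the weights of all other filters and freeze the layer so the
    anonymizing front-end cannot be re-trained or inverted."""
    with torch.no_grad():
        mask = torch.zeros(conv.out_channels, device=conv.weight.device)
        mask[keep] = 1.0
        conv.weight.mul_(mask.view(-1, 1, 1, 1))  # zero non-selected filters
        if conv.bias is not None:
            conv.bias.mul_(mask)
    conv.weight.requires_grad_(False)  # freeze the kept filters
    if conv.bias is not None:
        conv.bias.requires_grad_(False)
    return conv

# Example: keep filters 0 and 3 of a first convolutional block; all
# subsequent layers of the network remain free for further training.
frontend = freeze_anonymizing_filters(nn.Conv2d(1, 8, kernel_size=5), keep=[0, 3])
```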

[00320] In example embodiments, systems 100 or 1800 may be configured to save and transmit only the images associated with deemed violators of vehicle occupancy rules, thereby further minimizing the scale of possible breaches.

[00321] FIG. 12 is an architecture diagram 1200 of the system 100, according to example embodiments.

[00322] In FIG. 12, at step 1202, the vehicle detector(s) 114 detects and timestamps LIDAR data at high frequency.

[00323] At step 1204, the system 100 processes the received LIDAR data with a signal processing algorithm to detect a passing vehicle with low latency in one or more Lanes of Interest (Lol). In example embodiments, as described, the signal processing techniques determine whether the detected range changes.
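
By way of a non-limiting illustration, a minimal sketch of such a range-change detection (the window size and drop threshold below are illustrative assumptions, not values from the disclosure) may be:

```python
from collections import deque

class RangeChangeTrigger:
    """Detect a passing vehicle from a stream of LIDAR range readings.

    Keeps a rolling baseline of the open-road range; a reading that drops
    sufficiently below the baseline is treated as a vehicle in the lane.
    """

    def __init__(self, window: int = 50, drop_threshold_m: float = 1.5):
        self.history = deque(maxlen=window)
        self.drop_threshold_m = drop_threshold_m

    def update(self, range_m: float) -> bool:
        baseline = (sum(self.history) / len(self.history)) if self.history else range_m
        triggered = (baseline - range_m) > self.drop_threshold_m
        if not triggered:
            # Only open-road readings update the baseline, so rain or dust
            # spikes that do trigger are not folded into it.
            self.history.append(range_m)
        return triggered
```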

[00324] At step 1206, the imaging device(s) 118 are activated to capture images and light emitter(s) 116 are activated (shown as camera trigger 1218 and flash trigger 1220). In example embodiments, imaging device(s) 118 and light emitter(s) 116 are activated simultaneously.

[00325] At step 1208, the computing device 102 detects features of interest in the captured images. For example, computing device 102 may perform method 900.

[00326] At step 1210, the computing device 102 determines the number of occupants in the regions of interest of step 1208.

[00327] At step 1212, optionally, the computing device 102 may store all or some of the received and processed data. For example, the computing device 102 may store the received images into database 112, including a timestamp and metadata (number of people, debugging data, camera parameters, LIDAR triggering information, etc.).

[00328] At step 1214, the computing device 102 may transmit the stored data to a web server, such as the web server of a system operator.

[00329] The web server, which may be a separate computing device 102, remote from the computing device located on the roadside unit, may run in parallel to the roadside system (e.g., system 514) to access the latest acquisitions and inspect results from the system. The web server can also be used to tune configuration parameters such as shutter time, camera gain, and desired Lanes of Interest for the roadside system.

[00330] According to example embodiments, the computing device 102 may determine at step 1204, or at any point after the vehicle detector(s) 114 has triggered, whether the vehicle detector(s) 114 was correct in determining a vehicle detection, referred to as trigger accuracy. Trigger accuracy may be an important aspect that determines overall performance of system 100.

[00331] Trigger accuracy may be represented by the trade-off between two error types: false triggers (triggering when no vehicle is there, typically because of rain, dust or snow) and missed triggers (not triggering when a vehicle is in fact there).

[00332] The computing device 102 can be configured to reject false triggers (e.g., if no vehicle is present in the set of acquired images, said images are simply discarded), as false triggers can cause premature aging of the system.

[00333] In example embodiments, the system 100 is trained to reduce both false triggers and missed triggers to a minimum, as demonstrated in example field tests.

[00334] In example embodiments, the computing device 102 may determine at step 1204 whether the occupant detector 110 was correct in determining a vehicle occupation, based on the system’s 100 ability to overcome dark windows. For example, the system 100 may use ultra-high-power narrowband narrow-field infrared (IR) light emitter(s) 116 with a matched camera sensor and filters on the imaging device(s) 118. The light emitter(s) 116 may use a light wavelength that simultaneously maximizes penetration of window tint and minimizes interference from the sun; in example embodiments, the system 100 can have two large LED panel light emitter(s) 116 capable of penetrating window tint at a distance of up to 9 meters. The occupant detector 110 may review the images captured when a detection signal is received and determine whether a vehicle is present in them.

[00335] In example embodiments, the computing device 102 may determine at step 1204 whether the occupant detector 110 was correct in determining a vehicle detection, based on the system’s 100 ability to distinguish between humans and other objects. For example, the occupant detector 110 may be a deep neural network, trained on training data consisting of over 250,000 examples specific to the vehicle occupancy detection (VOD) case, as well as millions of training images outside of the VOD context for further robustness. As a result, the occupant detector 110 may be able to distinguish human beings in some of the most difficult poses from animals or other objects. According to example embodiments, the use of infrared imaging may be able to distinguish human beings from dolls, as doll skin material may react to infrared illumination in a way sufficiently different from human skin.

[00336] In example embodiments, the computing device 102 may determine at step 1204, or at any point thereafter, whether the occupant detector 110 was correct in determining a vehicle detection, based on the system’s 100 ability to detect possible obstructions (e.g., curtains, pants hanging, etc.) for further review. For example, rather than reporting no occupants where curtains are shown, the computing device 102 may be trained to flag images with detected curtains for further validation.

[00337] At step 1216, the computing device 102 may upload the stored data with a data upload service.

[00338] The computing device 102 may prompt a user to validate the occupancy detection in response to an image being flagged, or in response to a suspected trigger accuracy error. Referring now to FIG. 13, an example user interface 1300 for validating vehicle occupancy is shown.

[00339] In the shown embodiment, user interface 1300 includes an image display panel 1302, an image display slide 1304, and image enhancement panels 1306-1 and 1306-2.

[00340] The image display slide 1304 may be used by the user to control the image displayed in the image display panel 1302. In the shown embodiment, five images are associated with an occupancy detection, and the slider allows for changing the image display panel 1302 to any of the five images.

[00341] Each of image enhancement panels 1306-1 and 1306-2 may show an enlarged view of a portion of the image shown in image display panel 1302 for easier viewing. In some embodiments, the image enhancement panels 1306-1 and 1306-2 show the previous and subsequent image associated with the particular vehicle object detection.

[00342] Validation may consist of receiving user input associated with any one of occupant validation input 1308-1, occupant validation input 1308-2, occupant validation input 1308-3, and occupant validation input 1308-4 (hereinafter the occupant validation inputs). User selection of the occupant validation inputs can indicate the correct number of occupants in the images shown in image display panel 1302. For example, user selection of occupant validation input 1308-1 or occupant validation input 1308-2 can be indicative of 1 or 2 occupants, or of more than 3 occupants. In example embodiments, various numbers of occupant validation inputs are contemplated.

[00343] User selection of occupant validation input 1308-3, which indicates that the image cannot be validated, can trigger the generation and display of a drop-down menu which includes selectable elements for indicating the reason the image cannot be validated. In some embodiments, the drop-down menu includes the following reasons: the image was too dark, there was too much glare, the image was obstructed, and the tint was not overcome.

[00344] Occupant validation input 1308-4 may be used to cycle between occupancy detections. Exit element 1310 can be used to stop validation of the selected image.

[00345] In example embodiments, a further imaging device (not shown) is used with the system 100, which will provide image data used to validate the detected vehicle occupancy. For example, imaging device(s) 118-3 of FIG. 7 may be used as this further imaging device. Images captured by the imaging device may be used for monitoring and tuning purposes, and accessed through interface 1300.

[00346] FIG. 22 is an architecture diagram 2200 of the system 1800, according to example embodiments. Architecture diagram 2200 operates in a substantially similar manner to architecture diagram 1200, except that architecture diagram 2200 is configured to detect vehicles in the images using vehicle detector 2207. System details related to corresponding components of architecture diagram 1200, subject to the operational differences that would be appreciated by the skilled person, apply equally to components of architecture diagram 2200.

[00347] In FIG. 22, at step 2206, the imaging device(s) 118 are continuously activated to capture images and light emitter(s) 116 are continuously activated (shown as camera trigger 2218 and flash trigger 2220 to, for example, emit light and capture images in corresponding patterns). In example embodiments, imaging device(s) 118 and light emitter(s) 116 are activated simultaneously.

[00348] At step 2207, the computing device 1802 detects vehicles in the plurality of images continuously captured by the imaging device(s) 118. In some embodiments, the computing device 1802 may detect when a vehicle is entering a certain position in the field of view and exiting another position in the field of view, and provide the series of photos inclusive of and between these moments for further analysis. In some embodiments, the computing device 1802 may determine when a vehicle is in an optimal lighting position within the series of photos and provide those for further analysis.
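
By way of a non-limiting illustration, a minimal sketch of selecting the series of photos between the entry and exit positions (the per-frame vehicle coordinates and the two positions are assumed inputs, not specified by the disclosure) may be:

```python
def select_vehicle_frames(frames: list, x_positions: list[float],
                          enter_x: float, exit_x: float) -> list:
    """Return the frames captured while the vehicle moves between two
    positions in the field of view. Here x_positions holds, for each
    frame, an illustrative horizontal coordinate of the detected vehicle
    (e.g., the pixel x of its bounding-box centroid)."""
    lo, hi = min(enter_x, exit_x), max(enter_x, exit_x)
    return [frame for frame, x in zip(frames, x_positions) if lo <= x <= hi]
```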

[00349] At step 2208, the computing device 1802 detects features of interest in the captured images. For example, computing device 1802 may perform method 900.

[00350] At step 2210, the computing device 1802 determines the number of occupants in the regions of interest of step 2208.

[00351] At step 2212, optionally, the computing device 1802 may store all or some of the received and processed data. For example, the computing device 1802 may store the received images into database 112, including a timestamp and metadata (number of people, debugging data, camera parameters, etc.).

[00352] At step 2214, the computing device 1802 may transmit the stored data to a web server, such as the web server of a system operator.

[00353] At step 2216, the computing device 1802 may upload the stored data with a data upload service.

[00354] In some embodiments, a vehicle detector(s) 114 could detect and timestamp LIDAR data at high frequency, and the system 1800 could process the received LIDAR data with a signal processing algorithm to detect a passing vehicle with low latency in one or more Lanes of Interest (Lol). The detection of said passing vehicle could, in some embodiments, modify one or more parameters of the operation of light emitter(s) 116 and/or imaging device 118 (e.g., increase the illumination intensity or increase the rate of image capture).

[00355] FIG. 14 is a schematic diagram of computing device 102, in accordance with an embodiment.

[00356] As depicted, computing device 102 includes at least one processor 1402, memory 1404, at least one I/O interface 1406, and at least one network interface 1408.

[00357] Each processor 1402 may be, for example, any type of microprocessor or microcontroller (e.g., a special-purpose microprocessor or microcontroller), a digital signal processing (DSP) processor, an integrated circuit, a field programmable gate array (FPGA), a reconfigurable processor, a programmable read-only memory (PROM), or any combination thereof.

[00358] Memory 1404 may include a suitable combination of any type of computer memory that is located either internally or externally such as, for example, random-access memory (RAM), read-only memory (ROM), compact disc read-only memory (CDROM), electro-optical memory, magneto-optical memory, erasable programmable read-only memory (EPROM), electrically-erasable programmable read-only memory (EEPROM), Ferroelectric RAM (FRAM), or the like.

[00359] Each I/O interface 1406 enables computing device 102 to interconnect with one or more input devices, such as a keyboard, mouse, camera, touch screen and a microphone, or with one or more output devices such as a display screen and a speaker.

[00360] Each network interface 1408 enables computing device 102 to communicate with other components, to exchange data with other components, to access and connect to network resources, to serve applications, and perform other computing applications by connecting to a network (or multiple networks) capable of carrying data including the Internet, Ethernet, plain old telephone service (POTS) line, public switched telephone network (PSTN), integrated services digital network (ISDN), digital subscriber line (DSL), coaxial cable, fiber optics, satellite, mobile, wireless (e.g., Wi-Fi, WiMAX), SS7 signaling network, fixed line, local area network, wide area network, and others, including any combination of these.

[00361] For simplicity only, one computing device 102 is shown but computing device 102 may include multiple computing devices 102. The computing devices 102 may be the same or different types of devices. The computing devices 102 may be connected in various ways including directly coupled, indirectly coupled via a network, and distributed over a wide geographic area and connected via a network (which may be referred to as “cloud computing”).

[00362] For example, and without limitation, a computing device 102 may be a server, network appliance, set-top box, embedded device, computer expansion module, personal computer, laptop, personal data assistant, cellular telephone, smartphone device, UMPC tablet, video display terminal, gaming console, or any other computing device capable of being configured to carry out the methods described herein.

[00363] In some embodiments, a computing device 102 or 1802 may function as a client device, or data source.

[00364] In some embodiments, each of the vehicle detector controller 104, the light emitter controller 106, the imaging device controller 108, the occupant detector 110, and the second imaging device controller 122 are a separate computing device 102. In some embodiments, the vehicle detector controller 104, the light emitter controller 106, the imaging device controller 108, the occupant detector 110, and the second imaging device controller 122 are operated by a single computing device 102 having a separate integrated circuit for each of the said components, or may be implemented by separate computing devices 102. Various combinations of software and hardware implementations of the vehicle detector controller 104, the light emitter controller 106, the imaging device controller 108, the occupant detector 110, and the second imaging device controller 122 are contemplated. In some embodiments, all or parts of the vehicle detector controller 104, the light emitter controller 106, the imaging device controller 108, the occupant detector 110, and the second imaging device controller 122 may be implemented using conventional programming languages such as Java, J#, C, C++, C#, Perl, Visual Basic, Ruby, Scala, etc. In some embodiments, these components of system 100 may be in the form of one or more executable programs, scripts, routines, statically/dynamically linkable libraries, or the like.

[00365] With respect to the confidence interval, training of the occupant detector 110 may include distinguishing between false positives and false negatives.

[00366] The occupant detector’s 110 accuracy (alternatively referred to as performance) may be assessed (for example, during training) using calculations of False Positives (FP) and False Negatives (FN), which may vary depending on the application.

[00367] Simple 2-stage Model - High vs. Low Occupancy

[00368] The most common objective of the occupant detector 110 is to distinguish between a high-occupancy vehicle (for example, a vehicle with 2 or more occupants) and a low-occupancy vehicle (for example, a vehicle with only a driver - single occupant). The system performance can be expressed as a "confusion matrix" of 4 numbers (N1, N2, N3, N4). The 4 numbers in the confusion matrix are independent nonnegative integer numbers; they do not need to add up to 100% row-wise or column-wise. A confusion matrix can include the following:

                            Predicted Occupancy
                            Low                     High
Actual        Low           correct                 false negative (FN)
occupancy     High          false positive (FP)     correct

Table 1: Confusion Matrix

[00369] The confusion matrix example shows the rates at which actual “x occupant vehicles” are identified as “y occupant vehicles” for all possible combinations of “x” and “y”. The confusion matrix (and therefore system performance) is completely characterized by two types of errors, namely: (1) False Negatives (top-right corner, “FN”): a low-occupancy vehicle is incorrectly seen as high-occupancy; in a high occupancy vehicle (HOV) context, this means the percentage of violators that are given a "free pass"; and (2) False Positives (bottom-left corner, “FP”): a high-occupancy vehicle is incorrectly seen as low-occupancy; in an HOV context, FP represents the percentage of honest road users that are wrongfully ticketed.
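
By way of a non-limiting illustration, and assuming the cell layout of Table 1 above with the four counts N1 through N4 read row-wise (this mapping is an assumption, not stated in the disclosure), the two error rates may be computed as follows:

```python
def error_rates(n1: int, n2: int, n3: int, n4: int) -> tuple[float, float]:
    """Compute (false-negative rate, false-positive rate) from the four
    confusion-matrix counts, assumed read row-wise from Table 1:
    n1 = actual low / predicted low,  n2 = actual low / predicted high,
    n3 = actual high / predicted low, n4 = actual high / predicted high.
    Assumes nonzero row totals."""
    fn_rate = n2 / (n1 + n2)  # violators given a "free pass"
    fp_rate = n3 / (n3 + n4)  # honest road users wrongfully ticketed
    return fn_rate, fp_rate
```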

[00370] The cells in the confusion matrix that represent correct guesses are related to the error rates as shown in Table 1. These errors are not a specific quality of the system 100 but are rather inherent to the nature of vehicle occupancy detection (VOD).

[00371] The system 100 may have the capability to adjust a relative weight of the FP errors and the FN errors before or during roadside deployment. Alternatively stated, the system 100 may trade one type of error for another depending on the configuration. The system 100 may be adjusted across a wide range of FN and FP variations. For example, according to some embodiments, the system 100 can be configured such that both types of errors (wrong tickets and free passes) are given equal importance. In other example embodiments, the system 100 can be set up in a mode where wrong tickets are given more importance and reduced at the expense of increased free passes. Multiple variations of relative weighting of the FN and FP errors are contemplated.

[00372] In example embodiments, the system 100, instead of determining the mutually exclusive “low-occupancy” or “high-occupancy”, may output (e.g., in a report) a continuous probability/confidence that can be normalized. The confidence/probability value provides an indication of how confident the system 100 is that the detected vehicle has low occupancy. A road operator may select to flag or ticket or take other actions with respect to all vehicles above the threshold confidence. If the threshold is normalized to represent sensible/meaningful numbers, the threshold can be determined or configured by operators of the system 100.

[00373] In example embodiments, the degree of confidence (or confidence value) can be discretized, such that various confidence values are associated with various pre-set use cases, or any number of operating modes may be configured by the operator. For example, a confidence threshold can be configured for a first mode of operation of system 100 in order to ticket individuals. The greater the confidence threshold, the less risk the tolling operator will have of creating a false positive. However, the tolling operator will operate with a higher chance of missing violations with such a high confidence threshold.
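
By way of a non-limiting illustration, a minimal sketch of such confidence thresholding with preset operating modes (the mode names and threshold values below are assumptions, not values from the disclosure) may be:

```python
# Illustrative preset modes; a higher threshold reduces wrongful tickets
# (false positives) at the cost of more missed violations (false negatives).
MODE_THRESHOLDS = {
    "balanced": 0.5,      # wrong tickets and free passes weighted equally
    "conservative": 0.9,  # strongly avoid wrongful tickets
}

def flag_for_enforcement(low_occupancy_confidence: float,
                         mode: str = "balanced") -> bool:
    """Flag a vehicle when the normalized confidence that it is
    low-occupancy meets the operator-configured threshold."""
    return low_occupancy_confidence >= MODE_THRESHOLDS[mode]
```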

[00374] Some example embodiments of system configuration for relative weight of the FP errors and the FN errors (alternatively referred to as modes) are further described below:

[00375] Example Mode A - In example mode A, the system 100 is deployed such that the occupant detector 110 is trained such that wrongfully identifying honest users has an equal importance to giving violators a free pass. For example, this configuration may be used in an area where a comparable number of both low-occupancy and high-occupancy vehicles is expected. In this mode, both FP and FN errors are treated as equally important.

Table 2: Mode A - Confusion Matrix for the system where both error types are considered of equal importance

[00376] In an example configuration in accordance with mode A, the test results are shown in Table 2 above, and there is a computed chance that the system will make an error and give either a "free pass" or a wrong ticket.

[00377] Example Mode B - In example mode B, the occupant detector 110 is trained to accept providing some violators a free pass in exchange for significantly reducing the number of honest users that are wrongfully ticketed. Example mode B may be advantageously deployed in a high-occupancy lane where it is expected that relatively more high-occupancy vehicles are present, such as HOV lanes, to increase faith in the system 100. In example mode B, the FP is reduced relative to the FN. Stated alternatively, the FP is reduced at the expense of increasing FN.

Table 3: Mode B - Confusion Matrix where false positives (honest road users being ticketed) are given more importance than false negatives (giving a violator a pass)

[00378] Table 3 shows example experimental results where the system 100 is trained to operate according to mode B, where false positives (honest road users being ticketed) are given more importance than false negatives (giving a violator a pass). As is shown in Table 3, this mode makes FEWER mistakes on actual 2+ occupant vehicles, though there remains a chance of giving a wrong ticket.

[00379] According to example embodiments, where the system 100 is set up in an HOV lane and the expectation is that the majority of road users are high-occupancy, the overall system accuracy may improve if mode B is employed.

[00380] In example embodiments, the system 100 can be configured to switch between example modes. For example, during an initial phase, it may be expected that the target lane will experience many cases of a single occupant within a detected vehicle travelling in the HOV lanes and violating the law, and therefore mode A may be employed. This initial phase may include occupants of road vehicles “testing out” the HOV lanes or the system 100, and the tolling system described herein may be configured to issue warnings to road users during the initial phase.

[00381] As understanding and use of the HOV lanes increases, it may be likely that the distribution of detected vehicles will shift such that the large majority of users are honest high-occupancy vehicles. At that time, the system 100 may be configured to operate according to mode B and achieve overall high accuracy. The system 100 uses a method to adaptively change the optimal trade-off between occupancy overcounting and undercounting errors based on traffic patterns and road user behavior. The system learns and adapts over time from toll or enforcement data gathered and fed back into the system 100 over consecutive time intervals of the system’s 100 operation on the road.
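
By way of a non-limiting illustration, one possible adaptation rule for the confidence threshold (the target rate, bounds, and step size below are illustrative assumptions, not values from the disclosure) may be:

```python
def adapt_threshold(threshold: float, observed_violation_rate: float,
                    target_low: float = 0.05, step: float = 0.02) -> float:
    """Illustrative adaptation: as the observed share of violators falls
    (road users learn the rules), raise the threshold so wrongful tickets
    become rarer (mode B behaviour); while violations remain common, keep
    it lower (mode A behaviour). Called once per feedback interval."""
    if observed_violation_rate < target_low:
        return min(0.95, threshold + step)
    return max(0.5, threshold - step)
```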

[00382] 3-stage Model

[00383] System 100 may be configured to detect the number of occupants in the vehicle, irrespective of a legal requirement for occupancy within a particular lane. In some embodiments, for example, the calculation for the vehicle occupancy accuracy may be more complex if multiple options are to be determined, such as distinguishing between 1, 2, and 3+ occupants, which are all equally important. The system 100 may be able to achieve the example performance shown in Table 4:

Table 4: Confusion Matrix when all 1, 2, 3+ errors are deemed equally important

[00384] Each of the different modes may be learned during a training stage for each system 100. The training stage can configure different operating parameters that correspond to the desired weighting of FP and FN, balancing the different types of errors and minimizing manual intervention. The system 100 can be adapted to a vast range of FP and FN conditions even after deployment.

[00385] The embodiments of the devices, systems and methods described herein may be implemented in a combination of both hardware and software. These embodiments may be implemented on programmable computers, each computer including at least one processor, a data storage system (including volatile memory or non-volatile memory or other data storage elements or a combination thereof), and at least one communication interface.

[00386] Figure 16 shows an example system with road-side units (iRSU) 1600 transmitting data to a cloud server 1602. The iRSU 1600 has an imaging device that captures successive images of vehicles moving along in a lane on a highway or road.

[00387] The systems described throughout the description can be packaged and miniaturized into road side units (iRSU) 1600 with adjustable mechanical and software components. Multiple such units 1600 can be deployed on a region of road (highway or urban streets) with different mechanical and software configurations. As a vehicle travels through the region of the road, the vehicle gets observed by these different units 1600. Each unit 1600 can have a different mechanical and software configuration, captures its own successive images from different angles and configurations, and makes its own occupancy prediction. Each unit 1600 uploads its unique prediction, and the confidence the unit has in the prediction, with a unique vehicle identifier (for example, a license plate) to a cloud server 1602. Software on the cloud server 1602 then fuses all data coming from different units 1600 for a single vehicle. This method produces a unified, higher-fidelity determination for each vehicle, which can then enable making high-accuracy toll or enforcement decisions. The system can be a server 1602 connected to a network of road side units 1600 capable of higher-level performance (e.g., swarm intelligence). This higher-level performance can be obtained because different units 1600 can have different configurations and can collectively make better decisions based on different fusion methods including, but not limited to, Bayesian estimation and different voting schemes.
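
By way of a non-limiting illustration, a minimal sketch of one such fusion method (confidence-weighted voting; Bayesian estimation is another option named above; the data layout below is an assumption) may be:

```python
def fuse_unit_predictions(predictions: list[tuple[int, float]]) -> int:
    """Fuse per-unit (occupancy, confidence) predictions for a single
    vehicle, matched across units 1600 by a unique vehicle identifier
    such as a license plate, using confidence-weighted voting."""
    votes: dict[int, float] = {}
    for occupancy, confidence in predictions:
        votes[occupancy] = votes.get(occupancy, 0.0) + confidence
    # Return the occupancy value with the greatest total confidence weight.
    return max(votes, key=votes.get)

# Three units observed the same vehicle: two say 2 occupants, one says 1.
assert fuse_unit_predictions([(2, 0.8), (1, 0.4), (2, 0.7)]) == 2
```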

[00388] The road-side unit 1600 is configured to capture images of vehicles travelling along a lane of a road. The road side unit can capture images from a fixed perspective. The successive images are, for each vehicle, analyzed to determine a likely vehicle occupancy. As a result of capturing multiple images from the fixed perspective, and further as a result of the images being captured from a roadside position, the consistency of images between installations may allow for more robust training and portable occupancy detection approaches that are adaptable to a variety of operating environments. The use of multiple images captured from a fixed roadside position also allows the system to generate a robust estimation of the vehicle occupancy without the need for expensive overhead systems that are difficult to install. The roadside system may require fewer parts, have lower maintenance costs, and be easier to deploy. A central server 1602 can connect to all units 1600 and collect data from the units 1600 for processing.

[00389] In some embodiments, there is provided a roadside occupancy detector for detecting vehicle occupancy of a vehicle travelling in an expected direction of travel along a road. The roadside occupancy detector (or unit 1600) can have a first roadside imaging device positioned on a roadside, having a first field of view of the road, the first field of view incident on a side of the vehicle when the vehicle is on the road within the first field of view. The unit 1600 has a first roadside light emitter emitting light towards vehicles in the first field of view. The unit 1600 has a roadside vehicle detector. In some embodiments, the unit 1600 has a processor, in communication with a memory, configured to: receive a signal from the roadside vehicle detector indicating that the vehicle is within or proximate, relative to the expected direction of vehicle travel, to the first field of view; command the first roadside light emitter to emit light according to a first pattern for a first duration; command the first roadside imaging device to capture images of the side of the vehicle according to a second pattern associated with the first pattern, during a second duration associated with the first duration; receive the captured images of the side of the vehicle from the first roadside imaging device; compute a vehicle occupancy of the vehicle by, in each of the captured images: determining one or more regions of interest of the vehicle in each of the captured images; determining the vehicle occupancy as a number of visible occupants in the one or more regions of interest; and determining a most likely number of occupants based on each determined vehicle occupancy; and transmit the vehicle occupancy to a monitoring system.

[00390] Figure 17 shows another example system with road side units 1600 transmitting data to a cloud server 1602. A set of units 1600 can connect to an intermediate server which can connect to the cloud server 1602, in some embodiments. The road side units 1600 can send data to the cloud server 1602 such as occupancy counts, confidence values, and enforcement decisions for observed vehicles.

[00391] Program code is applied to input data to perform the functions described herein and to generate output information. The output information is applied to one or more output devices. In some embodiments, the communication interface may be a network communication interface. In embodiments in which elements may be combined, the communication interface may be a software communication interface such as those for inter-process communication. In other embodiments, there may be a combination of communication interfaces implemented as hardware, software, and combination thereof.

[00392] In an aspect, a system for detecting occupancy of a vehicle in a road is disclosed. The system includes an imaging device adjacent to the road, having a field of view of the road. A side of the vehicle is in the field of view when the vehicle is on the road within the field of view. The system includes a first light emitter adjacent to the road and emitting light towards vehicles in the field of view. There can be a vehicle detector adjacent to the road and upstream of the imaging device relative to an expected direction of vehicle travel, for example. A processor, in communication with a memory, is configured to receive a signal from the vehicle detector, indicating that the vehicle is within or upstream, relative to the expected direction of vehicle travel, of the field of view. The processor transmits, in response to receiving the signal, a first command signal to the light emitter to emit light according to a first pattern for a first duration, and, a second command signal to the imaging device to capture images of the side of the vehicle according to a second pattern associated with the first pattern, for a second duration associated with the first duration. The processor receives the captured images of the side of the vehicle from the imaging device, and computes a vehicle occupancy of the vehicle based on the captured images. The processor transmits the vehicle occupancy to a monitoring system.

[00393] In example embodiments, the imaging device, the first light emitter, and the vehicle detector are attached to a mobile roadside structure adjacent to the road at a height taller than an expected barrier.

[00394] In example embodiments, the imaging device is positioned to have a yaw angle relative to a horizontal axis perpendicular to the expected direction of vehicle travel such that each of the images captured by the imaging device includes different perspectives of the vehicle based on the yaw angle. In example embodiments, the different perspectives are empirically determined so as to yield different angles of any occupants in a rear and front side window.

[00395] In some embodiments, the imaging device can capture multiple images of the vehicle so that occupants are seen from different perspectives as the vehicle travels horizontally across the field of view, which can be reflected in data extracted from the images. An imaging device can have a large horizontal field of view, and the system may be able to extract data for change of perspective by taking multiple successive images as the vehicle travels from one end of the horizontal field of view to the other, even with a zero yaw angle. However, having a large field of view may not always be possible or favorable for other reasons. Using a nonzero yaw angle may accentuate the change of perspective within a limited horizontal motion of the vehicle in the field of view. Accordingly, the system computes data corresponding to change of perspective and, in some embodiments, uses the yaw angle as an example.

[00396] In example embodiments, the system further includes a second imaging device, adjacent to the road, at a first height above the ground greater than a height above the ground of the imaging device. The second imaging device has a field of view of a second lane of the road, the second lane being further from the first imaging device than the first lane of the road. A side of a further vehicle is in the second field of view when the further vehicle is in the second lane within the second field of view. The system includes a second light emitter adjacent to the road and emitting light towards vehicles in the field of view of the second lane. The processor is further configured to receive a second signal from the vehicle detector indicating that a further vehicle is within or upstream, relative to the expected direction of vehicle travel, of the field of view of the second lane. The processor transmits, in response to receiving the second signal, a third command signal to the second light emitter to emit light according to a third pattern for a third duration, and a fourth command signal to the second imaging device to capture additional images of a side of the further vehicle according to a fourth pattern associated with the third pattern, for a fourth duration associated with the third duration. The processor receives the additional captured images of the side of the further vehicle from the second imaging device, and computes a vehicle occupancy of the further vehicle based on the additional captured images. The processor transmits the vehicle occupancy of the further vehicle to the monitoring system.

[00397] In example embodiments, to compute the vehicle occupancy of the vehicle, the processor is further configured to, in each of the captured images: determine one or more regions of interest of the vehicle in each of the plurality of captured images; reduce the plurality of captured images to the determined one or more regions of interest; determine a number of visible occupants in the reduced plurality of images; and determine a most likely number of occupants based on an occupant model most likely to fit all of the respective number of visible occupants in the captured images.

[00398] In example embodiments, the processor is further configured to monitor signals over time to determine an expected vehicle speed of the vehicle and adjust one or more parameters of the imaging device into a determined optimal configuration for capturing vehicles travelling the expected vehicle speed.

[00399] In example embodiments, the processor is further configured to monitor signals over time to determine an expected speed of the vehicle, and determine the first pattern and the first time window based on the expected vehicle speed.

[00400] In example embodiments, the processor is further configured to monitor signals over time to determine an expected speed of the vehicle, and determine one or more normalization parameters, the one or more normalization parameters adjusting the representation of the vehicle in the images to account for the expected vehicle speed. The processor generates a normalized vehicle model populated with each of the plurality of images processed with the respective normalization parameters to normalize the vehicle across the captured plurality of images.

[00401] In example embodiments, the system further includes a sensor for detecting ambient conditions, and the processor is further configured to receive ambient condition information from the sensor, determine an optimal configuration for the imaging device based on the received ambient condition, and transmit a further command signal to the imaging device to capture images according to the optimal configuration. In example embodiments, the optimal configuration is an imaging device exposure gain or aperture.

[00402] In example embodiments, the light emitter is an LED emitting infrared or near infrared light and the first pattern is 120 pulses per second.

[00403] In example embodiments, the system further includes a license plate imaging device, and the processor is further configured to transmit another command signal to the license plate imaging device to capture images of the rear of the vehicle, and determine a license plate based on the captured rear-end image.

[00404] In a further aspect, a method of configuring a system for detecting occupancy of a vehicle proximate to a road is disclosed. The method includes determining a distance from a target roadside mounting location to a target lane of the road, determining a width of the target lane, and determining a difference between the height of the ground at the target roadside mounting location and a height of the location of the target lane. The method includes determining a preferred mounting geometry of a light emitter, an imaging device, and a vehicle detector at the target roadside mounting location based on the width, the distance, and the difference; and installing the light emitter, the imaging device, and the vehicle detector to a mobile platform at the target roadside mounting location to enable the imaging device to capture successive images of a detected vehicle.
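
By way of a non-limiting illustration, one plausible computation of aiming angles from the three measured quantities (the disclosure does not specify these formulas; the pitch and yaw outputs below are assumptions) may be:

```python
import math

def mounting_angles(distance_m: float, lane_width_m: float,
                    height_difference_m: float) -> tuple[float, float]:
    """Illustrative geometry helper: aim the imaging device at the centre
    of the target lane. Returns (pitch, yaw) in degrees. A nonzero yaw
    accentuates perspective change as the vehicle crosses the field of view.
    """
    centre_distance = distance_m + lane_width_m / 2.0
    pitch = math.degrees(math.atan2(height_difference_m, centre_distance))
    yaw = math.degrees(math.atan2(lane_width_m / 2.0, centre_distance))
    return pitch, yaw
```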

[00405] The method, in example embodiments, includes monitoring mounted positions of the light emitter, the imaging device, and the vehicle detector to determine whether the mounted positions coincide with the preferred mounting geometry, and, in response to determining the mounted positions of the light emitter, the imaging device, and the vehicle detector do not coincide with the preferred mounting geometry, displaying an error message on a display.

[00406] In example embodiments, the error message includes a correction parameter. Accordingly, the method can involve computing a correction parameter and providing visual guidance using augmented reality avatars on a display device. For example, the visual guidance can involve showing semi-transparent virtual lane markings overlaid on the camera feed and asking the calibration technician to move the camera until the virtual lane markings match and cover the real lane markings on the road in the image feed displayed on the screen.

[00407] In a further aspect, there is provided a system for detecting vehicle occupancy. The system involves a first roadside imaging device having a first field of view, a first roadside light emitter emitting light in the first field of view, a roadside vehicle detector, and a processor in communication with a memory. The processor is configured to receive a signal from the roadside vehicle detector; command the first roadside light emitter to emit light according to a first pattern for a first duration, command the first roadside imaging device to capture one or more images according to a second pattern associated with the first pattern, during a second duration associated with the first duration; receive the captured images from the first roadside imaging device, compute a vehicle occupancy by, in each of the captured images, determining one or more regions of interest in each of the captured images, determining the vehicle occupancy based on the one or more regions of interest, and determining a most likely number of occupants based on each determined vehicle occupancy, and transmit the vehicle occupancy to a monitoring system or store the vehicle occupancy in memory.

[00408] In some embodiments, the first roadside imaging device is positioned to extract data for different perspectives across the field of view and at least some of the images captured by the first roadside imaging device include the different perspectives.

[00409] In some embodiments, the processor is configured to compute a yaw angle relative to a horizontal axis perpendicular to an expected direction. The images captured by the first roadside imaging device include the different perspectives based on the yaw angle.

[00410] In some embodiments, the processor, to compute the vehicle occupancy, is configured to discard uninteresting regions of the plurality of captured images to generate subsets of the plurality of captured images and determine a number of visible occupants based on determining one or more regions of interest in the respective subset of the plurality of captured images.

[00411] In some embodiments, the first roadside imaging device, the first roadside light emitter, and the vehicle detector are attached to a mobile roadside structure.

[00412] In some embodiments, the system has a second roadside imaging device, above the first roadside imaging device, the second roadside imaging device having a second field of view, and a second roadside light emitter emitting light in the second field of view. The processor is further configured to receive another signal from the vehicle detector; command the second roadside light emitter to emit light according to a third pattern for a third duration, command the second roadside imaging device to capture additional images according to a fourth pattern associated with the third pattern, during a fourth duration associated with the third duration, receive the additional captured images from the second roadside imaging device, compute another vehicle occupancy by, in each of the additional captured images, determining one or more regions of interest in each of the additional captured images, determining the vehicle occupancy using the one or more regions of interest, and determining a most likely number of occupants based on each determined vehicle occupancy of the further vehicle, and transmit the vehicle occupancy to the monitoring system.

[00413] In some embodiments, the first field of view and the second field of view overlap, and the processor is further configured to determine the one or more regions of interest in the one or more additional captured images, determine a further number of visible occupants in the one or more additional captured images in the one or more regions of interest, and determine the most likely number of occupants based on each determined vehicle occupancy and each determined further number of visible occupants.

[00414] In some embodiments, the processor is further configured to monitor, over time, a plurality of signals from the roadside vehicle detector to determine an expected vehicle speed and adjust one or more parameters of the first roadside imaging device or the first light emitter into a determined optimal configuration for capturing images based on the expected vehicle speed.

[00415] In some embodiments, the processor is further configured to monitor, over time, a plurality of signals from the roadside vehicle detector to determine an expected vehicle speed, and determine the first pattern and the first time window based on the expected vehicle speed.

[00416] In some embodiments, the system involves a sensor for detecting ambient conditions. The processor is further configured to receive ambient condition information from the sensor, determine an optimal configuration for the imaging device based on the received ambient condition, and transmit a further command signal to the imaging device to capture images according to the optimal configuration.

[00417] In some embodiments, the light emitter is an LED emitting infrared or near infrared light, and the first pattern is 120 pulses per second.

[00418] In another aspect, there is provided a method for detecting vehicle occupancy. The method involves receiving a signal from a detector based on a first field of view of a first roadside imaging device, commanding a first roadside light emitter to emit light according to a first pattern for a first duration, commanding the first roadside imaging device to capture images according to a second pattern associated with the first pattern, during a second duration associated with the first duration, receiving the captured images from the first roadside imaging device, computing a vehicle occupancy by, in each of the captured images, determining one or more regions of interest in each of the captured images, determining the vehicle occupancy in the one or more regions of interest, and determining a most likely number of occupants based on each determined vehicle occupancy, and transmitting the most likely number of occupants to a monitoring system or storing the vehicle occupancy in memory.

[00419] In some embodiments, the method involves discarding uninteresting regions of the plurality of captured images to generate subsets of the plurality of captured images and determining the number of occupants based on determining one or more regions of interest in the respective subset of the plurality of captured images.

[00420] In some embodiments, the one or more regions of interest include at least one of a rear side window and a front side window.

[00421] In some embodiments, each of the captured images includes different perspectives based on a yaw angle which encourages image variation.

[00422] In some embodiments, the method involves commanding a second roadside imaging device to capture additional images from a second field of view according to a fourth pattern associated with the first pattern, for a fourth duration associated with the first duration, and receiving the additional captured images from the second roadside imaging device. Computing the vehicle occupancy further comprises, for each of the additional captured images, determining one or more additional regions of interest of the vehicle, determining the vehicle occupancy in the additional one or more regions of interest, determining the most likely number of occupants based on each of the number of visible occupants and the further number of visible occupants, and transmitting the vehicle occupancy to the monitoring system.

[00423] In some embodiments, the method involves receiving a signal from the detector based on the second field of view, commanding a second roadside light emitter to emit light according to a third pattern for a third duration, commanding the second roadside imaging device to capture additional images according to a fourth pattern associated with the third pattern, during a fourth duration associated with the third duration, receiving the additional captured images from the second roadside imaging device, computing a further vehicle occupancy by, in each of the additional captured images, determining one or more further regions of interest in each of the additional captured images and determining the further vehicle occupancy based on the one or more further regions of interest, determining a most likely number of occupants based on each determined further vehicle occupancy, and transmitting the most likely number of occupants to the monitoring system.

[00424] In some embodiments, the method involves computing a correction parameter and providing visual guidance using augmented reality avatars on a display device.

[00425] In some embodiments, the method involves monitoring, over time, a plurality of signals from the roadside vehicle detector to determine an expected vehicle speed and adjusting one or more parameters of the first roadside imaging device or the first light emitter into a determined adjusted configuration for capturing images based on the expected vehicle speed.

[00426] In some embodiments, the method involves monitoring, over time, a plurality of signals from the roadside vehicle detector to determine an expected vehicle speed and determining the first pattern and the first time window based on the expected vehicle speed.

[00427] In another aspect, there is provided a system for detecting vehicle occupancy. The system involves a first roadside imaging device having a first field of view, a first roadside light emitter emitting light in the first field of view, and a processor in communication with a memory. The processor is configured to command the first roadside light emitter to emit light according to a first pattern for a first duration, capture, using the first roadside imaging device, one or more images according to a second pattern associated with the first pattern, during a second duration associated with the first duration, receive the captured images from the first roadside imaging device, compute a vehicle occupancy by, in each of the captured images, determining one or more regions of interest in each of the captured images, determining the vehicle occupancy based on the one or more regions of interest, and determining a most likely number of occupants based on each determined vehicle occupancy, and transmit the vehicle occupancy to a monitoring system or store the vehicle occupancy in memory.

[00428] In some embodiments, the processor is further configured to detect the vehicle in one or more images of the captured images from the first roadside imaging device.

[00429] In some embodiments, detecting the vehicle in one or more images of the captured images involves detecting a first image of the captured images where the vehicle is at a first position in the first field of view, detecting a second image of the captured images where the vehicle is at a second position in the first field of view, and generating a series of images of the vehicle using one or more of the first image, zero or more images captured between the first and second images, and the second image.

[00430] In some embodiments, the generating a series of images of the vehicle comprises generating a series of uniformly distanced images of the vehicle.

[00431] In some embodiments, the system involves a roadside vehicle detector. The processor is further configured to receive a signal from the roadside vehicle detector; and adjust one or more parameters of the first roadside imaging device or the first light emitter based on the signal from the roadside vehicle detector.

[00432] In some embodiments, the first roadside imaging device is positioned to extract data for different perspectives across the field of view; and at least some of the images captured by the first roadside imaging device include the different perspectives.

[00433] In some embodiments, the processor is configured to compute a yaw angle relative to a horizontal axis perpendicular to an expected direction, wherein the images captured by the first roadside imaging device include the different perspectives based on the yaw angle.

[00434] In some embodiments, the processor, to compute the vehicle occupancy, is configured to discard uninteresting regions of the plurality of captured images to generate subsets of the plurality of captured images and determine a number of visible occupants based on determining one or more regions of interest in the respective subset of the plurality of captured images.

[00435] In some embodiments, the first roadside imaging device, the first roadside light emitter, and the vehicle detector are attached to a mobile roadside structure.

[00436] In some embodiments, the system involves a second roadside imaging device, above the first roadside imaging device, the second roadside imaging device having a second field of view and a second roadside light emitter emitting light in the second field of view. The processor is further configured to command the second roadside light emitter to emit light according to a third pattern for a third duration, capture, using the second roadside imaging device, additional images according to a fourth pattern associated with the third pattern, during a fourth duration associated with the third duration, receive the additional captured images from the second roadside imaging device, compute another vehicle occupancy by, in each of the additional captured images, determining one or more regions of interest in each of the additional captured images, determining the vehicle occupancy using the one or more regions of interest, and determining a most likely number of occupants based on each determined vehicle occupancy of the further vehicle, and transmit the vehicle occupancy to the monitoring system.

[00437] In some embodiments, the first field of view and the second field of view overlap, and the processor is further configured to determine the one or more regions of interest in the one or more additional captured images, determine a further number of visible occupants in the one or more regions of interest of the one or more additional captured images, and determine the most likely number of occupants based on each determined vehicle occupancy and each determined further number of visible occupants.
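
The fusion rule is left open by the text; one simple, hedged sketch pools the per-image counts from both devices and votes:

    from collections import Counter

    def fuse_occupancy(counts_cam1, counts_cam2):
        """Combine per-image occupant counts from two imaging devices
        with overlapping fields of view into one most likely count."""
        pooled = list(counts_cam1) + list(counts_cam2)
        if not pooled:
            return None
        return Counter(pooled).most_common(1)[0][0]

Taking the maximum rather than the mode is another defensible rule, since each camera may miss occupants occluded from its own vantage point.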

[00438] In some embodiments, the processor is further configured to, monitor, over time, a plurality of signals from the roadside vehicle detector to determine an expected vehicle speed and adjust one or more parameters of the first roadside imaging device or the first light emitter into a determined optimal configuration for capturing images based on the expected vehicle speed.
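
A sketch of the speed-monitoring step, assuming a two-beam detector with a known beam spacing (the spacing and scale factors below are illustrative, not disclosed values):

    def expected_speed_mps(event_pairs, spacing_m=2.0, window=20):
        """Median speed over the last `window` vehicles, each vehicle
        contributing a (t_first_beam, t_second_beam) timestamp pair."""
        speeds = sorted(spacing_m / (t2 - t1)
                        for t1, t2 in event_pairs[-window:] if t2 > t1)
        return speeds[len(speeds) // 2] if speeds else None

    def exposure_for_speed(speed_mps, blur_budget_px=2.0, px_per_m=400.0):
        """Cap exposure (seconds) so a vehicle at speed_mps smears at
        most blur_budget_px pixels on the sensor."""
        return blur_budget_px / (px_per_m * speed_mps)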

[00439] In some embodiments, the processor is further configured to monitor, over time, a plurality of signals from the roadside vehicle detector to determine an expected vehicle speed and determine the first pattern and the first duration based on the expected vehicle speed.

[00440] In some embodiments, the system involves a sensor for detecting ambient conditions. The processor is further configured to: receive ambient condition information from the sensor, determine an optimal configuration for the imaging device based on the received ambient condition information, and transmit a further command signal to the imaging device to capture images according to the optimal configuration.

[00441] In some embodiments, the light emitter is an LED emitting infrared or near-infrared light, and the first pattern is 120 pulses per second.
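
Taking the 120 pulses-per-second embodiment above, the pairing of the first pattern (emitter pulses) with the second pattern (captures) might be sketched as follows; the emitter and camera interfaces are hypothetical, and a real deployment would use hardware triggering rather than software pacing:

    import time

    def strobe_and_capture(emitter, camera, pulses_per_s=120, duration_s=0.5):
        """Fire the IR / near-IR emitter at pulses_per_s and trigger one
        exposure per pulse for duration_s seconds."""
        period = 1.0 / pulses_per_s
        frames = []
        t_end = time.monotonic() + duration_s
        while time.monotonic() < t_end:
            emitter.pulse()                  # one flash of the first pattern
            frames.append(camera.trigger())  # one capture of the second pattern
            time.sleep(period)               # crude software pacing only
        return frames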

[00442] In some embodiments, the processor is further configured to anonymize the captured images.
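
The anonymization technique is not specified; a minimal sketch using OpenCV's stock Haar face detector to blur faces before storage or transmission:

    import cv2  # OpenCV, assumed available

    _FACE_CASCADE = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def anonymize(image):
        """Blur every detected face in a BGR image, in place."""
        gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in _FACE_CASCADE.detectMultiScale(gray, 1.1, 5):
            image[y:y + h, x:x + w] = cv2.GaussianBlur(
                image[y:y + h, x:x + w], (51, 51), 0)
        return image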

[00443] In accordance with another aspect, there is provided a method for detecting vehicle occupancy. The method involves: commanding a first roadside light emitter to emit light according to a first pattern for a first duration; capturing, using a first roadside imaging device, images according to a second pattern associated with the first pattern, during a second duration associated with the first duration; receiving the captured images from the first roadside imaging device; computing a vehicle occupancy by, in each of the captured images, determining one or more regions of interest and determining the vehicle occupancy in the one or more regions of interest, and by determining a most likely number of occupants based on each determined vehicle occupancy; and transmitting the most likely number of occupants to a monitoring system or storing the vehicle occupancy in memory.

[00444] In some embodiments, the method involves detecting the vehicle in one or more images of the captured images from the first roadside imaging device.

[00445] In some embodiments, detecting the vehicle in one or more of the captured images involves detecting a first image of the captured images where the vehicle is at a first position in the first field of view, detecting a second image of the captured images where the vehicle is at a second position in the first field of view, and generating a series of images of the vehicle using one or more of the first image, zero or more images captured between the first and second images, and the second image.

[00446] In some embodiments, generating the series of images of the vehicle involves generating a series of uniformly distanced images of the vehicle.

[00447] In some embodiments, the method involves receiving a signal from a roadside vehicle detector indicating that the vehicle is within, or proximate, relative to an expected direction of vehicle travel, to a first field of view of a first roadside imaging device, and adjusting one or more parameters of the first roadside imaging device or the first light emitter based on the signal.

[00448] In some embodiments, the method involves discarding uninteresting regions of the plurality of captured images to generate subsets of the plurality of captured images and determining the number of occupants based on determining one or more regions of interest in the respective subset of the plurality of captured images.

[00449] In some embodiments, the one or more regions of interest include at least one of a rear side window and a front side window.

[00450] In some embodiments, each of the captured images includes different perspectives based on a yaw angle, which promotes image variation across the captured images.

[00451] In some embodiments, the method involves capturing, using a second roadside imaging device, additional images from a second field of view according to a fourth pattern associated with the first pattern, for a fourth duration associated with the first duration, and receiving the additional captured images from the second roadside imaging device. Computing the vehicle occupancy further involves, for each of the additional captured images: determining one or more additional regions of interest of the vehicle; determining the vehicle occupancy in the additional one or more regions of interest; determining the most likely number of occupants based on each of the number of visible occupants and the further number of visible occupants; and transmitting the vehicle occupancy to the monitoring system.

[00452] In some embodiments, the method involves commanding a second roadside light emitter to emit light according to a third pattern for a third duration, capturing, using the second roadside imaging device, additional images according to a fourth pattern associated with the third pattern, during a fourth duration associated with the third duration, receiving the additional captured images from the second roadside imaging device, computing a further vehicle occupancy by, in each of the additional captured images, determining one or more further regions of interest and determining the further vehicle occupancy based on the one or more further regions of interest, and by determining a most likely number of occupants based on each determined further vehicle occupancy, and transmitting the most likely number of occupants to the monitoring system.

[00453] In some embodiments, the method involves computing a correction parameter and providing visual guidance using augmented reality avatars on a display device.

[00454] In some embodiments, the method involves monitoring, over time, a plurality of signals from the roadside vehicle detector to determine an expected vehicle speed and adjusting one or more parameters of the first roadside imaging device or the first light emitter into a determined adjusted configuration for capturing images based on the expected vehicle speed.

[00455] In some embodiments, the method involves monitoring, over time, a plurality of signals from the roadside vehicle detector to determine an expected vehicle speed and determining the first pattern and the first duration based on the expected vehicle speed.

[00456] In some embodiments, the method involves anonymizing the captured images.

[00457] Throughout the following discussion, numerous references will be made regarding servers, services, interfaces, portals, platforms, or other systems formed from computing devices. It should be appreciated that the use of such terms is deemed to represent one or more computing devices having at least one processor configured to execute software instructions stored on a tangible, non-transitory, computer-readable medium. For example, a server can include one or more computers operating as a web server, database server, or other type of computer server in a manner to fulfill described roles, responsibilities, or functions.

[00458] The following discussion provides many example embodiments. Although each embodiment represents a single combination of inventive elements, other examples may include all possible combinations of the disclosed elements. Thus, if one embodiment comprises elements A, B, and C, and a second embodiment comprises elements B and D, the remaining combinations of A, B, C, or D may also be used.

[00459] The term “connected” or “coupled to” may include both direct coupling (in which two elements that are coupled to each other contact each other) and indirect coupling (in which at least one additional element is located between the two elements).

[00460] The technical solution of embodiments may be in the form of a software product. The software product may be stored in a non-volatile or non-transitory storage medium, which can be a compact disk read-only memory (CD-ROM), a USB flash disk, or a removable hard disk. The software product includes a number of instructions that enable a computer device (personal computer, server, or network device) to execute the methods provided by the embodiments.

[00461] The embodiments described herein are implemented by physical computer hardware, including computing devices, servers, receivers, transmitters, processors, memory, displays, and networks. The embodiments described herein provide useful physical machines and particularly configured computer hardware arrangements. The embodiments described herein are directed to electronic machines and methods implemented by electronic machines adapted for processing and transforming electromagnetic signals which represent various types of information. The embodiments described herein pervasively and integrally relate to machines, and their uses; and the embodiments described herein have no meaning or practical applicability outside their use with computer hardware, machines, and various hardware components. Substituting the physical hardware particularly configured to implement various acts for non-physical hardware, using mental steps for example, may substantially affect the way the embodiments work. Such computer hardware limitations are clearly essential elements of the embodiments described herein, and they cannot be omitted or substituted for mental means without having a material effect on the operation and structure of the embodiments described herein. The computer hardware is essential to implement the various embodiments described herein and is not merely used to perform steps expeditiously and in an efficient manner.

[00462] Although the embodiments have been described in detail, it should be understood that various changes, substitutions and alterations can be made herein without departing from the scope as defined by the appended claims.

[00463] Moreover, the scope of the present application is not intended to be limited to the particular embodiments of the process, machine, manufacture, and composition of matter, means, methods and steps described in the specification. As one of ordinary skill in the art will readily appreciate from the disclosure of the present invention, processes, machines, manufacture, compositions of matter, means, methods, or steps, presently existing or later to be developed, that perform substantially the same function or achieve substantially the same result as the corresponding embodiments described herein may be utilized. Accordingly, the appended claims are intended to include within their scope such processes, machines, manufacture, compositions of matter, means, methods, or steps.

[00464] As can be understood, the examples described above and illustrated are intended to be exemplary only.