

Title:
TRAFFIC MONITORING, ANALYSIS, AND PREDICTION
Document Type and Number:
WIPO Patent Application WO/2023/049453
Kind Code:
A1
Abstract:
One or more devices obtain traffic data, such as video from intersection cameras. The one or more devices perform object detection and classification using the data. The one or more devices determine and/or output structured data using the detected and classified objects. The one or more devices calculate metrics using the structured data. The one or more devices may prepare processed data for visualization and/or other uses. The one or more devices may present the prepared processed data via one or more dashboards and/or otherwise use the prepared processed data.

Inventors:
D'ANDRE NICHOLAS (US)
Application Number:
PCT/US2022/044733
Publication Date:
March 30, 2023
Filing Date:
September 26, 2022
Assignee:
GRIDMATRIX INC (US)
International Classes:
G08G1/01; G08G1/015; G08G1/04; G08G1/16
Domestic Patent References:
WO2021073716A12021-04-22
WO2021061488A12021-04-01
Foreign References:
US20180253973A12018-09-06
US20200388156A12020-12-10
Attorney, Agent or Firm:
ATKINSON, David S. et al. (US)
Claims:
CLAIMS

What is claimed is:

1. A system for traffic monitoring, analysis, and prediction, comprising: a memory allocation configured to store at least one executable asset; and a processor allocation configured to access the memory allocation and execute the at least one executable asset to instantiate at least one service that: obtains traffic data; performs object detection and classification; determines structured data; calculates metrics using the structured data; prepares processed data for visualization from the metrics; and presents the prepared processed data via at least one dashboard.

2. The system of claim 1, wherein the at least one service determines the structured data by: determining a frame number for a frame of video; determining an intersection identifier for the frame of video; assigning a unique tracker identifier to each object detected in the frame of video; and determining coordinates of each object detected in the frame of video.

3. The system of claim 2, wherein the at least one service further determines the structured data by determining a class of each object detected in the frame of video.

4. The system of claim 1, wherein the at least one service calculates the metrics using the structured data by calculating a difference between one or more x or y positions for an object in different frames of video.

5. The system of claim 4, wherein the at least one service uses the difference along with times respectively associated with the different frames to calculate at least one of the metrics that is associated with a speed of the object.

6. The system of claim 5, wherein the speed is an average speed of the object during the video or a cumulative speed of the object.

7. The system of claim 1, wherein the at least one service calculates the metrics using the structured data by correlating a traffic light phase determined for a frame of video along with a determination that an object arrived at an intersection in the frame.

8. A system for traffic monitoring, analysis, and prediction, comprising: a memory allocation configured to store at least one executable asset; and a processor allocation configured to access the memory allocation and execute the at least one executable asset to instantiate at least one service that: retrieves structured data determined from point cloud data from LiDAR sensors used to monitor traffic; calculates metrics using the structured data; prepares processed data for visualization from the metrics; and presents the prepared processed data via at least one dashboard.

9. The system of claim 8, wherein the metrics include at least one of: vehicle volume; average speed; distance travelled; pedestrian volume; non-motor volume; light status on arrival; arrival phase; a route through an intersection; or a light time.

10. The system of claim 8, wherein the at least one service summons at least one vehicle using at least one of the metrics or the processed data.

11. The system of claim 8, wherein the at least one service tracks near misses/collisions using at least one of the metrics or the processed data.

12. The system of claim 8, wherein the at least one service determines a fastest route using at least one of the metrics or the processed data.

13. The system of claim 8, wherein the at least one service controls traffic signals to prioritize traffic using at least one of the metrics or the processed data.

14. The system of claim 8, wherein the at least one service determines a most efficient route using at least one of the metrics or the processed data.

15. A system for traffic monitoring, analysis, and prediction, comprising: a memory allocation configured to store at least one executable asset; and a processor allocation configured to access the memory allocation and execute the at least one executable asset to instantiate at least one service that: constructs a digital twin of an area of interest; retrieves structured data determined from traffic data for the area of interest; calculates metrics using the structured data; prepares processed data for visualization from the metrics; and presents the prepared processed data in a context of the digital twin via at least one dashboard that displays the digital twin.

16. The system of claim 15, wherein the at least one service simulates traffic via the at least one dashboard using the processed data.

17. The system of claim 16, wherein the at least one service simulates how a change affects traffic patterns.

18. The system of claim 17, wherein the change alters at least one of a simulation of: the traffic; a traffic signal; or a traffic condition.

19. The system of claim 15, wherein the digital twin includes multiple intersections.

20. The system of claim 19, wherein the at least one dashboard includes indicators selectable to display information for each of the multiple intersections.

Description:
TRAFFIC MONITORING, ANALYSIS, AND PREDICTION

CROSS-REFERENCE TO RELATED APPLICATIONS

[0001] This Patent Cooperation Treaty patent application claims priority to U.S. Nonprovisional Patent Application No. 17/952,068, filed September 23, 2022, titled “Traffic Monitoring, Analysis, and Prediction;” which claims the benefit under 35 U.S.C. § 119(e) of U.S. Provisional Patent Application No. 63/248,948, filed September 27, 2021, titled “Traffic Monitoring, Analysis, and Prediction;” U.S. Provisional Patent Application No. 63/315,200, filed March 1, 2022, titled “Simulation of a Digital Twin Intersection with Real Data;” U.S. Provisional Patent Application 63/318,442, filed March 10, 2022, titled “Traffic Near Miss Detection;” and U.S. Provisional Patent Application No. 63/320,010, filed March 15, 2022, titled “Traffic Near Miss/Collision Detection;” the contents of which are incorporated herein by reference as if fully disclosed herein.

FIELD

[0002] The described embodiments relate generally to traffic monitoring. More particularly, the present embodiments relate to traffic monitoring, analysis, and prediction.

BACKGROUND

[0003] All population areas experience traffic. Traffic may be motorized, non-motorized, and so on. Traffic may include cars, trucks, pedestrians, scooters, bicycles, and so on. Traffic appears to only increase as the population of the world continues to increase.

[0004] Some population areas, such as cities, use cameras and/or other traffic monitoring devices to capture data about traffic. This data may be used to evaluate congestion, traffic signal configurations, road layout, and so on.

SUMMARY

[0005] The present disclosure relates to traffic monitoring, analysis, and prediction. Traffic data may be obtained, such as video from intersection cameras. Object detection and classification may be performed using the data. Structured data may be determined and/or output using the detected and classified objects. Metrics may be calculated using the structured data. Processed data may be prepared for visualization and/or other uses. The prepared processed data may be presented via one or more dashboards and/or the prepared processed data may be otherwise used.

[0006] In various embodiments, a system for traffic monitoring, analysis, and prediction includes a memory allocation configured to store at least one executable asset and a processor allocation configured to access the memory allocation and execute the at least one executable asset to instantiate at least one service. The at least one service obtains traffic data, performs object detection and classification, determines structured data, calculates metrics using the structured data, prepares processed data for visualization from the metrics, and presents the prepared processed data via at least one dashboard.

[0007] In some examples, the at least one service determines the structured data by determining a frame number for a frame of video, determining an intersection identifier for the frame of video, assigning a unique tracker identifier to each object detected in the frame of video, and determining coordinates of each object detected in the frame of video. In a number of implementations of such examples, the at least one service further determines the structured data by determining a class of each object detected in the frame of video.

[0008] In various examples, the at least one service calculates the metrics using the structured data by calculating a difference between one or more x or y positions for an object in different frames of video. In some implementations of such examples, the at least one service uses the difference along with times respectively associated with the different frames to calculate at least one of the metrics that is associated with a speed of the object. In various implementations of such examples, the speed is an average speed of the object during the video or a cumulative speed of the object.
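
By way of a non-limiting illustration, the following Python sketch shows one way such a speed metric could be computed from per-frame positions; the function name, argument layout, and pixels-to-meters scale factor are assumptions introduced for the example rather than part of the described system.

```python
# Illustrative sketch only: estimate an object's speed from its positions in two
# frames, assuming a known pixels-to-meters scale for the camera view.
import math

def estimate_speed(x1, y1, t1, x2, y2, t2, meters_per_pixel=0.05):
    """Return speed in meters per second between two frame observations."""
    dx = x2 - x1                              # difference in x positions (pixels)
    dy = y2 - y1                              # difference in y positions (pixels)
    distance_m = math.hypot(dx, dy) * meters_per_pixel
    elapsed_s = t2 - t1                       # times associated with the two frames
    return distance_m / elapsed_s if elapsed_s > 0 else 0.0

# An average speed of the object during the video could then be the mean of the
# per-frame estimates produced by this function.
```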

[0009] In a number of examples, the at least one service calculates the metrics using the structured data by correlating a traffic light phase determined for a frame of video along with a determination that an object arrived at an intersection in the frame.
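
As a hedged illustration of that correlation, the sketch below counts arrivals on green versus red given a per-frame record of light phase and each tracked object's arrival frame; the data structures and names are hypothetical and not part of the disclosure.

```python
# Illustrative sketch only: correlate a per-frame traffic light phase with the
# frame in which each tracked object arrives at the intersection.
def count_arrivals_by_phase(tracks, light_phase_by_frame):
    """tracks: list of dicts, each with an 'arrival_frame' key.
    light_phase_by_frame: dict mapping frame number -> 'green'/'yellow'/'red'."""
    counts = {"green": 0, "yellow": 0, "red": 0}
    for track in tracks:
        phase = light_phase_by_frame.get(track["arrival_frame"])
        if phase in counts:
            counts[phase] += 1
    return counts
```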

[0010] In some embodiments, a system for traffic monitoring, analysis, and prediction includes a memory allocation configured to store at least one executable asset and a processor allocation configured to access the memory allocation and execute the at least one executable asset to instantiate at least one service. The at least one service retrieves structured data determined from point cloud data from LiDAR sensors used to monitor traffic, calculates metrics using the structured data, prepares processed data for visualization from the metrics, and presents the prepared processed data via at least one dashboard.

[0011] In various examples, the metrics include at least one of vehicle volume, average speed, distance travelled, pedestrian volume, non-motor volume, light status on arrival, arrival phase, a route through an intersection, or a light time. In some examples, the at least one service summons at least one vehicle using at least one of the metrics or the processed data. In a number of examples, the at least one service tracks near misses/collisions using at least one of the metrics or the processed data. In various examples, the at least one service determines a fastest route using at least one of the metrics or the processed data. In some examples, the at least one service controls traffic signals to prioritize traffic using at least one of the metrics or the processed data. In various examples, the at least one service determines a most efficient route using at least one of the metrics or the processed data.

[0012] In a number of embodiments, a system for traffic monitoring, analysis, and prediction includes a memory allocation configured to store at least one executable asset and a processor allocation configured to access the memory allocation and execute the at least one executable asset to instantiate at least one service. The at least one service constructs a digital twin of an area of interest, retrieves structured data determined from traffic data for the area of interest, calculates metrics using the structured data, prepares processed data for visualization from the metrics, and presents the prepared processed data in a context of the digital twin via at least one dashboard that displays the digital twin.

[0013] In various examples, the at least one service simulates traffic via the at least one dashboard using the processed data. In some implementations of such examples, the at least one service simulates how a change affects traffic patterns. In various implementations of such examples, the change alters at least one of a simulation of the traffic, a traffic signal, or a traffic condition.

[0014] In a number of examples, the digital twin includes multiple intersections. In various implementations of such examples, the at least one dashboard includes indicators selectable to display information for each of the multiple intersections.

BRIEF DESCRIPTION OF THE DRAWINGS

[0015] The disclosure will be readily understood by the following detailed description in conjunction with the accompanying drawings, wherein like reference numerals designate like structural elements.

[0016] FIG. 1 depicts an example system for traffic monitoring, analysis, and prediction.

[0017] FIG. 2 depicts a flow chart illustrating an example method for traffic monitoring, analysis, and prediction. This method may be performed by the system of FIG. 1.

[0018] FIGs. 3A and 3B depict a first example data pipeline structure that may be used for traffic monitoring, analysis, and prediction.

[0019] FIG. 3C depicts a second example data pipeline structure that may be used for traffic monitoring, analysis, and prediction.

[0020] FIG. 3D depicts a third example data pipeline structure that may be used for traffic monitoring, analysis, and prediction.

[0021] FIG. 4 depicts an example of traffic monitoring data.

[0022] FIG. 5 depicts the example of traffic monitoring data of FIG. 4 after object detection and classification.

[0023] FIG. 6 depicts an example of structured data that may be determined from the detected and classified objects depicted in FIG. 5.

[0024] FIGs. 7A-1 through 7A-4 depict a first portion of a list of metrics that may be used in traffic monitoring, analysis, and prediction.

[0025] FIGs. 7B-1 through 7B-4 depict a second portion of a list of metrics that may be used in traffic monitoring, analysis, and prediction.

[0026] FIGs. 7C-1 and 7C-2 depict a third portion of a list of metrics that may be used in traffic monitoring, analysis, and prediction.

[0027] FIG. 8 depicts an example intersection table.

[0028] FIG. 9 depicts an example vehicle table.

[0029] FIG. 10 depicts an example approaches table.

[0030] FIG. 11 depicts a first example dashboard display that may be used to visualize the results of data processed traffic monitoring.

[0031] FIG. 12 depicts a second example dashboard display that may be used to visualize the results of data processed traffic monitoring.

[0032] FIG. 13 depicts a third example dashboard display that may be used to visualize the results of data processed traffic monitoring.

[0033] FIG. 14 depicts a fourth example dashboard display that may be used to visualize the results of data processed traffic monitoring.

[0034] FIG. 15 depicts a fifth example dashboard display that may be used to visualize the results of data processed traffic monitoring.

[0035] FIG. 16 depicts a sixth example dashboard display that may be used to visualize the results of data processed traffic monitoring.

[0036] FIG. 17 depicts a seventh example dashboard display that may be used to visualize the results of data processed traffic monitoring.

[0037] FIG. 18 depicts an eighth example dashboard display that may be used to visualize the results of data processed traffic monitoring.

[0038] FIG. 19 depicts a ninth example dashboard display that may be used to visualize LiDAR traffic monitoring.

[0039] FIGs. 20A and 20B depict an example of building a digital twin using a Dashboard and OpenStreetMap where FIG. 20A depicts a dashboard with 9 cameras in Bellevue and FIG. 20B depicts a network built with OSM and imported in SUMO (red circles for the camera locations).

[0040] FIG. 21 depicts a visualization of the network with the speed (MPH).

[0041] FIGs. 22A and 22B depict a visualization of the outputs where FIG. 22A depicts a graph of the NOx (Nitrogen Oxide) emissions against time (in seconds) and FIG. 22B depicts a graph of speed against time (in seconds).

[0042] FIG. 23 depicts an example system for traffic near miss/collision detection.

[0043] FIG. 24 depicts a flow chart illustrating a first example method for traffic near miss/collision detection. This method may be performed by the system of FIG. 23.

[0044] FIG. 25 depicts a flow chart illustrating a second example method for traffic near miss/collision detection. This method may be performed by the system of FIG. 23.

[0045] FIG. 26 depicts a flow chart illustrating a third example method for traffic near miss/collision detection. This method may be performed by the system of FIG. 23.

[0046] FIG. 27A depicts a first frame of traffic data video.

[0047] FIG. 27B depicts a second frame of traffic data video.

[0048] FIG. 27C depicts a third frame of traffic data video.

[0049] FIG. 27D depicts a fourth frame of traffic data video.

[0050] FIG. 27E depicts a fifth frame of traffic data video.

DETAILED DESCRIPTION

[0051] Reference will now be made in detail to representative embodiments illustrated in the accompanying drawings. It should be understood that the following descriptions are not intended to limit the embodiments to one preferred embodiment. To the contrary, it is intended to cover alternatives, modifications, and equivalents as can be included within the spirit and scope of the described embodiments as defined by the appended claims.

[0052] The description that follows includes sample systems, methods, apparatuses, and computer program products that embody various elements of the present disclosure. However, it should be understood that the described disclosure may be practiced in a variety of forms in addition to those described herein.

[0051] The raw data from one or more traffic devices may only be able to provide so much insight into the traffic. Processing of the data may be more useful, providing the ability to visualize various metrics about the traffic, enable adaptive traffic signal control, predict traffic congestion and/or accidents, and aggregate data from multiple population areas for various uses, such as in the auto insurance industry, rideshare industry, logistics industry, autonomous vehicle original equipment manufacturer industry, and so on.

[0053] The following disclosure relates to traffic monitoring, analysis, and prediction. Traffic data may be obtained, such as video from intersection cameras. Object detection and classification may be performed using the data. Structured data may be determined and/or output using the detected and classified objects. Metrics may be calculated using the structured data. Processed data may be prepared for visualization and/or other uses. The prepared processed data may be presented via one or more dashboards and/or the prepared processed data may be otherwise used.

[0054] In this way, the system may be able to perform traffic monitoring, analysis, and prediction that the system would not previously have been able to perform absent the technology disclosed herein. This may enable the system to operate more efficiently while consuming fewer hardware and/or software resources as more resource consuming techniques could be omitted. Further, a variety of components may be omitted while still enabling traffic monitoring, analysis, and prediction, reducing unnecessary hardware and/or software components, and providing greater system flexibility.

[0055] These and other embodiments are discussed below with reference to FIGs. 1-18. However, those skilled in the art will readily appreciate that the detailed description given herein with respect to these Figures is for explanatory purposes only and should not be construed as limiting.

[0056] FIG. 1 depicts an example system 100 for traffic monitoring, analysis, and prediction. The system may include one or more data processing pipeline devices 101 (which may be implemented using one or more cloud computing arrangements), security and/or other gateways 102, dashboard presenting devices 103, and so on.

[0057] The system may perform traffic monitoring, analysis, and prediction. Traffic data may be obtained, such as via the gateway from one or more traffic monitoring devices 104 (such as one or more intersection and/or other still image and/or video cameras, Light Detection and Ranging sensors (or “LiDAR”), loops, radar, weather data, Internet of Things sensors, fleet vehicles, traffic controllers and/or other city and/or other population area supplied data devices, navigation app data, connected vehicles, and so on). The one or more data processing pipeline devices 101 may perform object detection and classification using the data. For example, objects may be detected and classified as cars, trucks, buses, pedestrians, light vehicles, heavy vehicles, non-motor vehicles, and so on. Objects may be assigned individual identifiers, identifiers by type, and so on. The one or more data processing pipeline devices 101 may determine and/or output structured data using the detected and classified objects. The one or more data processing pipeline devices 101 may calculate one or more metrics using the structured data. For example, the metrics may involve vehicle volume, vehicle volume by vehicle type, average speed, movement status, distance travelled, queue length, pedestrian volume, non-motor volume, light status on arrival, arrival phase, route through intersection, light times, near misses, longitude, latitude, city, state, country, and/or any other metrics that may be calculated using the structured data.

[0058] The one or more data processing pipeline devices 101 may prepare the processed data for visualization and/or other uses. The one or more data processing pipeline devices 101 may present the prepared processed data via one or more dashboards, such as via the dashboard presenting device 103, and/or otherwise use the prepared processed data.

[0059] The data processing pipeline device 101 may be any kind of electronic device. Examples of such devices include, but are not limited to, one or more desktop computing devices, laptop computing devices, server computing devices, mobile computing devices, tablet computing devices, set top boxes, digital video recorders, televisions, displays, wearable devices, smart phones, digital media players, and so on. The data processing pipeline device 101 may include one or more processing units and/or other processors and/or controllers, one or more non-transitory storage media (which may take the form of, but is not limited to, a magnetic storage medium; optical storage medium; magneto-optical storage medium; read only memory; random access memory; erasable programmable memory; flash memory; and so on), one or more communication units, and/or other components. The processing unit may execute instructions stored in the non-transitory storage medium to perform various functions.

[0060] Alternatively and/or additionally, the data processing pipeline device 101 may involve one or more memory allocations configured to store at least one executable asset and one or more processor allocations configured to access the one or more memory allocations and execute the at least one executable asset to instantiate one or more processes and/or services, such as one or more services, and so on.

[0061] Similarly, the gateway 102, dashboard presenting device 103, and/or traffic monitoring device 104 may be any kind of electronic device. Various configurations are possible and contemplated without departing from the scope of the present disclosure.

[0062] As used herein, the term “computing resource” (along with other similar terms and phrases, including, but not limited to, “computing device” and “computing network”) refers to any physical and/or virtual electronic device or machine component, or set or group of interconnected and/or communicably coupled physical and/or virtual electronic devices or machine components, suitable to execute or cause to be executed one or more arithmetic or logical operations on digital data.

[0063] Example computing resources contemplated herein include, but are not limited to: single or multi-core processors; single or multi-thread processors; purpose-configured coprocessors (e.g., graphics processing units, motion processing units, sensor processing units, and the like); volatile or non-volatile memory; application-specific integrated circuits; field-programmable gate arrays; input/output devices and systems and components thereof (e.g., keyboards, mice, trackpads, generic human interface devices, video cameras, microphones, speakers, and the like); networking appliances and systems and components thereof (e.g., routers, switches, firewalls, packet shapers, content filters, network interface controllers or cards, access points, modems, and the like); embedded devices and systems and components thereof (e.g., system(s)-on-chip, Internet-of-Things devices, and the like); industrial control or automation devices and systems and components thereof (e.g., programmable logic controllers, programmable relays, supervisory control and data acquisition controllers, discrete controllers, and the like); vehicle or aeronautical control devices and systems and components thereof (e.g., navigation devices, safety devices or controllers, security devices, and the like); corporate or business infrastructure devices or appliances (e.g., private branch exchange devices, voice-over internet protocol hosts and controllers, end-user terminals, and the like); personal electronic devices and systems and components thereof (e.g., cellular phones, tablet computers, desktop computers, laptop computers, wearable devices); personal electronic devices and accessories thereof (e.g., peripheral input devices, wearable devices, implantable devices, medical devices, and so on); and so on. It may be appreciated that the foregoing examples are not exhaustive.

[0064] Example information can include, but may not be limited to: personal identification information (e.g., names, social security numbers, telephone numbers, email addresses, physical addresses, driver’s license information, passport numbers, and so on); identity documents (e.g., driver’s licenses, passports, government identification cards or credentials, and so on); protected health information (e.g., medical records, dental records, and so on); financial, banking, credit, or debt information; third-party service account information (e.g., usernames, passwords, social media handles, and so on); encrypted or unencrypted files; database files; network connection logs; shell history; filesystem files; libraries, frameworks, and binaries; registry entries; settings files; executing processes; hardware vendors, versions, and/or information associated with the compromised computing resource; installed applications or services; password hashes; idle time, uptime, and/or last login time; document files; product renderings; presentation files; image files; customer information; configuration files; passwords; and so on. It may be appreciated that the foregoing examples are not exhaustive.

[0065] The foregoing examples and description of instances of purpose-configured software, whether accessible via API as a request-response service, an event-driven service, or whether configured as a self-contained data processing service are understood as not exhaustive. In other words, a person of skill in the art may appreciate that the various functions and operations of a system such as described herein can be implemented in a number of suitable ways, developed leveraging any number of suitable libraries, frameworks, first or third-party APIs, local or remote databases (whether relational, NoSQL, or other architectures, or a combination thereof), programming languages, software design techniques (e.g., procedural, asynchronous, event-driven, and so on or any combination thereof), and so on. The various functions described herein can be implemented in the same manner (as one example, leveraging a common language and/or design), or in different ways. In many embodiments, functions of a system described herein are implemented as discrete microservices, which may be containerized or executed/instantiated leveraging a discrete virtual machine, that are only responsive to authenticated API requests from other microservices of the same system. Similarly, each microservice may be configured to provide data output and receive data input across an encrypted data channel. In some cases, each microservice may be configured to store its own data in a dedicated encrypted database; in others, microservices can store encrypted data in a common database; whether such data is stored in tables shared by multiple microservices or whether microservices may leverage independent and separate tables/schemas can vary from embodiment to embodiment. As a result of these described and other equivalent architectures, it may be appreciated that a system such as described herein can be implemented in a number of suitable ways. For simplicity of description, many embodiments that follow are described in reference to an implementation in which discrete functions of the system are implemented as discrete microservices. It is appreciated that this is merely one possible implementation.

[0066] As described herein, the term “processor” refers to any software and/or hardware- implemented data processing device or circuit physically and/or structurally configured to instantiate one or more classes or objects that are purpose-configured to perform specific transformations of data including operations represented as code and/or instructions included in a program that can be stored within, and accessed from, a memory. This term is meant to encompass a single processor or processing unit, multiple processors, multiple processing units, analog or digital circuits, or other suitably configured computing element or combination of elements.

[0067] Although the system 100 is illustrated and described as including particular components arranged in a particular configuration, it is understood that this is an example. In a number of implementations, various configurations of various components may be used without departing from the scope of the present disclosure.

[0068] For example, the system 100 is illustrated and described as including the gateway 102. However, it is understood that this is an example. In various implementations, the gateway 102 may be omitted. For example, the system 100 is illustrated and described as including the traffic monitoring device 104. However, it is understood that this is an example. In various implementations, the traffic monitoring device 104 may not be part of the system 100. The system 100 may instead communicate with the traffic monitoring device 104. Various configurations are possible and contemplated without departing from the scope of the present disclosure.

[0069] Although the above illustrates and describes performance of functions like detection and classification, determination of structured data, and so on, it is understood that this is an example. In various implementations, one or more such functions may be omitted without departing from the scope of the present disclosure.

[0070] For example, in some implementations, data that has already been detected and classified may be obtained. Various metrics may be calculated from such, similar to above, which may then be prepared for visualization and/or visualized and/or otherwise used similar to above.

[0071] FIG. 2 depicts a flow chart illustrating a first example method 200 for traffic monitoring, analysis, and prediction. This method may be performed by the system of FIG. 1.

[0072] At operation 210, traffic data may be obtained. At operation 220, object detection and classification may be performed. At operation 230, structured data may be determined and/or output. At operation 240, one or more metrics may be calculated. At operation 250, processed data may be prepared for visualization and/or other use. At operation 260, the prepared processed data may be presented via one or more dashboards and/or otherwise used.
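
By way of a non-limiting sketch, the operations of method 200 might be chained as shown below; the function names are placeholders introduced for illustration and do not correspond to named modules of the disclosure.

```python
# Illustrative sketch only: the operations of method 200 chained as plain functions
# supplied by the caller.
def run_method_200(obtain_traffic_data, detect_and_classify, build_structured_data,
                   calculate_metrics, prepare_for_visualization, present_dashboards):
    data = obtain_traffic_data()                      # operation 210
    detections = detect_and_classify(data)            # operation 220
    structured = build_structured_data(detections)    # operation 230
    metrics = calculate_metrics(structured)           # operation 240
    prepared = prepare_for_visualization(metrics)     # operation 250
    present_dashboards(prepared)                      # operation 260 (may be omitted)
```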

[0073] In various examples, this example method 200 may be implemented as a group of interrelated software modules or components that perform various functions discussed herein. These software modules or components may be executed within a cloud network and/or by one or more computing devices, such as the data processing pipeline device 101 of FIG. 1.

[0074] Although the example method 200 is illustrated and described as including particular operations performed in a particular order, it is understood that this is an example. In various implementations, various orders of the same, similar, and/or different operations may be performed without departing from the scope of the present disclosure.

[0075] For example, the method 200 is illustrated and described as including the operation 260. However, it is understood that this is an example. In some implementations, operation 260 may be omitted. Various configurations are possible and contemplated without departing from the scope of the present disclosure.

[0076] By way of example, a data pipeline may begin with a raw, real-time video feed from an intersection camera that is in use by a city department of transportation. This video may then be passed through a secure gateway to a cloud based processing pipeline (such as Amazon Web Services™ and/or any other cloud vendor, service, and/or implementation).

[0077] The pipeline's first instance may allow for rapid development of machine learning and computer vision applications within the cloud provider's on-demand infrastructure. It may run object detection and classification deep learning models on the video. Examples of such video detection and classification algorithms include, but are not limited to, YOLOv4 + DeepSORT, YOLOv4 + DeepMOT, and so on.
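
A hedged sketch of such a detection-and-tracking loop is given below; the detector and tracker objects stand in for a YOLOv4 + DeepSORT (or similar) implementation and are assumptions of the example rather than a specific library API.

```python
# Illustrative sketch only: per-frame object detection, classification, and tracking
# over an intersection video, using OpenCV to read frames.
import cv2

def process_video(video_path, detector, tracker):
    """Yield (frame_number, tracks); detector and tracker are caller-supplied wrappers."""
    capture = cv2.VideoCapture(video_path)
    frame_number = 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        frame_number += 1
        detections = detector.detect(frame)          # bounding boxes + class labels
        tracks = tracker.update(detections, frame)   # persistent per-object IDs
        yield frame_number, tracks
    capture.release()
```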

[0078] After this detection and classification layer has run, the pipeline may output structured data, such as the position, trajectory, count, and type of motorized and nonmotorized road users. The structured data from this module may be stored in a cloud instance and then be passed to a second instance that may calculate the intersection metrics.
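
For illustration only, one record of such structured data might look like the following; the field names are hypothetical and mirror the fields discussed above (frame number, intersection identifier, tracker identifier, class, and coordinates).

```python
# Illustrative example of a structured data record for one detected object in one frame.
record = {
    "frame_number": 1042,
    "intersection_id": "example_intersection_008",
    "tracker_id": 57,
    "object_class": "car",
    "x": 312.4,             # object coordinates in the frame
    "y": 188.9,
    "timestamp": "2022-09-26T14:03:07.250Z",
}
```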

[0079] An example list of these metrics and example calculation methodology for these metrics are included below, particularly with respect to FIGs. 7A-7B.

[0080] These metrics may be stored in 3 tables to minimize latency and storage size. These tables may include an intersection table, a vehicle table, and an approaches table, and so on. Examples of these tables are included below, particularly with respect to FIGs. 8- 10.
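
As a non-limiting sketch of that split, example rows for the three tables might look as follows; the column names are assumptions chosen to echo the metrics listed above rather than the actual schema.

```python
# Illustrative example rows for the intersection, vehicle, and approaches tables.
intersection_row = {"intersection_id": "example_008", "vehicle_volume": 412,
                    "pedestrian_volume": 36, "average_speed_mph": 23.5}

vehicle_row = {"tracker_id": 57, "intersection_id": "example_008",
               "object_class": "car", "distance_travelled_m": 84.2,
               "arrival_phase": "green"}

approach_row = {"intersection_id": "example_008", "approach": "northbound",
                "movement": "left", "queue_length": 6}
```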

[0081] This processed data may be held in an additional cloud instance, and then exported (such as in JSON files) to a data warehouse where it may be further optimized for visualization. After, the processed data may be written to a live model (such as SiSense™, an enterprise dashboard provider) that may allow for the data to be visualized in real time and/or a SiSense™ ElastiCube™ for later retrieval and visualization of time-based metrics.
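
A minimal sketch of the export step, assuming an S3-style object store and placeholder bucket and key names, might look like this:

```python
# Illustrative sketch only: export processed metrics as a JSON file for the
# data warehouse stage.
import json

import boto3  # AWS SDK for Python, assuming an AWS-based pipeline

def export_metrics(metrics, bucket="example-metrics-bucket", key="metrics/0001.json"):
    # Serialize the processed metrics and write them as a single JSON object.
    boto3.client("s3").put_object(Bucket=bucket, Key=key, Body=json.dumps(metrics))
```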

[0082] After the data has been stored in these instances, it may be visualized as a dashboard, which may be shown as camera locations in a city. A user may click in (and/or otherwise select) and see specific metrics about an intersection’s health and performance.

[0083] Although the above is described in the context of supporting a visualization dashboard, it is understood that this is an example. In various implementations, such data processing may be used to support adaptive traffic signal control, predicting traffic congestion and accidents, as well as productizing aggregated data from multiple cities for private sector use in the auto insurance, rideshare, logistics, autonomous vehicle original equipment manufacturer spaces, and so on.

[0084] Although the above is described in the context of intersection cameras, it is understood that this is an example. In various implementations, other data sources beyond data extracted from intersection video feeds may be used. This may include weather, Internet of Things sensors, LiDAR sensors, fleet vehicles, city suppliers (e.g., traffic controllers), navigation app data, connected vehicle data, and so on.

[0085] Although the above illustrates and describes performance of functions like detection and classification, determination of structured data, and so on, it is understood that this is an example. In various implementations, one or more such functions may be omitted without departing from the scope of the present disclosure.

[0086] For example, in some implementations, data that has already been detected and classified may be obtained. Various metrics may be calculated from such, similar to above, which may then be prepared for visualization and/or visualized and/or otherwise used similar to above.

[0087] FIGs. 3A and 3B depict a first example data pipeline structure that may be used for traffic monitoring, analysis, and prediction. As illustrated, the example data pipeline structure includes a camera stage, a security layer stage, an object detection and classification stage, a video pipe stage, a metric calculation stage, a data bucket 2 stage, a data warehouse stage, a live model 1 stage, a SiSense ElastiCube stage, a dashboard visual stage, a live model 2 stage, and a connected vehicle data API (application programming interface) stage. As also illustrated, the camera stage is connected to the security layer stage, the security layer stage is connected to the object detection and classification stage, the object detection and classification stage is connected to the video pipe stage, the video pipe stage is connected to the metric calculation stage, the metric calculation stage is connected to the data bucket 2 stage, the data bucket 2 stage is connected to the data warehouse stage, the data warehouse stage is connected to the live model 1 stage, the live model 1 stage is connected to the SiSense ElastiCube stage and the dashboard visual stage, the SiSense ElastiCube stage and the dashboard visual stage are connected to the live model 2 stage, and the live model 2 stage is connected to the connected vehicle data API stage. As further illustrated, the camera stage is where video footage may be collected at intersections (this stage may be hosted by cities or other population centers), the security layer stage and the object detection and classification stage may run one or more algorithms for object detection and classification, the video pipe stage may stream data from one or more algorithm processes run by the object detection and classification stage and store it for a specified amount of time (such as ten seconds, three minutes, two days, and so on), the metric calculation stage may include an AI (artificial intelligence) instance that may perform a metric calculation (such as by running a Python script) and/or process object detection and classification content into traffic metrics, the data bucket 2 stage may capture processed metrics and/or output (such as in one or more JSON files), the data warehouse stage may optimize data format for visualization, the live model 1 stage may stream video (such as within an adjustable time window) based on metrics, the SiSense ElastiCube stage may store data and/or cache historical data, the dashboard visual stage may display data for one or more end users, the live model 2 stage may stream (such as within an adjustable time window) connected vehicle data, and the connected vehicle data API stage may interface with a connected vehicle data service and/or stream (such as within an adjustable time window) connected vehicle data.

[0088] Although the above illustrates and describes performance of functions like detection and classification, determination of structured data, and so on, it is understood that this is an example. In various implementations, one or more such functions may be omitted without departing from the scope of the present disclosure.

[0089] For example, in some implementations, data that has already been detected and classified may be obtained. Various metrics may be calculated from such, similar to above, which may then be prepared for visualization and/or visualized and/or otherwise used similar to above.

[0090] FIG. 3C depicts a second example data pipeline structure that may be used for traffic monitoring, analysis, and prediction.

[0091] Architecture Description

[0092] In order to facilitate a highly available and scalable system which is less error prone, a modularized approach may be used where each part of the system is separated and independent of each other. The pipeline may be split into four sections.

[0093] 1. Stream Ingestion & Processing Module: Besides handling camera discovery, registry and video stream health check, this module may be responsible for starting, ending and restarting processes of stream data. After ingesting the stream, it may handle the object detection and tracking processes.

[0094] 2. Metrics Processing Module: Once the stream data is processed, the tracked objects data may go through the metrics processing module which may output the desired metrics.

[0095] 3. Storage Module: All of the calculated metrics may then be stored in a permanent storage system.

[0096] 4. Dashboard Module.

[0097] Data Pipeline

[0098] Stream ingestion and processing code may be bundled into a single container and deployed as an ECS task inside an ECS service. ECS is short for Elastic Container Service which is an AWS (Amazon™ Web Services) proprietary container orchestration service. The ECS service may be deployed in an EC2 GPU instance group inside a private network (VPC).

[0099] Once the configuration files for the streams are read, a container per stream may be deployed, and camera metadata as well as stream properties may be stored inside a DynamoDB collection. The DynamoDB table may allow the tracking and synchronization of the status and state of all different containers.
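
A hedged sketch of registering a stream's metadata in DynamoDB is shown below; the table name and attributes are examples, not the actual schema used by the system.

```python
# Illustrative sketch only: store per-stream camera metadata so container state
# can be tracked and synchronized across the cluster.
import boto3

def register_stream(camera_id, stream_url, status="running"):
    table = boto3.resource("dynamodb").Table("camera_streams")  # example table name
    table.put_item(Item={
        "camera_id": camera_id,   # partition key in this example
        "stream_url": stream_url,
        "status": status,
    })
```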

[00100] Kinesis Data Streams may carry the stream of data between the stream ingestion and processing module and the metrics processing service. The metrics processing code may be deployed on a Fargate cluster - a serverless deployment option of the ECS service. The Kinesis Client Library may assist with consuming and processing data from the stream, and a DynamoDB collection may be used to keep track of relevant stream-related metadata.
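
For illustration, a producer on the stream-processing side might publish tracked-object records to the Kinesis data stream as sketched below; the stream name and record layout are assumptions made for the example.

```python
# Illustrative sketch only: publish tracked-object records to Kinesis Data Streams
# so the metrics processing module can consume them.
import json

import boto3

def publish_tracked_objects(records, stream_name="tracked-objects-stream"):
    kinesis = boto3.client("kinesis")
    for record in records:
        kinesis.put_record(
            StreamName=stream_name,
            Data=json.dumps(record).encode("utf-8"),
            PartitionKey=str(record["camera_id"]),  # keeps one camera's data ordered
        )
```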

[00101] Finally, S3 may enable permanent storage for the output of the metrics processing module. With a lifecycle policy, stored data may be moved to different storage tiers for cost effectiveness.
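
A minimal sketch of such a lifecycle policy, with an example bucket name, prefix, and transition period, is shown below.

```python
# Illustrative sketch only: transition stored metric objects to a colder storage
# tier (Glacier) after a period, for cost effectiveness.
import boto3

boto3.client("s3").put_bucket_lifecycle_configuration(
    Bucket="example-metrics-bucket",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-old-metrics",
            "Status": "Enabled",
            "Filter": {"Prefix": "metrics/"},
            "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
        }]
    },
)
```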

[00102] Services Glossary

[00103] • VPC: all of the services may be deployed in a private network to control the flow of traffic between the systems.

[00104] • ECS Cluster: an ECS service may be deployed to group the ECS tasks and to handle the scaling of the tasks. Containers may be deployed on an EC2 instance.

[00105] • DynamoDB Cluster: may be used to store any kind of metadata.

[00106] • Kinesis Data Streams: a provisioned Kinesis data stream may be deployed for streaming the tracked object data and to act as a buffer between the video processing module and the metric module.

[00107] • Fargate (ECS Cluster): a Fargate launch type ECS cluster may be deployed for metric calculation.

[00108] • S3 Bucket: may be used to store all the data resulting from the metrics processing module. May be a standard tier bucket configured with a lifecycle policy to move the data after a certain period to deep archive (more specifically, Glacier).

[00109] FIG. 3D depicts a third example data pipeline structure that may be used for traffic monitoring, analysis, and prediction.

[00110] In order to facilitate a highly available and scalable system which would be less error prone, a modularized approach may be used where each part of the system is separated and independent of each other. The pipeline may be split into four sections:

[00111] 1. Stream Ingestion Module: Besides handling camera discovery, registry, and video stream health check, this module may be responsible for starting, ending, and restarting processes of stream data.

[00112] 2. Stream Processing Module: After ingesting the stream, the stream processing module may handle the object detection and tracking processes.

[00113] 3. Metrics Processing Module: Once the stream data is processed, the tracked objects data may go through the metrics processing module which may output the desired metrics.

[00114] 4. Storing Module: All of the calculated metrics may then be stored in a permanent storage system.

[00115] Description of Data Pipeline

[00116] The stream ingestion and stream processing code may be bundled into a single container and deployed as an ECS task inside an ECS service. ECS is short for Elastic Container Service which is an AWS proprietary container orchestration service. The ECS service may be deployed in an auto scaling EC2 GPU instance group inside a private network (VPC).

[00117] Deployed in front of the ECS service will be an Elastic Load Balancer (ELB) which may distribute the traffic across the different containers, one for each camera. Once the configuration files for the cameras are read, a container per camera may be deployed, and the camera metadata may be stored inside a DynamoDB collection. Network endpoints may be created to connect the Load Balancer, the output data stream, and the DynamoDB instance from the private network.

[00118] All of the metrics processing code may be deployed on a Fargate cluster which is a serverless deployment option of the ECS service. S3 may be used as a permanent storage where all of the metric data may be stored and a lifecycle policy may be configured where data may be moved to different storage tiers for cost effectiveness.

[00119] Services Glossary

[00120] • VPC: All of the services may be deployed in a private network to control the flow of traffic between the systems.

[00121] o VPC endpoints - there may be endpoints between the ECS cluster and the Kinesis Data stream and between the ECS cluster and the DynamoDB cluster. This may keep the network traffic private which will help with latency.

[00122] o NAT gateway - a NAT gateway may be deployed in the private subnets to route internet traffic.

[00123] o IGW - internet gateway for internet access.

[00124] • ELB: An application load balancer may be deployed to distribute traffic across the containers and also may be used for service discovery.

[00125] • ECS cluster

[00126] o ECS service - an ECS service may be deployed to group the ECS tasks and to handle the scaling of the tasks.

[00127] o EC2 launch type - containers may be deployed on an EC2 GPU instance.

[00128] • DynamoDB cluster: Used to store any kind of metadata.

[00129] • Kinesis Data Stream: A provisioned Kinesis data stream may be deployed for streaming the tracked object data and to act as a buffer between the video processing module and the metric module.

[00130] • ECS cluster (Fargate): A Fargate launch type ECS cluster may be deployed for metric calculation.

[00131] • S3 bucket: May be used to store all the metric data. May be a standard tier bucket configured with a lifecycle policy to move the data after a certain period to deep archive (more specifically, Glacier). Other services may be added once the initial deployment of the system is complete. Those services may include an EC2 instance to act as a front-facing control module, Cloudwatch alarms set to track the container cluster and data stream metrics, and a notification service deployed for Cloudwatch events and email notifications.

[00132] The above may have several benefits. These may include: resiliency (the architecture may be tolerant to fault), scalability (the system may be able to scale easily, without changes in the architecture), and modularization (this may make problem solving within each module much easier as well as latency and cost optimization).

[00133] FIG. 4 depicts an example of traffic monitoring data. In this example, the traffic monitoring data includes a still image of an intersection including traffic. However, it is understood that this is an example. In various implementations, the traffic monitoring data may take a variety of different forms. Various configurations are possible and contemplated without departing from the scope of the present disclosure.

[00134] FIG. 5 depicts the example of traffic monitoring data of FIG. 4 after object detection and classification. As illustrated, the different objects in the example of traffic monitoring data of FIG. 4 may be detected and classified, such as by identifying the objects as one or more different cars that may be associated with an identifier, tracked, and so on.

[00135] FIG. 6 depicts an example of structured data that may be determined from the detected and classified objects depicted in FIG. 5. Various behaviors about the identified and/or tracked objects may be evaluated and recorded.

[00136] FIGs. 7A-1 through 7A-4 depict a first portion of a list of metrics that may be used in traffic monitoring, analysis, and prediction. FIGs. 7B-1 through 7B-4 depict a second portion of a list of metrics that may be used in traffic monitoring, analysis, and prediction. FIGs. 7C-1 and 7C-2 depict a third portion of a list of metrics that may be used in traffic monitoring, analysis, and prediction. This list of metrics may also be referred to below as the “Data Dictionary.”

[00137] FIG. 8 depicts an example intersection table. FIG. 9 depicts an example vehicle table. FIG. 10 depicts an example approaches table.

[00138] FIG. 11 depicts a first example dashboard display that may be used to visualize the results of data processed traffic monitoring. This first example dashboard display may be a city view where the camera icons represent individual intersections that may be selected to switch to a dashboard display focusing on that intersection, which may present specific metrics about that intersection’s health, performance, and so on.

[00139] FIG. 12 depicts a second example dashboard display that may be used to visualize the results of data processed traffic monitoring. This second example dashboard display may be an intersection view that presents various specific metrics for an intersection, which may indicate various information about the intersection’s health, performance, and so on. By way of illustration, this second example dashboard display may depict arrival on green over a period of time (which may be adjustable) versus arrival on red over a period of time (which may be adjustable), average speed over a period of time (which may be adjustable), arrival phase, and so on.

[00140] FIG. 13 depicts a third example dashboard display that may be used to visualize the results of data processed traffic monitoring. This third example dashboard display may also be an intersection view that presents various specific metrics for an intersection, which may indicate various information about the intersection’s health, performance, and so on. This third example dashboard display may depict short term traffic intensity; a feed of the intersection; the distribution of travel left, right, or through over a period of time (which may be adjustable); an approach-exit distribution; and so on.

[00141] FIG. 14 depicts a fourth example dashboard display that may be used to visualize the results of data processed traffic monitoring. The top boxes and the bottom left may include animations that may be supported by processed traffic data, connected vehicle data, and so on. The data may include latitude and longitude for each unique vehicle over time. The top left box may be a GIS (Geographic Information System) layer with long trails. The top right box may be a GIS layer with points. The bottom left box may be a GIS layer that may be rotated with space. The bottom right box may show individual intersections with cameras/sublayers that may be highlighted and may be selectable. In some examples, if the speed and position of a small sample of connected vehicles are known from connected vehicle data, total vehicle volume may be inferred using traffic data volume data and a sample percentage variable, such as a total percentage of all vehicles that are connected and/or represented by the connected vehicle data.
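
As a hedged numerical sketch of that inference, assuming the sample percentage (the share of all vehicles that are connected) is known or estimated:

```python
# Illustrative sketch only: infer total vehicle volume from a connected-vehicle sample.
def infer_total_volume(connected_vehicle_count, sample_percentage):
    """E.g., 12 connected vehicles at a 4% penetration rate -> roughly 300 vehicles."""
    if sample_percentage <= 0:
        raise ValueError("sample_percentage must be positive")
    return connected_vehicle_count / (sample_percentage / 100.0)
```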

[00142] FIG. 15 depicts a fifth example dashboard display that may be used to visualize the results of data processed traffic monitoring. FIG. 16 depicts a sixth example dashboard display that may be used to visualize the results of data processed traffic monitoring. FIG. 17 depicts a seventh example dashboard display that may be used to visualize the results of data processed traffic monitoring. FIG. 16 may be a magnified “pop out” of FIG. 15. FIG. 17 may be a magnified pop out of FIG. 16.

[00143] FIG. 15 may include a variable selection menu. In some examples, up to 2 may be selected. Selected variables may be contrasted on the X & Y axes of 7. FIG. 15 may also include an interface to export selected data as a .csv or .xlsx file. FIG. 15 may also include a time series graph. The time series graph may show up to 2 variables selected from 5. The time series graph may be updated in real time. The time window shown in the time series graph may be chosen by a user. FIG. 15 may show “point in time values” for a current intersection. The point in time values may update in real time. FIG. 15 may also depict a live camera feed. The live camera feed may be streamed from a selected intersection camera. FIG. 15 may also include a zoomed-in view of a “primary layer” view (such as one or more of the views of FIG. 14) of the intersection. This view may show vehicles and such moving through the intersection, such as they would in the primary layer. This may be a top-down “T” view. FIG. 15 may also include an interface to return to the primary layer city grid view. FIG. 15 may also include an interface to initiate a deep dive into a playback of the intersection. FIGs. 16 and 17 may include interfaces that may be selectable to expand.

[00144] FIG. 18 depicts an eighth example dashboard display that may be used to visualize the results of data processed traffic monitoring. FIG. 18 may include an interface to select metrics and a time window. FIG. 18 may also include an interface that enables export of selected data as a .csv or .xlsx file. FIG. 18 may display a time series according to one or more selected variables and a time window, which may be 1x/graph. FIG. 18 may include an interface that enables navigation back to a secondary level (such as FIG. 15). FIG. 18 may depict vehicles and such moving through an intersection during a time interval. FIG. 18 may include a time slider. The time slider may enable scrolling through a time interval to see vehicles and such moving through the intersection.

[00145] An example of the dashboard will now be discussed in detail. This example dashboard may include the examples shown in FIGs. 15-18.

[00146] The objective of the dashboard in this example may be to visualize traffic data in real time, which may use SiSense’s enterprise dashboard platform. These visualizations may be based on a stream of data generated from live traffic camera video feeds and may constitute the dashboard. This example dashboard may involve user interface formatting, features, functionality, and so on. This dashboard may consist of three layers: a primary, secondary, and tertiary layer. Each layer, as well as its design, display, and performance expectations are detailed in its respective section below. References will be made throughout each layer’s section to the Data Dictionary, which is shown in FIGs. 7A and 7B. Data Dictionary metrics may be referred to by their “key” names under each layer-feature that they support.

[00147] The primary layer’s purpose may be to display a map of a selected city, real time traffic flow, and provide a click-through gateway to the secondary layer. FIG. 14 may show an example of the primary layer. The primary layer may consist of the following features:

[00148] A. May show the geo-spatial layout of a city’s road network

[00149] a. Data Dictionary: camera_city

[00150] b. Data Dictionary: camera_state

[00151] c. Data Dictionary: camera_country

[00152] B. May allow users to manipulate displayed network’s magnification (zoom in/out)

[00153] C. May allow users to manipulate displayed network’s perspective (pitch/yaw/roll)

[00154] D. May show the location of active cameras’ locations within the city’s road network

[00155] a. Data Dictionary: camera_longitude

[00156] b. Data Dictionary: camera_latitude

[00157] E. May show the real time location of cars as they move through a city’s road network

[00158] a. Data Dictionary: longitude

[00159] b. Data Dictionary: latitude

[00160] c. Data Dictionary: speed

[00161] d. Data Dictionary: time

[00162] e. Data Dictionary: otonomo_id (subject to change based on data provider)

[00163] The secondary layer's purpose may be to provide users with single-intersection level data. FIGs. 15-17 may show examples of the secondary layer. FIG. 16 may be a magnified "pop out" of FIG. 15. FIG. 17 may be a magnified pop out of FIG. 16. The secondary layer may consist of the following features:

[00164] A. May display a menu of intersection performance metrics for user selection

[00165] a. Users may be able to select a maximum of two metrics to display

[00166] b. There may be Class, Category, and Metric variables

[00167] c. Class variables may be characters

[00168] i. Class variables may modify both metric and category variables

[00169] e. Category variables may be characters

[00170] i. Category variables may modify metric variables

[00171] ii. Users may select up to two category variables where applicable

[00172] f. Metric variables are numeric

[00173] g. Only some metric variables may be filtered by category and/or class variables

[00174] h. Metric variables that cannot be filtered may already assume category and/or class variables

[00175] i. Class variables:

[00176] i. Data Dictionary: object_class (class_variable)

[00177] 1. e.g., "car", "truck", "person"

[00178] j. Category variables that may filter metric variables:

[00179] i. Data Dictionary: arrival_phase (category variable)

[00180] 1. Class modifiers: all vehicle classes

[00181] 2. Category modifiers: none

[00182] ii. Data Dictionary: approach (category variable)

[00183] 1. Class modifiers: all classes

[00184] 2. Category modifiers: none

[00185] iii. Data Dictionary: movement (category variable)

[00186] 1. Class modifiers: all vehicle classes

[00187] 2. Category modifiers: approach

[00188] k. Metrics that can be filtered by category and/or class variables:

[00189] i. Data Dictionary: arrival_on_green (filtered metric variable)

[00190] 1. Class modifiers: all vehicle classes

[00191] 2. Category modifiers: approach

[00192] ii. Data Dictionary: arrival_on_red (filtered metric variable)

[00193] 1. Class modifiers: all vehicle classes

[00194] 2. Category modifiers: approach

[00195] iii. Data Dictionary: average_speed (filtered metric variable)

[00196] 1. Class modifiers: all classes

[00197] 2. Category modifiers: approach, movement

[00198] iv. Data Dictionary: light_vehicle_volume (filtered metric variable)

[00199] 1. Class modifiers: none (assumed as "car" & "motorbike")

[00200] 2. Category modifiers: approach, movement

[00201] v. Data Dictionary: heavy_vehicle_volume (filtered metric variable)

[00202] 1. Class modifiers: none (assumed as "truck" & "bus")

[00203] 2. Category modifiers: approach, movement

[00204] vi. Data Dictionary: car_volume (filtered metric variable)

[00205] 1. Class modifiers: none (assumed as "car")

[00206] 2. Category modifiers: approach, movement

[00207] vii. Data Dictionary: truck_volume (filtered metric variable)

[00208] 1. Class modifiers: none (assumed as "truck")

[00209] 2. Category modifiers: approach, movement

[00210] viii. Data Dictionary: bus_volume (filtered metric variable)

[00211] 1. Class modifiers: none (assumed as "bus")

[00212] 2. Category modifiers: approach, movement

[00213] ix. Data Dictionary: vehicle_volume (filtered metric variable)

[00214] 1. Class modifiers: none (assumed as all vehicle classes)

[00215] 2. Category modifiers: approach, movement

[00216] x. Data Dictionary: pedestrian_volume (filtered metric variable)

[00217] 1. Class modifiers: none (assumed as "person")

[00218] 2. Category modifiers: approach

[00219] xi. Data Dictionary: bicycle_volume (filtered metric variable)

[00220] 1. Class modifiers: none (assumed as "bicycle")

[00221] 2. Category modifiers: approach

[00222] xii. Data Dictionary: average_queue_length_car (non-filtered metric variable)

[00223] 1. Class modifiers: none (assumed as all vehicle classes)

[00224] 2. Category modifiers: approach

[00225] xiii. Data Dictionary: average_queue_length_feet (non-filtered metric variable)

[00226] 1. Class modifiers: none (assumed as all vehicle classes)

[00227] 2. Category modifiers: approach

[00228] xiv. Data Dictionary: platoon_ratio (non-filtered metric variable)

[00229] 1. Class modifiers: none (assumed as all vehicle classes)

[00230] 2. Category modifiers: approach

[00231] xv. Data Dictionary: effective_green_time (non-filtered metric variable)

[00232] 1. Class modifiers: none (assumed as all vehicle classes)

[00233] 2. Category modifiers: approach

[00234] xvi. Data Dictionary: cycle_time (non-filtered metric variable)

[00235] 1. Class modifiers: none (assumed as all vehicle classes)

[00236] 2. Category modifiers: approach

[00237] B. May allow for the export of raw data

[00238] a. This feature may allow for the raw data corresponding to a user’s selections to be exported to a .csv format

[00239] C. May display a time series of selected metric variables

[00240] a. This time series may display up to two selected metric variables

[00241] b. This time series graph may continuously update in real time with new data

[00242] c. This time series may allow users to adjust desired periodicity

[00243] d. Pilot periodicity may cover 1 week worth of time

[00244] i. 15 second interval

[00245] ii. 30 second interval

[00246] iii. 1 minute interval

[00247] iv. 5 minute interval

[00248] v. 15 minute interval

[00249] vi. 30 minute interval

[00250] vii. 1 hour interval

[00251] viii. 2 hour interval

[00252] ix. 6 hour interval

[00253] x. 12 hour interval

[00254] xi. 24 hour interval

[00255] xii. 48 hour interval

[00256] xiii. 72 hour interval

[00257] xiv. Entire time frame (7 day, 1 week)

[00258] f. This time series may be interactive, and may allow users to see percent increase and decrease between two points

[00259] g. Time series may display absolute values when hovered over

[00260] D. May display real time "last value" (last point in time series for selected interval) values for all metrics listed below

[00261] a. Each of these metrics may be modified by class and category variables and may be displayed for all possible combinations on a per-variable basis

[00262] b. Selected “point in time” metrics are as follows:

[00263] i. Data Dictionary: arrival_on_green

[00264] ii. Data Dictionary: arrival_on_red

[00265] iii. Data Dictionary: average_speed

[00266] iv. Data Dictionary: light_vehicle_volume

[00267] v. Data Dictionary: heavy_vehicle_volume

[00268] vi. Data Dictionary: car_volume

[00269] vii. Data Dictionary: truck_volume

[00270] viii. Data Dictionary: bus_volume

[00271] ix. Data Dictionary: vehicle_volume

[00272] x. Data Dictionary: pedestrian_volume

[00273] xi. Data Dictionary: bicycle_volume

[00274] xii. Data Dictionary: average_queue_length_car

[00275] xiii. Data Dictionary: average_queue_length_feet

[00276] xiv. Data Dictionary: platoon_ratio

[00277] xv. Data Dictionary: effective_green_time

[00278] xvi. Data Dictionary: cycle_time

[00279] E. May display a live stream video feed from the selected intersection’s camera

[00280] a. This layer may be clicked in order to create a magnified pop out

[00281] F. May display a geo-spatial layout of the selected intersection

[00282] a. This feature may be a replicated, “zoomed-in” version of the Dashboard’s Primary Layer

[00283] b. This layer may be clicked in order to create a magnified pop out

[00284] G. May allow the user to navigate back to the Primary Layer

[00285] H. May allow the user to navigate forward to the Tertiary Layer

[00286] The tertiary’s layer’s purpose may be to provide users with intersection playback functionality (as compared with a continuously updating real-time stream of data as in the secondary layer). FIG. 18 may show an example of the tertiary layer. The tertiary layer may consist of the following features:

[00287] A. May display a menu of intersection performance metrics for user selection

[00288] a. This menu may also include variable period lengths

[00289] B. May allow the user to export raw data to a .csv

[00290] C. May display a time series of intersection level metrics selected by the user

[00291] a. These graphs may display only one combination of class/category/metric at a time, possibly according to options and restrictions

[00292] c. There may be space for up to 3 of these graphs to populate

[00293] D. May allow the user to navigate back to the secondary layer

[00294] E. May display a geo spatial rendition of the selected intersection

[00295] F. May have a “play-back” slider to allow a user to toggle through a selected time period

[00296] a. The periodicity may correspond to a user’s selection in the Tertiary Layer

[00297] Example metrics will now be provided:

[00298]

[00299] Vehicle wise

[00300]

[00301] record_key

[00302] Object identifier from deepsort. Integer.

[00303]

[00304] object_class

[00305] Yolo’s detected class.

[00306] - 'person'

[00307] - 'car'

[00308] - 'truck'

[00309] - 'bus'

[00310] - 'motorbike'

[00311] - 'bicycle'

[00312]

[00313] x_mid

[00314] Geometric center of the detected object’s bounding box in the x coordinate.

[00315] (x_max - x_min)/2+x_min

[00316]

[00317] y_mid

[00318] Geometric center of the detected object’s bounding box in the y coordinate.

[00319] (y_max - y_min)/2+y_min

[00320]

[00321] x_diff

[00322] Difference between x_mid from the current and previous frame.

[00323] x_mid_(t) - x_mid_(t-1)

[00324]

[00325] y_diff

[00326] Difference between y_mid from the current and previous frame.

[00327] y_mid_(t) - y_mid_(t-1)

[00328]

[00329] l2_distance_units

[00330] L2 distance between the geometric centers of the object's bounding box in the current and previous frame.

[00331] sqrt(x_difference^2 + y_difference^2)

[00332]

[00333] speed_in_units

[00334] Speed of the object.

[00335] l2_distance_units * fps or l2_distance_units/(timestamp_(t) - timestamp_(t-1))

[00336]

[00337] average_speed_units

[00338] Average speed of the object while in view.

[00339] Mean(speed_in_units)

[00340]
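By way of an illustrative, non-limiting sketch, the midpoint, difference, distance, and speed entries above may be combined as shown below; the function name and the example numbers are hypothetical, and only the Data Dictionary key names are taken from this section.

```python
from statistics import mean

def frame_metrics(curr_box, prev_mid, fps):
    """Compute x_mid/y_mid, x_diff/y_diff, l2_distance_units and speed_in_units
    for one tracked object, given its bounding box in the current frame and its
    midpoint in the previous frame. Boxes are (x_min, y_min, x_max, y_max) pixels."""
    x_min, y_min, x_max, y_max = curr_box
    x_mid = (x_max - x_min) / 2 + x_min
    y_mid = (y_max - y_min) / 2 + y_min
    x_diff = x_mid - prev_mid[0]
    y_diff = y_mid - prev_mid[1]
    l2_distance_units = (x_diff ** 2 + y_diff ** 2) ** 0.5
    speed_in_units = l2_distance_units * fps  # pixel distance per second
    return (x_mid, y_mid), speed_in_units

# average_speed_units over the frames in which the object was in view.
mid = (100.0, 50.0)
speeds = []
for box in [(98, 48, 110, 58), (104, 50, 116, 60), (110, 52, 122, 62)]:
    mid, speed = frame_metrics(box, mid, fps=30)
    speeds.append(speed)
average_speed_units = mean(speeds)
print(round(average_speed_units, 1))
```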

[00341] Movement

[00342] Movement performed by the object when in view.

[00343] - Right

[00344] - Left

[00345] - Through

[00346] - Uncertain

[00347]

[00348] arrival_phase

[00349] Traffic light phase when the object arrived.

[00350] - green [00351] - red

[00352]

[00353] approach_id

[00354] Intersection approach the vehicle is currently on. Integer (0 -> center of the intersection)

[00355]

[00356] right_on_red

[00357] Has the vehicle turned right on a red light? Only reliable on the last frame the car is visible.

[00358] Boolean, movement == right && arrival_phase == red

[00359]

[00360] time_since_update

[00361] Number of frames since the object was last detected. Integer. Intermediate metric.

status

[00362] Status for the object from deepsort. Intermediate metric. Can be:

[00363] - Tentative

[00364] - Active

[00365] - Deleted (should not be seen)

[00366]

[00367] Age

[00368] Number of frames since first detection, integer. Intermediate metric.

[00369]

[00370] motion_status [00371] Can be:

[00372] - “moving”

[00373] - “not moving”

[00374]

[00375] cumulative_speed

[00376] Used to calculate average speed. Intermediate metric

[00377]

[00378] relative_distance

[00379] List with vectors for motion calculations. Intermediate metric

[00380]

[00381] speed_in_mph

[00382] Miles per hour. Current speed of vehicle. This may involve checking direction of a vehicle within a bounding box. This direction may be used to calculate length of vehicle in pixels. The average size of vehicles in the United States (or other places) may be used to get pixels to miles conversion factor. The miles conversion factor may be used to convert to speed_in_units. In other examples, other units may be used instead of miles per hour.

[00383]

[00384] direction

[00385] Degrees. Current direction of vehicle (angle). Angle of vector between current and last position and horizontal axis.

[00386]
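By way of an illustrative, non-limiting sketch, the speed_in_mph and direction entries may be read together as shown below. The 14.7-foot average car length is the example value given later in this disclosure; the projection of the bounding box onto the direction of travel and all function names are assumptions made for illustration.

```python
import math

AVG_CAR_LENGTH_FEET = 14.7   # example average US car length used for the conversion
FEET_PER_MILE = 5280.0

def direction_degrees(prev_mid, curr_mid):
    """Angle of the motion vector against the horizontal axis, in degrees."""
    dx = curr_mid[0] - prev_mid[0]
    dy = curr_mid[1] - prev_mid[1]
    return math.degrees(math.atan2(dy, dx))

def speed_in_mph(prev_mid, curr_mid, box, fps):
    """Estimate miles per hour from the pixel displacement between two frames."""
    angle = math.radians(direction_degrees(prev_mid, curr_mid))
    x_min, y_min, x_max, y_max = box
    width, height = x_max - x_min, y_max - y_min
    # Project the bounding box onto the direction of travel to approximate the
    # vehicle's longitudinal size in pixels (a simplifying assumption).
    length_pixels = abs(width * math.cos(angle)) + abs(height * math.sin(angle))
    miles_per_pixel = (AVG_CAR_LENGTH_FEET / FEET_PER_MILE) / length_pixels
    miles = math.dist(prev_mid, curr_mid) * miles_per_pixel
    hours = (1.0 / fps) / 3600.0
    return miles / hours

print(round(speed_in_mph((100, 50), (106, 52), (98, 40, 130, 60), fps=30), 1))
```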

[00387] split_failure

[00388] Binary, "True" or "False." Whether the vehicle is forced to go through more than 1 light cycle (2 red lights or more). Determination may involve subtracting outflow from one intersection versus inflow from a feeder between light cycles. A split failure may occur if the vehicle arrives at the intersection and is present for more than 1 cycle change.

[00389]

[00390] travel_time

[00391] Total time a vehicle spends in intersection. May be measured in seconds or other units. Determination may use age of object and fps to convert to time units.

[00392]

[00393] control_delay

[00394] Difference between the actual travel time for a vehicle to move through the intersection and the reference travel time. May be measured in seconds or other units. Determination may involve subtracting travel time and reference travel time.

[00395]

[00396] total_stop_delay

[00397] Time during which vehicle is stationary at an intersection approach. May be measured in seconds or other units. Determination may involve counting each frame the vehicle is stopped and using fps to get the time value.

[00398]

[00399] gas_consumption

[00400] Gas consumption. Determination may involve distance covered * average consumption.

[00401]

[00402] Approach wise

[00403]

[00404] approach_id

[00405] Integer that identifies the approach of a specific intersection. Is the same as the key in the json

[00406]

[00407] Coordinates

[00408] Hardcoded pixel coordinates (list x,y) for this camera. Used for metric calculations.

[00409]

[00410] Lanes

[00411] Hardcoded integer that indicates the number of lanes in this approach. Not implemented

[00412]

[00413] approach_volumes

[00414]

[00415] vehicle_volume

[00416] Number of vehicles (cars, motorbikes, trucks and buses) in this approach.

[00417]

[00418] light_vehicle_volume

[00419] Number of light vehicles (cars and motorbikes) in this approach.

[00420]

[00421] heavy_vehicle_volume

[00422] Number of heavy vehicles (trucks and buses) in this approach.

[00423]

[00424] car_volume

[00425] Number of cars in this approach. [00426]

[00427] truck_volume

[00428] Number of trucks in this approach.

[00429]

[00430] bus_volume

[00431] Number of buses in this approach.

[00432]

[00433] person_volume

[00434] Number of pedestrians (‘person’) in this approach.

[00435]

[00436] bicycle_volume

[00437] Number of bicycles in this approach.

[00438]

[00439] arrival_on_green

[00440] Number of cars that arrived while the traffic light was green.

[00441]

[00442] arrival_on_red

[00443] Number of cars that arrived while the traffic light was red.

[00444]

[00445] average_queue_length_cars

[00446] Number of cars queueing in the approach, number of cars in approach divided by number of lanes. [00447]

[00448] average_queue_length_feet

[00449]

[00450] average_queue_length_cars times average car length

[00451]

[00452] light_information

[00453]

[00454] light_status

[00455] Whether the light in that approach is “red” or “green”.

[00456]

[00457] effective_green_time

[00458] Length of time cars are moving through the intersection.

[00459]

[00460] effective_red_time

[00461] Length of time cars are stopped at the intersection.

[00462]

[00463] cycle_time

[00464] Length of time the light is green.

[00465] effective_green_time/0.75

[00466]

[00467] platoon_ratio [00468] Measure of individual phase progression performance derived from the percentage arrivals on green.

[00469] arrival_on_green/vehicle_volume*0.75

[00470]
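By way of an illustrative, non-limiting sketch, the approach-level queue length, cycle time, and platoon ratio formulas above reduce to simple arithmetic; the function names are hypothetical, while the 0.75 factors and the example average car length are taken from this disclosure.

```python
def average_queue_length_cars(cars_in_approach: int, lanes: int) -> float:
    # Number of cars in the approach divided by the number of lanes.
    return cars_in_approach / lanes

def average_queue_length_feet(queue_cars: float, avg_car_length_feet: float = 14.7) -> float:
    # average_queue_length_cars times an average car length (14.7 feet used as an example).
    return queue_cars * avg_car_length_feet

def cycle_time(effective_green_time: float) -> float:
    # effective_green_time/0.75
    return effective_green_time / 0.75

def platoon_ratio(arrival_on_green: int, vehicle_volume: int) -> float:
    # arrival_on_green/vehicle_volume*0.75
    return arrival_on_green / vehicle_volume * 0.75

print(average_queue_length_feet(average_queue_length_cars(9, 3)))  # 44.1 feet
print(cycle_time(45.0))                                            # 60.0 seconds
print(round(platoon_ratio(40, 60), 2))                             # 0.5
```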

[00471] last_stop_time

[00472] Last timestamp a vehicle was stopped at this approach. Intermediate metric

[00473]

[00474] last_start_time

[00475]

[00476] Last timestamp a vehicle was moved into the intersection at this approach. Intermediate metric

[00477]

[00478] green_time_demand

[00479] Necessary green time for all vehicles to not stop at arrival within a certain time window. May be measured in seconds or other units. Determination may involve # of vehicles per hour per approach * (average time per movement). May be based off of observed use: (# of cars/hour/approach) * (average time / movement).

[00480]

[00481] green_time_dispersion

[00482] Difference between green_time_demand & green_time for a given interval. May be measured in seconds or other units.

[00483]

[00484] reference_travel_time

[00485] 5th percentile of the travel time of all vehicles that did not stop with same movement within time window of interest. May be measured in seconds or other units. [00486]

[00487] flow_rate

[00488] Number of vehicles per hour or other unit of time per approach.

[00489]

[00490] Intersection wise

[00491]

[00492] camera_id

[00493] String that identifies a specific intersection, specifically the stream url.

[00494]

[00495] Frame

[00496] Integer that indicates the frame number

[00497]

[00498] observation_time

[00499]

[00500] Frame timestamp. Due to missing metadata from the stream the timestamp may be the system_time at ingestion.

[00501]

[00502] cumulative_time

[00503] Time elapsed since the beginning of the stream (processing system start not camera start). In seconds.

[00504]

[00505] Total_volumes

[00506] [00507] vehicle_volume

[00508] Number of vehicles (cars, motorbikes, trucks and buses) in this intersection in this frame. Sum of all approaches in intersection.

[00509]

[00510] light_vehicle_volume

[00511] Number of light vehicles (cars and motorbikes) in this intersection in this frame. Sum of all approaches in intersection.

[00512]

[00513] heavy_vehicle_volume

[00514] Number of heavy vehicles (trucks and buses) in this intersection in this frame. Sum of all approaches in intersection.

[00515]

[00516] car_volume

[00517] Number of cars in this intersection in this frame. Sum of all approaches in intersection.

[00518]

[00519] truck_volume

[00520] Number of trucks in this intersection in this frame. Sum of all approaches in intersection.

[00521]

[00522] bus_volume

[00523] Number of buses in this intersection in this frame. Sum of all approaches in intersection.

[00524] [00525] person_volume

[00526] Number of pedestrians ('person') in this intersection in this frame. Sum of all approaches in intersection.

[00527]

[00528] bicycle_volume

[00529] Number of bicycles in this intersection in this frame. Sum of all approaches in intersection.

[00530]

[00531] arrival_on_green

[00532]

[00533] arrival_on_red

[00534]

[00535] near_misses

[00536] Number of near misses in the intersection in this frame (not cumulative)

[00537] If the distance between two objects' centers is less than a threshold, the count may be incremented.

[00538]

[00539] near_miss_vehicle_vehicle

[00540] When two cars almost hit each other. May involve checking for requirements: distance between objects, direction of movement, current speed.

[00541]

[00542] near_miss_vehicle_person

[00543] When a car and a person almost hit each other. May involve checking for requirements: distance between objects, direction of movement, current speed. [00544]

[00545] emissions_per_vehicle

[00546] Average emissions of a given intersection per vehicle. May include determining average of gas_consumption of all objects.

[00547]

[00548] In various implementations, frames of a raw, real-time video feed from an intersection camera and/or other traffic data may be obtained (though it is understood that this is an example and that in other examples other data, such as point cloud LiDAR data, may be obtained and used). Detection and classification may be performed on each frame to identify and classify the objects in the frame. Structured data may then be determined for the objects detected. For example, a frame number may be determined for a frame, an intersection identifier may be determined for a frame, a unique tracker identifier may be assigned to each object detected, the class of the object may be determined (such as person, car, truck, bus, motorbike, bicycle, and so on), coordinates of the object detected in the frame may be determined (which may be determined with reference to known coordinates of the intersection and/or the intersection camera, such as camera longitude, latitude, city, state, country, and so on) (such as the minimum and maximum x positions of the object, the minimum and maximum y positions of the object, and so on), and the like. An example of such information is shown in FIG. 6.
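A minimal, non-limiting sketch of such a structured record is shown below; the field names mirror the Data Dictionary where possible, and the container itself (and any remaining names, such as the example stream identifier) is hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """Structured data for one detected and classified object in one frame."""
    frame: int                 # frame number
    camera_id: str             # intersection / stream identifier
    record_key: int            # unique tracker identifier (e.g., from DeepSORT)
    object_class: str          # 'person', 'car', 'truck', 'bus', 'motorbike', 'bicycle'
    x_min: float
    y_min: float
    x_max: float
    y_max: float
    observation_time: float    # frame timestamp, in seconds

d = Detection(frame=1024, camera_id="rtsp://example-intersection/stream",
              record_key=17, object_class="car",
              x_min=98, y_min=48, x_max=130, y_max=68,
              observation_time=1695740000.0)
print(d.object_class, (d.x_max - d.x_min) / 2 + d.x_min)
```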

[00549] Various metrics may be calculated from the structured data mentioned above. For example, a bounding box may be calculated for the object based on one or more x and/or y positions for the object. By way of another example, one or more geometric centers of the object’s bounding box may be calculated for the object in the x and/or y coordinate (such as an x min, a y min, and so on). By way of still another example, an intersection approach that the object is currently on may be calculated, such as based on a position of the object and a position of the center of the intersection.

[00550] Further, other structured data may be determined from the frames. For example, one or more time stamps associated with frames may be determined and/or associated with other structured data, such as to determine a time at which an object was at a determined x and/or y position. By way of another example, a light phase for the frame may be determined (such as whether a traffic light in the frame is green, red, and so on), though this may instead be determined by means other than image analysis (such as time-stamped traffic light data that may be correlated to a frame time stamp). This may be used to determine the traffic light phase when an object arrived at the intersection, such as by correlating a traffic light phase determined for a frame along with a determination that an object arrived at the intersection in the frame. In yet another example, data for an approach and/or intersection associated with a frame may be determined (such as based on a uniform resource locator of the video feed and/or any other intersection camera identifier associated with the frame, an approach identifier associated with the frame, an intersection identifier associated with the frame, and so on).

[00551] The structured data determined for an object in a frame may be used with the structured data determined for the object in other frames to calculate various metrics. For example, the difference between one or more x and/or y positions for the object (such as the difference and/or distance between x or y midpoints of the object’s bounding box) in different frames (such as in a current and a previous frame) may be calculated. Such difference in position between frames, along with times respectively associated with the frames (such as from one or more time-stamps) may be used to calculate one or more metrics associated with the speed of the object (such as an average speed of the object during the video feed (such as in miles per hour and/or other units), cumulative speed, and so on). Such difference in position between frames may also be used to calculate various metrics about the travel of the object (such as the direction of travel between frames, how the object left an intersection, whether or not the object made a right on red, and so on). By way of another example, structured data from multiple frames may be used to determine a status of the object (such as an approach associated with the object, how an object moved through an intersection, an approach an object used to enter an intersection, the approach an object used to exit an intersection, and so on), a time or number of frames since the object was last detected (and/or since first detected and so on), whether or not the object is moving, and so on.
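For example, the movement and right-on-red determinations might be sketched as follows; the Boolean expression is the one given in the right_on_red entry above, while the approach-pair lookup and the helper names are illustrative assumptions.

```python
def movement(entry_approach: int, exit_approach: int, turn_map: dict) -> str:
    """Classify 'Right', 'Left', 'Through' or 'Uncertain' from the approaches
    an object used to enter and exit the intersection. turn_map is a per-camera
    lookup such as {(1, 2): 'Right', (1, 3): 'Through', (1, 4): 'Left'}."""
    return turn_map.get((entry_approach, exit_approach), "Uncertain")

def right_on_red(movement_value: str, arrival_phase: str) -> bool:
    # Data Dictionary: movement == right && arrival_phase == red
    return movement_value == "Right" and arrival_phase == "red"

turns = {(1, 2): "Right", (1, 3): "Through", (1, 4): "Left"}
m = movement(1, 2, turns)
print(m, right_on_red(m, "red"))   # Right True
```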

[00552] Structured data and/or metrics for individual detected objects and/or other data (such as light phase, time, intersection position, and so on) determined using one or more frames and/or from one or more video feeds from one or more intersection cameras associated with one or more intersections may be used together to calculate various metrics, such as metrics associated with approaches. For example, structured data and/or metrics for individual detected objects associated with an approach identifier (which may be determined based on an association with the intersection camera from which frames of the video feed were obtained) may be aggregated and analyzed to determine one or more approach volumes (such as a number of vehicles (cars, motorbikes, trucks, buses, and so on)) in a particular approach, a number of light vehicles (such as cars, motorbikes, and so on) in a particular approach, a number of heavy vehicles (such as trucks, buses, and so on) in a particular approach, a number of cars in a particular approach, a number of trucks in a particular approach, a number of buses in a particular approach, a number of pedestrians in a particular approach, a number of non-motor vehicles in a particular approach, a number of bicycles in a particular approach, and so on), an average queue length (such as in feet and/or another unit of measurement) of a particular approach, and so on. By way of another example, light status in one or more frames may be tracked and/or correlated with other information to determine a light status, an effective green time (such as a length of time that objects are moving through a particular intersection), an effective red time (such as a length of time that objects are stopped at a particular intersection), a cycle time (such as a length of time that a light is green determined by comparing the light phase across multiple frames), a number of cars that arrived while a traffic light is green, a number of cars that arrived while a traffic light is red, a measure of individual phase progression performance derived from a percentage of vehicle volume arrivals on green, and so on.
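A hedged sketch of this approach-level aggregation for a single frame is shown below; the class groupings follow the volume definitions above, and the simple (approach_id, object_class) record format is an assumption made for illustration.

```python
from collections import Counter

LIGHT = {"car", "motorbike"}
HEAVY = {"truck", "bus"}

def approach_volumes(objects):
    """objects: iterable of (approach_id, object_class) pairs for one frame."""
    volumes = {}
    for approach_id, object_class in objects:
        counts = volumes.setdefault(approach_id, Counter())
        counts[object_class] += 1
        if object_class in LIGHT | HEAVY:
            counts["vehicle_volume"] += 1
        if object_class in LIGHT:
            counts["light_vehicle_volume"] += 1
        if object_class in HEAVY:
            counts["heavy_vehicle_volume"] += 1
    return volumes

frame_objects = [(1, "car"), (1, "car"), (1, "truck"), (2, "person"), (2, "bicycle")]
print(approach_volumes(frame_objects)[1]["vehicle_volume"])  # 3
```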

[00553] Other structured data and/or metrics associated with approaches may be calculated. For example, a last stop time may be calculated based on a last time-stamp that an object stopped at an approach. By way of another example, a last start time may be calculated based on a last time-stamp that an object moved into the intersection at a particular approach. In other examples, an approach identifier for a particular approach may be determined, coordinates for a camera associated with a particular intersection may be determined, number of lanes associated with a particular approach may be determined, and so on.

[00554] Structured data and/or metrics for individual detected objects and/or other data (such as light phase, time, intersection position, and so on) determined using one or more frames and/or from one or more video feeds from one or more intersection cameras associated with one or more intersections may be also used together to calculate various metrics associated with intersections. For example, a vehicle volume for a particular intersection may be determined by summing objects (such as cars, motorbikes, trucks, buses, and so on) in all approaches of a frame associated with the intersection, a light vehicle volume for a particular intersection may be determined by summing objects (such as cars, motorbikes, and so on) in all approaches of a frame associated with the intersection, a heavy vehicle volume for a particular intersection may be determined by summing objects (such as trucks, buses, and so on) in all approaches of a frame associated with the intersection, a car volume for a particular intersection may be determined by summing cars in all approaches of a frame associated with an intersection, a truck volume for a particular intersection may be determined by summing trucks in all approaches of a frame associated with an intersection, a bus volume for a particular intersection may be determined by summing buses in all approaches of a frame associated with an intersection, a person volume for a particular intersection may be determined by summing people in all approaches of a frame associated with an intersection, a bicycle volume for a particular intersection may be determined by summing bicycles in all approaches of a frame associated with an intersection, arrivals on green in all approaches of a frame associated with an intersection, arrivals on red in all approaches of a frame associated with an intersection, number of near misses in a frame associated with a particular intersection (which may be calculated based on positions of objects in the frame, such as based on the distance between the geometric centers of the bounding boxes associated with two objects being less than a threshold), a current frame when a light went red, a frame when a light went green, and so on.
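Continuing the sketch, intersection-level volumes may be expressed as sums over the per-approach counts for a frame, and the frame's near-miss count as a pairwise center-distance test; the threshold value and all names here are assumptions made for illustration.

```python
import itertools
import math
from collections import Counter

def intersection_totals(approach_volumes: dict) -> Counter:
    """Sum per-approach Counters (as produced in the previous sketch) into
    intersection-wide totals for one frame."""
    total = Counter()
    for counts in approach_volumes.values():
        total.update(counts)
    return total

def near_misses(centers, threshold=25.0):
    """Count pairs of bounding-box centers closer than a pixel threshold."""
    return sum(1 for a, b in itertools.combinations(centers, 2)
               if math.dist(a, b) < threshold)

volumes = {1: Counter(car_volume=2, vehicle_volume=2),
           2: Counter(truck_volume=1, vehicle_volume=1)}
print(intersection_totals(volumes)["vehicle_volume"])      # 3
print(near_misses([(100, 50), (110, 55), (400, 300)]))     # 1
```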

[00555] Other information for an intersection may be determined using the video feed, frames, and/or other structured data and/or metrics. For example, an identifier for a camera associated with an intersection may be determined, identifiers for frames of one or more video feeds associated with the intersection may be determined, observation times associated with an intersection may be determined (such as a time-stamp based on ingestion time when other metadata from a stream or other video feed is not available), a cumulative time (such as from the start of processing of the video feed) may be determined, and so on.

[00556] Although determination and calculation of structured data and/or metrics relating to one or more vehicles and/or other objects, approaches, intersections, and so on are discussed above, it is understood that these are examples. In various examples, any structured data and/or metrics (such as those discussed above with relation to FIGs. 7A1 - 7C-2) relating to one or more vehicles and/or other objects, approaches, intersections, and so on may be determined and calculated from the objects detected in one or more frames of one or more video feeds of one or more intersection cameras and/or other traffic data without departing from the scope of the present disclosure.

[00557] Alternatively and/or additionally to determining and/or calculating structured data and/or metrics from one or more video feeds from one or more intersection cameras and/or other traffic data, connected vehicle data may be obtained and used. For example, structured data and/or metrics may be determined and/or calculated using a combination of connected vehicle data and data from one or more video feeds from one or more intersection cameras and/or other traffic data. By way of another example, a visualization dashboard may visualize connected vehicle data along with structured data and/or metrics determined and/or calculated from one or more video feeds from one or more intersection cameras and/or other traffic data.

[00558] To summarize the above, real-time video feed from an intersection camera and/or other traffic data may be obtained. Objects in frames of the video feed may be detected and classified. Positions of the objects at various times in the frames of the video feed may be determined, as well as information such as light statuses related to the objects. Differences between the objects in different frames may be used to determine behavior of the objects over time. Such calculated object metrics may be stored, such as in one or more vehicle tables. Such calculated object metrics for objects that are associated with a particular approach may be aggregated in order to determine various approach object volumes and/or other metrics related to the approach, which may then be stored, such as in one or more approach tables. Further, such object metrics for objects that are associated with a particular intersection may be aggregated in order to determine various intersection object volumes and/or other metrics related to the intersection, which may then be stored, such as in one or more intersection tables.

[00559] The above structured data and/or metrics related to one or more vehicles and/or other objects, approaches, intersections, and so on discussed above may then be processed and/or otherwise prepared for visualization (such as one or more of the example dashboard displays of FIGs. 11 -18) and/or one or more other purposes. For example, structured data and/or metrics related to one or more vehicles and/or other objects may be stored in one or more vehicle tables (such as the example vehicle table of FIG. 9), structured data and/or metrics related to one or more intersections may be stored in one or more intersection tables (such as the example intersection table of FIG. 8), structured data and/or metrics related to one or more approaches may be stored in one or more approach tables (such as the example approach table of FIG. 10), and so on. Such tables may then be used for visualization (such as one or more of the example dashboard displays of FIGs. 11 -18) and/or one or more other purposes.

[00560] By way of illustration, a visualization dashboard may include a graphical model generated of a city or other area. The graphical model may include one or more intersections and may visualize various of the structured data and/or metrics to the depicted intersections and so on. In some examples, one or more intersections depicted by the graphical model may be selected to present various information related to the structured data and/or metrics associated with the intersection (such as arrival phase over an interval, average speed over an interval, various object volumes (such as right turn volume, left turn volume, through volume, and so on), approach data related to the intersection, how objects proceeded through the intersection, a current and/or historical video feed associated with the intersection, and so on). Various controls may be provided that enable a user to select which information is displayed, export data related to the information, playback historic data, and so on.

[00561] In other examples, the structured data and/or metrics may be used for purposes other than visualization. Example uses include, but are not limited to, adaptive traffic signal control, predicting traffic congestion and accidents, productizing aggregated data from multiple cities for private sector use (such as in the auto insurance, rideshare, logistics, autonomous vehicle original equipment manufacturer spaces, and so on), routing (such as for rideshare, logistics, autonomous vehicle control, and so on), simulating traffic, using structured data and/or metrics to simulate how changes to traffic (and/or traffic signals, traffic conditions, and so on) will change traffic patterns, and so on.

[00562] Although the above illustrates and describes performance of functions (such as detection and classification, determination of structured data, and so on) on frames of a raw, real-time video feed from an intersection camera and/or other traffic data, it is understood that this is an example. In various implementations, one or more such functions (and/or other functions) may be performed on other traffic data, such as data from one or more LiDAR sensors.

[00563] LiDAR sensors may be operable to determine data, such as ranges (variable distance), by targeting an object with elements, such as one or more lasers, and measuring the time for the reflected light to return to one or more receivers. LiDAR sensors may generate point cloud data that may be used for the analysis discussed herein instead of frames of a raw, real-time video feed from an intersection camera and/or other traffic data.

[00564] In some examples, functions similar to those described above performed on frames of a raw, real-time video feed from an intersection camera and/or other traffic data (such as detection and classification, determination of structured data, and so on) may be performed on the LiDAR sensor data. In other examples, structured data generated from LiDAR cloud data that has already been detected and classified may be obtained and various metrics may be calculated from such, similar to above, which may then be prepared for visualization and/or visualized and/or otherwise used similar to above.
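As a hedged sketch of feeding LiDAR data into the same pipeline, a point-cloud cluster that has already been detected and classified upstream might be reduced to a centroid and extent and written into a structured record similar to the one used for camera frames; the record layout and names are assumptions made for illustration.

```python
import numpy as np

def cluster_to_record(points: np.ndarray, record_key: int, object_class: str,
                      frame: int, timestamp: float) -> dict:
    """points: (N, 3) array of x, y, z positions for one classified cluster."""
    x_min, y_min, _ = points.min(axis=0)
    x_max, y_max, z_max = points.max(axis=0)
    centroid = points.mean(axis=0)
    return {
        "frame": frame,
        "record_key": record_key,
        "object_class": object_class,
        "x_mid": float(centroid[0]),
        "y_mid": float(centroid[1]),
        "height": float(z_max),          # depth information that video frames lack
        "x_min": float(x_min), "x_max": float(x_max),
        "y_min": float(y_min), "y_max": float(y_max),
        "observation_time": timestamp,
    }

cloud = np.array([[1.0, 2.0, 0.2], [2.5, 2.4, 1.4], [2.0, 1.8, 0.9]])
print(cluster_to_record(cloud, record_key=7, object_class="car",
                        frame=88, timestamp=1695740001.0)["x_mid"])
```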

[00565] LiDAR sensor data may have a number of advantages over frames of a raw, real-time video feed from an intersection camera and/or other traffic data. To begin with, point cloud data from one or more LiDAR sensors may not have the same privacy issues as frames of a raw, real-time video feed from an intersection camera and/or other traffic data as facial and/or other similar images may not be captured. Further, LiDAR sensor data may not be dependent on lighting and thus may provide more reliable data over all times of day and night as compared to frames of a raw, real-time video feed from an intersection camera and/or other traffic data. Additionally, LiDAR sensor data may provide data in three-dimensional space as opposed to the two-dimensional data from frames of a raw, real-time video feed from an intersection camera and/or other traffic data and thus may provide depth, which may not be provided via frames of a raw, real-time video feed from an intersection camera and/or other traffic data.

[00566] For example, a determination may be made about the size of an average vehicle in pixels. This may be used with the LiDAR sensor data to determine the pixels from the center of a vehicle represented in the LiDAR sensor data and then infer the speed of the vehicle. Compared to approaches using frames of a raw, real-time video feed from an intersection camera and/or other traffic data, an assumption may not have to be made about object speed. This may be more accurate, but also may improve the processing speed of computing devices processing the data as functions performed on frames of a raw, real-time video feed from an intersection camera and/or other traffic data to determine speed may not need to be performed and can be omitted since this information may already be represented in LiDAR sensor data. Various configurations are possible and contemplated without departing from the scope of the present disclosure.

[00567] Although the above illustrates and describes a number of examples where traffic is in the context of vehicular and/or non-vehicular traffic on streets, roads, and/or similar structures, it is understood that these are examples. In other examples, functions similar to those discussed above may be performed in the context of other kinds of traffic without departing from the scope of the present disclosure.

[00568] By way of a first example, pedestrian and/or other traffic through one or more parking lots, walkways, and/or other areas related to one or more events and/or event venues may be monitored, analyzed, directed, controlled, simulated, and so on. In another example, cargo in one or more container trucks moving in relation to one or more ports may be monitored, analyzed, directed, controlled, simulated, and so on. In still another example, cargo truck queues related to one or more ports may be monitored, analyzed, directed, controlled, simulated, and so on. In yet another example, various airport traffic (such as pedestrians and/or vehicles approaching an airport, moving through an airport, and so on) may be monitored, analyzed, directed, controlled, simulated, and so on. Various configurations are possible and contemplated without departing from the scope of the present disclosure.

[00569] FIG. 19 depicts a ninth example dashboard display that may be used to visualize LiDAR traffic monitoring.

[00570] As shown, the ninth example dashboard display may include a map showing approach density and/or an average speed tab including an indication of average speed, an indication of speed by approach, and/or an indicator of average platoon ratio. As also shown, the ninth example dashboard display may include an object volume tab that may be selected to show one or more indicators related to object volume, an arrival phase tab that may be selected to show one or more indicators related to arrival phase, and/or an average queue tab that may be selected to show one or more indicators related to average queue. As further shown, the ninth example dashboard display may also include a feed illustrating cloud point LiDAR data on an image and/or other representation of an intersection.

[00571] In some implementations, simulation of a digital twin intersection may be performed with real data. This will now be discussed in detail.

[00572] FIGs. 20A and 20B depict an example of building a digital twin using a Dashboard and OpenStreetMap where FIG. 20A depicts a dashboard with 9 cameras in Bellevue and FIG. 20B depicts a network built with OSM and imported in SUMO (red circles for the camera locations).

[00573] FIG. 21 depicts a visualization of the network with the speed (MPH).

[00574] FIGs. 22A and 22B depict a visualization of the outputs where FIG. 22A depicts a graph of the NOx (Nitrogen Oxide) emissions against time (in seconds) and FIG. 22B depicts a graph of speed against time (in seconds).

[00575] I. Introduction

[00576] SUMO (Simulation of Urban Mobility) is an open source, portable, microscopic, and continuous multi-modal traffic simulation package designed to handle large networks. SUMO is developed by the German Aerospace Center and community users. Using SUMO, any traffic scenario may be simulated, whether it's building a new unique network of roads or creating a digital twin of an intersection of interest. A digital twin is defined as a virtual representation that serves as the real-time digital counterpart of a physical object or process.

[00577] II. Context

[00578] The context may involve intelligent traffic solutions. Products may be built for cities to help them economically achieve their transportation objectives, including reductions in traffic congestion, accidents, and associated emissions.

[00579] Products may use intersection cameras with a machine learning stack to provide insights on that intersection whether it is a count of classified vehicles, speed, or emissions. Using this information, products may be able to pinpoint areas of interest so that a traffic engineer may be able to easily work to reduce congestion, cut emissions, etc.

[00580] III. Building a Digital Twin

[00581] Using a dashboard (the "Dashboard") which overlooks 9 intersections in the city of Bellevue, a digital twin of the intersections may first be created using OSM (OpenStreetMap).

[00582] SUMO provides a program that works with OSM and allows the user to crop out a section of the map to simulate in SUMO and edit it accordingly.

[00583] After the process of importing the network, the parameters of the simulation may be adjusted. SUMO currently allows one to adjust the count of cars, trucks, buses, motorcycles, bicycles, and pedestrians. There are other options such as trams, urban trains, and trains but those may be excluded due to the nature of the simulation.

[00584] IV. Parameter Tuning

[00585] From the Dashboard, data from any intersection, including the counts of classified objects that pass through by hour (or any other time interval), may be retrieved. The classifications of the model may be: bicycle, bus, car, motorbike, person, truck.

[00586] V. Simulation

[00587] Using the method of digital twinning and importing data from the Dashboard, the 9 intersections may be simulated with real data. Only 2 hours' worth of data, taken from 12:00PM to 2:00PM, may be used. The parameters of this simulation may be taken from the GridMatrix Dashboard and may be as follows:

[00588] Time/Duration 2 hours [00589] # of Vehicles 3,125

[00590] # of Pedestrians 472

[00591] There are numerous output files that SUMO may provide after the simulation is over. There is the floating vehicle data, which is the position and speed of the vehicles. The congestion of the vehicles may be seen from the data in the floating vehicle file, as shown in FIG. 21.

[00592] Another file that SUMO may provide is the emission data from the simulation. Using this dataset the output of CO2, CO, HC, NOx, PMx, fuel consumption, and electricity consumption may be seen. Using this information insight on how the intersections are doing may be obtained, and then whether the traffic light signals may need polishing or more/less lanes may be decided.
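A minimal, non-limiting sketch of driving such a simulation and collecting per-step speed and NOx output through SUMO's TraCI Python interface is shown below; the configuration file name is a placeholder, and the sketch assumes SUMO and its traci package are installed and on the path. The same quantities may instead be read from the output files SUMO writes after the run.

```python
import traci  # distributed with SUMO (SUMO_HOME/tools)

def run_simulation(config_file="bellevue_digital_twin.sumocfg"):
    """Step the simulation and record mean speed (m/s) and summed NOx (mg/s) per step."""
    traci.start(["sumo", "-c", config_file])
    speed_series, nox_series = [], []
    while traci.simulation.getMinExpectedNumber() > 0:
        traci.simulationStep()
        vehicle_ids = traci.vehicle.getIDList()
        if vehicle_ids:
            speed_series.append(
                sum(traci.vehicle.getSpeed(v) for v in vehicle_ids) / len(vehicle_ids))
            nox_series.append(sum(traci.vehicle.getNOxEmission(v) for v in vehicle_ids))
    traci.close()
    return speed_series, nox_series

if __name__ == "__main__":
    speeds, nox = run_simulation()
    print(f"{len(speeds)} steps, peak NOx per step: {max(nox):.1f} mg/s")
```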

[00593] VI. Thoughts and Conclusion

[00594] This is just the beginning of what may be achieved by simulating a digital twin intersection with real data. The simulation may not be comprehensive and complete. The output of speed and congestion may be analyzed and networks may then be rebuilt to see what will yield a more optimized network. On top of that, it is possible to analyze emission output from the simulated file and adjust traffic signal light timing to reduce emissions for a smarter city.

[00595] It is also important to note that the behaviors of the vehicles may not be controlled yet in SUMO.

[00596] Secondly, this kind of data may yield more robust and full-bodied output for a traffic engineer. This kind of information may be able to give traffic engineers a full understanding of the intersections that they are working with, alongside the products discussed herein.

[00597] A near miss may be when two objects (such as motorized or non-motorized vehicles, pedestrians, and so on) in traffic almost collide. A collision may be when the two objects actually collide. Near misses/collisions may signal traffic problems that may need to be addressed. Further, near misses may be more challenging to track than actual collisions as near misses may not be reported to insurance providers, law enforcement, and/or other entities. Data regarding detected near misses/collisions may be useful in the ability to visualize various metrics about the traffic, enable adaptive traffic signal control, predict traffic congestion and/or accidents, and aggregate data from multiple population areas for various uses, such as in the auto insurance industry, rideshare industry, logistics industry, autonomous vehicle original equipment manufacturer industry, and so on.

[00598] Traffic near miss/collision detection will now be discussed in detail. Traffic data may be obtained, such as video from intersection cameras, point cloud data from LiDAR sensors, and so on. Object detection and classification may be performed using the data, and structured data may be determined and/or output using the detected and classified objects. Alternatively, structured data may be obtained that has been generated by performing such object detection and classification on traffic data (such as point cloud data from LiDAR sensors). Metrics may be calculated using the structured data. For each frame of traffic data, the metrics may be analyzed to detect whether a near miss/collision occurs between each object in the frame (such as motorized or non-motorized vehicles, pedestrians, and so on) and each of the other objects in the frame. These metrics may be analyzed to evaluate whether or not a group of conditions are met. If the group of conditions are met, a near miss/collision may be detected. This may be recorded in the metrics for the objects involved. In some implementations, one or more indicators may be added to the traffic data and/or to one or more visualizations generated using the metrics, the traffic data, the structured data, and so on.

[00599] In this way, the system may be able to perform near miss/collision detection and/or various other actions based thereon that the system would not previously have been able to perform absent the technology disclosed herein. This may enable the system to operate more efficiently while consuming fewer hardware and/or software resources as more resource consuming techniques could be omitted. Further, a variety of components may be omitted while still enabling traffic near miss detection, reducing unnecessary hardware and/or software components, and providing greater system flexibility.

[00600] FIG. 23 depicts an example system 2300 for traffic near miss/collision detection. The system 2300 may include one or more analysis devices 2301 (which may be implemented using one or more cloud computing arrangements) and/or one or more traffic monitoring devices 2302 (such as one or more intersection and/or other still image and/or video cameras, LiDAR sensors, loops, radar, weather data, Internet of Things sensors, fleet vehicles, traffic controllers and/or other city and/or other population area supplied data devices, navigation app data, connected vehicles, and so on).

[00601] The analysis device 2301 and/or one or more other devices may obtain traffic data (such as video from intersection cameras, point cloud data from Light Detection and Ranging (or “LiDAR”) sensors, and so on) from the traffic monitoring device. The analysis device 2301 and/or one or more other devices may perform object detection and classification using the data and may determine an output structured data using the detected and classified objects.

[00602] Alternatively, the analysis device 2301 and/or one or more other devices may obtain structured data that has been generated by performing such object detection and classification on traffic data (such as point cloud data from LiDAR sensors). The analysis device 2301 and/or one or more other devices may calculate one or more metrics using the structured data. For each frame of traffic data, the analysis device 2301 and/or one or more other devices may analyze the metrics to detect whether a near miss/collision occurs between each object in the frame (such as motorized or non-motorized vehicles, pedestrians, and so on) and each of the other objects in the frame. The analysis device 2301 and/or one or more other devices may analyze the metrics to evaluate whether or not a group of conditions are met. If the group of conditions are met, the analysis device 2301 and/or one or more other devices may determine that a near miss/collision is detected. The analysis device 2301 and/or one or more other devices may record the near miss/collision and/or other data based thereon in the metrics for the objects involved. In some implementations, analysis device 2301 and/or one or more other devices may add one or more indicators to the traffic data and/or to one or more visualizations generated using the metrics, the traffic data, the structured data, and so on.

[00603] The group of conditions may include a variety of factors, such as the distance between the objects, the direction of movement of the objects, the current speed of the objects, and so on.

[00604] In some implementations, a near miss/collision may be detected between a pair of objects when the distance between them is below a distance threshold, the speed of at least one of the objects is higher than a speed threshold, the angle between the vehicles is higher than an angle threshold, the objects are not coming from the same approach (intersection entrance), at least one of the vehicles was not detected in a near miss already in the same traffic data, and the sum of previous speeds of each of the objects is higher than zero (avoiding parked and/or otherwise stationary objects). Near misses/collisions may be calculated in a metrics module and analyzed for every frame. A near miss/collision detection process may include detecting objects in a given frame, making a list of all objects in that frame, analyzing all possible pairs of objects in that frame and, if the group of conditions above are met, determining that a near miss/collision is detected. After a near miss/collision is detected, both objects may be marked as having been involved in a near miss/collision.

[00605] In various implementations, only portions of the traffic data may be analyzed for near misses/collisions. For example, most near misses/collisions may happen within actual intersections between objects of opposing directions. Further, 40% of near misses/collisions may happen with left turns, either a left turn or an exit approach. As such, near miss/collision detection analysis may be restricted to objects within intersections of opposing directions, objects involved in left turns, and so on.

[00606] Accuracy of near miss/collision detection may depend on the traffic monitoring device 2302, the angle to the intersection, and so on. Performance of near miss/collision detection may be evaluated using a near miss/collision data set (traffic data with a near miss/collision) and a number of different performance measurements. These performance measurements may include recall (the number of near misses/collisions detected over the total number of near misses/collisions), precision (the number of near misses/collisions over the total number of possible pairs of objects in the data set), and so on.

[00607] Near misses/collisions may be detected and data stored for such at the frame level. Alternatively and/or additionally, near misses/collisions may be detected and data stored for such at a second level above the frame level, such as a group of frames. This may reduce the amount of data that may be stored.

[00608] Latency of near miss/collision detection may be approximately equal to the latency of the system 2300. This may relate to the time that passes between ingestion of traffic data and processing of a frame.

[00609] In some implementations, a near miss/collision detection procedure may be run every frame, along with other relevant metrics. Such a near miss/collision detection procedure may involve the following steps. The near miss/collision detection procedure may start with object detection and tracking in the frame under analysis. The output of this process may be a list of record keys identifying each object, bounding boxes pointing to the object’s position, class, the total number of frames that the object has appeared in so far (age), the number of frames that have passed since the object has been detected for the last time (time_since_update), and so on. The second step may involve going through this list of objects and processing a group of metrics calculations that involve the near miss/collision detection. These metrics may be calculated on the base of each object. For each object, the following metrics may be calculated that may play a role in the detection of near misses/collisions: direction angle (direction of the object’s movement), conversion factor between pixels and miles, speed in miles per hour (or kilometers per hour and/or other measure), approach ID, intersection entrance, or where the object is coming from. With regards to direction angle, changes in position and previous value of direction angle may be used to calculate the direction of the object and exclude outlier values (such as erroneous values). With regards to the conversion factor, the direction angle of the object within its bounding box may be used to obtain the longitudinal size of the object in pixels. Then, the average size of the object class (such as vehicle class) may be used to get a conversion factor between pixels and miles (or other measurement) for that object in the frame. For example, cars in the United States measure on average 14.7 feet long. With regards to speed in miles per hour (or kilometers per hour and/or other measure), the distance in pixels travelled between the current and last frame may be measured. Then this may be multiplied by the conversion factor to get the distance in miles. Finally, the distance may be divided by the time between frames to get the speed in miles per hour. With regards to approach ID, when an object appears in the traffic data, its distance to all approach center points may be calculated and the closest approach may be attributed to the object. Following the calculation of the metrics mentioned above, a near miss detection determination may be applied as follows. For each object, the distance in pixels to all other objects may be measured and transformed into miles (and/or other measurement) using the conversion factor calculated previously. The conversion factor may only allow calculation of distance in miles correctly for objects that are close to each other, as near miss analysis targets are. For each object, the angle with all other objects may be calculated using the direction angle. Then, a group of conditions may be evaluated for every possible pair with the object under analysis. If the group of conditions are all met, the case may be determined to be a near miss. The groups of conditions may be different for different objects, such as between two vehicles, between a vehicle and a pedestrian, and so on.

[00610] For example, the group of conditions for detecting a near miss/collision between two vehicles may be as follows. The second vehicle may be required to be at a distance lower than a threshold distance, such as 7 feet. Both vehicles may be required to have speed values higher than a speed threshold, such as 0 miles per hour. The angle between the vehicles may be required to be higher than an angle threshold, such as 12 degrees. The second vehicle may be required to be coming from a different approach than the vehicle under analysis.

[00611] By way of another example, the group of conditions to detect a near miss/collision between a vehicle and a pedestrian may be the same as those above, with the following exceptions. The pedestrian may be required to be at a distance lower than a pedestrian distance threshold, such as 5 feet. There may be no conditions related to angles between objects for detecting a near miss/collision between a vehicle and a pedestrian. The vehicle may be required to have a speed higher than a vehicle speed threshold, such as 0.3 miles per hour.
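
The pairwise condition checks described in the two preceding examples could be sketched roughly as follows. The dictionary-based object records, field names, and default threshold values are illustrative assumptions, not the patent's implementation.

```python
FEET_PER_MILE = 5280.0

def near_miss_vehicle_vehicle(obj_a, obj_b, distance_miles, angle_degrees,
                              distance_threshold_ft=7.0, speed_threshold_mph=0.0,
                              angle_threshold_deg=12.0):
    """Example vehicle-vehicle conditions: close, both moving, crossing paths, different approaches."""
    return (distance_miles * FEET_PER_MILE < distance_threshold_ft
            and obj_a["speed_mph"] > speed_threshold_mph
            and obj_b["speed_mph"] > speed_threshold_mph
            and angle_degrees > angle_threshold_deg
            and obj_a["approach_id"] != obj_b["approach_id"])

def near_miss_vehicle_pedestrian(vehicle, pedestrian, distance_miles,
                                 distance_threshold_ft=5.0, speed_threshold_mph=0.3):
    """Example vehicle-pedestrian conditions: closer distance threshold, no angle condition,
    higher vehicle speed threshold."""
    return (distance_miles * FEET_PER_MILE < distance_threshold_ft
            and vehicle["speed_mph"] > speed_threshold_mph
            and vehicle["approach_id"] != pedestrian["approach_id"])
```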

[00612] The analysis device 2301 and/or one or more other devices may perform object detection and classification using the data. For example, objects may be detected and classified as cars, trucks, buses, pedestrians, light vehicles, heavy vehicles, non-motor vehicles, and so on. Objects may be assigned individual identifiers, identifiers by type, and so on. The analysis device 2301 and/or one or more other devices may determine and/or output structured data using the detected and classified objects. The analysis device 2301 and/or one or more other devices may calculate one or more metrics using the structured data. For example, the metrics may involve vehicle volume, vehicle volume by vehicle type, average speed, movement status, distance travelled, queue length, pedestrian volume, non-motor volume, light status on arrival, arrival phase, route through intersection, light times, near misses, longitude, latitude, city, state, country, and/or any other metrics that may be calculated using the structured data.

[00613] The analysis device 2301 may be any kind of electronic device. Examples of such devices include, but are not limited to, one or more desktop computing devices, laptop computing devices, server computing devices, mobile computing devices, tablet computing devices, set top boxes, digital video recorders, televisions, displays, wearable devices, smart phones, digital media players, and so on. The analysis device 2301 may include one or more processors 2303 and/or other processing units and/or controllers, one or more non-transitory storage media 2304 (which may take the form of, but is not limited to, a magnetic storage medium; optical storage medium; magneto-optical storage medium; read only memory; random access memory; erasable programmable memory; flash memory; and so on), one or more communication units 2305, one or more input and/or output devices 2306 (such as one or more displays, speakers, keyboards, mice, track pads, touch pads, touch screens, sensors, printers, and so on), and/or other components. The processor 2303 may execute instructions stored in the non-transitory storage medium to perform various functions.

[00614] Alternatively and/or additionally, the analysis device 2301 may involve one or more memory allocations configured to store at least one executable asset and one or more processor allocations configured to access the one or more memory allocations and execute the at least one executable asset to instantiate one or more processes and/or services, such as one or more near miss and/or collision detection and/or response services, and so on.

[00615] Similarly, the traffic monitoring device 2302 may be any kind of electronic device. Various configurations are possible and contemplated without departing from the scope of the present disclosure.

[00616] Although the system 2300 is illustrated and described as including particular components arranged in a particular configuration, it is understood that this is an example. In a number of implementations, various configurations of various components may be used without departing from the scope of the present disclosure.

[00617] For example, the system 2300 is illustrated and described as including the traffic monitoring device 2302. However, it is understood that this is an example. In various implementations, the traffic monitoring device 2302 may not be part of the system 2300. The system 2300 may instead communicate with the traffic monitoring device 2302. Various configurations are possible and contemplated without departing from the scope of the present disclosure.

[00618] Although the above illustrates and describes performance of functions like detection and classification, determination of structured data, and so on, it is understood that this is an example. In various implementations, one or more such functions may be omitted without departing from the scope of the present disclosure.

[00619] For example, in some implementations, data that has already been detected and classified may be obtained. Various metrics may be calculated from such, similar to above, which may then be prepared for visualization and/or visualized and/or otherwise used similar to above.

[00620] FIG. 24 depicts a flow chart illustrating a first example method 2400 for traffic near miss/collision detection. This method 2400 may be performed by the system 2300 of FIG. 23.

[00621] At operation 2410, an electronic device (such as the analysis device 2301 of FIG. 23) may obtain traffic data, such as video from one or more intersection cameras. At operation 2420, the electronic device may analyze the traffic data. At operation 2430, the electronic device may determine whether or not a near miss/collision occurred. If not, the flow may proceed to operation 2440 and end. Otherwise, at operation 2450, the electronic device may respond to the detected near miss/collision. This may include recording data regarding the near miss/collision, marking the traffic data with one or more near miss/collision indicators, transmitting one or more notifications (such as to one or more cities and/or other municipalities or authorities, emergency responders, and so on for the purpose of summoning emergency services, tracking near misses/collisions, and so on), and so on.
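
At a high level, the flow of operations 2410-2450 could be sketched as follows. The callable parameters and the event structure are placeholders introduced for this sketch only; they are not part of the method as described.

```python
def method_2400(obtain_traffic_data, analyze, respond):
    """Sketch of operations 2410-2450: obtain data, analyze it, respond to any detections."""
    traffic_data = obtain_traffic_data()   # operation 2410: obtain traffic data
    near_misses = analyze(traffic_data)    # operation 2420: analyze the traffic data
    if not near_misses:                    # operation 2430: near miss/collision occurred?
        return                             # operation 2440: end
    for event in near_misses:              # operation 2450: respond (record, mark, notify)
        respond(event)
```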

[00622] In some examples, responding to the detected near miss/collision may include one or more automatic and/or other alerts. By way of illustration, the electronic device may determine within a confidence level, threshold, or similar mechanism that a detected near miss/collision is a collision. In response, the electronic device may automatically and/or otherwise send an alert, such as to a 911 operator and/or other emergency and/or other vehicle dispatcher, emergency and/or other vehicle, vehicle controller, vehicle navigation device, and so on via one or more mechanisms such as cellular and/or other communication network. As such, the collision detection may be external to the vehicle dispatched to render aid and/or perform other actions related to the collision. In some cases, this may add functions to vehicle collision detection systems, add redundancy to vehicle collision detection systems, and so on. The electronic device may also utilize traffic data and/or control other devices, such as to determine the fastest and/or most efficient route to the collision, control traffic signals to prioritize traffic to the collision (such as creating an empty corridor to the collision), and so on. Various configurations are possible and contemplated without departing from the scope of the present disclosure.

[00623] In various examples, when the electronic device determines within a confidence level, threshold, or similar mechanism that a detected near miss/collision is a near miss, the electronic device may record the near miss with other traffic data and/or otherwise analyze such as part of analyzing traffic data. Various configurations are possible and contemplated without departing from the scope of the present disclosure.

[00624] In various examples, this example method 2400 may be implemented as a group of interrelated software modules or components that perform various functions discussed herein. These software modules or components may be executed within a cloud network and/or by one or more computing devices, such as the analysis device 2301 of FIG. 23.

[00625] Although the example method 2400 is illustrated and described as including particular operations performed in a particular order, it is understood that this is an example. In various implementations, various orders of the same, similar, and/or different operations may be performed without departing from the scope of the present disclosure.

[00626] For example, the method 2400 is illustrated and described as including the operation 2440. However, it is understood that this is an example. In some implementations, operation 2440 may be omitted and the electronic device may instead return to operation 2420 and continue analyzing the traffic data. Various configurations are possible and contemplated without departing from the scope of the present disclosure.

[00627] Although the above is described in the context of intersection cameras, it is understood that this is an example. In various implementations, other data sources beyond data extracted from intersection video feeds may be used. This may include weather, Internet of Things sensors, LiDAR sensors, fleet vehicles, city suppliers (e.g., traffic controller), navigation app data, connected vehicle data, and so on.

[00628] Although the above illustrates and describes performance of functions like detection and classification, determination of structured data, and so on, it is understood that this is an example. In various implementations, one or more such functions may be omitted without departing from the scope of the present disclosure.

[00629] For example, in some implementations, data that has already been detected and classified may be obtained. Various metrics may be calculated from such, similar to above, which may then be prepared for visualization and/or visualized and/or otherwise used similar to above.

[00630] FIG. 25 depicts a flow chart illustrating a second example method 2500 for traffic near miss/collision detection. This method 2500 may be performed by the system 2300 of FIG. 23.

[00631] At operation 2510, an electronic device (such as the analysis device 2301 of FIG. 23) may detect all objects in a frame, such as in frames of a video from one or more intersection cameras. At operation 2520, the electronic device may analyze all pairs of objects. At operation 2530, the electronic device may determine whether or not a group of conditions are met (such as one or more of the groups of conditions discussed with respect to FIG. 23 above). If not, the flow may proceed to operation 2540 and end. Otherwise, at operation 2550, the electronic device may determine that a near miss has occurred.

[00632] In various examples, this example method 2500 may be implemented as a group of interrelated software modules or components that perform various functions discussed herein. These software modules or components may be executed within a cloud network and/or by one or more computing devices, such as the analysis device 2301 of FIG. 23.

[00633] Although the example method 2500 is illustrated and described as including particular operations performed in a particular order, it is understood that this is an example. In various implementations, various orders of the same, similar, and/or different operations may be performed without departing from the scope of the present disclosure.

[00634] For example, the method 2500 is illustrated and described as determining that a near miss/collision has occurred. However, it is understood that this is an example. In some implementations, the electronic device may perform one or more actions in response to determining that a near miss/collision has occurred. This may include recording data regarding the near miss/collision, marking the traffic data with one or more near miss/collision indicators, and so on. Various configurations are possible and contemplated without departing from the scope of the present disclosure.

[00635] FIG. 26 depicts a flow chart illustrating a third example method 2600 for traffic near miss/collision detection. This method 2600 may be performed by the system 2300 of FIG. 23.

[00636] At operation 2610, an electronic device (such as the analysis device 2301 of FIG. 23) may analyze traffic video. At operation 2620, the electronic device may determine that a near miss/collision has occurred. At operation 2630, the electronic device may add a near miss/collision indicator to the traffic video.

[00637] In various examples, this example method 2600 may be implemented as a group of interrelated software modules or components that perform various functions discussed herein. These software modules or components may be executed within a cloud network and/or by one or more computing devices, such as the analysis device 2301 of FIG. 23.

[00638] Although the example method 2600 is illustrated and described as including particular operations performed in a particular order, it is understood that this is an example. In various implementations, various orders of the same, similar, and/or different operations may be performed without departing from the scope of the present disclosure.

[00639] For example, the method 2600 is illustrated and described as analyzing traffic video. However, it is understood that this is an example. In some implementations, other traffic data, such as point cloud data from one or more LIDAR sensors, may instead be analyzed. Various configurations are possible and contemplated without departing from the scope of the present disclosure.

[00640] FIG. 27A depicts a first frame 2700A of traffic data video. As shown, the objects 2751A-2751D, 2751F in the first frame 2700A each include an indicator 2752A-2752D, 2752F that indicates that each of the objects 2751A-2751D, 2751F has not been involved in a near miss/collision. As illustrated, the indicators 2752A-2752D, 2752F may be in dashed lines to show that each of the objects 2751A-2751D, 2751F has not been involved in a near miss/collision. However, it is understood that this is an example. In other examples, the indicators 2752A-2752D, 2752F may be green to show that the objects 2751A-2751D, 2751F have not been involved in a near miss/collision. In still other examples, other indicators 2752A-2752D, 2752F may be used without departing from the scope of the present disclosure. FIG. 27B depicts a second frame 2700B of traffic data video. The second frame 2700B may be a subsequent frame to the first frame 2700A of FIG. 27A. Similar to FIG. 27A, the objects 2751A-2751F in the second frame 2700B each include an indicator 2752A-2752F that indicates that each of the objects 2751A-2751F has not been involved in a near miss/collision. FIG. 27C depicts a third frame 2700C of traffic data video. The third frame 2700C may be a subsequent frame to the second frame 2700B of FIG. 27B. Similar to FIGs. 27A and 27B, the objects 2751A-2751D, 2751F-2751I in the third frame 2700C each include an indicator 2752A-2752D, 2752F-2752I that indicates that each of the objects 2751A-2751D, 2751F-2751I has not been involved in a near miss/collision. However, unlike FIGs. 27A and 27B, two of the objects 2751G, 2751H in the intersection are about to have a near miss.

[00641] FIG. 27D depicts a fourth frame 2700D of traffic data video. The fourth frame 2700D may be a subsequent frame to the third frame 2700C of FIG. 27C. As mentioned above, the two objects 2751G, 2751H in the intersection have proceeded and are involved in a near miss. As such, the indicators 2752G, 2752H for those two objects 2751G, 2751H that previously indicated that the two objects 2751G, 2751H had not been involved in a near miss have been modified to indicate that the two objects 2751G, 2751H have been involved in a near miss. However, it is understood that this is an example. In other implementations, the indicators 2752G, 2752H for those two objects 2751G, 2751H that previously indicated that the two objects 2751G, 2751H had not been involved in a near miss may instead be removed and other indicators 2752G, 2752H indicating that the two objects 2751G, 2751H have been involved in a near miss may be added. Various configurations are possible and contemplated without departing from the scope of the present disclosure.

[00642] As illustrated, the indicators 2752G, 2752H may be in solid lines to show that the objects 2751G, 2751H have been involved in a near miss. However, it is understood that this is an example. In other examples, the indicators 2752G, 2752H may be red to show that the objects 2751G, 2751H have been involved in a near miss/collision. In still other examples, other indicators 2752G, 2752H may be used without departing from the scope of the present disclosure.

[00643] FIG. 27E depicts a fifth frame 2700E of traffic data video. The fifth frame 2700E may be a subsequent frame to the fourth frame 2700D of FIG. 27D. As shown, the indicators 2752G, 2752H for the two objects 2751G, 2751H that were involved in the near miss in FIG. 27D still indicate that the two objects 2751G, 2751H were involved in a near miss.

[00644] Although FIGs. 27A-27E are illustrated and discussed above with respect to a near miss, it is understood that this is an example. In various implementations, the present disclosure may detect and respond to collisions between objects 2751A-2751D, 2751F instead of and/or in addition to detecting and responding to near misses. Collisions may be detected and responded to similarly to how near misses are detected and responded to in the present disclosure. Various configurations are possible and contemplated without departing from the scope of the present disclosure.

[00645] Although FIGs. 27A-27E are illustrated and discussed above with respect to particular traffic data, objects, frames, and indicators, it is understood that these are examples. In various implementations, other traffic data, objects, frames, indicators, and so on may be used without departing from the scope of the present disclosure.

[00646] In various implementations, frames of a raw, real-time video feed from an intersection camera and/or other traffic data may be obtained (though it is understood that this is an example and that in other examples other data, such as point cloud LiDAR data, may be obtained and used). Detection and classification may be performed on each frame to identify and classify the objects in the frame. Structured data may then be determined for the objects detected. For example, a frame number may be determined for a frame, an intersection identifier may be determined for a frame, a unique tracker identifier may be assigned to each object detected, the class of the object may be determined (such as person, car, truck, bus, motorbike, bicycle, and so on), coordinates of the object detected in the frame may be determined (which may be determined with reference to known coordinates of the intersection and/or the intersection camera, such as camera longitude, latitude, city, state, country, and so on) (such as the minimum and maximum x positions of the object, the minimum and maximum y positions of the object, and so on), and the like.
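
Purely as an illustrative sketch, such structured data could be represented as a simple per-object record; the field names below are assumptions based on the items listed above rather than a prescribed schema.

```python
from dataclasses import dataclass

@dataclass
class DetectedObject:
    """Illustrative structured-data record for one object detected in one frame."""
    frame_number: int
    intersection_id: str
    tracker_id: int        # unique tracker identifier assigned to the detected object
    object_class: str      # e.g., "person", "car", "truck", "bus", "motorbike", "bicycle"
    x_min: float           # minimum x position of the object in the frame
    x_max: float           # maximum x position of the object
    y_min: float           # minimum y position of the object
    y_max: float           # maximum y position of the object
```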

[00647] Various metrics may be calculated from the structured data mentioned above. For example, a bounding box may be calculated for the object based on one or more x and/or y positions for the object. By way of another example, one or more geometric centers of the object’s bounding box may be calculated for the object in the x and/or y coordinate (such as from the x min and x max, the y min and y max, and so on). By way of still another example, an intersection approach that the object is currently on may be calculated, such as based on a position of the object and a position of the center of the intersection.
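
Continuing the sketch above, the bounding-box center and a crude position-based approach assignment might be computed as follows; the compass-direction heuristic is an assumption for illustration, not the method described in this disclosure.

```python
def bbox_center(obj):
    """Geometric center of the object's bounding box in the x and y coordinates."""
    return ((obj.x_min + obj.x_max) / 2.0, (obj.y_min + obj.y_max) / 2.0)

def approach_from_position(obj, intersection_center):
    """Rough approach assignment from the object's position relative to the intersection center."""
    cx, cy = bbox_center(obj)
    dx, dy = cx - intersection_center[0], cy - intersection_center[1]
    if abs(dx) >= abs(dy):
        return "east" if dx > 0 else "west"
    return "south" if dy > 0 else "north"
```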

[00648] Further, other structured data may be determined from the frames. For example, one or more time stamps associated with frames may be determined and/or associated with other structured data, such as to determine a time at which an object was at a determined x and/or y position. By way of another example, a light phase for the frame may be determined (such as whether a traffic light in the frame is green, red, and so on), though this may instead be determined by means other than image analysis (such as time-stamped traffic light data that may be correlated to a frame time stamp). This may be used to determine the traffic light phase when an object arrived at the intersection, such as by correlating a traffic light phase determined for a frame along with a determination that an object arrived at the intersection in the frame. In yet another example, data for an approach and/or intersection associated with a frame may be determined (such as based on a uniform resource locator of the video feed and/or any other intersection camera identifier associated with the frame, an approach identifier associated with the frame, an intersection identifier associated with the frame, and so on).
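
If the light phase comes from time-stamped traffic controller data rather than image analysis, correlating it to a frame time stamp could be sketched as below; the data layout (a sorted list of timestamped phase changes) is an assumption made for this example.

```python
import bisect

def light_phase_at(frame_timestamp, phase_changes):
    """Return the traffic light phase in effect at frame_timestamp.

    phase_changes is assumed to be a list of (timestamp, phase) pairs sorted by
    timestamp, e.g., [(t0, "green"), (t1, "yellow"), (t2, "red"), ...].
    """
    times = [t for t, _ in phase_changes]
    index = bisect.bisect_right(times, frame_timestamp) - 1
    return phase_changes[index][1] if index >= 0 else None
```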

[00649] The structured data determined for an object in a frame may be used with the structured data determined for the object in other frames to calculate various metrics. For example, the difference between one or more x and/or y positions for the object (such as the difference and/or distance between x or y midpoints of the object’s bounding box) in different frames (such as in a current and a previous frame) may be calculated. Such difference in position between frames, along with times respectively associated with the frames (such as from one or more time stamps) may be used to calculate one or more metrics associated with the speed of the object (such as an average speed of the object during the video feed (such as in miles per hour and/or other units), cumulative speed, and so on). Such difference in position between frames may also be used to calculate various metrics about the travel of the object (such as the direction of travel between frames, how the object left an intersection, whether or not the object made a right on red, and so on). By way of another example, structured data from multiple frames may be used to determine a status of the object (such as an approach associated with the object, how an object moved through an intersection, an approach an object used to enter an intersection, the approach an object used to exit an intersection, and so on), a time or number of frames since the object was last detected (and/or since first detected and so on), whether or not the object is moving, and so on.

[00650] Structured data and/or metrics for individual detected objects and/or other data (such as light phase, time, intersection position, and so on) determined using one or more frames and/or from one or more video feeds from one or more intersection cameras associated with one or more intersections may be used together to calculate various metrics, such as metrics associated with approaches. For example, structured data and/or metrics for individual detected objects associated with an approach identifier (which may be determined based on an association with the intersection camera from which frames of the video feed were obtained) may be aggregated and analyzed to determine one or more approach volumes (such as a number of vehicles (cars, motorbikes, trucks, buses, and so on) in a particular approach, a number of light vehicles (such as cars, motorbikes, and so on) in a particular approach, a number of heavy vehicles (such as trucks, buses, and so on) in a particular approach, a number of cars in a particular approach, a number of trucks in a particular approach, a number of buses in a particular approach, a number of pedestrians in a particular approach, a number of non-motor vehicles in a particular approach, a number of bicycles in a particular approach, and so on), an average queue length (such as in feet and/or another unit of measurement) of a particular approach, and so on.
By way of another example, light status in one or more frames may be tracked and/or correlated with other information to determine a light status, an effective green time (such as a length of time that objects are moving through a particular intersection), an effective red time (such as a length of time that objects are stopped at a particular intersection), a cycle time (such as a length of time that a light is green determined by comparing the light phase across multiple frames), a number of cars that arrived while a traffic light is green, a number of cars that arrived while a traffic light is red, a measure of individual phase progression performance derived from a percentage of vehicle volume arrivals on green, and so on.
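
As a rough sketch of how per-object records might be aggregated into approach-level volumes and arrivals on green, assuming the class names and the record layout used in the earlier sketches:

```python
from collections import Counter

LIGHT_VEHICLE_CLASSES = {"car", "motorbike"}
HEAVY_VEHICLE_CLASSES = {"truck", "bus"}

def approach_volumes(objects):
    """Aggregate detected-object records for one approach into example volume metrics."""
    counts = Counter(obj.object_class for obj in objects)
    light = sum(counts[c] for c in LIGHT_VEHICLE_CLASSES)
    heavy = sum(counts[c] for c in HEAVY_VEHICLE_CLASSES)
    return {
        "vehicle_volume": light + heavy,
        "light_vehicle_volume": light,
        "heavy_vehicle_volume": heavy,
        "pedestrian_volume": counts["person"],
        "bicycle_volume": counts["bicycle"],
    }

def arrivals_on_green(arrival_phases):
    """Count arrivals on green; arrival_phases maps each tracker_id to its phase on arrival."""
    return sum(1 for phase in arrival_phases.values() if phase == "green")
```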

[00651] Other structured data and/or metrics associated with approaches may be calculated. For example, a last stop time may be calculated based on a last time stamp that an object stopped at an approach. By way of another example, a last start time may be calculated based on a last time stamp that an object moved into the intersection at a particular approach. In other examples, an approach identifier for a particular approach may be determined, coordinates for a camera associated with a particular intersection may be determined, a number of lanes associated with a particular approach may be determined, and so on.

[00652] Structured data and/or metrics for individual detected objects and/or other data (such as light phase, time, intersection position, and so on) determined using one or more frames and/or from one or more video feeds from one or more intersection cameras associated with one or more intersections may also be used together to calculate various metrics associated with intersections. For example, a vehicle volume for a particular intersection may be determined by summing objects (such as cars, motorbikes, trucks, buses, and so on) in all approaches of a frame associated with the intersection, a light vehicle volume for a particular intersection may be determined by summing objects (such as cars, motorbikes, and so on) in all approaches of a frame associated with the intersection, a heavy vehicle volume for a particular intersection may be determined by summing objects (such as trucks, buses, and so on) in all approaches of a frame associated with the intersection, a car volume for a particular intersection may be determined by summing cars in all approaches of a frame associated with an intersection, a truck volume for a particular intersection may be determined by summing trucks in all approaches of a frame associated with an intersection, a bus volume for a particular intersection may be determined by summing buses in all approaches of a frame associated with an intersection, a person volume for a particular intersection may be determined by summing people in all approaches of a frame associated with an intersection, a bicycle volume for a particular intersection may be determined by summing bicycles in all approaches of a frame associated with an intersection, arrivals on green may be determined for all approaches of a frame associated with an intersection, arrivals on red may be determined for all approaches of a frame associated with an intersection, a number of near misses in a frame associated with a particular intersection may be determined (which may be calculated based on positions of objects in the frame, such as based on the distance between the geometric centers of the bounding boxes associated with two objects being less than a threshold), a current frame when a light went red may be determined, a frame when a light went green may be determined, and so on.
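
A sketch of the corresponding intersection-level aggregation, assuming the per-approach metric dictionaries and object records from the previous sketches; the pixel distance threshold used for counting near misses is an arbitrary placeholder value.

```python
import math

def intersection_volumes(per_approach_metrics):
    """Sum metrics across all approaches of a frame to get intersection-level volumes."""
    totals = {}
    for approach_metrics in per_approach_metrics:
        for name, value in approach_metrics.items():
            totals[name] = totals.get(name, 0) + value
    return totals

def near_miss_count(objects, threshold_px=50.0):
    """Count object pairs whose bounding-box centers are closer than a pixel threshold."""
    centers = [((o.x_min + o.x_max) / 2.0, (o.y_min + o.y_max) / 2.0) for o in objects]
    count = 0
    for i in range(len(centers)):
        for j in range(i + 1, len(centers)):
            if math.dist(centers[i], centers[j]) < threshold_px:
                count += 1
    return count
```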

[00653] Other information for an intersection may be determined using the video feed, frames, and/or other structured data and/or metrics. For example, an identifier for a camera associated with an intersection may be determined, identifiers for frames of one or more video feeds associated with the intersection may be determined, observation times associated with an intersection may be determined (such as a time stamp based on ingestion time when other metadata from a stream or other video feed is not available), a cumulative time (such as from the start of processing of the video feed) may be determined, and so on.

[00654] Further, the above raw data, structured data, metrics, and so on may be used to detect one or more near misses/collisions. Detection of such near misses/collisions may be performed using one or more of the methods and/or procedures discussed above.

[00655] Although determination and calculation of structured data and/or metrics relating to one or more vehicles and/or other objects, approaches, intersections, and so on are discussed above, it is understood that these are examples. In various examples, any structured data and/or metrics relating to one or more vehicles and/or other objects, approaches, intersections, and so on may be determined and calculated from the objects detected in one or more frames of one or more video feeds of one or more intersection cameras and/or other traffic data without departing from the scope of the present disclosure.

[00656] Alternatively and/or additionally to determining and/or calculating structured data and/or metrics from one or more video feeds from one or more intersection cameras and/or other traffic data, connected vehicle data may be obtained and used. For example, structured data and/or metrics may be determined and/or calculated using a combination of connected vehicle data and data from one or more video feeds from one or more intersection cameras and/or other traffic data. By way of another example, a visualization dashboard may visualize connected vehicle data along with structured data and/or metrics determined and/or calculated from one or more video feeds from one or more intersection cameras and/or other traffic data.

[00657] To summarize the above, real-time video feed from an intersection camera and/or other traffic data may be obtained. Objects in frames of the video feed may be detected and classified. Positions of the objects at various times in the frames of the video feed may be determined, as well as information such as light statuses related to the objects. Differences between the objects in different frames may be used to determine behavior of the objects over time. Such calculated object metrics may be stored, such as in one or more vehicle tables. Such calculated object metrics for objects that are associated with a particular approach may be aggregated in order to determine various approach object volumes and/or other metrics related to the approach, which may then be stored, such as in one or more approach tables. Further, such object metrics for objects that are associated with a particular intersection may be aggregated in order to determine various intersection object volumes and/or other metrics related to the intersection, which may then be stored, such as in one or more intersection tables.

[00658] The above structured data and/or metrics related to one or more vehicles and/or other objects, approaches, intersections, and so on discussed above may then be processed and/or otherwise prepared for visualization and/or one or more other purposes, such as near miss/collision detection. For example, structured data and/or metrics related to one or more vehicles and/or other objects may be stored in one or more vehicle tables, structured data and/or metrics related to one or more intersections may be stored in one or more intersection tables, structured data and/or metrics related to one or more approaches may be stored in one or more approach tables, and so on. Such tables may then be used for visualization and/or one or more other purposes.

[00659] Although the above illustrates and describes performance of functions (such as detection and classification, determination of structured data, near miss/collision detection, and so on) on frames of a raw, real-time video feed from an intersection camera and/or other traffic data, it is understood that this is an example. In various implementations, one or more such functions (and/or other functions) may be performed on other traffic data, such as data from one or more LiDAR sensors.

[00660] LiDAR sensors may be operable to determine data, such as ranges (variable distance), by targeting an object with elements, such as one or more lasers, and measuring the time for the reflected light to return to one or more receivers. LiDAR sensors may generate point cloud data that may be used for the analysis discussed herein instead of frames of a raw, real-time video feed from an intersection camera and/or other traffic data.

[00661] In some examples, functions similar to those described above performed on frames of a raw, real-time video feed from an intersection camera and/or other traffic data (such as detection and classification, determination of structured data, near miss/collision detection, and so on) may be performed on the LiDAR sensor data. In other examples, structured data generated from LiDAR cloud data that has already been detected and classified may be obtained and various metrics may be calculated from such, similar to above, which may then be prepared for visualization and/or visualized and/or otherwise used similar to above.

[00662] LiDAR sensor data may have a number of advantages over frames of a raw, real-time video feed from an intersection camera and/or other traffic data. To begin with, point cloud data from one or more LiDAR sensors may not have the same privacy issues as frames of a raw, real-time video feed from an intersection camera and/or other traffic data as facial and/or other similar images may not be captured. Further, LiDAR sensor data may not be dependent on lighting and thus may provide more reliable data over all times of day and night as compared to frames of a raw, real-time video feed from an intersection camera and/or other traffic data. Additionally, LiDAR sensor data may provide data in three-dimensional space as opposed to the two-dimensional data from frames of a raw, real-time video feed from an intersection camera and/or other traffic data and thus may provide depth, which may not be provided via frames of a raw, real-time video feed from an intersection camera and/or other traffic data.

[00663] For example, a determination may be made about the size of an average vehicle in pixels. This may be used with the LiDAR sensor data to determine the pixels from the center of a vehicle represented in the LiDAR sensor data and then infer the speed of the vehicle. Compared to approaches using frames of a raw, real-time video feed from an intersection camera and/or other traffic data, an assumption may not have to be made about object speed. This may be more accurate, but also may improve the processing speed of computing devices processing the data as functions performed on frames of a raw, real-time video feed from an intersection camera and/or other traffic data to determine speed may not need to be performed and can be omitted since this information may already be represented in LiDAR sensor data. Various configurations are possible and contemplated without departing from the scope of the present disclosure.
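
One possible reading of this, sketched under several assumptions (clustered per-vehicle point clouds expressed in meters and a known sweep interval), is that speed can be inferred directly from centroid displacement between sweeps, with no pixel-to-distance conversion step; this is an illustrative interpretation rather than the disclosed implementation.

```python
import numpy as np

def cluster_centroid(points):
    """Centroid of one vehicle's point cloud cluster; points is an (N, 3) array in meters."""
    return np.asarray(points, dtype=float).mean(axis=0)

def lidar_speed_mph(points_previous, points_current, seconds_between_sweeps):
    """Infer speed from centroid displacement between two LiDAR sweeps.

    Because LiDAR returns real-world ranges, the pixel-to-miles conversion factor used in
    the camera-based procedure is unnecessary here.
    """
    displacement_m = np.linalg.norm(
        cluster_centroid(points_current) - cluster_centroid(points_previous))
    return (displacement_m / seconds_between_sweeps) * 2.23694  # m/s to miles per hour
```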

[00664] In various implementations, a system may include a memory allocation configured to store at least one executable asset and a processor allocation configured to access the memory allocation and execute the at least one executable asset to instantiate a near miss/collision detection service. The near miss/collision detection service may detect objects in a frame, analyze pairs of the objects, determine that a group of conditions are met, and determine that a near miss/collision has occurred.

[00665] In some examples, the group of conditions may include that a distance between an object pair of the pairs of the objects is less than a distance threshold. In a number of such examples, the distance threshold may be approximately 7 feet. By way of illustration, approximately 7 feet may be within 6-8 feet.

[00666] In various examples, the group of conditions may include that a speed of an object of an object pair of the pairs of the objects is greater than a speed threshold. In some such examples, the speed threshold may be approximately zero miles per hour. By way of illustration, approximately zero may be within 1 mile per hour of zero.

[00667] In a number of examples, the group of conditions may include that an angle between an object pair of the pairs of the objects is higher than an angle threshold. In some such examples, the angle threshold may be approximately 12 degrees. By way of illustration, approximately 12 degrees may be within 11-13 degrees.

[00668] In various examples, the group of conditions may include that an object pair of the pairs of the objects are not both coming from a same approach. In some implementations, the group of conditions may include that one of an object pair of the pairs of the objects was not previously determined to be involved in another near miss/collision. In a number of implementations, the group of conditions may include that a sum of previous speeds is higher than zero for both of an object pair of the pairs of the objects.

[00669] In some examples, the group of conditions may include a first group of conditions for a first object pair of the pairs of the objects that are both vehicles and a second group of conditions for a second object pair of the pairs of the objects that include a vehicle and a pedestrian. In a number of such examples, the second group of conditions may include a lower distance threshold than the first group of conditions. In various such examples, the second group of conditions may include no condition related to an angle between the vehicle and the pedestrian. In some such examples, the second group of conditions may include a higher speed threshold than the first group of conditions and the second group of conditions evaluates the vehicle according to the higher speed threshold.

[00670] In a number of examples, the near miss/collision detection service may determine a conversion factor between pixels and a speed measurement. In various such examples, the speed measurement may be in miles per hour. In some examples, the system may further include adding a near miss/collision indicator to the frame.

[00671] In some embodiments, a method of near miss/collision detection may include obtaining traffic data, analyzing the traffic data, determining that a near miss/collision occurred, and responding to the near miss/collision.

[00672] In various examples, responding to the near miss/collision may include determining that the near miss/collision is a collision. In a number of such examples, the method may further include transmitting an alert regarding the collision.

[00673] Although the above illustrates and describes a number of embodiments, it is understood that these are examples. In various implementations, various techniques of individual embodiments may be combined without departing from the scope of the present disclosure.

[00674] In various implementations, a system for traffic monitoring, analysis, and prediction may include a memory allocation configured to store at least one executable asset and a processor allocation configured to access the memory allocation and execute the at least one executable asset to instantiate at least one service. The at least one service may obtain traffic data, perform object detection and classification, determine structured data, calculate metrics using the structured data, prepare processed data for visualization from the metrics, and present the prepared processed data via at least one dashboard.

[00675] In some examples, the at least one service may determine the structured data by determining a frame number for a frame of video, determining an intersection identifier for the frame of video, assigning a unique tracker identifier to each object detected in the frame of video, and determining coordinates of each object detected in the frame of video. In a number of such examples, the at least one service may further determine the structured data by determining the class of each object detected in the frame of video.

[00676] In various examples, the at least one service may calculate the metrics using the structured data by calculating a difference between one or more x or y positions for an object in different frames of video. In some such examples, the at least one service may use the difference along with times respectively associated with the different frames to calculate at least one of the metrics that is associated with a speed of the object. In various such examples, the speed may be an average speed of the object during the video or a cumulative speed of the object.

[00677] In a number of examples, the at least one service may calculate the metrics using the structured data by correlating a traffic light phase determined for a frame of video along with a determination that an object arrived at the intersection in the frame.

[00678] In some implementations, a system for traffic monitoring, analysis, and prediction may include a memory allocation configured to store at least one executable asset and a processor allocation configured to access the memory allocation and execute the at least one executable asset to instantiate at least one service. The at least one service may retrieve structured data determined from point cloud data from LiDAR sensors used to monitor traffic, calculate metrics using the structured data, prepare processed data for visualization from the metrics, and present the prepared processed data via at least one dashboard.

[00679] In various examples, the metrics may include at least one of vehicle volume, average speed, distance travelled, pedestrian volume, non-motor volume, light status on arrival, arrival phase, a route through an intersection, or a light time. In some examples, the at least one service may summon at least one vehicle using at least one of the metrics or the processed data. In a number of examples, the at least one service may track near misses/collisions using at least one of the metrics or the processed data. In various examples, the at least one service may determine a fastest route using at least one of the metrics or the processed data. In some examples, the at least one service may control traffic signals to prioritize traffic using at least one of the metrics or the processed data. In various examples, the at least one service may determine a most efficient route using at least one of the metrics or the processed data.

[00680] In a number of implementations, a system for traffic monitoring, analysis, and prediction may include a memory allocation configured to store at least one executable asset and a processor allocation configured to access the memory allocation and execute the at least one executable asset to instantiate at least one service. The at least one service may construct a digital twin of an area of interest, retrieve structured data determined from traffic data for the area of interest, calculate metrics using the structured data, prepare processed data for visualization from the metrics, and present the prepared processed data in the context of the digital twin via at least one dashboard that displays the digital twin.

[00681] In various examples, the at least one service may simulate traffic via the at least one dashboard using the processed data. In some such examples, the at least one service may simulate how a change affects traffic patterns. In various such examples, the change may alter at least one of a simulation of the traffic, a traffic signal, or a traffic condition.

[00682] In a number of examples, the digital twin may include multiple intersections. In various such examples, the at least one dashboard may include indicators selectable to display information for each of the multiple intersections.

[00683] As described above and illustrated in the accompanying figures, the present disclosure relates to traffic monitoring, analysis, and prediction. Traffic data may be obtained, such as video from intersection cameras. Object detection and classification may be performed using the data. Structured data may be determined and/or output using the detected and classified objects. Metrics may be calculated using the structured data. Processed data may be prepared for visualization and/or other uses. The prepared processed data may be presented via one or more dashboards and/or the prepared processed data may be otherwise used.

[00684] In the present disclosure, the methods disclosed may be implemented as sets of instructions or software readable by a device. Further, it is understood that the specific order or hierarchy of steps in the methods disclosed are examples of sample approaches. In other embodiments, the specific order or hierarchy of steps in the method can be rearranged while remaining within the disclosed subject matter. The accompanying method claims present elements of the various steps in a sample order, and are not necessarily meant to be limited to the specific order or hierarchy presented.

[00685] The described disclosure may be provided as a computer program product, or software, that may include a non-transitory machine-readable medium having stored thereon instructions, which may be used to program a computer system (or other electronic devices) to perform a process according to the present disclosure. A non-transitory machine-readable medium includes any mechanism for storing information in a form (e.g., software, processing application) readable by a machine (e.g., a computer). The non-transitory machine-readable medium may take the form of, but is not limited to, a magnetic storage medium (e.g., floppy diskette, video cassette, and so on); optical storage medium (e.g., CD-ROM); magneto-optical storage medium; read only memory (ROM); random access memory (RAM); erasable programmable memory (e.g., EPROM and EEPROM); flash memory; and so on.

[00686] The foregoing description, for purposes of explanation, used specific nomenclature to provide a thorough understanding of the described embodiments. However, it will be apparent to one skilled in the art that the specific details are not required in order to practice the described embodiments. Thus, the foregoing descriptions of the specific embodiments described herein are presented for purposes of illustration and description. They are not intended to be exhaustive or to limit the embodiments to the precise forms disclosed. It will be apparent to one of ordinary skill in the art that many modifications and variations are possible in view of the above teachings.