Title:
METHOD AND APPARATUS FOR CONTROLLING A MOBILE CAMERA
Document Type and Number:
WIPO Patent Application WO/2020/084288
Kind Code:
A1
Abstract:
A method of controlling a mobile camera (300) is performed at a server (200). The server (200) receives a plurality of live video data streams from one or more mobile cameras (300), and stores data indicative of a time and a location at which each live video data stream was recorded. The server (200) receives a query (110) for video data, the query specifying a time and a location. The server identifies a mobile camera (300) that recorded a video data stream at a time and a location satisfying the query, and sends one or more instructions to the identified mobile camera (300). The instructions are configured to control the identified mobile camera (300).

Inventors:
DIGVA KAVALJEET SINGH (GB)
Application Number:
PCT/GB2019/052995
Publication Date:
April 30, 2020
Filing Date:
October 21, 2019
Assignee:
DIGVA KAVALJEET SINGH (GB)
International Classes:
G06F16/487; G08B13/196; H04N1/21; H04N5/77; H04N21/21; H04N21/2743; H04N21/414; H04N21/4223; H04W4/18
Foreign References:
US20130039542A12013-02-14
US20170054948A12017-02-23
US20150189466A12015-07-02
Attorney, Agent or Firm:
CUPITT, Philip (GB)
Claims:
CLAIMS:

1. A method of controlling a mobile camera, the method being performed at a server and comprising:

receiving a plurality of live video data streams from one or more mobile cameras; storing data indicative of a time and a location at which each live video data stream was recorded;

receiving a query for video data, the query specifying a time and a location; identifying a mobile camera that recorded a video data stream at a time and a location satisfying the query; and

sending one or more instructions to the identified mobile camera, the instructions being configured to control the identified mobile camera.

2. A method in accordance with claim 1, the method further comprising:

storing the received plurality of live video data streams, wherein each of the stored video data streams is associated with the data indicative of the time and location at which it was recorded;

in response to receiving the query for video data, identifying at least one stored video data stream recorded at a time and a location satisfying the query; and

outputting the identified stored video data stream.

3. A method in accordance with any of the preceding claims, wherein the query specifying time and location specifies one or more of a period of time and a range of locations.

4. A method in accordance with any of the preceding claims, wherein the one or more instructions comprise an instruction configured to cause the identified mobile camera to transmit video data at a second resolution, the second resolution being higher than a resolution of a live video data stream received from the identified mobile camera, and

wherein the method further comprises receiving, in response to the instruction, video data at the second resolution from the identified mobile camera.

5. A method in accordance with any of the preceding claims, wherein the one or more instructions comprise an instruction configured to cause the identified mobile camera to store video data recorded by that camera in a video data file.

6. A method in accordance with claim 5, wherein the video data stored in the video data file is associated with the time specified in the query.

7. A method according to any of the preceding claims, wherein the one or more instructions comprise an instruction configured to cause the identified mobile camera to transmit a video data file to the server, and

wherein the method further comprises receiving, in response to the instruction, the video data file from the identified mobile camera.

8. A method in accordance with any of the preceding claims, further comprising determining that the identified mobile camera is not currently transmitting a live video data stream, and wherein the one or more instructions comprise an instruction configured to cause the identified mobile camera to begin transmitting a live video data stream to the server.

9. A method in accordance with any of the preceding claims, comprising:

identifying a plurality of mobile cameras, and sending an instruction to each of the identified mobile cameras to control the identified mobile cameras.

10. A method in accordance with any of the preceding claims, wherein identifying a mobile camera comprises:

determining stored time and location data corresponding to the specified time and location of the query; and

determining one or more mobile cameras corresponding to the determined time and location data.

11. A method in accordance with claim 10, wherein determining the time and location data comprises identifying time and location data within a predetermined range around the specified time and location data of the query.

12. A method in accordance with any of the preceding claims, further comprising: storing data indicative of one or more image features of the video data stream.

13. A method in accordance with claim 12, wherein at least some of the one or more image features are received by the server in response to an instruction from the server.

14. A method in accordance with any of claims 12 or 13, wherein the one or more image features comprise at least one of:

number plate data;

road sign data;

facial feature data; and

speed data of a vehicle.

15. A method of controlling a mobile camera, the method being performed at the mobile camera and comprising:

transmitting a live video data stream to a server;

transmitting, to the server, data indicative of a time and a location at which the live video stream was recorded;

receiving an instruction from the server, the instruction being configured to control the mobile camera; and

in response to receiving the instruction, performing one or more operations in accordance with the received instruction to control the mobile camera.

16. A method in accordance with claim 15, further comprising:

detecting an event using one or more camera sensors; and

in response to detection of the event, performing one or more operations to control the camera.

17. A method in accordance with any of claims 15 or 16, wherein the operations comprise one or more of:

initiating a recording of video data; and/or

transmitting a live video data stream at a first resolution; and/or

transmitting video data at a second resolution, wherein the second resolution is higher than the first resolution of the live video data stream; and/or

storing video data in a file; and/or

transmitting a video data file to the server.

18. A method according to any of claims 15 to 17, the method further comprising: receiving an instruction from the server to identify one or more image features of the live video data stream;

in response to the instruction from the server, analysing video data corresponding to the live video data stream;

identifying one or more image features of the video data; and

transmitting, to the server, data indicative of one or more image features.

19. A method according to claim 18, further comprising:

receiving, from the server, one or more selection criteria;

checking the one or more identified image features against the one or more selection criteria; and

transmitting, to the server, data indicative of image features matching the one or more selection criteria.

20. A method of providing video data, the method being performed at a server and comprising:

receiving a plurality of live video data streams from one or more mobile cameras, wherein each of the live video data streams is associated with data indicative of a location at which it was recorded;

receiving a query for video data, the query specifying a location;

identifying at least one live video data stream recorded at a location satisfying the query; and

outputting the identified video data stream.

21. A method in accordance with claim 20, wherein:

each of the live video data streams is associated with the data indicative of the time at which it was recorded;

the received query further specifies a time; and

identifying at least one live video data stream comprises identifying a live video data stream recorded at a time and a location satisfying the query.

22. A method in accordance with claim 20 or claim 21, wherein outputting the identified video data stream comprises outputting a live video data stream.

23. A method in accordance with any of claims 20 to 22, the method further comprising storing the received plurality of live video data streams, and wherein outputting the identified video data stream comprises outputting a stored video data stream.

24. A method in accordance with any of claims 20 to 23, wherein the query specifies one or more of a period of time and a range of locations.

25. An apparatus configured to perform a method in accordance with any of the preceding claims.

26. A processor-readable medium comprising instructions which, when executed by a processor, cause the processor to perform a method in accordance with any of claims 1 to 24.

27. A computer program product comprising instructions which, when executed by a computer, cause the computer to perform a method in accordance with any of claims 1 to 24.

Description:
METHOD AND APPARATUS FOR CONTROLLING A MOBILE CAMERA

FIELD

The present disclosure relates to methods and apparatuses for controlling a mobile camera. In particular, but not exclusively, the disclosure relates to controlling a plurality of mobile cameras by a server to obtain video data recorded at a particular time and location.

BACKGROUND

There is an increasing presence and use of cameras in vehicles. Mobile cameras in vehicles, commonly known as "dash cams", are increasingly used to monitor events occurring in the vicinity of the vehicles in which they are installed. A user may turn on a dash cam when using a vehicle to record footage during the journey of the vehicle. If an event occurs that is of interest to the dash cam user, they can choose to store the footage the camera has recorded for further use. The footage may be footage of an incident that occurred in the vicinity of the vehicle, such as an accident, a traffic violation, a crime committed on the road, etc. If the footage is not stored within a predetermined amount of time, the footage may be overwritten by newly recorded footage.

If an event occurs, mobile cameras in the vicinity of the event may have recorded footage relevant to the event. However, there is no straightforward way to determine whether footage of an event is available and, if so, what that footage shows. Furthermore, there is no straightforward way to collect such footage from each of the mobile cameras on which it is recorded.

SUMMARY

In one aspect, the present disclosure provides a method of controlling a mobile camera, the method being performed at a server and comprising: receiving a plurality of live video data streams from one or more mobile cameras; storing data indicative of a time and a location at which each live video data stream was recorded; receiving a query for video data, the query specifying a time and a location; identifying a mobile camera that recorded a video data stream at a time and a location satisfying the query; and sending one or more instructions to the identified mobile camera, the instructions being configured to control the identified mobile camera.

The method may further comprise: storing the received plurality of live video data streams, wherein each of the stored video data streams is associated with the data indicative of the time and location at which it was recorded; in response to receiving the query for video data, identifying at least one stored video data stream recorded at a time and a location satisfying the query; and outputting the identified stored video data stream. The query specifying time and location may specify one or more of a period of time and a range of locations.

Various instructions for controlling the identified mobile camera are disclosed. The one or more instructions may comprise an instruction configured to cause the identified mobile camera to transmit video data at a second resolution, the second resolution being higher than a resolution of a live video data stream received from the identified mobile camera. The method may further comprise receiving, in response to the instruction, video data at the second resolution from the identified mobile camera. The one or more instructions may comprise an instruction configured to cause the identified mobile camera to store video data recorded by that camera in a video data file. The video data stored in the video data file may be associated with the time specified in the query. The one or more instructions may comprise an instruction configured to cause the identified mobile camera to transmit a video data file to the server. The method may further comprise receiving, in response to the instruction, the video data file from the identified mobile camera. The one or more instructions may comprise an instruction configured to cause the identified mobile camera to begin transmitting a live video data stream to the server. This instruction may be sent in response to determining that the identified mobile camera is not currently transmitting a live video data stream.

The method may further comprise identifying a plurality of mobile cameras, and sending an instruction to each of the identified mobile cameras to control the identified mobile cameras. Identifying a mobile camera may comprise: determining stored time and location data corresponding to the specified time and location of the query; and determining one or more mobile cameras corresponding to the determined time and location data. Determining the time and location data may comprise identifying time and location data within a predetermined range around the specified time and location data of the query.

The method may further comprise: storing data indicative of one or more image features of the live video data stream. At least some of the one or more image features may be received by the server in response to an instruction from the server. The one or more image features may comprise at least one of number plate data, road sign data, facial feature data, position data of a vehicle, direction data of a vehicle and speed data of a vehicle.

Another aspect of the present disclosure provides a method of controlling a mobile camera, the method being performed at the mobile camera and comprising: transmitting a live video data stream to a server; transmitting, to the server, data indicative of a time and a location at which the live video stream was recorded; receiving an instruction from the server, the instruction being configured to control the mobile camera; and in response to receiving the instruction, performing one or more operations in accordance with the received instruction to control the mobile camera.

The method may further comprise: detecting an event using one or more camera sensors; and in response to detection of the event, performing one or more operations to control the camera. The operations may comprise one or more of: initiating a recording of video data; and/or transmitting a live video data stream at a first resolution; and/or transmitting video data at a second resolution, wherein the second resolution is higher than the first resolution of the live video data stream; and/or storing video data in a file; and/or transmitting a video data file to the server.

The method may further comprise: receiving an instruction from the server to identify one or more image features of the live video data stream. In response to the instruction from the server, the method may further comprise analysing video data corresponding to the live video data stream, and identifying one or more image features of the video data. The method may further comprise transmitting to the server data indicative of one or more image features. The mobile camera may further receive one or more selection criteria from the server. The mobile camera may check the one or more image features against the one or more selection criteria. The mobile camera may transmit to the server data indicative of image features matching the one or more selection criteria.

Another aspect of the present disclosure provides a method of providing video data, the method being performed at a server and comprising: receiving a plurality of live video data streams from one or more mobile cameras, wherein each of the live video data streams is associated with data indicative of a location at which it was recorded; receiving a query for video data, the query specifying a location; identifying at least one live video data stream recorded at a location satisfying the query; and outputting the identified video data stream.

Each of the stored video data streams may be associated with the data indicative of the time at which it was recorded, the received query may specify a time, and identifying at least one live video data stream may comprise identifying a live video data stream recorded at a time and a location satisfying the query.

The method may further comprise storing the received plurality of live video data streams, and outputting the identified video data stream may comprise outputting a stored video data stream. Alternatively or additionally, outputting the identified video data stream may comprise outputting a live video data stream.

The query may specify one or more of a period of time and a range of locations.

In other aspects, the present disclosure provides an apparatus configured to perform any of the methods disclosed herein. In particular, the present disclosure provides a server and/or a mobile camera configured to perform the respective methods disclosed herein. The present disclosure also provides a system including one or more such servers and one or more such mobile cameras.

Another aspect of the present disclosure provides a processor-readable medium comprising instructions which, when executed by a processor, cause the processor to perform any of the methods disclosed herein.

Another aspect of the present disclosure provides a computer program product comprising instructions which, when executed by a computer, cause the computer to perform any of the methods disclosed herein.

BRIEF DESCRIPTION OF THE DRAWINGS

Examples of the disclosure will now be described, by way of example only, with reference to the accompanying drawings, in which:

Figure 1 is a schematic representation of a network comprising a server and a plurality of mobile cameras;

Figure 2 is a schematic representation of the server shown in Figure 1;

Figure 3 is a schematic representation of one of the mobile cameras shown in Figure 1;

Figure 4 is a flow diagram of a method of controlling a mobile camera, as performed by a server;

Figure 5 is a flow diagram of a method of controlling a mobile camera, as performed by the mobile camera;

Figure 6 is a flow diagram of a method of controlling a mobile camera and responding to a query, as performed by a server; and

Figure 7 is a flow diagram of a method of analysing video data and transmitting identified features, as performed by a mobile camera.

DETAILED DESCRIPTION

Generally disclosed herein are methods and apparatuses for controlling a mobile camera. Figure 1 depicts a network 100 comprising one or more servers 200 and a plurality of mobile cameras 300, illustrated as cameras 300(a)-(d), wherein the one or more servers is able to communicate with each of the mobile cameras via a respective communication link 500. For the sake of clarity, only one server 200 is shown in Figure 1, and the singular term "server" is used throughout the present description; it should, however, be appreciated that multiple servers may be used (e.g. as a load-balanced server cluster, as a distributed cloud server arrangement, or any other suitable configuration).

The mobile cameras may send data over communication link 500. Specifically, the mobile cameras 300(a)-(d) may send a live video data stream and/or one or more video data files to the server 200. The mobile cameras 300(a)-(d) may also send, to the server 200, information indicating the time and location at which the live video data stream or video data file was recorded. Optionally, the mobile cameras 300(a)-(d) may also send other sensor information (which is described in more detail below) to the server 200. Video data, or other data, with corresponding location data may be referred to as geotagged data. Video data, or other data, with corresponding time data may be referred to as timestamped data. The server 200 may store the time and location data received from the cameras 300(a)-(d) in a memory. The server 200 may further store the one or more video data files and/or live video data stream in a memory accessible by the server 200.

The server 200 is further configured to be able to receive and process one or more queries 110. A query 110 may request video data and may specify a time and location. The server 200 may be configured to search the stored data to find time and location data satisfying the time and location data of the query 110. The server 200 may identify one or more mobile cameras 300 linked to (in other words, associated with) the time and location data satisfying the query 110. Having identified a mobile camera 300, the server 200 may send one or more instructions to the identified camera in order to control that mobile camera 300. In response to the instructions, the server 200 may receive further data from the mobile camera 300. This further data may also be stored by the server 200. The server 200 may provide a response 120 to the query 110. For example, the server 200 may transmit or otherwise provide access to the data stored on the server 200 and/or the further data received from an identified camera 300.

An advantage of receiving a plurality of live video data streams and corresponding location and time data is that the server has a live, up-to-date overview of the location of the plurality of mobile cameras 300 and the video data those cameras are recording. This live data can be used for determining a response 120 to a query 110. Another advantage of this method is that a plurality of cameras can be controlled centrally by the server 200, for example for the collection of video and other data, and does not rely on local control of a mobile camera 300, for example by a user or owner of the mobile camera 300.
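
By way of illustration only, the ingest-and-query flow described above can be sketched in a few lines of Python. This is a minimal in-memory model, not part of the disclosure; a real server 200 would use a persistent, indexed data store, and all names here (StreamRecord, CameraServer, etc.) are invented for the example.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class StreamRecord:
    """Metadata stored for each timestamped, geotagged stream segment."""
    camera_id: str
    timestamp: float  # Unix time at which the segment was recorded
    lat: float
    lon: float

class CameraServer:
    """Minimal in-memory model of the server 200's query handling."""

    def __init__(self) -> None:
        self.records: list[StreamRecord] = []

    def ingest(self, record: StreamRecord) -> None:
        # Store the time and location data received with a live stream.
        self.records.append(record)

    def identify_cameras(self, t_from: float, t_to: float,
                         in_area: Callable[[float, float], bool]) -> set[str]:
        # Find the cameras whose stored metadata satisfies the query;
        # the server would then send control instructions to each one.
        return {r.camera_id for r in self.records
                if t_from <= r.timestamp <= t_to and in_area(r.lat, r.lon)}
```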

Figure 2 depicts the server 200. The server 200 comprises a processor 210 and a memory 220. The processor 210 is able to execute instructions, stored in the memory 220, to cause the server 200 to perform actions that implement methods described herein. Memory 220 may comprise a data store 230 in which data received by the server 200 may be stored. The data store 230 may alternatively or additionally comprise memory which is physically separate from, but connected to, the server 200. This separate memory may be physically remote from the server 200, and may be connected via a wireless and/or wired connection. The server 200 may further comprise a receiver 240 and a transmitter 250 for receiving and transmitting data, respectively. In an example implementation, the server 200 is a cloud-based server, and comprises an Elasticsearch search engine to facilitate the search and retrieval of information (such as video data and metadata) stored in the data store 230.
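
Purely as an illustration of how such a search might be expressed, the following sketch queries an Elasticsearch index for stream metadata recorded within a time window and within 100 metres of a point. The index name ("camera-streams") and field names ("recorded_at", a date field, and "location", a geo_point field) are hypothetical; only the query DSL itself (bool filter, range, geo_distance) is standard Elasticsearch.

```python
from elasticsearch import Elasticsearch  # official Python client (8.x API)

es = Elasticsearch("http://localhost:9200")

response = es.search(index="camera-streams", query={
    "bool": {
        "filter": [
            # Time window of interest
            {"range": {"recorded_at": {"gte": "2019-10-21T08:00:00Z",
                                       "lte": "2019-10-21T08:05:00Z"}}},
            # Within 100 metres of the location of interest
            {"geo_distance": {"distance": "100m",
                              "location": {"lat": 51.5074, "lon": -0.1278}}},
        ]
    }
})
for hit in response["hits"]["hits"]:
    print(hit["_source"]["camera_id"])  # hypothetical document field
```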

Figure 3 depicts a mobile camera 300, sometimes referred to in this description as a camera 300, which comprises a processor 310 and a memory 320. Within this disclosure, a mobile camera 300 is a camera which may be moved during normal operation of the camera. Specifically, the mobile camera 300 may be moved between different geographical locations, and may record imaging data during this movement. For example, a mobile camera may be placed in a vehicle, such that it can record imaging data while the vehicle is in motion. A dash cam is an example of a mobile camera that can be placed in a vehicle and used to implement the present disclosure. However, it will be appreciated that the present disclosure can also be implemented by other types of mobile camera. A mobile camera 300 may also record imaging data while it is stationary. The processor 310 is able to execute instructions, stored in the memory 320, to cause the mobile camera 300 to perform actions that implement the methods described herein. The mobile camera 300 comprises imaging hardware 370 able to record data, such as video data and image stills. Imaging hardware 370 may comprise optical elements (e.g. a lens and an image sensor), electronic elements, connectivity elements, signal processing elements, other processing elements, and/or software elements for capturing, processing, and transmitting imaging data. Imaging data may comprise, for example, video data, a live video data stream, or image stills. The mobile camera 300 further comprises a receiver 340 and a transmitter 350, for receiving and transmitting data, respectively. The data may be transmitted and/or received over the communication link 500 and may comprise, for example, imaging data, or instructions for controlling the mobile camera 300.

It will be appreciated that the memory 320 of the mobile camera 300 is not sufficiently large to store imaging data indefinitely. The memory 320 of the mobile camera 300 may store imaging data in a loop buffer 330, which has a predetermined amount of storage space available for storing data. A loop buffer is a type of first-in first-out (FIFO) data structure, and may be known as a circular buffer, circular queue, ring buffer or cyclic buffer. The memory structure in which the loop buffer 330 is stored may use data files, or may be implemented as raw storage on a storage medium. Purely by way of example, the storage medium may comprise a Secure Digital (SD™) card or a flash memory (e.g., a NAND flash memory). Once all storage space in the loop buffer 330 is filled, newly recorded imaging data may be stored in a memory location already storing data, thus overwriting the older stored data. The loop buffer 330 is able to retain a set amount of data history, while overwriting the oldest saved data with newer recorded data. The length of the time period of historical imaging data that can be saved in the loop buffer 330 depends on the size (i.e. data storage capacity) of the loop buffer 330, the resolution of the imaging data saved in the buffer 330, and the frequency or rate at which new imaging data is produced for storage in the loop buffer 330, and may further depend on other factors. The size of the loop buffer 330 may be changed (i.e., increased or decreased) by the processor 310. For example, the processor 310 may change the size of the loop buffer in response to a command from the server 200. As another example, the processor 310 may change the size of the loop buffer 330 to allow more or less storage space in the memory 320 to be allocated to a file store 360 (described below). A mobile camera 300 may comprise multiple loop buffers 330, which may store different types of data. For example, a first loop buffer may store video data at a first resolution, and a second loop buffer may store video data at a second resolution.
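
The overwrite behaviour of the loop buffer 330 can be illustrated with a short sketch (a simplified model, with whole video chunks as entries; a real implementation would operate on data files or raw storage as described above):

```python
class LoopBuffer:
    """Fixed-capacity FIFO that overwrites the oldest entry when full."""

    def __init__(self, capacity: int) -> None:
        self.slots = [None] * capacity
        self.next = 0    # index of the slot that will be written next
        self.count = 0   # number of slots currently holding valid data

    def write(self, chunk) -> None:
        # Once all slots are filled, this silently overwrites the oldest.
        self.slots[self.next] = chunk
        self.next = (self.next + 1) % len(self.slots)
        self.count = min(self.count + 1, len(self.slots))

    def history(self):
        # Return the retained chunks in order, oldest first.
        start = (self.next - self.count) % len(self.slots)
        return [self.slots[(start + i) % len(self.slots)]
                for i in range(self.count)]
```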

The memory 320 of the camera 300 may further comprise a file store 360, in which video data files may be saved separate from the loop buffer 330. Unlike imaging data stored in the loop buffer 330, video data files stored in the file store 360 are not automatically overwritten by new recorded data.

Purely by way of example, video data may be stored in the loop buffer 330 in MPEG-2 transport stream (“MPEG-TS”) format. Video data files may be stored in the file store 360 in MPEG-4 Part 14 (“MP4”) format. Other suitable formats, including proprietary formats, may also be used to store imaging data in the loop buffer 330 and the file store 360. Imaging data in the loop buffer 330 may be converted into a different format for storage in the file store 360.

A mobile camera 300 may further comprise one or more sensors for detecting or measuring one or more characteristics of the camera 300, the camera’s environment and/or the imaging data it records. The sensors may include one or more of a location sensor, a gyroscopic sensor, an accelerometer, an infrared sensor, a magnetometer, a thermometer, and a barometer, for example. The location sensor is configured to determine the location of the mobile camera 300. The location sensor may be, for example, a Global Navigation Satellite System (GNSS) sensor. The GNSS sensor may be configured to use any suitable navigation satellite system including, but not limited to, the Global Positioning System, Galileo, GLONASS and/or BeiDou. The location sensor need not necessarily be a GNSS sensor. For example, the location sensor may be configured to determine the location of the mobile camera 300 using a land-based positioning system, e.g. by triangulating a position based on a plurality of network access points, such as cellular telephone base stations. The location sensor may be used to determine the velocity of the mobile camera 300, by determining the difference between the location of the camera at two or more points in time.
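
For example, a velocity estimate from two timestamped GNSS fixes might be computed as follows (an illustrative sketch; the haversine formula gives the great-circle distance between the fixes):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two latitude/longitude fixes."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    a = (math.sin(math.radians(lat2 - lat1) / 2) ** 2
         + math.cos(p1) * math.cos(p2)
         * math.sin(math.radians(lon2 - lon1) / 2) ** 2)
    return 2 * 6371000.0 * math.asin(math.sqrt(a))  # mean Earth radius

def speed_mps(fix_a, fix_b):
    """Speed between two fixes, each given as (unix_time, lat, lon)."""
    t1, lat1, lon1 = fix_a
    t2, lat2, lon2 = fix_b
    return haversine_m(lat1, lon1, lat2, lon2) / (t2 - t1)
```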

Methods of controlling a mobile camera 300 will now be described with reference to Figures 4 to 6.

Figure 4 is a flow diagram of a method that can be executed by a server 200 to control a mobile camera 300. In operation 402, the server 200 receives a plurality of live video data streams from a plurality of mobile cameras 300, for example cameras 300(a)-(d) shown in Figure 1. Each mobile camera 300 may provide a separate live video data stream independently from the other cameras and/or data streams. Each mobile camera 300 also provides data indicating the time and location at which the live video stream was recorded by that camera 300. The data indicating the time and location may be encoded in the live video stream itself. For example, each frame of the live video stream may include metadata that indicates the time and location at which that frame was recorded. Alternatively, the data indicating the time and location at which the live video stream was recorded may be provided to the server in a communication that is separate from the live video stream itself.

A live video data stream may be a transmission of video data, by a mobile camera 300, occurring at substantially the same time as the video data is recorded. A live stream of video data may be received by the server 200 with a slight delay relative to when the data was recorded, for example a delay of 5 seconds, 10 seconds, 30 seconds, or 1 minute. The delay between the video data being recorded and the live video data stream being received by the server 200 may be caused by a plurality of factors, arising at the mobile camera 300 and/or on the communication link 500. The live video data stream must be prepared for transmission to the server 200 by a mobile camera 300. This process may include signal processing of the recorded video data, for example to change the resolution of the video data and/or to change the data format of the video data. This signal processing of the video data takes a finite (non-zero) amount of time. The video data stream may be transmitted across the communication link 500 in data packets, wherein each data packet may comprise video data of a predetermined length of time, for example 1, 5, or 10 seconds. The processing of video data may further comprise preparing these data packets of the video stream. The delay caused by preparation of the data packets may depend on the length of time of video data included in a packet. A mobile camera 300 may wait to collect an amount of video data to fill a data packet, before sending the next data packet. The delay caused by the data packet creation may be at least as long as the duration of video data comprised in a data packet. Once the video data is prepared for sending as part of a live video data stream, the live video data stream will take a finite (non-zero) amount of time to be transmitted over the communication link 500. The capacity and data transfer speed of the communication link 500 may influence the delay with which the live video data stream is received by the server 200.

In operation 404, the server 200 stores the time and location data received from the mobile cameras 300 in its data store 230. The server 200 also stores data indicating which mobile camera 300 provided the time and location data. The data is stored in such a way that it is searchable by the server 200 based on one or both of location and time. For example, the data store 230 may comprise a database configured to store the time and location data. The database may also be configured to store the live video data streams that were received in operation 402.

In operation 406, the server 200 receives a query 110. The query 110 specifies a time and a location. The time may be expressed as a specific time, multiple specific times, a period of time, or multiple periods of time. The time may optionally comprise a date. A time may be specified as a number of hours and minutes (and, optionally, seconds) on a particular date. It will be appreciated that other suitable methods for specifying a time may be used, such as a Unix™ time stamp. The location may be expressed as a specific location, multiple specific locations, a distance range around one or more specific locations, or may specify an area such as, for example, a street, a neighbourhood, a postcode, a range of geographic coordinates, etc. A location may be specified by the geographic coordinates of that location, e.g. GNSS coordinates. More complex queries are also possible. In one example, a query can be formulated to request recordings within an area defined by a travel time, a speed of travel, a starting location and a starting time (e.g. "show me all recordings reachable by travelling for W minutes at X kilometres per hour, when starting from location Y at time Z"). A query of this form can be used to track persons involved in an incident (e.g. a perpetrator, a victim and/or a witness) by retrieving video footage from cameras located at any point along all of their possible routes away from the incident. As another example, a query may be formulated to request recordings at a particular location at a future time. This type of query can be used to schedule recordings, by causing mobile cameras 300 that would otherwise be inactive to begin recording and/or by causing mobile cameras 300 to record data in a different manner (e.g. at a higher resolution and/or by storing video data files in the file store 360) if they are in that particular location.
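
A sketch of how the server might evaluate the reachability query quoted above, approximating the reachable area as a circle of radius speed × travel time around the starting location (illustrative only; distance_m can be any geodesic helper, such as the haversine function shown earlier):

```python
def recordings_reachable(recordings, start_lat, start_lon, start_time,
                         travel_minutes, speed_kmh, distance_m):
    """Recordings within the area reachable by travelling for
    travel_minutes at speed_kmh from the start location and time.
    Each recording is a dict with 'time', 'lat' and 'lon' keys;
    distance_m(lat1, lon1, lat2, lon2) returns metres."""
    radius_m = speed_kmh * 1000.0 / 60.0 * travel_minutes  # km/h -> m/min
    end_time = start_time + travel_minutes * 60
    return [r for r in recordings
            if start_time <= r["time"] <= end_time
            and distance_m(start_lat, start_lon,
                           r["lat"], r["lon"]) <= radius_m]
```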

In operation 408, the server 200 searches the data store 230 for data satisfying the time and location of the query 110. In order to determine whether time and location data satisfies the query 110, the server 200 may compare the stored data to the specified time and location using ranges and/or limits set by the query 110. Additionally or alternatively, the server 200 may use a predetermined range around the specified time and location to determine whether or not the time and location data are considered to satisfy the query 110. For example, the predetermined range may be 5, 10, 20, or 30 seconds around a specified time and/or 10, 20, 50, or 100 metres around a specified location. The predetermined range may be set during configuration of the server 200, may be set by a user of the server 200, or may be included as part of a received query 110.
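
The predetermined-range test of operation 408 reduces to a simple predicate, sketched below with an illustrative 30-second and 100-metre tolerance (an equirectangular approximation is adequate at these distances):

```python
import math

TIME_TOLERANCE_S = 30      # e.g. 30 seconds around the specified time
DIST_TOLERANCE_M = 100.0   # e.g. 100 metres around the specified location

def satisfies_query(record, q_time, q_lat, q_lon):
    """True if a stored record (dict with 'time', 'lat', 'lon') falls
    within the predetermined range around the query's time and location."""
    if abs(record["time"] - q_time) > TIME_TOLERANCE_S:
        return False
    metres_per_degree = 111320.0  # approximate length of one degree of latitude
    dx = (record["lon"] - q_lon) * metres_per_degree * math.cos(math.radians(q_lat))
    dy = (record["lat"] - q_lat) * metres_per_degree
    return math.hypot(dx, dy) <= DIST_TOLERANCE_M
```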

In operation 410, for a set of time and location data satisfying the query 110, the server 200 identifies one or more mobile cameras 300 that recorded a video stream at the time and location satisfying the query 110. Information identifying the mobile camera 300, for example a camera reference or other unique identifier, may be stored in the data store 230 with the time and location data. Information identifying the mobile camera 300 that recorded the video stream may be saved separately from the time and location data itself. Alternatively or additionally, the camera 300 that recorded the video stream may be determined from the time and location data itself, for example by being encoded in the data structure (e.g. using a particular file format) in which this time and location is stored. The data structure may contain metadata identifying the source of the data, which may be an identifier of the mobile camera 300 that recorded the data. Alternatively or additionally, the mobile camera 300 that recorded a video stream may be determined from the video stream, for example from metadata contained in the video stream, wherein that metadata identifies the camera.

In operation 412, the server 200 sends one or more instructions to the camera 300 identified at operation 410 for controlling the identified camera 300. The nature of the instructions may depend, for example, on the time of the data satisfying the query 110, the nature of the query 110, or one or more properties of the identified camera 300. If more than one mobile camera 300 was identified at operation 410, the server 200 may send one or more instructions to each identified camera 300.

As part of the methods disclosed herein, the server 200 may store video data received in the live video data stream in the data store 230. An advantage of storing video data at the server 200 is that the server 200 is able to provide video data in response to a query 110 directly, without requesting further data from a mobile camera 300. The stored video data may be linked to (or, in other words, associated with) the time and location data stored in operation 404. The video data may be stored in the format in which it was received, or may be converted to another format and/or another resolution for storage. Example formats for video data include MP4 or MPEG-TS formats.

A query 110 may comprise a request for video data related to a time and location specified in the query 110. In response to a query 110, the server 200 may identify stored video data that was part of a live video data stream linked to a time and location satisfying the query 110. The identified video data may be only a portion of a video data stream spanning a period of time, for example a portion of stored video data corresponding to a period of time within a live video data stream covering a longer time period. The server 200 may output the identified stored video data as part of a response 120 to the query 110. Outputting stored video data may comprise sending a video data file comprising a copy of the stored video data as part of a response 120 to the query 110. Alternatively or additionally, outputting stored video data may comprise providing information (e.g., a uniform resource locator and/or login data) in the response 120, whereby the information in the response enables a receiver of the response 120 to access the stored video data in the data store 230.

The server 200 is able to control an identified mobile camera 300, which may be in response to a query 110, by sending one or more instructions to the camera 300. These instructions may include an instruction to transmit video data to the server 200, wherein the video data has a second resolution that is higher than the resolution of the live video data stream. The resolution of the live video data stream may be referred to as a first resolution, and may be set by the mobile camera 300 to be sufficiently low so that live streaming over the communication link 500 is possible. Different live video data streams may have different resolutions. A mobile camera 300 may, for example, record video data at a resolution determined by the imaging hardware 370. This resolution may be equal to or higher than the second resolution mentioned above, for example 1080p resolution, ultra-HD resolution, or 4K resolution. The camera 300 may convert the video data to a lower resolution, for example 340 x 480, 600 x 800, or 720p resolutions. The mobile camera 300 may use the first resolution (i.e. the lower resolution) to transmit a live video data stream to the server 200 over the communication link 500. The server 200 may receive second resolution video data in response to the instruction sent to the camera 300.

An advantage of the server 200 having the ability to control the mobile camera 300 to provide video data at a second (i.e. higher) resolution is that the response 120 to a query 110 can provide more detailed video of an event of interest to a party making the query 110, without having to transmit all second resolution data to the server 200. The quantity of data transmitted over the communication link 500 can thus be reduced (which may, in turn, reduce the costs of transmitting data), and the quantity of storage space required at the server 200 may also be reduced. Only video data of interest for a query 110 is transmitted over the link 500 and potentially stored at the server 200.

The one or more instructions sent to the identified mobile camera 300 may comprise an instruction to store video data in a file. The instruction may specify the resolution at which the video data is to be stored, or the resolution may be the default resolution at which camera 300 stores recorded video data. The instruction may specify one or more periods of time and/or one or more locations for which corresponding video data is to be stored in the video data file. The video data file may be stored in the file store 360, where it may be protected from being overwritten by newly recorded data (unless overwriting is specifically instructed). The mobile camera 300 may generate a reference to the video data file that can be used to identify the video data file, and link it to the instruction. The reference to the created video data file may be transmitted and provided to the server 200. Time and location data corresponding to the video data may further be stored or otherwise linked to (i.e. associated with) the stored video data file.
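
On the camera side, handling such an instruction might look like the following sketch. The camera object, its loop_buffer, file_store and transmit methods are all hypothetical stand-ins for the components described above:

```python
import time
import uuid

def handle_store_instruction(camera, t_from, t_to, resolution=None):
    """Copy the requested time span out of the loop buffer into the file
    store, where it is protected from being overwritten, and report a
    reference linking the stored file to the instruction."""
    chunks = [c for c in camera.loop_buffer.history()
              if t_from <= c.recorded_at <= t_to]
    if not chunks:
        return None  # nothing from that span remains in the loop buffer
    ref = str(uuid.uuid4())  # reference identifying the video data file
    camera.file_store.save(ref, chunks, resolution=resolution)
    camera.transmit({"type": "file_stored", "ref": ref,
                     "from": t_from, "to": t_to,
                     "stored_at": time.time()})
    return ref
```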

The one or more instructions sent to the identified mobile camera 300 may comprise an instruction to transmit one or more video data files to the server 200. The instruction may specify a time and/or a location, or a period/range of one or both of these, to identify one or more video data files to be sent. The instruction may comprise a reference to a video data file to be sent. In response to the instruction, the camera 300 may identify the requested video data file(s), and transmit them over the communication link 500 to the server 200.

The server 200 may determine that an identified camera 300, corresponding to time and location data satisfying the query 110, is not currently providing a live video data stream. The server 200 may send an instruction to the identified camera 300, to cause the identified camera 300 to initiate a live video data stream to the server 200. A camera 300 receiving the instruction may be recording, but not transmitting, video data. In response to the instruction, the camera 300 may begin a transmission of a live video data stream to the server 200. Alternatively, the camera 300 may not be recording video data, in which case the instruction can initiate a recording of video data as well as begin a transmission of a live video data stream to the server 200.

In an example query 110, the time and location specified in the query 110 may be satisfied by a current or recent time and location of an identified camera 300. In this instance, recent can be taken to mean within a period of time for which recorded imaging data is still stored in the memory 320 of the camera 300, for example in the loop buffer 330. If the current time satisfies the query 110, the server may send an instruction to the camera 300 to send video data at a second resolution alongside (i.e. at substantially the same time as) the live video data stream. The second resolution may be higher than the first resolution of the live stream, and may not be suitable to be sent as a live stream. Therefore, the camera 300 may implement the instruction by storing the second resolution video data in a file, and transmitting the video data file to the server 200 at a rate which may be slower than a live stream. The instruction may be executed by the camera 300 by storing at least a portion of the loop buffer 330 data in a video data file at the second resolution, and sending the stored video data file over the communication link 500. Once the video data is sent, the corresponding video data file at the camera 300 may be deleted or overwritten. The video data file may be separate from the loop buffer 330, and may for example be stored in the file store 360 of the camera 300. Alternatively, the video data may also be sent straight from the loop buffer 330 to the server 200.

If the current time and location satisfy the query 110, the camera may send second resolution video data to the server 200 for a predetermined period of time (which may be specified in the instruction received from the server 200), or may send second resolution video data until an instruction is received to stop doing so. In cases where a recent time and location satisfy the query 110, the server 200 may send an instruction to the identified camera 300 to send second resolution video data corresponding to the time and location. The mobile camera 300 may respond to this request by retrieving the video data from the loop buffer 330, storing the video data in a file, and transmitting the file. If the video data corresponding to that time is no longer stored in the loop buffer 330 or elsewhere in the memory 320 of the camera 300, the camera 300 may notify the server 200 that the requested video data is no longer available.
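
A camera-side handler for such a request might therefore either serve the footage or report that it has already been overwritten (again, the camera API used here is a hypothetical stand-in):

```python
def handle_fetch_recent(camera, t_from, t_to):
    """Serve a request for recent second-resolution footage, or notify
    the server that the data is no longer held in the loop buffer."""
    chunks = [c for c in camera.loop_buffer.history()
              if t_from <= c.recorded_at <= t_to]
    if not chunks:
        camera.transmit({"type": "unavailable",
                         "from": t_from, "to": t_to})
        return
    ref = camera.file_store.save_temporary(chunks)
    camera.upload_file(ref)        # may be slower than a live stream
    camera.file_store.delete(ref)  # the file may be deleted once sent
```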

Figure 5 is a flow diagram of a method for controlling a mobile camera 300. The method is performed at a mobile camera 300. The method involves the mobile camera 300 transmitting 502 a live video data stream to the server 200, which may be done over the communication link 500. The camera further transmits 504 to the server 200 data indicative of a time and a location at which a live video data stream was recorded. The data may, optionally, be sent separately from the live video data stream itself. The time and location data sent by the camera 300 may also include further data indicating the identity of the camera that recorded this data. The camera 300 may receive 506 an instruction from the server 200 as set out above, and may perform 508 one or more operations to control the camera 300 in response to receipt of the instruction from the server 200.

The mobile camera 300 may comprise one or more sensors for measuring properties relating to the camera 300. A location sensor may be used to determine the location of the mobile camera 300, stored as location data. This location data may be linked to the time at which the location was determined (that is to say, the location data is timestamped), and provided to the server 200. The mobile camera 300 may comprise a gyroscopic sensor, which may be used for determining orientation of the mobile camera 300, stored as orientation data. This orientation data may be provided to the server 200, and may be used to determine what can be seen in imaging data (such as video data) captured by camera 300. Mobile camera 300 may comprise an accelerometer, which may be used to determine acceleration of the mobile camera 300. Mobile camera 300 may further comprise one or more of a thermometer to measure temperature, or a barometer to measure pressure. Mobile camera 300 may further comprise an infrared sensor to measure activity in the vicinity of the camera (for example, to detect an approaching person or vehicle).

One or more of the sensors 380 of the mobile camera 300 may be used for detection and identification of potential incidents. Potential incidents may be identified through detection of events. Examples of events include a sudden acceleration, a sudden deceleration, or a change in angular direction, which may be detected by an accelerometer or a gyroscope. These examples of events may indicate an incident involving an impact affecting movement of mobile camera 300 (e.g. a collision involving a vehicle in which the mobile camera 300 is located). An infrared sensor may be used to detect an approaching individual or vehicle, whose actions and/or movements may be of interest, and may constitute an event. If an event is detected by a sensor 380, this may trigger the camera 300 to begin recording video data. If the camera 300 is already recording during detection of an event, the camera may store the video data recorded in the period of time around this moment in a separate video data file to avoid it being overwritten or deleted by newly recorded footage. Mobile camera 300 may receive and/or store information to allow it to determine when an event detected by a sensor is a notable event that warrants recording and/or storing of video data. The mobile camera 300 may be configured to notify the server 200 if an event is detected. In some implementations, the mobile camera 300 may contact the server only if certain events occur, if more than a threshold number of events occur, or if more than a threshold number of events occur within a predetermined period of time.
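
As an illustration, an accelerometer-based event handler might look like the following sketch (the threshold value and the camera methods are invented for the example; real deployments would tune the detection logic):

```python
ACCEL_THRESHOLD_MPS2 = 25.0  # illustrative threshold for a hard impact

def on_accelerometer_sample(camera, accel_mps2):
    """Treat a sudden acceleration or deceleration as an event: start
    recording if idle, otherwise protect the surrounding footage."""
    if abs(accel_mps2) < ACCEL_THRESHOLD_MPS2:
        return
    if not camera.is_recording():
        camera.start_recording()
    else:
        # Copy the window around the event out of the loop buffer so it
        # cannot be overwritten or deleted by newly recorded footage.
        camera.protect_window(seconds_before=30, seconds_after=30)
    camera.notify_server({"type": "event", "kind": "impact"})
```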

The mobile camera 300 may send video data to the server 200 continuously, as a live video data stream. Alternatively, the mobile camera 300 may send video data to the server 200 in response to the detection of an event by a sensor 380 of the mobile camera 300 or in response to an instruction received over the communication link 500 from the server 200. Alternatively or additionally, the mobile camera 300 may send imaging data in the form of a still image, which may be sent periodically. For example, when a mobile camera is not recording video data, it may record a still image periodically, e.g. every 5, 10 or 30 minutes, and provide the still image to the server 200 with corresponding time and location data. This allows the server 200 to remain aware of the presence of mobile camera 300, even when the camera is not sending a live video data stream.

The mobile camera 300 may be located in a vehicle. Specifically, the mobile camera 300 may be a dash cam. The mobile camera 300 may be powered by an external source connected to the vehicle. Alternatively or additionally, the mobile camera 300 may be powered by an internal battery, which may be rechargeable. For example, a mobile camera 300 located in a vehicle may be powered by the power supply of the vehicle while the vehicle engine is turned on, and may be powered by an internal battery while the vehicle engine is turned off. The internal battery may be charged by the vehicle while the engine is on and/or may be charged separately from the vehicle.

The mobile camera 300 may be connected to the server 200 at least in part by a wireless link forming a communication link 500. Such a link may for example comprise Wi-Fi™ (IEEE 802.11), GSM, GPRS, 3G, LTE, and/or 5G connectivity. For a mobile camera 300 located in a vehicle, the communication link may also comprise a wired connection, for example an Ethernet (IEEE 802.3) connection, to a connectivity hub of the vehicle. The vehicle connectivity hub may then provide a communication link 500 to the server 200, for example using Wi-Fi™ (IEEE 802.11), GSM, GPRS, 3G, LTE, 5G, or other connectivity channels. The mobile camera 300 may comprise both wired and wireless connections, so that it can form a communication link 500 independently or make use of other available connections.

In some instances, the communication link 500 might fail. For example, the communication link 500 might be unavailable in an area where there is no wireless network connectivity, or due to a hardware failure. In such a case, the mobile camera 300 might be notified internally of an unsuccessful data transmission, and may save the first resolution live video data stream in the loop buffer 330 and/or video data files in the mobile camera memory 320, for sending upon restoration of the communication link 500. The mobile camera 300 may reduce the amount of second (i.e. higher) resolution video data stored in order to increase the amount of first (i.e. lower) resolution data stored by the mobile camera 300, to avoid video data loss for a period of time. The mobile camera 300 may continue to timestamp and geotag video data at the time of recording, for transmitting to the server 200 upon restoration of the communication link 500. If the communication link 500 fails for a prolonged period of time, and the mobile camera 300 runs out of memory for storing video data, older video data may be overwritten, and may be lost.
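
This store-and-forward behaviour can be sketched with a bounded queue; deque(maxlen=...) mirrors the described loss of the oldest data when memory runs out (a simplified model only):

```python
from collections import deque

class OutboundQueue:
    """Holds timestamped, geotagged segments while the link is down and
    drains them oldest-first once connectivity is restored."""

    def __init__(self, max_items: int) -> None:
        # When full, appending silently discards the oldest segment,
        # matching the loss of the oldest video data described above.
        self.pending = deque(maxlen=max_items)

    def enqueue(self, segment) -> None:
        self.pending.append(segment)  # segment already carries time/location

    def drain(self, send) -> None:
        # send(segment) should return False if the link fails again.
        while self.pending:
            if not send(self.pending[0]):
                break
            self.pending.popleft()
```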

In another example instance, the communication link 500 may be present, but may be unable to transmit the desired amount of data as part of a live video data stream.

The server 200 may communicate over one or more wired or wireless communication links (including, but not necessarily limited to, the communication link 500). For example, the server 200 may communicate using Ethernet (IEEE 802.3), Wi-Fi™ (IEEE 802.11), GSM, GPRS, 3G, LTE, and/or 5G connections. The data sent over the communication links may be secured using the encryption of the respective connectivity standard. The data transferred over the connection may further be encrypted using data encryption mechanisms known in the art.

The server 200 may receive a query 110 from an authorised party. The server 200 may have (or have access to) a list of authorised parties from which it accepts and handles queries. For example, the list of authorised parties may include a law enforcement agency, such as the police. An authorised party may have secure authentication credentials, which can be added to a query 110 so that the server 200 can authenticate the query 110. The authentication credentials may further indicate to the server 200 the identity of the party presenting the query 110. The query 110 may be sent by the authorised party to the server 200 over a communication link. The communication link may be the same communication link 500 used by the server 200 to communicate with mobile cameras 300, or may be a separate communication link, for example a wired communication link. The communication link used for transmitting one or both of queries 110 and query responses 120 may be a secure communication link. Authentication credentials may be required to authorise a party. Parties which are not authorised may be prevented from presenting a query 110 to the server 200. Alternatively or additionally, a query 110 from a non-authorised party may lack authentication credentials, and the server 200 may as a result refuse to handle and respond to such a query 110.
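
The disclosure does not prescribe a particular authentication scheme; one conventional possibility is a shared-secret HMAC over the query body, sketched below (the party identifiers and key material are illustrative only):

```python
import hashlib
import hmac

AUTHORISED_PARTIES = {"police-unit-7": b"shared-secret-key"}  # illustrative

def authenticate_query(query_bytes, party_id, signature_hex):
    """Accept a query only if it carries valid credentials from a party
    on the authorised list; all other queries are refused."""
    key = AUTHORISED_PARTIES.get(party_id)
    if key is None:
        return False  # party is not on the list of authorised parties
    expected = hmac.new(key, query_bytes, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_hex)
```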

A query 110 may further comprise information regarding the nature of the required response 120 to the query 110. For example, a query 110 may specify one or more destinations, or the format of the response. Example queries may include a request to send video data corresponding to a specified time and location, a request to initiate and send a recording by a camera 300 in a particular location at a current or future time, a request to retrieve data from the server 200, or a request to receive data stored on one or more mobile cameras 300. In addition to time and location data, a query 110 may specify further requirements. Data relating to the specified further requirements may be sent, to the server 200, along with time and location data by a mobile camera 300. For example, a query may specify an orientation of a camera 300 in one direction or a range of directions, which may be determined from data provided by a magnetometer. The instantaneous orientation of the camera 300 may be sent alongside time and location data by the mobile camera 300.

A mobile camera 300 may be owned by a camera-owning party, which may or may not be an authorised party. Different mobile cameras 300 may be owned by different parties. A mobile camera 300 may have local controls which may allow the camera to be controlled by a party other than the server 200. For example, a mobile camera 300 may be controlled by the camera itself based on input and information received from one or more sensors 380, or may be controlled by a party owning and/or using the mobile camera 300. An authorised party may be separate from a party owning and/or using a mobile camera 300. A party using a mobile camera 300 may be able to control the mobile camera 300 manually. For example, a user may be able to initiate or stop a live video data stream or a video data recording.

Figure 6 illustrates a flow chart of an example query 110 and response 120 by a server 200. Similar to Figure 4, in operation 602, the server receives a plurality of live video data streams from a plurality of mobile cameras, wherein each live video data stream is timestamped and geotagged, thereby providing time and location data. In operations 604 and 606, the server 200 stores the time and location data, and video data of the live video data stream, respectively, in a searchable data store 230. In operation 608, the server 200 receives 610 a query 110 from an authorised party. The query 110 specifies a time and location of interest. In response to receipt of the query 110, the server 200 searches the data store 230 for time and location data corresponding to the specified time and location. The server 200 identifies all entries in the data store 230 that have time and location data falling within a predetermined range around the specified time and location data of the query 110. For each identified entry of time and location data, the server transmits 612 the video data corresponding to the time and location data, as part of a response 120 to the query 110. In operation 614, the server 200 identifies a mobile camera 300 responsible for obtaining the video data satisfying the query 110. The server 200 may determine that the identified mobile camera 300 may have access to further information relevant to the query 110. For example, the identified mobile camera 300 may have video data at a higher resolution than the video data stored in the data store 230. In operation 616, the server 200 sends instructions to the identified mobile camera 300 to control the mobile camera 300 to transmit the further relevant information to the server. In the above example, this may involve the mobile camera 300 storing, in a video data file, the relevant video data at a higher resolution than the streamed video data. The video data file may then be transmitted to the server 200 over a communication link 500, along with corresponding time and location data (for example, by timestamping and geotagging the video data). In operation 618, the server 200 receives the further relevant information from the mobile camera 300, which in this example may be the further video data. In operation 620, the further received video data is then stored by the server 200 in the data store 230. Based on its corresponding time and location data, this further video data may be linked to other data corresponding to this video data, for example the video data of the live video data stream, and its corresponding time and location data. In operation 622, the server 200 sends the further relevant data, as part of a response 120 to the query 110, to the authorised party and/or to a destination specified in the query 110.
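
The search at operations 608 to 612 can be pictured as a range query over timestamped, geotagged entries. The following sketch assumes the entries are held as simple records and that the predetermined margins (here TIME_MARGIN_S and DISTANCE_MARGIN_M, both illustrative values) define the range around the queried time and location; a production data store 230 would instead use an indexed database.

```python
# A sketch of identifying data store entries within a predetermined
# range of the queried time and location. Margin values are illustrative.
import math

TIME_MARGIN_S = 60.0        # predetermined range around the queried time
DISTANCE_MARGIN_M = 200.0   # predetermined range around the queried location

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in metres."""
    r = 6371000.0
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(math.radians(lat1)) * math.cos(math.radians(lat2))
         * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def find_matching_entries(entries, query_time, query_lat, query_lon):
    """Return entries whose time and location fall within the margins."""
    return [
        e for e in entries
        if abs(e["timestamp"] - query_time) <= TIME_MARGIN_S
        and haversine_m(e["lat"], e["lon"], query_lat, query_lon) <= DISTANCE_MARGIN_M
    ]
```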

In accordance with the present disclosure, a mobile camera 300 may be configured to analyse video data. For example, the processor 310 of the mobile camera 300 may execute instructions that perform video data analysis. Video data analysis may comprise processing one or more images in the video data recorded by the mobile camera 300. Video data analysis can be performed using a suitable computer vision and/or machine learning technique. For example, the processor 310 may implement a trained neural network (e.g., a convolutional neural network) or a support vector machine in order to analyse video data. Analysing video data may comprise identifying one or more image features that are present in the video data. Examples of image features that may be identified through video data analysis include vehicle registration plates, road signs, facial features of people visible in the video data, and speeds of other vehicles. An advantage of analysing data at the mobile camera 300 instead of at the server 200 is that the mobile camera 300 has access to the recorded video data in a form that has not been compressed for transmission to the server 200. The mobile camera 300 may thus be able to identify image features that would not be identifiable following lossy compression of the video data. The image features that are identified by video data analysis at the mobile camera 300 may be sent to the server 200. As mentioned above, a video data stream may comprise metadata, which may comprise data identifying the mobile camera 300 recording the video stream, and data relating to the time and/or location of the captured video data. The metadata may further include information indicative of image features identified by video data analysis. For example, a frame of the video data stream may be associated with metadata that indicates one or more image features identified in that frame. The mobile cameras 300 may thus generate metadata relating to the identified image features, for providing to the server 200.
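
The per-frame flow on the mobile camera 300 can be sketched as follows. The detect_features callable stands in for whatever trained model the camera runs and is purely a placeholder; the disclosure does not mandate a specific model or metadata layout.

```python
# A sketch of on-camera analysis at the processor 310. detect_features
# is a hypothetical stand-in for the camera's trained model.
def analyse_stream(frames, camera_id, get_time, get_location, detect_features):
    """Yield per-frame metadata describing identified image features."""
    for index, frame in enumerate(frames):
        features = detect_features(frame)   # e.g. plates, road signs, faces
        yield {
            "camera_id": camera_id,         # identifies the recording camera
            "frame_index": index,
            "timestamp": get_time(),        # timestamp for the frame
            "location": get_location(),     # geotag for the frame
            "features": features,           # image features identified in this frame
        }
```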

The server 200 may store the received metadata, including the information indicative of image features identified by video data analysis, in the data store 230. The data store 230 can store the received metadata in such a way that it is searchable (e.g. in a searchable database) in order to allow a specific frame of video data to be retrieved based on the metadata associated with the frame. When the metadata includes the time and/or location of the video data, the image features can be timestamped and/or geotagged. It is thus possible to search the data store 230 to identify the times and/or locations at which a particular image feature was identified in the video data and, optionally, to retrieve frames of the video data containing that image feature.
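
As a minimal sketch of such a searchable store, the metadata could be kept in a relational table keyed by feature type and value. SQLite is used here purely for illustration; the table and column names are assumptions, not part of the disclosure.

```python
# A sketch of a searchable store for image-feature metadata, using
# SQLite for illustration only.
import sqlite3

conn = sqlite3.connect("data_store.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS features (
        camera_id     TEXT,
        frame_index   INTEGER,
        timestamp     REAL,
        lat           REAL,
        lon           REAL,
        feature_type  TEXT,   -- e.g. 'number_plate', 'road_sign', 'face'
        feature_value TEXT    -- e.g. the extracted plate characters
    )
""")

def frames_with_feature(feature_type, feature_value):
    """Find when and where a particular image feature was identified."""
    return conn.execute(
        "SELECT camera_id, frame_index, timestamp, lat, lon "
        "FROM features WHERE feature_type = ? AND feature_value = ?",
        (feature_type, feature_value),
    ).fetchall()
```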

A server 200 may receive a query 110 for video data that specifies one or more image features. The query 110 may specify image feature(s) alternatively or additionally to specifying a time and/or location. The one or more specified image features may be used to identify video data that satisfies the query 110. In response to the query 110, the server 200 may send the video data in which the image feature is present and/or metadata relating to the identified image feature. The server 200 may also provide a location and/or a time at which the image feature was identified or recorded, which may be achieved using timestamps and/or geotags of the image feature and related video data. Parameters of the query 110 may be used to determine which data is provided in response to the query 110. For example, a query 110 may specify a number plate, and request any footage in which this identified feature is present, as well as the time and location of the video footage. The server 200 may send video data (e.g. one or more frames), as well as timestamp data and geotag data corresponding to the image feature in the video data. Analysing video data may comprise analysing a plurality of recorded images separately (i.e. individually) or together. A set of consecutive images may be analysed together to determine changes in the field of view of a mobile camera over time, for example to detect a moving vehicle or pedestrian.

An example of a method of analysing video data will now be described with reference to Figure 7. A server 200 may transmit an instruction to a mobile camera 300, which instructs the mobile camera 300 to analyse the video data that it captures. In operation 702, the mobile camera 300 receives the instruction to analyse video data. For example, the server 200 may transmit an instruction to turn video data analysis at a mobile camera 300 on or off. The instruction may specify a time and/or location at which the mobile camera 300 should analyse data. Analysis may be performed at times specified by the server 200. For example, analysis may be performed continuously, periodically, at a specific time, over a specified time range, etc. Analysis may be performed at locations specified by the server 200. For example, the server 200 may instruct the mobile camera 300 to analyse video data when it is located in a specified area. The instructions sent by the server 200 may specify a time or time range, and a location or area in which video data analysis should be performed by mobile camera 300. Alternatively or additionally, the instructions may instruct the mobile camera 300 to perform one or more specific types of video data analysis, thereby to identify one or more specific types of image feature in the video data. Operation 702 is optional, and the mobile camera 300 may perform video data analysis by default, without the need to be instructed to do so by the server 200.
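
One possible shape for such an instruction is sketched below. The field names and values are illustrative assumptions only; the disclosure does not define a message format.

```python
# An illustrative analysis instruction for operation 702; field names
# are assumptions, not a format defined by the disclosure.
instruction = {
    "action": "enable_analysis",          # or "disable_analysis"
    "analysis_types": ["number_plate"],   # specific types of image feature to identify
    "time_from": "2019-10-21T08:00:00Z",  # analyse only within this time range...
    "time_to": "2019-10-21T18:00:00Z",
    "area": {                             # ...and only within this area
        "lat": 51.5074,
        "lon": -0.1278,
        "radius_m": 5000,
    },
    "schedule": "continuous",             # e.g. continuous, periodic, at a specific time
}
```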

In operation 704, the mobile camera 300 analyses video data. As a result of analysing the video data, at operation 706 the mobile camera 300 identifies one or more image features present in the video data. Examples of identified image features include a vehicle registration plate, a vehicle make (i.e. the name of the vehicle manufacturer), a vehicle model, a vehicle colour, a facial feature, a road sign, the position of another vehicle, the speed of another vehicle, the direction of travel of another vehicle, etc. The mobile camera 300 may generate metadata for each of the identified image features.

The server 200 may control the mobile camera 300 regarding the actions taken in relation to identified image features. The control may be achieved by transmitting instructions to the mobile camera 300, for example at operation 702 and/or in a separate request. The server 200 may instruct a mobile camera 300 to perform analysis on video data to identify one or more image features. The server 200 may further instruct the mobile camera 300 to transmit the one or more identified image features to the server 200. The identified image features may be transmitted to the server in the form of metadata, which may be transmitted along with video data with which the metadata is associated.

In some instances, the mobile camera 300 may be instructed to transmit all metadata relating to an identified image feature to the server 200, as shown in operation 708. The image feature metadata received by the server 200 may be stored in the data store 230, so that it is linked to the received video data and other received metadata. The server 200 may also process the received metadata relating to an identified image feature. For example, the server 200 may compare the received metadata to one or more image features on a predetermined list. If the identified image feature matches a feature on the list, the metadata and/or video data associated with the metadata may be processed further. Specific implementations of processing identified image features will be described in more detail below.

The server 200 may send one or more selection criteria to the mobile camera 300. The selection criteria allow the server 200 to request information relating to specific image features and/or to request video data in which those image features are present. A selection criterion may include a type of image feature and, optionally, a corresponding value for that feature. For example, a selection criterion may specify that the image feature is a vehicle registration plate, and the value of the vehicle registration plate is “ABC 123”. In operation 710, the mobile camera 300 checks identified image features against the selection criteria. If an identified image feature matches 712 the selection criteria, in operation 714 the mobile camera 300 transmits video data (e.g. one or more frames) in which the identified image feature is present to the server 200. Alternatively or in addition, if an identified image feature matches 712 the selection criteria, the mobile camera 300 transmits metadata comprising a value of the identified image feature to the server 200. If an identified image feature does not match 716 the selection criteria, video data in which the identified image feature is present and/or metadata comprising a value of the identified image feature may be stored locally at the mobile camera 300, in operation 718. Alternatively or in addition to being stored locally, video data and/or metadata not meeting the selection criteria may be deleted, or other actions may be taken (for example, the video data may be sent at lower resolution, or when there is a surplus in transmission capacity).
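
Operations 710 to 718 can be sketched as a simple matching routine on the mobile camera 300. Each identified feature is assumed to be a record with a type and a value, and the helpers transmit_to_server and store_locally are hypothetical placeholders for the camera's transmission and storage paths.

```python
# A sketch of operations 710-718. transmit_to_server and store_locally
# are hypothetical placeholders.
def handle_feature(feature, frame, criteria, transmit_to_server, store_locally):
    """Check an identified image feature against the selection criteria."""
    for criterion in criteria:
        type_matches = feature["type"] == criterion["type"]
        value_matches = ("value" not in criterion
                         or feature["value"] == criterion["value"])
        if type_matches and value_matches:
            # Operations 712/714: a match; send the frame and metadata.
            transmit_to_server(frame, feature)
            return
    # Operations 716/718: no match; keep the data locally instead.
    store_locally(frame, feature)
```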

The mobile camera 300 may send the identified image features to the server 200 together with the video data stream (e.g. as part of the video data stream metadata), or separately from the video data stream. An advantage of sending all identified features without using selection criteria (e.g., at operation 708) is that less computing power is needed at the mobile camera 300. This enables the use of cheaper, less complex mobile camera 300 devices. Another advantage of not using selection criteria at the mobile camera 300 is that latency in providing data to the server 200 may be reduced. On the other hand, an advantage of using selection criteria is that the amount of image feature metadata sent to the server 200 can be reduced. This may reduce the data transfer requirements for transmitting the video data stream and related metadata from the mobile camera 300 to the server 200.

A mobile camera 300 can receive different selection criteria for different types of image features. For example, a mobile camera 300 may be instructed to send all road sign information, but to only send facial feature data that matches a facial feature on a predetermined list provided to the mobile camera 300.

Examples of different types of image features will now be described in more detail.

In a first example of an image feature, the type of image feature identified through video data analysis is a vehicle registration plate, also known as a number plate. This identification process may be referred to as Automatic Number Plate Recognition (ANPR). Identified image features that correspond to number plates may be referred to as identified number plates. The server 200 may instruct the plurality of mobile cameras 300 to identify number plates (e.g. at operation 702). The mobile cameras can be configured to identify number plates through the use of known computer vision techniques to detect the presence of a number plate in an image, and to extract alphanumeric characters therefrom. The mobile cameras 300 may return all identified number plates to the server 200 (e.g. at operation 708 or 714), or may return specific number plates matching a selection criterion to the server 200 (e.g. at operation 714). In an exemplary use case of ANPR, the server 200 may use the plurality of mobile cameras 300 to locate one or more predetermined vehicles. For example, the server 200 may have access to a list of vehicles of interest, which may include untaxed vehicles, stolen or missing vehicles, vehicles linked to criminal activities, or otherwise wanted vehicles. The list of vehicles of interest may include the number plate of each vehicle of interest. The server 200 may instruct one or more mobile cameras 300 to analyse video data to identify number plates. The server 200 may instruct the mobile cameras 300 to transmit all identified number plates to the server 200. The server may then check received identified number plates against the predetermined list. The predetermined list may comprise sensitive or confidential data. As a result, it may be advantageous to check the identified number plate against the predetermined list at the server side, so that the predetermined list does not need to be provided to a mobile camera 300.
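
A heavily simplified ANPR sketch is given below, assuming the opencv-python and pytesseract packages. A generic cascade detector and general-purpose OCR are used purely for illustration; a deployed system would use a trained plate detector and validate the extracted characters against plate formats.

```python
# A simplified ANPR sketch: detect candidate plate regions, then
# extract alphanumeric characters with OCR. Illustrative only.
import cv2
import pytesseract

plate_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_russian_plate_number.xml")

def identify_number_plates(frame):
    """Detect plate regions in a frame and extract their characters."""
    grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    plates = []
    for (x, y, w, h) in plate_cascade.detectMultiScale(grey, 1.1, 4):
        roi = grey[y:y + h, x:x + w]                      # crop the candidate plate
        text = pytesseract.image_to_string(roi, config="--psm 7").strip()
        if text:
            plates.append({"type": "number_plate", "value": text})
    return plates
```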

As an alternative to checking identified number plates at the server 200, the server 200 may provide one or more selection criteria to the plurality of mobile cameras. The selection criteria may, for example, comprise the list of vehicles of interest. The mobile camera 300 may then check identified number plates against this list, and may transmit any identified number plates that match the predetermined list to the server 200. The pre-transmission selection may reduce data transmission and storage capacity requirements. The server 200 may also implement a combination of the two approaches described above, checking identified number plates at the server 200 side for some of the plurality of mobile cameras 300, and providing selection criteria to other mobile cameras 300.

As mentioned above, image features such as identified number plates can be timestamped and/or geotagged. It is thus possible for the server 200 to identify the time and/or location at which a particular vehicle of interest was seen by the mobile cameras 300. The location of a vehicle of interest can be tracked over a prolonged period of time because, even if the vehicle of interest moves out of the field of vision of one mobile camera, it may enter the field of vision of another mobile camera.

Thus, in another exemplary use case of ANPR, the server 200 may be used to track a vehicle of interest using a plurality of mobile cameras 300 in the vicinity of the vehicle of interest. The server 200 may send instructions to the plurality of mobile cameras 300 to detect the vehicle of interest. The instructions may comprise a selection criterion that includes the number plate of the vehicle of interest. The instructions may further comprise a specified location, around a known or suspected location of the vehicle of interest; alternatively or in addition, the instructions may be sent to mobile cameras 300 that are in the known or suspected location of the vehicle of interest. As with the previous use case of ANPR that was described above, checking identified number plates against that of the vehicle of interest may be performed by the mobile camera 300 or by the server 200. The server 200 may send updates to the instructions for tracking the vehicle of interest. Updates to the instructions may include an update to the location of the vehicle of interest, for example based on location information provided by one or more of the mobile cameras 300.

In another exemplary use case of ANPR, the server 200 may use the plurality of mobile cameras 300 to identify cases of vehicle cloning. Vehicle cloning in this context refers to copying of a number plate, such that one vehicle illicitly bears the number plate of another vehicle. The server 200 may detect vehicle cloning if a number plate is identified by multiple mobile cameras 300 within a short time period, in multiple locations that would not be reachable by the same vehicle in that time period.
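
A minimal sketch of that plausibility check follows. Each plate sighting is assumed to carry a timestamp and geotag; MAX_SPEED_MPS is an illustrative threshold, not a value taken from the disclosure.

```python
# A sketch of the vehicle cloning check: the same plate seen in two
# places it could not plausibly reach in the elapsed time.
import math

MAX_SPEED_MPS = 70.0  # ~250 km/h; exceeding this implies two vehicles sharing one plate

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in metres."""
    r = 6371000.0
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(math.radians(lat1)) * math.cos(math.radians(lat2))
         * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def looks_cloned(sighting_a, sighting_b):
    """True if one plate appears in two places it could not reach in time."""
    dt = abs(sighting_a["timestamp"] - sighting_b["timestamp"])
    dist = haversine_m(sighting_a["lat"], sighting_a["lon"],
                       sighting_b["lat"], sighting_b["lon"])
    return dist > MAX_SPEED_MPS * max(dt, 1.0)
```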

In addition to identifying a number plate of a vehicle, a mobile camera 300 may be equipped to determine one or more further characteristics of a vehicle, such as the model of the vehicle, the make of the vehicle, the colour of the vehicle, or any other visible characteristics of the vehicle (e.g., damage to the vehicle, stickers or decorations on the vehicle, etc.). A mobile camera 300 may determine such further characteristics for all vehicles, or the server 200 may provide one or more selection criteria regarding further characteristics. For example, the server 200 may request further characteristics for the vehicles having identified number plates matching a predetermined list of number plates of interest. As another example, the server 200 may instruct the mobile cameras 300 to identify all white vehicles manufactured by Ford. Through the use of such further characteristics, the network of mobile cameras 300 can be used to locate a vehicle that is suspected of being involved in an incident, even where the number plate of the vehicle is unknown.

The server 200 can use the number plate of a vehicle in combination with the further characteristics of the vehicle to detect when a number plate has been affixed to an incorrect vehicle. For example, the server 200 can query an authoritative data source (such as a database operated by a vehicle licensing or registration authority, or an insurance provider) to determine the make, model and/or colour of the vehicle for a given number plate. The make, model and/or colour provided by the authoritative data source can be compared with the make, model and/or colour identified by a mobile camera 300. A discrepancy between the make, model and/or colour identified by a mobile camera 300 and that provided by the authoritative data source may indicate that the number plate has been affixed to an incorrect vehicle. In the event of a discrepancy, the server 200 can alert a law enforcement authority to the possibility of criminal activity. The alert can include video data (e.g. one or more frames) that includes the vehicle bearing an incorrect number plate, and can optionally further include the time and/or location at which that vehicle was identified.
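
The comparison can be sketched as follows, where lookup_registration is a hypothetical client for the authoritative data source; no real licensing-authority API is implied.

```python
# A sketch of the discrepancy check. lookup_registration is a
# hypothetical client for the authoritative data source.
def plate_mismatch(observed, lookup_registration):
    """Compare camera-identified vehicle characteristics with the
    authority's record for the observed number plate."""
    record = lookup_registration(observed["plate"])  # e.g. {"make": ..., "model": ..., "colour": ...}
    if record is None:
        return True  # plate not registered at all
    return any(
        observed.get(key) and record.get(key)
        and observed[key].lower() != record[key].lower()
        for key in ("make", "model", "colour")
    )
```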

In a second example of an image feature, the type of image feature identified through video data analysis is a road sign. In this example, a mobile camera 300 is configured to detect the presence of a road sign. The mobile camera 300 may further be configured to interpret the meaning of an identified road sign. For example, the mobile camera 300 may be configured to determine a speed limit indicated by the road sign. As another example, the camera may be configured to determine a warning, prohibition, instruction or other information conveyed by the road sign. Alternatively or additionally, road sign interpretation may be performed by the server 200. Interpretation of road signs can be achieved through the use of known computer vision techniques to analyse the visual content (e.g. alphanumeric characters or drawings) in the road sign.

Data relating to road sign detection and/or interpretation may be collected by a server 200 by sending instructions to one or more mobile cameras 300 to analyse video data to identify road signs. A mobile camera 300 may transmit all identified road sign data to the server 200. Alternatively, the server 200 may use one or more selection criteria to set conditions for which road sign information to receive from the mobile cameras 300. For example, a server 200 may send a selection criterion to cause the mobile cameras 300 to only transmit road sign data related to speed limits.

A server 200 may control a plurality of mobile cameras 300 to build and maintain an up-to-date road sign library. In one example, the server 200 builds and maintains an up-to-date road speed library, based on identified speed limit signs. As mentioned above, image features such as road signs can be timestamped and/or geotagged. The geotags can be used to associate an identified road sign with a particular road. The timestamps can be used to update the library, by identifying old road sign information and replacing it with newly-identified road sign information. An advantage of using a plurality of mobile cameras 300 as described herein to detect and interpret road signs is that up-to-date information about road signs can be collected over a large geographical area. Mobile cameras 300 may, for example, capture changes to road signs due to accidents or construction works. Mobile cameras 300 may also capture up-to-date information from smart road systems, such as smart motorways. For example, the mobile cameras 300 may provide frequent updates to the road sign library by interpreting variable message road signs (e.g. road signs setting a variable speed limit) on a smart road system.
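
The update rule for the library is essentially to keep the newest sighting per sign. The sketch below assumes the library is keyed by a road identifier derived from the geotag together with the sign type; the structure is illustrative only.

```python
# A sketch of keeping the road sign library current: newer sightings
# replace older entries for the same sign. Structure is illustrative.
def update_sign_library(library, sighting):
    """Replace older road sign information with newly-identified data."""
    key = (sighting["road_id"], sighting["sign_type"])  # e.g. ("A1/J5", "speed_limit")
    existing = library.get(key)
    if existing is None or sighting["timestamp"] > existing["timestamp"]:
        library[key] = {
            "value": sighting["value"],         # e.g. "50" for a variable limit
            "timestamp": sighting["timestamp"],
            "lat": sighting["lat"],
            "lon": sighting["lon"],
        }
```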

A server 200 may determine the frequency and locations at which mobile cameras 300 identify road signs. For example, in locations with variable message road signs, a server 200 may instruct mobile cameras 300 to always analyse road sign data, in order to maintain up-to-date road sign information. In other locations, the server 200 may instruct mobile cameras 300 to analyse data periodically, for example once a day or once a week.

In a third example of an image feature, the type of image feature identified through video data analysis is a facial feature of a person visible in the video data captured by a mobile camera. As used herein, the term “facial feature” refers to any information derived from image analysis of a human face in the video data. For example, a facial feature may include video data (e.g. a frame, or a portion of a frame) in which a face has been detected. As another example, a facial feature may include biometric information that can be used to identify a person whose face is present in the video data.

In this example, a mobile camera 300 is configured to detect the presence of a face in the video, using known computer vision techniques. After detecting a face, the mobile camera 300 may send a frame of video data containing the face (or a portion of the frame that has been cropped to the boundaries of the face) to the server 200 for further analysis of the detected face. Alternatively or in addition, the mobile camera 300 itself may perform further analysis of the detected face. Further analysis of the detected face may include determining biometric information from the image of the face, using known computer vision techniques. As mentioned above, image features such as facial features can be timestamped and/or geotagged, thus allowing the location of a particular person at a particular time to be determined. The server 200 can instruct one or more mobile cameras 300 to identify facial features. The instructions to identify facial features may be location specific and/or time specific. For example, an area-based selection criterion for a particular facial feature may be sent to the plurality of mobile cameras 300. In response, mobile cameras 300 in the specified area may analyse video data and check identified facial features against the particular facial feature specified by the selection criterion. A mobile camera 300 may send identified facial features to the server 200. The selection criteria may also specify a time range in which to analyse video data in an area.
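
Face detection and cropping on the mobile camera 300 can be sketched as below, assuming the opencv-python package. A generic cascade detector is used for illustration only; any subsequent biometric analysis of the cropped faces would use a separate model.

```python
# A sketch of detecting faces and cropping frames to face boundaries
# before sending them to the server 200. Illustrative only.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def crop_faces(frame):
    """Return portions of the frame cropped to detected face boundaries."""
    grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return [frame[y:y + h, x:x + w]
            for (x, y, w, h) in face_cascade.detectMultiScale(grey, 1.1, 5)]
```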

In one example, the server 200 and the plurality of cameras 300 may be used to verify or follow up on a reported sighting of a specified person in a particular area, e.g. a wanted person or a missing person. A sighting may be reported to the server 200. In response, the server 200 may send instructions to a plurality of mobile cameras 300 to analyse video data for facial features in the particular area, starting immediately, either for a specified duration or until a stop instruction is sent. The plurality of mobile cameras 300 may transmit identified facial features to the server 200. The server 200 may use this data to locate and/or track the specified person.

In a fourth example of an image feature, the type of image feature identified through video data analysis is the speed of other vehicles. In this regard, the term “other vehicle” refers to a different vehicle from that in which a mobile camera is located, although it should be understood that the mobile camera does not need to be located in a vehicle in order to identify the speed of other vehicles. A mobile camera 300 may be configured to infer the speed of other vehicles using computer vision techniques. One or more mobile cameras 300 may analyse video data to identify another vehicle, for example using ANPR as described above. The one or more mobile cameras 300 may track the movement of the identified vehicle over time, for example by estimating a position and change in position over time relative to each of the one or more mobile cameras 300. The one or more mobile cameras 300 may make use of other data, for example the speed and/or location of the vehicle in which a mobile camera 300 is located, in order to make an estimation of the speed of the identified vehicle. This may, for example, be used to detect potential speeding offences. For example, a mobile camera 300 may send the number plate and the location of a speeding vehicle to the server 200, whereupon a law enforcement agency can be alerted to the offence. The mobile camera 300 may be configured to infer the direction of travel of other vehicles. The direction of travel of other vehicles can be inferred using similar computer vision techniques to those used to infer the speed of other vehicles, in combination with knowledge of the mobile camera’s own direction of travel (which can be derived from changes to its location over a period of time). The mobile camera 300 can send the direction of travel of another vehicle to the server 200 (optionally along with the number plate, location and/or speed of the other vehicle).
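
For the simple case of a vehicle ahead travelling in the same direction, the estimation reduces to combining the camera vehicle's own speed with the rate at which the range to the other vehicle changes. The sketch below assumes the camera can estimate that range at two instants (for example from the apparent size of the vehicle); a real system would fuse many more signals.

```python
# A rough sketch of estimating another vehicle's speed from two range
# estimates and the camera vehicle's own speed. Illustrative only.
def estimate_other_speed(range1_m, range2_m, dt_s, own_speed_mps):
    """Estimate the speed of a vehicle ahead travelling the same way.

    range1_m, range2_m: estimated distance to the other vehicle at two
    instants dt_s seconds apart. A shrinking range means we are closing.
    """
    closing_speed = (range1_m - range2_m) / dt_s  # positive when catching up
    return own_speed_mps - closing_speed
```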

It will be appreciated that image features other than those listed above may be identified through video data analysis, and that the methods described above for handling and processing identified features also apply. It will also be appreciated that the various examples of image features described above can be combined.

The above paragraphs have described methods using a network of mobile cameras 300. It should be appreciated that, in addition to mobile cameras 300, the network may also comprise one or more stationary cameras, for example one or more pan-tilt-zoom cameras. A stationary camera can be understood to be a camera that remains substantially in the same location during normal use, but may be moveable, for example to be brought from one location to another.

The methods disclosed herein can be performed by instructions stored on a processor-readable medium. The processor-readable medium may be: a read-only memory (including a PROM, EPROM or EEPROM); random access memory; a flash memory; an electrical, electromagnetic or optical signal; a magnetic, optical or magneto-optical storage medium; one or more registers of a processor; or any other type of processor-readable medium. In alternative embodiments, the present disclosure can be implemented as control logic in hardware, firmware, software or any combination thereof. The apparatuses disclosed herein may be implemented by dedicated hardware, such as one or more application-specific integrated circuits (ASICs) or appropriately connected discrete logic gates. A suitable hardware description language can be used to implement the methods described herein with dedicated hardware.

It will be appreciated by the person skilled in the art that various modifications may be made to the above described embodiments, without departing from the scope of the invention as defined in the appended claims. Features described in relation to various embodiments described above may be combined to form embodiments also covered in the scope of the invention.