


Title:
SELECTING A RECORDING DEVICE OR A CONTENT STREAM DERIVED THEREFROM
Document Type and Number:
WIPO Patent Application WO/2017/081356
Kind Code:
A1
Abstract:
This specification describes a method comprising selecting a recording device or a content stream derived therefrom from a plurality of recording devices or corresponding content streams on the basis of directional information derived from receipt of a wireless signal from a trackable device and sensor data derived from at least one sensor associated with the trackable device, the directional information being indicative of a direction towards the trackable device.

Inventors:
REUNAMÄKI JUKKA (FI)
SALOKANNEL JUHA (FI)
ROWSE GRAHAM (GB)
Application Number:
PCT/FI2015/050771
Publication Date:
May 18, 2017
Filing Date:
November 09, 2015
Assignee:
NOKIA TECHNOLOGIES OY (FI)
International Classes:
H04N7/18; H04N5/225; H04N5/232; H04N5/76
Foreign References:
EP2150057A2 (2010-02-03)
US20130300832A1 (2013-11-14)
US20110050928A1 (2011-03-03)
US20050212918A1 (2005-09-29)
EP1480450A2 (2004-11-24)
US20030023974A1 (2003-01-30)
Attorney, Agent or Firm:
NOKIA TECHNOLOGIES OY et al. (FI)
Claims:
Claims

1. A method comprising:

selecting a recording device or a content stream derived therefrom from a plurality of recording devices or corresponding content streams on the basis of directional information derived from receipt of a wireless signal from a trackable device and sensor data derived from at least one sensor associated with the trackable device, the directional information being indicative of a direction towards the trackable device.

2. The method of claim 1, further comprising one of:

outputting the selected content stream for provision to a user content playback apparatus;

causing the selected content stream to be sent for storage at a remote device;

causing storage of the selected content stream at a local storage device;

causing receipt of the selected content stream from a remote storage device for provision to a user via content playback apparatus;

causing receipt of the selected content stream from a local storage device for provision to a user via content playback apparatus; and

causing receipt of the selected content stream from the corresponding recording device and causing provision of the selected content stream to a user via content playback apparatus.

3. The method of any preceding claim, wherein each of the plurality of recording devices from which the content streams are derived is associated with a different region of a recording environment and wherein the content stream is selected at a time corresponding to a time during capture of the content stream at which the trackable device was within or was expected to enter the region associated with the recording device from which the selected content stream is derived.

4. The method of claim 1, further comprising at least one of:

controlling the selected recording device to switch to a first mode of operation in which content is captured by the selected recording device; and

outputting content captured by the selected at least one recording device for consumption by a user.

5. The method of claim 4, wherein content captured by each of the plurality of recording devices is associated with a different region of a recording environment and wherein, when the method comprises controlling the selected recording device to switch to a first mode of operation in which content is captured by the selected recording device, the selected recording device is selected such that the trackable device is within, or is expected to enter, the region associated with the selected recording device.

6. The method of any preceding claim, wherein the sensor data is indicative of a state of at least one of: the trackable device and an asset by which the trackable device is carried.

7. The method of claim 6, comprising selecting the recording device or content stream in response to a determination that the sensor data is indicative of the state of the trackable device and/or the asset satisfying a predetermined condition.

8. The method of claim 7, wherein the condition is that the trackable device is expected to switch from a region associated with a first recording device to a region associated with a second recording device.

9. The method of claim 7, wherein the condition is that a characteristic indicated by the sensor data exceeds a predetermined threshold.

10. The method of either of claims 4 and 5, comprising one of:

controlling the selected recording device to switch to the first mode of operation from an inactive mode in which content is not recorded by the recording device;

controlling the selected recording device to switch to the first mode of operation from a second mode of operation in which content is recorded by the recording device with a lower quality than when in the first mode; and

controlling the selected recording device to switch to the first mode of operation in which the recording device is specifically focussed on the trackable device or an asset carrying the trackable device from a second mode of operation in which the recording device is not specifically focussed on the trackable device or the asset.

11. The method of any preceding claim, the plural recording devices being provided in an array within a single recording unit or being part of a collection of dispersed recording units.

12. The method of any preceding claim wherein the selected recording device or content stream is one of a stereoscopic pair of recording devices or content streams, the method further comprising also selecting the other recording device or content stream of the pair.

13. Apparatus configured to perform a method according to any preceding claim.

14. Computer-readable instructions which when executed by computing apparatus cause the computing apparatus to perform a method as claimed in any of claims 1 to 12.

15. Apparatus comprising:

at least one processor; and

at least one memory including computer program code, which when executed by the at least one processor, causes the apparatus:

to select a recording device or a content stream derived therefrom from a plurality of recording devices or corresponding content streams on the basis of directional information derived from receipt of a wireless signal from a trackable device and sensor data derived from at least one sensor associated with the trackable device, the directional information being indicative of a direction towards the trackable device.

16. The apparatus of claim 15, wherein the computer program code, when executed by the at least one processor, causes the apparatus to perform at least one of:

outputting the selected content stream for provision to a user content playback apparatus;

causing the selected content stream to be sent for storage at a remote location;

causing storage of the selected content stream at a local storage device;

causing receipt of the selected content stream from a remote storage device for provision to a user via content playback apparatus;

causing receipt of the selected content stream from a local storage device for provision to a user via content playback apparatus; and

causing receipt of the selected content stream from the corresponding recording device and causing provision of the selected content stream to a user via content playback apparatus.

17. The apparatus of claim 15 or claim 16, wherein each of the plurality of recording devices from which the content streams are derived is associated with a different region of a recording environment and wherein the computer program code, when executed by the at least one processor, causes the apparatus to select the content stream at a time corresponding to a time during capture of the content stream at which the trackable device was within or was expected to enter the region associated with the recording device from which the selected content stream is derived.

18. The apparatus of claim 15, wherein the computer program code, when executed by the at least one processor, causes the apparatus to perform at least one of:

controlling the selected recording device to switch to a first mode of operation in which content is captured by the selected recording device; and

outputting content captured by the selected at least one recording device for consumption by a user.

19. The apparatus of claim 18, wherein content captured by each of the plurality of recording devices is associated with a different region of a recording environment and wherein the computer program code, when executed by the at least one processor, causes the apparatus to control the selected recording device to switch to a first mode of operation in which content is captured by the selected recording device, the selected recording device being selected such that the trackable device is within, or is expected to enter, the region associated with the selected recording device.

20. The apparatus of any of claims 15 to 19, wherein the sensor data is indicative of a state of at least one of: the trackable device and an asset by which the trackable device is carried.

21. The apparatus of claim 20, wherein the computer program code, when executed by the at least one processor, causes the apparatus to select the recording device or content stream in response to a determination that the sensor data is indicative of the state of the trackable device and/or the asset satisfying a predetermined condition.

22. The apparatus of claim 21, wherein the condition is that the trackable device is expected to switch from a region associated with a first recording device to a region associated with a second recording device.

23. The apparatus of claim 21, wherein the condition is that a characteristic indicated by the sensor data exceeds a predetermined threshold.

24. The apparatus of either of claims 18 and 19, wherein the computer program code, when executed by the at least one processor, causes the apparatus to perform one of:

controlling the selected recording device to switch to the first mode of operation from an inactive mode in which content is not recorded by the recording device;

controlling the selected recording device to switch to the first mode of operation from a second mode of operation in which content is recorded by the recording device with a lower quality than when in the first mode; and

controlling the selected recording device to switch to the first mode of operation in which the recording device is specifically focussed on the trackable device or an asset carrying the trackable device from a second mode of operation in which the recording device is not specifically focussed on the trackable device or the asset.

25. The apparatus of any of claims 15 to 24, the plural recording devices being provided in an array within a single recording unit or being part of a collection of dispersed recording units.

26. The apparatus of any of claims 15 to 25 wherein the selected recording device or content stream is one of a stereoscopic pair of recording devices or content streams, wherein the computer program code, when executed by the at least one processor, causes the apparatus also to select the other recording device or content stream of the pair.

27. A computer-readable medium having computer-readable code stored thereon, the computer-readable code, when executed by at least one processor, causing performance of at least:

selecting a recording device or a content stream derived therefrom from a plurality of recording devices or corresponding content streams on the basis of directional information derived from receipt of a wireless signal from a trackable device and sensor data derived from at least one sensor associated with the trackable device, the directional information being indicative of a direction towards the trackable device.

28. Apparatus comprising:

means for selecting a recording device or a content stream derived therefrom from a plurality of recording devices or corresponding content streams on the basis of directional information derived from receipt of a wireless signal from a trackable device and sensor data derived from at least one sensor associated with the trackable device, the directional information being indicative of a direction towards the trackable device.

Description:
Selecting a Recording Device or a Content Stream Derived Therefrom

Field

The specification relates to selecting a recording device or a content stream derived from a recording device.

Background

In the field of audio/video recording and editing, it is often necessary to handle files that are relatively large in terms of data size. A particular issue arises where audio/video content is obtained from an array of recording devices leading to even greater quantities of data. This brings new challenges in relation to managing the large quantities of data in a reliable, efficient and user-friendly manner.

Summary

In a first aspect, this specification describes a method comprising selecting a recording device or a content stream derived therefrom from a plurality of recording devices or corresponding content streams on the basis of directional information derived from receipt of a wireless signal from a trackable device and sensor data derived from at least one sensor associated with the trackable device, the directional information being indicative of a direction towards the trackable device.

The method may further comprise one of: outputting the selected content stream for provision to a user content playback apparatus; causing the selected content stream to be sent for storage at a remote location; causing storage of the selected content stream at a local storage device; causing receipt of the selected content stream from a remote storage device for provision to a user via content playback apparatus; causing receipt of the selected content stream from a local storage device for provision to a user via content playback apparatus; and causing receipt of the selected content stream from the corresponding recording device and causing provision of the selected content stream to a user via content playback apparatus.

Each of the plurality of recording devices from which the content streams are derived may be associated with a different region of a recording environment and the content stream may be selected at a time corresponding to a time during capture of the content stream at which the trackable device was within or was expected to enter the region associated with the recording device from which the selected content stream is derived.

The method may comprise at least one of: controlling the selected recording device to switch to a first mode of operation in which content is captured by the selected recording device; and outputting content captured by the selected at least one recording device for consumption by a user. Content captured by each of the plurality of recording devices may be associated with a different region of a recording environment and, when the method comprises controlling the selected recording device to switch to a first mode of operation in which content is captured by the selected recording device, the selected recording device may be selected such that the trackable device is within, or is expected to enter, the region associated with the selected recording device.

The sensor data may be indicative of a state of at least one of: the trackable device and an asset by which the trackable device is carried. The method may further comprise selecting the recording device or content stream in response to a determination that the sensor data is indicative of the state of the trackable device and/or the asset satisfying a predetermined condition. The condition may be that the trackable device is expected to switch from a region associated with a first recording device to a region associated with a second recording device. Alternatively, the condition may be that a characteristic indicated by the sensor data exceeds a predetermined threshold.

In examples in which the method comprises controlling the selected recording device, the method may comprise one of: controlling the selected recording device to switch to the first mode of operation from an inactive mode in which content is not recorded by the recording device; controlling the selected recording device to switch to the first mode of operation from a second mode of operation in which content is recorded by the recording device with a lower quality than when in the first mode; and controlling the selected recording device to switch to the first mode of operation in which the recording device is specifically focussed on the trackable device or an asset carrying the trackable device from a second mode of operation in which the recording device is not specifically focussed on the trackable device or the asset.

The plural recording devices may be provided in an array within a single recording unit or may be part of a collection of dispersed recording units.

The selected recording device or content stream may be one of a stereoscopic pair of recording devices or content streams and the method may further comprise also selecting the other recording device or content stream of the pair.

In a second aspect, this specification describes apparatus configured to perform any method as described with reference to the first aspect.

In a third aspect, this specification describes computer-readable instructions which, when executed by computing apparatus, cause the computing apparatus to perform any method as described with reference to the first aspect.

In a fourth aspect, this specification describes apparatus comprising: at least one processor; and at least one memory including computer program code, which when executed by the at least one processor, causes the apparatus to select a recording device or a content stream derived therefrom from a plurality of recording devices or corresponding content streams on the basis of directional information derived from receipt of a wireless signal from a trackable device and sensor data derived from at least one sensor associated with the trackable device, the directional information being indicative of a direction towards the trackable device.

The computer program code, when executed by the at least one processor, may cause the apparatus to perform at least one of: outputting the selected content stream for provision to a user content playback apparatus; causing the selected content stream to be sent for storage at a remote location; causing storage of the selected content stream at a local storage device; causing receipt of the selected content stream from a remote storage device for provision to a user via content playback apparatus; causing receipt of the selected content stream from a local storage device for provision to a user via content playback apparatus; and causing receipt of the selected content stream from the corresponding recording device and causing provision of the selected content stream to a user via content playback apparatus.

Each of the plurality of recording devices from which the content streams are derived may be associated with a different region of a recording environment and the computer program code, when executed by the at least one processor, may cause the apparatus to select the content stream at a time corresponding to a time during capture of the content stream at which the trackable device was within or was expected to enter the region associated with the recording device from which the selected content stream is derived.

In some examples, the computer program code, when executed by the at least one processor, may cause the apparatus to perform at least one of: controlling the selected recording device to switch to a first mode of operation in which content is captured by the selected recording device; and outputting content captured by the selected at least one recording device for consumption by a user. In such examples, content captured by each of the plurality of recording devices may be associated with a different region of a recording environment and the computer program code, when executed by the at least one processor, may cause the apparatus to control the selected recording device to switch to a first mode of operation in which content is captured by the selected recording device, the selected recording device being selected such that the trackable device is within, or is expected to enter, the region associated with the selected recording device.

The sensor data may be indicative of a state of at least one of: the trackable device and an asset by which the trackable device is carried. The computer program code, when executed by the at least one processor, may cause the apparatus to select the recording device or content stream in response to a determination that the sensor data is indicative of the state of the trackable device and/or the asset satisfying a predetermined condition. The condition may be that the trackable device is expected to switch from a region associated with a first recording device to a region associated with a second recording device. The condition may be that a characteristic indicated by the sensor data exceeds a predetermined threshold.

In examples in which the selected recording device is controlled, the computer program code, when executed by the at least one processor, may cause the apparatus to perform one of: controlling the selected recording device to switch to the first mode of operation from an inactive mode in which content is not recorded by the recording device; controlling the selected recording device to switch to the first mode of operation from a second mode of operation in which content is recorded by the recording device with a lower quality than when in the first mode; and controlling the selected recording device to switch to the first mode of operation in which the recording device is specifically focussed on the trackable device or an asset carrying the trackable device from a second mode of operation in which the recording device is not specifically focussed on the trackable device or the asset.

The plural recording devices may be provided in an array within a single recording unit or may be part of a collection of dispersed recording units.

The selected recording device or content stream may be one of a stereoscopic pair of recording devices or content streams, and the computer program code, when executed by the at least one processor, may cause the apparatus also to select the other recording device or content stream of the pair.

In a fifth aspect, this specification describes a computer-readable medium having computer-readable code stored thereon, the computer-readable code, when executed by at least one processor, causing performance of at least: selecting a recording device or a content stream derived therefrom from a plurality of recording devices or corresponding content streams on the basis of directional information derived from receipt of a wireless signal from a trackable device and sensor data derived from at least one sensor associated with the trackable device, the directional information being indicative of a direction towards the trackable device. The computer-readable code stored on the medium of the fifth aspect may further cause performance of any of the operations described with reference to the method of the first aspect.

In a sixth aspect, this specification describes apparatus comprising means for selecting a recording device or a content stream derived therefrom from a plurality of recording devices or corresponding content streams on the basis of directional information derived from receipt of a wireless signal from a trackable device and sensor data derived from at least one sensor associated with the trackable device, the directional information being indicative of a direction towards the trackable device.

The apparatus of the sixth aspect may further comprise means for causing performance of any of the operations described with reference to the method of the first aspect.

Brief Description of the Drawings

For a more complete understanding of the methods, apparatuses and computer-readable instructions described herein, reference is now made to the following descriptions taken in connection with the accompanying drawings in which:

Figures 1 and 2 illustrate examples of a recording environment including recording apparatus comprising a plurality of recording devices;

Figure 3 is a schematic diagram illustrating the recording apparatuses of Figures 1 and 2 and an editing/replay apparatus including recording device selection apparatus;

Figure 4 is a flow chart illustrating various operations which may be performed by the recording device selection apparatus illustrated in Figure 3;

Figure 5 is a schematic diagram illustrating the recording apparatuses of Figures 1 and 2 and an editing/replay apparatus in which the recording device selection apparatus is part of the recording apparatus;

Figure 6 is a flow chart illustrating various operations which may be performed by the recording device selection apparatus illustrated in Figure 5;

Figure 7 is a schematic diagram of an example configuration of the recording device selection apparatus;

Figure 8 is a schematic diagram of an example configuration of the locator apparatus;

Figure 9 is a schematic diagram of an example configuration of the recording control apparatus;

Figure 10 is a schematic diagram of an example configuration of one of the recording devices;

Figure 11 is a schematic diagram of an example configuration of one of the trackable devices;

Figure 12 is a schematic diagram of an example configuration of a type of recording apparatus; and

Figure 13 is a schematic diagram of an example configuration of another type of recording apparatus.

Detailed Description

In the description and drawings, like reference numerals may refer to like elements throughout.

Figures 1 and 2 illustrate examples of recording environments including recording apparatus 1, 2 for capturing content within the environment. The recording apparatus 1, 2 comprises a plurality of recording devices 11 and recording control apparatus 12 for controlling operation of the plurality of recording devices 11. The recording apparatus 1, 2 includes at least one locator apparatus 13 for receiving wireless signals from at least one trackable device 10 within the environment and for determining and/or enabling determination of directional information indicating a bearing towards the trackable device 10. For reasons that will be clear to the skilled person from the discussion below, the relative locations and orientations of the recording devices 11 and locator apparatus 13 may be fixed.

While referred to as a recording apparatus 1, 2, the apparatus may be configured to capture and relay one or more live content streams to a playback/editing apparatus, in addition to recording or capturing content for storage at a storage medium. In examples in which the content is stored, it can be stored at the recording apparatus 1 itself, at the content playback/editing apparatus or at remote server apparatus. The playback/editing apparatus (which may be simply referred to as playback or replay apparatus) is configured to replay captured content for consumption by a viewer. In some instances, the playback apparatus may additionally include editing functionality for enabling the captured content to be edited. The recording devices 11 may be configured to capture streams of video and/or audio content. "Content stream" as used herein may refer to a stream of visual data (e.g. video), a stream of audio data, and/or a stream of audio-visual (AV) data. However, for simplicity, the Figures will be discussed primarily with reference to video content streams, although it will be appreciated that the concepts and principles discussed may equally apply to audio content and/or AV content streams.

As can be seen from Figures 1 and 2, the recording apparatus 1, 2 may be configured such that the content captured by each of the plurality of recording devices 11 is associated with a different region of a recording environment. Put another way, the recording apparatus 1, 2 is configured such that each of the recording devices 11 has a different "field of view" within the environment. The plurality of recording devices 11 which make up the recording apparatus 1, 2 may, as shown in the example of Figure 1, be provided in a recording device array 1A (for instance, as part of a single self-contained recording unit).

Alternatively, as shown in the example of Figure 2, the recording devices 11 which make up the recording apparatus 2 may be provided as plural separate recording units 2A, 2B, 2C, each including one recording device 11 (or, in some instances, a pair of stereoscopic recording devices). In other examples, the recording devices 11 may be provided as plural separate recording device array units or may be a combination of one or more recording device array units and one or more recording device units comprising only one recording device or only one stereoscopic pair of recording devices.

Referring now specifically to Figure 1, in this example the recording apparatus 1 comprises plural recording devices 11 arranged in a recording device array around a single locator apparatus 13, which forms part of the recording control apparatus 12. Each of the recording devices 11 is arranged with its field of view 16 (each represented by two broken lines extending from the respective recording device) in a different direction. In this specific example, the fields of view 16 of the plural recording devices (in this case, first to fourth recording devices 11-1, 11-2, 11-3, 11-4) overlap such that the recording apparatus 1 provides 360-degree coverage within the recording environment.

An array of recording devices which provides such coverage may be referred to as a circular array of recording devices. Although not shown in Figure 1, the plurality of devices may be arranged in a spherical array which provides 360-degree coverage in terms of both elevation and azimuth, i.e. across an entire sphere, which may be termed a video sphere. Such a recording apparatus is illustrated in, and briefly described with reference to, Figure 13. It should also be borne in mind that in alternative embodiments, the array may comprise recording devices covering a hemispherical area or indeed only a section of a spherical area. In the example of Figure 1, the locator apparatus 13 is mounted in the centre of the array of recording devices 11. However, in other examples, the locator apparatus 13 may be a separate unit or system mounted elsewhere. As will be discussed in more detail later, the data received from the locator apparatus 13 is used to select a partial view of the whole recording.
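By way of illustration only (the application does not specify this mapping), the following Python sketch shows one way a bearing reported by the centrally mounted locator apparatus 13 might be used to select a partial view of an equirectangular 360-degree recording; the panorama width, view width and zero-bearing alignment are assumptions introduced here, not details from the application.

    def viewport_for_bearing(bearing_deg, pano_width_px, view_width_px):
        """Map a bearing (degrees, 0-360) to a horizontal pixel window of an
        equirectangular panorama; column 0 is assumed to lie at bearing 0."""
        centre = int((bearing_deg % 360.0) / 360.0 * pano_width_px)
        left = (centre - view_width_px // 2) % pano_width_px
        # The window may wrap past the right-hand edge of the panorama.
        right = (left + view_width_px) % pano_width_px
        return left, right

    # Example: a 3840-pixel panorama, a 960-pixel partial view, bearing 90 degrees.
    print(viewport_for_bearing(90.0, pano_width_px=3840, view_width_px=960))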

In some examples, one of the recording devices 11 of the array of Figure 1 may be a main/default recording device which is intended to follow a main object/asset of interest, while the other recording devices may record the surroundings and other objects.

Alternatively, there may be no main/default recording device and instead each recording device may have similar duties.

In contrast to Figure 1, in the example of Figure 2, the recording apparatus 2 comprises plural separate recording units 2A, 2B, 2C each including a single recording device 11. Each of the units 2A, 2B, 2C has separate recording control apparatus 12 including a locator apparatus 13. Each of the recording units 2A, 2B, 2C may be located in a physically separate location. The recording units may be connected to a central unit or server (see e.g. gateway 36 or server 25 in Figure 3B) to allow online control and also forwarding of captured content. The central unit or server may be configured to allow remote commands for causing, for instance, tracking/recording of a particular asset of interest. Alternatively, the recording units may operate independently, in which case the modules may include an accurate clock to facilitate the collation and synchronisation of the content from the different units.

As with the example of Figure 1, in Figure 2 each of the recording devices 11 is arranged with its field of view 16 in a different direction. In this specific example, the fields of view 16 of the plural recording devices are arranged so as to capture a different region of the recording environment, which in this example is a part of a motor racing track (particularly a hairpin turn). A first recording device 11-1 is arranged such that its field of view includes a portion of the track entering the turn, a second recording device 11-2 is arranged such that its field of view covers the turn itself and a third recording device 11-3 is arranged such that its field of view covers a portion of the track leaving the turn.

In the example of Figure 1, a plurality of trackable devices 10 are located within the recording environment, each carried by an asset (in this example, a person). In the example of Figure 2, a single asset (in this example, a vehicle) including a trackable device 10A is shown within the recording environment. However, the recording apparatus 1 of Figure 1 may be employed in an environment having a single trackable device 10 and the recording apparatus 2 of Figure 2 may be employed in an environment having plural trackable devices 10. In both examples, the trackable devices 10 are configured to transmit a wireless signal, for reception by the one or more locator apparatus 13, which enables the locator apparatus 13 to determine directional information indicative of a direction or bearing from the recipient locator apparatus 13 to the trackable device 10. The wireless signal may be made up of one or more positioning packets for enabling an angle-of-arrival (AoA) of the packet to be determined.
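The application leaves the derivation of the angle of arrival to known HAIP techniques (discussed below). Purely as a hedged sketch of the underlying geometry, a two-antenna receiver could convert a measured inter-antenna phase difference into a bearing as follows; the narrowband far-field assumption, the antenna spacing and the example values are illustrative assumptions, not details from the application.

    import math

    def estimate_aoa(phase_delta_rad, antenna_spacing_m, wavelength_m):
        """Estimate angle of arrival (radians from broadside) for a two-antenna
        array: a path difference d*sin(theta) yields a phase difference of
        2*pi*d*sin(theta)/lambda."""
        sin_theta = phase_delta_rad * wavelength_m / (2 * math.pi * antenna_spacing_m)
        # Clamp against measurement noise before inverting.
        sin_theta = max(-1.0, min(1.0, sin_theta))
        return math.asin(sin_theta)

    # Example: 2.4 GHz BLE (wavelength ~0.125 m), half-wavelength antenna spacing.
    theta = estimate_aoa(1.0, antenna_spacing_m=0.0625, wavelength_m=0.125)
    print(round(math.degrees(theta), 1), "degrees from broadside")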

The format of the positioning packets and determination of the directional information by the locator apparatus 13 may be in accordance with the High Accuracy Indoor Positioning (HAIP) solution, for example as described at http://www.in-location-alliance.com. HAIP, as developed by Nokia, is known in the art. Indeed, it is mentioned, and is described in various levels of detail, in (among other publications) the following published PCT patent applications: WO2014087196A, WO2013179195A, WO2014087198A, WO2015013904A, WO2014107869A, WO2014108753A, WO2014087199A and WO2014087197A. In view of these and other disclosures, the fundamental principles utilised by the trackable devices 10 and locator apparatuses 13 to determine the directional information of incoming packets are not described in a great deal of detail in this specification.

The trackable devices 10 may be part of a more complex device, such as a smart watch, a smart phone or even an on-board computer of a vehicle (such as the car of Figure 2). Alternatively, the trackable device 10 may be a simple "tag" having very limited functionality.

The locator apparatus 13 of the recording apparatus 1, 2 and the trackable devices 10 may be configured to operate using any suitable type of wireless transmission/reception technology. Suitable types of technology include, but are not limited to, Bluetooth Basic Rate/Enhanced Data Rate (BR/EDR) and Bluetooth Low Energy (BLE). Bluetooth Low Energy is a relatively new wireless communication technology published by the Bluetooth SIG as a component of Bluetooth Core Specification Version 4.0. Other types of suitable technology include, for example, technologies based on IEEE 802.11 and IEEE 802.15.4. The use of BLE may be particularly useful due to its relatively low energy consumption and because most mobile phones and other portable electronic devices will be capable of communicating using BLE technology.

Whilst embodiments are described herein primarily with reference to BLE messages and HAIP systems, alternative low-power radio technologies (such as IEEE 802.15.4) and positioning schemes may be used.

Although not visible in Figures 1 and 2, each of the trackable devices 10 comprises or is associated with at least one sensor for outputting sensor data from which a state of the trackable device 10 and/or an asset associated with the trackable device 10 can be determined. For instance, the sensor may be configured to output data (e.g. acceleration data) indicative of movement of the trackable device 10 and/or the asset. Such sensors may be referred to as movement sensors and include but are not limited to accelerometers and magnetometers. Other examples of sensors which may be associated with the trackable devices 10 include temperature sensors, heart-rate sensors, light sensors and sound sensors. The sensor data may include information for identification of a user of the trackable device 10, and for determining whether the trackable device 10 is attached to the user.

The one or more sensors may be part of the trackable device 10 or may be external to the trackable device 10. For instance, in the example of Figure 1, a movement sensor (such as an accelerometer) may form part of the trackable device 10, whereas a heart-rate monitor may be external to the trackable device 10. In the example of Figure 2, a movement sensor may be part of the trackable device or may be part of the vehicle. Sensors which are external to the trackable device may be communicatively coupled to the trackable devices such that data output by the sensor is received by the trackable device 10. Alternatively, they may include their own interface for communication of the sensor data to the recording apparatus 1, 2 and/or playback apparatus.

The trackable devices 10 may be configured to transmit the sensor data to the locator device 13 as part of the wireless signal (carried within the positioning packets).

Alternatively, the trackable devices 10 or the separate sensor devices may send the sensor data as part of separate signals. In such instances, the recording control apparatus 12 and/or the locator apparatus 13 may include at least one additional radio interface for receiving the signal carrying the sensor data. The received sensor data may then be passed to the selector 14 for use in selecting the recording device. In other examples, the sensor data may be stored locally (e.g. by the trackable device or the external device which includes the sensor) and may be provided to the selector 14 by some other means. Such an approach may be applicable only to selection, by the playback apparatus, of content already captured by the recording devices, and not to real-time selection at the recording apparatus. Regardless of how the sensor data is provided, the recording control apparatus(es) 12 may, using their locator apparatus 13, first collate information (e.g. device IDs) about the trackable devices 10 in the recording environment. As such, when the sensor data is subsequently provided it can be linked in some way with identified trackable devices 10 in the recording environment.

After the trackable devices in the area have been identified, the recording control apparatus(es) 12 may begin processing sensor data from sensors associated with the trackable devices 10.
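As a minimal sketch of this two-step flow (the class and method names are illustrative assumptions, not taken from the application), the control apparatus might first register the device IDs reported by the locator apparatus and only then accept sensor samples for those devices:

    class TrackableRegistry:
        """Collate identified trackable devices, then link incoming sensor data."""

        def __init__(self):
            self.known = set()
            self.samples = {}

        def register(self, device_id):
            # Called once the locator apparatus has identified a device.
            self.known.add(device_id)

        def add_sensor_sample(self, device_id, timestamp, sample):
            # Sensor data from unidentified devices is ignored.
            if device_id not in self.known:
                return False
            self.samples.setdefault(device_id, []).append((timestamp, sample))
            return True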

The sensor data may have time information (e.g. one or more timestamps) associated with it. This may enable the selector 14 to accurately associate portions of the sensor data with the content captured by the recording devices at the same time. The time information may be applied to the sensor data by the device in which the sensor is provided (e.g. the trackable device) or by the locator apparatus 13 upon receiving the sensor data as part of the wireless signal.
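For instance (a sketch under the assumption that both the sensor samples and the captured frames carry comparable timestamps; none of these names come from the application), the sensor value relevant to a frame captured at time t could be found by a nearest-timestamp lookup:

    import bisect

    def sample_at(timestamps, values, t):
        """Return the value whose timestamp (sorted ascending) is closest to t."""
        i = bisect.bisect_left(timestamps, t)
        if i == 0:
            return values[0]
        if i == len(timestamps):
            return values[-1]
        # Pick whichever neighbouring sample is nearer in time.
        return values[i] if (timestamps[i] - t) < (t - timestamps[i - 1]) else values[i - 1]

    print(sample_at([0.0, 0.5, 1.0, 1.5], [3.2, 3.9, 4.4, 4.1], t=1.2))  # 4.4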

The directional information may also have time information associated with it. This may be provided by the locator apparatus 13 based on a time of receipt of the positioning packets by the locator apparatus 13. As with the sensor data, this time information may enable the selector 14 to accurately associate directional data with content captured by the recording devices 11 at the same time.

When the captured content is transferred (for instance, via Wi-Fi) for playback and/or storage, it may be transferred along with the sensor data and directional data. The sensor data and directional data may be stored in separate files or in a single file. One or both of the sensor data and the directional data may, in some examples, be stored in the same file as the captured content. In some examples, the captured content may be stored as separate files, whereby each file contains the output of each respective recording device 11.

Figures 1 and 2 also show recording device selection apparatus 14. As will be discussed later, this may form part of the recording apparatus 1 or may form part of the content playback apparatus. The recording device selection apparatus 14 (which may also be referred to as "the selector") is configured to select a recording device 11, or a content stream derived from a particular recording device, from the plurality of recording devices or corresponding content streams on the basis of directional information derived from receipt of a wireless signal from one of the trackable devices 10 and sensor data derived from at least one sensor associated with the trackable device 10. To achieve selection of the recording devices 11 or content stream, the selector 14 may store or otherwise have access to data defining a direction (or range of directions) associated with each of the recording devices 11 (e.g. its field of view).

The selector 14 may be configured to select the recording device 11 or content stream in response to a determination that sensor data from a particular trackable device is indicative of the state of the trackable device and/or the asset satisfying, or being expected to satisfy, a predetermined condition. For instance, the predetermined condition may be absolute (e.g. that a characteristic indicated by the sensor data - velocity, heart-rate, temperature, noise level, light level - is above or below a certain threshold or is within a certain range) or may be relative to the sensor data output by sensors associated with other trackable devices 10 in the recording environment (e.g. the characteristic is the highest or lowest).

The selector 14 may be located at any suitable location. For instance, in some examples, it is located on the recording side of the system where it may control the recording devices and/or select content streams derived therefrom in real-time. In other examples, the selector 14 may be located away from the recording side of the system, for instance on the playback side of the system or even at a remote server (such as a cloud server). Where the selector 14 is located away from the recording side of the system, the content from each of the recording devices may be captured and stored/transmitted in a usual way.
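Both kinds of condition can be expressed compactly. The sketch below (the names and threshold semantics are illustrative assumptions) checks an absolute threshold when one is given and otherwise applies the relative condition of being the highest reading among all tracked devices:

    def satisfies_condition(device_id, readings, threshold=None):
        """readings maps device id -> latest scalar characteristic
        (e.g. speed, heart-rate). Returns True if the predetermined
        condition is met for device_id."""
        value = readings[device_id]
        if threshold is not None:
            return value > threshold            # absolute condition
        return value == max(readings.values())  # relative condition

    print(satisfies_condition("10A", {"10A": 5.2, "10B": 1.1}, threshold=3.0))  # True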

As is discussed in more detail below, the selector 14 may perform at least one of the following in respect of a selected content stream (or a content stream derived from a selected recording device): output the content stream for provision to a user via content playback apparatus (which may be referred to as replay apparatus); send the content stream for storage at a remote location (e.g. a cloud server); store the content stream at a local storage device (e.g. at a storage device forming part of the recording side of the system); receive the content stream from a remote storage device (e.g. at a storage device forming part of the recording side of the system or a cloud server) for provision to a user via content playback apparatus; receive the content stream from a local storage device for provision to a user via content playback apparatus (for instance, when the content stream is already stored on the playback side of the system); and receive the content stream from the corresponding recording device and cause provision of the content stream to a user via content playback apparatus (for instance, when the content is being reviewed in (near) real-time).

By automatically selecting a content stream in respect of which to perform an operation (e.g. playback, storage, retrieval and/or transmission), various benefits may be achieved. For instance, by transmitting or storing a selected content stream or content derived from a selected recording device (while not storing and transmitting the other content streams, or storing and transmitting the other content streams at a lower resolution), bandwidth and/or memory may be conserved. By automatically playing back the selected content stream or content derived from a selected recording device, the user is not required to search through all the content streams to identify a stream of interest.

Although it is generally described that one content stream or recording device is selected at any one time, it will be appreciated that in some implementations plural content streams or recording devices may be selected simultaneously. For instance, the recording apparatus may comprise stereoscopic pairs of recording devices (see, for instance, the apparatus of Figure 12), in which case both devices of the pair, or content streams derived therefrom, may be selected.

The selector 14 may, as discussed above, be configured to cause the selected content stream or data derived from a selected recording device to be output for consumption by a user. In such examples, the content stream or recording device may be selected on the basis of the sensor data such that, at the time of capture of the portion of the content stream which is being output for consumption, a trackable device 10 for which the sensor data satisfies the condition (which may be referred to as the trackable device of interest) was within or was expected to enter the region/field of view associated with the recording device from which the selected content stream was derived. In this way, the user viewing the content may always be provided with relevant content without having to search through the content from each of the recording devices 11.

In examples (such as those described with reference to Figures 5A, 5B and 6) in which the selector 14 forms part of the recording apparatus 1, 2, the selector 14 may be configured to control the selected recording device 11 to switch to a first mode of operation, in which content is captured by the selected recording device 11, from a second mode of operation. The second mode of operation may be, for instance, an inactive mode in which content is not recorded by the recording device. In other examples, the second mode of operation may be an active mode in which content is recorded by the recording device but at a lower quality (e.g. resolution) than when in the first mode. In yet other examples, the second mode of operation may be an active mode in which content is recorded but in which the recording device 11 may be unfocussed or auto-focussed, whereas in the first mode the recording device may be specifically focussed on the trackable device of interest (or the asset carrying the trackable device).
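A hedged sketch of such mode control is shown below; the mode names and the policy of demoting non-selected devices to a lower-quality mode are assumptions for illustration, not behaviour mandated by the application.

    from enum import Enum, auto

    class Mode(Enum):
        INACTIVE = auto()      # content is not recorded
        LOW_QUALITY = auto()   # second mode: recorded at reduced quality
        FOCUSED = auto()       # first mode: focused on the asset of interest

    class RecordingDevice:
        def __init__(self, device_id):
            self.device_id = device_id
            self.mode = Mode.INACTIVE

    def apply_selection(devices, selected_id):
        """Switch the selected device to the first mode; demote the rest."""
        for dev in devices:
            dev.mode = Mode.FOCUSED if dev.device_id == selected_id else Mode.LOW_QUALITY

    devices = [RecordingDevice(i) for i in ("11-1", "11-2", "11-3", "11-4")]
    apply_selection(devices, selected_id="11-3")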

By selecting and controlling the recording devices 11 in real-time, the amount of data which needs to be stored and/or transmitted may be reduced. For instance, if the recording apparatus 1 is controlled such that only selected recording devices are active (with others being deactivated), the total amount of data is reduced while still ensuring that objects of interest are captured. A similar effect is achieved if the selected recording device captures content with a higher resolution than non-selected recording devices.

Alternatively or additionally, in examples in which the selector 14 is located on the recording side of the system, the selector 14 may be configured to transmit to the playback apparatus only content captured by selected recording devices 11, while the content captured by non-selected recording devices may be stored and transferred to the playback apparatus at a later time. In this way, a user (e.g. a director who is in charge of content capture within the recording environment) is able to view a stream of content from the selected recording devices 11 in (near) real-time, without all the data being transferred during capture of the content. The content from non-selected recording devices 11 may be reviewed and utilised later, for instance during an editing process. In some examples, the content captured by the selected recording device may be stored with a first quality or resolution but may additionally be transmitted in near real-time to the playback apparatus at a lower quality.

Two different scenarios will now be described with reference to the examples of Figures 1 and 2, although it should be appreciated that these are examples only.

In the scenario illustrated by Figure 1, the sensor associated with the trackable devices may be a movement sensor which outputs data indicative of the movement of the device 10. Upon receipt of the sensor data, the selector 14 uses this along with the directional data derived from receipt of the positioning packets to select the recording device 11. For instance, the selector 14 may be configured such that only trackable devices 10 which are moving are of interest. As such, the selector 14 may be configured to respond to a determination, based on the sensor data, that a particular device 10 is moving with a characteristic (e.g. velocity or acceleration) which satisfies a predefined condition (e.g. is above a particular threshold or is the highest of all devices in the environment), by selecting the recording device 11 having the field of view within which that trackable device 10 is located. Selection of the recording device may include identifying the trackable device 10 whose associated sensor data satisfies the criterion, determining from the directional information a direction to the identified trackable device 10 and comparing the direction with the direction (or range of directions) associated with each recording device 11. The selected recording device is that having the direction or range of directions most closely corresponding to the direction from the trackable device to the locator device 13.

In the example of Figure 1, a first trackable device 10A carried by a first person suddenly starts to move with a characteristic which satisfies a pre-defined condition. The selector 14 identifies this based on the sensor data and then identifies the fourth recording device 11-4 based on the directional information associated with the first trackable device 10A. The identified (fourth) recording device 11-4 is then selected for real-time control and/or such that its content is played for consumption by a user.
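The comparison step described above might look as follows (a sketch; the field-of-view table and device labels are illustrative assumptions loosely modelled on Figure 1):

    def select_recording_device(bearing_deg, fields_of_view):
        """Return the id of the device whose field of view (start_deg, end_deg,
        possibly wrapping through 0/360) contains the bearing to the trackable
        device of interest."""
        for device_id, (start, end) in fields_of_view.items():
            if start <= end:
                inside = start <= bearing_deg <= end
            else:  # range wraps through 0/360 degrees
                inside = bearing_deg >= start or bearing_deg <= end
            if inside:
                return device_id
        return None

    # Four devices providing 360-degree coverage, as in Figure 1.
    fov = {"11-1": (315, 45), "11-2": (45, 135), "11-3": (135, 225), "11-4": (225, 315)}
    print(select_recording_device(200.0, fov))  # 11-3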

The selector 14 may be configured to continue to select recording devices 11 such that the movement of the identified trackable device 10A is followed (in real-time by controlling the recording devices and/or during playback). For instance, in Figure 1, the person carrying the first trackable device 10A moves from the field of view of the fourth recording device 11-4 to the field of view of the third recording device 11-3. When this is detected by the selector 14 on the basis of the directional data associated with the first trackable device 10A, the selector 14 selects the third recording device for control and/or playback. The first trackable device 10A may continue to be followed until its movement no longer satisfies the condition (e.g. it is no longer moving with a speed/acceleration above a threshold or it is no longer the fastest moving device) or until a predetermined duration has expired. In the example of Figure 1, the person who is carrying the first trackable device 10A stops and another person carrying a second trackable device 10B starts moving with a characteristic that satisfies the condition. In response to this, the selector 14 selects the recording device 11 within whose field of view the trackable device 10B is located (in this case the second recording device 11-2). Subsequently, as the second trackable device 10B moves from the field of view of the second recording device 11-2 into that of the first recording device 11-1, the selector selects the first recording device 11-1.

To put it another way, in the scenario illustrated in Figure 1, the selector 14 may use the sensor data to identify a trackable device of interest and may use the directional information to determine the recording device in whose field of view the asset is currently present.

In the above described example, if the system were configured such that the content from the selected recording device was output for viewing (either in real time or at a later time), the first content displayed would be that of the fourth recording device 11-4, followed by that of the third recording device 11-3, followed by that of the second recording device 11-2 and, finally, that of the first recording device 11-1. As such, the user viewing the content would automatically be seeing content showing assets of interest, without the need for any manual control or selection.

In the scenario illustrated by Figure 2, the sensor associated with the trackable device 10A is a movement sensor which outputs data indicative of the movement (e.g. speed or velocity) of the vehicle with which the trackable device 10A is associated. The speeds involved in this scenario are much greater than those associated with the scenario of Figure 1. As such, instead of simply reacting to a determination, based on the derived directional information, that a trackable device 10A of interest has passed from a field of view of a first recording device 11-1 to that of a second recording device 11-2, it may be beneficial to anticipate when the asset is likely to transition between fields of view.

Ordinarily, e.g. at lower speeds, this may be possible by analysing the changing bearing of the trackable device with respect to the locator apparatus 13-1. However, in this instance, the vehicle (and so also the trackable device 10A) is approaching the first recording device (and so also the first locator device 13-1) head-on. As such, the bearing from the trackable device 10A to the first locator apparatus 13-1 is not changing (see for instance the positions of the vehicle at times t1, t2 and t3). By the time the bearing does start to change, at time t4, it may be too late to recognise this and select the second recording device 11-2, because by the time the selection has taken place, the fast-moving vehicle may have already moved out of the field of view of the second recording device 11-2 into the field of view of the third recording device 11-3. As such, the selector 14 utilises the movement sensor data of the trackable device 10A to anticipate when the transition between fields of view is likely to occur and so is able to select the next recording device in sufficient time such that the asset is visible in the content captured by the next recording device. For instance, the selector 14 may be configured to recognise that, when the trackable device 10A is within the first field of view but begins to decelerate (in preparation for the corner), a transition is expected. As such, when the velocity or acceleration reaches a certain level, the selector 14 may respond by selecting the recording device 11-2 having the field of view into which the asset is expected to enter. In this way, at a time t5 at which the vehicle is in the field of view of the second recording device 11-2, that recording device is already selected such that the content captured by the selected recording device 11-2 and subsequently played back includes the asset of interest.

The selector 14 may use the same approach for switching to selection of the third recording device 11-3. For instance, the selector 14 may recognise that trackable device is currently within the field of view of the second recording device 11-2 (based on the directional data) and may estimate based on the movement data (e.g. velocity and/or acceleration) when it will transition to the next field of view and may select the third recording device 11-3 accordingly.

To put it another way, in the scenario illustrated in Figure 2, the selector 14 may use the directional information to determine the recording device in whose field of view the asset is currently (or at a particular time) located, and may use the sensor data to determine when the asset is expected to transition between fields of view, selecting the next recording device accordingly. The actual time of selection of the next recording device may be determined based on the movement of the trackable device 10 as derived from the sensor data. In this way, the selector 14 may ensure that, at all times, content including the asset of interest is being played back, without any manual intervention.
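A hedged sketch of this anticipatory switching is given below; the deceleration threshold, the idea of an ordered list of devices along the track, and all names are assumptions introduced for illustration only.

    def anticipate_next_device(current_id, speed_mps, accel_mps2, track_order,
                               decel_threshold=-3.0):
        """Pre-select the next device along the track when hard braking suggests
        an imminent field-of-view transition; otherwise keep the current one."""
        if accel_mps2 <= decel_threshold and speed_mps > 0:
            i = track_order.index(current_id)
            if i + 1 < len(track_order):
                return track_order[i + 1]
        return current_id

    print(anticipate_next_device("11-1", speed_mps=60.0, accel_mps2=-5.0,
                                 track_order=["11-1", "11-2", "11-3"]))  # 11-2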

Although, in the scenario of Figure 2, the vehicle is approaching the first recording device 11-1, using sensor data to anticipate when a transition between fields of view may occur may also provide benefits when the trackable device 10 is moving across (e.g. perpendicular to) the field of view, particularly at high speeds. As will of course be appreciated, in some instances, sensor data might be used both to determine an asset of interest and to determine when a transition to a new field of view is likely to occur.

Figures 3A and 3B are schematic diagrams illustrating the recording apparatuses 1, 2 of Figures 1 and 2 respectively in a content capture/playback system along with a playback/editing apparatus 30 which includes recording device selection apparatus 14.

On the recording side, Figure 3A depicts a recording apparatus 1 such as that shown in Figure 1 (including a recording device array unit 1A). Figure 3B depicts a recording apparatus 2 such as that described with reference to Figure 2 (plural separate content capture units 2A, 2B, 2C). Features described with reference to either of Figures 3A and 3B may be the same in the other Figure unless explicitly stated otherwise. In Figures 3A and 3B, content captured by each of the recording devices 11 is transferred, in some way, to the playback apparatus 30, for instance via a server 25, which may in some examples form part of the playback apparatus 30. The content from each of the recording devices 11 may be transferred to the server and/or playback apparatus 30 separately or may be collated prior to transfer.

In systems similar to that depicted in Figure 3A, the collation may be performed by the single recording control apparatus 12 of the recording device array unit 1A.

In systems similar to that depicted in Figure 3B, the collation may be performed by a gateway device 36, through which the data is routed from the plural recording devices 11 to the server 25 and/or playback apparatus 30. The gateway device 36 may additionally provide control functionality for allowing users to control the recording units 2A, 2B, 2C.

In some examples, the functionality of the gateway device 36 may be performed by one of the recording control apparatuses (e.g. the second recording control apparatus 12-2) of the recording apparatus 2. Put another way, the content and directional data (and sensor data, if applicable) from some of the recording units 2A, 2C may be routed to the server 25 and/or playback apparatus 30 via another of the recording units 2B.

In systems such as those of Figures 3A and 3B, the playback apparatus 30 may be a computing apparatus comprising the selector 14A (recording device selection apparatus). In such systems, the selector 14A may be configured to provide the functionality described in various examples above in reference to Figures 1 and 2 and also as described below with reference to Figure 4. For instance, based on the received sensor data and directional information, the selector 14A may control which content stream (i.e. from which recording device 11) is retrieved from local or remote storage and/or is presented for consumption by the user. Example configurations of the selector 14A are described in more detail towards the end of the specification.

The playback apparatus 30 further comprises an RF transceiver 31 and an RF antenna 32 to enable wireless communication with the recording apparatus 1 and, if applicable, the server apparatus 25. The playback apparatus 30 further includes at least a content output interface 33 for presenting the captured content to the user; the output interface 33 includes a display and/or a speaker. The playback apparatus 30 may further include an input interface 34 for enabling a user to provide inputs to the playback apparatus 30. The input interface 34 may include, by way of non-limiting example, a keyboard and/or a touch sensitive panel, which may be integrated with the display of the output interface 33 to form a touch screen. The input interface 34 and the output interface 33 may together be referred to as a user interface.

A user may view captured content from the recording apparatus 1, 2 as a live stream received from the recording apparatus 1, 2. Alternatively, the video content may be stored at the playback apparatus 30 or the server apparatus 25 before playback.

In addition to the functionality already described, the selector 14A may, when it is part of the playback/editing apparatus 30, act as a controller for controlling the other components of the playback/editing apparatus 30. Alternatively, the playback/editing apparatus 30 may include a separate controller (not shown). The selector 14A (or the separate controller) may, therefore, be configured to control the output interface 33 to present to the user the content captured by the recording apparatus 1, 2, in particular the content captured by the currently selected recording device.

Figure 4 is a flow chart illustrating various operations which may be performed (or caused) by the selector 14A when it is provided as part of the playback apparatus 30.

In operation S4.1, the selector 14A receives the captured content from the recording apparatus 1, 2. This includes content data captured by each of the plural recording devices 11. In operation S4.2, the selector 14A receives the directional information from the recording apparatus 1, 2. Additionally, the selector 14A receives the sensor data from the recording apparatus 1, 2 or from the trackable device (or another device which includes the sensor).

The performance of operations S4.1 and S4.2 by the selector 14A assumes that the selector is providing the control functionality for the playback apparatus 30. If, however, it is not providing such functionality, operations S4.1 and S4.2 may be performed by the separate playback controller and not the selector.

In operation S4.3, the selector 14A causes content from one of the recording devices 11 to be presented to a user via the output interface 33. The presented content may be selected by default or in any other way, for instance based on directional information from a trackable device located within the field of view of the recording device during content capture.

Next, in operation S4.4, the selector 14A determines if the sensor data from a time corresponding to the capture time of the content currently being presented indicates that a state of one of the trackable devices 10 and/or the associated asset satisfies a predetermined condition. Various examples of such conditions were discussed above with reference to Figures 1 and 2, and may include, for instance, a movement characteristic of the asset with which the trackable device 10 is associated being above a threshold or the asset being expected to move out of a current field of view into another. In response to a positive determination (i.e. that the state of the trackable device 10 or asset does satisfy the predetermined condition) in operation S4.4, the selector 14A proceeds to performance of operation S4.5. If a negative determination is reached, the selector returns to operation S4.3.

In operation S4.5, the trackable device 10 having the state satisfying the predetermined condition may be identified by the selector 14A. This may be based on a device ID associated with the sensor data, which identifies the trackable device 10 with which the sensor data is associated. The device ID may be, for instance, included by the trackable device 10 in the positioning packets which carry the sensor data or may be included by the sensor which produces the data.

In operation S4.6, a recording device 11 or corresponding content stream is identified based on the directional information associated with the identified trackable device 10. More specifically, the directional information is compared with stored reference directions defining the field of view of each recording device. The identified recording device is that for which the field of view corresponds with the direction indicated by the directional information.
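
As a non-authoritative illustration of this matching step, the following sketch compares a bearing derived from the directional information against stored azimuth ranges representing each recording device's field of view. The device identifiers and azimuth ranges are illustrative assumptions, not values defined by this specification.

    FIELDS_OF_VIEW = {            # device id -> (min azimuth, max azimuth), degrees
        "11-1": (330.0, 30.0),    # range wrapping through 0 degrees
        "11-2": (30.0, 90.0),
        "11-3": (90.0, 150.0),
    }

    def bearing_in_range(bearing_deg, lo, hi):
        bearing_deg %= 360.0
        if lo <= hi:
            return lo <= bearing_deg < hi
        return bearing_deg >= lo or bearing_deg < hi  # range wraps through 0

    def identify_recording_device(bearing_deg):
        """Return the device whose stored field of view contains the bearing."""
        for device_id, (lo, hi) in FIELDS_OF_VIEW.items():
            if bearing_in_range(bearing_deg, lo, hi):
                return device_id
        return None

    # e.g. identify_recording_device(45.0) -> "11-2"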

In operation S4.7, one of the plurality of recording devices 11/content streams is selected.

The identity of the selected recording device 11/content stream may depend on the condition satisfied in operation S4.4. For instance, if the condition includes a threshold which must be surpassed, the selected content stream might be that having a corresponding recording device which was directed towards the trackable device 10 (i.e. the recording device identified in operation S4.6). However, if the condition is that the trackable device is expected to pass from a current field of view to a next field of view, the selected content stream may be that having the field of view into which the trackable device is expected to pass.

Following selection of the recording device 11/content stream, in operation S4.8, the selector 14A causes the selected content stream to be presented for consumption by the user, for instance via the output interface 33. Subsequently, the selector 14A returns to operation S4.4.
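
The loop below sketches, purely by way of example, how operations S4.4 to S4.8 might fit together on the playback side, reusing the identify_recording_device helper from the previous sketch. The methods evaluate_condition, bearing_for, next_device and present on the selector object are assumed for illustration and are not defined by this specification.

    def playback_selection_loop(selector, samples):
        # samples: time-ordered records carrying a trackable device ID and
        # the sensor values received alongside the captured content.
        for sample in samples:
            condition = selector.evaluate_condition(sample)        # S4.4
            if condition is None:
                continue                    # keep presenting current stream (S4.3)
            trackable_id = sample["device_id"]                     # S4.5
            bearing = selector.bearing_for(trackable_id)           # directional info
            device = identify_recording_device(bearing)            # S4.6
            if condition == "threshold_exceeded":                  # S4.7
                chosen = device             # camera currently covering the asset
            else:                           # "transition_expected"
                chosen = selector.next_device(device)  # camera the asset will enter
            selector.present(chosen)                               # S4.8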

Figures 5A and 5B are schematic diagrams illustrating the recording apparatuses 1, 2 of Figures 1 and 2 respectively in a content capture/playback system along with a playback/editing apparatus 40. The systems depicted in Figures 5A and 5B may be substantially the same as those explained with reference to Figures 3A and 3B except where they are explicitly described as being different.

The main difference between the systems of Figures 5A and 5B and those of Figures 3A and 3B is that, in the systems of Figures 5A and 5B, the recording device selection apparatus 14B is provided on the recording side of the system.

For instance, in Figure 5A, the selector 14B is shown as part of the recording control apparatus 12 of the recording device array apparatus 1. As shown in the example of Figure 5B, the selector 14B may alternatively take the place of the gateway device 36 shown in Figure 3B. However, in some examples the selector 14B may instead be provided as part of the recording control apparatus 12 of one of the separate recording units.

The main difference between the playback apparatuses 40 depicted in Figures 5A and 5B and those depicted in Figures 3A and 3B is that a controller 41 takes the place of the selector 14A. The controller 41 is configured to control the other components (including the transceiver 31 and the input and output interfaces 33, 34) of the playback apparatus 40 so as to provide playback functionality. For instance, the controller 41 may control the transceiver 31 to receive data (including at least content but in some instances also directional information and sensor data) from the recording apparatus 1, 2 or the selector 14B, in some instances via the server apparatus 25.

In systems such as those of Figures 5A and 5B, the selector 14B may be configured to provide the functionality described in various examples above in reference to Figures 1 and 2 and also as described below with reference to Figure 6. For instance, based on the received sensor data and directional information, the selector 14B may control operation of the recording devices 11 and/or may control which content stream is transmitted (e.g. via Wi-Fi) to the playback apparatus 40 for presentation to the user and/or to a storage device for storage.

Figure 6 is a flow chart illustrating various operations which may be performed (or caused) by the selector 14B when it is provided on the recording side of the content recording/playback system.

In operation S6.1, the selector 14B receives content streams from one or more of the recording devices 11. The one or more recording devices 11 which provide content to the selector 14B may be selected by default or in some other way, for instance based on directional information from a trackable device located within the field of view of the recording device during content capture. In other examples, content from each of the recording devices 11 may be received by the selector 14B.

In operation S6.2, the selector 14B receives sensor data and directional information from the recording control apparatus 12 for the trackable devices 10 in the recording environment. This may be received in the stream along with the content from the one or more recording devices 11.

In operation S6.3, the selector 14B determines if the sensor data indicates that a state of one of the trackable devices 10 or an associated asset in the environment satisfies a predetermined condition. Various examples of such conditions were discussed above with reference to Figures 1 and 2, and may include, for instance, a movement characteristic of the asset with which the trackable device 10 is associated being above a threshold or the asset being expected to move out of a current field of view into another. In response to a positive determination (i.e. that the state of the trackable device does satisfy the predetermined condition) in operation S6.3, the selector 14B proceeds to operation S6.4. If a negative determination is reached, the selector 14B returns to operation S6.2.

In operation S6.4, the trackable device 10/asset having the state satisfying the predetermined condition may be identified by the selector 14B. This may be based on a device ID associated with the sensor data, which identifies the trackable device with which the sensor data is associated. The device ID may be, for instance, included by the trackable device 10 in the positioning packets which carry the sensor data or may be included by the sensor which produces the data.

In operation S6.5, a recording device 11 or content stream is identified based on the directional information associated with the identified trackable device 10. More specifically, the directional information is compared with stored reference directions defining the field of view of each recording device. The identified recording device or content stream is that for which the field of view corresponds with the direction indicated by the directional information.

In operation S6.6, one of the plurality of recording devices 11 (or a content stream derived therefrom) is selected. The identity of the selected recording device 11 or content stream may depend on the condition satisfied in operation S6.3. For instance, if the condition includes a threshold which must be surpassed, the selected recording device (or content stream derived therefrom) might be that whose field of view is directed towards the trackable device 10 (i.e. the recording device identified in operation S6.5). However, if the condition is that the trackable device is expected to pass from a current field of view to a next field of view, the selected recording device 11 or content stream may be that having the field of view into which the trackable device 10 is expected to pass.

Following selection of the recording device 11 or content stream, in operation S6.7, the selector 14B may control the selected recording device 11. For instance, the selected recording device may be caused to switch to a first mode of operation from a second mode of operation. The first and second modes may be as described above with reference to Figures 1 and 2, for instance an active mode and an inactive mode. As will be appreciated, in some examples, operation S6.7 may be omitted.

In operation S6.8, the selector 14B causes storage, transmission and/or provision to the playback apparatus 40 of the selected content stream or the content stream from the selected recording device 11. In some examples, in addition to causing transfer to the playback apparatus 40 of the selected content stream/content from the selected recording device 11, the selected content stream/content from the selected recording device 11 may be stored (e.g. at the server apparatus 25) along with content streams from other recording devices 11 for later playback and editing. In this way, a user of the playback apparatus 40 is able to review the selected content stream/content from the selected recording device (which is likely to be of interest) in near-real time while also having access to the content derived from the other recording devices for use in a later editing process.
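
A minimal sketch of operations S6.7 and S6.8 follows, assuming hypothetical set_mode, current_stream, store and send methods: the selected recording device is switched to its first (active) mode and its stream is forwarded to the playback apparatus, while every stream is archived for the later editing process described above.

    def apply_selection(devices, selected_id, server, playback):
        # devices: mapping of device id -> recording device handle (assumed API)
        for device_id, device in devices.items():
            if device_id == selected_id:
                device.set_mode("active")    # S6.7: switch to first mode of operation
            stream = device.current_stream()
            server.store(device_id, stream)  # S6.8: archive every stream for editing
            if device_id == selected_id:
                playback.send(stream)        # S6.8: near-real-time review of selection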

After causing storage, transmission and/or provision to the playback apparatus 40 of the content from the selected recording device 11/the selected content stream, the selector 14B may return to operation S6.2.

Figure 7 is a schematic illustration of an example configuration of the selector 14A, 14B. The selector 14A, 14B comprises a controller 140 for processing incoming data, for instance the received directional data and sensor data. The controller 140 also provides the functionality described above with reference to Figures 1 to 6, including selection of one or more of the recording devices 11. The selector 14A, 14B further comprises one or more input interfaces 141 for receiving data from the locator apparatus 13. The data may be received from the locator apparatus 13 directly (e.g. in the recording apparatus 1 of Figure 5A) or indirectly (e.g. when the selector 14A, 14B is provided as a gateway device as in Figure 5B or at the playback apparatus 30). The selector 14A, 14B may additionally receive content from one or more of the plurality of recording devices 11. In other examples, it may not receive the content. The selector 14A, 14B may additionally include one or more output interfaces 142 for outputting sensor data and directional data. In some examples, content may additionally be output via the at least one output interface. In examples in which the selector 14B controls operation of the recording devices, control signals for controlling the recording devices may be output via the at least one output interface 142. The nature of the input and output interfaces 141, 142 may depend on the location in the recording/playback system at which the selector is provided. For instance, where the selector 14A, 14B is internal to another device, such as depicted in Figure 5A, the interfaces may be wired interfaces. In contrast, where the selector 14B is a standalone device such as a gateway device, at least one of the interfaces may be a wireless communications interface.

Figure 8 is an example schematic block diagram of the locator apparatus(es) 13 shown above in Figures 1, 2, 3A, 3B, 5A and 5B. The locator apparatus 13 comprises a phased array of antennas 132 connected to a transceiver 134 via an RF switch 133. The RF switch 133 is configured to connect the antennas of the array 132 to the transceiver 134 one at a time and may be controlled to cycle through the antennas while a specific portion of the positioning packet is received. A controller 130 may be configured to estimate directional information based on data (e.g. I & Q data) derived during receipt of the specific portion.

The controller 130 may control the operation of the other components of the locator apparatus 13. The controller 130 may for instance process the directional data, e.g. to include time information generated by a clock 135. The controller 130 may additionally process sensor data received in positioning packets so as to include time information. The controller may additionally be configured to output the directional information and, if applicable, also the sensor data via an output interface for provision directly or indirectly to the selector 14A, 14B.
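
As a simplified, non-authoritative illustration of how directional information might be estimated from such I/Q data, the sketch below derives a bearing from the phase difference between one sample taken at each of two adjacent antennas of the array. Real angle-of-arrival estimation must additionally compensate for the carrier phase rotation between antenna switching slots, which is omitted here; the antenna spacing and wavelength values are assumptions.

    import cmath
    import math

    WAVELENGTH_M = 0.125   # ~2.4 GHz carrier (assumed)
    SPACING_M = 0.05       # assumed distance between adjacent antennas

    def angle_of_arrival(iq_ant_a, iq_ant_b):
        """Estimate bearing (degrees from broadside) from one I/Q sample per
        antenna, using the phase difference across the antenna pair."""
        phase_delta = cmath.phase(iq_ant_b * iq_ant_a.conjugate())
        sin_theta = WAVELENGTH_M * phase_delta / (2 * math.pi * SPACING_M)
        sin_theta = max(-1.0, min(1.0, sin_theta))  # clamp numerical overshoot
        return math.degrees(math.asin(sin_theta))

    # e.g. angle_of_arrival(complex(1.0, 0.0), complex(0.92, 0.39)) -> approx. 9 degrees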

The controller 130 may also be configured to determine signal strength information for received packets. This may be used to estimate a proximity of the trackable device to the locator apparatus 13.

Figure 9 is a schematic block diagram of the recording control apparatus 12 shown above in Figures 1, 2, 3A, 3B, 5A and 5B. In addition to the locator apparatus 13, the recording control apparatus 12 comprises a controller 120. The controller 120 receives content from the one or more recording devices 11 with which it is associated. The controller 120 may then cause at least some of the content to be transferred, via e.g. a transceiver 121 and an antenna 122, to at least one of the server apparatus 25, the selector 14B (e.g. when it is provided as a gateway device as in Figure 5B), the gateway device 36 (when one is present, e.g. as in Figure 3B) and the playback apparatus 30, 40. The controller 120 may also receive the directional information and, in some examples, the sensor data from the locator apparatus 13. The controller 120 may also cause the directional data and, if applicable, the sensor data to be transferred along with the content. In some examples, for instance as described in either of Figures 5A and 5B, the controller 120 may receive control signals from the selector 14B. The controller 120 may respond by controlling operation of the one or more recording devices 11 with which it is associated and/or selectively transferring the content to the playback apparatus 40 in dependence on the identity of the recording device 11 selected by the selector 14B.

Figure 10 is a schematic block diagram of an example of one of the recording devices 11. In this example, the recording device 11 comprises a camera module 111. The camera module 111 comprises video camera components that are known in the art including, for example, a lens, a CCD array and an image processor. The other components of the recording device may be controlled by a controller 110. In some embodiments, the recording device 11 receives instructions from the recording control apparatus 12 which may, in turn, in some examples, receive instructions from the selector 14. The recording device 11 may also be provided with a clock 113 so that timing information may be applied to the recorded content. The clocks of each of the recording devices 11 may be synchronised by a master clock, for instance at the selector 14. The recording device 11 may also, or alternatively, comprise a microphone 112 to capture audio content. The recording devices may comprise separate video and audio processors. Alternatively, the video and audio processing capability may be combined in a single multimedia processor or, as shown in Figure 10, the processing functionality of the camera module 111 and the microphone 112 may be performed by the controller 110. The recording device 11 also comprises an output interface 114 via which captured content is provided to the recording control apparatus 12. The recording device may also include an input interface 115 for receiving control signals from the recording control apparatus 12.

Figure 11 is a schematic block diagram of one of the trackable devices 10. The trackable device 10 comprises a transceiver 101 for transmitting wireless messages such as BLE advertisement messages via an antenna 102. The trackable device 10 also comprises a controller 100 for controlling the other components of the device 10 to provide, in particular, the functionality described above in reference to the trackable devices 10. As described above, the trackable device 10 may include one or more sensors 103, 104. Alternatively, the trackable device may include an interface (not shown) for receiving sensor data from one or more external sensor devices. Also as described above, in some examples the trackable device 10 may be in the form of a component of a mobile communication device such as a mobile phone, smart watch, electronic glasses etc. In other examples, the trackable device 10 may be in the form of a simple tag. Although not shown in the Figures, external sensor devices including one of the sensors which are associated with an asset or trackable device may have a similar configuration to that shown for the trackable device, although they are unable to transmit positioning packets.
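
The specification does not define a payload format for the positioning packets; the following sketch merely illustrates the idea of a device ID carried together with sensor readings, using an entirely hypothetical field layout.

    import struct

    # Hypothetical layout: device_id (uint32), accel x/y/z in mg (int16 each),
    # heart rate in bpm (uint8) -- an assumption for illustration only.
    PAYLOAD_FMT = "<IhhhB"

    def pack_payload(device_id, accel_mg, heart_rate_bpm):
        return struct.pack(PAYLOAD_FMT, device_id, *accel_mg, heart_rate_bpm)

    def unpack_payload(payload):
        device_id, ax, ay, az, hr = struct.unpack(PAYLOAD_FMT, payload)
        return {"device_id": device_id, "accel_mg": (ax, ay, az),
                "heart_rate_bpm": hr}

    # e.g. unpack_payload(pack_payload(42, (0, 0, -1000), 72))

Such a device ID is what allows the selector, in operations S4.5 and S6.4 above, to associate incoming sensor data with the correct trackable device.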

Figure 12 shows a recording apparatus 2000 which may form at least part of the recording apparatuses 1, 2 described above. The recording apparatus 2000 comprises an array of pairs of cameras 2001. Each camera pair 2001 comprises a first camera 2001a and a second camera 2001b. The first camera 2001a is configured to capture a left-eye image and the second camera 2001b is configured to capture a right-eye image. The recording apparatus 2000 comprises a controller 2002 to combine the left-eye image with the right-eye image to form a stereoscopic image. It will be appreciated that the processing involved in forming stereoscopic videos is relatively intensive since images must be captured from both cameras of each pair 2001. Subsequent editing of stereoscopic videos is also relatively intensive in terms of processing. The array of camera pairs may be spherical or circular. For a spherical array, both elevational and azimuthal data may be recorded. For a circular array, only azimuthal data may be recorded.
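
By way of a generic illustration only (not the actual processing performed by the controller 2002), combining the two images of a pair into a side-by-side stereoscopic frame, one common packing for stereoscopic video, might look as follows.

    import numpy as np

    def side_by_side(left_frame: np.ndarray, right_frame: np.ndarray) -> np.ndarray:
        """left_frame/right_frame: HxWx3 arrays from the two cameras of a pair;
        the returned frame has twice the width, left eye on the left."""
        assert left_frame.shape == right_frame.shape
        return np.hstack((left_frame, right_frame))

Performing this combination for every frame of every camera pair illustrates why the processing load grows quickly with the number of pairs in the array.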

Figure 13 is an illustration of a recording device product 1100 which comprises a spherical array of recording devices 11 (in particular, video cameras but in some instances also microphones), which may form at least part of the recording apparatuses 1, 2 described above. The video cameras 11 are arranged to provide video coverage across 360 degrees in terms of both elevation and azimuth, i.e. across an entire sphere, which may be termed a video sphere. Each of the cameras 11 is arranged to capture a section of the three-dimensional space surrounding the camera array. The recording apparatus 1100 shown in Figure 13 has six cameras 11-1 to 11-6 (camera 11-6 is not visible as it is located on the reverse of the recording device product 1100).

Although not visible in the figures, both of the examples of Figures 12 and 13 include recording control apparatus 12 (including locator apparatus 13 provided at a central point between the cameras) similar to that described with reference to the preceding Figures. As the recording device product 1100 of Figure 13 includes a spherical camera array, the locator apparatus in this example may include plural phased arrays of antennas, one for determining the angle of elevation and another for determining the angle of azimuth.

A few non-limiting example implementations of the system discussed above with reference to Figures 1 to 13 will now be described.

Example Implementation 1 - Motor Racing

As mentioned with reference to Figure 2, the system described herein may be used for capturing content from a motor race in which each vehicle includes a trackable device and an acceleration sensor, the data from which is transmitted to the recording apparatus in real time as part of, or in addition to, a wireless signal for enabling the determination of the bearing from a recording module of the recording apparatus to the vehicle.

The table below shows the directional information (angles of elevation and azimuth), sensor data (acceleration data) and signal strength information (e.g. RSSI) which may be included along with time information in a file of information corresponding to a particular trackable device identified by a device ID.

As can be seen from the table, the acceleration data indicates that the trackable device does not accelerate much during the first two samples (rows of the table), but that it subsequently accelerates/decelerates rapidly for a while, after which time it stops accelerating/decelerating. As can be seen from the Time column, the data is stored by the recording apparatus more frequently when the trackable device is accelerating/decelerating than when it is not.

This data may be generated at a time when, for instance, a vehicle in the race begins to accelerate/decelerate while the other cars do not. This can be recognised from the acceleration data which is transmitted from the vehicle to the recording apparatus 1, 2. The recording apparatus 1, 2 receives the acceleration data and determines the directional information, and the selector 14A, 14B (either in real-time or during later playback) responds. Specifically, the response is triggered by a predefined threshold value for the acceleration data having been exceeded. For example, the acceleration data threshold may be set to 00:10:00:00.

In this example, the selector 14B is monitoring the sensor data in real-time and, in response to the acceleration data exceeding the threshold, the selector 14B selects a relevant one or more recording devices 11 and controls them to start recording (enter an active mode) or to focus in the direction indicated by the directional information. As rapid deceleration may indicate a crash, the recording apparatus 1, 2 may thus be controlled to automatically capture and/or play back footage of the crash (or immediately afterwards).

The threshold may in some examples be based on averaging a number of subsequent acceleration values, to avoid unnecessary triggering that could be caused by short glitches in the acceleration sensor. Also, the recording device may continue recording the accelerating/decelerating vehicle for a while after the threshold has been exceeded. The duration of the recording may be pre-defined, or may be based on the acceleration data having subsequently decreased below the threshold. Also, it may be possible for the director to manually stop the recording at a suitable time, even though the recording was started automatically. In some examples, current or historical acceleration data from other vehicles may be used to define the threshold. For instance, values derived from other vehicles may be used to set a threshold such that, if another vehicle is behaving differently, content including that vehicle is automatically recorded or replayed. For example, if most of the vehicles are driving a corner with a roughly similar acceleration curve but one vehicle does something different, for example due to missing a braking point, this will be automatically recorded or replayed.
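
A sketch of such glitch-resistant triggering is given below: a short moving average of acceleration samples must exceed the threshold before recording starts, and recording stops once the average falls back below it. The window length and the units of the threshold are assumptions made for the example.

    from collections import deque

    class AccelerationTrigger:
        def __init__(self, threshold, window=5):
            self.threshold = threshold
            self.samples = deque(maxlen=window)  # averaging suppresses short glitches
            self.recording = False

        def update(self, accel_magnitude):
            """Feed one acceleration sample; returns True while recording."""
            self.samples.append(accel_magnitude)
            avg = sum(self.samples) / len(self.samples)
            if not self.recording and avg > self.threshold:
                self.recording = True    # start recording automatically
            elif self.recording and avg < self.threshold:
                self.recording = False   # stop once the acceleration subsides
            return self.recording

The same structure could equally drive replay selection during later playback rather than live recording control.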

Example Implementation 2 - Ice Hockey

Another example implementation of the system is for capturing content at an ice hockey game, in which each player has a trackable device as well as one or more sensors that record, for example, the heart rate and other physical data, such as acceleration. The sensor data and positioning packets are transmitted to the recording apparatus 1, 2. The directional data and the sensor data are passed to the selector 14A, 14B, which monitors the sensor data (either in real-time or during playback of the content). For example, if a player is about to start a fight, his body sensor values (e.g. heart rate) may indicate the player's "mood", which may be detected by the selector 14A, 14B. As with the motor race, when a threshold associated with the sensor data is exceeded, the selector may cause (in real-time) a selected recording device to record and/or focus in the direction of the angry player, so that footage of the fight is captured.

Example Implementation 3 - Wildlife Videography

Another implementation is for use in filming wildlife. In this example, the recording apparatus 1, 2 may include, for instance, eight cameras each having a resolution of 4K, or eight pairs of stereoscopic cameras each having a resolution of 4K. In the first case, there may be eight streams of 4K data from which to select for replay or editing, whereas in the second case there are sixteen streams of 4K data. For a system which is configured to process in real time, thereby providing a live feed of the content being captured, this is a very large amount of data to process.

In this example, the animals may be fitted with trackable devices having an accelerometer. As discussed above, the accelerometer data may be used in conjunction with the directional information to select a particular content stream to be used for the live feed. Moreover, it may be used to select when to switch to a content stream from an adjacent recording device. For example, if an animal is hiding in a cluster of bushes, the directional information may indicate whether the animal will emerge from the right or left, and the accelerometer data indicates at what rate the animal is moving, so that the selector can decide if the current camera selection for viewing/replay should continue or should switch to another camera into whose field of view the animal is expected to enter.

Example Implementation 4 - Professional Videography

A system as described herein may be utilised for professional videography (e.g. in the movie industry). In such an implementation, bandwidth and/or storage utilisation may not be an issue and so all the captured content may be stored locally or in the cloud, such that all content is available for use in later editing. However, in such an implementation, the selection of a recording device or content stream on the basis of directional information and sensor data may allow, for instance, the director to be provided with a preview based on only the selected content stream/content derived from the selected recording device. This enables the director to determine during a video shoot if the captured content is acceptable.

Other non-limiting examples of sensor data and implementations for the system are the temperature of an engine, the power meter output of a bicycle, the orientation of a speed boat, the air pressure of an airplane and the compass direction of an orienteer.

Some further details of components and features of the above-described apparatuses and devices 10, 11, 12, 13, 14A, 14B, 30, 40 and alternatives for them will now be described.

The controllers 100, 110, 120, 130, 140, 41 of each of the apparatuses or devices 10, 11, 12, 13, 14A, 14B, 30, 40 comprise processing circuitry 1001, 1101, 1201, 1301, 1401, 411 communicatively coupled with memory 1002, 1102, 1202, 1302, 1402, 412. The memory 1002, 1102, 1202, 1302, 1402, 412 has computer readable instructions 1002A, 1102A, 1202A, 1302A, 1402A, 412A stored thereon which, when executed by the processing circuitry 1001, 1101, 1201, 1301, 1401, 411, cause the processing circuitry 1001, 1101, 1201, 1301, 1401, 411 to cause performance of various ones of the operations described with reference to Figures 1 to 13. The controllers 100, 110, 120, 130, 140, 41 may in some instances be referred to, in general terms, as "apparatus".

The processing circuitry 1001, 1101, 1201, 1301, 1401, 411 of any of the apparatuses 10, 11, 12, 13, 14A, 14B, 30, 40 described with reference to Figures 1 to 13 may be of any suitable composition and may include one or more processors 1001A, 1101A, 1201A, 1301A, 1401A, 411A of any suitable type or suitable combination of types. For example, the processing circuitry 1001, 1101, 1201, 1301, 1401, 411 may be a programmable processor that interprets computer program instructions 1002A, 1102A, 1202A, 1302A, 1402A, 412A and processes data. The processing circuitry 1001, 1101, 1201, 1301, 1401, 411 may include plural programmable processors. Alternatively, the processing circuitry 1001, 1101, 1201, 1301, 1401, 411 may be, for example, programmable hardware with embedded firmware. The processing circuitry 1001, 1101, 1201, 1301, 1401, 411 may be termed processing means. The processing circuitry 1001, 1101, 1201, 1301, 1401, 411 may alternatively or additionally include one or more Application Specific Integrated Circuits (ASICs). In some instances, processing circuitry 1001, 1101, 1201, 1301, 1401, 411 may be referred to as computing apparatus.

The processing circuitry 1001, 1101, 1201, 1301, 1401, 411 is coupled to the respective memory (or one or more storage devices) 1002, 1102, 1202, 1302, 1402, 412 and is operable to read/write data to/from the memory 1002, 1102, 1202, 1302, 1402, 412. The memory 1002, 1102, 1202, 1302, 1402, 412 may comprise a single memory unit or a plurality of memory units, upon which the computer readable instructions (or code) 1002A, 1102A, 1202A, 1302A, 1402A, 412A is stored. For example, the memory 1002, 1102, 1202, 1302, 1402, 412 may comprise both volatile memory 1002-2, 1102-2, 1202-2, 1302-2, 1402-2, 412-2 and non-volatile memory 1002-1, 1102-1, 1202-1, 1302-1, 1402-1, 412-1. For example, the computer readable instructions 1002A, 1102A, 1202A, 1302A, 1402A, 412A may be stored in the non-volatile memory 1002-1, 1102-1, 1202-1, 1302-1, 1402-1, 412-1 and may be executed by the processing circuitry 1001, 1101, 1201, 1301, 1401, 411 using the volatile memory 1002-2, 1102-2, 1202-2, 1302-2, 1402-2, 412-2 for temporary storage of data or data and instructions. Examples of volatile memory include RAM, DRAM and SDRAM etc. Examples of non-volatile memory include ROM, PROM, EEPROM, flash memory, optical storage, magnetic storage, etc. The memories in general may be referred to as non-transitory computer readable memory media.

The term 'memory', in addition to covering memory comprising both non-volatile memory and volatile memory, may also cover one or more volatile memories only, one or more non-volatile memories only, or one or more volatile memories and one or more non-volatile memories.

The computer readable instructions 1002A, 1102A, 1202A, 1302A, 1402A, 412A may be pre-programmed into the apparatuses 10, 11, 12, 13, 14A, 14B, 30, 40. Alternatively, the computer readable instructions 1002A, 1102A, 1202A, 1302A, 1402A, 412A may arrive at the apparatus 10, 11, 12, 13, 14A, 14B, 30, 40 via an electromagnetic carrier signal or may be copied from a physical entity 500 (see Figure 11) such as a computer program product, a memory device or a record medium such as a CD-ROM or DVD. The computer readable instructions 1002A, 1102A, 1202A, 1302A, 1402A, 412A may provide the logic and routines that enable the devices/apparatuses 10, 11, 12, 13, 14A, 14B, 30, 40 to perform the functionality described above. The combination of computer-readable instructions stored on memory (of any of the types described above) may be referred to as a computer program product.

Where applicable, the Bluetooth Low Energy (BLE) capability of the apparatuses 10, 11, 12, 13 may be provided by a single integrated circuit. It may alternatively be provided by a set of integrated circuits (i.e. a chipset). The BLE capability may alternatively be provided by a hardwired, application-specific integrated circuit (ASIC).

As will be appreciated, the apparatuses 10, 11, 12, 13, 14A, 14B, 30, 40 described herein may include various hardware components which may not have been shown in the Figures. For instance, the trackable device 10 may in some implementations be a portable computing device such as a mobile telephone or a tablet computer and so may contain components commonly included in a device of the specific type. Similarly, the apparatuses 10, 11, 12, 13, 14A, 14B, 30, 40 may comprise further optional software components which are not described in this specification since they may not have direct interaction with embodiments of the invention. Similarly, although two or more apparatuses and/or controllers may have been depicted and described as separate entities, in some examples, the functionality provided by these entities may be provided by a single entity. For instance, in Figure 9, the recording control apparatus 12 includes its own controller 120 as well as the locator apparatus 13 and the selector 14B. However, the functionality provided by each of these entities may instead be provided by a single controller (comprising processing circuitry and memory) or indeed two separate controllers.

Embodiments of the present invention may be implemented in software, hardware, application logic or a combination of software, hardware and application logic. The software, application logic and/or hardware may reside on memory, or any computer media. In an example embodiment, the application logic, software or an instruction set is maintained on any one of various conventional computer-readable media. In the context of this document, a "memory" or "computer-readable medium" may be any media or means that can contain, store, communicate, propagate or transport the instructions for use by or in connection with an instruction execution system, apparatus, or device, such as a computer.

Reference to, where relevant, "computer-readable storage medium", "computer program product", "tangibly embodied computer program" etc., or a "processor" or "processing circuitry" etc. should be understood to encompass not only computers having differing architectures such as single/multi-processor architectures and sequencers/parallel architectures, but also specialised circuits such as field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), signal processing devices and other devices. References to computer program, instructions, code etc. should be understood to encompass software for a programmable processor, or firmware such as the programmable content of a hardware device, whether instructions for a processor or configuration settings for a fixed-function device, gate array, programmable logic device, etc.

As used in this application, the term 'circuitry' refers to all of the following: (a) hardware-only circuit implementations (such as implementations in only analogue and/or digital circuitry); (b) combinations of circuits and software (and/or firmware), such as (as applicable): (i) a combination of processor(s) or (ii) portions of processor(s)/software (including digital signal processor(s)), software, and memory(ies) that work together to cause an apparatus, such as a mobile phone or server, to perform various functions; and (c) circuits, such as a microprocessor(s) or a portion of a microprocessor(s), that require software or firmware for operation, even if the software or firmware is not physically present.

This definition of 'circuitry' applies to all uses of this term in this application, including in any claims. As a further example, as used in this application, the term "circuitry" would also cover an implementation of merely a processor (or multiple processors) or a portion of a processor and its (or their) accompanying software and/or firmware. The term "circuitry" would also cover, for example and if applicable to the particular claim element, a baseband integrated circuit or applications processor integrated circuit for a mobile phone or a similar integrated circuit in a server, a cellular network device, or other network device.

If desired, the different functions discussed herein may be performed in a different order and/or concurrently with each other. Furthermore, if desired, one or more of the above-described functions may be optional or may be combined. Similarly, it will also be appreciated that the flow diagrams of Figures 4 and 6 are examples only and that various operations depicted therein may be omitted, reordered and/or combined.

Although various aspects of the invention are set out in the independent claims, other aspects of the invention comprise other combinations of features from the described embodiments and/or the dependent claims with the features of the independent claims, and not solely the combinations explicitly set out in the claims.

It is also noted herein that while the above describes various examples, these descriptions should not be viewed in a limiting sense. Rather, there are several variations and modifications which may be made without departing from the scope of the present invention as defined in the appended claims. For instance, although the above examples have been described with reference to HAIP technology, it will be appreciated that the principles described herein are equally applicable to any positioning system which is capable of utilising another high accuracy positioning protocol.