Title:
SENSOR ANALYSIS AND VIDEO CREATION
Document Type and Number:
WIPO Patent Application WO/2016/012313
Kind Code:
A1
Abstract:
There is provided a computer-implemented method for enabling a reduced playback of video from video data recorded during a period of time. The method comprises receiving sensor data, the sensor data comprising a record of sensor readings during the period of time; analysing the sensor data to identify at least one predetermined characteristic, the or each predetermined characteristic being contained in a respective subset of the sensor data corresponding to a respective sub-period of time within the period of time; and recording time data associated with the or each respective sub-period of time, the time data being useable to enable the reduced playback of video from the video data recorded during the period of time, the reduced playback of video corresponding to the or each respective sub-period of time.

Inventors:
THEOBALD ADAM (GB)
GIRVEN JONATHAN (GB)
Application Number:
PCT/EP2015/066088
Publication Date:
January 28, 2016
Filing Date:
July 14, 2015
Assignee:
TRICK BOOK LTD (GB)
International Classes:
G11B27/031
Domestic Patent References:
WO2014179749A1, 2014-11-06
WO2008023352A2, 2008-02-28
Foreign References:
EP1998554A1, 2008-12-03
US20070120986A1, 2007-05-31
Other References:
EAMONN KEOGH ET AL.: "Segmenting Time Series: A Survey and Novel Approach", DEPARTMENT OF INFORMATION AND COMPUTER SCIENCE
PETITJEAN, F. O.; KETTERLIN, A.; GANCARSKI, P.: "A global averaging method for dynamic time warping, with applications to clustering", PATTERN RECOGNITION, vol. 44, no. 3, 2011, pages 678
AL-NAYMAT, G.; CHAWLA, S.; TAHERI, J.: "SparseDTW: A Novel Approach to Speed up Dynamic Time Warping", 2012
PETITJEAN, F. O.; GANCARSKI, P.: "Summarizing a set of time series by averaging: From Steiner sequence to compact multiple alignment", THEORETICAL COMPUTER SCIENCE, vol. 414, 2012, pages 76
Attorney, Agent or Firm:
SMITH, Jeremy Robert (20 Red Lion Street, London WC1R 4PJ, GB)
Claims

1. A computer-implemented method for enabling a reduced playback of video from video data recorded during a period of time, the method comprising:

receiving sensor data, the sensor data comprising a record of sensor readings during the period of time;

analysing the sensor data to identify at least one predetermined characteristic, the or each predetermined characteristic being contained in a respective subset of the sensor data corresponding to a respective sub-period of time within the period of time; and

recording time data associated with the or each respective sub-period of time, the time data being useable to enable the reduced playback of video from the video data recorded during the period of time, the reduced playback of video corresponding to the or each respective sub-period of time.

2. A method as claimed in claim 1, wherein the reduced playback of video from the video data comprises video of an event associated with the sensor data.

3. A method as claimed in claim 1 or claim 2, wherein the sensor data comprises data indicative of a property of an object associated with the reduced playback of video from the video data.

4. A method as claimed in claim 3, wherein the reduced playback of video from the video data comprises video depicting the object.

5. A method as claimed in claim 3 or claim 4, wherein the reduced playback of video from the video data comprises video recorded from the perspective of the object.

6. A method as claimed in any of claims 3 to 5, wherein the object is a person.

7. A method as claimed in any preceding claim, wherein the sensor data comprises data indicative of at least one of position, movement and orientation.

8. A method as claimed in any preceding claim, wherein the sensor data comprises data indicative of at least one of position, movement and orientation of at least one sensor.

9. A method as claimed in claim 8, wherein the at least one sensor comprises at least one of a gyroscope, an accelerometer, a magnetometer, a GPS sensor, a proximity sensor, a temperature sensor, a gravity sensor, a rotation sensor and a heart-rate sensor.

10. A method as claimed in any preceding claim, wherein the record of sensor readings comprises sensor readings from a plurality of sensors.

11. A method as claimed in claim 10, wherein the plurality of sensors comprises different types of sensors.

12. A method as claimed in claim 10 or claim 11, wherein one or more of the plurality of sensors is associated with a first portable device.

13. A method as claimed in claim 12, wherein one or more of the plurality of sensors is associated with a second portable device separate from the first portable device.

14. A method as claimed in any of claims 10 to 13, wherein analysing the sensor data comprises averaging sensor readings from at least two of the plurality of sensors to form combined sensor data, the combined sensor data being analysed to identify the at least one predetermined characteristic.

15. A method as claimed in claim 14, wherein the combined sensor data comprises a sum of squares of respective sensor readings from the respective sensors.

16. A method as claimed in any preceding claim, wherein the sensor data comprises sensor time data, the sensor time data being usable to identify the sub-period of time.

17. A method as claimed in any preceding claim, wherein the method further comprises:

after recording the time data, receiving additional sensor data, the additional sensor data comprising a record of additional sensor readings during the period of time; analysing the additional sensor data to identify at least one predetermined characteristic, the or each predetermined characteristic being contained in a respective subset of the additional sensor data corresponding to a respective sub-period of time within the period of time; and updating the recorded time data based on the analysis of the additional sensor data, the updated time data being useable to enable an updated reduced playback of video from the video data recorded during the period of time.

18. A method as claimed in any preceding claim, wherein the video data comprises video time data, the video time data being usable to identify a subset of the video data corresponding to the sub-period of time.

19. A method as claimed in any preceding claim, wherein the method further comprises receiving the video data.

20. A method as claimed in claim 19, wherein the sensor data and the video data are received from the same device.

21. A method as claimed in claim 19 or claim 20, wherein the method further comprises storing the received video data.

22. A method as claimed in any of claims 19 to 21 , wherein the method further comprises:

after receiving the video data and after recording the time data, receiving additional video data recorded during the period of time, wherein the recorded time data is useable to enable an updated reduced playback of video from the video data and the additional video data.

23. A method as claimed in any preceding claim, wherein the video data comprises data created using at least one camera.

24. A method as claimed in claim 23, wherein the method further comprises sending an initiation command to cause the at least one camera to initiate the creation of the video data.

25. A method as claimed in claim 23 or claim 24, wherein the method further comprises receiving camera sensor data, the camera sensor data comprising a record of sensor readings indicative of at least one property of the at least one camera during the period of time.

26. A method as claimed in claim 25, wherein the record of sensor readings indicative of at least one property of the at least one camera comprises sensor readings from at least one camera sensor, the or each camera sensor being associated with a respective camera of the at least one camera.

27. A method as claimed in claim 25 or claim 26, wherein the at least one property of the at least one camera comprises at least one of the position, orientation, field of view, focus and movement of a respective camera of the at least one camera.

28. A method as claimed in any of claims 25 to 27, wherein the method further comprises analysing the camera sensor data and the sensor data to identify which of the at least one camera captured, during the or each respective sub-period of time, at least one object associated with the respective identified predetermined characteristic.

29. A method as claimed in claim 28, wherein the method further comprises, based on the identification of which of the at least one camera captured, during the or each respective sub-period of time, at least one object associated with the respective identified predetermined characteristic, recording camera identification data identifying the or each respective camera, the camera identification data being usable to enable the reduced playback of video to correspond to video created using the or each respective camera, thereby ensuring that the at least one object associated with the respective identified predetermined characteristic appears in the reduced playback of video.

30. A method as claimed in any of claims 23 to 29, wherein the sensor data comprises data from at least one sensor, and wherein one of the at least one sensor and one of the at least one camera are provided on the same device.

31. A method as claimed in any preceding claim, wherein the at least one predetermined characteristic comprises a predetermined signal profile, and wherein identifying the at least one predetermined characteristic comprises identifying a part of the sensor data that matches the predetermined signal profile.

32. A method as claimed in any preceding claim, wherein identifying the at least one predetermined characteristic comprises identifying a part of the sensor data that has a sensor reading of a value that is above or below a predetermined threshold, and/or inside or outside a predetermined range, and/or more than a predetermined amount away from a time-averaged value of the sensor reading during the period of time.

33. A method as claimed in any preceding claim, wherein identifying the at least one predetermined characteristic comprises identifying a part of the sensor data that has a sensor reading indicative of at least one of high speed, high acceleration and high rotational speed relative to a respective time-averaged value of the sensor reading during the period of time, and/or relative to a predetermined value.

34. A method as claimed in any preceding claim, wherein the time data comprises a start time of the sub-period of time.

35. A method as claimed in any preceding claim, wherein the time data comprises an end time of the sub-period of time.

36. A method as claimed in any preceding claim, wherein the time data comprises a duration of the sub-period of time.

37. A method as claimed in any preceding claim, wherein the method further comprises sending the time data to a user device to enable the user device to effect the reduced playback of video.

38. A method as claimed in claim 37, wherein the time data is sent in response to receiving a user request from the user device.

39. A method as claimed in any preceding claim, wherein the method further comprises using the time data to effect the reduced playback of video.

40. A method as claimed in any preceding claim, wherein the reduced playback of video is personalised to comprise video from the video data that is associated with a user profile.

41. A method as claimed in any preceding claim, wherein the video data is stored on a remote video server, the stored video data being accessible to enable the reduced playback of video.

42. A method as claimed in claim 41, wherein the method further comprises sending the time data to the remote video server, so that the time data can be used at the remote video server or accessed from the remote video server to enable the reduced playback of video.

43. A method as claimed in any preceding claim, wherein the at least one predetermined characteristic comprises a plurality of predetermined characteristics.

44. A method as claimed in claim 43, wherein at least two of the plurality of identified predetermined characteristics are the same type of characteristic.

45. A method as claimed in any preceding claim, wherein at least two of each respective sub-period of time overlap.

46. A method as claimed in any preceding claim, wherein the method further comprises applying a visual effect to a part of the video data corresponding to the or each respective sub-period of time to visually affect the reduced playback of video.

47. A method as claimed in any preceding claim, wherein the method further comprises associating the or each identified predetermined characteristic with a respective identifier selected from a list, each entry in the list being associated with a respective associated characteristic, the respective identifier being the identifier from the list having the associated characteristic that most closely matches the respective identified predetermined characteristic.

48. A method as claimed in claim 47, wherein the method further comprises associating the respective identifier with the respective sub-period of time associated with the respective identified predetermined characteristic.

49. A method as claimed in claim 47 or claim 48, wherein the method further comprises associating the respective identifier with the video data.

50. A method as claimed in claim 49, wherein the reduced playback of video is enhanced based on the respective identifier.

51. A method as claimed in any of claims 47 to 50, wherein the list is a list of actions, such as "tricks" made by a skateboarder, and the associated characteristics comprise characteristic sensor data for each respective action.

52. A computer-readable medium comprising computer-executable instructions to perform the method of any preceding claim.

53. A device configured to perform the method of any of claims 1 to 51.

54. A system for enabling a reduced playback of video from video data recorded during a period of time, the system comprising:

means for receiving sensor data, the sensor data comprising a record of sensor readings during the period of time;

means for analysing the sensor data to identify at least one predetermined characteristic, the or each predetermined characteristic being contained in a respective subset of the sensor data corresponding to a respective sub-period of time within the period of time; and

means for recording time data associated with the or each respective sub-period of time, the time data being useable to enable the reduced playback of video from the video data recorded during the period of time, the reduced playback of video corresponding to the or each respective sub-period of time.

55. A computer-implemented method for causing a reduced playback of video from video data recorded during a period of time, the method comprising:

receiving a user request to initiate a video capture process;

in response to the user request, initiating the video capture process by causing at least one camera to capture video data, the video data being captured during the period of time;

initiating a sensor reading process by causing at least one sensor to obtain sensor readings, the sensor readings occurring during the period of time;

receiving sensor data, the sensor data comprising a record of the sensor readings during the period of time;

sending the received sensor data to a sensor server configured to analyse the sensor data to identify at least one predetermined characteristic, the or each predetermined characteristic being contained in a respective subset of the sensor data corresponding to a respective sub-period of time within the period of time, the sensor server being configured to record time data associated with the or each respective sub-period of time;

receiving a user request to cause the reduced playback of video from the video data; and in response to the user request, causing the reduced playback of video from the video data based on the time data, the reduced playback of video corresponding to the or each respective sub-period of time.

56. The method of claim 55, wherein the sensor server and the video server are the same server.

57. The method of claim 55 or 56, wherein the method further comprises receiving the video data.

58. The method of claim 57, wherein the method further comprises sending the received video data to a video server, wherein the video server is accessed to enable the reduced playback of video.

59. A computer-implemented method for causing a reduced playback of video from video data recorded during a period of time, the method comprising:

receiving a user request to initiate a video capture process;

in response to the user request, initiating the video capture process by causing at least one camera to capture video data, the video data being captured during the period of time;

initiating a sensor reading process by causing at least one sensor to obtain sensor readings, the sensor readings occurring during the period of time;

receiving and storing sensor data, the sensor data comprising a record of the sensor readings during the period of time;

receiving and storing the video data;

analysing the sensor data to identify at least one predetermined characteristic, the or each predetermined characteristic being contained in a respective subset of the sensor data corresponding to a respective sub-period of time within the period of time;

recording time data associated with the or each respective sub-period of time; receiving a user request to cause the reduced playback of video from the video data; and

in response to the user request, using the time data to cause the reduced playback of video from the stored video data, the reduced playback of video corresponding to the or each respective sub-period of time.

Description:
Sensor analysis and video creation

This disclosure relates to sensor analysis and video creation, and in particular to an 'auto- editing' process whereby sensor data is analysed and correlated with a video.

Typically, the process for editing video footage begins with capturing footage with one or more cameras. The footage is then collected into one place. Once collected, footage is uploaded to a video editor which enables the user to edit the footage. When editing, users may cut and modify the footage and add special effects. This manual process involves identifying preferred parts of the video, and then cutting these out and stitching them back together into the desired sequence. This results in an edited video alongside the raw footage. The edited video would typically then be uploaded to a video platform such as YouTube™ to enable public viewing of the edited video. There are many video editing tools available in the public domain, though typically these tools require a user to sift through video footage and edit it manually. Video editing is a time-consuming process. It takes time to find interesting content in a mass of footage and cut, edit and add in effects. A typical user may not possess the skills or resources, for example time and money, to do this. Furthermore, editing tools are typically very expensive, and therefore not accessible to all. As a result, most footage is not edited or shared.

There are applications on mobile devices which perform basic editing functions and which allow video editing on the move, however these applications still require the user to manually edit footage.

Further, when users edit footage using an editing tool, the results are an edited video and the original raw video. This results in duplicate footage, and the edited footage is the same for every viewer. This leads to inefficient storage of data, with multiple edited copies being stored along with the original raw footage. There is also a risk that a user may accidentally delete portions of the original raw footage, or the raw footage in its entirety, in favour of an edited version.

Alternatively, if only the edited video is kept, and the original video is deleted, data from the original video will be lost. If a different edit is desired at a future time, the different edit would only be able to be a subset of the edited video.

The inventors have recognised that this manual approach to video editing can be laborious, time-consuming, inefficient and expensive, especially to someone who is not an expert in this field. It has been identified that the manual editing process can be bypassed via the use of an automatic editing process. During the video capturing process, sensors are used to capture data. The sensor data is then used to select parts of the video deemed to be of interest, and at least some of these parts are then used to form a composite video. The result is an edited video which has captured interesting moments, and can be viewed on a video platform. In essence, 'highlights' from the footage are captured, and the playback of video contains only these highlights.

Footage can be captured using multiple cameras at the same time. The user can automatically be tagged in footage taken from each camera. Once uploaded, an event can be viewed from multiple angles simultaneously. The stitching together of footage from multiple cameras is an enhanced playback feature.

The present disclosure also relates to a non-destructive editing process. Footage is edited in a non-destructive manner and therefore the original raw video remains the same. A user viewing the video will watch an edited version in playback. The risk of a user deleting raw footage in favour of an edit is removed. As the editing process can happen on the client side, two or more different users may be provided with the playback of a different reduced video from the same original video data, with the playback being determined based on user profile data.

The present disclosure also relates to the distribution and sharing of video content. In the current market, to share video content a user must transfer the file via a service such as Dropbox or physically move it from one hard drive to another. The inventors have recognised that there are certain disadvantages to this approach. Using sensor data, the platform knows which users are present in the video footage and will automatically share it with those users, enabling automatic distribution to selected parties via a social platform.

Methods described herein make the process of editing faster, cheaper, more enjoyable and accessible to all. Analysis algorithms work in the background to "edit" footage automatically, which makes the process efficient from a time perspective. The automatic nature of the process also means that it is not necessary for a user to edit or transfer video data manually. The process can take place anywhere where there is a network connection, and can even take place offline in some embodiments. An algorithm is used to analyse the sensor data, and hence the analysis is dependent on the input provided by the sensor(s). The video capture may be of a person who is himself carrying or wearing a sensor, which records his movements as he performs an activity. If the person's sensor data for a particular sub-period of time during the recording process is noted by the algorithm, then the relevant part of the video will be included in the edited video.

This intuitive process means that users understand, at a high level, how the editing process is determined, and so edited videos are less random and fulfil expectations. Methods associated with the present disclosure therefore provide a powerful editing tool for the user. Using a consistent process to edit footage makes it easier for a user to produce a video considered suitable for publishing. The inventors have recognised that the non-image sensor analysis carried out in the present disclosure has advantages over image analysis of the video data itself. Image analysis can be extremely CPU intensive, which means the process takes a long time to run, damages the user experience and can adversely affect the reliability of the platform. It can also make the process expensive to run. These costs are often passed down to the user.

Processing sensor data is much less CPU intensive than performing image recognition. The disclosure therefore provides a much more efficient way of editing footage, saving time and cost, which benefits both the business and the user. Data captured by the sensors is accurate and rich, and when combined with template matching and cluster analysis techniques (described in greater detail below) it yields an accurate understanding of what has happened within the video.

The automatic tagging of movements within the video enables a user to semantically search through the video content (e.g. the user can search for the point in the video in which the sensor detected that a "360" move was carried out). The search engine has a deeper understanding of what actually happens in the video, improving search results and experience. The deeper understanding also benefits advertisers as they can place adverts on videos that are more suitable for their brand.

An additional advantage of the process described herein is that as only raw footage is stored, old footage can be re-edited when there is an improvement to the algorithm. The algorithm can be iterated and as the algorithm becomes smarter with added features, the resulting "edited" video also changes to include new features automatically. This is possible due to the non-destructive method used to edit footage (described in further detail below).

An invention is set out in the independent claims. Optional features are set out in the dependent claims.

According to an aspect, there is provided a computer-implemented method for enabling a reduced playback of video from video data recorded during a period of time, the method comprising: receiving sensor data, the sensor data comprising a record of sensor readings during the period of time; analysing the sensor data to identify at least one predetermined characteristic, the or each predetermined characteristic being contained in a respective subset of the sensor data corresponding to a respective sub-period of time within the period of time; and recording time data associated with the or each respective sub-period of time, the time data being useable to enable the reduced playback of video from the video data recorded during the period of time, the reduced playback of video corresponding to the or each respective sub-period of time.

In some embodiments, the reduced playback of video from the video data comprises video of an event associated with the sensor data.

In some embodiments, the sensor data comprises data indicative of a property of an object associated with the reduced playback of video from the video data.

In some embodiments, the reduced playback of video from the video data comprises video depicting the object.

In some embodiments, the reduced playback of video from the video data comprises video recorded from the perspective of the object.

In some embodiments, the object is a person.

In some embodiments, the sensor data comprises data indicative of at least one of position, movement and orientation. In some embodiments, the sensor data comprises data indicative of at least one of position, movement and orientation of at least one sensor.

In some embodiments, the at least one sensor comprises at least one of a gyroscope, an accelerometer, a magnetometer, a GPS sensor, a proximity sensor, a temperature sensor, a gravity sensor, a rotation sensor and a heart-rate sensor.

In some embodiments, the record of sensor readings comprises sensor readings from a plurality of sensors.

In some embodiments, the plurality of sensors comprises different types of sensors.

In some embodiments, one or more of the plurality of sensors is associated with a first portable device.

In some embodiments, one or more of the plurality of sensors is associated with a second portable device separate from the first portable device.

In some embodiments, analysing the sensor data comprises averaging sensor readings from at least two of the plurality of sensors to form combined sensor data, the combined sensor data being analysed to identify the at least one predetermined characteristic.

In some embodiments, the combined sensor data comprises a sum of squares of respective sensor readings from the respective sensors.

In some embodiments, the sensor data comprises sensor time data, the sensor time data being usable to identify the sub-period of time.

In some embodiments, the method further comprises: after recording the time data, receiving additional sensor data, the additional sensor data comprising a record of additional sensor readings during the period of time; analysing the additional sensor data to identify at least one predetermined characteristic, the or each predetermined characteristic being contained in a respective subset of the additional sensor data corresponding to a respective sub-period of time within the period of time; and updating the recorded time data based on the analysis of the additional sensor data, the updated time data being useable to enable an updated reduced playback of video from the video data recorded during the period of time. In some embodiments, the video data comprises video time data, the video time data being usable to identify a subset of the video data corresponding to the sub-period of time. In some embodiments, the method further comprises receiving the video data.

In some embodiments, the sensor data and the video data are received from the same device. In some embodiments, the method further comprises storing the received video data.

In some embodiments, the method further comprises: after receiving the video data and after recording the time data, receiving additional video data recorded during the period of time, wherein the recorded time data is useable to enable an updated reduced playback of video from the video data and the additional video data.

In some embodiments, the video data comprises data created using at least one camera.

In some embodiments, the method further comprises sending an initiation command to cause the at least one camera to initiate the creation of the video data.

In some embodiments, the method further comprises receiving camera sensor data, the camera sensor data comprising a record of sensor readings indicative of at least one property of the at least one camera during the period of time.

In some embodiments, the record of sensor readings indicative of at least one property of the at least one camera comprises sensor readings from at least one camera sensor, the or each camera sensor being associated with a respective camera of the at least one camera.

In some embodiments, the at least one property of the at least one camera comprises at least one of the position, orientation, field of view, focus and movement of a respective camera of the at least one camera. In some embodiments, the method further comprises analysing the camera sensor data and the sensor data to identify which of the at least one camera captured, during the or each respective sub-period of time, at least one object associated with the respective identified predetermined characteristic.

In some embodiments, the method further comprises, based on the identification of which of the at least one camera captured, during the or each respective sub-period of time, at least one object associated with the respective identified predetermined characteristic, recording camera identification data identifying the or each respective camera, the camera identification data being usable to enable the reduced playback of video to correspond to video created using the or each respective camera, thereby ensuring that the at least one object associated with the respective identified predetermined characteristic appears in the reduced playback of video.

In some embodiments, the sensor data comprises data from at least one sensor, and one of the at least one sensor and one of the at least one camera are provided on the same device.

In some embodiments, the at least one predetermined characteristic comprises a predetermined signal profile, and identifying the at least one predetermined characteristic comprises identifying a part of the sensor data that matches the predetermined signal profile.

In some embodiments, identifying the at least one predetermined characteristic comprises identifying a part of the sensor data that has a sensor reading of a value that is above or below a predetermined threshold, and/or inside or outside a predetermined range, and/or more than a predetermined amount away from a time-averaged value of the sensor reading during the period of time.

In some embodiments, identifying the at least one predetermined characteristic comprises identifying a part of the sensor data that has a sensor reading indicative of at least one of high speed, high acceleration and high rotational speed relative to a respective time-averaged value of the sensor reading during the period of time, and/or relative to a predetermined value.

In some embodiments, the time data comprises a start time of the sub-period of time.

In some embodiments, the time data comprises an end time of the sub-period of time. In some embodiments, the time data comprises a duration of the sub-period of time.

In some embodiments, the method further comprises sending the time data to a user device to enable the user device to effect the reduced playback of video.

In some embodiments, the time data is sent in response to receiving a user request from the user device.

In some embodiments, the method further comprises using the time data to effect the reduced playback of video.

In some embodiments, the reduced playback of video is personalised to comprise video from the video data that is associated with a user profile. In some embodiments, the method further comprises causing the reduced playback of video.

In some embodiments, the video data is stored on a remote video server, the stored video data being accessible to enable the reduced playback of video.

In some embodiments, the method further comprises sending the time data to the remote video server, so that the time data can be used at the remote video server or accessed from the remote video server to enable the reduced playback of video. In some embodiments, the at least one predetermined characteristic comprises a plurality of predetermined characteristics.

In some embodiments, at least two of each respective sub-period of time overlap. In some embodiments, at least two of the plurality of identified predetermined characteristics are the same type of characteristic.

In some embodiments, the method further comprises applying a visual effect to a part of the video data corresponding to the or each respective sub-period of time to visually affect the reduced playback of video. In some embodiments, the method further comprises associating the or each identified predetermined characteristic with a respective identifier selected from a list, each entry in the list being associated with a respective associated characteristic, the respective identifier being the identifier from the list having the associated characteristic that most closely matches the respective identified predetermined characteristic.

In some embodiments, the method further comprises associating the respective identifier with the respective sub-period of time associated with the respective identified predetermined characteristic.

In some embodiments, the method further comprises associating the respective identifier with the video data.

In some embodiments, the reduced playback of video is enhanced based on the respective identifier.

In some embodiments, the list is a list of actions, such as "tricks" made by a skateboarder, and the associated characteristics comprise characteristic sensor data for each respective action.

In some embodiments, the reduced playback of video does not include video corresponding to sub-periods of time within the period of time other than the or each respective sub-period of time.

According to an aspect, there is provided a computer-readable medium comprising computer-executable instructions to perform the method of any of the above-described aspects/embodiments.

According to an aspect, there is provided a device configured to perform the method of any of the above-described aspects/embodiments.

According to an aspect, there is provided a system for enabling a reduced playback of video from video data recorded during a period of time, the system comprising: means for receiving sensor data, the sensor data comprising a record of sensor readings during the period of time; means for analysing the sensor data to identify at least one predetermined characteristic, the or each predetermined characteristic being contained in a respective subset of the sensor data corresponding to a respective sub-period of time within the period of time; and means for recording time data associated with the or each respective sub-period of time, the time data being useable to enable the reduced playback of video from the video data recorded during the period of time, the reduced playback of video corresponding to the or each respective sub-period of time.

According to an aspect, there is provided a computer-implemented method for causing a reduced playback of video from video data recorded during a period of time, the method comprising: receiving a user request to initiate a video capture process; in response to the user request, initiating the video capture process by causing at least one camera to capture video data, the video data being captured during the period of time; initiating a sensor reading process by causing at least one sensor to obtain sensor readings, the sensor readings occurring during the period of time; receiving sensor data, the sensor data comprising a record of the sensor readings during the period of time; sending the received sensor data to a sensor server configured to analyse the sensor data to identify at least one predetermined characteristic, the or each predetermined characteristic being contained in a respective subset of the sensor data corresponding to a respective sub-period of time within the period of time, the sensor server being configured to record time data associated with the or each respective sub-period of time; receiving a user request to cause the reduced playback of video from the video data; and in response to the user request, causing the reduced playback of video from the video data based on the time data, the reduced playback of video corresponding to the or each respective sub-period of time.

In some embodiments, the sensor server and the video server are the same server.

In some embodiments, the method further comprises receiving the video data.

In some embodiments, the method further comprises sending the received video data to a video server, wherein the video server is accessed to enable the reduced playback of video.

According to an aspect, there is provided a computer-implemented method for causing a reduced playback of video from video data recorded during a period of time, the method comprising: receiving a user request to initiate a video capture process; in response to the user request, initiating the video capture process by causing at least one camera to capture video data, the video data being captured during the period of time; initiating a sensor reading process by causing at least one sensor to obtain sensor readings, the sensor readings occurring during the period of time; receiving and storing sensor data, the sensor data comprising a record of the sensor readings during the period of time; receiving and storing the video data; analysing the sensor data to identify at least one predetermined characteristic, the or each predetermined characteristic being contained in a respective subset of the sensor data corresponding to a respective sub-period of time within the period of time; recording time data associated with the or each respective sub-period of time; receiving a user request to cause the reduced playback of video from the video data; and in response to the user request, using the time data to cause the reduced playback of video from the stored video data, the reduced playback of video corresponding to the or each respective sub-period of time.

Specific embodiments are now described with reference to the drawings, in which:

Figure 1 is a schematic diagram which depicts a process and system according to an embodiment of the present disclosure;

Figure 2 schematically depicts a process according to an embodiment of the present disclosure;

Figure 3 schematically depicts combined data from multiple sensors according to an embodiment of the present disclosure;

Figure 4 schematically depicts multiple cameras recording an event according to an embodiment of the present disclosure; and

Figure 5 is a flowchart depicting matching sensor and video data according to an embodiment of the present disclosure.

Figure 1 schematically depicts a system used in an embodiment of the present disclosure. Components of the system include a processor capable of recording movement data, a camera (such as a GoPro™ camera or a smartphone camera), a website and a set of server-side applications.

A first mobile device has a video camera 102, and is capable of recording video. The first mobile device has a plurality of sensors 104, and is capable of collecting data from the sensors 104. The first mobile device is capable of being connected to a wireless network, for example a WLAN. The first mobile device is capable of uploading information via the wireless network to a data server 106. The first mobile device is similarly capable of uploading information to a video server 108. Data server 106 and video server 108 are capable of streaming media and/or other data to the first mobile device. The first mobile device is also capable of downloading information from the data server 106 and the video server 108. The uploading and downloading processes may be handled by a mobile application 110 on the first mobile device.

The data server 106 is capable of uploading information to and downloading information from a data storage unit 112. The video server 108 is capable of uploading information to and downloading information from a video storage unit 114.

A second mobile device is also capable of connecting to the wireless network. The second mobile device also has a camera 113, which is capable of recording video 115. The second mobile device is capable of uploading information to the video server 108. The uploading and downloading processes may be handled by a mobile application on the second mobile device. The second mobile device uploads 117 full, unprocessed video to the video server 108.

The mobile application 110 has a user interface which gives the user multiple options, for example to start recording video, to start recording sensor data, to pause or stop recording video, and to pause or stop recording sensor data.

When a user wishes to record a video while he is performing an activity, for example performing 'tricks' on a skateboard, the user uses the first mobile device to record his activity. The user accesses the mobile application 110 on the mobile device. The mobile application allows the user to record his activity. The user presses a 'start recording' command icon on the mobile application interface. The recording of video footage is thereby initiated 116. The recording of sensor data is also initiated 118. The sensor data comes from the plurality of sensors 104 on the first mobile device. The sensors 104 comprise the first mobile device's accelerometer, gyroscope, and GPS sensor. The video is captured by the video camera 102 of the mobile device during the user's activity.

As the user travels on his skateboard, details of his movement are recorded by the sensors 104. The first mobile device's GPS sensor logs the first mobile device's GPS coordinates at predefined intervals. This data allows the user's position at any specified moment to be calculated. This data also allows the user's speed at any given moment in time to be calculated. The gyroscope data allows the orientation of the first mobile device at any given moment to be calculated. If the user increases or decreases his speed, the accelerometer records data indicative of the user's acceleration at any given moment in time. When the user wishes to stop recording his activity, he presses a 'stop recording' command key on the interface of the mobile application 110. The start and end times of the sensor data are therefore controlled by the user, and a period of time over which the sensor data extends is defined. The sensor data is thus comprised of sensor readings over the period of time. After the user stops recording his activity, the sensor data and video footage are uploaded. The full, unprocessed sensor data is uploaded 120 to the data server 106. The full, unprocessed video is uploaded 122 to the video server 108.
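The speed calculation from logged GPS coordinates can be illustrated with a short sketch. This is not taken from the disclosure itself; it is a minimal example assuming fixes are logged as (timestamp, latitude, longitude) tuples, and it uses the standard haversine formula to turn consecutive fixes into speed estimates. The function names are illustrative only.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GPS fixes."""
    r = 6371000.0  # mean Earth radius in metres
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def speeds_from_gps(fixes):
    """fixes: list of (timestamp_s, lat, lon) logged at predefined intervals.
    Returns (timestamp_s, speed_m_per_s) for each consecutive pair of fixes."""
    out = []
    for (t0, la0, lo0), (t1, la1, lo1) in zip(fixes, fixes[1:]):
        dt = t1 - t0
        if dt > 0:
            out.append((t1, haversine_m(la0, lo0, la1, lo1) / dt))
    return out
```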

The sensor data is subjected to analysis by at least one analysis algorithm, as seen at step 124 of Figure 1. The analysis algorithm is run on the sensor data to determine which sub-periods of time, within the full period of time over which the sensor data extends, contain a predetermined data characteristic. The sub-periods of time during which the sensor data exhibits characteristics which match a set of predetermined characteristics are identified. The start and end times of these periods are identified by the analysis algorithm. The start and end points are then recorded by the analysis algorithm. The results of the analysis of the sensor data are then correlated with the recorded video. In this way, time data associated with the identified sub-periods is recorded, and the time data is useable to enable reduced playback of the recorded video. The time data is associated with the identified tricks performed by the user. The time data is stored on data storage unit 112. This is discussed in more detail later.

Running analysis algorithms to identify sub-periods of time during which the sensor data exhibits one or more predetermined characteristics is advantageous, as the sub-periods of time during which the sensor data exhibits certain characteristics can be correlated with the recorded video. This allows an algorithm to automatically identify parts of the video deemed to be interesting, according to analysis of the sensor data. In the example of the skateboarder, one predetermined characteristic is an accelerometer reading above a certain threshold. Thus, the analysis can identify sub-periods of time in the data which display high accelerometer readings, which may be indicative of a jump or a skateboarding trick. Identification of the trick 124 is effected by identification of a period of time in the sensor data with a high accelerometer reading.

Footage of the trick can be edited with special effects applied. The adding of these effects can be automated according to the activity identified by the algorithm.

When a user 126 wishes to view a video of the activity recorded by the first mobile device, the tricks identified by the analysis are displayed to the user 126 on either the mobile device or on a web front-end in the form of video feeds. In this way the user views the tricks on the web or on the mobile application 110. This is shown at step 128 of Figure 1. The data server 106 or data storage unit 112 supplies the web front-end or mobile application 110 with the recorded time data, as seen in 130 of Figure 1. The video server 108 supplies the web front-end or mobile application 110 with the recorded video from the first mobile device's camera 102 and the second mobile device's camera 113. This allows a processor to automatically skip to the identified segments of the video data, allowing the user 126 to view a composite video made up of the tricks performed, in an order determined by the algorithm. The composite video is made up of sub-periods of time of the two videos which correspond to the identified sub-periods of time in the sensor data. The entire video is not actually cut, but is stored as a whole on video server 108 or video storage unit 114.

With reference to Figures 1 and 2, we continue with the illustrative example of a user on a skateboard. Figure 2 schematically depicts a process according to an embodiment of the present disclosure. As the user travels on his skateboard, the mobile device 202 records sensor data 204 from an accelerometer, along with other sensors. The sensor data 204 is recorded over a period of time specified by the user. The video camera 206 of the mobile device records video 208 over the same period of time. The date and time are recorded with the measurements from the video camera 206 and the sensors.

Unprocessed sensor data 204 from the accelerometer over the period of time is uploaded to a server 210. The unprocessed video 208 over the period of time is also uploaded to the server 210. The server 210 uploads both data sets to a database 212. The uploads take place automatically after the video and sensor data have finished being recorded. The sensor data and video data are written to respective files and uploaded via file transfer protocol (FTP). If the mobile device is unable to connect to a network when the video and data are due to be uploaded, the video and data are stored on the mobile device's internal memory, to be uploaded to the server or data storage unit at a later time when the mobile device is connected to a network.

An analysis algorithm is run on the unprocessed sensor data 204. The algorithm identifies a sub-period of time 214 in the accelerometer data during which the accelerometer data is indicative of a jump (or other predetermined activity). The start time 216 of the sub-period of time 214 is time-stamped. The end time 218 of the sub-period of time 214 is also time-stamped. The identified time stamps, when applied to the unprocessed video data 208, define a sub-period of time of the video 220. The sub-period of time of the video 220 contains footage of the jump. The time stamp data is also stored on the database 212 or server 210 along with the full unprocessed video 208 and sensor data 204.
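As a rough illustration of how the recorded time stamps could be applied to the video, the sketch below assumes the sensor readings and the video frames are stamped against a common clock, so an identified sub-period can be expressed as start/end offsets into the unprocessed video. The names (`Highlight`, `to_video_offsets`) are hypothetical and not taken from the disclosure.

```python
from dataclasses import dataclass

@dataclass
class Highlight:
    start_s: float  # offset from the start of the video, in seconds
    end_s: float

def to_video_offsets(sub_periods, video_start_ts):
    """sub_periods: (start_ts, end_ts) absolute timestamps from the sensor
    analysis; video_start_ts: absolute timestamp of the first video frame.
    Assumes the sensor clock and the camera clock are synchronised."""
    highlights = []
    for start_ts, end_ts in sub_periods:
        if end_ts <= video_start_ts:
            continue  # event happened before the recording began
        highlights.append(Highlight(max(0.0, start_ts - video_start_ts),
                                    end_ts - video_start_ts))
    return highlights
```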

A user 126 can now view the identified sub-period of video 220. When the user 126 opts to view the video using a mobile application 110, the time stamp data from the database 212 is consulted to determine the sub-period of video 220 to be viewed. The database 212 (or server 210) then streams the sub-period of video 220 to the mobile application 110 for the user to view. The video is streamed starting from a non-zero start time of the video, so that the player begins buffering from a point part-way through the video. This point is the start time 216 identified by the analysis algorithm. Playback of the identified sub-period of video is shown at step 222 of Figure 2.

Buffering the video from a non-zero start time is advantageous. Only one version of the video needs to be stored. This one version is the unprocessed video data. This method of "editing" and viewing video is non-destructive, as the original video data is maintained, but users are able to watch a reduced part of the video due to the timestamp information. Many users can view different subsections of the same video, just by starting and ending at different points in the video. One user's views can even overlap with those of other users. As new events are found they also can be included in an updated version of the video without changing or deleting unprocessed video data.

When a predetermined characteristic in the sensor data is identified, an algorithm determines the start time 216 of a sub-period of time from the sensor data. The sub-period of time is set to contain the predetermined characteristic with no additional time before or after the identified predetermined characteristic, or with a pre-set amount of time before and/or after the predetermined characteristic. The identified start time 216 is co-ordinated with the video data to determine an associated start time in the video data. The end time is determined in a similar way.

The video and sensor data may be uploaded to a server 210 or other suitable storage unit, and this may occur whilst the video and data are still being recorded. If the mobile device is unable to connect to a network whilst the video and data are being recorded, the video and data can be stored on the mobile device's internal memory, to be uploaded to a server or data storage unit at a later time when the mobile device is connected to a network. In this way, sensor data may be written to a file and uploaded via file transfer protocol (FTP) or streamed to the server side application whilst recording.

The sensor data 204 is data recorded from at least one of an accelerometer, a GPS sensor, a gyroscope, a magnetometer, a proximity sensor, a temperature sensor, a gravity sensor, a digital compass, a heart-rate monitor, or any other suitable sensor. Different devices suitable for use as disclosed have different amounts and combinations of these sensors. The gyroscope measures the rate of rotation, in radians per second, about three axes. The accelerometer measures acceleration in m/s^2 along three axes. The magnetometer measures the magnetic field experienced by the device, in three dimensions. The GPS sensor measures the position of the device using GPS satellites (or other suitable satellites). The gravity sensor measures the direction of gravity with respect to the device. The digital compass measures the orientation of the device with respect to the Earth's magnetic field.

The mobile application 110 includes an option to record sensor data only. This allows the user to record sensor data without video data using his mobile device. The user can set up one or more separate video cameras to record his activity, or request that a friend record his activity using a different mobile device. Time data is recorded for each video camera. Identified sub-periods or events in the sensor data are then matched to the video as detailed below.

Figure 3 schematically depicts a process of combining data from multiple sensors. Figure 3 depicts averaged accelerometer data 302, averaged gyroscope data 304, and averaged magnetometer data 306 being combined to form combined sensor data 308. First, the individual sensor data is averaged over its three axes using a 'sum of squares' technique. For example, data from the x, y and z axes of the accelerometer data are combined to form averaged accelerometer data 302 using the following sum of squares formula:

f(t) = (x(t)^2 + y(t)^2 + z(t)^2)^(1/2), where f(t) represents the averaged accelerometer data as a function of time.
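A minimal sketch of this 'sum of squares' averaging, assuming the three axis readings are available as equal-length arrays; NumPy is used here purely for illustration.

```python
import numpy as np

def magnitude(x, y, z):
    """Combine three-axis readings into a single per-sample magnitude,
    f(t) = (x(t)^2 + y(t)^2 + z(t)^2)^(1/2). The same formula can then be
    applied again to the per-sensor averages to form combined sensor data."""
    x, y, z = np.asarray(x, float), np.asarray(y, float), np.asarray(z, float)
    return np.sqrt(x**2 + y**2 + z**2)
```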

Using combined sensor data 308 allows the analysis algorithms to use a combination of sensor data and common variation between sensors to select regions of interest. Sensor data is combined using the same 'sum of squares' technique. For example, in the formula above, the averaged function f(t) now represents the combined sensor data as a function of time, and the x, y and z functions represent the averaged accelerometer data 302, averaged gyroscope data 304, and averaged magnetometer data 306, respectively.

One problem that can arise when using data from different sensors is the combination of irregular time intervals between sensor readings. Different sensors take readings at different rates, which can make comparisons between sensors difficult. To solve this, the sensor data is interpolated between sampling times, such that comparisons at a same point in time can be made. The interpolated sensor data is then used in the comparison process.
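The interpolation step can be sketched as follows, assuming each sensor stream is a pair of arrays (sample times and readings) and that linear interpolation onto a shared grid is acceptable; the sampling rate and function name are illustrative choices, not part of the disclosure.

```python
import numpy as np

def resample_to_common_grid(t_accel, accel, t_gyro, gyro, rate_hz=100.0):
    """Linearly interpolate two sensor streams onto one shared time grid so
    that samples can be compared and combined at the same instants."""
    t0 = max(t_accel[0], t_gyro[0])   # overlap of the two recordings
    t1 = min(t_accel[-1], t_gyro[-1])
    grid = np.arange(t0, t1, 1.0 / rate_hz)
    return grid, np.interp(grid, t_accel, accel), np.interp(grid, t_gyro, gyro)
```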

To select a sub-period of the combined sensor data which is associated with predefined characteristics, the algorithm identifies sub-periods of time 310 which have a value that is a predetermined amount (such as a predetermined number of standard deviations) away from a mean value of the data. A time average is taken over the combined sensor data, giving a mean combined sensor data value. A standard deviation of the sensor data is also calculated. A sub-period of time of the combined sensor data is identified, which contains data readings at or above 3 standard deviations from the mean combined sensor data value. After the sub-period of time 310 has been identified, its start and end times are recorded. A "cut-out" 312 of the combined sensor data enclosed by the start time 216 and the end time 218 is taken. This cut-out, along with its associated start and end times 216 and 218, is stored on a server 210 or suitable memory unit. In other embodiments, the number of standard deviations forming the threshold amount is different, such as 1, 2, 2.5 or 4.

Analysis is carried out on the combined sensor data, or on data from any sensor individually. In this embodiment, a value associated with a time average over all the accelerometer data is calculated. The accelerometer has an associated error margin. A standard deviation for the accelerometer data is calculated. After this, any regions of the accelerometer data which display readings of three standard deviations or more from the mean value are identified. In this way, regions of interest in the sensor data are identified by scanning or searching for regions of the data which differ substantially from the mean value.

Performing analysis on combined sensor data rather than on all the data from individual sensors reduces the processing load due to the analysis algorithm, and reduces the likelihood of anomalous regions of data from a single sensor unduly affecting the analysis algorithm. Several methods of identifying "interesting" regions of sensor data are envisioned. The algorithm is programmed to ignore "spikes" and other anomalies in the sensor data. In this embodiment, a region of interest, i.e. an identified sub-period of time of the sensor data, is identified by first calculating a mean value over the recorded time period. This averaging process is configured to disregard periods in which the data is zero. An accelerometer would have a substantially zero reading whilst the mobile device travels at a constant speed, and these regions are disregarded when calculating an average. The data is then scanned for regions of time having data at or above three standard deviations from the mean, whilst disregarding any such periods of time shorter than a predetermined length of time, for example 1 ms. This removes spikes and anomalous peaks from the analysis.
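A hedged sketch of these refinements follows: zero readings are excluded from the averaging, and excursions shorter than a minimum duration are dropped. The 1 ms default mirrors the example above; the function name and everything else are assumptions of the sketch.

```python
import numpy as np

def robust_sub_periods(times: np.ndarray, data: np.ndarray,
                       n_std: float = 3.0, min_duration: float = 0.001):
    """As in the previous sketch, but zero readings are excluded from the mean and
    standard deviation, and excursions shorter than min_duration are discarded."""
    nonzero = data != 0.0
    mean, std = data[nonzero].mean(), data[nonzero].std()
    above = np.abs(data - mean) >= n_std * std
    regions, start = [], None
    for t, flag in zip(times, above):
        if flag and start is None:
            start = t
        elif not flag and start is not None:
            if t - start >= min_duration:   # drop 'spikes' and anomalous peaks
                regions.append((start, t))
            start = None
    if start is not None and times[-1] - start >= min_duration:
        regions.append((start, times[-1]))
    return regions
```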

It is advantageous to 'smooth' the collected sensor data before analysis is carried out. One approach is to use the Kalman filter (known to those skilled in the relevant art). The Kalman filter is an algorithm which uses data collected over a period of time to 'predict' future values. The Kalman filter algorithm takes data containing noise or random fluctuations collected over a period of time and produces estimates of unknown variables in a way that minimises the mean of the squared error. The Kalman filter averages received values with predicted values, and can be used to remove unrealistically short variations. "Waterstate" removal is a method used to remove unrealistically large variations which occur within extended periods of inactivity. Significant events in the data which are surrounded by extended periods of inactivity tend to be spurious. Broad-band low-pass filters can be used to estimate the general trend in the sensor data. This allows unrealistically large variations in a surrounding low state to be masked. Waterstate removal in combination with a Kalman filter algorithm reduces the chances of anomalous data affecting the region of interest selection algorithm.
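For illustration, a minimal scalar Kalman filter with a constant-state model is sketched below; the process and measurement variances are arbitrary tuning parameters, not values from the disclosure, and waterstate removal and low-pass filtering are not shown.

```python
def kalman_smooth(measurements, process_var=1e-3, measurement_var=1e-1):
    """A minimal one-dimensional Kalman filter: each new reading is blended with
    the running prediction, which suppresses short, noisy fluctuations."""
    estimate, error = measurements[0], 1.0
    smoothed = []
    for z in measurements:
        # Predict step: the state is modelled as constant, so only uncertainty grows.
        error += process_var
        # Update step: blend prediction and measurement using the Kalman gain.
        gain = error / (error + measurement_var)
        estimate = estimate + gain * (z - estimate)
        error = (1.0 - gain) * error
        smoothed.append(estimate)
    return smoothed
```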

A time series segmentation algorithm is run on the combined sensor data. This algorithm takes a time series and breaks it up into chunks of varying size, such that the data within each chunk is represented as closely as possible by the chunk itself. Methods known to those skilled in the art include bottom-up, top-down and sliding-window segmentation. The algorithm can be used in a multi-variate way, and simultaneously across multiple sensors. Events can be stored as their segmented version, i.e. an identified sub-period, as opposed to the entire data set. To illustrate this technique, take a simple underlying data set of 0.1, 0, 0, 1.1, 1, 1.1, 0.1, 0, 0.1 as an example. This could be approximated as 0, 0, 0, 1, 1, 1, 0, 0, 0, which can be interpreted as three segments, each segment being a straight line defined by the equation y = mx + c. In this example, each segment has m = 0, the first and last segments have c = 0, and the middle segment has c = 1. This approximation yields a significant improvement in storage and analysis times for large data sets. Time series segmentation will be understood by those skilled in the art, as will be appreciated upon reference to the following publication, which is incorporated by reference herein: Eamonn Keogh et al., "Segmenting Time Series: A Survey and Novel Approach", Department of Information and Computer Science, University of California.

For example, a region of the data contains a sharp increase in acceleration in one direction, followed by a sharp increase in acceleration in the opposite direction. In the same region of time, data recorded by the gyroscope shows a large peak. Using a combination of time series segmentation and template matching, the full period of time associated with the data is segmented into sub-periods of time. These sub-periods of time may be clustered or 'matched' with other regions of time displaying similar data characteristics. By comparing the data with a repository of sub-periods, the algorithm can identify that, during this moment of time, the user likely performed a 360° jump. Using the analysis algorithms detailed above, multiple types of events can be clustered together and therefore identified.

There are potential problems associated with the time series segmentation algorithm. For example, consider a scenario where two similar events, which would desirably be classified as the same type of event, have a different value in one dimension: two skateboard tricks of the same classification, one of which is performed quickly whilst the other is performed slowly. These events would have different time data. It would be desirable for the algorithm to identify the two events as the same type of "trick", even though they cover different total times. A designer may wish the algorithm to identify and classify all 360° jumps performed by a user. This is done using a time series segmentation algorithm which divides the full period of data into sub-periods containing data which can be compared to a library or repository of sub-periods. However, in the example of the slow and fast 360° jumps, the length of time taken to perform the jump differs.

To solve this problem, an integration of the data is used to calculate and perform analysis on a cumulative parameter. For example, gyroscope data (recorded in radians per second) is converted into orientation data (in radians). Using this new data set, a 360° turn is much easier to identify individually, and 'chunks' of time associated with 360° turns are easier to cluster together and identify. Another technique used in the present disclosure is 'dynamic time warping', which is used for measuring the similarity between two temporal sequences which vary in time or speed. Matching between the two sequences is performed whilst non-linearly "warping" the time dimension (a simple sketch of the integration and warping steps is given after the publication list below). This technique is known in the field, as will be appreciated upon reference to the following publications (which are incorporated by reference herein):

Petitjean, F. O.; Ketterlin, A.; Gançarski, P. (2011). "A global averaging method for dynamic time warping, with applications to clustering". Pattern Recognition 44 (3): 678;

Al-Naymat, G.; Chawla, S.; Taheri, J. (2012). SparseDTW: A Novel Approach to Speed up Dynamic Time Warping; and

Petitjean, F. O.; Gançarski, P. (2012). "Summarizing a set of time series by averaging: From Steiner sequence to compact multiple alignment". Theoretical Computer Science 414: 76.
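By way of illustration, the integration of gyroscope rate into orientation and a textbook dynamic time warping distance might look as follows; these are simple reference versions, not the optimised approaches of the publications cited above.

```python
import numpy as np

def gyro_to_orientation(times: np.ndarray, rate_rad_per_s: np.ndarray) -> np.ndarray:
    """Integrate a rotation-rate trace (rad/s) into cumulative orientation (rad),
    so a 360 degree turn shows up as a ~2*pi change however quickly it was performed."""
    dt = np.diff(times, prepend=times[0])
    return np.cumsum(rate_rad_per_s * dt)

def dtw_distance(a, b) -> float:
    """Textbook O(len(a) * len(b)) dynamic time warping distance, which allows two
    sequences performed at different speeds to be compared."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return float(cost[n, m])
```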

In a manner similar to the 360° problem posed above, consider two tricks of the same type, one of which has a large amplitude of jump whereas the other has a relatively low amplitude of jump. Again, it is desired that these two events be classified as the same type. In this scenario, the data is rescaled, using matching with time warping, and an optimal matching algorithm is used to find similar events. There are many ways to assess the degree of similarity between data associated with an identified sub-period and data associated with a stored sub-period, and these are known to those skilled in the art of cluster analysis and time series segmentation.

Stored data chunks, against which identified events are compared, are called "cluster solutions". A 'repository' or 'library' of clusters is built up, and data associated with these cluster solutions is stored in a database. These libraries are built up as more data is collected from users. Using the example of a 360° jump, the analysis algorithms, including a time series segmentation algorithm, are performed on the data. When the sub-period of time of the sensor data associated with the 360° jump is identified, it is compared with the library of clusters, and is identified as a 360° jump within an error margin. When an event is identified as being of a particular cluster solution, it is classified with a corresponding tag. Cluster solutions are not assigned if the error margin is too large, i.e. when the recorded event is too dissimilar to any one of the library of clusters.
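A sketch of comparing an identified sub-period against a library of cluster solutions is given below; a resampled mean absolute difference stands in for the warping-based similarity measure, and the library contents, tags and threshold are invented for the example.

```python
import numpy as np

def resampled_distance(a, b, n: int = 50) -> float:
    """Resample both traces to a common length and take the mean absolute
    difference; a crude stand-in for a warping-based similarity measure."""
    ta, tb, t = np.linspace(0, 1, len(a)), np.linspace(0, 1, len(b)), np.linspace(0, 1, n)
    return float(np.mean(np.abs(np.interp(t, ta, a) - np.interp(t, tb, b))))

def classify_event(candidate, cluster_library, max_distance: float = 0.5):
    """Return the tag of the closest stored 'cluster solution', or None if every
    stored solution is too dissimilar (the error margin is too large)."""
    best_tag, best_dist = None, float("inf")
    for tag, template in cluster_library.items():
        dist = resampled_distance(candidate, template)
        if dist < best_dist:
            best_tag, best_dist = tag, dist
    return best_tag if best_dist <= max_distance else None

# Hypothetical library of representative event traces
library = {"360_jump": [0, 1, 3, 6, 6.3], "ollie": [0, 2, 2, 0, 0]}
print(classify_event([0, 1.1, 3.2, 5.9, 6.2], library))  # -> "360_jump"
```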

Using the above-described matching techniques, a combination of multiple temporally nearby events can be identified, and displayed to a user as a singular "combo" event. With reference to Figure 4, in an embodiment of the present disclosure multiple videos are recorded. A user carrying a mobile device 202 travels on a skateboard. The user initiates recording of sensor data 204 on the mobile device 202. The arrows demonstrate the user's path as he travels on the skateboard. At the point marked "Trick" in Figure 4, the user performs a skateboard trick. First, second, third and fourth video cameras (402, 404, 406 and 408 respectively) are recording video while the user's sensor data is being recorded. The first, second and third video cameras (402, 404, 406) are orientated such that their field of vision encompasses the location of the trick. The fourth video camera 408 is orientated such that its field of vision does not encompass the trick.

As the video cameras record video, associated sensors also record sensor data as described above. Each camera has at least one respective sensor forming part of (or coupled to) the camera device. This means that with each video, GPS coordinates over time are recorded. Gyroscope data over time is also recorded. In this example, the sensor data 204 recorded by the mobile device 202 carried by the user is indicative of the motion of the user, and therefore the data is indicative of the trick being performed. The sensor data of the remaining cameras is not indicative of the motion of the user on the skateboard, but is indicative of the orientation and location of the respective video cameras.

The sensor data and video data are uploaded to a server or servers as described above. A video server stores a virtual library of videos. The trick is identified by analysis of sensor data as described above. The sub-period of time of the sensor data associated with the trick is identified, and the start and end time of the trick in the sensor data are recorded.

The location and orientation data of each respective camera allows an algorithm to determine the field of view of each camera. Once a trick has been identified, the sensor data associated with the cameras is analysed to determine which (if any) of the cameras captured the trick, i.e. which of the cameras had the trick within its field of view during the sub-period of time containing the trick. In the embodiment of Figure 4, it is determined that the first, second and third video cameras (402, 404 and 406) captured footage of the trick identified in the sensor data.

The process begins with the algorithm scanning the library of videos for all videos containing footage which overlaps in time with the trick shown in Figure 4, i.e. with the identified sub-period of time in the sensor data of the mobile device. This scanning produces a subset of videos. Uploaded footage from the first, second, third and fourth cameras (402, 404, 406 and 408) is part of this subset. Having obtained this subset of the library of videos, the subset is scanned for GPS coordinates which match the GPS coordinates of the mobile device within a certain threshold, for example within 50 metres. This second scan produces a second subset of videos. The videos taken by the first, second, third and fourth video cameras (402, 404, 406 and 408) fall within this second subset. Finally, this second subset of geographically close videos having overlapping time frames is further scanned to ensure that the videos were taken by cameras orientated such that their field of view encompassed the trick. This third scan produces a third subset of videos, from which footage from the fourth camera is excluded. Therefore, footage from the fourth camera is not matched to the event. It will be appreciated that this scanning process need not occur in this order, and that the cameras need not have been controlled by the user carrying out the trick.
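The three scanning stages might be sketched as follows; the VideoMeta fields, the 90° field-of-view default, and the flat-earth distance and bearing approximations are all assumptions of the sketch, not details of the disclosure.

```python
import math
from dataclasses import dataclass

@dataclass
class VideoMeta:
    camera_id: str
    start: float           # epoch seconds at which the recording begins
    end: float
    lat: float
    lon: float
    heading_deg: float     # direction the camera points, degrees from north
    fov_deg: float = 90.0  # assumed horizontal field of view

def overlaps_in_time(video: VideoMeta, event_start: float, event_end: float) -> bool:
    return video.start <= event_end and video.end >= event_start

def within_distance(video: VideoMeta, event_lat: float, event_lon: float,
                    max_metres: float = 50.0) -> bool:
    # Flat-earth approximation; adequate at the ~50 m scale used here.
    metres_per_deg = 111_000.0
    dy = (video.lat - event_lat) * metres_per_deg
    dx = (video.lon - event_lon) * metres_per_deg * math.cos(math.radians(event_lat))
    return math.hypot(dx, dy) <= max_metres

def facing_event(video: VideoMeta, event_lat: float, event_lon: float) -> bool:
    # Approximate bearing from the camera to the event, in degrees from north.
    bearing = math.degrees(math.atan2(event_lon - video.lon, event_lat - video.lat)) % 360
    diff = abs((bearing - video.heading_deg + 180) % 360 - 180)
    return diff <= video.fov_deg / 2

def find_matching_videos(library, event_start, event_end, event_lat, event_lon):
    subset = [v for v in library if overlaps_in_time(v, event_start, event_end)]
    subset = [v for v in subset if within_distance(v, event_lat, event_lon)]
    return [v for v in subset if facing_event(v, event_lat, event_lon)]
```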

The sensor data (from the user carrying out the trick) and the video footage are synchronised. The identified sub-portion of the sensor data 214 is correlated with the various videos from the first, second and third cameras (402, 404 and 406). The start time 216 and end time 218 identified in the sensor data, when applied to the video footage taken from the first, second and third cameras (402, 404 and 406), enclose sub-periods 220 of the various videos associated with the trick performed at the point marked "Trick" in Figure 4. These sub-periods of time contain footage of the trick from various angles. The start time 216 and end time 218 of the trick are time-stamped in the respective videos. This allows a user to watch the trick from various angles.
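A small sketch of translating the identified start and end times onto one matched video's own timeline, clamped to the footage actually recorded, is shown below; the parameter names are invented for the example.

```python
def trick_window_in_video(video_start_epoch: float, video_end_epoch: float,
                          trick_start_epoch: float, trick_end_epoch: float):
    """Translate the trick's absolute start/end times (216, 218) into seconds from
    the start of one matched video, clamped to the footage that actually exists."""
    start = max(0.0, trick_start_epoch - video_start_epoch)
    end = min(video_end_epoch - video_start_epoch, trick_end_epoch - video_start_epoch)
    return (start, end) if end > start else None
```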

Scanning a library of available videos in this manner is advantageous because footage of the same event can be identified. This allows identified sub-periods of time in the sensor data 214 to be correlated with sub-periods of time 220 in multiple videos. In this example, the identified sub-periods of time 220 in the videos are representative of a skateboarding trick. Having identified the trick from the sensor data and identified the portions of the various videos associated with the trick, a user can view a composite video which displays the trick from multiple angles. The analysis algorithms place time stamps at the start and end points in the identified videos associated with the trick.

When a user views the composite video, only small sub-periods of the multiple videos need be loaded. Multiple videos from multiple users can be merged into one seamless viewing experience. If two users record video of the same event, from two different angles, the playback software skips from one angle to another whilst maintaining the same point in the event. This provides a fast and effective method to automatically edit videos.

Figure 5 shows a method of matching video and sensor data associated with the same event, and describes a method of matching events to a video or a group of videos. A method of matching an event to a particular user is also provided. Each user has a list of friends on a social network. This social network is operated within the same mobile application as the application which allows users to record and upload video and sensor data. An event is matched to all videos that satisfy the following requirements: the video was recorded by one of the event owner's "friends" in the system; the video and the event cover the same period of time; the video and the event are within a set GPS distance of one another; and the camera was pointing in the direction of the event. The user can then be tagged within the social network as being associated with a certain event.

Figure 5 shows two users. A first user 602 uploads 604 sensor data associated with an activity to a server. A second user 606 uploads 608 a video of the activity to the server. At step 610, an algorithm determines if the two users are connected via a social network, i.e. whether the users are "friends" on the social network. If not, the process continues to step 618, and the video and sensor data are not matched. If the first user 602 and the second user 606 are connected via the social network, the process continues to step 612, and an algorithm determines if the time frame of the identified subsets of sensor data of the first user 602 and the time frame of the video of the second user 606 at least partially overlap. In the example of the skateboarder, at this step the algorithm checks whether the time data associated with the tricks overlaps with the time frame of the video recorded by the second user 606. If the time frames do not overlap, the process continues to step 618, and the sensor data and video are not matched. If the time frames do overlap, the algorithm checks, at step 614, that the GPS coordinates match to within a threshold distance. If the GPS coordinates do not match, the process continues to step 618 and the sensor data and the video are not matched. If the GPS coordinates do match, the process continues to step 616, where an algorithm determines if the camera associated with the video was pointing in the right direction, such that its field of view encompassed the GPS coordinates associated with the sensor data. If the camera is determined not to have been pointing in the right direction, the process continues to step 618 and the sensor data and video footage are not matched. If the camera is determined to have been pointing in the right direction, the footage and sensor data are matched at step 620.

This process can continue with any number of videos, and any number of sensor data sets. At any stage 610 to 620, an algorithm may compare the full sensor data with the full video, or may just compare the identified sub-periods of time in the data with the video footage. A minimal sketch of this decision flow is given below.
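The decision flow of Figure 5 might be sketched as a single function, as below; the dict keys, the crude degree-based distance check and the boolean stand-in for the field-of-view test at step 616 are assumptions of the sketch.

```python
def match_event_to_video(are_friends: bool, event: dict, video: dict,
                         max_distance_m: float = 50.0) -> bool:
    """Walk through steps 610-620 of Figure 5 for one event/video pair.
    `event` and `video` carry 'start'/'end' epoch times and 'lat'/'lon';
    `video['points_at_event']` stands in for the field-of-view check."""
    if not are_friends:                                                  # step 610
        return False
    if video["end"] < event["start"] or video["start"] > event["end"]:  # step 612
        return False
    metres_per_degree = 111_000.0  # rough conversion, ignores longitude scaling
    approx_dist = metres_per_degree * max(abs(video["lat"] - event["lat"]),
                                          abs(video["lon"] - event["lon"]))
    if approx_dist > max_distance_m:                                     # step 614
        return False
    if not video["points_at_event"]:                                     # step 616
        return False
    return True                                                          # step 620: matched
```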

It will be understood that the above description of specific embodiments is by way of example only and is not intended to limit the scope of the present disclosure. Many modifications of the described embodiments, some of which are now described, are envisaged and intended to be within the scope of the present disclosure.

In some embodiments, any device having at least one sensor capable of taking sensor readings and/or capturing video over a period of time and a means via which to upload the sensor and/or video data to a server is used to collect sensor and/or video data, for example a wrist band, smart glasses or a smart watch.

In some embodiments, the server to which the sensor data is uploaded is the same server as the server to which the video data is uploaded.

In some embodiments, any of a multitude of different motion (or other) sensors is used to collect data. This data is subsequently analysed to identify events deemed to be of interest. In some embodiments, temperature and/or volume are significant to an activity. These parameters are extra dimensions to be factored into the cluster/matching analysis. For example, a user may record video footage of a weather event, for example a storm or a lightning strike. The above disclosure could be used to identify videos of the same event, by searching a library of events for videos taken by video cameras having an appropriate location and orientation. In this example, it is conceivable that a lightning strike could create a loud sound if it strikes an object, and therefore volume could be an important extra dimension. Identifying videos of the lightning strike, and submitting the video and data to analyses as described above, would allow a composite video of the lightning strike from different angles to be produced.

In some embodiments, the command icon to start or stop recording of data and video is not used. Any suitable method of initiating starting and stopping of a recording process may be used.

It will also be appreciated that the unprocessed video data and the unprocessed sensor data need not be recorded over exactly the same time period; it is merely necessary for the time frame of the sensor data to overlap with the time frame of the recorded video. In the period of time where the video and sensor data overlap, analysis can be performed to identify sub-periods of the video associated with predetermined characteristics of the sensor data.

In some embodiments, each respective start time and end time is added to the video data to create a timestamp within the video data.

In some embodiments, the end time is set at a predetermined length of time after the start time, for example ten seconds after the start time. In some embodiments, only the start time of the sub-period of time of the video and sensor data is determined and recorded. Then, when a user wishes to watch the video, the video is loaded starting from the identified start time, and a predetermined length of video, for example ten seconds, is played automatically. This method reduces the amount of data that needs to be stored on the server, and also reduces the load on a processor which runs the analysis algorithm or algorithms.

In some embodiments, an upload stream of video and sensor data begins while the data is still being recorded.

In some embodiments, the sensor data and the video data are not recorded from the same mobile device, or by the same user. Using the date and timeframe data of the unprocessed sensor data and the date and timeframe data of the unprocessed video data, a server can synchronise the sensor data and the video data. This functionality allows a user to record sensor data using a mobile device, whilst another user records video using a separate video camera, such as a GoPro™. This enables a video to be made of a user who is performing a trick and is carrying/wearing a sensor.

In some embodiments, other methods of data analysis are used to select regions of interest from sensor data and video footage, or only one or some of the above-described methods are used. In some embodiments, the analysis is not carried out on an external server but on a processor of the mobile device itself. The analysis algorithms may be updated over time, and as the library of available videos grows larger, the automatic editing process can be improved. The recorded sensor data is stored, and the full version of the video is stored, so the sensor data can be re-analysed at a future time to produce an improved composite video with more accurate matching.

In some embodiments, significant event parameters are calculated and displayed to the user. In some embodiments, this takes place in real time, for example the user's speed is displayed to him as he travels on his skateboard. In some embodiments, these are displayed to the user upon viewing the trick or event at a later time. Example parameters for display include height, GPS coordinates, rotation and speed.

In some embodiments, the video data is analysed to identify a predetermined characteristic as well as, or instead of, the sensor data.

In some embodiments, footage of the trick is edited with special effects applied. For example, when producing the reduced playback of video, transition effects are applied between the identified sub-periods of video. These effects may include fading or 'dissolving' one sub-period of the video into the next sub-period of the video, the next sub-period of the video "sliding" or "unrolling" into view from an extremity of the viewing screen, or expanding the next sub-period of video from the centre of the viewing screen. Various transition effects will be known to those skilled in the art. Further special effects that may be applied to the sub-periods of video include colour filters, for example effects which alter the hue and saturation of the displayed video. Other effects that may be applied to the displayed video include the addition of text to the video. Using the example of a skateboarder performing a trick, if the trick has been identified as a 360 degree jump by an algorithm, as described above, then the name of the identified trick can be displayed as the video shows the trick. The text may show interesting aspects from the sensor data, for example the height of a jump or the speed of the user as he travels on his skateboard. The text itself may also have special effects applied to it. The sub-periods of video may also be displayed in slow-motion. For example, when video of the same event from different angles is available, a sub-period of the video from one angle may be displayed at normal speed, whilst a sub-period of video of the same event from another angle is displayed in slow-motion.

The video may also have music applied to it. This music may be chosen by the user, or be applied by an algorithm from a list of predetermined songs. When a slow-motion effect is applied to a sub-period of the video, the music may be similarly slowed down for that sub-period of time. Transition between identified sub-periods of video may be accompanied by an audio cue.

In some embodiments, the mobile application has an "algorithm overwrite" option. This allows a user to "flag" a sub-period of time from within the total period of time of the video as being of particular interest. For example, the user may wish to record himself introducing the video, but such an introductory segment of the video would typically have relatively small sensor readings throughout, and therefore this segment of the video would not be identified by the algorithms. This would lead to the introductory part of the video being excluded from a composite video and from reduced playback. To solve this problem, the user can select the "algorithm overwrite" command, which is the equivalent of manually entering a start time for a sub-period of time. A second user selection enters an end time of the sub-period of time. This process can consist of two button presses, or a button "hold" throughout the segment. It is also possible for a user to go through the video after it has been recorded and manually enter start and end points. In this way, a user can ensure that an interesting segment is included in the composite video despite relatively low sensor readings in an associated time period. This approach saves time and processing power whilst ensuring all interesting segments are included in a final reduced playback video.

Similarly, in some embodiments, a user may view and edit the start and end times identified by the algorithm. This may be carried out using a mobile application or on a computer. This process allows an algorithm to identify regions of the video considered to be of interest, whilst allowing a user to tailor the video to his own individual taste.




 