Title:
EXTRACTING FEATURES FROM SENSOR DATA
Document Type and Number:
WIPO Patent Application WO/2022/157202
Kind Code:
A4
Abstract:
A computer implemented method of training an encoder to extract features from sensor data comprises training a machine learning (ML) system based on a self-supervised loss function applied to a training set, the ML system comprising the encoder. The training set comprises sets of real sensor data and corresponding sets of synthetic sensor data. The encoder extracts features from each set of real and synthetic sensor data, and the self-supervised loss function encourages the ML system to associate each set of real sensor data with its corresponding set of synthetic sensor data based on their respective features.
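Editor's illustration (not part of the application): the abstract describes training an encoder so that each set of real sensor data and its corresponding synthetic set map to similar features. A minimal PyTorch sketch of one such setup is given below, assuming a SimCLR-style InfoNCE objective on discretised 2D sensor images; the architecture and all names (SensorEncoder, info_nce) are hypothetical, not taken from the application.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SensorEncoder(nn.Module):
        # Toy convolutional encoder for discretised 2D sensor images
        # (e.g. range-image representations of lidar/radar point clouds).
        def __init__(self, in_channels=1, feat_dim=128):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(in_channels, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(64, feat_dim),
            )

        def forward(self, x):
            return self.net(x)

    def info_nce(z_real, z_synth, temperature=0.1):
        # Row i of z_real should match row i of z_synth (the positive pair);
        # every other row in the batch acts as a negative.
        z_real = F.normalize(z_real, dim=1)
        z_synth = F.normalize(z_synth, dim=1)
        logits = z_real @ z_synth.t() / temperature  # pairwise cosine similarities
        targets = torch.arange(z_real.size(0))       # positives lie on the diagonal
        return F.cross_entropy(logits, targets)

    # One training step on a batch of corresponding real/synthetic images.
    encoder = SensorEncoder()
    projector = nn.Sequential(nn.Linear(128, 128), nn.ReLU(), nn.Linear(128, 64))
    opt = torch.optim.Adam(
        list(encoder.parameters()) + list(projector.parameters()), lr=1e-3)

    real = torch.randn(16, 1, 64, 64)   # stand-in for a batch of real sensor data
    synth = torch.randn(16, 1, 64, 64)  # stand-in for the corresponding synthetic data
    loss = info_nce(projector(encoder(real)), projector(encoder(synth)))
    opt.zero_grad()
    loss.backward()
    opt.step()

Training the projection head jointly with the encoder and keeping only the encoder afterwards mirrors claims 10 and 13 below: the projection space is used solely to define the self-supervised loss.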

Inventors:
REDFORD JOHN (GB)
SAMANGOOEI SINA (GB)
SHARMA ANUJ (GB)
DOKANIA PUNEET (GB)
Application Number:
PCT/EP2022/051147
Publication Date:
September 15, 2022
Filing Date:
January 19, 2022
Assignee:
FIVE AI LTD (GB)
International Classes:
G01S13/931; G01S7/41; G06K9/62; G06N3/08
Attorney, Agent or Firm:
THOMAS DUNCAN WOODHOUSE (GB)
Claims:
AMENDED CLAIMS received by the International Bureau on 13 July 2022 (13.07.2022)

1. A computer implemented method of training an encoder to extract features from sensor data, the method comprising: training a machine learning (ML) system based on a self-supervised loss function applied to a training set, the ML system comprising the encoder; wherein the training set comprises sets of real sensor data and corresponding sets of synthetic sensor data, wherein the encoder extracts features from each set of real and synthetic sensor data, and the self-supervised loss function encourages the ML system to associate each set of real sensor data with its corresponding set of synthetic sensor data based on their respective features.

2. The method of claim 1, wherein each set of real sensor data comprises sensor data of at least one sensor modality, the method comprising: generating the corresponding sets of synthetic sensor data using one or more sensor models for the at least one sensor modality.

3. The method of claim 2, comprising: receiving at least one time-sequence of real sensor data; processing the at least one time-sequence to extract a description of a scenario; and simulating the scenario in a simulator, wherein each set of real sensor data comprises a portion of real sensor data of the at least one time-sequence, and the corresponding set of synthetic sensor data is derived from a corresponding part of the simulated scenario using the one or more sensor models.

4. The method of claim 3, wherein each set of real sensor data captures a real static scene at a time instant in the real sensor data sequence, and the corresponding set of synthetic sensor data captures a synthetic static scene at a corresponding time instant in the simulation.

5. The method of claim 4, wherein each real and synthetic static scene is a discretised 2D image representation of a 3D point cloud.

6. The method of claim 2, wherein for each set of real sensor data the corresponding set of synthetic sensor data is generated via processing of that set of real sensor data.

7. The method of any preceding claim, wherein at least one of the sets of real sensor data comprises a real image, and the corresponding set of synthetic sensor data comprises a corresponding synthetic image derived via image rendering.

8. The method of any preceding claim, wherein at least one of the sets of real sensor data comprises a real lidar or radar point cloud, and the corresponding set of synthetic sensor data comprises a corresponding synthetic point cloud derived via lidar or radar modelling.

9. The method of claim 8, wherein each point cloud is represented in the form of a discretised 2D image.

10. The method of any preceding claim, wherein the ML system comprises a trainable projection component which projects the features from a feature space into a projection space, the self-supervised loss defined on the projected features, wherein the trainable projection component is trained simultaneously with the encoder.

11. The method of any preceding claim, wherein the sets of real sensor data capture real static or dynamic driving scenes, and the corresponding sets of synthetic sensor data capture corresponding synthetic static or dynamic driving scenes.

12. The method of any preceding claim, wherein the self-supervised loss function is a contrastive loss function that encourages similarity of features between positive pairs, each positive pair being a set of real sensor data and its corresponding set of synthetic sensor data, whilst discouraging similarity of features between negative pairs of real sensor data and synthetic sensor data that do not correspond to each other.

13. An encoder trained in accordance with any preceding claim.

14. A computer system comprising: the encoder of claim 13; and a perception component; wherein the encoder is configured to receive an input sensor data representation and extract features therefrom, and the perception component is configured to use the extracted features to interpret the input sensor data representation.

15. A training computer program configured, when executed on one or more computer processors, to implement the method of any of claims 1 to 12.
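Editor's illustration (not part of the application): one standard contrastive objective of the kind recited in claim 12, over the projected features of claim 10, is the InfoNCE loss; the claims themselves do not commit to a particular formula. Writing z_i^r and z_j^s for the projected features of the i-th set of real and the j-th set of synthetic sensor data, with cosine similarity sim(.,.) and temperature tau, a batch of N corresponding pairs gives:

    \mathcal{L} = -\frac{1}{N}\sum_{i=1}^{N} \log
        \frac{\exp\left(\operatorname{sim}(z_i^{r}, z_i^{s})/\tau\right)}
             {\sum_{j=1}^{N} \exp\left(\operatorname{sim}(z_i^{r}, z_j^{s})/\tau\right)}

The numerator rewards similarity of each positive pair, while the denominator penalises similarity to the N-1 non-corresponding (negative) pairs, matching the behaviour described in claim 12.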