Title:
IN-CABIN MONITORING METHOD AND RELATED POSE PATTERN CATEGORIZATION METHOD
Document Type and Number:
WIPO Patent Application WO/2022/243062
Kind Code:
A1
Abstract:
The invention provides a computer implemented method for detecting an output pose of interest (20) of a subject in real-time, preferably the subject being inside a vehicle cabin or in a surrounding environment of a vehicle, the method comprising: a) recording an image frame (14) of the subject using an imaging device (12); b) determining an output pose of interest (20) by processing the image frame (14) using a machine learning model (22) that comprises a rule-based pose inference model (28) and a data-driven pose inference model (26): - with the data-driven pose inference model (26), determining a data-driven pose of interest (30) by processing a single image frame of the subject; and - with the rule-based pose inference model (28), determining a rule-based output pose of interest (32) by processing the same single image frame; and c) determining as the output pose of interest (20) the rule-based output pose of interest (32), if the rule-based pose inference model (28) is able to determine the rule-based output pose of interest (32) in step b), otherwise determining the data-driven pose of interest (30) as the output pose of interest (20).

Inventors:
LI LEI (SG)
DAS MITHUN (SG)
MEIER MATTHIAS HORST (SG)
THAKUR SUNIL KUMAR (SG)
Application Number:
PCT/EP2022/062239
Publication Date:
November 24, 2022
Filing Date:
May 06, 2022
Assignee:
CONTINENTAL AUTOMOTIVE TECH GMBH (DE)
International Classes:
G06T7/73
Foreign References:
US 2017/0046568 A1 (2017-02-16)
US 9 904 845 B2 (2018-02-27)
US 9 165 199 B2 (2015-10-20)
US 9 690 982 B2 (2017-06-27)
US 2020/0105014 A1 (2020-04-02)
US 10 783 360 B1 (2020-09-22)
Other References:
YANG, Yi et al.: "Articulated Human Detection with Flexible Mixtures of Parts", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 12, 1 December 2013, pages 2878-2890, XP055348333, ISSN 0162-8828, DOI: 10.1109/TPAMI.2012.261
HUA, Gang et al.: "Learning to Estimate Human Pose with Data Driven Belief Propagation", Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2005), San Diego, CA, 20-25 June 2005, vol. 2, pages 747-754, XP010817528, ISBN 978-0-7695-2372-9, DOI: 10.1109/CVPR.2005.208
Attorney, Agent or Firm:
CONTINENTAL CORPORATION (DE)
Claims:
CLAIMS

1. A computer implemented method for detecting an output pose of interest (20) of a subject in real-time, the method comprising: a) recording at least one image frame (14) of the subject using an imaging device (12); b) determining an output pose of interest (20) by processing the image frame (14) using a machine learning model (22) that comprises a rule-based pose inference model (28) and a data-driven pose inference model (26):

- with the data-driven pose inference model (26), determining a data-driven pose of interest (30) by processing a single image frame (14) of the subject; and

- with the rule-based pose inference model (28), determining a rule-based output pose of interest (32) by processing the same single image frame (14); and c) determining as the output pose of interest (20) the rule-based output pose of interest (32), if the rule-based pose inference model (28) is able to determine the rule-based output pose of interest (32) in step b), otherwise determining the data-driven pose of interest (30) as the output pose of interest (20).

2. The method according to claim 1, characterized in that, in step b) a plurality of human key points (24) is extracted from the image frame (14), and the human key points (24) are processed by the machine learning model (22).

3. The method according to any of the preceding claims, characterized in that, in step b) the data-driven pose of interest (30) is determined by determining a probability score for each of at least one predetermined pose of interest and outputting as the data-driven pose of interest (30) that pose among the predetermined poses of interest that has the highest probability score.

4. The method according to any of the preceding claims, characterized in that, in step b) the rule-based pose of interest (32) is determined by comparing pose descriptor data with at least one set of pose descriptors that uniquely define a predetermined pose of interest, and outputting as the rule-based pose of interest (32) that pose among the predetermined poses of interest that matches with the pose descriptor data, or outputting that no match was found if the pose descriptor data does not match any of the pose descriptors of any predetermined pose of interest.

5. The method according to claim 4, characterized in that, the pose descriptor data is obtained by extracting a plurality of human key points (24) from the image frame (14), and at least one of a Euclidean distance (L) and an angle (θ) is determined from the human key points (24).

6. The method according to any of the preceding claims, characterized in that, in step c) the output pose of interest (20) is determined by a summation of weighted rule-based poses of interest (32) with the data-driven pose of interest (30), wherein the weight of the rule-based pose of interest (32) that was determined to be in the image frame (14) is set to 1 and the weight of the data-driven pose of interest (30) is set to 0.

7. The method according to any of the preceding claims, characterized in that, in step c) no output pose of interest (20) is determined, if the certainty determined for the presence of a predetermined pose of interest in the image frame (14) is below a predetermined threshold.

8. The method according to any of the preceding claims, characterized in that, the method comprises a step of: d) with a control unit (34), generating a control signal based on the output pose of interest (20) determined in step c), the control signal being adapted to control a vehicle.

9. The method according to any of the preceding claims, characterized in that, in step a) the image frame (14) is recorded from a subject inside a cabin of a vehicle and/or from a subject that is in a surrounding environment of a vehicle.

10. An in-cabin monitoring method for monitoring a subject inside a vehicle cabin, the method comprising the performing of a method according to any of the claims 1 to 9, wherein the imaging device (12) is arranged to image a subject inside a vehicle cabin, and the predetermined poses of interest are chosen to be indicative of abnormal driver behavior.

11. A vehicle environment monitoring method for monitoring a subject that is present in a surrounding of the vehicle, the method comprising the performing of a method according to any of the claims 1 to 9, wherein the imaging device (12) is arranged to image a subject in the surrounding environment of the vehicle, and the predetermined poses of interest are chosen to be indicative of pedestrian behavior.

12. A pose categorization system (10) configured for performing a method according to any of the preceding claims, the pose categorization system (10) comprising an imaging device (12) configured for recording an image frame (14) of a subject and a pose categorization device (16) configured for determining an output pose of interest (20) from a single image frame (14), characterized in that the pose categorization device (16) comprises a data-driven pose inference model (26) that is configured for determining a data-driven pose of interest (30) by processing a single image frame (14) of the subject and a rule-based pose inference model (28) configured for determining a rule-based output pose of interest (32) by processing the same image frame (14), wherein the pose categorization device (16) is configured for determining as the output pose of interest (20) the rule-based output pose of interest (32), if the rule-based pose inference model (28) is able to determine the rule-based output pose of interest (32), otherwise determining the data-driven pose of interest (30) as the output pose of interest (20).

13. A vehicle comprising a pose categorization system (10) according to claim 12.

14. A computer program, or a computer readable storage medium, or a data signal comprising instructions, which upon execution by a data processing device cause the device to perform one, some, or all of the steps of a method according to any of the claims 1 to 12.

Description:
DESCRIPTION

In-cabin monitoring method and related pose pattern categorization method

TECHNICAL FIELD

The invention relates to a pose pattern categorization method and an in-cabin monitoring method.

BACKGROUND

US 2017/0046568 A1 discloses gesture recognition by use of a time sequence of frames that relate to body movement.

US 9 904 845 B2 and US 9 165 199 B2 discuss a 3D-image as a basis for pose estimation.

US 9 690 982 B2 discloses considering angle and Euclidean distance between human key points or body parts for gesture detection. A class for input gesture data is inferred based on predefined rules by a trained machine learning model. The input gesture data depends on consecutive frames associated with a body movement.

US 2020/0105014 A1 also discloses inferring a class for input pose data based on predefined rules by a trained machine learning model.

US 10 783 360 B1 discloses detecting vehicle operator gestures through in-cabin monitoring based on processing consecutive frames.

SUMMARY OF THE INVENTION

It is the object of the invention to provide improved methods and systems for pose categorization. The object is achieved by the subject-matter of the independent claims. Preferred embodiments are subject-matter of the dependent claims.

The invention provides a computer implemented method for detecting an output pose of interest of a subject in real-time, preferably the subject being inside a vehicle cabin or being in a surrounding environment of a vehicle, the method comprising: a) recording at least one image frame of the subject using an imaging device; b) determining an output pose of interest by processing the image frame using a machine learning model that comprises a rule-based pose inference model and a data-driven pose inference model:

- with the data-driven pose inference model, determining a data-driven pose of interest by processing a single image frame of the subject; and

- with the rule-based pose inference model, determining a rule-based output pose of interest by processing the same single image frame; and c) determining as the output pose of interest the rule-based output pose of interest, if the rule-based pose inference model is able to determine the rule-based output pose of interest in step b), otherwise determining the data-driven pose of interest as the output pose of interest.

Preferably, in step b) a plurality of human key points is extracted from the image frame, and the human key points are processed by the machine learning model.

Preferably, in step b) the data-driven pose of interest is determined by determining a probability score for each of at least one predetermined pose of interest and outputting as the data-driven pose of interest that pose among the predetermined poses of interest that has the highest probability score.

Preferably, in step b) the rule-based pose of interest is determined by comparing pose descriptor data with at least one set of pose descriptors that uniquely define a predetermined pose of interest, and outputting as the rule-based pose of interest that pose among the predetermined poses of interest that matches with the pose descriptor data or outputting that no match was found if the pose descriptor data does not match any of the pose descriptors of any predetermined pose of interest. Preferably, the pose descriptor data is obtained by extracting a plurality of human key points from the image frame, and at least one of a Euclidean distance and an angle is determined from the human key points.

Preferably, in step c) the output pose of interest is determined by a summation of weighted rule-based poses of interest with the data-driven pose of interest, wherein the weight of the rule-based pose of interest that was determined to be in the image frame is set to 1 and the weight of the data-driven pose of interest is set to 0.

Preferably, in step c) no output pose of interest is determined, if the certainty determined for the presence of a predetermined pose of interest in the image frame is below a predetermined threshold.

Preferably, the method comprises a step of: d) with a control unit, generating a control signal based on the output pose of interest determined in step c), the control signal being adapted to control a vehicle.

Preferably, in step a) the image frame is recorded from a subject inside a cabin of a vehicle and/or from a subject that is in a surrounding environment of a vehicle.

The invention provides an in-cabin monitoring method for monitoring a subject, preferably a vehicle driver, inside a vehicle cabin, the method comprising the performing of a preferred method, wherein the imaging device is arranged to image a subject inside a vehicle cabin, and the predetermined poses of interest are chosen to be indicative of abnormal driver behavior.

The invention provides a vehicle environment monitoring method for monitoring a subject that is present in a surrounding of the vehicle, the method comprising the performing of a preferred method, wherein the imaging device is arranged to image a subject in the surrounding environment of the vehicle, and the predetermined poses of interest are chosen to be indicative of pedestrian behavior.

The invention provides a pose categorization system configured for performing a preferred method, the pose categorization system comprising an imaging device configured for recording an image frame of a subject and a pose categorization device configured for determining an output pose of interest from a single image frame, wherein the pose categorization device comprises a data-driven pose inference model that is configured for determining a data-driven pose of interest by processing a single image frame of the subject and a rule-based pose inference model configured for determining a rule-based output pose of interest by processing the same image frame, wherein the pose categorization device is configured for determining as the output pose of interest the rule-based output pose of interest, if the rule-based pose inference model is able to determine the rule-based output pose of interest, otherwise determining the data-driven pose of interest as the output pose of interest.

The invention provides a vehicle comprising a pose categorization system.

The invention provides a computer program, or a computer readable storage medium, or a data signal comprising instructions, which upon execution by a data processing device cause the device to perform one, some, or all of the steps of a preferred method.

The disclosed end-to-end pose pattern categorization typically has three phases:

1) Off-line model building phase;

2) Online inference phase; and

3) Model improvement and optimization phase.

From the X and Y coordinate information of the detected human key points, the specific angle within any 3 points can be calculated via trigonometric functions, as can the Euclidean distance between any 2 points. For example, the right elbow angle θ formed by the right shoulder, elbow, and wrist (key points 6, 8, and 10) can be calculated, as can the Euclidean distance L between the person's or driver's nose and left hip (key points 0 and 11). Hence, the feature components of human pose patterns can be extracted and pre-defined according to the specific use case. For instance, if a person lies on the ground, the angle between the neck, hip, and knee should be greater than a pre-defined configurable threshold, e.g. 150 degrees; if a person is sitting on a seat, the distance between their shoulder and knee should be smaller than when they are standing, etc. Those rules (stand, sit, sleep, etc.) can be taken into consideration in the later classification process.
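The following minimal sketch illustrates this angle and distance computation, assuming key points are given as (x, y) coordinates in a NumPy array. The index convention (0 = nose, 6 = right shoulder, 8 = right elbow, 10 = right wrist, 11 = left hip) follows the example key point numbers above; the array contents here are placeholders, not real detections.

```python
import numpy as np

def angle(a: np.ndarray, b: np.ndarray, c: np.ndarray) -> float:
    """Angle theta (in degrees) at vertex b, formed by points a-b-c."""
    ba, bc = a - b, c - b
    cos_theta = np.dot(ba, bc) / (np.linalg.norm(ba) * np.linalg.norm(bc))
    return float(np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0))))

def distance(a: np.ndarray, b: np.ndarray) -> float:
    """Euclidean distance L between two key points."""
    return float(np.linalg.norm(a - b))

# kp is an (N, 2) array of (x, y) key point coordinates for one frame;
# random values stand in for a real pose estimator's output.
kp = np.random.rand(17, 2)
elbow_angle = angle(kp[6], kp[8], kp[10])   # right shoulder-elbow-wrist
nose_to_hip = distance(kp[0], kp[11])       # nose to left hip
```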

The X and Y coordinates of the key points are another part of the human pose pattern component. In the scenario of a video image of a driver in a vehicle being captured by an internal camera, the driver's key points can be used to define and infer pose patterns such as hands on/off the steering wheel, head on the steering wheel, and the like. Hence, abnormal driver behavior can be pre-defined, trained, and inferred accordingly. The entire process includes the following key steps, illustrated by the sketch after the list:

1. Data collection, usually by recording a video of a real scenario containing a targeted pose of interest (PoI).

2. Human key point extraction, leveraging computer vision and deep learning techniques to identify and extract the coordinates of pre-defined human key points.

3. Training a model using a supervised machine learning method based on the processed and formalized data.
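As a hedged illustration of this off-line model building phase, the sketch below assumes key points have already been extracted per frame by some 2D pose estimator (step 2) and trains a tree-based classifier, one of the supervised techniques mentioned further below. The data shapes, label set, and choice of a random forest are illustrative assumptions, not a prescribed implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def build_features(keypoints: np.ndarray) -> np.ndarray:
    """Flatten per-frame (x, y) key point coordinates into tabular rows;
    derived angles and distances could be appended as extra columns."""
    return keypoints.reshape(len(keypoints), -1)

# Placeholder data: 500 frames of 17 key points each, with pose labels
# such as 0 = standing, 1 = sitting, 2 = sleeping.
X_kp = np.random.rand(500, 17, 2)
y = np.random.randint(0, 3, size=500)

X = build_features(X_kp)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
model = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)
print("hold-out accuracy:", model.score(X_test, y_test))
```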

Instead of depending only on rule-based methods to classify the target pattern class, the solution presented herein combines the pre-defined rules (angles, distances, etc.) with data-driven methods that use the relative positions of the human key points in the image to train a machine learning model (ML model) and infer a class output.

1) The training of the ML model is done by feeding a large amount of data to the model using various supervised machine learning techniques, including but not limited to tree-based and distance-based modeling, multilayer perceptrons (MLPs), and techniques that can flexibly be stacked together.

Specific multiple angles θ = (θ1, θ2, ..., θn) and Euclidean distances L = (L1, L2, ..., Ln) among different human key points can be calculated and included as separate features in the structured tabular training dataset. Configurable and flexible weights can be assigned to represent the importance of each feature, so that a comprehensive model can be trained that combines knowledge of the relative positions of body key points with hidden pose patterns.
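A minimal sketch of assembling one such tabular feature row with configurable per-group importance weights follows; the weight values and the grouping into coordinates, angles, and distances are illustrative assumptions.

```python
import numpy as np

# Illustrative importance weights for each feature group.
FEATURE_WEIGHTS = {"coords": 1.0, "angles": 2.0, "distances": 1.5}

def feature_row(kp: np.ndarray, angles: list, dists: list) -> np.ndarray:
    """One training row: weighted coordinates, angles theta, and distances L."""
    coords = kp.flatten() * FEATURE_WEIGHTS["coords"]
    thetas = np.asarray(angles) * FEATURE_WEIGHTS["angles"]
    ls = np.asarray(dists) * FEATURE_WEIGHTS["distances"]
    return np.concatenate([coords, thetas, ls])
```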

2) The class output is inferred by taking into consideration a combination of pre-defined rules and a data-driven model prediction.

The model works as follows:

Define a total number of classes C = (c1, c2, ..., cn), a weight for each class W = (w1, w2, ..., wn), a prediction of the model P = (p1, p2, ..., pn), and pre-defined rules for each class fn(θ, L).

The weight of each class is defined as

wn = 1 if the rule fn(θ, L) is satisfied for the image frame, and wn = 0 otherwise.

The overall output tn is defined as

tn = w1c1 + w2c2 + ... + wncn + (1 - w1)(1 - w2)...(1 - wn)pn,

meaning that when the conditions θ, L meet the definition of the n-th class cn, the overall class output tn takes the n-th class cn regardless of the model prediction; otherwise the prediction from the model dominates the overall class output regardless of the pre-defined rules. For example, first define the rule for the pose pattern "sleeping" as the angle θ among neck, hip, and knee being greater than 150 degrees; if the requirement is met, the output pose will be "sleeping" regardless of the model prediction; otherwise the model prediction is taken as the class output.
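The rule-override behavior of this formula can be sketched as follows; the rule set shown contains only the "sleeping" example from the text, and the function and dictionary names are illustrative.

```python
from typing import Callable, Dict

def classify(thetas: Dict[str, float], dists: Dict[str, float],
             rules: Dict[str, Callable[[Dict[str, float], Dict[str, float]], bool]],
             model_prediction: str) -> str:
    """Overall output t: a fired rule (weight 1) overrides the model;
    if no rule fires (all weights 0), the model prediction dominates."""
    for cls, rule in rules.items():
        if rule(thetas, dists):
            return cls
    return model_prediction

# Example rule from the text: "sleeping" if the neck-hip-knee angle > 150 deg.
rules = {"sleeping": lambda th, d: th.get("neck_hip_knee", 0.0) > 150.0}
print(classify({"neck_hip_knee": 162.0}, {}, rules, "sitting"))  # -> sleeping
```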

The real-time inference task applies the trained model to classify and detect the pose of interest (PoI) accordingly. For each input frame, there is a predicted class and a probability score representing the confidence level, which can help to optimize the model. The model is adaptive and flexible per specific use case, meaning that different models are trained to solve the pose pattern categorization problem in different scenarios. At the end of the evaluation step, more feature engineering approaches and techniques can be introduced to improve and optimize accuracy and achieve better performance.
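A sketch of this per-frame online inference, reusing angle(), build_features(), classify(), model, and rules from the sketches above: the pose estimator stub and the key point indices used to approximate the neck-hip-knee angle are assumptions for illustration.

```python
import numpy as np

def extract_keypoints(frame: np.ndarray) -> np.ndarray:
    """Stub standing in for a real 2D pose estimator."""
    return np.random.rand(17, 2)

def infer_frame(frame: np.ndarray, model, rules):
    kp = extract_keypoints(frame)
    proba = model.predict_proba(build_features(kp[np.newaxis]))[0]
    best = int(np.argmax(proba))
    data_driven = str(model.classes_[best])   # class with the highest score
    # Approximate the neck-hip-knee angle with shoulder-hip-knee key points
    # (indices 5, 11, 13 are assumed here, as there is no neck key point).
    thetas = {"neck_hip_knee": angle(kp[5], kp[11], kp[13])}
    return classify(thetas, {}, rules, data_driven), float(proba[best])
```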

Advantageously, this solution does not need a special depth sensor, allows for easier model building, improves the flexibility of defining target pose classes, can be integrated into any system in a straightforward manner, and improves accuracy through better attunement to the input training data.

BRIEF SUMMARY OF THE DRAWINGS

An embodiment of the invention is described in more detail with reference to the accompanying schematic drawings. Therein:

Fig. 1 depicts an embodiment of a pose categorization system;

Fig. 2 depicts an embodiment of a pose categorization method; and

Fig. 3 illustrates human key points.

DETAILED DESCRIPTION OF EMBODIMENTS

Fig. 1 illustrates an embodiment of a pose categorization system 10 as it can be used in a vehicle, e.g. for in-cabin monitoring or environment monitoring of the environment outside the vehicle.

The pose categorization system 10 comprises an imaging device 12. The imaging device 12 preferably includes a video camera. In an imaging step S10 (Fig. 2), the imaging device 12 records an image frame 14 of a subject/person.

The pose categorization system 10 comprises a pose categorization device 16. The pose categorization device 16 is configured to process the image frame 14 from the imaging device 12 and determine an output pose of interest 20. The pose categorization device 16 is configured as a rule-based and data-driven device. The pose categorization device 16 includes a machine learning model 22. The machine learning model 22 is trained to classify a plurality of human key points 24 (Fig. 3) as belonging to a predetermined pose of interest, such as 'standing', 'sitting', 'lying down', etc. The training is done using a supervised machine learning method based on processed and formalized data. The human key points 24 are extracted from a single image frame 14 by the pose categorization device 16 in an extraction step S12 (Fig. 2). The human key points 24 are indicative of important locations of the human body, such as the eyes, joints (elbows, knees, hips, etc.), hands and feet, etc.

The machine learning model 22 includes a data-driven pose inference model 26 and a rule-based pose inference model 28.

The data-driven pose inference model 26 is configured to output a data-driven pose of interest 30 by analyzing the human key points 24 and determining a probability for each predetermined pose of interest, which is done in a data-driven step S14 (Fig. 2). The data-driven pose inference model 26 outputs as the data-driven pose of interest 30 the predetermined pose of interest that has scored the highest probability.

The rule-based pose inference model 28 includes a set of pose descriptors, each describing one of the predetermined poses of interest. A pose descriptor includes at least a range of Euclidean distances L between two human key points 24 and a range of angles θ between three human key points 24. In a rule-based step S16 (Fig. 2), pose descriptor data are extracted from the human key points 24 and compared with the pose descriptors of each predetermined pose of interest. The rule-based pose inference model 28 outputs as the rule-based pose of interest 32 the predetermined pose of interest whose pose descriptors the extracted data fits best, i.e. deviates from the least. If the extracted pose descriptor data matches the pose descriptors of none of the predetermined poses of interest, then no rule-based pose of interest 32 is determined.
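A sketch of this descriptor matching in step S16, assuming each predetermined pose is described by one (θ range, L range) pair: the poses, range values, and the tie-breaking by deviation from the range centers are illustrative assumptions.

```python
from typing import Optional

# Illustrative descriptors: (theta range in degrees, L range in image units).
DESCRIPTORS = {
    "sitting":  ((60.0, 120.0), (0.10, 0.30)),
    "standing": ((150.0, 180.0), (0.35, 0.60)),
}

def match_pose(theta: float, dist: float) -> Optional[str]:
    """Return the best-matching predetermined pose, or None if no match."""
    best, best_dev = None, float("inf")
    for pose, ((t_lo, t_hi), (l_lo, l_hi)) in DESCRIPTORS.items():
        if t_lo <= theta <= t_hi and l_lo <= dist <= l_hi:
            # Smallest deviation from the range centers wins among matches.
            dev = abs(theta - (t_lo + t_hi) / 2) + abs(dist - (l_lo + l_hi) / 2)
            if dev < best_dev:
                best, best_dev = pose, dev
    return best  # None signals that no rule-based pose could be determined
```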

In an output step S18 (Fig. 2), the pose categorization device 16 selects as the output pose of interest 20 either the rule-based pose of interest 32 or, if no rule-based pose of interest 32 can be determined, the data-driven pose of interest 30. The pose categorization device 16 can also include a threshold that allows it to determine whether a predetermined pose is sufficiently well established to be output as the output pose of interest 20.

In other words, if the rule-based categorization in step S16 fails, the data-driven pose of interest 30 is output as the output pose of interest 20 only if the probability of the data-driven pose of interest 30 was determined to be above the threshold. The threshold can be varied according to conditions within the vehicle cabin or the environment. For example, the threshold may be set lower for daytime or bright conditions (e.g. between 30 % and 50 %) and higher for nighttime or dark conditions (e.g. between 70 % and 90 %).
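A sketch of this light-dependent threshold, using mid-range values from the examples above; deriving the day/night decision from mean frame brightness, and the brightness cut-off of 80, are assumptions made here for illustration.

```python
import numpy as np

def confidence_threshold(frame: np.ndarray) -> float:
    """Lower threshold in bright conditions, higher in darkness."""
    brightness = float(np.mean(frame))      # assumes 0-255 grayscale input
    return 0.4 if brightness > 80.0 else 0.8

def accept_pose(pose: str, score: float, frame: np.ndarray):
    """Output the pose only if its confidence clears the adaptive threshold."""
    return pose if score >= confidence_threshold(frame) else None
```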

The pose categorization system 10 may further comprise a control unit 34 that is configured to generate a control signal for a vehicle based on the output pose of interest 20, in a control step S20.

For example, if the pose categorization system 10 images a driver of a vehicle and classifies the driver's pose as 'hands not on steering wheel', the control unit 34 can cause the vehicle to call for the driver's attention. Other poses are possible, in particular poses that relate to abnormal driving behavior, e.g. being tired, distracted, or under the influence.

In another example, the pose categorization system 10 images the environment of the vehicle and determines the pose of a pedestrian to be 'standing'. The control unit 34 may then cause the vehicle to activate further sensors or prepare an emergency braking procedure, etc.

With the measures described herein, there is no need for consecutive frames for pose pattern recognition. Therefore, the system and method are better able to tolerate real-time frame losses or noise. Due to the hybrid of rule-based and data-driven analysis, more pose patterns can be recognized with greater accuracy, including finely differentiated patterns, thereby allowing for a more granular pattern recognition methodology. In addition, this solution scales better than other solutions. There is also no need for 3D data. The overall lightweight approach allows for faster inference on edge-processing embedded systems and devices.

REFERENCE SIGNS

10 pose categorization system
12 imaging device
14 image frame
16 pose categorization device
20 output pose of interest
22 machine learning model
24 human key points
26 data-driven pose inference model
28 rule-based pose inference model
30 data-driven pose of interest
32 rule-based pose of interest
34 control unit

S10 imaging step
S12 extraction step
S14 data-driven step
S16 rule-based step
S18 output step
S20 control step

L Euclidean distance
θ angle