


Title:
IDENTIFICATION SYSTEMS AND METHODS
Document Type and Number:
WIPO Patent Application WO/2018/185745
Kind Code:
A1
Abstract:
An identification system comprises a processing unit configured to: obtain a plurality of pictures of a candidate user acquired by at least one camera operatively associated with the processing unit; obtain data representative of a motion of facial features of the candidate user based on at least part of the plurality of pictures; compare the obtained data with data representative of motion of facial features of each of a plurality of users comprising a given user; and identify the candidate user as the given user based at least on this comparison.

Inventors:
BELKIN SHAHAR (IL)
Application Number:
PCT/IL2018/050360
Publication Date:
October 11, 2018
Filing Date:
March 28, 2018
Assignee:
FST21 LTD (IL)
International Classes:
G06K9/00
Domestic Patent References:
WO2017033186A1 (2017-03-02)
WO1995025316A1 (1995-09-21)
Foreign References:
US20120076368A1 (2012-03-29)
Other References:
None
Attorney, Agent or Firm:
REINHOLD COHN AND PARTNERS (IL)
Claims:
CLAIMS

1. An identification system comprising:

a processing unit configured to:

o obtain a plurality of pictures of a candidate user acquired by at least one camera operatively associated with the processing unit,

o obtain data representative of a motion of facial features of the candidate user based on at least part of the plurality of pictures,

o compare the obtained data with data representative of motion of facial features of each of a plurality of users comprising a given user, and

o identify the candidate user as the given user based at least on this comparison.

2. The identification system of claim 1, wherein the processing unit is configured to:

extract, from each of said at least part of the plurality of pictures, an image comprising the head of the candidate user,

perform an alignment of the extracted images using a common spatial reference so as to obtain a set of images, wherein at least a part of the head of the candidate user is used as common spatial reference, and

obtain data representative of the motion of facial features of the candidate user based at least on said set of images.

3. An identification system comprising:

a processing unit configured to:

o obtain a plurality of pictures of a candidate user taken by at least one camera operatively associated with the processing unit,

o obtain data representative of a motion of the head of the candidate user based on at least part of the plurality of pictures,

o compare the obtained data with data representative of a motion of the head of each of a plurality of users comprising a given user, and

o identify the candidate user as the given user based at least on this comparison.

4. The identification system of claim 3, wherein, for a user, a first motion of the head of said user corresponds to a relative motion of the head of said user with respect to an upper part of the body of said user, wherein the processing unit is configured to:

obtain data representative of a first motion of the head of the candidate user based on said at least part of the plurality of pictures,

compare the obtained data with data representative of a first motion of the head of each of a plurality of users comprising a given user, and

identify the candidate user as the given user based at least on this comparison.

5. The identification system of claim 3, wherein, for a user, a second motion of the head of said user is a motion which meets a correlation criterion with respect to a motion of the body of said user, wherein the processing unit is configured to:

obtain data representative of a second motion of the head of the candidate user based on said at least part of the plurality of pictures,

compare the obtained data with data representative of a second motion of the head of each of a plurality of users comprising a given user, and

identify the candidate user as the given user based at least on this comparison.

6. The identification system of claim 3, wherein, for a user, a second motion of the head of said user comprises a motion of the head of said user with respect to a reference which is external to the user, wherein the processing unit is configured to:

obtain data representative of a second motion of the head of the candidate user based on said at least part of the plurality of pictures,

compare the obtained data with data representative of a second motion of the head of each of a plurality of users comprising a given user, and

identify the candidate user as the given user based at least on this comparison.

7. The identification system of claim 3, wherein, for a user, a second motion of the head of said user corresponds to a motion of the head of said user with respect to a reference which is external to the user, and from which a first motion of the head of said user has been removed, said first motion corresponding to a relative motion of the head of said user with respect to an upper part of the body of said user, wherein the processing unit is configured to:

obtain data representative of a second motion of the head of the candidate user based on said at least part of the plurality of pictures,

compare the obtained data with data representative of a second motion of the head of each of a plurality of users comprising a given user, and

identify the candidate user as the given user based at least on this comparison.

8. A user identification system comprising a processing unit, wherein, for a user, a first motion of a head of said user corresponds to a motion which is different from a second motion of the head of said user,

the processing unit being configured to:

o obtain a plurality of pictures of a candidate user acquired by at least one camera operatively associated with the processing unit,

o obtain, based on at least part of the plurality of pictures:

o data representative of a first motion of a head of the candidate user, and

o data representative of a second motion of a head of the candidate user,

o compare the data representative of the first motion and of the second motion of the head of the candidate user, with data representative of a first motion and of a second motion of a head of each of a plurality of users comprising a given user, and

o identify the candidate user as the given user based at least on this comparison.

9. The identification system of claim 8, wherein, for a user, the first motion of the head of said user corresponds to a motion which is independent from the motion of the body of said user, or which does not meet a correlation criterion with respect to the motion of the body of said user, and the second motion of the head of said user corresponds to a motion which is dependent from the motion of the body of said user, or which meets a correlation criterion with respect to the motion of the body of said user.

10. The identification system of claim 8, wherein, for a user, the first motion of the head of said user corresponds to a relative motion of the head of said user with respect to an upper part of the body of said user.

11. The identification system of claim 8, wherein, for a user, the second motion of the head of said user corresponds to a motion of the head of said user with respect to a reference which is external to the user.

12. The identification system of claim 10, wherein, for a user, the second motion of the head of said user corresponds to a motion of the head of said user with respect to a reference which is external to the user, and from which the first motion has been removed.

13. The identification system of claim 8, wherein, for a user, the first motion of the head of said user corresponds to a motion of the head of said user whose frequency does not meet a correlation criterion with a frequency of a motion of a body of said user, and a second motion of the head of said user corresponds to a motion of the head of said user whose frequency meets a correlation criterion with a frequency of a motion of a body of said user.

14. An identification system comprising:

a processing unit configured to:

o obtain a plurality of pictures of a candidate user acquired by at least one camera operatively associated with the processing unit,

o obtain data representative of a motion of the hands of the candidate user in at least part of the plurality of pictures,

o compare the obtained motion with data representative of a motion of the hands of each of a plurality of users comprising a given user, and

o identify the candidate user as the given user based at least on this comparison.

15. An identification system comprising a processing unit configured to:

obtain a plurality of pictures of a candidate user acquired by at least one camera operating in conjunction with the identification system,

obtain at least two different motion data of the candidate user based on at least part of the plurality of pictures, wherein, for a user, motion data comprises at least one of:

o data representative of a motion of the hands of said user,

o data representative of a motion of facial features of said user,

o data representative of a motion of the head of said user, and

o data representative of a route of said user,

compare each of the at least two different motion data of the candidate user with motion data of each of a plurality of users comprising a given user, and

identify the candidate user as the given user based at least on an aggregation of these comparisons.

16. The identification system of claim 15, wherein, for a user, data representative of the motion of the head of said user comprises data representative of a first motion of the head of said user, said first motion corresponding to one of, or to an aggregation of:

a relative motion of the head of said user with respect to an upper part of the body of said user,

a motion of the head of said user which is independent from the motion of the body of the given user,

a motion of the head of said user which does not meet a correlation criterion with respect to the motion of the body of said user, and

a motion of the head of said user whose frequency does not meet a correlation criterion with respect to a frequency of a motion of a body of said user,

wherein the processing unit is configured to obtain data representative of a first motion of the head of said candidate user.

17. The identification system of claim 15, wherein, for a user, data representative of the motion of the head of said user comprises data representative of a second motion of the head of said user, said second motion corresponding to one of, or to an aggregation of:

a motion of the head of said user with respect to a reference which is external to the user,

a motion of the head of said user with respect to a reference which is external to the user, from which another motion of the head of said user has been removed,

a motion of the head of said user which is dependent from the motion of the body of said user,

a motion of the head of said user which meets a correlation criterion with respect to the motion of the body of said user, and

a motion of the head of said user whose frequency meets a correlation criterion with respect to a frequency of a motion of a body of said user,

wherein the processing unit is configured to obtain data representative of a second motion of the head of said candidate user.

18. An identification method comprising, by a processing unit:

obtaining a plurality of pictures of a candidate user acquired by at least one camera operatively associated with the processing unit,

obtaining data representative of a motion of facial features of the candidate user based on at least part of the plurality of pictures,

comparing the obtained data with data representative of a motion of facial features of each of a plurality of users comprising a given user, and

identifying the candidate user as the given user based at least on this comparison.

19. An identification method comprising, by a processing unit:

obtaining a plurality of pictures of a candidate user acquired by at least one camera operatively associated with the processing unit,

obtaining data representative of a motion of the head of the candidate user based on at least part of the plurality of pictures,

comparing the obtained data with data representative of a motion of the head of each of a plurality of users comprising a given user, and

identifying the candidate user as the given user based at least on this comparison.

20. The identification method of claim 19, wherein, for a user, a first motion of the head of said user corresponds to one of, or to an aggregation of:

a relative motion of the head of said user with respect to an upper part of the body of said user,

a motion of the head of said user which is independent from a motion of the body of the given user,

a motion of the head of said user which does not meet a correlation criterion with respect to a motion of the body of the given user,

a motion of the head of said user whose frequency does not meet a correlation criterion with respect to a frequency of a motion of a body of said user,

the method comprising:

obtaining data representative of a first motion of the head of the candidate user based on at least part of the plurality of pictures,

comparing the obtained data with data representative of a first motion of the head of each of a plurality of users comprising a given user, and

identifying the candidate user as the given user based at least on this comparison.

21. The identification method of claim 19, wherein, for a user, a second motion of the head of said user corresponds to one of, or to an aggregation of:

a motion of the head of said user with respect to a reference which is external to the user,

a motion of the head of said user with respect to a reference which is external to the user, and from which a first motion of the head of said user has been removed, said first motion corresponding to a relative motion of the head of said user with respect to an upper part of the body of said user,

a motion of the head of said user which is dependent from a motion of the body of said user,

a motion of the head of said user which meets a correlation criterion with respect to a motion of the body of said user, and

a motion of the head of said user whose frequency meets a correlation criterion with respect to a frequency of a motion of a body of said user,

the method comprising:

obtaining data representative of a second motion of the head of the candidate user based on at least part of the plurality of pictures,

comparing the obtained data with data representative of a second motion of the head of each of a plurality of users comprising a given user, and

identifying the candidate user as the given user based at least on this comparison.

22. An identification method comprising, by a processing unit:

obtaining a plurality of pictures of a candidate user acquired by at least one camera operatively associated with the processing unit,

obtaining data representative of a motion of the hands of the candidate user based on at least part of the plurality of pictures,

comparing the obtained data with data representative of a motion of the hands of each of a plurality of users comprising a given user, and

identifying the candidate user as the given user based at least on this comparison.

23. An identification method comprising, by a processing unit:

obtaining a plurality of pictures of a candidate user acquired by at least one camera operatively associated with the processing unit,

obtaining at least two different motion data of the candidate user based on at least part of the plurality of pictures, wherein, for a user, motion data comprises at least one of:

o data representative of a motion of the hands of said user,

o data representative of a motion of facial features of said user,

o data representative of a motion of the head of said user, and

o data representative of a route of said user,

comparing each of the at least two different motion data of the candidate user with motion data of each of a plurality of users comprising a given user, and

identifying the candidate user as the given user based at least on an aggregation of an output of each of these comparisons.

24. The identification method of claim 23, wherein, for a user, data representative of a motion of a head of said user comprises data representative of a first motion of the head of said user, said first motion corresponding to one of, or to an aggregation of:

a relative motion of the head of said user with respect to an upper part of the body of said user,

a motion of the head of said user which is independent from a motion of the body of said user,

a motion of the head of said user which does not meet a correlation criterion with respect to a motion of the body of said user,

a motion of the head of said user whose frequency does not meet a correlation criterion with respect to a frequency of a motion of a body of said user,

the method comprising obtaining data representative of a first motion of the head of said candidate user.

25. The identification method of claim 23, wherein, for a user, data representative of a motion of a head of said user comprises data representative of a second motion of the head of said user, said second motion corresponding to one of, or to an aggregation of:

a motion of the head of said user with respect to a reference which is external to the user,

a motion of the head of said user with respect to a reference which is external to the user, and from which a first motion of the head of said user has been removed, said first motion corresponding to a relative motion of the head of said user with respect to an upper part of the body of said user,

a motion of the head of said user which is dependent from a motion of the body of said user,

a motion of the head of said user which meets a correlation criterion with respect to a motion of the body of said user, and

a motion of the head of said user whose frequency meets a correlation criterion with respect to a frequency of a motion of a body of said user,

the method comprising obtaining data representative of a second motion of the head of said candidate user.

26. A non-transitory storage device readable by a machine, tangibly embodying a program of instructions executable by the machine to perform an identification method comprising:

obtaining a plurality of pictures of a candidate user acquired by at least one camera operatively associated with the processing unit,

obtaining data representative of a motion of facial features of the candidate user based on at least part of the plurality of pictures,

comparing the obtained data with data representative of a motion of facial features of each of a plurality of users comprising a given user, and

identifying the candidate user as the given user based at least on this comparison.

27. A non-transitory storage device readable by a machine, tangibly embodying a program of instructions executable by the machine to perform an identification method comprising:

obtaining a plurality of pictures of a candidate user acquired by at least one camera operatively associated with the processing unit,

obtaining data representative of a motion of the head of the candidate user based on at least part of the plurality of pictures,

comparing the obtained data with data representative of a motion of the head of each of a plurality of users comprising a given user, and

identifying the candidate user as the given user based at least on this comparison.

28. A non-transitory storage device readable by a machine, tangibly embodying a program of instructions executable by the machine to perform an identification method comprising:

- obtaining a plurality of pictures of a candidate user acquired by at least one camera operatively associated with the processing unit,

- obtaining data representative of a motion of the hands of the candidate user based on at least part of the plurality of pictures,

- comparing the obtained data with data representative of a motion of the hands of each of a plurality of users comprising a given user, and

- identifying the candidate user as the given user based at least on this comparison.

29. A non-transitory storage device readable by a machine, tangibly embodying a program of instructions executable by the machine to perform an identification method comprising:

obtaining a plurality of pictures of a candidate user acquired by at least one camera operatively associated with the processing unit,

obtaining at least two different motion data of the candidate user based on at least part of the plurality of pictures, wherein, for a user, motion data comprises at least one of:

o data representative of a motion of the hands of said user,

o data representative of a motion of facial features of said user,

o data representative of a motion of the head of said user, and

o data representative of a route of said user,

comparing each of the at least two different motion data of the candidate user with motion data of each of a plurality of users comprising a given user, and

identifying the candidate user as the given user based at least on an aggregation of an output of each of these comparisons.

Description:
IDENTIFICATION SYSTEMS AND METHODS

TECHNICAL FIELD

The presently disclosed subject matter relates to a solution for identifying users.

BACKGROUND

Many applications require the identification of users. For example, airports, or payment systems, require the identification of users in order to ensure security of individuals and/or of transactions.

There exists a need to propose new methods and systems for efficiently identifying users.

GENERAL DESCRIPTION

In accordance with certain aspects of the presently disclosed subject matter, there is provided an identification system comprising a processing unit configured to obtain a plurality of pictures of a candidate user acquired by at least one camera operatively associated with the processing unit, obtain data representative of a motion of facial features of the candidate user based on at least part of the plurality of pictures, compare the obtained data with data representative of motion of facial features of each of a plurality of users comprising a given user, and identify the candidate user as the given user based at least on this comparison.
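The identification flow described above (acquire pictures, derive facial-feature motion data, compare against enrolled users, identify) can be sketched as follows. The landmark-trajectory representation, the Euclidean distance, and the acceptance threshold are illustrative assumptions for the sketch, not implementation details taken from the application:

```python
import numpy as np

def motion_signature(frames):
    """frames: (T, K, 2) array of K facial landmark positions over T pictures.
    The flattened frame-to-frame displacements serve as the motion data."""
    return np.diff(frames, axis=0).ravel()

def identify(candidate_frames, enrolled, threshold=1.0):
    """Compare the candidate's motion data against each enrolled user's
    motion data; return the best-matching user, or None if no user's
    data is within the (illustrative) acceptance threshold."""
    sig = motion_signature(candidate_frames)
    best_user, best_dist = None, np.inf
    for user, ref_sig in enrolled.items():
        dist = np.linalg.norm(sig - ref_sig)
        if dist < best_dist:
            best_user, best_dist = user, dist
    return best_user if best_dist <= threshold else None
```

In a real system the enrolled signatures would be learned from many sequences per user rather than a single reference vector.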

According to some embodiments, the processing unit is configured to extract, from each of said at least part of the plurality of pictures, an image comprising the head of the candidate user, perform an alignment of the extracted images using a common spatial reference so as to obtain a set of images, wherein at least a part of the head of the candidate user is used as common spatial reference, and obtain data representative of the motion of facial features of the candidate user based at least on said set of images.
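A minimal sketch of the extraction-and-alignment step, assuming each extracted head image comes with the pixel coordinates of a reference feature (for example the tip of the nose) used as the common spatial reference; the pure-translation alignment is an illustrative simplification of whatever alignment the system actually performs:

```python
import numpy as np

def align(images, anchors, out_shape=(64, 64), target=(32, 32)):
    """Shift each extracted head image so its anchor point (a part of the
    candidate's head used as common spatial reference) lands on the same
    target pixel in every output image of the resulting set."""
    aligned = []
    for img, (ay, ax) in zip(images, anchors):
        out = np.zeros(out_shape, dtype=img.dtype)
        dy, dx = target[0] - ay, target[1] - ax
        for y in range(img.shape[0]):
            for x in range(img.shape[1]):
                ty, tx = y + dy, x + dx
                if 0 <= ty < out_shape[0] and 0 <= tx < out_shape[1]:
                    out[ty, tx] = img[y, x]  # copy pixel to its aligned spot
        aligned.append(out)
    return aligned
```

Once the set is aligned this way, residual differences between consecutive images reflect facial-feature motion rather than head translation.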

According to another aspect of the presently disclosed subject matter there is provided an identification system comprising a processing unit configured to obtain a plurality of pictures of a candidate user taken by at least one camera operatively associated with the processing unit, obtain data representative of a motion of the head of the candidate user based on at least part of the plurality of pictures, compare the obtained data with data representative of a motion of the head of each of a plurality of users comprising a given user, and identify the candidate user as the given user based at least on this comparison.

In addition to the above features, the identification system according to this aspect of the presently disclosed subject matter can optionally comprise one or more of features (i) to (v) below, in any technically possible combination or permutation:

i. for a user, a first motion of the head of said user corresponds to a relative motion of the head of said user with respect to an upper part of the body of said user, wherein the processing unit is configured to obtain data representative of a first motion of the head of the candidate user based on said at least part of the plurality of pictures, compare the obtained data with data representative of a first motion of the head of each of a plurality of users comprising a given user, and identify the candidate user as the given user based at least on this comparison;

ii. for a user, a second motion of the head of said user is a motion which meets a correlation criterion with respect to a motion of the body of said user, wherein the processing unit is configured to obtain data representative of a second motion of the head of the candidate user based on said at least part of the plurality of pictures, compare the obtained data with data representative of a second motion of the head of each of a plurality of users comprising a given user, and identify the candidate user as the given user based at least on this comparison;

iii. for a user, a second motion of the head of said user comprises a motion of the head of said user with respect to a reference which is external to the user, wherein the processing unit is configured to obtain data representative of a second motion of the head of the candidate user based on said at least part of the plurality of pictures, compare the obtained data with data representative of a second motion of the head of each of a plurality of users comprising a given user, and identify the candidate user as the given user based at least on this comparison;

iv. for a user, a second motion of the head of said user corresponds to a motion of the head of said user with respect to a reference which is external to the user, and from which a first motion of the head of said user has been removed, said first motion corresponding to a relative motion of the head of said user with respect to an upper part of the body of said user, wherein the processing unit is configured to obtain data representative of a second motion of the head of the candidate user based on said at least part of the plurality of pictures, compare the obtained data with data representative of a second motion of the head of each of a plurality of users comprising a given user, and identify the candidate user as the given user based at least on this comparison;

v. for a user, a second motion of the head of said user corresponds to a motion of the head of said user with respect to a reference which is external to the user, and from which a first motion of the head of said user has been removed, said first motion corresponding to a relative motion of the head of said user with respect to an upper part of the body of said user, wherein the processing unit is configured to obtain data representative of a second motion of the head of the candidate user based on said at least part of the plurality of pictures, compare the obtained data with data representative of a second motion of the head of each of a plurality of users comprising a given user, and identify the candidate user as the given user based at least on this comparison.
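Features (i) to (v) distinguish a first motion (head relative to the upper body) from a second motion (head relative to an external reference, optionally with the first motion removed). Under the simplifying assumption that head and torso positions are tracked in a camera-fixed frame, the decomposition can be sketched as:

```python
import numpy as np

# head, torso: (T, 2) positions tracked in a camera-fixed (external) frame
# over T pictures; these tracked trajectories are an assumed input.

def first_motion(head, torso):
    """First motion: head motion relative to the upper part of the body."""
    return np.diff(head - torso, axis=0)

def second_motion(head, torso):
    """Second motion: head motion with respect to the external reference,
    from which the first motion has been removed. Algebraically, what
    remains is exactly the body's own frame-to-frame motion."""
    return np.diff(head, axis=0) - first_motion(head, torso)
```

The identity `second_motion == np.diff(torso)` makes the claim language concrete: removing the head-relative-to-torso component leaves the motion the head inherits from the body.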

According to another aspect of the presently disclosed subject matter there is provided a user identification system comprising a processing unit, wherein, for a user, a first motion of a head of said user corresponds to a motion which is different from a second motion of the head of said user, the processing unit being configured to obtain a plurality of pictures of a candidate user acquired by at least one camera operatively associated with the processing unit, obtain, based on at least part of the plurality of pictures, data representative of a first motion of a head of the candidate user, and data representative of a second motion of a head of the candidate user, compare the data representative of the first motion and of the second motion of the head of the candidate user, with data representative of a first motion and of a second motion of a head of each of a plurality of users comprising a given user, and identify the candidate user as the given user based at least on this comparison.

In addition to the above features, the identification system according to this aspect of the presently disclosed subject matter can optionally comprise one or more of features (vi) to (x) below, in any technically possible combination or permutation:

vi. for a user, the first motion of the head of said user corresponds to a motion which is independent from the motion of the body of said user, or which does not meet a correlation criterion with respect to the motion of the body of said user, and the second motion of the head of said user corresponds to a motion which is dependent from the motion of the body of said user, or which meets a correlation criterion with respect to the motion of the body of said user;

vii. for a user, the first motion of the head of said user corresponds to a relative motion of the head of said user with respect to an upper part of the body of said user;

viii. for a user, the second motion of the head of said user corresponds to a motion of the head of said user with respect to a reference which is external to the user;

ix. for a user, the second motion of the head of said user corresponds to a motion of the head of said user with respect to a reference which is external to the user, and from which the first motion has been removed;

x. for a user, the first motion of the head of said user corresponds to a motion of the head of said user whose frequency does not meet a correlation criterion with a frequency of a motion of a body of said user, and a second motion of the head of said user corresponds to a motion of the head of said user whose frequency meets a correlation criterion with a frequency of a motion of a body of said user.
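Features (vi) to (x) rest on a correlation criterion between head motion and body motion, including a frequency-based variant. Both tests can be sketched as below, with an illustrative Pearson-correlation threshold and a dominant-FFT-bin frequency comparison; neither the threshold value nor the specific method is prescribed by the application:

```python
import numpy as np

def meets_correlation_criterion(head_motion, body_motion, threshold=0.8):
    """True when the head motion correlates with the body motion (second
    motion); False when it does not (first motion). The 0.8 threshold is
    an illustrative choice."""
    r = np.corrcoef(head_motion, body_motion)[0, 1]
    return abs(r) >= threshold

def dominant_frequency(signal):
    """Index of the strongest non-DC frequency component of the signal."""
    spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
    return int(np.argmax(spectrum[1:]) + 1)

def frequencies_correlated(head_motion, body_motion):
    """Frequency-based criterion: the head motion's dominant frequency
    matches the body motion's (e.g. head bobbing at walking cadence)."""
    return dominant_frequency(head_motion) == dominant_frequency(body_motion)
```

A head turn while walking would fail both tests (first motion), while the vertical bob induced by the gait would pass them (second motion).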

According to another aspect of the presently disclosed subject matter there is provided an identification system comprising a processing unit configured to obtain a plurality of pictures of a candidate user acquired by at least one camera operatively associated with the processing unit, obtain data representative of a motion of the hands of the candidate user in at least part of the plurality of pictures, compare the obtained motion with data representative of a motion of the hands of each of a plurality of users comprising a given user, and identify the candidate user as the given user based at least on this comparison.

According to another aspect of the presently disclosed subject matter there is provided an identification system comprising a processing unit configured to obtain a plurality of pictures of a candidate user acquired by at least one camera operatively associated with the processing unit, obtain data representative of a route of the candidate user in at least part of the plurality of pictures, compare the obtained data with data representative of a route of each of a plurality of users comprising a given user, and identify the candidate user as the given user based at least on this comparison.

According to another aspect of the presently disclosed subject matter there is provided an identification system comprising a processing unit configured to obtain a plurality of pictures of a candidate user acquired by at least one camera operating in conjunction with the identification system, obtain at least two different motion data of the candidate user based on at least part of the plurality of pictures, wherein, for a user, motion data comprises at least one of data representative of a motion of the hands of said user, data representative of a motion of facial features of said user, data representative of a motion of the head of said user, and data representative of a route of said user, compare each of the at least two different motion data of the candidate user with motion data of each of a plurality of users comprising a given user, and identify the candidate user as the given user based at least on an aggregation of these comparisons.

In addition to the above features, the identification system according to this aspect of the presently disclosed subject matter can optionally comprise one or more of features (xi) to (xii) below, in any technically possible combination or permutation:

xi. for a user, data representative of the motion of the head of said user comprises data representative of a first motion of the head of said user, said first motion corresponding to one of, or to an aggregation of, a relative motion of the head of said user with respect to an upper part of the body of said user, a motion of the head of said user which is independent of the motion of the body of the given user, a motion of the head of said user which does not meet a correlation criterion with respect to the motion of the body of said user, and a motion of the head of said user the frequency of which does not meet a correlation criterion with respect to a frequency of a motion of a body of said user, wherein the processing unit is configured to obtain data representative of a first motion of the head of said candidate user;

xii. for a user, data representative of the motion of the head of said user comprises data representative of a second motion of the head of said user, said second motion corresponding to one of, or to an aggregation of, a motion of the head of said user with respect to a reference which is external to the user, a motion of the head of said user with respect to a reference which is external to the user, from which another motion of the head of said user has been removed, a motion of the head of said user which is dependent on the motion of the body of said user, a motion of the head of said user which meets a correlation criterion with respect to the motion of the body of said user, and a motion of the head of said user the frequency of which meets a correlation criterion with respect to a frequency of a motion of a body of said user, wherein the processing unit is configured to obtain data representative of a second motion of the head of said candidate user.
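By way of purely illustrative example, one possible reading of the first and second head motions described in features (xi) and (xii) can be sketched as follows, using one-dimensional positions; the sample values and the decomposition chosen here are assumptions:

```python
import numpy as np

# Illustrative 1-D head and upper-body positions over time, both measured
# against an external (camera) reference; the sample values are assumptions.
head_abs = np.array([0.0, 1.5, 2.25, 3.5])   # head vs. external reference
body_abs = np.array([0.0, 1.0, 2.0, 3.0])    # upper body vs. external reference

# "First" motion (feature (xi)): head relative to the upper body.
first_motion = head_abs - body_abs
# "Second" motion (feature (xii)): head vs. the external reference,
# from which the first motion has been removed.
second_motion = head_abs - first_motion

print(first_motion.tolist())   # → [0.0, 0.5, 0.25, 0.5]
print(second_motion.tolist())  # → [0.0, 1.0, 2.0, 3.0]
```

Under this reading, the second motion reduces to the body-correlated component of the head trajectory, consistent with the correlation criteria mentioned above.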

According to another aspect of the presently disclosed subject matter there is provided an identification method comprising, by a processing unit, obtaining a plurality of pictures of a candidate user acquired by at least one camera operatively associated with the processing unit, obtaining data representative of a motion of facial features of the candidate user based on at least part of the plurality of pictures, comparing the obtained data with data representative of a motion of facial features of each of a plurality of users comprising a given user, and identifying the candidate user as the given user based at least on this comparison.

According to another aspect of the presently disclosed subject matter there is provided an identification method comprising, by a processing unit, obtaining a plurality of pictures of a candidate user acquired by at least one camera operatively associated with the processing unit, obtaining data representative of a motion of the head of the candidate user based on at least part of the plurality of pictures, comparing the obtained data with data representative of a motion of the head of each of a plurality of users comprising a given user, and identifying the candidate user as the given user based at least on this comparison.

In addition to the above features, the identification method according to this aspect of the presently disclosed subject matter can optionally comprise one or more of features (xiii) to (xiv) below, in any technically possible combination or permutation:

xiii. for a user, a first motion of the head of said user corresponds to one of, or to an aggregation of, a relative motion of the head of said user with respect to an upper part of the body of said user, a motion of the head of said user which is independent of a motion of the body of the given user, a motion of the head of said user which does not meet a correlation criterion with respect to a motion of the body of the given user, and a motion of the head of said user the frequency of which does not meet a correlation criterion with respect to a frequency of a motion of a body of said user, the method comprising obtaining data representative of a first motion of the head of the candidate user based on at least part of the plurality of pictures, comparing the obtained data with data representative of a first motion of the head of each of a plurality of users comprising a given user, and identifying the candidate user as the given user based at least on this comparison;

xiv. for a user, a second motion of the head of said user corresponds to one of, or to an aggregation of, a motion of the head of said user with respect to a reference which is external to the user, a motion of the head of said user with respect to a reference which is external to the user, and from which a first motion of the head of said user has been removed, said first motion corresponding to a relative motion of the head of said user with respect to an upper part of the body of said user, a motion of the head of said user which is dependent on a motion of the body of said user, a motion of the head of said user which meets a correlation criterion with respect to a motion of the body of said user, and a motion of the head of said user the frequency of which meets a correlation criterion with respect to a frequency of a motion of a body of said user, the method comprising obtaining data representative of a second motion of the head of the candidate user based on at least part of the plurality of pictures, comparing the obtained data with data representative of a second motion of the head of each of a plurality of users comprising a given user, and identifying the candidate user as the given user based at least on this comparison.

According to another aspect of the presently disclosed subject matter there is provided an identification method comprising, by a processing unit, obtaining a plurality of pictures of a candidate user acquired by at least one camera operatively associated with the processing unit, obtaining data representative of a motion of the hands of the candidate user based on at least part of the plurality of pictures, comparing the obtained data with data representative of a motion of the hands of each of a plurality of users comprising a given user, and identifying the candidate user as the given user based at least on this comparison.

According to another aspect of the presently disclosed subject matter there is provided an identification method comprising, by a processing unit, obtaining a plurality of pictures of a candidate user acquired by at least one camera operatively associated with the processing unit, obtaining at least two different motion data of the candidate user based on at least part of the plurality of pictures, wherein, for a user, motion data comprises at least one of data representative of a motion of the hands of said user, data representative of a motion of facial features of said user, data representative of a motion of the head of said user, and data representative of a route of said user, comparing each of the at least two different motion data of the candidate user with motion data of each of a plurality of users comprising a given user, and identifying the candidate user as the given user based at least on an aggregation of an output of each of these comparisons.

In addition to the above features, the identification method according to this aspect of the presently disclosed subject matter can optionally comprise one or more of features (xv) to (xvi) below, in any technically possible combination or permutation:

xv. for a user, data representative of a motion of a head of said user comprise data representative of a first motion of the head of said user, said first motion corresponding to one of, or to an aggregation of, a relative motion of the head of said user with respect to an upper part of the body of said user, a motion of the head of said user which is independent of a motion of the body of said user, a motion of the head of said user which does not meet a correlation criterion with respect to a motion of the body of said user, and a motion of the head of said user the frequency of which does not meet a correlation criterion with respect to a frequency of a motion of a body of said user, the method comprising obtaining data representative of a first motion of the head of said candidate user.

xvi. for a user, data representative of a motion of the head of said user comprise data representative of a second motion of the head of said user, said second motion corresponding to one of, or to an aggregation of, a motion of the head of said user with respect to a reference which is external to the user, a motion of the head of said user with respect to a reference which is external to the user, and from which a first motion of the head of said user has been removed, said first motion corresponding to a relative motion of the head of said user with respect to an upper part of the body of said user, a motion of the head of said user which is dependent on a motion of the body of said user, a motion of the head of said user which meets a correlation criterion with respect to a motion of the body of said user, and a motion of the head of said user the frequency of which meets a correlation criterion with respect to a frequency of a motion of a body of said user, the method comprising obtaining data representative of a second motion of the head of said candidate user.

According to another aspect of the presently disclosed subject matter there is provided an identification method comprising, by a processing unit, obtaining a plurality of pictures of a candidate user acquired by at least one camera operatively associated with the processing unit, obtaining data representative of a route of the candidate user in at least part of the plurality of pictures, comparing the obtained data with data representative of a route of each of a plurality of users comprising a given user, and identifying the candidate user as the given user based at least on this comparison.

There is also provided a non-transitory storage device readable by a machine, tangibly embodying a program of instructions executable by the machine to perform this method.

According to another aspect of the presently disclosed subject matter there is provided a non-transitory storage device readable by a machine, tangibly embodying a program of instructions executable by the machine to perform an identification method comprising obtaining a plurality of pictures of a candidate user acquired by at least one camera operatively associated with the processing unit, obtaining data representative of a motion of facial features of the candidate user based on at least part of the plurality of pictures, comparing the obtained data with data representative of a motion of facial features of each of a plurality of users comprising a given user, and identifying the candidate user as the given user based at least on this comparison.

According to another aspect of the presently disclosed subject matter there is provided a non-transitory storage device readable by a machine, tangibly embodying a program of instructions executable by the machine to perform an identification method comprising obtaining a plurality of pictures of a candidate user acquired by at least one camera operatively associated with the processing unit, obtaining data representative of a motion of the head of the candidate user based on at least part of the plurality of pictures, comparing the obtained data with data representative of a motion of the head of each of a plurality of users comprising a given user, and identifying the candidate user as the given user based at least on this comparison.

In addition to the above features, the identification system according to this aspect of the presently disclosed subject matter can optionally comprise one or more of features (xiii) to (xiv) above.

According to another aspect of the presently disclosed subject matter there is provided a non-transitory storage device readable by a machine, tangibly embodying a program of instructions executable by the machine to perform an identification method comprising obtaining a plurality of pictures of a candidate user acquired by at least one camera operatively associated with the processing unit, obtaining data representative of a motion of the hands of the candidate user based on at least part of the plurality of pictures, comparing the obtained data with data representative of a motion of the hands of each of a plurality of users comprising a given user, and identifying the candidate user as the given user based at least on this comparison.

According to another aspect of the presently disclosed subject matter there is provided a non-transitory storage device readable by a machine, tangibly embodying a program of instructions executable by the machine to perform an identification method comprising obtaining a plurality of pictures of a candidate user acquired by at least one camera operatively associated with the processing unit, obtaining at least two different motion data of the candidate user based on at least part of the plurality of pictures, wherein, for a user, motion data comprises at least one of data representative of a motion of the hands of said user, data representative of a motion of facial features of said user, data representative of a motion of the head of said user, and data representative of a route of said user, comparing each of the at least two different motion data of the candidate user with motion data of each of a plurality of users comprising a given user, and identifying the candidate user as the given user based at least on an aggregation of an output of each of these comparisons.

In addition to the above features, the identification system according to this aspect of the presently disclosed subject matter can optionally comprise one or more of features (xv) to (xvi) above.

These embodiments can be combined according to any of their possible technical combination.

According to some embodiments, the proposed solution provides an efficient way to identify users.

According to some embodiments, the proposed solution can identify users while they are in motion.

According to some embodiments, the proposed solution improves the quality and the reliability of the identification of users.

According to some embodiments, the proposed solution can reduce time for identifying users.

According to some embodiments, the proposed solution relies on an innovative detection of the dynamics of the upper body of the users in order to perform an identification of users.

According to some embodiments, the proposed solution can provide an in-motion identification for many applications, such as identification of users in airports, stores, etc., and for various purposes, such as security of transactions, security in public places, improvement of transactions, improvement of the service provided to users, etc. These examples are however not limitative.

BRIEF DESCRIPTION OF THE DRAWINGS

In order to understand the invention and to see how it can be carried out in practice, embodiments will be described, by way of non-limiting examples, with reference to the accompanying drawings, in which:

Fig. 1 illustrates an embodiment of an identification system;

Fig. 2 depicts an embodiment of user data that can be stored for a plurality of users comprising a given user, in order to identify a candidate user as the given user;

Fig. 3 depicts an embodiment of a method of identifying a candidate user, based at least on facial features motion;

Fig. 4 depicts an embodiment of a method of processing pictures of a candidate user in order to identify data representative of a motion of facial features of the candidate user;

Fig. 5 depicts a non-limiting and purely illustrative example of the method of Fig. 4;

Fig. 6 depicts an embodiment of a method of identifying a candidate user, based at least on head motion;

Fig. 7 depicts an embodiment of a method of obtaining data representative of a first motion of the head of a candidate user;

Fig. 8 depicts a non-limiting and purely illustrative example of the method of Fig. 7;

Fig. 9 depicts an embodiment of a method of obtaining data representative of a second motion of the head of the candidate user;

Fig. 10 depicts a non-limiting and purely illustrative example of the method of Fig. 9;

Fig. 11 depicts an embodiment of a method of identifying a candidate user, based at least on hands motion;

Fig. 12 depicts an embodiment of a method of identifying a candidate user, based at least on the route of the candidate user;

Fig. 13 depicts an embodiment of a method of obtaining motion data for a given user;

Fig. 14 describes an embodiment of a method of identifying a candidate user based on multiple different motion data.

DETAILED DESCRIPTION

In the following detailed description, numerous specific details are set forth in order to provide a thorough understanding of the invention. However, it will be understood by those skilled in the art that the presently disclosed subject matter may be practiced without these specific details. In other instances, well-known methods have not been described in detail so as not to obscure the presently disclosed subject matter.

Unless specifically stated otherwise, as apparent from the following discussions, it is appreciated that throughout the specification discussions utilizing terms such as "obtaining", "comparing", "identifying", "extracting", or the like, refer to the action(s) and/or process(es) of a processing unit that manipulates and/or transforms data into other data, said data represented as physical, such as electronic, quantities and/or said data representing the physical objects.

The term "processing unit" covers any computing unit or electronic unit with data processing circuitry that may perform tasks based on instructions stored in a memory, such as a computer, a server, a chip, a processor, etc. It encompasses a single processor or multiple processors, which may be located in the same geographical zone or may, at least partially, be located in different zones and may be able to communicate together. The term "non-transitory memory" as used herein should be expansively construed to cover any volatile or non-volatile computer memory suitable to the presently disclosed subject matter.

Embodiments of the presently disclosed subject matter are not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of the presently disclosed subject matter as described herein.

The invention contemplates a computer program being readable by a computer for executing one or more methods of the invention. The invention further contemplates a machine-readable memory tangibly embodying a program of instructions executable by the machine for executing one or more methods of the invention.

Fig. 1 illustrates a possible embodiment of an identification system 1.

The identification system 1 can comprise a storage unit 3 and a processing unit 4.

The storage unit 3 can comprise at least a non-transitory memory.

Although the elements which are part of the identification system are depicted in Fig. 1 as being located in the same system 1, it is to be understood that, according to some embodiments, these elements can be at least partially located in different zones or in different systems and can communicate together.

The processing unit 4 is operatively associated with a camera 7, or with a plurality of cameras 7. "Operatively associated" means that there is provided a communication link over which data or commands can be communicated between the coupled units (e.g. in at least one direction), for example in the form of messages or data streams. The communication link can for example be continuously activated or be activated on request.

As explained further in the specification, according to some embodiments, the processing unit 4 can obtain, from the camera 7, a picture, and/or a plurality of pictures, and/or a video of a candidate user who has to be identified by the identification system 1.

Although the camera 7 is depicted in Fig. 1 as being external to the identification system 1, it is to be understood that, according to some embodiments, the camera 7 or at least a part of a plurality of the cameras 7 can be part of the identification system 1.

According to some embodiments, the identification system 1 can communicate with a server 8, for example to obtain from this server 8 further relevant data. Communication with the server 8 can be carried out through a communication network such as e.g. the Internet, or through any adapted wireless communication network. The server 8 can comprise a cloud in some embodiments.

The identification system 1 can comprise a communication unit (not depicted), in order to communicate with the server 8 or with other units or systems.

The communication unit can for example comprise an antenna, or any adapted device which can receive data (and if necessary also send data) through a wireless communication network. Examples of wireless communication networks can include: Bluetooth, Bluetooth Low Energy, Wi-Fi, or a cellular communication network such as 3G or 4G. These examples are however not limitative.

If necessary, the identification system 1 can comprise an interface (not depicted) which can allow a user to interact with the identification system 1, for example to enter data or change settings (such as a screen associated with a keyboard, or a virtual interface which can be accessed by e.g. a phone of the user).

Fig. 2 depicts an embodiment of user data 20 that can be stored e.g. in the storage unit 3 of the identification system 1, or in another storage unit which is in communication with the identification system 1.

The user data 20 comprise data for each of a plurality of users. In Fig. 2, user data 20 are stored for N users (User 1 to User N).

The user data 20 can comprise data representative of a motion of facial features of each of the users. Facial features include for example the mouth, the eyes (the corresponding motion can include e.g. eye movement and/or blinking), the ears, the eyebrows, the lips, the nose, the cheeks, the forehead, the gaze, etc. This list is however not limitative.

The data representative of a motion of facial features of a given user can comprise data which represent typical motion of facial features of this given user. As explained later in the specification, these data can thus be used to identify a candidate user as a given user. Embodiments for acquiring data representative of a motion of facial features of a user will be described with respect e.g. to Figs. 3, 4, 5 and 13.

These data can comprise, for a given user, e.g.:

- a sequence of spatial positions and/or of images which describe a motion of each facial feature of the given user;

- data representative of the velocity, acceleration, frequency and amplitude of a motion of each facial feature of the given user;

- data representative of the rotation and/or of the translation of each facial feature of the given user.
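By way of non-limiting illustration, descriptors such as the velocity, acceleration, frequency and amplitude listed above could be derived from a stored position sequence along the following lines; the function name, the sampling rate and the synthetic signal are assumptions made for the sketch:

```python
import numpy as np

def motion_descriptors(positions, fps=30.0):
    """Derive velocity, acceleration, dominant frequency (Hz) and amplitude
    from a 1-D sequence of feature positions sampled at `fps` frames/second."""
    positions = np.asarray(positions, dtype=float)
    dt = 1.0 / fps
    velocity = np.diff(positions) / dt        # first time derivative
    acceleration = np.diff(velocity) / dt     # second time derivative
    # dominant frequency via the magnitude spectrum of the centred signal
    spectrum = np.abs(np.fft.rfft(positions - positions.mean()))
    freqs = np.fft.rfftfreq(len(positions), d=dt)
    dominant_freq = float(freqs[int(np.argmax(spectrum))])
    amplitude = float(positions.max() - positions.min()) / 2.0
    return velocity, acceleration, dominant_freq, amplitude

# A synthetic 3 Hz oscillation of one facial feature, sampled at 30 fps
t = np.arange(30) / 30.0
_, _, freq, _ = motion_descriptors(np.sin(2 * np.pi * 3 * t))
print(round(freq, 6))  # → 3.0
```

Equivalent descriptors for 2-D or 3-D positions would apply the same operations per coordinate.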

As illustrated in Fig. 2, the user data 20 can further comprise data representative of a motion of the head of each user.

The motion of head of a user can be typical to this user and can thus be used, alone or in combination with other motion data of the user, to identify the user. Embodiments for acquiring data representative of a motion of the head of a user will be described with respect e.g. to Figs. 6 to 10 and 13.

The data can comprise, for a given user, e.g.:

- a sequence of spatial positions and/or of images which describe a motion of the head of the given user;

- data representative of the velocity, acceleration, frequency and amplitude of a motion of the head of the given user;

- data representative of the rotation (e.g. along the three axes, such as pitch, roll and yaw), and/or of the translation of the head of the given user.

According to some embodiments, the motion of the head of each user can be defined as comprising a first motion and a second motion, wherein the second motion is different from the first motion. Data representative of the first motion and/or of the second motion of the head of each user can thus be stored.

Possible definitions of the first motion and of the second motion will be provided later in the specification.

As illustrated in Fig. 2, the user data 20 can further comprise data representative of a motion of the hands (or of at least one hand) of said user.

When a user is walking or standing still, he may have a typical way to move his hands. Thus, the motion of his hands can be used to identify a user, alone or in combination with other data.

The data can comprise, for a given user, e.g.:

- a sequence of spatial positions and/or of images which describe a motion of the hands (or of at least one hand) of the given user;

- data representative of the velocity, acceleration, frequency and amplitude of a motion of the hand(s) of the given user;

- data representative of the rotation and/or of the translation of the hand(s) of the given user.

According to some embodiments, the user data 20 can further comprise data representative of a trajectory (also called route) of said user. Indeed, the route taken by a user in the pictures can be representative of said user. For example, some users tend to walk close to the walls, whereas other users tend to walk in the centre of the pathway, etc.

Other user data can be stored for each user, such as unique identification data (ID) of the user (passport number, etc.), picture(s) of the users, physical descriptors of the user, etc.

The physical descriptors of the user can comprise data representing the size and/or the proportions of parts of the user's body, height of the user, hair and eye colour, etc. This list is however not limitative.

Other user data can include:

- User's credit card information;

- User's personal data (name, address);

- Data representative of the user in the context of the identification (for example, if the user has to be identified in a store, these data can comprise data representative of his buying preferences).
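By way of non-limiting illustration, the per-user record described above (user data 20) could be sketched as follows; the field names and types are assumptions made for the sketch, not a required layout:

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

# A purely illustrative sketch of a per-user entry of the user data 20;
# all field names and types are assumptions.
@dataclass
class UserRecord:
    user_id: str                                                   # unique ID (e.g. passport number)
    facial_feature_motion: List[float] = field(default_factory=list)
    head_motion_first: List[float] = field(default_factory=list)   # head vs. upper body
    head_motion_second: List[float] = field(default_factory=list)  # head vs. external reference
    hand_motion: List[float] = field(default_factory=list)
    route: List[Tuple[float, float]] = field(default_factory=list) # trajectory waypoints
    height_cm: Optional[float] = None                              # physical descriptor
    name: Optional[str] = None                                     # personal data

user = UserRecord(user_id="P1234567", height_cm=178.0, name="User 1")
print(user.user_id)  # → P1234567
```

In practice such records would be stored in the storage unit 3 or in a remote database, keyed by the unique ID.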

Attention is now drawn to Fig. 3, which depicts a possible embodiment of a method of identifying a candidate user.

At least some of the steps of this method may be performed by a system comprising a processing unit and a storage unit on which a program, which is configured to execute these steps when executed by the processing unit, is stored.

It is to be noted that this also applies to the methods described with reference to Figs. 4 to 14.

The processing unit and/or the storage unit can be the processing unit and/or the storage unit of the identification system 1, or of another system comprising a processing unit and a storage unit, which can e.g. communicate with at least part of the components of the identification system 1.

The method can comprise a step 30 of obtaining a plurality of pictures of a candidate user acquired by at least one camera. The camera can correspond to the camera 7 described in Fig. 1.

The pictures acquired by the camera can be sent to the identification system 1 or to another system comprising a processing unit and which is in communication with the camera.

If the pictures comprise a plurality of users, the candidate user (which corresponds to the user who has to be identified) can be selected among the users present in the pictures using various different techniques.

In some embodiments, the user who appears as the user with the greatest size in the pictures can be considered as the candidate user (because this can mean that this user is the one who is the closest to the camera, and thus is the one who has to be identified by the identification system, for example to allow entrance in a given zone). Other techniques can be used to select a candidate user in the pictures provided by the camera.
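By way of non-limiting illustration, the "greatest size" heuristic described above could be sketched as follows; the bounding-box representation and the sample values are assumptions made for the sketch:

```python
# Hypothetical detections of users in one picture, each given as a
# bounding box (x, y, width, height); values are illustrative.
def select_candidate(detections):
    """Return the index of the detection with the largest area (taken as the
    user closest to the camera), or None if there are no detections."""
    if not detections:
        return None
    areas = [w * h for (_, _, w, h) in detections]
    return areas.index(max(areas))

boxes = [(10, 10, 40, 60), (200, 50, 90, 120), (400, 30, 20, 30)]
print(select_candidate(boxes))  # → 1
```

Other selection rules (e.g. proximity to the centre of the frame, or tracking continuity across pictures) would fit the same interface.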

According to other embodiments, identification of a plurality of candidate users can be performed based on at least part of the pictures provided by the camera.

The method can comprise a step 31 of obtaining data representative of a motion of facial features of the candidate user based on at least part of the plurality of pictures.

Examples of data representative of a motion of facial features have been described with reference to Fig. 2. A possible embodiment of a method of obtaining data representative of a motion of facial features of the candidate user will be described with reference to Figs. 4 and 5.

The obtained data can be stored e.g. in the storage unit 3, or in another storage unit.

The obtained data can, in some embodiments, be stored in a vector, in which each component of the vector corresponds to a given facial feature. For example, a first component corresponds to the left eye, a second component to the right eye, etc. This is however not limitative.

The method can then comprise a step 32 of comparing the obtained data with data representative of a motion of facial features of each of a plurality of users comprising a given user (such as the data representative of a motion of facial features of each of a plurality of users depicted in Fig. 2).

The comparison can comprise e.g. a statistical comparison, correlation methods, etc. This is however not limitative.

According to some embodiments, a machine learning algorithm is used to perform this comparison.

According to some embodiments, a deep learning algorithm, such as a convolutional neural network algorithm, can be used to perform this comparison.

The method can then comprise a step 33 of identifying the candidate user as the given user based at least on this comparison.

For example, the comparison which provides the best matching between the obtained data and the stored data of a given user can indicate that the candidate user is the given user to which the stored data are associated.

If the comparison does not meet a matching criterion, it may be concluded that the candidate user does not correspond to any of the users for which user data are stored in the storage unit.
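By way of non-limiting illustration, steps 32 and 33 could be sketched as follows under simplifying assumptions: the motion data are fixed-length vectors, the comparison is a cosine similarity, and the matching criterion is a minimum-similarity threshold. All of these choices, and the names used, are illustrative:

```python
import numpy as np

def identify(candidate, stored, threshold=0.9):
    """Return the key of the best-matching stored motion vector, or None if
    the best match does not meet the matching criterion (threshold)."""
    def cosine(a, b):
        a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    similarities = {user: cosine(candidate, vec) for user, vec in stored.items()}
    best = max(similarities, key=similarities.get)
    return best if similarities[best] >= threshold else None

# Hypothetical stored facial-feature motion vectors for two enrolled users
stored = {"user_1": [1.0, 0.0, 0.2], "user_2": [0.0, 1.0, 0.0]}
print(identify([0.9, 0.1, 0.2], stored))  # → user_1
```

The similarity value returned for the best match can also serve as the confidence rate mentioned below.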

According to some embodiments, other comparisons and tests can be used in addition or in combination to provide an identification of the candidate user based on multiple identification data (see e.g. Fig. 14).

According to some embodiments, a probability is provided which indicates a confidence rate that the candidate user corresponds to the given user.

According to some embodiments, the pictures obtained from the camera are pre-processed in order to allow the extraction of the motion of the facial features (as explained e.g. with reference to Fig. 4) and then fed to a machine learning algorithm, which extracts the data representative of the motion of the facial features, compares them with the data stored for a plurality of users and provides an identification of the candidate user based at least on this comparison.

Attention is now drawn to Fig. 4 which depicts a possible embodiment of a method of processing the pictures of the candidate user in order to extract data representative of a motion of facial features of the candidate user.

The method can comprise a step 40 of extracting, from each of at least part of the plurality of pictures, an image comprising the head of the candidate user. The image of the head can be obtained by cropping the pictures.

The extraction can be based on image processing techniques, which allow identifying in each picture a portion which comprises the head of the candidate user. Image processing techniques include e.g. OpenCV face detection (cascade of filters), dlib face and facial feature detection (HOG + SVM), YOLO (head detection), etc. These examples are however not limitative.

The method can comprise performing 41 an alignment of the extracted images using a common spatial reference so as to obtain a set of images, wherein at least part of the head of the candidate user is used as a common spatial reference. For example, the extracted images can be processed so that the head of the candidate user is always in the centre of the images, with the same orientation.

This processing allows removing the motion of the facial features which is due to other causes, such as the motion of the head of the candidate user with respect to the body of the user and/or the motion of the body of the candidate user.

If necessary, the extracted images can be resized (or the plurality of pictures themselves, or the set of images), since the head of the candidate user can appear with a different size in the extracted images depending on the relative position of the candidate user with respect to the camera. The resizing can allow a subsequent comparison between the different images, in order to extract facial feature motion.
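The crop-and-resize step can be sketched as follows, assuming a head bounding box has already been produced by one of the detectors mentioned above. The function name `crop_and_align`, the box format and the nearest-neighbour resampling are illustrative choices; a real pipeline would typically use a library resizer.

```python
def crop_and_align(frame, box, out_size):
    """Crop the head bounding box (x, y, w, h) from a frame (given as a 2-D
    list of pixel values) and resize it to a common output size (ow, oh)
    using nearest-neighbour sampling, so that heads detected at different
    distances from the camera become directly comparable."""
    x, y, w, h = box
    crop = [row[x:x + w] for row in frame[y:y + h]]
    ow, oh = out_size
    # Nearest-neighbour resampling to the common output size.
    return [[crop[int(j * h / oh)][int(i * w / ow)] for i in range(ow)]
            for j in range(oh)]
```

Applying this to every picture yields the set of same-sized, head-centred images on which the motion extraction of step 42 can operate.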

The method can then comprise a step 42 of obtaining data representative of the motion of facial features of the candidate user based at least on said set of images.

This step 42 can be performed e.g. by comparing the evolution of the position (or of other inertial data) of each facial feature in the set of images, which is indicative of the motion of said facial feature.

According to some embodiments, a subtraction is performed between image N (extracted from a picture obtained at time N) and image N-1 (extracted from a picture obtained at time N-1) in order to be able to identify the differences in time, which are indicative of the motion of the facial features.

According to some embodiments, the set of images, and/or a set of images built based on the subtraction between image N and image N-1, are provided to a machine learning algorithm, which builds a vector representing the motion of the facial features of the user, based at least on this input.
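The subtraction between image N and image N-1 described above amounts to simple pixel-wise differencing of the aligned images. A minimal sketch, with `frame_difference` as an illustrative name:

```python
def frame_difference(img_prev, img_curr):
    """Pixel-wise subtraction between two consecutive aligned images of the
    same size; non-zero entries mark regions that moved between time N-1
    and time N, i.e. candidate facial-feature motion."""
    return [[curr - prev for prev, curr in zip(row_prev, row_curr)]
            for row_prev, row_curr in zip(img_prev, img_curr)]
```

The resulting difference images (one per consecutive pair) can then be stacked and fed to the machine learning algorithm that builds the motion vector.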

According to some embodiments, the machine learning algorithm is a deep learning algorithm.

A non-limiting and purely illustrative example of the method of Fig. 4 is provided in Fig. 5.

As shown, three pictures 50, 51 and 52 of a candidate user are obtained from a camera. In these pictures, the body of the candidate user is moving due to the fact that the candidate user is walking. It is desired to extract the motion of the left eye of the candidate user (the left side is considered from the candidate user's perspective). As depicted in Fig. 5, the left eye of the user is moving up and down in the pictures.

As explained with reference to Fig. 4, an image of the head of the candidate user is cropped from each picture, and the images are aligned using the head of the candidate user as a common spatial reference. In particular, the head of the candidate user is aligned and rotated so as to appear in the centre of the images.

Three images 53, 54 and 55 are obtained. The motion of the facial features due to the motion of the body and due to the motion of the head has been removed, and it is now possible to extract the motion which is specific to the facial features.

In this example, by comparing the position of the left eye of the candidate user in the different images 53, 54 and 55, it is possible to obtain data representative of the motion of the left eye of the user.

This example is purely illustrative and various other methods and techniques can be used to extract data representative of a motion of the facial features of the candidate user.

Attention is now drawn to Fig. 6, which depicts another possible embodiment of a method of identifying a candidate user.

The method can comprise a step 60 of obtaining a plurality of pictures of a candidate user acquired by at least one camera. This step 60 is similar to step 30 described with reference to Fig. 3.

The method can comprise a step 61 of obtaining data representative of a motion of the head of the candidate user, based on at least part of the plurality of pictures.

Examples of methods for obtaining these data are provided with reference to Figs. 7 to 10.

Different types of motion can be obtained for the head of the candidate user.

According to some embodiments, data representative of at least one of a first motion and of a second motion of the head are obtained, wherein the second motion is different from the first motion.

According to some embodiments, step 61 comprises obtaining data representative of a first motion of the head of the candidate user.

A first motion can correspond e.g. to the motion of the head of a user which is not due to the fact that the user is walking, and which can be present even when the user is standing still. Indeed, even when the user is standing still, his head may have a typical motion which can be used to identify the user. For example, he may have a typical way to wag his head. This is however not limitative.

According to some embodiments, the first motion corresponds to a relative motion of the head of said user with respect to an upper part of the body of said user. The upper part of the body can correspond e.g. to the shoulders of the user. This first motion is thus due to the motion of the neck of the user.

According to some embodiments, the first motion corresponds only to a relative motion of the head of said user with respect to an upper part of the body of said user.

According to some embodiments, the first motion corresponds to a motion of the head which is independent (that is to say that a dependency criterion is not met) from the motion of the body of said user. Indeed, this can reflect the fact that the first motion of the head of the user is not due to the walking of the user, but corresponds to the specific motion of his head.

According to some embodiments, the first motion corresponds to a motion of the head which does not meet a correlation criterion with respect to the motion of the body of said user. This can also reflect the fact that the first motion of the head of the user is not due to the walking of the user, but corresponds to the specific motion of his head.

According to some embodiments, the first motion corresponds to a motion of the head whose frequency does not meet a correlation criterion with respect to the frequency of the motion of the body of said user.

According to some embodiments, the fact that the motion has an amplitude below a threshold can also be taken into account in order to identify the first motion. Indeed, the motion of the head of the user which is not due to walking typically comprises micro-movements, whose amplitude is below the amplitude of the motion of the head due to the walking of the user.

According to some embodiments, the first motion corresponds to a motion of the head whose frequency is above a threshold.

As mentioned above, data representative of a second motion of the head of the candidate user can be obtained, in addition to the data representative of the first motion, or instead of the data representative of the first motion.

The second motion can correspond e.g. to the motion of the head of the user which is due to the fact that the user is walking.

For example, a user who is limping will have a typical motion of his head, which can be used for identification.

According to some embodiments, the second motion of the head of a user corresponds to a motion of the head with respect to a reference which is external to the user. This reference can be e.g. the frame of the pictures taken by the camera, or a static object in the pictures. According to some embodiments, the second motion of the head of a user corresponds to a motion of the head with respect to a reference which is external to the user, and from which the first motion of the head of said user has been removed.

Indeed, the motion of the head of the user with respect to an external reference can also be due to the first motion. For example, if a user is limping (second motion) and is also wagging his head (first motion), the total motion of his head with respect to an external reference comprises both the first motion and the second motion.

In some embodiments, the influence of the first motion on the motion of the head with respect to an external reference is ignored. In other embodiments, the first motion is subtracted in order to take into account the influence of the first motion.

According to some embodiments, the second motion of the head of a user corresponds to a motion of the head of said user which is dependent on the motion of the body of said user.

According to some embodiments, the second motion of the head of a user corresponds to a motion of the head of said user which meets a correlation criterion with respect to the motion of the body (such as the torso) of said user. Indeed, since the second motion corresponds to a motion due to the walking of the user, it is more correlated to the motion of the body than the first motion.

According to some embodiments, the second motion corresponds to a motion of the head whose frequency meets a correlation criterion with respect to the frequency of the motion of the body of said user.

According to some embodiments, the second motion corresponds to a motion of the head whose frequency is below a threshold.

According to some embodiments, the fact that the motion has an amplitude above a threshold can also be taken into account in order to identify the second motion. Indeed, the motion of the head of the user which is due to walking typically comprises movements of larger amplitude than the micro-movements of the head which correspond to the first motion.

According to some embodiments, the second motion corresponds to the motion of the body (that is to say the upper part of the body such as the torso) in the pictures, with respect to a reference which is external to the user.
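The amplitude and frequency criteria described in the paragraphs above can be combined into a simple heuristic classifier. This is a sketch only: the function name `classify_head_motion` and the threshold values are illustrative placeholders, not values taken from the source.

```python
def classify_head_motion(amplitude, frequency,
                         amp_threshold=5.0, freq_threshold=2.0):
    """Heuristic split between the two motion types: low-amplitude,
    high-frequency micro-movements correspond to the first motion (head
    relative to the body), while high-amplitude, low-frequency movement
    corresponds to the second motion (induced by walking)."""
    if amplitude < amp_threshold and frequency > freq_threshold:
        return "first"
    if amplitude >= amp_threshold and frequency <= freq_threshold:
        return "second"
    # Criteria disagree: leave the component unclassified.
    return "ambiguous"
```

In practice the thresholds would be tuned on data, or the split would be learned by the machine learning algorithm mentioned in the text.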

The method can then comprise a step 62 of comparing the obtained data with data representative of motion of the head of each of a plurality of users comprising a given user (such as the data representative of a motion of the head of each of a plurality of users depicted in Fig. 2).

The comparison can comprise e.g. a statistical comparison, correlation methods, etc.

According to some embodiments, a machine learning algorithm is used to perform this comparison.

According to some embodiments, a deep learning algorithm, such as a convolutional neural network algorithm, can be used.

The method can then comprise a step 63 of identifying the candidate user as the given user based at least on this comparison.

For example, the comparison which provides the best matching between the obtained data and the stored data of a given user can indicate that the candidate user is the given user to which the stored data are associated.

According to some embodiments, other comparisons and tests can be used in addition or in combination to provide an identification of the candidate user based on multiple identification data.

According to some embodiments, a probability is provided which indicates a confidence rate that the candidate user corresponds to the given user.

If the comparison does not meet a matching criterion, it may be concluded that the candidate user does not correspond to any of the users for which data are stored in the storage unit.

According to some embodiments, the pictures obtained from the camera are pre-processed in order to allow the extraction of the motion of the head of the candidate user (as explained e.g. with reference to Fig. 7) and then fed to a machine learning algorithm, which extracts the data representative of the motion of the head of the user, compares them with the data stored for a plurality of users and provides an identification of the candidate user based at least on this comparison.

Attention is now drawn to Fig. 7 which depicts a possible embodiment of a method of obtaining data representative of the first motion of the head of the candidate user.

The method can comprise a step 70 of extracting, from each of at least part of the plurality of pictures, an image comprising the head and an upper part of the body (such as an upper part of the torso, e.g. the shoulders) of the candidate user. This extraction can be based on image processing techniques (non-limiting examples have been provided above), which allow identifying in each picture a portion which comprises the head and the upper part of the body of the candidate user.

The method can comprise performing 71 an alignment of the extracted images using a common spatial reference so as to obtain a set of images, wherein at least a part of the upper part of the body of the candidate user is used as a common spatial reference.

For example, the extracted images can be processed so that the upper part of the body of the candidate user is always at a given location in the images, and with the same orientation.

This processing allows removing the motion of the head which is due to other causes, such as the motion of the body in the pictures, due e.g. to the walking of the candidate user.

If necessary, the extracted images can be resized (or the plurality of pictures themselves, or the set of images), since the head and the upper part of the body of the candidate user can appear with a different size in the extracted images depending on the relative position of the candidate user with respect to the camera.

According to some embodiments, a subtraction is performed in the set of images between image N (extracted from a picture obtained at time N) and image N-1 (extracted from a picture obtained at time N-1) in order to be able to identify the differences in time, which are indicative of the first motion of the head.

The method can then comprise a step 72 of obtaining data representative of a first motion of the candidate user based at least on said set of images.

This step 72 can be performed e.g. by comparing the evolution of the relative position of the head of the candidate user with respect to the upper part of the body of the candidate user, which can be indicative of the first motion of his head.

According to some embodiments, the set of images (obtained at step 71 - as mentioned, this set of images can be processed to perform a subtraction between subsequent images) can be provided to a machine learning algorithm, which can then provide as an output data representative of the first motion.
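Step 72 can be sketched directly from detected landmark positions, assuming a head position and a shoulder (upper-body) position are available per image. `first_motion` is an illustrative name; the (x, y) tuples stand in for any inertial data.

```python
def first_motion(head_positions, shoulder_positions):
    """Compute the position of the head relative to the shoulders in each
    aligned image, then the per-frame change of that relative position.
    The resulting displacement sequence is data representative of the
    first motion (head motion with respect to the upper body)."""
    relative = [(hx - sx, hy - sy)
                for (hx, hy), (sx, sy) in zip(head_positions, shoulder_positions)]
    return [(x1 - x0, y1 - y0)
            for (x0, y0), (x1, y1) in zip(relative, relative[1:])]
```

Because the positions are taken relative to the shoulders, displacement of the whole body between frames cancels out, which mirrors the alignment step 71.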

According to some embodiments, the machine learning algorithm is a deep learning algorithm.

A non-limiting and purely illustrative example of the method of Fig. 7 is provided in Fig. 8.

As shown, three pictures 80, 81 and 82 of a candidate user are obtained. In these pictures, the body of the candidate user is moving. It is desired to extract data representative of the first motion of the head of the user. As visible in the pictures, the candidate user is tilting his head with respect to his body, from right to left (the right and left sides are considered from the candidate user's perspective).

As explained with reference to Fig. 7, the pictures are processed so as to extract the head and the shoulders of the candidate user, wherein the shoulders are used as a common spatial reference.

Three images 83, 84 and 85 are obtained, in which the shoulders all appear with the same position and orientation. The motion of the head due to the motion of the body of the candidate user has been removed, and it is now possible to extract the first motion.

In this example, by comparing the position/inclination/rotation (or other inertial data) of the head in the different images 83, 84 and 85, it is possible to obtain data representative of the first motion of the head of the user.

Attention is now drawn to Fig. 9 which depicts a possible embodiment of a method of obtaining data representative of the second motion of the head of the candidate user.

The method can comprise a step 90 of identifying the head of the candidate user in the pictures obtained from the camera. Image processing techniques can be used (non-limiting examples have been provided above).

The method can comprise a step 91 of determining the position of the head of the candidate user with respect to a reference which is external to the candidate user (that is to say that this reference does not depend on the motion of the candidate user - this reference can be fixed or can have a known motion).

For example, the position of the head of the candidate user can be calculated with respect to the frame of the pictures. This can be used e.g. when the camera is a fixed camera (this is however not mandatory). Other references can include e.g. an object having a fixed position in the pictures.

Data representative of the second motion of the head of the user can correspond to the different positions (or to other inertial data) determined at step 91.

According to some embodiments, a first motion of the head can be removed from the second motion.

Indeed, the evolution of the position of the head in the pictures with respect to an external reference is mainly due to the motion of the body. However, as mentioned above, a first motion of the head, which can correspond to the relative motion of the head with respect to the upper part of the body, also has an impact on the evolution of the position of the head in the pictures.

In some embodiments, an approximation of the second motion can be used, and the first motion is not removed from the second motion. Indeed, the first motion generally corresponds to a motion of much smaller amplitude than the amplitude of the second motion.

In other embodiments, the first motion is removed from the second motion. This can comprise performing a subtraction between the data representative of the second motion and data representative of the first motion, in order to obtain updated data representative of the second motion.

A non-limiting example of the method of Fig. 9 is depicted in Fig. 10.

As shown, three pictures 100, 101 and 102 of the candidate user are obtained. The head is identified in the three pictures and the position (x, y) of the head is calculated for each picture, with respect to the frame 104 (external reference) of the pictures.

The evolution of the position (x, y) as calculated from the pictures can be defined as the data representative of the second motion of the head of the candidate user.

According to other embodiments, data representative of the first motion and/or of the second motion of the head of the candidate user can be obtained using different methods.

As mentioned above, according to some embodiments, the first motion can be defined as a motion which does not meet a correlation criterion with respect to the motion of the body of said user.

In particular, a method of obtaining data representative of the first motion can comprise extracting data representative of the motion of the body (hereinafter "first data", such as the frequency at which the body is moving in the pictures) and extracting comparable data for the head (that is to say data representative of the motion of the head, hereinafter "second data", such as the frequency at which the head is moving in the pictures).

The motion of the head can be viewed as the sum of two components: a first component corresponds to the first motion, and a second component corresponds to the second motion, mainly due to the motion of the body itself.

The method can comprise performing a correlation of the first data with the second data in order to determine their level of correlation. For example, if a frequency analysis is performed, some frequencies of the second data will be more correlated to the first data than other frequencies of the second data.

The components of the second data which are the least correlated to the first data can be defined as the data representative of the first motion.

Indeed, a low level of correlation between the first data and components of the second data can indicate that the motion associated to these components corresponds to a first motion of the head.

On the contrary, a high level of correlation between the first data and components of the second data can indicate that the motion associated to these components does not correspond to the first motion of the head, but rather to the second motion of the head.

For example, if the first and the second data correspond to frequency, the frequencies of the second data that are correlated (according to a correlation criterion defining a given threshold of correlation) to the frequency of the motion of the body can correspond to the second motion, and the frequencies of the second data that are not correlated (according to a correlation criterion defining a given threshold of correlation) to the frequency of the motion of the body can correspond to the first motion.

According to other embodiments, a level of dependency is computed between the first data and the second data, in order to identify the first motion and the second motion. The components of the second data which have a level of dependency higher than a threshold with the first data can be identified as the second motion, and the components of the second data which have a level of dependency lower than a threshold with the first data can be identified as the first motion. This embodiment can also be based e.g. on a frequency analysis.

According to other embodiments, the amplitude of the motion of the head can also be taken into account for identifying the first and the second motions, since the first motion generally corresponds to a motion with an amplitude lower than a threshold and the second motion generally corresponds to a motion with an amplitude higher than a threshold.

According to some embodiments, the frequencies of the motion of the head can be taken into account alone or in combination with other data, in order to identify the first motion and the second motion. Indeed, the first motion is generally of higher frequency than the second motion. Thus, identifying the first motion can comprise identifying the motion which has a frequency which is above a threshold, and identifying the second motion can comprise identifying the motion which has a frequency which is below a threshold.
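One simple way to realise this frequency split is a low-pass filter: the slow, large component of the head trajectory approximates the second motion, and the fast residual approximates the first motion. The sketch below uses a moving average; the function name `split_motions` and the window length are illustrative assumptions, and real implementations might use an FFT-based filter instead.

```python
def split_motions(head_y, window=5):
    """Separate a 1-D head trajectory (e.g. vertical position per frame)
    into a low-frequency component (second motion, gait-induced) and a
    high-frequency residual (first motion) with a moving-average low-pass
    filter. Returns (low, high) lists of the same length as the input."""
    half = window // 2
    low = []
    for i in range(len(head_y)):
        segment = head_y[max(0, i - half):i + half + 1]
        low.append(sum(segment) / len(segment))
    high = [y - l for y, l in zip(head_y, low)]
    return low, high
```

For a perfectly steady head the residual is zero, so everything measured against the external reference is attributed to the second motion, consistent with the definitions above.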

Attention is now drawn to Fig. 11, which depicts a possible embodiment of a method of identifying a candidate user.

The method can comprise a step 110 of obtaining a plurality of pictures of a candidate user acquired by at least one camera. This step is identical to step 30, and is not described again.

The method can comprise a step 111 of obtaining data representative of a motion of the hands (or of at least one hand) of the candidate user based on at least part of the plurality of pictures.

The method can then comprise a step 112 of comparing the obtained data with data representative of the motion of the hands of each of a plurality of users comprising a given user (such as the data representative of motion of the hands of each of a plurality of users depicted in Fig. 2).

The comparison can comprise e.g. a statistical comparison, correlation methods, etc.

According to some embodiments, a machine learning algorithm is used to perform this comparison.

According to some embodiments, a deep learning algorithm, such as a convolutional neural network algorithm, can be used.

The obtained data can be in some embodiments stored in a vector, in which each component of the vector corresponds to a motion of one of the hands. For example, a first component corresponds to the left hand and a second component corresponds to the right hand.

The method can then comprise a step 113 of identifying the candidate user as the given user based at least on this comparison.

For example, the comparison which provides the best matching between the obtained data and the stored data of a given user can indicate that the candidate user is the given user to which the stored data are associated.

According to some embodiments, other comparisons and tests can be used in addition or in combination to provide an identification of the candidate user based on multiple identification data. According to some embodiments, a probability is provided which indicates a confidence rate that the candidate user indeed corresponds to the given user.

If the comparison does not meet a matching criterion, it may be concluded that the candidate user does not correspond to any of the users for which data are stored in the storage unit.

According to some embodiments, the determination of the data representative of the motion of the hand(s) can be performed as follows.

The hands of the candidate user can be identified in the pictures using an image processing algorithm (non-limiting examples have been provided above).

The position (x, y) of each hand is calculated for each picture, with respect to a reference which is external to the user, such as the frame of the pictures, or any other relevant external reference (see e.g. the examples provided with reference to Fig. 9).

The evolution of the position (x, y) of the hand(s) can be defined as the data representative of the motion of the hands of the candidate user. Other data can include velocity, rotation, etc. of the hands in the pictures, as explained e.g. with reference to Fig. 2.
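The two-component vector described earlier (one component per hand) can be sketched as follows, assuming per-frame (x, y) hand positions are already available from the image processing step. `hand_motion_vector` is an illustrative name.

```python
def hand_motion_vector(left_positions, right_positions):
    """Build a two-component motion vector: the first component holds the
    per-frame displacements of the left hand, the second those of the
    right hand, both measured against an external reference such as the
    frame of the pictures."""
    def displacements(positions):
        return [(x1 - x0, y1 - y0)
                for (x0, y0), (x1, y1) in zip(positions, positions[1:])]
    return [displacements(left_positions), displacements(right_positions)]
```

With a single camera this captures only the 2D motion in the picture plane, as noted in the text.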

In some embodiments, data representative of the motion of the hands of the user are extracted from pictures obtained from a single camera (that is to say that it is not necessary to use a plurality of cameras with different orientations).

It is to be noted that according to some embodiments, a single camera is used, and only a 2D motion (in the plane of the frame of the pictures) is determined for the hands.

According to some embodiments, frontal pictures of the candidate user are obtained, and thus frontal pictures of the hands of the user are used to identify the candidate user as the given user.

According to some embodiments, the method of Fig. 11 can be applied to identify the candidate user, based at least on the motion of at least part of the arm(s) of the candidate user. Similar steps can be used as described with reference to Fig. 11.

Attention is now drawn to Fig. 12, which depicts another possible embodiment of a method of identifying a candidate user.

Some candidate users can be identified by the trajectory or route in the images. As mentioned above with respect to Fig. 2, some people tend to walk close to the walls, whereas other people tend to walk in the centre of the pathway. The method can comprise a step 120 of obtaining a plurality of pictures of a candidate user acquired by at least one camera 1. This step is identical to step 30, and is not described again.

The method can comprise a step 121 of obtaining data representative of a trajectory of the candidate user based on at least part of the plurality of pictures.

This trajectory can be calculated e.g. by identifying the position of the body of the user, and/or of the head of the user, with respect to a reference external to the user (such as the frame of the image, or a fixed object).

According to some embodiments, the orientation of the eyes in the pictures can be extracted and used to determine the direction along which the candidate user is walking. Indeed, a candidate user generally looks in front of him while he is walking.

According to some embodiments, the method comprises analysing the evolution of the size of at least part of the body and/or of the head of the candidate user, and determining data representative of the route of the candidate user based on this evolution. Indeed, when a candidate user is approaching the camera, the size of his body and/or of his head is growing, thereby providing an indication of the relative position of the user with respect to the camera.
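The size-evolution cue can be sketched as a per-frame approach/recede signal, assuming an apparent head height (in pixels) per picture. `approach_profile` is an illustrative name.

```python
def approach_profile(head_heights):
    """Derive a coarse route cue from the apparent head size in successive
    pictures: a growing head indicates the user is approaching the camera.
    Returns +1 (approaching), -1 (receding) or 0 per frame pair."""
    def sign(d):
        return (d > 0) - (d < 0)
    return [sign(h1 - h0) for h0, h1 in zip(head_heights, head_heights[1:])]
```

Combined with the (x, y) position of the body in the frame, this yields a rough trajectory that can be compared against stored route data.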

The method can then comprise a step 122 of comparing the obtained data with data representative of the trajectory or route of each of a plurality of users comprising a given user.

The comparison can comprise e.g. a statistical comparison, correlation methods, etc.

According to some embodiments, a machine learning algorithm is used to perform this comparison.

According to some embodiments, a deep learning algorithm, such as a convolutional neural network algorithm, can be used.

The method can then comprise a step 123 of identifying the candidate user as the given user based at least on this comparison.

For example, the comparison which provides the best matching between the obtained data and the stored data of a given user can indicate that the candidate user is the given user to which the stored data are associated.

It is to be noted that at least according to some embodiments described above, the candidate user can be identified by the identification system while he is in motion with respect to the camera (in particular, according to at least some of the embodiments, he does not have to stand still at an identification point or zone).

Attention is now drawn to Fig. 13.

In various methods described above, a candidate user is identified as the given user by comparing data extracted from pictures of the candidate user with comparable stored data of a plurality of users comprising a given user, as explained with reference to Fig. 2.

A possible embodiment of a method of obtaining these stored data is illustrated in Fig. 13.

Generally, for each new user, a first phase can comprise determining a first set of motion data associated with this user. This first phase can be performed as described in Fig. 13.

In addition, even for known users, the method can comprise updating at least part of the motion data of the known user at each identification of the user by the identification system, based at least on data that were extracted from the pictures to identify this user during this identification step.

The method of Fig. 13 can comprise a step 130 of identifying a candidate user.

Since in this example the candidate user can be a new user who does not yet have motion data (see motion data 20 in Fig. 2) stored and associated with him in a storage unit, this step 130 can comprise using an identification method which does not necessarily involve these motion data, such as (but not limited to) a face recognition method.

A face recognition method generally comprises comparing a picture of the candidate user with a plurality of pictures of a plurality of users including a given user. Other identification methods can be used.

During step 130, pictures of the candidate user are acquired by a camera (such as camera 7 described above).

Once the candidate user has been identified as a given user, it is now known that these pictures are pictures of the given user.

At step 131, once the candidate user has been identified as the given user, the pictures acquired by the camera at step 130 are processed to extract user data (such as motion data) representative of the given user.

Step 131 can comprise e.g. extracting data representative of a motion of facial features of the given user. Embodiments of methods for performing this extraction were described e.g. with respect to Figs. 3 to 5. Step 131 can comprise e.g. extracting data representative of a motion of the head of the given user.

Embodiments of methods for performing this extraction were described e.g. with respect to Figs. 6 to 10.

Step 131 can comprise e.g. extracting data representative of a motion of the hands of the given user.

Embodiments of methods for performing this extraction were described e.g. with respect to Fig. 11.

Step 131 can comprise e.g. extracting data representative of a trajectory or route of the given user.

Embodiments of methods for performing this extraction were described e.g. with respect to Fig. 12.

According to some embodiments, steps 130 and 131 can be performed repeatedly for each given user (see arrow 132), for example N times, until a sufficient amount of data are collected for each given user.

For each given user, motion data representing the given user (motion of his head, motion of his hands, motion of his facial features, motion representing his route, etc.) can be extracted each time he has been identified.
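The repeated enrollment of steps 130 and 131 (arrow 132) can be sketched as a loop that identifies the user, extracts motion data from the pictures taken during that identification, and accumulates the result until enough samples exist. The function names and callables below are hypothetical placeholders for the identification, extraction, and camera components described above.

```python
def enroll_user(identify, extract_motion_data, acquire_pictures, n_sessions=5):
    """Illustrative enrollment loop (steps 130-131 repeated, arrow 132).

    identify            -- callable: pictures -> user ID or None (step 130)
    extract_motion_data -- callable: pictures -> motion-data sample (step 131)
    acquire_pictures    -- callable: () -> pictures from the camera
    n_sessions          -- how many successful identifications to collect
    """
    collected = []
    while len(collected) < n_sessions:
        pictures = acquire_pictures()
        user_id = identify(pictures)                         # step 130
        if user_id is not None:
            collected.append(extract_motion_data(pictures))  # step 131
    return collected
```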

For each motion data, an aggregation of the collected data can be performed to obtain an aggregated set of data for the given user, which are representative of said given user. This aggregation can comprise e.g. averaging the motion data over time, or using other aggregation algorithms.
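The averaging aggregation mentioned above could look like the following, assuming each collected sample is a fixed-length feature vector; this is one of the simplest aggregation algorithms the passage allows for, shown purely as an example.

```python
import numpy as np

def aggregate_motion_data(samples):
    """Element-wise average of the motion-data vectors collected over
    several identifications of the same given user. Each sample is
    assumed to be a fixed-length NumPy feature vector."""
    return np.mean(np.stack(samples), axis=0)
```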

According to some embodiments, each time a candidate user is identified as a given user, pictures of the given user that were taken by the camera during identification are stored in a database (on a storage unit, such as of the identification system). Thus, if the candidate user has been identified M times as the given user, and N pictures of the candidate user have been acquired by the camera at each identification, a total of N*M pictures is obtained.

Then, these N*M pictures can be pre-processed (as explained e.g. with reference to Figs. 4 and 7) and fed to a machine learning algorithm, and/or they can be fed as such to the machine learning algorithm.

The machine learning algorithm can then provide the required aggregated motion data representative of the given user, which include e.g. data representative of a motion of facial features of the given user, data representative of a motion of the head of the given user, data representative of a motion of the hands of the given user, data representative of a route of the given user etc.

The machine learning algorithm can include e.g. a deep learning algorithm.

Once motion data have been obtained for a new given user, such as by using the method described with reference to Fig. 13 or by using another method, it has to be noted that at least part of these motion data can be updated over time.

Indeed, each time a candidate user is identified as a given user, new motion data are collected during the identification process. These new motion data can be used to update or modify the motion data that were stored until now in the storage unit for this given user.

A machine learning algorithm can be used in order to decide to what extent the new motion data can be used for updating the motion data of this given user. Other methods and techniques can be used to continuously update the motion data of the given user based on the new motion data.
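As one example of the "other methods and techniques" mentioned above (and deliberately simpler than the machine-learning decision the text describes), an exponential moving average can blend each newly collected motion-data vector into the stored profile. The function name and the alpha parameter are illustrative assumptions.

```python
def update_motion_data(stored, new, alpha=0.1):
    """Blend newly collected motion data into the stored profile with an
    exponential moving average. alpha controls how strongly the new
    data influence the stored profile (0 = ignore new, 1 = replace)."""
    return [(1 - alpha) * s + alpha * n for s, n in zip(stored, new)]
```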

Attention is now drawn to Fig. 14.

As mentioned in the different embodiments described above, various comparisons can be performed between motion data of a candidate user extracted from the obtained pictures and motion data of a plurality of users, in order to identify a candidate user as a given user.

According to some embodiments, at least two different motion data can be obtained for the candidate user (step 141) based on the obtained pictures (obtained at step 140), which can then be compared (step 142) to comparable pre-stored motion data of each of a plurality of users (see e.g. the motion data described with reference to Fig. 2).

For example, a first motion data of the candidate user can correspond e.g. to data representative of the motion of facial features of the candidate user, which are compared to data representative of the motion of facial features of a plurality of users.

A second motion data of the candidate user can correspond e.g. to data representative of the motion of the head of the candidate user, which are compared to data representative of the motion of the head of a plurality of users.

This example is however not limitative, and a different number of motion data and different kinds of motion data can be used. Various examples of motion data representative of the candidate user have already been provided in the embodiments described above. The identification of the candidate user (step 143) as the given user can comprise performing an aggregation of an output of a plurality of these different comparisons, in order to identify the user based on multiple motion data.

The output of the aggregation can provide that the candidate user is the given user with an aggregated confidence rate (such as an aggregated probability) based on a plurality of different motion data obtained for the candidate user.

The aggregation can be performed using various different methods and algorithms, and can rely in some embodiments on weights which are associated with each motion data.

According to some embodiments, fixed weights are used to aggregate the different comparisons: a first weight is attributed to a first comparison associated with a first motion data, a second weight is attributed to a second comparison associated with a second motion data, etc.
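The fixed-weight aggregation described above reduces to a weighted sum of the individual comparison outputs. The sketch below assumes each comparison yields a similarity score in [0, 1] and that the weights sum to 1; the function name is illustrative.

```python
def aggregate_confidence(scores, weights):
    """Fixed-weight aggregation of several comparison outputs (e.g. a
    facial-feature score and a head-motion score, each in [0, 1]) into a
    single confidence that the candidate user is the given user.
    Weights are assumed to sum to 1."""
    return sum(w * s for w, s in zip(weights, scores))
```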

According to some embodiments, the aggregation can be performed by a machine learning algorithm, such as a deep learning algorithm.

It has to be noted that the various embodiments described above can be combined with known identification techniques. For example, the results of the identification method described in the various embodiments above and based on motion data can be aggregated with other identification methods, such as face recognition methods, fingerprint identification methods, etc., in order to provide an aggregated decision on the identity of the candidate user based on multiple identification data.

It is to be noted that the various features described in the various embodiments may be combined according to all possible technical combinations.

It is to be understood that the invention is not limited in its application to the details set forth in the description contained herein or illustrated in the drawings. The invention is capable of other embodiments and of being practiced and carried out in various ways. Hence, it is to be understood that the phraseology and terminology employed herein are for the purpose of description and should not be regarded as limiting. As such, those skilled in the art will appreciate that the conception upon which this disclosure is based may readily be utilized as a basis for designing other structures, methods, and systems for carrying out the several purposes of the presently disclosed subject matter.

Those skilled in the art will readily appreciate that various modifications and changes can be applied to the embodiments of the invention as hereinbefore described without departing from its scope, defined in and by the appended claims.