

Title:
LOCALISED, LOOP-BASED SELF-LEARNING FOR RECOGNISING INDIVIDUALS AT LOCATIONS
Document Type and Number:
WIPO Patent Application WO/2020/234737
Kind Code:
A1
Abstract:
A method for recognising individuals at a location, the method comprising: locally capturing images of individuals at the location; locally recognising individuals in the images by a local recogniser trained with local training data for individuals previously recognised, or expected to be, at the location; for individuals that initially cannot be locally recognised, retrieving additional training data from a remote recogniser using query data extracted from the images by the local recogniser; updating the local training data with the additional training data; retraining the local recogniser with the updated local training data to locally recognise the individuals that initially could not be locally recognised.

Inventors:
SAVILL DAVID (AU)
Application Number:
PCT/IB2020/054669
Publication Date:
November 26, 2020
Filing Date:
May 18, 2020
Assignee:
LOOPLEARN PTY LTD (AU)
International Classes:
G07C9/30; G06N20/00
Foreign References:
KR20160116678A 2016-10-10
JP2018163524A 2018-10-18
KR20180074565A 2018-07-03
US20030198368A1 2003-10-23
JP2011150497A 2011-08-04
Attorney, Agent or Firm:
MILLS OAKLEY (AU)
Claims

1. A method for recognising individuals at a location, the method comprising:

locally capturing images of individuals at the location;

locally recognising individuals in the images by a local recogniser trained with local training data for individuals previously recognised, or expected to be, at the location;

for individuals that initially cannot be locally recognised, retrieving additional training data from a remote recogniser using query data extracted from the images by the local recogniser;

updating the local training data with the additional training data;

retraining the local recogniser with the updated local training data to locally recognise the individuals that initially could not be locally recognised.

2. The method of claim 1, wherein the local recogniser self-learns to locally recognise individuals in the images at the location using an iterative loop that updates and refines the local training data with additional training data from the remote recogniser.

3. The method of claim 1, wherein the local recogniser may self-update and self-refine the local training data with additional training data for individuals that are routinely locally recognised in the images at the location.

4. The method of claim 1, wherein the local training data is periodically updated with additional training data from the remote recogniser based on calendar, timetable or scheduling data for individuals expected to be at the location.

5. The method of claim 1, further comprising periodically writing over and refreshing the local training data so that only newest or most recent local training data for individuals expected to be at the location is retained.

6. The method of claim 1, wherein the local training data, query data and additional training data may comprise embeddings or object recognition data extracted from images of individuals.

7. The method of claim 1, wherein the local recogniser locally recognises individuals in the images by one or both of embedding-based recognition and object recognition.

8. The method of claim 7, wherein the local recogniser initially performs embedding-based recognition of individuals in the images until object recognition of the individuals can be performed.

9. The method of claim 1, wherein the local recogniser and remote recogniser comprise convolutional neural networks (CNNs).

10. The method of claim 1, wherein the images of the individuals are locally captured at the location in zones representing same or similar contexts, distances, angles, or lighting conditions.

11. The method of claim 10, wherein the local training data is context specific to the location.

12. The method of claim 11, wherein the local recogniser performs context aware local recognition of individuals at the location using the context specific local training data.

13. The method of claim 1, further comprising monitoring attendance of individuals at the location using the local recogniser.

14. The method of claim 1, wherein the images of the individuals are locally captured by a local image capture device located at the location.

15. The method of claim 14, wherein the local recogniser is locally executed by a local processor located at the location.

16. The method of claim 15, wherein the local training data is locally stored in local storage accessible by the local processor.

17. The method of claim 16, wherein the local image capture device, local processor and local storage are integrated in a local device that has a single form factor, and which is physically located at the location.

18. The method of claim 17, wherein the local device is selected from a group comprising a wall sensor, a portal sensor, a self-serve kiosk, and an unattended kiosk.

19. The method of claim 17, further comprising automatically deleting, overwriting or disabling the local training data if the local device is powered off or interfered with.

20. The method of claim 1, wherein the location comprises an indoor or outdoor location.

21. The method of claim 20, wherein the indoor location is selected from a group comprising school or college classrooms, residents’ rooms, communal recreation and/or learning spaces, lounges, dining halls, and auditoriums.

22. The method of claim 1, wherein the images comprise whole or part body images of the individuals at the location.

23. The method of claim 1, wherein the images are captured continuously.

24. The method of claim 14, wherein the local image capture device tracks movement of the individuals at the location so that only images suitable for local recognition are captured.

25. A system for recognising individuals at a location, the system comprising one or more processing devices and one or more storage devices storing instructions that, when executed by the one or more processing devices, cause the one or more processing devices to:

locally recognise individuals in locally captured images of the location by a local recogniser trained with local training data for individuals previously recognised, or expected to be, at the location;

for individuals that initially cannot be locally recognised, retrieve additional training data from a remote recogniser using query data extracted from the images by the local recogniser;

update the local training data with the additional training data; retrain the local recogniser with the updated local training data to locally recognise the individuals that initially could not be locally recognised.

Description:
LOCALISED, LOOP-BASED SELF-LEARNING FOR RECOGNISING INDIVIDUALS

AT LOCATIONS

Field

[0001] The present invention relates to localised, loop-based self-learning for recognising individuals at locations for applications such as real-time attendance monitoring, access control, people counting, etc.

Background

[0002] Attendance monitoring, access control, and people counting of individuals at educational, health care, aged care, child care, commercial, and public locations are usually performed manually. Client-server computer systems for recognising individuals at locations, such as cloud-based biometric or facial recognition systems, have also recently been proposed.

[0003] Conventional manual and computerised approaches to recognising individuals for attendance monitoring, access control, and people counting suffer from several drawbacks. Manual attendance tracking is labour-intensive, time-consuming, and prone to circumvention and inaccuracy. Cloud-based attendance tracking systems have attracted concerns about cost, privacy, and data security.

[0004] In view of this background, there is an unmet need for improved solutions for recognising individuals at locations.

Summary

[0005] According to the present invention, there is provided a method for recognising individuals at a location, the method comprising:

locally capturing images of individuals at the location;

locally recognising individuals in the images by a local recogniser trained with local training data for individuals previously recognised, or expected to be, at the location;

for individuals that initially cannot be locally recognised, retrieving additional training data from a remote recogniser using query data extracted from the images by the local recogniser;

updating the local training data with the additional training data;

retraining the local recogniser with the updated local training data to locally recognise the individuals that initially could not be locally recognised.

[0006] The local recogniser may self-learn to locally recognise individuals in the images at the location using an iterative loop that updates and refines the local training data with additional training data from the remote recogniser.

[0007] The local recogniser may self-update and self-refine the local training data with additional training data for individuals that are routinely locally recognised in the images at the location.

[0008] The local training data may be periodically updated with additional training data from the remote recogniser based on calendar, timetable or scheduling data for individuals expected to be at the location.

[0009] The method may further comprise periodically writing over and refreshing the local training data so that only newest or most recent local training data for individuals expected to be at the location is retained.

[0010] The local training data, query data and additional training data may comprise embeddings or object recognition data extracted from images of the individuals.

[0011] The local recogniser may locally recognise individuals in the images by one or both of embedding-based recognition and object recognition. For example, the local recogniser may initially perform embedding-based recognition of individuals in the images until object recognition of the individuals can be performed.

[0012] The local recogniser and remote recogniser may comprise convolutional neural networks (CNNs).

[0013] The images of the individuals may be captured at the location in zones representing same or similar contexts, distances, angles, or lighting conditions.

[0014] The local training data may be context specific to the location.

[0015] The local recogniser may perform context aware local recognition of individuals at the location using the context specific local training data.

[0016] The method may further comprise monitoring attendance of individuals at the location using the local recogniser.

[0017] The images of the individuals may be locally captured by a local image capture device located at the location.

[0018] The local recogniser may be locally executed by a local processor located at the location.

[0019] The local training data may be locally stored in local storage accessible by the local processor.

[0020] The local image capture device, local processor and local storage may be integrated in a local device that has a single form factor, and which is physically located at the location.

[0021] The local device may be selected from a group comprising a wall sensor, a portal sensor, a self-serve kiosk, and an unattended kiosk.

[0022] The method may further comprise automatically deleting, overwriting or disabling the local training data if the local device is powered off or interfered with.

[0023] The location may comprise an indoor or outdoor location. The indoor location may, for example, be selected from a group comprising school or college classrooms, residents’ rooms, communal recreation and/or learning spaces, lounges, dining halls, and auditoriums.

[0024] The images may comprise whole or part body images of the individuals at the location.

[0025] The images may be captured continuously.

[0026] The local image capture device may track movement of the individuals at the location so that only images suitable for local recognition are captured.

[0027] The present invention also provides a system for recognising individuals at a location, the system comprising one or more processing devices and one or more storage devices storing instructions that, when executed by the one or more processing devices, cause the one or more processing devices to:

locally recognise individuals in locally captured images of the location by a local recogniser trained with local training data for individuals previously recognised, or expected to be, at the location;

for individuals that initially cannot be locally recognised, retrieve additional training data from a remote recogniser using query data extracted from the images by the local recogniser;

update the local training data with the additional training data; retrain the local recogniser with the updated local training data to locally recognise the individuals that initially could not be locally recognised.

Brief Description of Drawings

[0028] Embodiments of the invention will now be described by way of example only with reference to the accompanying drawings, in which:

Figure 1 is a flow diagram of a method of recognising individuals at a location according to an embodiment of the present invention; and

Figure 2 is a system architecture and dataflow diagram of a system for performing the method of Figure 1.

Description of Embodiments

[0029] Referring to Figure 1, a method 100 for recognising individuals at a location according to an embodiment of the invention may start at step 110 by locally capturing images of individuals at the location.

[0030] Next, at step 120, individuals in the images may be locally recognised by a local recogniser trained with local training data for individuals previously recognised, or expected to be, at the location.

[0031] For individuals that initially cannot be locally recognised, the method 100 may move to step 130 by retrieving additional training data from a remote recogniser using query data extracted from the images by the local recogniser. The local recogniser and remote recogniser may comprise fully convolutional neural networks (CNNs).

[0032] At step 140, the local training data may be updated with the additional training data. The method 100 may end at step 150 by retraining the local recogniser with the updated local training data to locally recognise the individuals that initially could not be locally recognised.

[0033] The local training data, query data and additional training data may comprise embeddings or object recognition data extracted from images of the individuals. “Embeddings” may comprise mathematical vectors representing features extracted from parts of the images. “Object recognition data” may comprise data representing objects extracted from the images. The local recogniser may locally recognise individuals in the images by one or both of embedding-based recognition and object recognition. For example, the local recogniser may initially perform embedding-based recognition of individuals in the images until object recognition of the individuals can be performed with a predetermined sufficient accuracy. Along with being able to identify generic objects, such as face or body objects, object recognition may also identify unique individuals.
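By way of a non-limiting illustration of the embedding-based recognition described above, the following Python sketch matches a query embedding against an in-memory local datastore using cosine similarity with a fixed threshold; the datastore layout, the threshold value and the function names are illustrative assumptions rather than features of any particular embodiment.

```python
import numpy as np

# Hypothetical in-memory local datastore: identity -> list of embedding vectors.
local_embeddings: dict[str, list[np.ndarray]] = {}

def recognise_by_embedding(query: np.ndarray, threshold: float = 0.6) -> str | None:
    """Match a query embedding against the local datastore by cosine similarity.

    Returns the best-matching identity, or None if no stored embedding is
    similar enough (i.e. the individual cannot yet be locally recognised).
    """
    query = query / np.linalg.norm(query)
    best_identity, best_score = None, threshold
    for identity, vectors in local_embeddings.items():
        for vec in vectors:
            score = float(np.dot(query, vec / np.linalg.norm(vec)))
            if score > best_score:
                best_identity, best_score = identity, score
    return best_identity
```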

[0034] The local recogniser may use real time object recognition models using fully convolutional neural networks (CNNs). The object recognition models may also provide the identity of a person they have been trained on. The geometric features of faces or bodies may subsequently be extracted from the images using, for example, a machine learning algorithm such as a CNN whose parameters are trained using a loss function such as additive angular margin loss. The features of faces or bodies extracted from the images may, for example, be selected from a group comprising facial features, pose features, gait features, age features, activity/movement features, standing position features, sitting position features, and seating location features. The detection and extraction of non-face features may open the possibility that face recognition can be augmented by other systems. For example, if a face cannot be recognised, it may be possible to recognise an individual based on where they sit (historically or habitually), their gait, their posture or other identifying features of the individual’s body. In addition, the ability to detect and extract features of faces or bodies from the images relating to the age of individuals is advantageous in applications such as child care, where an important legal requirement is that a correct number of adults be present at the location for a given number of children present. This may also be used in aged care, where it is important to know how recently a resident has been visited by a staff member.
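The additive angular margin loss referred to above is commonly associated with ArcFace-style training. The following NumPy sketch shows its general form only; the scale s, margin m and array shapes are illustrative assumptions, not values prescribed by this specification.

```python
import numpy as np

def additive_angular_margin_loss(embeddings, weights, labels, s=64.0, m=0.5):
    """Illustrative additive angular margin (ArcFace-style) loss.

    embeddings: (N, D) feature vectors from the CNN backbone.
    weights:    (C, D) one prototype vector per identity class.
    labels:     (N,) integer class indices.
    """
    # L2-normalise features and class prototypes so dot products are cosines.
    x = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    w = weights / np.linalg.norm(weights, axis=1, keepdims=True)
    cos_theta = np.clip(x @ w.T, -1.0, 1.0)            # (N, C)

    # Add the angular margin m to the target-class angle only.
    theta = np.arccos(cos_theta)
    target = np.zeros_like(cos_theta, dtype=bool)
    target[np.arange(len(labels)), labels] = True
    logits = np.where(target, np.cos(theta + m), cos_theta) * s

    # Standard softmax cross-entropy over the margin-adjusted logits.
    logits -= logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()
```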

[0035] The local recogniser may self-learn to locally recognise individuals in the images at the location using an iterative loop that updates and refines the local training data with additional training data and/or reinforcements from the remote recogniser. The local recogniser may also self-update and self-refine the local training data with additional training data for individuals that are routinely locally recognised in the images at the location. Such updates may be self-determined by the local recogniser when a threshold is met for an individual it routinely observes who was not locally recognised by the local recogniser. The training of the local object recognition model may be performed by a local machine learning training engine if the local recogniser is idle, or the training may be performed by a remote machine learning training engine and the result returned to the local recogniser.
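A minimal Python sketch of the iterative self-learning loop described above is given below; the `local_recogniser`, `remote_recogniser` and `local_training_data` objects and their methods (`recognise`, `extract_query_data`, `lookup`, `retrain`) are assumed interfaces used purely for illustration.

```python
def recognition_loop(image, local_recogniser, remote_recogniser, local_training_data):
    """One pass of the localised, loop-based self-learning flow (illustrative).

    If the local recogniser cannot identify an individual, query data is sent
    to the remote recogniser, the returned training data is merged locally,
    and the local recogniser is retrained so the individual can be recognised
    locally next time.
    """
    identity = local_recogniser.recognise(image)
    if identity is not None:
        return identity

    # Not locally recognisable: extract query data and ask the remote recogniser.
    query = local_recogniser.extract_query_data(image)
    additional = remote_recogniser.lookup(query)   # may return None if unknown
    if additional is not None:
        local_training_data.update(additional)
        local_recogniser.retrain(local_training_data)
        identity = local_recogniser.recognise(image)
    return identity  # None means "unknown, to be confirmed later by a user"
```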

[0036] The local recogniser may be configured to locally recognise the individuals in the images of the location in a machine-learning, self-learning or loop-based mode based on the periodic updates of the local training data, and periodic updated training of the local recogniser. The images of the individuals may be captured at the location in zones representing same or similar contexts, distances, angles, or lighting conditions. The local training data may therefore be context specific to the location. The local recogniser may perform context aware local recognition of individuals at the location using the context specific local training data.

[0037] The local recogniser may therefore self-learn individuals in the context of the location, such as the room in which the individuals are seated and in which the local recogniser itself sits, thereby allowing for high local recognition accuracy. In addition, the local object recognition data or local embeddings generated by the local recogniser for each individual may be unique to the local recogniser, taking into consideration all local environmental factors. The method 100 may, for example, further comprise monitoring attendance of individuals at the location using the local recogniser. The local embeddings of the individuals may be clustered in a local datastore by zone information, tracking information, and reinforcement information provided from the remote recogniser so that embeddings in the local datastore maintain maximum separation.
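One possible way to keep the zone-clustered local datastore described above is sketched below, assuming a simple in-memory mapping keyed by zone and identity; the class and method names are hypothetical.

```python
from collections import defaultdict
import numpy as np

class ZonedEmbeddingStore:
    """Illustrative in-memory store that clusters embeddings by capture zone.

    Embeddings recorded for the same identity in the same zone are kept
    together, so matching can be restricted to the context (distance, angle,
    lighting) in which the query image was captured.
    """
    def __init__(self):
        self._store = defaultdict(list)  # (zone, identity) -> [embeddings]

    def add(self, zone: str, identity: str, embedding: np.ndarray) -> None:
        self._store[(zone, identity)].append(embedding)

    def centroid(self, zone: str, identity: str) -> np.ndarray | None:
        vectors = self._store.get((zone, identity))
        return np.mean(vectors, axis=0) if vectors else None
```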

[0038] The local training data may be periodically updated with additional training data from the remote recogniser based on calendar, timetable or scheduling data for individuals expected to be at the location. For example, the local training data may be updated with additional training data based on a known schedule of which individuals might be in the room where the local recogniser is located. This may be a school timetable, a visitation schedule, an Outlook calendar, or any other scheduling information. For example, someone could invite an external guest to a meeting, and the local training data may be updated with additional training data for the guest by a local recogniser which handles visitor sign-in at reception.
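The schedule-driven refresh described above might, for example, be sketched as follows; the `schedule` and `remote_recogniser` interfaces and their method names are assumptions for illustration only.

```python
import datetime

def refresh_from_schedule(schedule, remote_recogniser, local_training_data,
                          room: str, when: datetime.datetime) -> None:
    """Pull additional training data for individuals expected in a room.

    `schedule` is assumed to expose expected_individuals(room, when), e.g.
    backed by a school timetable or calendar feed; the remote recogniser is
    assumed to return stored training data (e.g. embeddings) per identity.
    """
    for person_id in schedule.expected_individuals(room, when):
        if person_id not in local_training_data:
            data = remote_recogniser.fetch_training_data(person_id)
            if data is not None:
                local_training_data[person_id] = data
```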

[0039] The images of the individuals may be locally captured by a local image capture device, for example one or more cameras or image sensors, located at the location. The local recogniser may be locally executed by a local processor located at the location. The local training data may be locally stored in local storage accessible by the local processor. The local image capture device, local processor and local storage may be integrated in a local device that has a single form factor, and which is physically located at the location. The local device may be selected from a group comprising a wall sensor, a portal sensor, a self-serve kiosk, and an unattended kiosk. The method 100 may further comprise automatically deleting, overwriting or disabling the local training data if the local device is powered off or interfered with.

[0040] The location may comprise an indoor or outdoor location. The indoor location may, for example, be selected from a group comprising school or college classrooms, residents’ rooms, communal recreation and/or learning spaces, lounges, dining halls, and auditoriums.

[0041] The images may comprise whole or part body images of the individuals at the location. The images may be captured continuously. The image capture device may track movement of the individuals at the location so that only images suitable for local recognition are captured. In other words, the individuals may be tracked during the detection phase and, if an image is found but is unsuitable for recognition (eg, due to low image quality or because the face or body is not in a suitable pose), the local device may track the individual until a recognition event is possible. This tracking capability may address situations where the image quality was poor when the local device first saw an individual. Instead, the local device may now track an individual and only send the image of their face to the local recogniser or remote recogniser when a sufficiently good view of the individual has been captured. For example, an individual may first be seen in profile (or side on), and then turn towards the camera for a moment. Tracking allows the individual to be followed until this moment, when a high-quality recognition event can be performed.
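The tracking behaviour described above might be sketched as follows, assuming hypothetical `tracker`, `quality_check` and `recognise` callables; the frame limit is an illustrative assumption.

```python
def track_until_recognisable(tracker, quality_check, recognise, max_frames=300):
    """Follow a detected individual until a frame suitable for recognition.

    `tracker` is assumed to yield cropped face/body images frame by frame,
    `quality_check` scores pose/sharpness, and `recognise` is only called once
    a sufficiently good view has been captured (e.g. the person turns towards
    the camera after first being seen in profile).
    """
    for _ in range(max_frames):
        crop = tracker.next_crop()
        if crop is None:          # track lost
            return None
        if quality_check(crop):   # good enough pose and image quality
            return recognise(crop)
    return None
```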

[0042] The method 100 may further comprise periodically writing over and refreshing the local training data so that only the newest or most recent local training data for individuals expected to be at the location is retained in local storage on the local device.
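A minimal sketch of the periodic write-over described above is given below, assuming entries carry a 'last_seen' timestamp and that a set of expected identities is available; the retention window is an illustrative assumption.

```python
import time

def refresh_local_store(local_training_data: dict, expected_ids: set,
                        max_age_seconds: float = 7 * 24 * 3600) -> None:
    """Overwrite the local store so only the newest relevant entries remain.

    Entries are assumed to be dicts with a 'last_seen' timestamp; anything
    stale, or for individuals no longer expected at the location, is dropped.
    """
    now = time.time()
    for person_id in list(local_training_data):
        entry = local_training_data[person_id]
        too_old = now - entry.get("last_seen", 0) > max_age_seconds
        if person_id not in expected_ids or too_old:
            del local_training_data[person_id]
```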

[0043] Figure 2 illustrates an embodiment of a computer system 200 for performing the method 100. The system 200 may generally comprise a local device 210, a remote server 220, and a remote datastore 230. The remote server 220 may comprise a cloud server, and the remote datastore may comprise a cloud datastore. The cloud server 220 may, for example, be operated by a provider of the local image capture and processing device 210, and the cloud datastore may, for example, be operated by a customer of the local device 210 who controls access to, or operation of, the location. The local device 210 may be configured to periodically perform one or more of the above steps of the method 100 in batches.

[0044] As described above, the local device 210 may comprise one or more of the following:

• wall sensors - a small device on a wall or in a corner which sees people in an entire room;

• portal sensors - a small device on a wall or door frame, near a portal, which sees people entering or exiting that portal (eg, a doorway);

• self-serve kiosks - a touch screen interactive device where the person provides information about why they are here; and

• unattended kiosks - a large screen device which shows the person that they have been detected, but they do not need to interact with the screen/system (ie, this is essentially a visual version of the portal sensor).

[0045] Wall sensors may be configured to sit on a wall in a room and observe the entire room and report back individuals who have been present. As such, they may replace what would have traditionally been multiple security cameras feeding back to a server or a manual attendance reporting process. The wall sensor may be constantly detecting individuals in the room and analysing them periodically in batches that may be up to 10 minutes long, as selected by a user. Each batch may be processed as follows (a simplified sketch of this loop is given after the numbered steps).

1. The local device 210 may have received a list of embeddings for faces it is likely to encounter based on scheduling information for that room.

2. Multiple images of the room are taken every second (ie, enough to see the entire room).

3. Each image is processed and faces and bodies are detected by object recognition.

4. The local recogniser executed by the local device 210 may provide an identification of the person.

5. Features of each face and/or body may be extracted from the image.

6. Embeddings representing the face and body geometric features are extracted for each person.

7. The local recogniser identities and embeddings are tracked against object recognition identities and embeddings created from previous images taken for the same room to determine if the face and/or body is the same person that has been seen previously.

8. Embeddings, along with their metadata, are grouped into vertical zones, which represent groupings of individuals at the same distance from the sensor.

9. The object recognition process may have identified the person if that person was trained into the detection model; in addition, each face is identified against a local, on-device, in-memory datastore of embeddings.

10. If the individual cannot be identified locally, then a request is made of a remote recogniser engine as to who the person is:

a. if known, the local database of known individuals is updated; or

b. if unknown, the image is catalogued to the cloud and local database with an unknown identity (to be confirmed later by a user).

11. Known and unknown representative images of each person are stored in memory until the end of the batch.

12. Steps 2-11 are repeated until the batch ends.

13. The local device 210 sends the batch processed data to the cloud.
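The simplified sketch below condenses steps 2-13 of the wall-sensor batch loop; the `device`, `local_recogniser`, `remote_recogniser` and `cloud` interfaces, and the attribute names on each detection, are assumptions for illustration only.

```python
import time

def process_batch(device, local_recogniser, remote_recogniser, cloud,
                  batch_seconds: int = 600) -> None:
    """Illustrative wall-sensor batch loop (a simplification of steps 2-13).

    All four collaborators are assumed interfaces: `device` captures frames and
    knows its own id, `local_recogniser` detects and identifies people,
    `remote_recogniser` answers queries for unknown individuals, and `cloud`
    receives the batch results at the end.
    """
    results = {}                                  # identity (or unknown tag) -> representative image
    end_time = time.time() + batch_seconds
    while time.time() < end_time:
        for image in device.capture_frames():     # step 2: several images per second
            for det in local_recogniser.detect(image):       # steps 3-8: detect, extract, track, zone
                identity = local_recogniser.identify(det)    # step 9: local, in-memory identification
                if identity is None:                         # step 10: fall back to the remote recogniser
                    identity = remote_recogniser.lookup(det.query_data)
                if identity is None:
                    identity = f"unknown-{len(results)}"     # catalogued for later confirmation by a user
                results.setdefault(identity, det.image)      # step 11: keep a representative image
    cloud.upload_batch(device.device_id, results)            # step 13: send the batch to the cloud
```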

[0046] Portal sensors may be configured to operate in a similar manner to wall sensors, except that:

a. their batch times may be lowered to around 1 minute, making the reporting to the cloud of who they have seen quicker; and

b. they only have one camera, which is directed to a portal entryway (eg, a door).

The portal sensors may identify and catalogue individuals in the same way as the wall sensor.

[0047] Self-serve kiosks may be used for visitor registration, event registration, and for staff/student sign in/out. They may be used when the person in question needs to provide information after they have been identified. The self-serve kiosk may identify and catalogue people in the same way as the wall sensor, except that there is no batch mode. Instead, identities of individuals may be transmitted immediately along with the data the person is prompted to enter. A self-serve kiosk may also be interfaced with other external physical devices to provide access control, such as controlling the magnetic lock of a door.
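An illustrative sketch of the immediate (non-batched) kiosk flow and its optional access-control interfacing follows; the `door_lock` and `cloud` interfaces are hypothetical stand-ins for an external relay-driven magnetic lock and the remote endpoint.

```python
def handle_kiosk_sign_in(identity: str | None, visitor_details: dict,
                         door_lock, cloud) -> None:
    """Illustrative self-serve kiosk flow: no batching, immediate transmission.

    `door_lock` stands in for an external access-control device (e.g. a relay
    driving a magnetic door lock) and `cloud` for the remote endpoint; both
    interfaces are assumptions for illustration.
    """
    if identity is None:
        return  # not recognised: leave the door locked, prompt for manual sign-in
    cloud.record_sign_in(identity, visitor_details)  # sent immediately, not batched
    door_lock.release(seconds=5)                     # grant access on success
```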

[0048] Unattended kiosks may comprise a hybrid between portal sensors and self-serve kiosks. They provide the same functionality as the portal sensors, as well as providing a visual key or cue when a successful identification is made. For example, this may be provided by drawing red, yellow and green boxes around people’s faces as they walk by the kiosk, indicating they have been seen, are being processed, and have finally been identified. The unattended kiosk may identify and catalogue people in the same way as the wall sensor. Again, a kiosk may also be interfaced with other external physical devices to provide access control, such as controlling the magnetic lock of a door.
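The visual cue described above might be rendered as in the following OpenCV sketch; the detection tuple layout and the choice of colours per state are illustrative assumptions.

```python
import cv2

# Illustrative colour coding for the unattended kiosk display (BGR values).
STATE_COLOURS = {
    "detected": (0, 0, 255),      # red: face has been seen
    "processing": (0, 255, 255),  # yellow: recognition in progress
    "identified": (0, 255, 0),    # green: successfully identified
}

def draw_feedback(frame, detections):
    """Draw a coloured box around each tracked face according to its state.

    `detections` is assumed to be a list of (x, y, w, h, state) tuples; the
    exact tracking structure is an assumption for illustration.
    """
    for x, y, w, h, state in detections:
        colour = STATE_COLOURS.get(state, (255, 255, 255))
        cv2.rectangle(frame, (x, y), (x + w, y + h), colour, thickness=2)
    return frame
```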

[0049] Embodiments of the present invention provide a hybrid on-device and cloud-based (or hybrid local and centralised computing) method and system that is both generally and specifically useful for determining presence or absence of individuals at educational, aged care, commercial, and public locations. Although some embodiments of the invention have been described above in the context of applications for attendance monitoring at educational locations, such as school classroom roll taking, it will be appreciated that other embodiments of the invention may be implemented for alternative applications, such as access control, people counting, etc, for commercial and public locations. Embodiments of the invention may also be directly suited to attendance taking in locations other than schools, such as child care and aged care facilities.

[0050] Embodiments of the local device of the present invention may advantageously “self-learn” people they come across. Consequently, they do not need to be pre-populated with large databases of people, and they do not need any permanent local data storage, making them very secure. The local device may only keep the data (in memory) for people it regularly sees, and in the context in which it sees them. People that are new or that it has “forgotten” may be sent to the remote recogniser to be remotely recognised as needed. Thus, there is no need to store data for people who are not commonly seen by the device. The local device may keep all data in memory, not on permanent storage, which means that if it is ever powered off or tampered with, all sensitive information may be wiped automatically. In addition, the local datastore may be routinely written over such that only the newest entries are maintained. In addition, the hybrid edge and centralised computing model ensures that processing by the local device remains fast and computationally efficient, as it only needs to remember a small group of people. It also makes it easier to enrol identities of individuals, as the local device does not need to be pre-populated.

[0051] Embodiments of the present invention provide a method and system that are both generally and specifically useful for localised, loop-based self-learning for recognising individuals at locations for applications such as real-time attendance monitoring, access control, people counting, etc.

[0052] For the purpose of this specification, the word “comprising” means “including but not limited to,” and the word “comprises” has a corresponding meaning.

[0053] The above embodiments have been described by way of example only and modifications are possible within the scope of the claims that follow.