

Title:
SYSTEM AND METHOD FOR DETERMINING PARTICIPANT BEHAVIOR DURING A REAL TIME INTERACTIVE EVENT
Document Type and Number:
WIPO Patent Application WO/2021/105778
Kind Code:
A1
Abstract:
Systems and methods are provided for determination of participant behavior during a real time interactive event. A set of data packets related to video signals and audio signals is received by a facial recognition engine in real time from a computing device associated with the participant. A first set of data packets pertaining to one or more attributes of a facial image of the participant is selected from the received set of data packets. The extracted set of data packets is compared with a preconfigured dataset comprising information related to a plurality of facial expressions to determine the participant behavior while participating in the real time interactive event.

Inventors:
RAMAN ABISHEK SUNDAR (IN)
Application Number:
PCT/IB2020/057857
Publication Date:
June 03, 2021
Filing Date:
August 21, 2020
Assignee:
RAMAN ABISHEK SUNDAR (IN)
International Classes:
H04L12/18; G06Q10/10; H04L29/06; H04N7/15
Foreign References:
US20130124623A1 (2013-05-16)
EP3162052B1 (2019-05-01)
Attorney, Agent or Firm:
KHURANA & KHURANA, ADVOCATES & IP ATTORNEYS (IN)
Claims:

1. A method for determining a participant behavior during a real time interactive event, said method comprising: receiving, by one or more processors of a facial recognition engine, in real time a set of unsynchronized data packets across non-overlapping or partially overlapping data channels from a computing device associated with the participant, the set of unsynchronized data packets pertains to any or a combination of video signals and image signals; selecting, by the one or more processors, through a common data channel, a first set of data packets from the received set of unsynchronized data packets, the first set of data packets pertains to one or more attributes of a facial image of the participant; matching, by the one or more processors, through the common data channel, the extracted first set of data packets with a preconfigured dataset comprising information related to a plurality of facial expressions; and determining, responsive to the matching, by the one or more processors, a facial match of the selected facial image from the plurality of the predefined facial expressions to identify the participant behavior during the real time interactive event.

2. The method as claimed in claim 1, wherein the one or more attributes of the facial image corresponds to at least one of a head-pose feature, an eyebrows feature, an eyelids feature, a gaze feature, a nose feature, and a mouth-contour feature of the participant during the real time interactive event.

3. The method as claimed in claim 1, wherein the participant behavior during the real time interactive event pertains to at least one of happy, confused, focused, bored, attentive and understood.

4. The method as claimed in claim 3, wherein the facial recognition engine, during the real time interactive event, determines a level of attention of the participant based on the one or more attributes of the facial image.

5. The method as claimed in claim 4, wherein a message representing the determined level of attention is provided to a computing device associated with the participant.

6. The method as claimed in claim 1, wherein the facial recognition engine authenticates the participant by comparing the selected first set of data packets with a second dataset comprising information related to registration details of the participant.

7. The method as claimed in claim 1, wherein the real time interactive event is a real time training being conducted over an electronic medium, and the participant is a student accessing the real time training via the electronic medium.

8. The method as claimed in claim 7, wherein the student accessing the real time interactive event is marked as present in an attendance sheet maintained at the facial recognition engine.

9. The method as claimed in claim 7, wherein the real time training is conducted in any or a combination of a form of video, audio, and animation.

10. A system for determining a participant behavior during a real time interactive event, said system comprising: a processor operatively coupled to a memory, the memory storing a set of instructions executed by the processor to: receive, by a facial recognition engine, in real time a set of unsynchronized data packets across non-overlapping or partially overlapping data channels from a computing device associated with the participant, the set of unsynchronized data packets pertains to any or a combination of video signals and image signals; select, through a common data channel, a first set of data packets (a region of interest) from the received set of unsynchronized data packets, the first set of data packets pertains to a one or more attributes of a facial image of the participant; match, through the common data channel, the extracted first set of data packets with a preconfigured dataset comprising information related to a plurality of facial expressions; and determine, responsive to the match, a facial match of the extracted facial image from the plurality of the predefined facial expressions to identify the participant behavior during the real time interactive event.

Description:
SYSTEM AND METHOD FOR DETERMINING PARTICIPANT BEHAVIOR DURING A REAL TIME INTERACTIVE EVENT

FIELD OF THE INVENTION

[0001] The present disclosure generally relates to determining participant behavior. In particular, the present disclosure provides systems and methods to facilitate capturing a participant's facial expressions to determine the participant's behavior.

BACKGROUND OF THE INVENTION

[0002] The background description includes information that may be useful in understanding the present invention. It is not an admission that any of the information provided herein is prior art or relevant to the presently claimed invention, or that any publication specifically or implicitly referenced is prior art.

[0003] Typically, a virtual training session for conducting online courses is a simulated training session held over the internet, which provides distance learners with a communication environment comparable to a traditional face-to-face training session. The virtual training session allows learners to attend from anywhere in the world and aims to provide a learning experience similar to a real training session. Further, the virtual training session brings learners from around the world together online in a highly interactive setting while greatly reducing the travel, time, and expense of on-site teaching/training programs.

[0004] However, a significant drawback to virtual training, or other large interactive online events, is the difficulty a teacher or presenter has in determining whether a participant or student is paying attention to the training. In a small classroom, for example, the teacher can see each participant's face and body to gauge the participants' attentiveness. A confused facial expression on multiple participants could mean that the subject matter, topic, or delivery technique is not working effectively, and the teacher may augment or modify their approach accordingly. In an online classroom, however, seeing each participant's face may not be possible. Additionally, in the small classroom, attendance or presence of the participant may be taken manually, which is susceptible to inconsistent tracking.

[0005] Thus, it would be beneficial to have systems and methods that allow the teacher or presenter of a virtual training session to accurately gauge a level of attentiveness, interest, presence, and/or comprehension of the participants, to aid in effectively delivering the training.

OBJECTS OF THE INVENTION

[0006] A general object of this disclosure is to provide a participant behavior determination system that captures participant facial expressions to determine a level of attentiveness during a real time interactive event.

[0007] An object of the present disclosure is to provide a mechanism to capture a participant's behavior while the participant is logged in to the system.

[0008] An object of the present disclosure is to provide a mechanism to determine a participant's presence while the participant is logged in to the system for a real time interactive event.

[0009] An object of the present disclosure is to facilitate determining and recording a participant's deviation during participation in a real time interactive event.

[0010] An object of the present disclosure is to facilitate updating and upgrading content being presented during the real time interactive event based on the participant behavior.


SUMMARY

[0012] The present disclosure generally relates to determining participant behavior. In particular, the present disclosure provides systems and methods to facilitate capturing a participant's facial expressions to determine the participant's behavior.

[0013] An aspect of the present disclosure pertains to a method for determining a participant behavior during a real time interactive event, said method comprising: receiving, by one or more processors of a facial recognition engine, in real time a set of unsynchronized data packets across non-overlapping or partially overlapping data channels from a computing device associated with the participant, the set of unsynchronized data packets pertains to any or a combination of video signals and image signals; selecting, by the one or more processors, through a common data channel, a first set of data packets from the received set of unsynchronized data packets, the first set of data packets pertains to one or more attributes of a facial image of the participant; matching, by the one or more processors, through the common data channel, the extracted first set of data packets with a preconfigured dataset comprising information related to a plurality of facial expressions; and determining, responsive to the matching, by the one or more processors, a facial match of the selected facial image from the plurality of the predefined facial expressions to identify the participant behavior during the real time interactive event.

[0014] According to an embodiment, the one or more attributes of the facial image corresponds to at least one of a head-pose feature, an eyebrows feature, an eyelids feature, a gaze feature, a nose feature, and a mouth-contour feature of the participant during the real time interactive event.

[0015] According to an embodiment, the participant behavior during the real time interactive event pertains to at least one of happy, confused, focused, bored, attentive and understood.

[0016] According to an embodiment, the facial recognition engine, during the real time interactive event, determines a level of attention of the participant based on the one or more attributes of the facial image.

[0017] According to an embodiment, a message representing the determined level of attention is provided to a computing device associated with the participant.

[0018] According to an embodiment, the facial recognition engine authenticates the participant by comparing the selected first set of data packets with a second dataset comprising information related to registration details of the participant.

[0019] According to an embodiment, the real time interactive event is a real time training being conducted over an electronic medium, and the participant is a student accessing the real time training via the electronic medium.

[0020] According to an embodiment, the student accessing the real time interactive event is marked as present in an attendance sheet maintained at the facial recognition engine.

[0021] According to an embodiment, the real time training is conducted in any or a combination of a form of video, audio, and animation.

[0022] Another aspect of the present disclosure relates to a system for determining a participant behavior during a real time interactive event, said system comprising: a processor operatively coupled to a memory, the memory storing a set of instructions executed by the processor to: receive, by a facial recognition engine, in real time a set of unsynchronized data packets across non-overlapping or partially overlapping data channels from a computing device associated with the participant, the set of unsynchronized data packets pertains to any or a combination of video signals and image signals; select, through a common data channel, a first set of data packets (a region of interest) from the received set of unsynchronized data packets, the first set of data packets pertains to one or more attributes of a facial image of the participant; match, through the common data channel, the extracted first set of data packets with a preconfigured dataset comprising information related to a plurality of facial expressions; and determine, responsive to the match, a facial match of the extracted facial image from the plurality of the predefined facial expressions to identify the participant behavior during the real time interactive event.

[0023] Various objects, features, aspects and advantages of the inventive subject matter will become more apparent from the following detailed description of preferred embodiments, along with the accompanying drawing figures in which like numerals represent like components.

BRIEF DESCRIPTION OF THE DRAWINGS

[0024] The accompanying drawings are included to provide a further understanding of the present disclosure and are incorporated in and constitute a part of this specification. The drawings illustrate exemplary embodiments of the present disclosure and, together with the description, serve to explain the principles of the present disclosure.

[0025] FIG. 1 indicates a network implementation of a participant behavior determination system, in accordance with an embodiment of the present disclosure.

[0026] FIG. 2 illustrates exemplary functional components of the participant behavior determination system in accordance with an embodiment of the present disclosure.

[0027] FIG. 3 illustrates exemplary representation of the participant behavior during the real time interactive event in accordance with an embodiment of the present disclosure.

[0028] FIG. 4 illustrates an exemplary method for determination of participant behavior during the real time interactive event in accordance with an embodiment of the present disclosure.

[0029] FIG. 5 is an exemplary computer system in which or with which embodiments of the present invention may be utilized.

DETAILED DESCRIPTION

[0030] In the following description, numerous specific details are set forth in order to provide a thorough understanding of embodiments of the present invention. It will be apparent to one skilled in the art that embodiments of the present invention may be practiced without some of these specific details.

[0031] Embodiments of the present invention may be provided as a computer program product, which may include a machine-readable storage medium tangibly embodying thereon instructions, which may be used to program a computer (or other electronic devices) to perform a process. The machine-readable medium may include, but is not limited to, fixed (hard) drives, magnetic tape, floppy diskettes, optical disks, compact disc read-only memories (CD-ROMs), magneto-optical disks, semiconductor memories such as ROMs, PROMs, random access memories (RAMs), programmable read-only memories (PROMs), erasable PROMs (EPROMs), electrically erasable PROMs (EEPROMs), flash memory, magnetic or optical cards, or other types of media/machine-readable media suitable for storing electronic instructions (e.g., computer programming code, such as software or firmware).

[0032] Various methods described herein may be practiced by combining one or more machine-readable storage media containing the code according to the present invention with appropriate standard computer hardware to execute the code contained therein. An apparatus for practicing various embodiments of the present invention may involve one or more computers (or one or more processors within a single computer) and storage systems containing or having network access to computer program(s) coded in accordance with various methods described herein, and the method steps of the invention could be accomplished by modules, routines, subroutines, or subparts of a computer program product.

[0033] If the specification states a component or feature “may”, “can”, “could”, or “might” be included or have a characteristic, that particular component or feature is not required to be included or have the characteristic.

[0034] As used in the description herein and throughout the claims that follow, the meaning of “a,” “an,” and “the” includes plural reference unless the context clearly dictates otherwise. Also, as used in the description herein, the meaning of “in” includes “in” and “on” unless the context clearly dictates otherwise.

[0035] Exemplary embodiments will now be described more fully hereinafter with reference to the accompanying drawings, in which exemplary embodiments are shown. This invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. These embodiments are provided so that this disclosure will be thorough and complete and will fully convey the scope of the invention to those of ordinary skill in the art. Moreover, all statements herein reciting embodiments of the invention, as well as specific examples thereof, are intended to encompass both structural and functional equivalents thereof. Additionally, it is intended that such equivalents include both currently known equivalents as well as equivalents developed in the future (i.e., any elements developed that perform the same function, regardless of structure).

[0036] The present disclosure generally relates to determining participant behavior. In particular, the present disclosure provides systems and methods to facilitate capturing a participant's facial expressions to determine the participant's behavior.

[0037] An aspect of the present disclosure pertains to a method for determining a participant behavior during a real time interactive event, said method comprising: receiving, by one or more processors of a facial recognition engine, in real time a set of unsynchronized data packets across non-overlapping or partially overlapping data channels from a computing device associated with the participant, the set of unsynchronized data packets pertains to any or a combination of video signals and image signals; selecting, by the one or more processors, through a common data channel, a first set of data packets from the received set of unsynchronized data packets, the first set of data packets pertains to one or more attributes of a facial image of the participant; matching, by the one or more processors, through the common data channel, the extracted first set of data packets with a preconfigured dataset comprising information related to a plurality of facial expressions; and determining, responsive to the matching, by the one or more processors, a facial match of the selected facial image from the plurality of the predefined facial expressions to identify the participant behavior during the real time interactive event.
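
By way of a non-limiting illustration only, the method recited above may be sketched as follows. The class name FacialRecognitionEngine, the packet structure, and the attribute values in the preconfigured dataset are hypothetical and are not prescribed by this disclosure; the queue simply stands in for the common data channel through which the selected packets flow.

```python
from dataclasses import dataclass
from queue import Queue
from typing import Dict, List

@dataclass
class DataPacket:
    channel: str                        # e.g. "video" or "image"
    face_attributes: Dict[str, float]   # e.g. {"mouth_curve": 0.8, "eyelid_open": 0.6}

# Preconfigured dataset: each facial expression maps to reference attribute values.
EXPRESSION_DATASET: Dict[str, Dict[str, float]] = {
    "happy":   {"mouth_curve": 0.8, "eyelid_open": 0.6},
    "bored":   {"mouth_curve": 0.1, "eyelid_open": 0.3},
    "focused": {"mouth_curve": 0.3, "eyelid_open": 0.9},
}

class FacialRecognitionEngine:
    def __init__(self) -> None:
        self.common_channel: "Queue[DataPacket]" = Queue()

    def receive(self, packets: List[DataPacket]) -> None:
        # Unsynchronized packets arrive from the video/image channels; only those
        # carrying facial-image attributes are pushed onto the common channel.
        for pkt in packets:
            if pkt.face_attributes:
                self.common_channel.put(pkt)

    def determine_behavior(self) -> str:
        # Match the selected packets against the preconfigured expression dataset.
        best_label, best_score = "unknown", float("inf")
        while not self.common_channel.empty():
            pkt = self.common_channel.get()
            for label, ref in EXPRESSION_DATASET.items():
                score = sum(abs(pkt.face_attributes.get(k, 0.0) - v) for k, v in ref.items())
                if score < best_score:
                    best_label, best_score = label, score
        return best_label

engine = FacialRecognitionEngine()
engine.receive([DataPacket("video", {"mouth_curve": 0.75, "eyelid_open": 0.65})])
print(engine.determine_behavior())  # -> "happy"
```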

[0038] According to an embodiment, the one or more attributes of the facial image corresponds to at least one of a head-pose feature, an eyebrows feature, an eyelids feature, a gaze feature, a nose feature, and a mouth-contour feature of the participant during the real time interactive event.

[0039] According to an embodiment, the participant behavior during the real time interactive event pertains to at least one of happy, confused, focused, bored, attentive and understood.

[0040] According to an embodiment, the facial recognition engine, during the real time interactive event, determines a level of attention of the participant based on the one or more attributes of the facial image.

[0041] According to an embodiment, a message representing the determined level of attention is provided to a computing device associated with the participant.

[0042] According to an embodiment, the facial recognition engine authenticates the participant by comparing the selected first set of data packets with a second dataset comprising information related to registration details of the participant.

[0043] According to an embodiment, the real time interactive event is a real time training being conducted over an electronic medium, and the participant is a student accessing the real time training via the electronic medium.

[0044] According to an embodiment, the student accessing the real time interactive event is marked as present in an attendance sheet maintained at the facial recognition engine.

[0045] According to an embodiment, the real time training is conducted in any or a combination of a form of video, audio, and animation.

[0046] Another aspect of the present disclosure relates to a system for determining a participant behavior during a real time interactive event, said system comprising: a processor operatively coupled to a memory, the memory storing a set of instructions executed by the processor to: receive, by a facial recognition engine, in real time a set of unsynchronized data packets across non-overlapping or partially overlapping data channels from a computing device associated with the participant, the set of unsynchronized data packets pertains to any or a combination of video signals and image signals; select, through a common data channel, a first set of data packets (a region of interest) from the received set of unsynchronized data packets, the first set of data packets pertains to one or more attributes of a facial image of the participant; match, through the common data channel, the extracted first set of data packets with a preconfigured dataset comprising information related to a plurality of facial expressions; and determine, responsive to the match, a facial match of the extracted facial image from the plurality of the predefined facial expressions to identify the participant behavior during the real time interactive event.

[0047] FIG. 1 indicates a network implementation 100 of a participant behavior determination system, in accordance with an embodiment of the present disclosure.

[0048] According to an embodiment of the present disclosure, the participant behavior determination system (also referred to as the system 102, hereinafter) can enable an entity (also referred to as an instructor, a teacher, or a host entity) to determine a level of participation and attentiveness of an entity (also referred to as a student, a participant, or a guest entity) attending a real time interactive event (e.g., an online class, online training session, online demonstration, and so forth).

[0049] The system 102, implemented in any computing device, can be configured/operatively connected with a server 110. As illustrated, the system 102 can be communicatively coupled with one or more guest entity devices 106-1, 106-2, ..., 106-N (individually referred to as the entity device 106 and collectively referred to as the guest entity devices 106, hereinafter) through a network 104. The one or more guest entity devices 106 are connected to the living subjects/users/guest entities/participants 108-1, 108-2, ..., 108-N (individually referred to as the entity 108 and collectively referred to as the entities 108, hereinafter). The entity devices 106 can include a variety of computing systems, including but not limited to, a laptop computer, a desktop computer, a notebook, a workstation, a portable computer, a personal digital assistant, a handheld device, and a mobile device. In an embodiment, the system 102 can be implemented using any or a combination of hardware components and software components such as a cloud, a server, a computing system, a computing device, a network device, and the like. Further, the system 102 can interact with any of the guest entity devices 106 through a website or an application that can reside in the guest entity devices 106. In an implementation, the system 102 can be accessed by a website or application that can be configured with any operating system, including but not limited to, Android™, iOS™, and the like. Examples of the guest entity devices 106 can include, but are not limited to, a computing device associated with industrial equipment or an industrial equipment based asset, a smart camera, a smart phone, a portable computer, a personal digital assistant, a handheld device, and the like.

[0050] Further, the network 104 can be a wireless network, a wired network or a combination thereof that can be implemented as one of the different types of networks, such as Intranet, Local Area Network (LAN), Wide Area Network (WAN), Internet, and the like. Further, the network 104 can either be a dedicated network or a shared network. The shared network can represent an association of the different types of networks that can use a variety of protocols, for example, Hypertext Transfer Protocol (HTTP), Transmission Control Protocol/Internet Protocol (TCP/IP), Wireless Application Protocol (WAP), and the like.

[0051] In an embodiment, the system 102 facilitates providing an online training session (e.g., an online classroom) to the guest entities 108 over the guest entities' computing devices 106. The training sessions can be provided as, for example, a video session, an audio session or a multimedia session. The system monitors the guest entity's facial expressions, and determines one or more behavioral attributes that the guest entity portrays while attending the online training session.

[0052] In an embodiment, the system 102 can communicate with the entity devices via a low-power, point-to-point communication protocol such as Bluetooth®. In other embodiments, the system may also communicate via various other protocols and technologies such as WiFi®, WiMax®, iBeacon®, and near field communication (NFC). In other embodiments, the system 102 may connect in a wired manner to the entity devices. Examples of the entity devices may include, but are not limited to, computer monitors, television sets, light-emitting diodes (LEDs), and liquid crystal displays (LCDs).

[0053] Although in various embodiments the implementation of the system 102 is explained with regard to the server 110, those skilled in the art would appreciate that the system 102 can be fully or partially implemented in other computing devices operatively coupled with the network 104, such as the entity devices 106, with minor modifications, without departing from the scope of the present disclosure.

[0054] FIG. 2 illustrates exemplary functional components 200 of the participant behavior determination system in accordance with an embodiment of the present disclosure.

[0055] In an aspect, the system 102 may comprise one or more processor(s) 202. The one or more processor(s) 202 may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, logic circuitries, and/or any devices that manipulate data based on operational instructions. Among other capabilities, the one or more processor(s) 202 are configured to fetch and execute computer-readable instructions stored in a memory 204 of the system 102. The memory 204 may store one or more computer-readable instructions or routines, which may be fetched and executed to create or share the data units over a network service. The memory 204 may comprise any non-transitory storage device including, for example, volatile memory such as RAM, or non-volatile memory such as EPROM, flash memory, and the like.

[0056] The system 102 may also comprise an interface(s) 206. The interface(s) 206 may comprise a variety of interfaces, for example, interfaces for data input and output devices, referred to as I/O devices, storage devices, and the like. The interface(s) 206 may facilitate communication of the system 102 with various devices coupled to the system 102 such as an input unit and an output unit. The interface(s) 206 may also provide a communication pathway for one or more components of the computing device 102. Examples of such components include, but are not limited to, processing engine(s) 208 and database 210.

[0057] The processing engine(s) 208 may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processing engine(s) 208. In examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the processing engine(s) 208 may be processor executable instructions stored on a non-transitory machine-readable storage medium and the hardware for the processing engine(s) 208 may comprise a processing resource (for example, one or more processors), to execute such instructions. In the present examples, the machine-readable storage medium may store instructions that, when executed by the processing resource, implement the processing engine(s) 208. In such examples, the computing device 102 may comprise the machine-readable storage medium storing the instructions and the processing resource to execute the instructions, or the machine-readable storage medium may be separate but accessible to the system 102 and the processing resource. In other examples, the processing engine(s) 208 may be implemented by electronic circuitry. The database 210 may comprise data that is either stored or generated as a result of functionalities implemented by any of the components of the processing engine(s) 208.

[0058] In an exemplary embodiment, the processing engine(s) 208 may comprise a participant data determination unit 212, a facial image extraction unit 214, a participant behavior determination unit 216, and other units(s) 218.

[0059] In an embodiment, the system 102 may include the participant data determination unit 212. The participant data can be captured at the time the participant registers himself/herself for participation in the real time interactive event. In an embodiment, the real time interactive event can be an online classroom, an online training session and the like. It should also be noted that, as used herein, the terms “class”, “classroom”, and/or “event” are not limited to be related to an educational process. In an embodiment, the participant data can include details such as name, age, gender, address, course details, course duration, course fees, education background, and so forth. Further, the participant data determination unit can include details such as reasons why the guest entity wants to enrol in the online event, number of courses already registered for by the guest entity, attendance details, and so forth.

[0060] In an embodiment, the system 102 may include the facial image extraction unit 214. The system 102 may include a camera. The camera may correspond to any image capturing component capable of capturing images and/or videos. For example, the camera may capture photographs, sequences of photographs, rapid shots, videos, or any other type of image, or any combination thereof. In some embodiments, the system may include one or more instances of the camera. For example, the system may facilitate providing a front-facing camera and a rear-facing camera. In some embodiments, the guest device 106 may include the camera. The camera may correspond to any image capturing component capable of capturing images, audio, and/or videos. For example, the camera may capture photographs, sequences of photographs, rapid shots, videos, or any other type of image, or any combination thereof covering the facial features or face of the guest entity.

[0061] For example, the guest device 106 may include a front-facing camera and a rear-facing camera. Although only one camera is discussed, persons of ordinary skill in the art will recognize that any number of cameras and any camera type may be included. Additionally, persons of ordinary skill in the art will recognize that any device that can capture images and/or video may be used. Furthermore, in some embodiments, the camera may be located external to the guest device 106.

[0062] In another embodiment, the camera may include an image sensor. The image sensor may be, for example, an array of sensors. Sensors in the sensor array may include, but not be limited to, charge coupled device (CCD) and/or complementary metal oxide semiconductor (CMOS) sensor elements to capture infrared (IR) images or other non-visible electromagnetic radiation. In some embodiments, the camera may include more than one image sensor to capture multiple types of images. For example, the camera may include both IR sensors and RGB (red, green, and blue) sensors. In certain embodiments, the camera may include illuminators for illuminating surfaces (or subjects) with the different types of light detected by the image sensor. For example, the camera may include an illuminator for visible light (e.g., a “flash” illuminator) and/or illuminators for infrared light (e.g., a flood IR source and a speckle pattern projector).

[0063] In an embodiment, the images captured by the camera may include the images with the guest entity’s face (e.g., the guest entity’s face is included in the images). An image with the guest entity’s face may include any digital image with the guest entity’s face shown within the frame of the image. Such an image may include just the guest entity’s face or may include the guest entity’s face in a smaller part or portion of the image. The guest entity’s face may be captured with sufficient resolution in the image to allow image processing of one or more features of the guest entity’s face in the image.
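
As a non-limiting illustration of how the facial region might be isolated from a captured frame, the following sketch uses OpenCV's bundled Haar cascade face detector; the function name and parameters are illustrative, and any face-detection technique could be substituted.

```python
import cv2

# Haar cascade shipped with OpenCV for frontal face detection.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def extract_face(frame):
    """Return the largest detected face region of the frame, or None if no face is found."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # keep the largest box
    return frame[y:y + h, x:x + w]

# Example usage with a frame grabbed from the participant's camera:
# capture = cv2.VideoCapture(0)
# ok, frame = capture.read()
# face = extract_face(frame) if ok else None
```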

[0064] In some embodiments, the facial image extraction unit 214 may capture the guest entity's face and body features to determine the guest entity's emotion. For example, a guest entity who is slouching may correspond to a guest entity who is not paying attention, whereas a guest entity sitting upright may correspond to a guest entity fully engaged in the presentation.

[0065] In an embodiment, the system 102 may include a facial recognition engine that may help determine that the image or images captured using the camera include the guest entity who is smiling, nodding, furrowing their brow, crying, or displaying any other type of emotion. The various facial expressions determined to be present within the image or images may then be stored in a database. The database may, in some embodiments, be used to compare the captured images with pre-defined facial expressions to determine what a particular guest entity's facial expression is. For example, a received image of the guest entity may be compared against a set of pre-defined images of the guest entity smiling, nodding, furrowing their brow, crying, etc., to determine whether the guest entity within the received image is expressing any one of these emotions. In some embodiments, the facial image corresponds to at least one of a head-pose feature, an eyebrows feature, an eyelids feature, a gaze feature, a nose feature, and a mouth-contour feature of the participant during the real time interactive event. In some embodiments, the database may also learn or receive new expressions, or update how a particular expression is characterized based on the received images.
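
A minimal sketch of such a comparison against pre-defined reference images of the guest entity is shown below. It assumes all face crops have been resized to the same shape and uses a simple mean squared difference; a practical system would more likely rely on facial landmarks or learned embeddings.

```python
from typing import Dict
import numpy as np

def classify_expression(face: np.ndarray, references: Dict[str, np.ndarray]) -> str:
    """Return the label of the reference image closest to the captured face crop.

    All images are expected to be grayscale arrays of identical shape.
    """
    face = face.astype(np.float32) / 255.0
    best_label, best_err = "unknown", float("inf")
    for label, ref in references.items():
        err = float(np.mean((face - ref.astype(np.float32) / 255.0) ** 2))
        if err < best_err:
            best_label, best_err = label, err
    return best_label

# references = {"smiling": img_smile, "furrowed_brow": img_brow, "crying": img_cry}
# print(classify_expression(captured_face, references))
```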

[0066] In some embodiments, by capturing the image/images, the system 102 can mark the participant as present in the real time interactive event. This automatic marking of presence may enable the system to overcome limitations such as false presence marking, proxy presence marking, and the like. The system also facilitates determining and capturing presence information of the host entity (e.g., teacher or instructor) during the real time interactive event. This speeds up and brings consistency to gathering the participant data. Additionally, the system 102 facilitates avoiding fake admissions obtained via bribe or ranking mechanisms.
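
The automatic presence marking described above might, for example, be realized along the following lines; the attendance record is modeled here as a simple in-memory mapping, and the participant identifier is hypothetical.

```python
from datetime import datetime, timezone
from typing import Dict

attendance: Dict[str, Dict[str, str]] = {}

def mark_present(participant_id: str, face_matched: bool) -> None:
    """Record presence only when the captured face matches the registered participant."""
    if face_matched:
        attendance[participant_id] = {
            "status": "present",
            "marked_at": datetime.now(timezone.utc).isoformat(),
        }

mark_present("student-042", face_matched=True)
print(attendance)
```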

[0067] Further, the system facilitates using the image processing technique to determine whether the participant who has logged into the guest entity device is the one present in front of the guest entity device. This removes any ambiguity between the participant who has actually logged into the real time interactive event and the one who is attending the training.
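
As an illustrative sketch of such a check, assuming some face-embedding function (not specified in this disclosure) has already converted both the live image and the registration-time image into fixed-length vectors, a cosine-similarity comparison could be used; the 0.8 threshold is an arbitrary example value.

```python
import numpy as np

def same_person(live_embedding: np.ndarray, registered_embedding: np.ndarray,
                threshold: float = 0.8) -> bool:
    """Compare the live face embedding with the registration-time embedding."""
    a = live_embedding / np.linalg.norm(live_embedding)
    b = registered_embedding / np.linalg.norm(registered_embedding)
    return float(np.dot(a, b)) >= threshold
```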

[0068] In some embodiments, the system 102 may include the participant behavior determination unit 216. The facial recognition engine of the system 102 may also analyse the determined facial expressions and generate data signifying an emotion or a level of attention of the participant within the captured image/images. For example, a teacher or a host entity viewing the captured image/images of the guest entity participating within an online classroom may determine, based on the facial expressions of the participant seen and captured, a level of attention for the participants. In this way, the host entity may be better able to gauge whether the participants are taking interest in or understanding the material being presented in real time, and appropriately modify or augment their presentation to enhance the learning experience of the participants. In an additional embodiment, the participant behavior during the real time interactive event may be any or a combination of happy, confused, focused, bored, attentive and understood.

[0069] In another embodiment, the system 102 can record behavior patterns of the guest entity's behavior over a course of time (e.g., during lectures of a particular subject, of a particular host entity, and the like).

[0070] In an embodiment, the participant behavior determination unit 216 may determine, using the captured facial features, that the guest entity is not sitting idle and is paying attention during the real time interactive event where both the guest entity and the host entity are present and participating (e.g., during a live classroom session, training session, and the like).
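
One possible, non-limiting way to decide that the guest entity is not sitting idle is to track a per-frame attention flag (derived, for example, from head-pose or gaze attributes) over a sliding window; the window length and threshold below are arbitrary illustrative values.

```python
from collections import deque

class AttentionMonitor:
    def __init__(self, window: int = 30, min_ratio: float = 0.6) -> None:
        self.frames = deque(maxlen=window)  # True = looking at the screen
        self.min_ratio = min_ratio

    def update(self, looking_at_screen: bool) -> str:
        """Append the latest per-frame flag and return the current attention state."""
        self.frames.append(looking_at_screen)
        ratio = sum(self.frames) / len(self.frames)
        return "attentive" if ratio >= self.min_ratio else "distracted"

monitor = AttentionMonitor()
for flag in [True, True, False, True, True]:
    state = monitor.update(flag)
print(state)  # "attentive" for this short sample
```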

[0071] FIG. 3 illustrates exemplary representation 300 of the participant behavior during the real time interactive event in accordance with an embodiment of the present disclosure.

[0072] In an exemplary embodiment, FIG. 3 illustrates a set of emotions and behavior patterns that are captured and determined from the captured facial images of the guest entity. At 302, the guest entity is shown with a determined behavior pattern of being happy while attending the real time interactive event. For example, the guest entity may be happy when the real time interactive event presents a topic of interest or, for example, when the guest entity has performed well on an evaluation being conducted during the event.

[0073] As an example, at 302 and at 316, excited and scared behavior patterns of the guest entity, respectively, are determined and illustrated. For example, the guest entity may be shocked or scared on receiving some instruction or training material. Alternatively, the guest entity may be involved with some other computing device and may feel scared while using it. Additionally, the guest entity may be feeling excited on receiving some information from the host entity during the real time interactive event.

[0074] In an exemplary embodiment, at 304 the guest entity may show patterns of being proud. For example, the guest entity may be shown some training material that encourages him toward better future performance and hence may show signs of being proud.

[0075] In an exemplary embodiment, at 306 the guest entity may show patterns of being happy, and at 314 of being hopeful. For example, the material shown during the training session may be for evaluation of the participant, and the participant may be hopeful of receiving a positive evaluation from the host participant (e.g., the teacher or the instructor).

[0076] In an exemplary embodiment, at 308 the guest entity may show patterns of being worried. For example, the entity may be worried when the interactive event shows training material that is of concern to the guest entity.

[0077] In an exemplary embodiment, at 310 the guest entity may show patterns of being disappointed, and at 316 of being sad. For example, the guest participant may not be attentive and may be lost in his thoughts, making him feel sad and dull.

[0078] In an exemplary embodiment, at 310 the guest entity may show patterns of being angry. For example, the entity may not be attentive while attending the real time interactive event and may show signs of anger due to an interaction with one of his friends or while using another computing device.

[0079] While the above examples of the guest entity's level of attention captured from the facial features are presented as exemplary embodiments, they are not listed so as to limit the scope of the invention and shall be construed in the broadest form possible.

[0080] In an embodiment, the guest entity's level of attention while participating in the real time interactive event can be determined from the captured image or a set of images. In an embodiment, the system 102 may comprise an image processing module at the host entity's device. The image processing module may include components for detecting edges within a two-dimensional image, detecting the entity's outline in a two-dimensional image frame, and detecting various facial expressions, for example dilated pupils, a smile, a surprise expression, an anger expression, and like expressions.

[0081] In an embodiment, the captured image/images may be converted into a two-dimensional image that is then typically converted into a bitmap. An algorithm may be applied to the bitmap, for example, for (1) edge detection to detect edges of features, and (2) detection of human faces or human outlines. The algorithm may be resident at the host entity's device. The facial expressions can be detected from the captured images by (1) detecting a set of edges, and then (2) comparing the set of detected edges, or points on those edges, with a predetermined library (for example, set) of edges or points corresponding to the guest entity's smiling or laughing facial expression.
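
A rough sketch of such an edge-based comparison is given below, using Canny edge detection and a simple overlap score against a stored edge template; the template library, the Canny thresholds, and the acceptance ratio are illustrative assumptions rather than requirements of the disclosure.

```python
import cv2
import numpy as np

def edge_map(gray: np.ndarray) -> np.ndarray:
    """Detect edges in a grayscale bitmap."""
    return cv2.Canny(gray, threshold1=50, threshold2=150)

def matches_expression(face_gray: np.ndarray, template_edges: np.ndarray,
                       min_overlap: float = 0.5) -> bool:
    """Compare edges of the captured face with a stored expression edge template."""
    resized = cv2.resize(face_gray, template_edges.shape[::-1])  # (width, height)
    edges = edge_map(resized)
    overlap = np.logical_and(edges > 0, template_edges > 0).sum()
    return overlap / max((template_edges > 0).sum(), 1) >= min_overlap
```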

[0082] In an exemplary embodiment, the captured facial image may be assigned a value based on comparison with the predefined set of facial expressions. Processing the assigned values for the guest entity may be accomplished by adding the values together. Therefore, the sum of the assigned values (e.g., the total value) may be used to determine the level of attention, which may in turn be used to gauge the overall emotion of the participants of the event. For example, if the combined value is greater than a certain level, a certain emotion may be attributed to the participants. As an example, a sum greater than 30 (a predefined value) may correspond to the participant being “happy”, and a sum less than 30 may correspond to the participant being “sad” or “angry”. While only a sum of the assigned values is described, persons having ordinary skill in the art would understand that any number of algorithms or methods can be used to process the assigned values.
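
The thresholding example above can be made concrete as follows; the individual feature scores are made-up numbers used only to show the arithmetic against the predefined value of 30.

```python
# Hypothetical per-feature scores assigned from the comparison step.
feature_scores = {"mouth_curve": 12, "eyebrow_raise": 9, "gaze_on_screen": 14}

total = sum(feature_scores.values())                # 35
overall = "happy" if total > 30 else "sad or angry"
print(total, overall)                               # 35 happy
```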

[0083] In an exemplary embodiment, the system 102 may facilitate displaying a message within a user interface of the guest entity's device 106. The message may indicate an overall status, or emotion, of the participant accessing the interactive real time event. For instance, the message may state: “PARTICIPANT IS HAPPY,” or “ATTENTIVENESS OF THE PARTICIPANT IS GOOD.” Further, the message may change as the facial expressions of one or more of the participants change. Further, in an embodiment, the message may be displayed along with a change in screen color at the guest entity's device.
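
A hypothetical helper that builds such a message from the determined state might look as follows; the exact wording, and any accompanying screen-color change, remain implementation choices.

```python
def status_message(behavior: str, attention_level: str) -> str:
    """Compose the on-screen message shown at the guest entity's device."""
    if behavior == "happy":
        return "PARTICIPANT IS HAPPY"
    if attention_level == "good":
        return "ATTENTIVENESS OF THE PARTICIPANT IS GOOD"
    return f"PARTICIPANT APPEARS {behavior.upper()}"

print(status_message("happy", "good"))  # PARTICIPANT IS HAPPY
```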

[0084] An advantage of the system lies in determining the participant behavior in terms of the attention the participant is paying to the session being conducted during the real time interactive event. The processors may receive data packets over non-overlapping or partially overlapping data channels and may select a first set of the data packets through a common data channel. The system may facilitate increasing the speed of transmission (e.g., sending and receiving) of the data packets over the communication channel. This may lead to faster execution of the communication and hence faster processing at the system 102.
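
As a non-limiting sketch of funneling packets from several, possibly overlapping, data channels into a single common channel while keeping only packets that carry facial-image attributes, the following could be used; the channel names and packet structure are illustrative.

```python
from queue import Queue
from typing import Dict, List

def merge_channels(channels: Dict[str, List[dict]]) -> "Queue[dict]":
    """Funnel packets from several channels into one common channel."""
    common: "Queue[dict]" = Queue()
    seen_ids = set()
    for packets in channels.values():
        for pkt in packets:
            # Drop duplicates arriving on partially overlapping channels and
            # packets without facial-image attributes.
            if pkt["id"] in seen_ids or "face_attributes" not in pkt:
                continue
            seen_ids.add(pkt["id"])
            common.put(pkt)
    return common

channels = {
    "video": [{"id": 1, "face_attributes": {"gaze": 0.9}}, {"id": 2}],
    "image": [{"id": 1, "face_attributes": {"gaze": 0.9}}],  # overlaps with "video"
}
print(merge_channels(channels).qsize())  # 1
```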

[0085] FIG. 4 illustrates an exemplary method for determination of participant behavior during the real time interactive event in accordance with an embodiment of the present disclosure.

[0086] In an embodiment, at block 402 a set of unsynchronized data packets are received by a facial recognition engine across non-overlapping or partially overlapping data channels from a computing device associated with the participant. The set of unsynchronized data packets may pertain to any or a combination of video signals and image signals.

[0087] In an embodiment, at block 404 through a common data channel, a first set of data packets are selected from the received set of unsynchronized data packets, the first set of data packets pertains to one or more attributes of a facial image of the participant.

[0088] In an embodiment, at block 406, through the common data channel, the extracted first set of data packets are matched with a preconfigured dataset comprising information related to a plurality of facial expressions, and at block 408, responsive to the matching, a facial match of the selected facial image from the plurality of the predefined facial expressions is determined to identify the participant behavior during the real time interactive event.

[0089] FIG. 5 illustrates an exemplary computer system 500 to implement the proposed system in accordance with embodiments of the present disclosure.

[0090] As shown in FIG. 5, the computer system can include an external storage device 510, a bus 520, a main memory 530, a read only memory 540, a mass storage device 550, a communication port 560, and a processor 570. A person skilled in the art will appreciate that the computer system may include more than one processor and communication ports. Examples of processor 570 include, but are not limited to, an Intel® Itanium® or Itanium 2 processor(s), or AMD® Opteron® or Athlon MP® processor(s), Motorola® lines of processors, FortiSOC™ system on a chip processors or other future processors. Processor 570 may include various modules associated with embodiments of the present invention. Communication port 560 can be any of an RS-232 port for use with a modem based dialup connection, a 10/100 Ethernet port, a Gigabit or 10 Gigabit port using copper or fiber, a serial port, a parallel port, or other existing or future ports. Communication port 560 may be chosen depending on a network, such as a Local Area Network (LAN), Wide Area Network (WAN), or any network to which the computer system connects.

[0091] Memory 530 can be Random Access Memory (RAM), or any other dynamic storage device commonly known in the art. Read only memory 540 can be any static storage device(s) e.g., but not limited to, a Programmable Read Only Memory (PROM) chips for storing static information e.g., start-up or BIOS instructions for processor 570. Mass storage 550 may be any current or future mass storage solution, which can be used to store information and/or instructions. Exemplary mass storage solutions include, but are not limited to, Parallel Advanced Technology Attachment (PATA) or Serial Advanced Technology Attachment (SATA) hard disk drives or solid-state drives (internal or external, e.g., having Universal Serial Bus (USB) and/or Firewire interfaces), e.g. those available from Seagate (e.g., the Seagate Barracuda 7102 family) or Hitachi (e.g., the Hitachi Deskstar 7K1000), one or more optical discs, Redundant Array of Independent Disks (RAID) storage, e.g. an array of disks (e.g., SATA arrays), available from various vendors including Dot Hill Systems Corp., LaCie, Nexsan Technologies, Inc. and Enhance Technology, Inc.

[0092] Bus 520 communicatively couples processor(s) 570 with the other memory, storage and communication blocks. Bus 520 can be, e.g., a Peripheral Component Interconnect (PCI) / PCI Extended (PCI-X) bus, Small Computer System Interface (SCSI), USB or the like, for connecting expansion cards, drives and other subsystems as well as other buses, such as a front side bus (FSB), which connects processor 570 to the software system.

[0093] Optionally, operator and administrative interfaces, e.g., a display, keyboard, and a cursor control device, may also be coupled to bus 520 to support direct operator interaction with the computer system. Other operator and administrative interfaces can be provided through network connections connected through communication port 560. External storage device 510 can be any kind of external hard-drive, floppy drive, IOMEGA® Zip Drive, Compact Disc - Read Only Memory (CD-ROM), Compact Disc - Re-Writable (CD-RW), or Digital Video Disk - Read Only Memory (DVD-ROM). Components described above are meant only to exemplify various possibilities. In no way should the aforementioned exemplary computer system limit the scope of the present disclosure.

[0094] Thus, it will be appreciated by those of ordinary skill in the art that the diagrams, schematics, illustrations, and the like represent conceptual views or processes illustrating systems and methods embodying this invention. The functions of the various elements shown in the figures may be provided through the use of dedicated hardware as well as hardware capable of executing associated software. Similarly, any switches shown in the figures are conceptual only. Their function may be carried out through the operation of program logic, through dedicated logic, through the interaction of program control and dedicated logic, or even manually, the particular technique being selectable by the entity implementing this invention. Those of ordinary skill in the art further understand that the exemplary hardware, software, processes, methods, and/or operating systems described herein are for illustrative purposes and, thus, are not intended to be limited to any particular named.

[0095] While embodiments of the present invention have been illustrated and described, it will be clear that the invention is not limited to these embodiments only. Numerous modifications, changes, variations, substitutions, and equivalents will be apparent to those skilled in the art, without departing from the spirit and scope of the invention, as described in the claims.

[0096] In the foregoing description, numerous details are set forth. It will be apparent, however, to one of ordinary skill in the art having the benefit of this disclosure, that the present disclosure can be practiced without these specific details. In some instances, well-known structures and devices are shown in block diagram form, rather than in detail, to avoid obscuring the present invention.

[0097] While the foregoing describes various embodiments of the disclosure, other and further embodiments of the disclosure may be devised without departing from the basic scope thereof. The disclosure is not limited to the described embodiments, versions or examples, which are included to enable a person having ordinary skill in the art to make and use the disclosure when combined with information and knowledge available to the person having ordinary skill in the art.

ADVANTAGES OF THE INVENTION

[0098] The present disclosure provides a participant behavior determination system that captures participant facial expressions to determine a level of attentiveness during a real time interactive event.

[0099] The present disclosure provides a mechanism to capture a participant's behavior while the participant is logged in to the system.

[00100] The present disclosure provides a mechanism to determine a participant's presence while the participant is logged in to the system for a real time interactive event.

[00101] The present disclosure facilitates determining and recording the participant deviation during participation in a real time interactive event.

[00102] The present disclosure facilitates updating and upgrading content being presented during the real time interactive event based on the participant behavior.

[00103] The present disclosure facilitates providing time and location flexibility for the participants of the real time interactive event.

[00104] The present disclosure facilitates easier access to and sharing of information for the participants of the real time interactive event.