

Title:
SYSTEM AND METHOD FOR DEVICE MOTION DETECTION FOR GESTURE RECOGNITION
Document Type and Number:
WIPO Patent Application WO/2022/120474
Kind Code:
A1
Abstract:
A method for gesture recognition comprising the steps of providing an electronic device to be worn by a participant on a moving body member and collecting data representative of location over time of the electronic device. The location data and identification data of the electronic device are sent to a gesture recognition server to classify the incoming data as being a gesture among a dictionary of predefined gestures. In view of the gesture as classified, the gesture recognition server sends an instruction to a betting server or reward server to instruct a bet or a reward associated to the gesture as classified among the dictionary of predefined gestures, each having bet or reward instructions associated thereto; the instruction for the bet further comprising the identification data of the electronic device. Feedback representative of the bet or reward is sent to the electronic device based on the identification data thereof.

Inventors:
PERRON MARIO (CA)
Application Number:
PCT/CA2021/051754
Publication Date:
June 16, 2022
Filing Date:
December 07, 2021
Assignee:
KROWDX INC (CA)
International Classes:
G07F17/32; G06F3/01
Foreign References:
US20070259716A12007-11-08
Attorney, Agent or Firm:
BENOIT & COTE INC. (CA)
Claims:
CLAIMS:

1. A method for gesture recognition comprising the steps of:

- providing an electronic device to be worn by a participant on a moving body member;

- collecting data representative of location over time of the electronic device;

- sending the data representative of location over time of the electronic device, and identification data of the electronic device, to a gesture recognition server;

- using the data representative of location over time of the electronic device to classify the incoming data as being a gesture among a dictionary of predefined gestures;

- in view of the gesture as classified, sending, by the gesture recognition server, to a betting server an instruction for a bet associated to the gesture as classified among the dictionary of predefined gestures, each having bet instructions associated thereto; the instruction for the bet further comprising the identification data of the electronic device; and

- by the betting server, sending feedback representative of the bet having been placed to the electronic device based on the identification data thereof.

2. The method of claim 1, wherein collecting data representative of location over time of the electronic device comprises using an accelerometer of the electronic device to infer a three-dimensional motion over time.

3. The method of claim 1, wherein collecting data representative of location over time of the electronic device comprises using an external camera which identifies the location over time of the electronic device over at least a two-dimensional plane of a field of view of the camera.

4. The method of claim 1, wherein collecting data representative of location over time of the electronic device comprises using a detector which detects the electronic device to track a three-dimensional motion thereof over time.

5. The method of claim 1, wherein the betting server is distinct from the gesture recognition server.

6. A method for gesture recognition comprising the steps of:

- providing an electronic device to be associated to a participant, the participant having a moving body member;

- collecting data representative of location over time of the moving body member;

- associating the moving body member and the electronic device as belonging to the participant;

- sending the data representative of location over time of the moving body member, and identification data of the electronic device, to a gesture recognition server;

- using the data representative of location over time of the electronic device to classify the incoming data as being a gesture among a dictionary of predefined gestures;

- in view of the gesture as classified, sending, by the gesture recognition server, to a betting server an instruction for a bet associated to the gesture as classified among the dictionary of predefined gestures, each having bet instructions associated thereto; the instruction for the bet further comprising the identification data of the electronic device; and

- by the betting server, sending feedback representative of the bet having been placed to the electronic device based on the identification data thereof.

7. The method of claim 6, wherein collecting data representative of location over time of the electronic device comprises using an external camera which identifies the location over time of the moving body member over at least a two-dimensional plane of a field of view of the camera.

8. The method of claim 6, wherein collecting data representative of location over time of the moving body member comprises using an additional electronic device comprising a sensor on the moving body member, and a detector which tracks a three-dimensional motion of the sensor on the moving body member.

9. The method of claim 6, wherein collecting data representative of location over time of the moving body member comprises using an additional electronic device comprising a sensor on the moving body member, and using an external camera which identifies the location over time of the sensor over at least a two-dimensional plane of a field of view of the camera.

10. The method of claim 6, wherein the betting server is distinct from the gesture recognition server.

11. A method for gesture recognition comprising the steps of:

- providing an electronic device to be worn by a participant on a moving body member;

- collecting data representative of location over time of the electronic device;

- sending the data representative of location over time of the electronic device, and identification data of the electronic device, to a gesture recognition server;

- using the data representative of location over time of the electronic device to classify the incoming data as being a gesture among a dictionary of predefined gestures representative of respective visual brands;

- in view of the gesture as classified, sending, by the gesture recognition server, to a reward server an instruction for a reward associated to the gesture as classified among the dictionary of predefined gestures, each having reward instructions associated thereto; the instruction for the reward further comprising the identification data of the electronic device; and

- by the reward server, sending feedback representative of the gesture having been associated to one of the visual brands to the electronic device based on the identification data thereof.

12. The method of claim 11, wherein collecting data representative of location over time of the electronic device comprises using an accelerometer of the electronic device to infer a three-dimensional motion over time.

13. The method of claim 11, wherein collecting data representative of location over time of the electronic device comprises using an external camera which identifies the location over time of the electronic device over at least a two-dimensional plane of a field of view of the camera.

14. The method of claim 11, wherein collecting data representative of location over time of the electronic device comprises using a detector which detects the electronic device to track a three-dimensional motion thereof over time.

15. The method of claim 11, wherein the reward server is distinct from the gesture recognition server.

16. A method for gesture recognition comprising the steps of:

- providing an electronic device to be associated to a participant, the participant having a moving body member;

- collecting data representative of location over time of the moving body member;

- associating the moving body member and the electronic device as belonging to the participant;

- sending the data representative of location over time of the moving body member, and identification data of the electronic device, to a gesture recognition server;

- using the data representative of location over time of the electronic device to classify the incoming data as being a gesture among a dictionary of predefined gestures;

- in view of the gesture as classified, sending, by the gesture recognition server, to a reward server an instruction for a reward associated to the gesture as classified among the dictionary of predefined gestures, each having reward instructions associated thereto; the instruction for the reward further comprising the identification data of the electronic device; and

- by the reward server, sending feedback representative of the gesture having been associated to one of the visual brands to the electronic device based on the identification data thereof.

17. The method of claim 16, wherein collecting data representative of location over time of the electronic device comprises using an external camera which identifies the location over time of the moving body member over at least a two-dimensional plane of a field of view of the camera.

18. The method of claim 16, wherein collecting data representative of location over time of the moving body member comprises using an additional electronic device comprising a sensor on the moving body member, and a detector which tracks a three-dimensional motion of the sensor on the moving body member.

19. The method of claim 16, wherein collecting data representative of location over time of the moving body member comprises using an additional electronic device comprising a sensor on the moving body member, and using an external camera which identifies the location over time of the sensor over at least a two-dimensional plane of a field of view of the camera.

20. The method of claim 16, wherein the reward server is distinct from the gesture recognition server.


Description:
SYSTEM AND METHOD FOR DEVICE MOTION DETECTION FOR GESTURE RECOGNITION

CROSS-REFERENCE TO RELATED APPLICATIONS

[0001] This application claims the benefit of priority of U.S. provisional patent application 63/122,177, filed December 7, 2020, which is hereby incorporated herein by reference in its entirety.

BACKGROUND

(a) Field

[0002] The subject matter disclosed generally relates to electronic device motion detection. More specifically, it relates to human gesture detection using such an electronic device.

(b) Related Prior Art

[0003] Millions of people worldwide engage in various types of gambling or betting. For example, gambling involves the wagering of money (or assets) on an event having an uncertain outcome, with the primary intent of winning money or a prize or reward. Many gambling games thus include three elements: consideration, chance, and prize.

[0004] Even though the names sound similar, “instant” betting, or “in-play” betting, is different from “in-game” betting and has much more to offer sports bettors. In-game wagering is simply wagering on a game while it is happening. Sportsbook odds for in-game betting will usually only change during a timeout or commercial break.

[0005] The subtle difference between the two betting options is that instant / in-play wagering (or betting) takes place continuously throughout the game by offering multiple betting opportunities. New instant / in-play bets could be created and offered for almost every play or possession throughout the game.

[0006] In the United States, a sports book is a location where gamblers can wager on various sports events, including football, basketball, baseball, hockey, golf, soccer, horse racing, boxing, mixed martial arts and various exotic bets. The term “book” historically comes from the actual notebook that those receiving the bets and making the lines would use to keep track of bets.

[0007] Sports books can benefit from instant / in-play betting by offering multiple new, faster-paced wagering opportunities to gamblers.

SUMMARY

[0008] According to an aspect of the disclosure, there is provided a method for gesture recognition comprising the steps of:

- providing an electronic device to be worn by a participant on a moving body member;

- collecting data representative of location over time of the electronic device;

- sending the data representative of location over time of the electronic device, and identification data of the electronic device, to a gesture recognition server;

- using the data representative of location over time of the electronic device to classify the incoming data as being a gesture among a dictionary of predefined gestures;

- in view of the gesture as classified, sending, by the gesture recognition server, to a betting server an instruction for a bet associated to the gesture as classified among the dictionary of predefined gestures, each having bet instructions associated thereto; the instruction for the bet further comprising the identification data of the electronic device; and

- by the betting server, sending feedback representative of the bet having been placed to the electronic device based on the identification data thereof.

[0009] According to an embodiment, collecting data representative of location over time of the electronic device comprises using an accelerometer of the electronic device to infer a three-dimensional motion over time.

[0010] According to an embodiment, collecting data representative of location over time of the electronic device comprises using an external camera which identifies the location over time of the electronic device over at least a two-dimensional plane of a field of view of the camera.

[0011] According to an embodiment, collecting data representative of location over time of the electronic device comprises using a detector which detects the electronic device to track a three-dimensional motion thereof over time.

[0012] According to an embodiment, the betting server is distinct from the gesture recognition server.

[0013] According to another aspect of the disclosure, there is provided a method for gesture recognition comprising the steps of:

- providing an electronic device to be associated to a participant, the participant having a moving body member;

- collecting data representative of location over time of the moving body member;

- associating the moving body member and the electronic device as belonging to the participant;

- sending the data representative of location over time of the moving body member, and identification data of the electronic device, to a gesture recognition server;

- using the data representative of location over time of the electronic device to classify the incoming data as being a gesture among a dictionary of predefined gestures;

- in view of the gesture as classified, sending, by the gesture recognition server, to a betting server an instruction for a bet associated to the gesture as classified among the dictionary of predefined gestures, each having bet instructions associated thereto; the instruction for the bet further comprising the identification data of the electronic device; and

- by the betting server, sending feedback representative of the bet having been placed to the electronic device based on the identification data thereof.

[0014] According to an embodiment, collecting data representative of location over time of the electronic device comprises using an external camera which identifies the location over time of the moving body member over at least a two-dimensional plane of a field of view of the camera.

[0015] According to an embodiment, collecting data representative of location over time of the moving body member comprises using an additional electronic device comprising a sensor on the moving body member, and a detector which tracks a three-dimensional motion of the sensor on the moving body member.

[0016] According to an embodiment, collecting data representative of location over time of the moving body member comprises using an additional electronic device comprising a sensor on the moving body member, and using an external camera which identifies the location over time of the sensor over at least a two-dimensional plane of a field of view of the camera.

[0017] According to an embodiment, the betting server is distinct from the gesture recognition server.

[0018] According to another aspect of the disclosure, there is provided a method for gesture recognition comprising the steps of:

- providing an electronic device to be worn by a participant on a moving body member;

- collecting data representative of location over time of the electronic device;

- sending the data representative of location over time of the electronic device, and identification data of the electronic device, to a gesture recognition server;

- using the data representative of location over time of the electronic device to classify the incoming data as being a gesture among a dictionary of predefined gestures representative of respective visual brands;

- in view of the gesture as classified, sending, by the gesture recognition server, to a reward server an instruction for a reward associated to the gesture as classified among the dictionary of predefined gestures, each having reward instructions associated thereto; the instruction for the reward further comprising the identification data of the electronic device; and

- by the reward server, sending feedback representative of the gesture having been associated to one of the visual brands to the electronic device based on the identification data thereof.

[0019] According to an embodiment, collecting data representative of location over time of the electronic device comprises using an accelerometer of the electronic device to infer a three-dimensional motion over time.

[0020] According to an embodiment, collecting data representative of location over time of the electronic device comprises using an external camera which identifies the location over time of the electronic device over at least a two-dimensional plane of a field of view of the camera.

[0021] According to an embodiment, collecting data representative of location over time of the electronic device comprises using a detector which detects the electronic device to track a three-dimensional motion thereof over time.

[0022] According to an embodiment, the reward server is distinct from the gesture recognition server.

[0023] According to another aspect of the disclosure, there is provided a method for gesture recognition comprising the steps of:

- providing an electronic device to be associated to a participant, the participant having a moving body member;

- collecting data representative of location over time of the moving body member;

- associating the moving body member and the electronic device as belonging to the participant;

- sending the data representative of location over time of the moving body member, and identification data of the electronic device, to a gesture recognition server;

- using the data representative of location over time of the electronic device to classify the incoming data as being a gesture among a dictionary of predefined gestures;

- in view of the gesture as classified, sending, by the gesture recognition server, to a reward server an instruction for a reward associated to the gesture as classified among the dictionary of predefined gestures, each having reward instructions associated thereto; the instruction for the reward further comprising the identification data of the electronic device; and

- by the reward server, sending feedback representative of the gesture having been associated to one of the visual brands to the electronic device based on the identification data thereof.

[0024] According to an embodiment, collecting data representative of location over time of the electronic device comprises using an external camera which identifies the location over time of the moving body member over at least a two-dimensional plane of a field of view of the camera.

[0025] According to an embodiment, collecting data representative of location over time of the moving body member comprises using an additional electronic device comprising a sensor on the moving body member, and a detector which tracks a three-dimensional motion of the sensor on the moving body member.

[0026] According to an embodiment, collecting data representative of location over time of the moving body member comprises using an additional electronic device comprising a sensor on the moving body member, and using an external camera which identifies the location over time of the sensor over at least a two-dimensional plane of a field of view of the camera.

[0027] According to an embodiment, the reward server is distinct from the gesture recognition server.

BRIEF DESCRIPTION OF THE DRAWINGS

[0028] Further features and advantages of the present disclosure will become apparent from the following detailed description, taken in combination with the appended drawings, in which:

[0029] FIG. 1 is a schematic diagram of a system allowing inputs from participants, physically or virtually attending the live event, to signal their intention to bet on an instant / in-play betting opportunity according to an embodiment of the present disclosure;

[0030] FIG. 2 is a flowchart of a method for processing motion inputs, decoding the gambler intention in context, and giving the appropriate feedback prior and after the betting opportunity unfolds according to an embodiment of the present disclosure;

[0031] FIG. 3 is a schematic diagram of a system allowing inputs from participants, physically or virtually attending an event, to perform a gesture which is representative of a visual brand and detect it according to an embodiment of the present disclosure; and

[0032] FIG. 4 is a flowchart of a method for processing motion inputs, decoding the gesture which is representative of a visual brand, and giving the appropriate feedback in terms of reward to the user account tied with an electronic device undergoing the gesture, according to an embodiment of the present disclosure.

[0033] It will be noted that throughout the appended drawings, like features are identified by like reference numerals.

DETAILED DESCRIPTION

[0034] There is described below a method for gesture recognition, comprising the detection of motion from a human, especially holding an electronic device or placed in front of an image-capturing electronic device, and the recognition of the motion to identify a pattern in motion which triggers additional actions based on what type of motion is identified. As an exemplary use for the method for gesture recognition, instant / in-play betting can be performed based on said gesture recognition. Other exemplary uses such as pattern recognition of gestures for brand mimicking are described below.

[0035] As used herein, the term “motion” is intended to convey the broadest possible meaning and refers to any sign, movement, body motion, etc. that can be recognized by a computer algorithm. By way of non-limiting example, in some embodiments, the motion may be performed and recognized per se, without using an electronic device, e.g., a motion performed using a bare hand, a wink of the eye, a facial expression, a body movement, etc., or recognized using an electronic device, e.g., moving in the air a smartphone, a connected watch, a connected band, a connected IoT object, a connected piece of clothing, or any other connected object undergoing the motion and which is the object being tracked.

[0036] Depending on the embodiment, the motion can be detected and tracked in space (or in a plane within the space) in various ways using a user’s device such as the gambler device 106 of Fig. 1 or the participant device 302 of Fig. 3, which will be described in greater detail further below regarding these examples. Such an electronic device can be worn or held by the user, especially held by the body part of which the motion or gesture is to be monitored, e.g., the smartphone or any other suitable electronic device can be handheld when the hand gesture is monitored.

[0037] A first way to track motion during a gesture is to use a camera or any other image-capturing electronic device having a field of view in which the person for whom the gesture is to be recognized is to be positioned. The camera or other image-capturing electronic device collects images over time, i.e., it collects a video of that person while presumably making the gesture, and sends the video or a representative portion thereof (cropped, sampled, etc.) to a recognition server which analyzes the images, as described further below. In this embodiment, the gesture recognition server (106, 306) needs to perform image analysis to identify the body parts for which motion is expected (either with or without an electronic device to aid in this gesture recognition and analysis), detect the motion, and track the motion in space or in a plane facing the camera (i.e., the normal plane subtended by the field of view of the camera used to externally track the motion of any object, such as a personal computing device or a body part in motion).
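The sampling of a "representative portion" of the video before transmission, as mentioned above, can be sketched as follows. This is an illustrative Python fragment only, not part of the disclosure; the function name and frame rates are assumptions.

```python
def sample_frames(frames, target_fps, source_fps):
    """Keep a representative subset of captured frames before sending
    the video to the gesture recognition server, reducing bandwidth
    while preserving the motion trajectory."""
    step = max(1, round(source_fps / target_fps))
    return frames[::step]

# One second of hypothetical 60 fps capture, downsampled to 15 fps.
frames = list(range(60))            # frame indices stand in for image data
kept = sample_frames(frames, target_fps=15, source_fps=60)
print(len(kept))                    # 15 frames are kept
```

Cropping the frames to a region of interest around the tracked person would reduce the payload further, at the cost of running a detector on the device.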

[0038] In another embodiment, a combination of a sensor and a detector can be used as the user’s device. The person may wear a sensor on the body part for which the motion is to be tracked in space or in a plane. A detector can be placed close to that person to detect the location of the sensor. This can include a camera or any other image-capturing electronic device which is programmed to track a particular sensor, such as a part having a particular shape or color, a passive sensor which can reflect a signal emitted and detected by the image-capturing electronic device, or an active sensor which can emit a signal to be detected by the image-capturing electronic device. The motion tracking data is sent to the gesture recognition server (106, 306) which analyzes the data.

[0039] In yet another embodiment, the user’s electronic device can be programmed to have a self-awareness of its own motion in space, using its own sensors or other means to determine its location and/or movement, and to send the location and/or movement (including acceleration) data to the gesture recognition server (106, 306) which analyzes the data. For example, this would include a smartphone being handheld, or held or worn using an appropriate support on the body part of which the 2D or preferably 3D motion is being monitored for gesture recognition.

[0040] This can include acceleration measurements made by an accelerometer in the user’s electronic device, which can be a built-in accelerometer, the measurements being accessible via the operating system of the user’s electronic device. The accelerometer can comprise the required measuring parts to measure acceleration in three independent axes, allowing the measurements of 3D motion in space.
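As an illustration of the two paragraphs above, the three-axis accelerometer samples and the device identification data that must travel together to the gesture recognition server could be buffered as follows. This is a minimal Python sketch; the class name, field names, and device identifier are assumptions, not part of the disclosure.

```python
import time
from dataclasses import dataclass, field

@dataclass
class MotionBuffer:
    """Accumulates timestamped 3-axis acceleration samples for one gesture window."""
    device_id: str
    samples: list = field(default_factory=list)

    def add(self, ax, ay, az, t=None):
        """Record one accelerometer reading (three independent axes) with its timestamp."""
        self.samples.append((t if t is not None else time.time(), ax, ay, az))

    def payload(self):
        # The method sends both the motion trace and the identification
        # data of the electronic device to the gesture recognition server.
        return {"device_id": self.device_id, "trace": self.samples}

buf = MotionBuffer("wearable-001")   # hypothetical device identifier
buf.add(0.0, 0.1, 9.8, t=0.00)
buf.add(0.4, 0.2, 9.7, t=0.02)
print(len(buf.payload()["trace"]))   # 2 samples buffered
```

On a real device, the readings would come from the operating system's accelerometer API rather than being supplied by hand as here.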

[0041] The 3D motion recognition technology can be based on artificial intelligence (e.g., machine learning) that supports the secure and specific authentication of a person, based on his/her unique way of moving his/her smartphone and/or wearable IoT or other object or body part in the air while executing a specific motion, such as mimicking a visual brand with a motion occurring over a period of time.
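The disclosure does not specify the classifier, but the idea of matching a motion trace against a dictionary of predefined gestures can be sketched with a simple nearest-template comparison. This is illustrative Python only; a production system would likely use a trained machine-learning model, and the gesture names, templates, and threshold below are invented.

```python
import math

def resample(trace, n=16):
    """Linearly resample a 1-D sequence to n points so that traces of
    different lengths can be compared sample-by-sample."""
    if len(trace) == 1:
        return [trace[0]] * n
    out = []
    for i in range(n):
        pos = i * (len(trace) - 1) / (n - 1)
        lo = int(pos)
        hi = min(lo + 1, len(trace) - 1)
        frac = pos - lo
        out.append(trace[lo] * (1 - frac) + trace[hi] * frac)
    return out

def classify(trace, dictionary, threshold=2.0):
    """Return the name of the closest predefined gesture, or None when no
    template lies within the distance threshold (i.e., no gesture detected)."""
    best_name, best_dist = None, float("inf")
    q = resample(trace)
    for name, template in dictionary.items():
        t = resample(template)
        d = math.sqrt(sum((a - b) ** 2 for a, b in zip(q, t)))
        if d < best_dist:
            best_name, best_dist = name, d
    return best_name if best_dist <= threshold else None

# Toy dictionary of 1-D acceleration profiles (real traces would be 3-axis).
gestures = {"raise": [0, 1, 2, 3], "shake": [0, 2, 0, 2, 0, 2]}
print(classify([0, 1, 2, 3.1], gestures))   # raise
```

A machine-learning classifier would replace the Euclidean comparison with a learned decision boundary, which is what enables the per-user authentication behaviour described above.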

[0042] A 2D motion recognition technology could, for example, involve a camera with a real-time image analysis technology coupled thereto to identify a body part or electronic device on the image and follow its movement on the image, thereby identifying 2D motion. Optionally, in such a circumstance, one may be able to add a technology to determine a distance (depth) from the camera, for example by measuring the variation of the area of the electronic device or body part, indicating that the distance varies. A variation of size and shape would also indicate 3D rotation of the electronic device being tracked, such that the two motions (translation in a radial direction from the camera, which increases or decreases distance; and rotation) may be discriminated and thereby measured independently. Optionally, a technology such as Bluetooth™ transceivers may be used to determine the approximate distance from the camera if all devices are Bluetooth™-enabled.
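The depth cue described above, where a shrinking apparent size indicates a growing distance, follows the standard pinhole-camera relation. A sketch, with invented numbers:

```python
def estimate_distance(real_width_m, focal_px, apparent_width_px):
    """Pinhole-camera estimate: distance = focal_length * real_size / apparent_size.
    A halved apparent width implies a doubled distance from the camera."""
    return focal_px * real_width_m / apparent_width_px

# Hypothetical numbers: a 0.07 m wide phone seen by a camera with a
# focal length of 1000 px.
d1 = estimate_distance(0.07, 1000, 70)   # 1.0 m away when 70 px wide
d2 = estimate_distance(0.07, 1000, 35)   # 2.0 m away when 35 px wide
print(d1, d2)
```

Discriminating radial translation from rotation, as the paragraph notes, requires also tracking shape (aspect ratio), since rotation changes apparent shape while pure radial translation scales it uniformly.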

[0043] In some cases, a 2D motion recognition from a reference location known to the users may be sufficient if the rules communicated to the participants are clear to this effect. Otherwise, to permit a location-independent motion recognition from a user (i.e., without having everyone make a movement specifically in front of the same camera), a free-space, 3D motion recognition may be more appropriate, flexible, and preferable, despite being technically harder to implement using a camera.

[0044] As an exemplary use for the method, and referring to Figs. 1-2, the present invention can be used for acquiring an intended bet from a gambler, intentionally expressed by the gambler using a body motion and/or a motion involving the gambler’s personal electronic device, especially for instant / in-play betting as discussed above, using motion recognition by one of the three embodiments described above; confirming the bet to said gambler via a vibration and/or haptic signal, sound and/or image; and finally, heralding a win to the gambler and, optionally, to the surrounding attendees. This authenticated execution of a 3D gesture is converted by a server into a communication over a network to convey a firm order for an instant / in-play bet on a betting platform (e.g., DraftKings™), for example using APIs to have the servers (e.g., a server for the gesture analysis and another, independent server for placing the bet) communicate with each other.
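The disclosure leaves the inter-server API open; one plausible shape for the instruction that the gesture recognition server sends to the independent betting server is a JSON message carrying the classified gesture, the bet instruction associated to it, and the device identification data needed to route feedback. This is a sketch under that assumption; the field names and the gesture-to-bet dictionary are invented.

```python
import json

def make_bet_instruction(device_id, gesture, gesture_dictionary):
    """Compose the instruction the gesture recognition server would send
    to a distinct betting server: the bet tied to the classified gesture,
    plus the device identification data for routing feedback back."""
    if gesture not in gesture_dictionary:
        raise ValueError(f"unknown gesture: {gesture}")
    return json.dumps({
        "device_id": device_id,
        "gesture": gesture,
        "bet": gesture_dictionary[gesture],  # each gesture carries its own bet instruction
    })

# Hypothetical dictionary mapping predefined gestures to bet instructions.
bets = {"raise": {"market": "penalty_kick", "side": "yes"},
        "shake": {"market": "penalty_kick", "side": "no"}}
msg = make_bet_instruction("wearable-001", "raise", bets)
print(msg)
```

In deployment this message would be POSTed over HTTPS to the betting platform's order endpoint, whose actual schema would be dictated by that platform's API.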

[0045] For example, this exemplary use for the method may involve an attendee of a live event, in person or from afar (home, sports bar, etc.), where the attendee, or each of the attendees, brings their own device (smartphone, smartwatch, smart sneakers, smart clothes, head-mounted device, etc.), or is provided with one (e.g., RFID bracelet, connected LED light stick, etc.).

[0046] The 2D or 3D gesture recognition is used to expedite betting orders given in a split second by the attendee, as a gambler, in response to an instant / in-play betting opportunity initiated by the “operator” (sports book / online betting platform, i.e., the “house”).

[0047] A betting opportunity, for use by the server to generate offers from which an eventual intention-backed motion is to be recognized from participants, is created by the operator at any moment during the game and publicized to the participants, for example and without limitation: penalty kick (soccer), steal (basketball), interception (football), power play (hockey), full count (baseball), etc. These are non-limiting examples of instantaneous or short events which happen in real time, over a short period, during a sport event. Bets may be made on such instantaneous or short events according to the method described herein. The method described herein therefore needs to collect bets in a very short period of time from a great number of participants; e.g., hundreds of participants in the same space can bet simultaneously on a prediction of an instantaneous or short event proposed by the operator, and the betting period may last only a short time, such as less than 10 seconds, or less than one minute, for example. This implies that the server should have allocated capacity to process the images and detect betting intents from the motion of multiple users in a very short period of time. This also implies that the server and the processing capacity to handle such images in short periods of time are essential to carry out embodiments of the method according to the present disclosure.
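The timing constraint described above, where bets must be collected within a window of seconds, can be sketched as a simple deadline filter applied to classified gesture submissions. Illustrative Python only; the identifiers and timestamps are invented.

```python
def collect_bets(submissions, window_open, window_seconds=10.0):
    """Accept only bets whose classified gesture arrived inside the betting
    window; late submissions are rejected, since the in-play event being
    bet on may already have occurred by then."""
    deadline = window_open + window_seconds
    accepted, rejected = [], []
    for device_id, gesture, t in submissions:
        (accepted if window_open <= t <= deadline else rejected).append(device_id)
    return accepted, rejected

# Three hypothetical submissions against a window opening at t=100 s.
subs = [("dev-1", "raise", 100.2), ("dev-2", "shake", 105.0), ("dev-3", "raise", 111.5)]
ok, late = collect_bets(subs, window_open=100.0)
print(ok, late)   # dev-1 and dev-2 accepted; dev-3 arrived after the 10 s window
```

The hard part in practice is upstream of this filter: classifying hundreds of simultaneous motion streams fast enough that valid gestures reach the filter before the deadline, which is why the text calls for dedicated server capacity.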

[0048] Since it occurs on the spur of the moment, this betting opportunity has to be very clear for the gamblers: for example, “will the soccer player score a goal with this penalty kick that was just called by the referee?”.

[0049] After the betting opportunity has been created by the operator, the system 100 notifies the attendees immediately (via vibration, haptic signal, flashing light, or other on their smart device), that is, within a time period starting from the notification of the betting opportunity that was just created and ending after a set period of time which is typically a few seconds, e.g., between 0 and 3 seconds, or between 0 and 10 seconds, or between 0 and 20 seconds, or between 0 and 30 seconds, or between 0 and 1 minute, or between 0 and 2 minutes, for example. In the case of crowds, processing the number of bets which are manually signified using a body movement, with or without a personal computing device, can require very significant processing capacity, hence the need for a dedicated server to perform this task specifically and be able to achieve the processing (identifying the motion for each person within the period of time) in time. The bets need to be confirmed and recorded before the instantaneous event on which the participants are betting occurs (otherwise, the betting makes no sense if the time required to collect the bets is too long and the event on which people bet has already occurred).
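The short betting window described above can be sketched in code. The following is a minimal illustrative sketch only, not the patent's actual implementation; the function names and the 10-second window are assumptions for illustration.

```python
import time

# Hypothetical sketch (not from the patent text): enforcing a short betting
# window so that bets arriving after the deadline are rejected.
BETTING_WINDOW_SECONDS = 10.0  # e.g., a window between 0 and 10 seconds

def open_betting_window(now=None):
    """Return the (start, deadline) timestamps of a new betting window."""
    start = time.time() if now is None else now
    return start, start + BETTING_WINDOW_SECONDS

def accept_bet(bet_timestamp, window):
    """A bet is only recorded if it falls inside the open window."""
    start, deadline = window
    return start <= bet_timestamp <= deadline

window = open_betting_window(now=1000.0)
print(accept_bet(1005.0, window))  # inside the 10-second window -> True
print(accept_bet(1011.0, window))  # after the deadline -> False
```

In practice, the deadline check would run on the server side so that late gestures from the crowd are uniformly rejected before the instantaneous event occurs.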

[0050] The instant odds for this betting opportunity are displayed on the smart device screen, preferably the same electronic device of which the motion is being tracked for gesture recognition, or the electronic device which is previously paired with the RFID bracelet or other independent device or IoT device of which the motion is being tracked for gesture recognition instead of the smartphone. The electronic device on which the odds and betting opportunities are shown should be tied to a user account to ensure that confirmations, betting history and payments are correctly paired with a user and their device.

[0051] According to a first option, and without limitation, the attendee has to make a quick choice between wagering on YES, wagering on NO, or passing on this betting opportunity. The attendee uses a well-defined 3D gesture for the system to recognize as their answer: shake for YES, circle for NO, do nothing for PASS. The association between an “answer” to a betting opportunity being offered and the associated gesture can differ from these examples, as long as it is well communicated to the participants. Preferably, the gesture should be easy to reproduce and also easy to recognize by the server. That betting intention, which is communicated by a gesture from a participant, is interpreted by a gesture recognition server 106 and is then registered with the betting platform under their name (using open APIs) on the other server, the betting server 108, which is preferably distinct from the gesture recognition server 106 and which receives bets.

[0052] According to a second option, the attendee has to make a quick choice between wagering on the player who just did the action, or wagering on the team who just did the action. The attendee uses the 3D gesture representing the player (for example: capital “J” for LeBron James) versus the 3D gesture representing the team (for example: capital “L” for the Lakers) to signify their intention to bet on said player or team (which carry different odds, of course: for example, the odds can be 3-1 that the player scores vs. 2-1 that the team scores). Again, the associations between a betting intention and the gesture shape or pattern associated thereto can vary from these examples, as long as they are clearly communicated to the participants when the offer is being made. When the process of treating a betting opportunity is done and the instantaneous event on which the participants placed their bet has occurred, the result can be displayed on the attendee’s electronic device’s screen as discussed above, along with bells and whistles: flashing screen for winners, sound, etc.

[0053] Embodiments of the present disclosure generally provide a system and method of interactive technologies that allow participants to provide gesture-based inputs to a computer on site at the live event (e.g., in a crowd inside a room or in a stadium or other venue), or remotely, from afar, while following the event on which they bet via TV or live electronic transmission of said event.

[0054] To illustrate the system and method for instant / in-play betting, Fig. 1 depicts a simplified schematic of a representative system 100 enabling communication with at least one participant so as to provide real-time inputs to a live performance. It should be understood that the system 100 shown in Fig. 1 is for illustrative purposes only and that any other suitable system could be used in conjunction with or in lieu of system 100 according to one embodiment of the present disclosure.

[0055] As mentioned above, a first way to track motion during a gesture is to use a camera or any other image-capturing electronic device having a field of view in which the person for whom the gesture is to be recognized is to be positioned. The camera or any other image-capturing electronic device is the gambler device 102, which sends the video or a representative portion thereof (cropped, sampled, etc.) to a gesture recognition server 106 that analyzes the images, as described further below. In this embodiment, the gesture recognition server 106 needs to perform image analysis on the images collected by the gambler device 102 to identify body parts for which motion is expected, detect the motion and track the motion in space or in a plane facing the camera.

[0056] As mentioned above, in another embodiment, a combination of sensor and detector can be used as the gambler device 102. The person may wear a sensor on the body part for which the motion is to be tracked in space or in a plane. A detector, used with the sensor or with a plurality of sensors (for a corresponding plurality of participants or body parts being tracked), can be placed close to that person to detect the location of the sensor. This can include a camera or any other image-capturing electronic device which is programmed to track a particular sensor, such as a part having a particular shape or color, or a passive sensor which can reflect a signal emitted and detected by the image-capturing electronic device, or an active sensor which can emit a signal to be detected by the image-capturing electronic device. The motion tracking data is sent from the gambler device 102 to the gesture recognition server 106, which analyzes the data.

[0057] As mentioned above, in another embodiment, the gambler device 102 can be programmed to have a self-awareness of its own motion in space using the user’s electronic device (such as the accelerometer thereof) or other ways to determine its location and/or movement, and to send the location and/or movement (including acceleration) data to the gesture recognition server 106, which analyzes the data received from the gambler device 102.
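A self-aware device as described above would buffer its motion samples and package them, together with its identification data, for transmission to the gesture recognition server 106. The following is a hypothetical sketch; the field names and sampling scheme are assumptions for illustration, not part of the disclosure.

```python
import json

# Hypothetical sketch: a self-aware gambler device buffering accelerometer
# samples as a time series and serializing them, with the device's
# identification data, for the gesture recognition server.
class MotionBuffer:
    def __init__(self, device_id):
        self.device_id = device_id
        self.samples = []  # list of (timestamp, ax, ay, az) tuples

    def record(self, t, ax, ay, az):
        """Append one accelerometer reading to the buffer."""
        self.samples.append((t, ax, ay, az))

    def to_payload(self):
        """Serialize the buffered motion data for network transmission."""
        return json.dumps({
            "device_id": self.device_id,
            "samples": [
                {"t": t, "ax": ax, "ay": ay, "az": az}
                for (t, ax, ay, az) in self.samples
            ],
        })

buf = MotionBuffer(device_id="device-123")
buf.record(0.00, 0.1, 9.8, 0.0)
buf.record(0.02, 0.3, 9.7, -0.1)
payload = json.loads(buf.to_payload())
print(len(payload["samples"]))  # 2
```

The device identification carried in the payload is what later allows feedback (odds, confirmations, results) to be routed back to the correct user account.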

[0058] The gambler device 102 may be embodied as a smartphone, hand-held unit, wireless device, client computer accessing an internet portal, client computer accessing an intranet portal, a device enabled for network access, RFID tag (passive or active, with appropriate reader), Ultrasound Identification (US-ID), Ultrasonic ranging (US-RTLS, which further has the advantage of measuring a distance to aid in the 3D motion tracking), Ultra-wideband (UWB) or other computing device suitable for communicating with the system, or any combination thereof.

[0059] The at least one gambler device 102 communicates with other components of the system 100, such as the gesture recognition server 106, the betting server 108 or any other gambler device 102, via a communication network 104, such as (but not limited to) a cellular network or the internet, which can be implemented or accessed over a wired or wireless network. If the communication network 104 is embodied as a wireless network, any type of wireless network, such as a wireless personal area network, wireless local area network, cellular network, or any combination of one or more wireless networks may be used. For example, the system 100 may comprise a communication network 104 allowing any gambler device 102 to communicate through any current commercial cellular network, such as, but not limited to, GSM/GPRS and CDMA/1xRTT, or any faster data service that might be available, such as 3G services (namely EDGE, UMTS, HSDPA, EVDO and WCDMA), any 4G LTE and/or 5G network, or any next-generation mobile data transmission network.

[0060] Furthermore, other types of wireless communication network 104 may be used in the system 100, such as, but not limited to, any wireless local area network (such as a WiFi network) and any private cellular network, such as a picocell-type antenna base station configuration or a satellite-based wireless system, or any form of radio frequency (RF), optical (e.g., infrared) or acoustic (e.g., ultrasound) network technology.

[0061] The gesture recognition server 106 is configured to receive and process data, signals, query requests, audio, images, and/or video, and to output any such information as necessary, from any number of sources, such as the gambler device 102 (including a plurality of gambler devices 102 from a corresponding plurality of participants, treated simultaneously) or the communication network 104.

[0062] According to an embodiment, the gesture recognition server 106 comprises a memory for storing a program and data, and a processor for executing the program, where the program comprises a machine learning or deep learning algorithm which is configured to receive raw successive images (the data of which can be formatted in various ways) or raw tracking data corresponding to time series of coordinates in space or in a plane, over time.

[0063] The machine learning algorithm is first trained to receive such data, and during the training phase, is trained to categorize or to classify the inputted data into a given category, where the category or class corresponds to a predefined gesture. Then the machine learning algorithm operated on the gesture recognition server 106 receives the new data and once the new data is inputted, the machine learning algorithm classifies the new data into a corresponding gesture, if any. The machine learning algorithm may be further trained to determine if the new data contain a gesture or do not contain a gesture, to avoid making a classification if there is no identified gesture, and to perform a classification only when a gesture is being identified, with the possibility of triggering an alert of a non-recognized gesture when a gesture is detected but is classified with a low probability in a particular class (gesture identified with low confidence).
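The classify-with-confidence behavior described above can be illustrated with a toy stand-in for the trained model. The following is a minimal sketch only, assuming a nearest-centroid scheme over flattened coordinate time series; the actual disclosure contemplates machine learning or deep learning algorithms, and all names, encodings and the distance threshold here are illustrative assumptions.

```python
import math

# Illustrative stand-in for the trained classifier: each gesture class is
# represented by the mean of its training trajectories, and a distance-based
# threshold lets the server flag low-confidence detections instead of
# forcing a classification.
def centroid(trajectories):
    """Mean trajectory of a list of equal-length flattened trajectories."""
    n = len(trajectories)
    return [sum(t[i] for t in trajectories) / n for i in range(len(trajectories[0]))]

def distance(a, b):
    """Euclidean distance between two flattened trajectories."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify(sample, centroids, max_distance=1.0):
    """Return (label, distance), or (None, distance) when confidence is low."""
    label, best = min(centroids.items(), key=lambda kv: distance(sample, kv[1]))
    d = distance(sample, best)
    return (label, d) if d <= max_distance else (None, d)

# Toy training data with an assumed encoding of "shake" and "circle".
centroids = {
    "shake": centroid([[0, 1, 0, 1], [0, 0.9, 0.1, 1.1]]),
    "circle": centroid([[0, 1, 2, 1], [0.1, 1.1, 1.9, 0.9]]),
}
label, d = classify([0, 1, 0.05, 1.05], centroids)
print(label)  # shake
```

A real deployment would replace this with the trained model, but the control flow (classify only above a confidence level, otherwise raise a non-recognized-gesture alert) is the same.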

[0064] Once a gesture is recognized, appropriate action can be triggered on the gesture recognition server 106, for example initiating an action such as a network communication. In the present example, once the gesture is identified and classified as a particular gesture among a dictionary of training gestures to be identified, the gesture recognition server 106 can initiate the action that corresponds to that particular gesture. When the operator generates a gesture proposition for a bet in an instant betting setting, the gesture proposition should be within the dictionary of training gestures in order to be recognizable in real time. Also, when classifying the gestures of the participants, the server may not need to classify the gestures as belonging to any one of the gestures in the whole dictionary of training gestures; it may rather classify only between the proposed gestures, which are expected to be, for example, between one and four different possible gestures in that circumstance.

[0065] The process may involve making a bet with certain parameters, and therefore the communication includes the instructions for the bet, for example an API request, with all necessary data such as the instructions that correspond to the identified gesture and the personal data of the gambler making the bet using their personal electronic device to which a user account is linked and which includes the necessary information. Implementing the recognition on one server and having another server make the bets ensures that the functions are well separated and can be implemented using separate APIs, such as a REST API.
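The server-to-server bet instruction described above might be shaped as a REST-style JSON request. The endpoint path, field names and gesture-to-wager mapping below are assumptions for illustration, not the patent's actual API.

```python
import json

# Hypothetical sketch of a bet instruction sent from the gesture recognition
# server to the betting server, carrying the classified gesture's wager and
# the gambler's identification data.
def build_bet_request(device_id, user_account, gesture, opportunity_id):
    gesture_to_wager = {"shake": "YES", "circle": "NO"}  # per the example mapping
    if gesture not in gesture_to_wager:
        return None  # PASS (or unrecognized): no bet instruction is sent
    return {
        "endpoint": "/api/v1/bets",  # assumed REST endpoint
        "method": "POST",
        "body": json.dumps({
            "opportunity_id": opportunity_id,
            "user_account": user_account,
            "device_id": device_id,
            "wager": gesture_to_wager[gesture],
        }),
    }

req = build_bet_request("device-123", "user-42", "shake", "opp-7")
print(json.loads(req["body"])["wager"])  # YES
```

Keeping the payload self-describing (opportunity, user, device, wager) is what lets the betting server register the bet and later route the outcome back to the correct device.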

[0066] The gesture recognition server 106 may be configured to communicate with one or more communication networks 104 such as, for example, a dedicated communication network connection, wired connection, wireless connection, Internet, Intranet, WiFi, Bluetooth, ZigBee, LAN, WAN, mobile phone communication network, social communication network, or any other suitable communication systems, or any combination thereof.

[0067] The gesture recognition server 106 is configured to execute one or more computer programs that aggregate the gambler device 102 inputs received through the communication network 104 from various users in a very short period of time, such as within a few seconds, since instant bets are involved. In order for the gambler device 102 to communicate with the at least one gesture recognition server 106 over at least one communication network 104, any suitable, i.e., fast and reliable, communication protocol may be used, such as TCP/IP or Ethernet protocols. The gesture recognition server 106 is further configured to send betting instructions based on the identified and classified gestures (the betting instructions corresponding to the gesture as classified and to the user from which the gesture was identified) to one or more selected betting servers 108 through the communication network 104. The selected betting server 108 typically comprises or integrates a communication device allowing the betting server 108 to be connected to the gesture recognition server 106 through a communication network 104 or combination thereof. The betting server 108 is configured to receive the instructions and register the bet for the gambling game to operate.

[0068] For example, this is summarized in the flowchart of Fig. 2 illustrating a method 200 comprising the steps of:

[0069] Step 202: Receive gambler inputs, i.e., images or tracking data;

[0070] Step 204: Analyze gambler inputs, i.e., analyze images to track specific body parts or preferably a device such as a connected device or a sensor, or analyze directly the tracking data;

[0071] Step 206: Determine from the data if a betting gesture is made, and classify it as being one among a dictionary of gestures (each being associated to a betting instruction);

[0072] Step 208: Confirm Betting Intention, for example by a prompt on the user’s device;

[0073] Step 210: Place the bet instruction by performing a request from one server to another based on the confirmed betting intention in view of the identified gesture and comprising user information for the bet instruction;

[0074] Step 212: Confirm the outcome from the betting server to the user’s device.
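The steps of method 200 can be wired together as plain functions in a minimal end-to-end sketch. Every name, data encoding and classification rule below is an illustrative assumption; in particular, the toy rule in step 206 merely stands in for the trained classifier.

```python
# Minimal sketch of method 200 (Fig. 2), one function per step.
def receive_inputs():                      # Step 202: gambler inputs
    return [(0.0, 0, 1), (0.1, 0, -1), (0.2, 0, 1)]  # toy (t, x, y) tracking data

def analyze_inputs(data):                  # Step 204: extract the trajectory
    return [sample[1:] for sample in data]  # keep only the coordinates

def classify_gesture(track):               # Step 206: classify against dictionary
    # Toy rule standing in for the trained classifier: alternating vertical
    # motion is treated as the "shake" gesture (wager on YES).
    signs = [y > 0 for (_, y) in track]
    return "shake" if signs != sorted(signs) else None

def confirm_intention(gesture):            # Step 208: e.g., on-device prompt
    return gesture is not None

def place_bet(gesture, user_id):           # Step 210: server-to-server request
    return {"user": user_id, "wager": "YES" if gesture == "shake" else "NO"}

def run_method_200(user_id):
    data = receive_inputs()
    track = analyze_inputs(data)
    gesture = classify_gesture(track)
    if confirm_intention(gesture):
        return place_bet(gesture, user_id)  # Step 212: outcome fed back to device
    return None

print(run_method_200("user-42"))
```

The separation into small functions mirrors the disclosure's split between the gesture recognition server (steps 202-208) and the betting server (steps 210-212).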

[0075] It is now referred to another exemplary use of the method, in the context of brand recognition in gestures, for example in promotional campaigns or sponsored events, especially in relation with Figs. 3-4.

[0076] Fig. 3 is analogous to Fig. 1, but in the case of this particular example. The participant device 302 is analogous to the gambler device 102 and can be embodied by similar devices and work in a similar fashion to collect movement-related data, such as a change of location over time or a series of instantaneous values for acceleration, for example. Live, incoming data is communicated, inputted and used from the smart device sensors: accelerometer, gyroscope, magnetometer, gravity, device motion, ambient light detector, proximity detector, etc.

[0077] The gesture recognition server 306 works in a manner analogous to the gesture recognition server 106 described above, and communication of data is also performed similarly.

[0078] In this example, the 3D gesture recognition technology, based on artificial intelligence (such as, without limitation, a trained neural network), supports the creation of a 3D brand gesture, manifested by a person moving his/her smartphone and/or wearable IoT device (or any other suitable sensor or device, or properly filmed body part) in the air, their movement (gesture) reproducing the shape of said brand in 3D so as to be perceived as such by other people. People can witness 3D brand gesture recognition in person, live on camera remotely (e.g., Facebook Live) or post-performance via recording (e.g., on TikTok, YouTube or Instagram).

[0079] This can involve participants to a live event, in person or from afar (home, sports bar, etc.). They bring their own device (smartphone, smartwatch, smart sneakers, smart clothes, etc.) or they are provided with one (RFID bracelet, connected LED light stick, etc.). A shape that participants draw in the air, in 3D, using their device is then recognized: capital “C” for Coca-Cola, the Pepsi wave, the Red Bull chug (drinking gesture with the device), the Nike “swoosh”, etc.

[0080] According to an embodiment, the machine learning or deep learning algorithms on the gesture recognition server first identify the 3D gesture done by the participant (e.g., he/she is doing a Nike “swoosh”). This can be done in two ways: is the person trying to do a Nike “swoosh” - answer: true or false; or, which 3D gesture is that person doing right now (given a “vocabulary” of possible 3D gestures) - answer: Nike “swoosh”.

[0081] Then, the 3D gesture can also be positively matched with the person executing it (the deep learning algorithms can distinguish between people doing the same gesture using the same sensor data).
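The two recognition modes just described (a binary true/false check for one target gesture versus a multiclass choice over a vocabulary) can be sketched as follows. The scoring function is a toy stand-in for the deep learning model, and the gesture templates are assumed encodings for illustration only.

```python
# Sketch of the two recognition modes: binary ("is this a swoosh?") and
# multiclass ("which gesture in the vocabulary is this?").
def score(track, template):
    """Toy similarity score: higher means closer (stand-in for a model)."""
    return -sum((a - b) ** 2 for a, b in zip(track, template))

VOCABULARY = {               # assumed flattened gesture templates
    "swoosh": [0, 1, 2, 4],
    "wave":   [0, 2, 0, 2],
    "C":      [2, 0, 0, 2],
}

def is_gesture(track, name, threshold=-1.0):
    """Binary mode: true/false for a single target gesture."""
    return score(track, VOCABULARY[name]) >= threshold

def which_gesture(track):
    """Multiclass mode: pick the best match in the vocabulary."""
    return max(VOCABULARY, key=lambda name: score(track, VOCABULARY[name]))

track = [0, 1.1, 2.0, 3.9]
print(is_gesture(track, "swoosh"))  # True
print(which_gesture(track))         # swoosh
```

The binary mode suits a sponsored campaign targeting one brand gesture, while the multiclass mode suits events where several brand gestures are valid at once.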

[0082] According to an embodiment, with the brand gesture, some kind of reward is minted for the participant associated with the performance of said 3D gesture: virtual points, loyalty points, a QR code to claim a reward (for example, a can of Red Bull), or a non-fungible token (written on the blockchain). This can be embodied by a reward server 308 which receives an instruction from the gesture recognition server 306 that a specific gesture was identified and is associated with a reward. The instruction should comprise identification information of the participant.

[0083] A communication network 304, analogous to the communication network 104, is used for the participant device 302 to communicate data to the gesture recognition server 306, for the gesture recognition server 306 to send instructions to the reward server 308, and for the reward server 308 to give feedback (such as the attribution of a reward) to the participant device 302 linked with the user account associated to the instruction of reward.

[0084] For example, this is summarized in the flowchart of Fig. 4 illustrating a method 400 comprising the steps of:

[0085] Step 402: Receive participant inputs from a user device (smartphone, IoT device, etc.) which is self-aware of its motion or tracked by a nearby suitable detector;

[0086] Step 404: Analyze participant inputs for optional determination that a gesture is being made in the data fed to the gesture recognition server 306 and formatting for machine learning or deep learning algorithm;

[0087] Step 406: Gesture recognized (classified) by the machine learning or deep learning algorithm as being among a predefined dictionary of gestures to be recognized, optionally with a constraint on the possible gestures in a given context to aid in the accuracy and rapidity of the classification;

[0088] Step 408: Reward participant using a reward associated with the identified one of the dictionary of gestures to be recognized. This can include a monetary reward or other in-app reward to be sent by the reward server 308 to an application on the participant device 302.
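The reward step above can be sketched as a small minting routine on the reward server 308. The reward table, point values and claim-code scheme below are hypothetical assumptions for illustration.

```python
import uuid

# Illustrative sketch of the reward server's role: once the gesture
# recognition server reports an identified brand gesture for a participant,
# a reward record (points plus a claim code, e.g., encoded as a QR code)
# is minted and tied to the participant's account.
REWARDS = {"swoosh": {"points": 50}, "wave": {"points": 20}}  # assumed table

def mint_reward(participant_id, gesture):
    """Mint a reward for a recognized gesture, or None if no reward applies."""
    if gesture not in REWARDS:
        return None
    return {
        "participant_id": participant_id,
        "points": REWARDS[gesture]["points"],
        "claim_code": str(uuid.uuid4()),  # e.g., rendered as a QR code
    }

reward = mint_reward("user-42", "swoosh")
print(reward["points"])  # 50
```

The participant identification carried in the record is what allows the reward server 308 to send the feedback back to the correct participant device 302, as described in paragraph [0083].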

[0089] While preferred embodiments have been described above and illustrated in the accompanying drawings, it will be evident to those skilled in the art that modifications may be made without departing from this disclosure. Such modifications are considered as possible variants comprised in the scope of the disclosure.