Title:
GESTURE AND SHOW CONTROL SYSTEM FOR PUPPETED WALK-AROUND CHARACTER PERFORMERS
Document Type and Number:
WIPO Patent Application WO/2023/183947
Kind Code:
A2
Abstract:
A gesture and show control system for a puppeted walk-around character performer used for engaging directly with guests. The system has power management; a costume and show controller comprising one or more than one processor that has storage for storing instructions executable on the one or more than one processor for operating and controlling the puppet; an audio subsystem; one or more than one control sensor; one or more than one visual indicator; and one or more than one glove control operably connected to the costume and show controller.

Inventors:
BEAUDRY DAVID (US)
Application Number:
PCT/US2023/064972
Publication Date:
September 28, 2023
Filing Date:
March 26, 2023
Assignee:
BEAUDRY DAVID (US)
International Classes:
A63J19/00
Attorney, Agent or Firm:
VAN TREECK, Norman (US)
Claims:
CLAIMS

What Is Claimed Is:

1. A gesture and show control system for a puppeted walk-around character performer used for engaging directly with guests, the system comprising: a) power management; b) a costume and show controller operably connected to the power management; c) an audio subsystem operably connected to the power management and the costume and show controller; d) one or more than one control sensor operably connected to the costume and show controller; e) one or more than one visual indicator operably connected to the costume and show controller; and f) one or more than one on-performer control operably connected to the costume and show controller.

2. The system of claim 1, wherein the power management comprises components to operate the in-costume and on-performer electronics, wherein the components are selected from the group consisting of: a battery, a power switch, or a converter.

3. The system of claim 1, wherein the gesture and show controller comprises one or more than one processor that has storage for storing instructions executable on the one or more than one processor for operating and controlling the puppet, wherein the instructions perform operations including: a) retrieving a time from an internal clock; b) determining a beginning of data processing; c) reading sensor data from one or more than one sensor input; d) determining a new scene number if a new state exists; e) inputting one or more than one scene modifier and one or more than one beat modifier for selecting the next sound bite; f) inputting one or more than one stored previous cue modifier for selecting the next sound bite; g) inputting one or more than one nearby character modifier for selecting the next sound bite; h) inputting one or more than one nearby character identification, if there is one or more than one different character nearby, into the one or more than one nearby character modifier; i) inputting one or more than one seasonal modifier for selecting the next sound bite; j) pre-processing gesture sensor data from the one or more than one sensor; k) selecting a gesture from a gesture storage by:

1) determining if a gesture is active;

2) determining if a performer has met all conditions for the gesture;

3) determining if a mouth actuator is engaged;

4) progressing to the next gesture in gesture storage if none of the above conditions are met; l) determining from the selected gesture, a playback status, and a lockout status if a next cue is found for selecting the next sound; and m) playing back the selected sound.

4. The system of claim 1, wherein the audio subsystem comprises one or more than one audio amplifier, and optionally one or more than one audio transceiver, so that the puppet can interact with guests at a venue.

5. The system of claim 1, wherein one or more than one control sensor comprises: a) one or more than one switch operably connected to the gesture and show controller; b) one or more than one sensor operably connected to the gesture and show controller; and c) one or more than one actuator and actuator sensor operably connected to the gesture and show controller to make the puppet appear more lifelike to the guests at a venue.

6. The system of claim 1, wherein one or more than one visual indicator is an RGB LED light that displays a state of the system to the performer.

7. The system of claim 6, wherein the one or more than one visual indicator is a unique color to differentiate each scene, each beat, and system status to the performer.

8. The system of claim 1, wherein one or more than one glove control comprises one or more than one finger bend sensor and an inertial measurement unit for determining performer movements, gestures, and poses.

9. The system of claim 1, further comprising in-costume hardware, wherein the in-costume hardware comprises: a) one or more than one speaker operably connected to the audio subsystem; b) a mouth actuator and actuator sensor; c) a scene selector; and d) one or more than one on-performer control.

10. The system of claim 9, wherein the one or more than one speaker is used to play the selected audio to the guests.

11. The system of claim 9, wherein one or more than one visual indicator is used to indicate to the performer that a gesture or audio playback has been completed, or that a gesture or audio playback is able to be used.

12. The system of claim 9, wherein the mouth actuator is used to open the puppet’s mouth, so that the audio playback can match the mouth movement.

13. The system of claim 9, wherein the scene selector is used by the performer in show navigation to advance to a next scene or to a previous scene.

14. The system of claim 9, wherein the one or more than one on-performer control is used by the performer in combination with the mouth actuator sensor to activate sound playback to guests.

15. The system of claim 1, wherein the one or more than one sensor, the one or more than one microcontroller, one or more than one on-board audio amplifier and one or more than one speaker enable the performer to trigger sound bites using gestures and make the system entirely self-contained within the costume/puppet.

16. The system of claim 1, wherein the system further comprises an area wide show control, the area wide show control comprising a gesture and show controller and a wireless radio.

17. The system of claim 16, wherein the wireless radio provides network connectivity to one or more than one wired, wireless or both wired and wireless network and external control systems.

18. The system of claim 17, wherein the network connectivity is used for triggering environmental cues, area wide audio control cues, theatrical lighting control cues, special effects cues, animatronic effects control cues, and show and environmental related cues and sequences.

19. The system of claim 18, wherein the network connectivity alerts the gesture and show controller of other characters in close proximity and transmits the other characters’ identification to the gesture and show controller for processing.

20. A computer-implemented method for gesture and show control system for a puppeted walk-around character performer used for engaging directly with guests, the method comprising the steps of: a) retrieving a time from an internal clock; b) determining a beginning of data processing; c) reading sensor data from one or more than one sensor input; d) determining a new scene number if a new state exists; e) inputting one or more than one scene modifier and one or more than one beat modifier for selecting the next sound bite; f) inputting one or more than one stored previous cue modifier for selecting the next sound bite; g) inputting one or more than one nearby character modifier for selecting the next sound bite; h) inputting one or more than one nearby character identification, if there is one or more than one different character nearby, into the one or more than one nearby character modifier; i) inputting one or more than one seasonal modifier for selecting the next sound bite; j) pre-processing gesture sensor data from the one or more than one sensor; k) selecting a gesture from a gesture storage by:

1) determining if a gesture is active;

2) determining if a performer has met all conditions for the gesture;

3) determining if a mouth actuator is engaged;

4) progressing to the next gesture in gesture storage if none of the above conditions are met; l) determining from the selected gesture, a playback status, and a lockout status if a next cue is found for selecting the next sound; and m) playing back the selected sound.

Description:
Gesture and Show Control System For Puppeted Walk-Around Character Performers

CROSS-REFERENCE TO RELATED APPLICATIONS

[001] This Application claims the benefit under 35 U.S.C. § 119(e) of U.S. Provisional Patent Application Ser. No. 63/323,574, filed on March 25, 2022, the contents of which are incorporated herein by reference in their entirety.

FIELD OF THE INVENTION

[002] The present invention is in the technical field of gesture systems and show control systems, and more particularly to a gesture and show control system for a puppeted walk-around character performer used for engaging directly with guests.

BACKGROUND

[003] One of the hallmarks of a theme park is getting to meet your favorite characters in real life. The challenge with most walk-around characters is that they can’t talk. Guests are excited to meet and take pictures with the characters; however, the experience is left a little empty without any verbal exchange. It can be frustrating for the younger guests who know the character can talk, and extremely limiting for visually impaired guests who rely primarily on sound and touch when meeting others. Additionally, guests expect to hear the authentic voice of their favorite character, a voice that often only one performer can provide; however, that performer or voice actor is almost never in the costume engaging with guests.

[004] Therefore, there is a need for a gesture control system for a puppeted walk-around character performer for engaging directly with guests, overcoming the limitations of the prior art.

BRIEF DESCRIPTION OF THE DRAWINGS

[005] The present invention overcomes the limitations of the prior art by providing a gesture control system for a puppet and a performer for engaging directly with guests.

[006] These and other features, aspects and advantages of the present invention will become better understood with regard to the following description, appended claims, and accompanying figures where:

[007] FIG. 1 is a system diagram of a gesture and show control system for a puppeted walk-around character performer used for engaging directly with guests, according to one embodiment of the present invention;

[008] FIG. 2 is a flowchart diagram of some steps of a computerized software method for event and sensor processing of the system of FIG. 1;

[009] FIG. 3 is a workflow diagram of a greeting scene beat and a question and answer scene beat used by a puppeted walk-around character and a performer;

[0010] FIG. 4 is a workflow diagram showing a photograph opportunity scene beat used by a puppeted walk-around character and a performer;

[0011] FIG. 5 is a workflow diagram showing a goodbye scene beat used by a puppeted walk-around character and a performer;

[0012] FIG. 6 is an area wide show control block diagram for the system of FIG. 1;

[0013] FIG. 7 is a front view of in-costume hardware required for the system of FIG. 1;

[0014] FIG. 8 is a rear view of in-costume hardware required for the system of FIG. 1;

[0015] FIG. 9 is a diagram of gesture control system components that are on-performer showing inertial measurement unit placement useful for the system of FIG. 1;

[0016] FIG. 10 is a diagram of an on-performer gesture glove showing finger bend sensor and inertial measurement unit placement useful for the system of FIG. 1.

SUMMARY

[0017] The present invention overcomes the limitations of the prior art by providing a gesture and show control system for a puppeted walk-around character performer used for engaging directly with guests. The system has power management; a costume and show controller operably connected to the power management; an audio subsystem operably connected to the power management and the costume and show controller; one or more than one control sensor operably connected to the costume and show controller; one or more than one visual indicator operably connected to the costume and show controller; and one or more than one on-performer control operably connected to the costume and show controller. The system enables the performer to trigger sound bites using gestures and makes the system entirely self-contained within the costume/puppet. The power management comprises components to operate the in-costume and on-performer electronics, where the components are selected from the group consisting of a battery, a power switch, or a converter.

[0018] The gesture and show controller has a processor with storage and computer instructions for operating and controlling the puppet. The instructions perform operations including: retrieving a time from an internal clock; determining a beginning of data processing; reading sensor data from one or more than one sensor input; determining a new scene number if a new state exists; inputting one or more than one scene modifier and one or more than one beat modifier for selecting the next sound bite; inputting one or more than one stored previous cue modifier for selecting the next sound bite; inputting one or more than one nearby character modifier for selecting the next sound bite; inputting one or more than one nearby character identification, if there is one or more than one different character nearby, into the one or more than one nearby character modifier; inputting one or more than one seasonal modifier for selecting the next sound bite; pre-processing gesture sensor data from the one or more than one sensor; selecting a gesture from a gesture storage; determining from the selected gesture, a playback status, and a lockout status if a next cue is found for selecting the next sound; and playing back the selected sound.

[0019] To select a gesture from a gesture storage the system determines if a gesture is active; determines if a performer has met all conditions for the gesture; determines if a mouth actuator is engaged; and progresses to the next gesture in gesture storage if none of the above conditions are met.

[0020] The audio subsystem has an audio amplifier, and optionally one or more than one audio transceiver, so that the puppet can interact with guests at a venue.

[0021] The control sensors have switches, sensors, and actuator sensors connected to the gesture and show controller to make the puppet appear more lifelike to the guests at a venue.

[0022] The visual indicators can be RGB LED lights that display a state of the system to the performer. Each visual indicator has a unique color to differentiate each scene, each beat, and system status to the performer.

[0023] The on-performer control comprises finger bend sensors and an inertial measurement unit for determining performer movements, gestures, and poses.

[0024] The system also has in-costume hardware. The in-costume hardware comprises speakers; visual indicators; a mouth actuator; an actuator sensor; a scene selector; and a gesture glove. The speakers are used to play selected audio to the guests. The visual indicators are used to indicate to the performer that a gesture or audio playback has been completed, or that a gesture or audio playback is able to be used. The mouth actuator is used to open the puppet’s mouth, so that the audio playback can match the mouth movement. The scene selector is used by the performer in show navigation to advance to a next scene or to a previous scene. The gesture glove is used by the performer, in combination with the mouth actuator sensor, to activate sound playback to guests.

[0025] The system further comprises an area wide show control, the area wide show control comprising a gesture and show controller and a wireless radio. The wireless radio provides network connectivity to one or more than one wired, wireless or both wired and wireless network and external control systems. The network connectivity is used for triggering environmental cues, area wide audio control cues, theatrical lighting control cues, special effects cues, animatronic effects control cues, and show and environmental related cues and sequences. The network connectivity also alerts the gesture and show controller of other characters in close proximity and transmits the other characters’ identification to the gesture and show controller for processing.

[0026] There is also provided a computer-implemented method for a gesture and show control system for a puppeted walk-around character performer used for engaging directly with guests. The method comprises the steps of: retrieving a time from an internal clock; determining a beginning of data processing; reading sensor data from one or more than one sensor input; determining a new scene number if a new state exists; inputting one or more than one scene modifier and one or more than one beat modifier for selecting the next sound bite; inputting one or more than one stored previous cue modifier for selecting the next sound bite; inputting one or more than one nearby character modifier for selecting the next sound bite; inputting one or more than one nearby character identification, if there is one or more than one different character nearby, into the one or more than one nearby character modifier; inputting one or more than one seasonal modifier for selecting the next sound bite; pre-processing gesture sensor data from the one or more than one sensor; selecting a gesture from a gesture storage by: determining if a gesture is active; determining if a performer has met all conditions for the gesture; determining if a mouth actuator is engaged; progressing to the next gesture in gesture storage if none of the above conditions are met; determining from the selected gesture, a playback status, and a lockout status if a next cue is found for selecting the next sound; and playing back the selected sound.

DETAILED DESCRIPTION OF THE INVENTION

[0027] The present invention overcomes the limitations of the prior art by providing a gesture control system for a puppeted walk-around character performer for the purpose of engaging directly with guests with authentic voice recordings of the character voice talent.

[0028] The goal of this invention is to enhance exchanges with guests by giving the performers within these costumes, such as, for example, Big Bird, the ability to “talk” to guests by triggering in-the-moment dialogue that is aesthetically appropriate for the current situation. This can be in a typical “meet and greet” experience, a live show, or even walking around the environment, completely untethered.

[0029] Through easy gestures and poses, the performer can cue and trigger pre-recorded sound bites in the moment that are relevant and conversational. With this system, the performer within the costume does not need to follow a prescribed cue list. Similar gestures and poses trigger different sound bites based not only on scene and show beats, such as meet and greets, photograph opportunities, or going on a park “walkabout”, but also on the previously triggered cue. For example, if a performer triggered a more solemn reply in response to a guest's action, the next set of cue options would be toned to match.

[0030] The controls and gestures are designed to be simple and natural to the character’s mannerisms, especially given the often limited mobility within the costume. These are also tuned to the character, the physical costume and puppeting mechanics, and the physical limitations of the performer while in the costume. These gestures range from, but are not limited to, simple hand waves to merely striking a pose. Key to the exchange with the guest is that nothing is spoken until the performer begins to puppet the character’s mouth using a mouth actuator 706. Tying the triggering of the sound bite to the mechanical movement of the character’s mouth provides both a safety net so that the performer will not accidentally trigger an unintentional sound bite, and helps ensure synchronization between the spoken dialog and the mouth movement at the start of the sound bite.

[0031] All electronics required for the gesture and show control system are within the costume. This includes battery power and distribution, microcontrollers, sensors, wireless transceivers, and sound amplification. For in-costume sound amplification, a key component to verbal exchanges with the guest, placement of speakers at or near the character’s mouth is paramount, making it sound like the character is speaking directly with the guest. With all the required equipment embedded in the costume or on the performer, the costumed performer is freely able to walk around and interact with guests anywhere within the environment, untethered from any physical or virtual control systems. However, guest exchanges with characters can be enhanced with wireless communication to and from the in-costume gesture and show control system. When within range, wireless signals can give the in-costume system the ability to interact with the surrounding environment, triggering additional off-costume sound, lighting, or other theatrical cues and effects. The system also allows wireless communication between characters, giving the gesture and show control system the ability to provide alternative sound bites based on other activated nearby characters.

[0032] All dimensions specified in this disclosure are by way of example only and are not intended to be limiting. Further, the proportions shown in these figures are not necessarily to scale. As will be understood by those with skill in the art with reference to this disclosure, the actual dimensions and proportions of any system, any device or part of a system or device disclosed in this disclosure will be determined by its intended use.

[0033] Methods and devices that implement the embodiments of the various features of the invention will now be described with reference to the drawings. The drawings and the associated descriptions are provided to illustrate embodiments of the invention and not to limit the scope of the invention. Reference in the specification to “one embodiment” or “an embodiment” is intended to indicate that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least an embodiment of the invention. The appearances of the phrase “in one embodiment” or “an embodiment” in various places in the specification are not necessarily all referring to the same embodiment.

[0034] Throughout the drawings, reference numbers are re-used to indicate correspondence between referenced elements. In addition, the first digit of each reference number indicates the figure where the element first appears.

[0035] As used in this disclosure, except where the context requires otherwise, the term “comprise” and variations of the term, such as “comprising”, “comprises” and “comprised” are not intended to exclude other additives, components, integers or steps.

[0036] In the following description, specific details are given to provide a thorough understanding of the embodiments. However, it will be understood by one of ordinary skill in the art that the embodiments may be practiced without these specific details. Well-known circuits, structures and techniques may not be shown in detail in order not to obscure the embodiments. For example, circuits may be shown in block diagrams in order not to obscure the embodiments in unnecessary detail.

[0037] Also, it is noted that the embodiments may be described as a process that is depicted as a flowchart, a flow diagram, a structure diagram, or a block diagram. The flowcharts and block diagrams in the figures can illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer programs according to various embodiments disclosed. In this regard, each block in the flowchart or block diagrams can represent a module, segment, or portion of code, that can comprise one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be rearranged. A process is terminated when its operations are completed. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc. When a process corresponds to a function, its termination corresponds to a return of the function to the calling function or the main function. Additionally, each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.

[0038] Moreover, a storage may represent one or more devices for storing data, including read-only memory (ROM), random access memory (RAM), magnetic disk storage mediums, optical storage mediums, flash memory devices and/or other non-transitory machine-readable mediums for storing information. The term "machine readable medium" includes, but is not limited to portable or fixed storage devices, optical storage devices, wireless channels and various other non-transitory mediums capable of storing, comprising, containing, executing or carrying instruction(s) and/or data.

[0039] Furthermore, embodiments may be implemented by hardware, software, firmware, middleware, microcode, or a combination thereof. When implemented in software, firmware, middleware or microcode, the program code or code segments to perform the necessary tasks may be stored in a machine-readable medium such as a storage medium or other storage(s). One or more than one processor may perform the necessary tasks in series, distributed, concurrently or in parallel. A code segment may represent a procedure, a function, a subprogram, a program, a routine, a subroutine, a module, a software package, a class, or a combination of instructions, data structures, or program statements. A code segment may be coupled to another code segment or a hardware circuit by passing and/or receiving information, data, arguments, parameters, or memory contents. Information, arguments, parameters, data, etc. may be passed, forwarded, or transmitted through a suitable means including memory sharing, message passing, token passing, network transmission, etc. and are also referred to as an interface, where the interface is the point of interaction with software, or computer hardware, or with peripheral devices.

[0040] In the following description, certain terminology is used to describe certain features of one or more embodiments of the invention.

[0041] The term “puppet” refers to any full body or larger than full body character that requires the performer/controller to be inside the character.

[0042] The term “beat” refers to one or more than one subdivision in action within a scene.

[0043] Various embodiments provide a gesture control system for a puppet and a performer. In another embodiment, there is provided a method for using the system. The system and method will now be disclosed in detail.

[0044] Referring now to FIG. 1, there is shown a diagram of a gesture and show control system 100 for a puppeted walk-around character performer used for engaging directly with guests. As can be seen, the system 100 comprises: power management 102, a costume and show controller 104, an audio subsystem 106, one or more than one control sensor 108, one or more than one visual indicator 110 and one or more than one glove control 112.

[0045] The power management 102 comprises the necessary components, such as, for example, a battery, a power switch and a converter, to operate the in-costume and on-performer electronics.

[0046] The costume and show controller 104 comprises one or more than one processor that has storage for storing instructions executable on the one or more than one processor for operating and controlling the puppet.

[0047] The audio subsystem 106 comprises one or more than one audio amplifier and optionally one or more than one audio transceiver. The audio subsystem 106 is used so that the puppet can interact with guests at a venue, using the correct voice or sounds that the character is famous for, including any catch phrases.

[0048] The one or more than one performer control sensor 108 comprises a variety of switches, sensors and actuators to make the puppet appear more lifelike to the guests at the venue.

[0049] Inside the puppet costume there is one or more than one visual indicator 110, such as, for example, RGB LED lights, that help the performer know what scene they are in, what beat they are in, whether a motion gesture was just performed, and whether a sound bite is playing. Normally, the one or more than one visual indicator is positioned at the edge of the performer’s peripheral vision to provide a quick reference when needed.

[0050] Each of the one or more than one visual indicator 110 is a unique color to differentiate each scene and each beat. The active scene or beat is a third color. All of the one or more than one visual indicator 110 are lit to make it easier for the performer to determine the scene or beat number. Brightness of the one or more than one visual indicator 110 is set to provide enough feedback to the performer without creating too much light inside the costume.

[0051] A single visual indicator 110 in the middle serves multiple functions. It will light up one defined color when a motion gesture has been performed, another unique color when a sound bite is actively playing, and a third unique color when the performer has “muted” the system.
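
By way of illustration only, the central status LED behavior described above could be modeled as follows. This is a minimal Python sketch; the state names, RGB values, and priority ordering are assumptions made for this example and are not specified in the patent.

# Hypothetical color scheme for the central status indicator 110.
STATUS_COLORS = {
    "muted": (0, 0, 255),               # performer has "muted" the system
    "sound_playing": (255, 0, 0),       # a sound bite is actively playing
    "gesture_recognized": (0, 255, 0),  # a motion gesture was just performed
    "idle": (0, 0, 0),
}

def central_led_color(gesture_recognized, sound_playing, muted):
    # Priority is an assumption: mute and playback outrank a recognized gesture.
    if muted:
        return STATUS_COLORS["muted"]
    if sound_playing:
        return STATUS_COLORS["sound_playing"]
    if gesture_recognized:
        return STATUS_COLORS["gesture_recognized"]
    return STATUS_COLORS["idle"]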

[0052] While a sound bite is playing, no other gesture will be recognized by the system 100, so the performer is free to emote as much as they want. This is handy for the pose gestures that are triggered by the mouth actuator 706, since the mouth actuator 706 will be in continuous motion while the performer puppets to the triggered dialog.

[0053] Also, during all scenes, a “double-OK” gesture mutes all gestures and poses, meaning the performer cannot trigger a sound bite with the mouth actuator 706. This is added to give the performer a chance to open and close the mouth without triggering a sound bite, often a desired look for photograph moments. When muted, the central LED lights up a unique, identifiable color.

[0054] In this embodiment, the performer navigates from scene to scene using a rocker switch on the mouth actuator 706. Once in a specific scene or beat, only certain gestures are active.

[0055] Within the costume, one or more than one visual indicator 704 indicates what beat the performer is currently in or has just triggered; for example, a first visual indicator 704 is for the greeting show beat, a second visual indicator 704 is for the question and answer show beat, and so on.

[0056] The one or more than one glove control 112 provides extra capabilities for the performer to actuate more gestures and audio clips without overwhelming the performer. The one or more than one glove control 112 are discussed in more detail below.

[0057] Referring now to FIG. 2, there is shown a flowchart diagram of some steps of a computerized software method for event and sensor processing of the system 100. The computerized software method determines whether or not the performer has performed a pose or motion gesture, whether a sound bite can be triggered, and what specific sound bite can be played if allowed. The computerized software method comprises the following steps. First, a time is retrieved from an internal clock 202. Then, the time is used to determine the beginning of data processing 204. Next, sensor data is read 206 from sensor input 208, and the sensor data is parsed and formatted for further computation 210. Then, it is determined if a new state 212 has occurred for any of the sensor data 210. Next, the current scene number is updated if a new state 212 exists. Then, the current sound bite to be triggered 228 is determined. Next, one or more than one scene modifier 216 and one or more than one beat modifier are input for selecting the next sound bite 228. Then, one or more than one stored previous cue modifier 220 is input for selecting the next sound bite 228. Next, one or more than one nearby character modifier 222 is input for selecting the next sound bite 228. Then, if there is one or more than one different character nearby, one or more than one nearby character identification 224 is input into the one or more than one nearby character modifier 222. Next, one or more than one seasonal modifier 226 is input for selecting the next sound bite 228. Internal counters within 228 determine the final sound bite variation for playback for each gesture type when triggered by the performer, once all conditions for sound bite selection 228 have been determined. Then, gesture sensor data is pre-processed 232 from the one or more than one sensor 210. Next, a gesture is selected from a gesture storage 233 by: first, determining if a gesture is active 236. Then, if the gesture in the previous step is not active, selecting a next gesture 234. Next, if the gesture in the previous step is active, determining if a performer has met all conditions for the gesture 238. Then, if the performer has not met all conditions for the gesture, selecting the next gesture 234. Next, if the performer has met all conditions for the gesture, determining if a mouth actuator 706 is engaged 240. Then, if the mouth actuator 706 is not engaged, selecting the next gesture 234. Next, if the mouth actuator is engaged, determining from the selected gesture 233 a playback status 242 and a lockout status 244. Then, if both the playback status 242 and the lockout status 244 are false 246, a sound file is selected as the next sound 228 for the selected gesture 233. Finally, the selected sound is played back 230.
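
To make the flow concrete, here is a heavily condensed, hypothetical Python sketch of the FIG. 2 loop. Every helper and field name is an assumption invented for illustration; the patent specifies the flowchart, not an implementation.

import time

def read_sensors():
    # Stand-in for sensor input 208; a real build would read the IMU,
    # bend sensors, scene selector, and mouth actuator here.
    return {"scene_step": 0, "mouth_engaged": False, "gesture": None}

def select_sound_bite(gesture, scene, beat, previous_cue, nearby_id, season):
    # Stand-in for sound bite selection 228 and its modifiers 216-226.
    return f"{scene}-{beat}-{gesture}-{season or 'base'}.wav"

def main_loop():
    scene, beat, previous_cue, nearby_id, season = 1, 1, None, None, None
    playing_until = 0.0
    while True:
        now = time.monotonic()                 # 202: retrieve time from internal clock
        data = read_sensors()                  # 206-210: read, parse, and format
        if data["scene_step"]:                 # 212-214: new state -> new scene number
            scene = max(1, scene + data["scene_step"])
        gesture = data["gesture"]              # 232-240: pre-processed, selected gesture
        playback_active = now < playing_until  # 242: playback status
        locked_out = False                     # 244: lockout status (e.g. double-OK mute)
        if gesture and data["mouth_engaged"] and not playback_active and not locked_out:
            cue = select_sound_bite(gesture, scene, beat, previous_cue, nearby_id, season)
            previous_cue = cue                 # 220: stored previous cue modifier
            playing_until = now + 2.0          # 230: play back the selected sound
        time.sleep(0.01)                       # run the loop at a fixed rate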

[0058] The following gestures stored in gesture storage 233 are just examples of some of the capabilities of the invention. As will be understood by those with skill in the art with reference to this disclosure, other gestures are possible based on the selected character, the performer’s motion ranges, and other physical and aesthetic factors. Below is a list of gestures stored in the gesture storage 233:

[0059] 1. Hand up pose

[0060] Using horizontal as the 0° angle, with positive angle values towards the sky and negative angle values towards the ground, the performer holds their hand/forearm at a positive angle and squeezes a mouth actuator 706. The performer’s hand is to be open and relaxed.

[0061] 2. Hand down pose

[0062] Using the same angle reference above, the performer holds their hand/forearm at a negative angle and squeezes the mouth actuator 706. The performer’s hand is to be open and relaxed.

[0063] 3. Side-to-side hand wave motion

[0064] With the performer’s palm facing out to the guest, the performer rotates their hand side to side at the wrist. The visual indicator 110 within the costume will light up informing the performer that the gesture was recognized.

[0065] 4. Bent finger wave motion

[0066] With the performer’s palm facing out to the guest, the performer bends all their fingers (minus the thumb) several times, bending fingers at a rate of 2-3 times a second. The visual indicator 110 within the costume will light up informing the performer that the gesture was recognized. The performer will have 1-1.5 seconds to squeeze the mouth actuator 706 and trigger the gesture.

[0067] 5. Make a fist pose

[0068] The performer makes a fist and squeezes the mouth actuator 706.

[0069] 6. Thumbs up pose

[0070] For this pose, the performer has their fingers closed, thumb up, palm perpendicular to the ground, and forearm pointed up (positive angle relative to the ground).

[0071] 7. Index finger point (one-finger point) pose

[0072] With the performer’s hand in any position, the performer points with their index finger, bending all other fingers, and squeezing the mouth actuator 706. Variations on this pose gesture include the forearm tilted upwards or downwards, and are often used to modify sound bite selection.

[0073] 8. Index and middle finger point (two-finger point)

[0074] With the performer’s hand in any position, the performer points with their index and middle fingers, bending their ring and pinky fingers, and squeezing the mouth actuator 706. Variations on this pose gesture include the forearm tilted upwards or downwards, and are used to modify sound bite selection.

[0075] 9. OK pose

[0076] The performer makes the OK sign bending just their index finger, being sure to keep other fingers straight.

[0077] 10. Double-OK pose

[0078] The performer makes a double-OK sign bending their index and middle finger, being sure to keep their other fingers straight.

[0079] This collection of gestures stored in the system 100, along with squeezing the mouth actuator 706, is what the performer uses to trigger the sound bites. As expected, there are many more sound bites than gestures. What sound cue gets triggered when the actuator sensor is engaged is governed by the computerized software portion of the system 100. This is determined by what scene, what beat, the previously triggered sound bite, nearby characters, and other factors. It is the gesture and show controller 104 that gives context to each gesture.
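
The gating rules that decide whether a gesture may fire, given in claim 3(k) and restated here, can be sketched directly. In this minimal Python illustration, the dictionary keys ("active", "conditions_met") are invented names standing in for whatever per-gesture state a real implementation would keep.

def select_gesture(gestures, mouth_engaged):
    # Walk the gesture storage in order, per claim 3(k).
    for gesture in gestures:
        if not gesture["active"]:            # 1) is the gesture active?
            continue
        if not gesture["conditions_met"]:    # 2) has the performer met all conditions?
            continue
        if not mouth_engaged:                # 3) is the mouth actuator engaged?
            continue
        return gesture
    return None                              # 4) no gesture qualified; keep progressing

def cue_allowed(gesture, playback_active, locked_out):
    # A next cue is selected only when playback and lockout are both false.
    return gesture is not None and not playback_active and not locked_out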

[0080] The gesture and show controller 104 can be used to trigger sound bites related to a scripted show. Variations in triggered sound bites can still exist in these types of scenes. However, the sequence of events is more linear versus a more typical non-linear exchange with a guest. To enhance the quality of the experience, the character’s in-costume audio is often additionally routed to a wireless audio transmitter within the costume. This audio signal is received by an audio receiver and can be amplified to area speakers in the proximity of the character. With the singular audio signal coming from within the costume for both in-costume amplification and area-wide amplification, audio synchronization is ensured.

[0081] Referring now to FIG. 3, there is shown a workflow diagram 300 of a greeting scene beat and a question and answer scene beat used by a puppeted walk-around character performer within the meet and greet scene. The meet and greet scene 304 is the most complicated, yet most common, of all the scenes walk-around character performers engage in with the guest. This example demonstrates how the gesture and show control system 100 triggers the appropriate in-the-moment dialog when engaging with guests, as well as an example of how gestures can navigate show beats.

[0082] A meet and greet is broken down into four beats: a greeting 304 beat, a question and answer 310 beat, a photograph opportunity 404 beat, and lastly, a goodbye 504 beat. The cycle repeats itself for each guest or group, under the control of the performer inside the costume.

[0083] The following is a breakdown of the beat structure for the meet and greet scene 304. Each beat can be reached by executing the unique gesture defined to initiate that beat. For example, the performer can jump into the photograph opportunity 404 beat by doing the OK sign gesture pose and squeezing the mouth actuator 706. Not only will this trigger the appropriate audio cue, it will move the show control system into the photograph opportunity 404 beat. With this system 100, performers are not required to follow a set order of show beats. For example, the performer could trigger the greeting beat 304, then immediately jump into the photograph opportunity 404 beat if the guest is excited to take a photograph at that moment.

[0084] In a typical meet and greet scene, the performer would return to the greeting beat 304 by performing the side-to-side hand wave gesture 302 while squeezing the mouth actuator 706. The performer can jump around and skip beats as needed. The side-to-side hand wave gesture 302 will always get the performer to the start of the greeting beat 304, the single-finger point 308 will always get the performer to the question and answer beat 310, the thumbs-up will get the performer to the photograph opportunity 404, and the bent finger wave will always get the performer to the goodbye show beat 504.

[0085] Gestures are designed to be natural to the puppeted character and simple for a performer within the costume to execute given the limited mobility within the costume. These range from simple hand waves to merely striking a pose, such as, for example, hand up, hand down, one-finger point, two-finger point, OK sign, etc. No sound bite is triggered until the mouth actuator 706 for the character’s mouth is used to open the mouth. This both provides a safety net, by not accidentally triggering an unintentional sound bite, and ensures synchronization between the spoken dialog and the puppeting of the mouth from the start of the sound bite.

[0086] The following are the gestures currently utilized in this initial system. As will be understood by those with skill in the art, the “pose” gestures just require the performer to match the desired pose. To trigger the gesture the performer squeezes the mouth actuator 706. For “motion” gestures, the performer must first perform the gesture. A visual indicator 110 within the costume will light up when the system 100 has recognized the gesture. With motion gestures, the performer has a set time, approximately 1.0 to 1.5 seconds, to squeeze the mouth actuator 706 to trigger the sound bite associated with the motion gesture and other sound bite modifiers. Once the visual indicator 110 is off, or the sound bite is triggered, the gesture is no longer active.
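
The motion-gesture timing just described can be sketched as a simple timer. This is a hypothetical illustration assuming the 1.5-second upper bound from the text; the class and method names are invented for the example.

import time

WINDOW_SECONDS = 1.5  # upper end of the approximate 1.0-1.5 s window

class MotionGestureWindow:
    def __init__(self):
        self.opened_at = None

    def open(self):
        # Called when a motion gesture is recognized (visual indicator lights up).
        self.opened_at = time.monotonic()

    def try_trigger(self, mouth_engaged):
        # True only when the actuator is squeezed while the window is still open.
        if self.opened_at is None:
            return False
        if time.monotonic() - self.opened_at > WINDOW_SECONDS:
            self.opened_at = None       # indicator off; gesture no longer active
            return False
        if mouth_engaged:
            self.opened_at = None       # gesture consumed by the trigger
            return True
        return False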

[0087] The greeting beat 304 is typically the starting beat, but this beat 304 is not “activated” until the performer does the side-to-side hand wave gesture 302 and squeezes the mouth actuator 706. Once that gesture has been performed, one of several greeting sound bites will play 306, and the show control system 100 will move into the greeting show beat 304. The performer can continue to trigger additional greeting sound bites 306 by repeating the side-to-side hand wave gesture 302 and squeezing the mouth actuator 706.

[0088] In the question and answer beat 310, the character/puppet asks guests simple yes or no questions 312. This beat 310 is triggered when the performer performs a one-finger point pose gesture 308 and squeezes the mouth actuator 706. This also triggers one of several question sound bites. Typically, the performer waits for a response 314 from the guest, then reacts to the answer with a two-finger point 322 pose gesture and squeezes the mouth actuator 706. The system 100 also allows for slight variation in response in this sound bite. If the performer’s forearm is tilted upwards 322, such as, for example, greater than horizontal, the sound bite triggered is a positive sound bite 324. If the performer’s forearm is tilted downwards 318, the sound bite triggered is a more somber sound bite 320. If the performer needs to ask another question, the performer simply performs the one-finger point gesture 308 and squeezes the mouth actuator 706 again. The system 100 will not use the same question, which is typically desired.
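
The tilt rule above reduces to a one-line decision, sketched here under the angle convention of [0060] (positive angles toward the sky); the function name and the exact 0° threshold are assumptions for illustration.

def reaction_bank(forearm_pitch_degrees):
    # Forearm at or above horizontal selects the positive reply bank 324;
    # below horizontal selects the more somber bank 320.
    return "positive" if forearm_pitch_degrees >= 0.0 else "somber"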

[0089] Additionally, there are gestures that are always active 326 for the performer. First is the two-finger point pose gesture. This gesture is constant in all scenes and show beats, and triggers response sound bites that might be useful for generic answers or filler when interacting with guests. There are also variations to this gesture to allow the performer to modify the response to be more appropriate for the given exchange. In this case, if the hand is horizontal or tilted higher than horizontal, then the response is positive. If the hand is pointed down, or tilted towards the ground, then the response is more somber. The other always-active gesture 326 is the double-OK pose gesture. This gesture does not require the mouth actuator 706. Doing this pose prevents a sound bite from being triggered, and is useful if the performer wishes to animate the mouth without triggering a sound, such as, for example, opening the puppet’s mouth to smile at a guest.

[0090] All triggered sound bites have multiple variations. Which variation is triggered in a given moment is handled automatically in the system 100 where internal counters 228 track the last sound bite triggered specific to a defined action.
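
The internal counter behavior could be modeled as a per-action round-robin index. This is a minimal sketch in the spirit of the counters 228 described above; the clip names and class name are invented for the example.

from collections import defaultdict

class VariationCounter:
    def __init__(self, banks):
        self.banks = banks                 # e.g. {"greeting": ["hi_1.wav", ...]}
        self.counters = defaultdict(int)   # last-played index per action (228)

    def next_cue(self, action):
        clips = self.banks[action]
        cue = clips[self.counters[action] % len(clips)]
        self.counters[action] += 1         # advance so the same clip is not repeated
        return cue

# Usage: greetings cycle hi_1 -> hi_2 -> hi_3 -> hi_1 ...
player = VariationCounter({"greeting": ["hi_1.wav", "hi_2.wav", "hi_3.wav"]})
print(player.next_cue("greeting"))  # hi_1.wav
print(player.next_cue("greeting"))  # hi_2.wav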

[0091] Referring now to FIG. 4, there is shown a workflow diagram 400 showing a photograph opportunity scene used by a puppeted walk-around character and a performer for a photograph opportunity beat. The performer’s first performer action 402 is forming the OK sign with the puppet’s hand while squeezing the mouth actuator 706. The system 100 interprets this movement as the photograph opportunity 404 show control beat. The system 100 then triggers a first audio cue 406 to ask the guest if the guest would like their picture taken 408 with the puppet. If the guest answers affirmatively, then the performer performs a second performer action 410 of opening the puppet’s hand and squeezing the mouth actuator 706. The system 100 interprets this movement and triggers a second audio cue 412, such as, “Say Cheese” or whatever the character is known to say. If the guest would like more pictures 414, the performer repeats the second performer action 410 until all pictures are taken. For a group photograph, the performer can repeat the same gesture 410 as many times as needed to gather all the guests for the picture. Then, the performer performs a third performer action 416, that is, to move the puppet’s hand into a thumbs up position and squeeze the mouth actuator 706. This triggers a third audio cue 418 from the system 100 that is appropriate for the character the puppet represents, such as, “That was great!”.

[0092] Referring now to FIG. 5, there is shown a workflow diagram 500 showing a goodbye scene beat 504 used by a puppeted walk-around character and a performer. The last beat in a typical meet and greet scene is the goodbye scene beat 504. However, the goodbye scene beat 504 can be triggered at any time. This beat, and related sound bite, is triggered by doing the bent finger wave gesture 502 and squeezing the mouth actuator 706. This gesture 502 triggers a goodbye audio cue 506 using internal counters in 228. The performer can repeat the gesture 502 and trigger different audio cues 506 as needed. The system 100 will detect multiple, repeated gestures 502 and alter the goodbye audio cue 506 until the performer ceases the gesture 502 after the guest has left 508.

[0093] The collection of gestures shown in FIGS. 3, 4 and 5, along with squeezing the mouth actuator 706, is what the performer will use to trigger the sound bites. As expected, there are many more sound bites than gestures. What sound cue gets triggered is governed by the computerized software portion of the system 100, depending upon what scene the performer is in, what beat the performer is in, what sound bite the performer just triggered, and other factors. It is the computerized software, running on the system 100, that gives context to each gesture.

[0094] Referring now to FIG. 6, there is shown an area wide show control block diagram 600 for the system 100. Although not required for its primary functionality, embedded in the gesture and show controller 104 is a wireless radio 602 providing system 100 network connectivity to one or more than one wired, wireless or both wired and wireless network 604 and control systems 606, 610, 612, 614, and 616 external to the system 100. This is used for triggering environmental cues, such as sound bites or music cues through area wide audio control 614, theatrical lighting control 610 effects, animatronic effects control 612, amplifying the character’s voice 616 through the area audio system 614 if required, or any one of a multitude of show and environmental related cues and sequences. These external cues are often triggered by unique gestures within a scene. For example, the OK pose gesture, which is not typically used for guest exchanges during the “Walkabout” scene, can be used to trigger a specific character dialog exchange using one or more than one wired, wireless or both wired and wireless radios 602 to trigger area sound bites or sound effects based on area show control 606 and area audio control 614.

[0095] In addition to wireless signals triggering environmental cues, the same show control 600 can be used to alert the on-board gesture and show controller 104 that there is another character in close proximity. This information can be used to “cue up” alternative sound bites that are more specific to an exchange, dialog, or reference to the other nearby character. The show controller 104 also provides direct communication to other characters via other peer-to-peer wireless communication protocols such as, but not limited to, Bluetooth communication.

[0096] Although not needed for basic functionality of the system, there is a wireless control tablet 608 that gives the performer and support staff the ability to examine sensor input, calibrate sensors, and remotely trigger in-costume audio for testing. The tablet 608 can also be used to tell the in-costume gesture and show system 104 to use specific seasonal subsets of the triggered sound bites, such as sound bites for holidays, summer, Halloween, etc. Communication to the tablet 608 can be through one or more than one wired, wireless or both wired and wireless network 604 connection with the gesture and show controller 104 in the costume.
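
As a sketch only, a handler for these wireless messages might look like the following. The JSON packet format and field names are assumptions; the patent states only that the network transmits the nearby character's identification and the seasonal subset selection to the controller.

import json

def handle_radio_packet(packet_bytes, state):
    # Parse one incoming packet from the wireless radio 602 (format assumed).
    message = json.loads(packet_bytes)
    if message.get("type") == "nearby_character":
        # 224 -> 222: cue up alternative sound bites for the nearby character.
        state["nearby_character_id"] = message["character_id"]
    elif message.get("type") == "seasonal":
        # Tablet-selected seasonal subset, e.g. "halloween" or "summer".
        state["seasonal_set"] = message["season"]

state = {}
handle_radio_packet(b'{"type": "nearby_character", "character_id": "character_42"}', state)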

[0097] Referring now to FIG. 7, there is shown a front view 700 of in-costume hardware required for the system 100. The system 100 comprises “in-costume” and “on-performer” hardware. The combination of sensors and microcontrollers enables the performer to trigger sound bites while performing simple gestures. This, combined with an on-board audio amplifier and speaker, makes the system 100 entirely self-contained within the costume/puppet.

[0098] The in-costume hardware comprises one or more than one speaker 702, one or more than one visual indicator 704, mouth actuator 706, scene selector 708, and one or more than one gesture glove 710.

[0099] The one or more than one speaker 702 is used to play the selected audio to the guests. The one or more than one speaker 702 can be placed near the puppet’s/character’s mouth, but is not limited to that area alone. Other puppets/characters can make different sounds from different areas, depending on the character. To support the illusion that the puppet/character is real and communicating with the guests, the one or more than one speaker 702 can be placed strategically on the puppet/character costume.

[00100] The one or more than one visual indicator 704 is used to indicate to the performer that a gesture or audio playback has been completed, or that a gesture or audio playback is able to be used. In this embodiment, the one or more than one visual indicator 704 is mounted in the costume within view of the performer to provide scene, show beat, and status information. Other indicators, as discussed above, are also able to convey many different pieces of information to the performer inside the costume depending on the venue, character, location, etc.

[00101] The mouth actuator 706 is used to open the puppet’s mouth, so that the audio playback can match the mouth movement. This makes the puppet seem more real to the guests. A sensor on the actuator informs the primary gesture and show controller 104 of the current state of the mouth actuator 706.

[00102] The scene selector 708 is used in show navigation and is typically attached to the mouth actuator 706 so they can be utilized by the same hand of the performer. Generally, the scene selector 708 is a two-sided, momentary rocker switch. The performer uses their thumb to navigate scenes forwards and backwards with the scene selector 708. Pressing the scene selector 708 in one direction will advance a scene. Pressing the scene selector 708 in the opposite direction will move the scene selection backwards. The scene selector 708 always returns to its neutral state. This neutral state prevents accidental scene navigation. The scene selector 708 must be held for a defined period of time, approximately one second, before a scene moves forwards or backwards, to prevent unintentional scene changes.
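
A hypothetical hold-to-confirm implementation of that rocker-switch behavior follows, assuming the approximately one-second hold time; the class and method names are invented for this sketch.

import time

HOLD_SECONDS = 1.0  # approximate hold time before a scene change takes effect

class SceneSelector:
    def __init__(self, scene=1):
        self.scene = scene
        self.held_since = None
        self.direction = 0

    def update(self, direction):
        # direction: +1 (forward), -1 (back), 0 (neutral, the resting state).
        now = time.monotonic()
        if direction == 0 or direction != self.direction:
            # Switch released or reversed: restart the hold timer.
            self.direction = direction
            self.held_since = now if direction else None
            return self.scene
        if self.held_since and now - self.held_since >= HOLD_SECONDS:
            self.scene = max(1, self.scene + direction)
            self.held_since = None  # switch must return to neutral before another change
        return self.scene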

[00103] The one or more than one gesture glove 710 is used by the performer in combination with the mouth actuator 706 to activate sound playback to the guests.

[00104] Referring now to FIG. 8, there is shown a rear view 800 of in-costume hardware required for the system 100. As can be seen, the in-costume hardware further comprises one or more than one gesture and show controller 802, one or more than one audio amplifier 804, and power distribution 806.

[00105] The gesture and show controller 802 is discussed above, but this diagram shows the placement of the gesture and show controller 802 in-costume.

[00106] The one or more than one audio amplifier 804 can be placed in a variety of locations depending upon the puppet/character. Most puppets/characters will have at least one audio amplifier, but in larger venues, more audio amplifiers 804 may be needed for all the guests to hear the audio playback from the puppet/character.

[00107] The power distribution 806 provides power to all in-costume and on-performer components. The power distribution 806 is positioned in an easy to repair location and out of the way of the performer.

[00108] Referring now to FIG. 9, there is shown a diagram of on-performer gesture control system components showing inertial measurement unit placement 900 useful for the system 100. The on-performer hardware comprises one or more than one inertial measurement unit sensor 902 and 904. The one or more than one inertial measurement unit sensor 902 and 904 is attached to a gesture glove 1000 worn on the performer’s hand. The one or more than one inertial measurement unit sensor 902 and 904 measure the motion and position of the performer’s hand. Specific hand motion gestures can be interpreted by the system 100 and converted into a movement, action or audio playback depending on the scene, beat and location of the puppet.

[00109] Referring now to FIG. 10, there is shown a diagram of an on-performer gesture glove 1000 showing finger bend sensor and inertial measurement unit placement useful for the system 100. The gesture glove 1000 comprises sensors for determining performer movements, gestures, and poses. The gesture glove 1000 comprises one or more than one embedded finger bend sensor 1002, 1004, 1006, and 1008 and is worn by the performer to detect finger movement. Each finger movement detected by the one or more than one embedded finger bend sensor 1002, 1004, 1006, and 1008 can activate a defined gesture, a movement, or an audio clip depending on the gesture, the puppet, the location, the scene and the beat. Additionally, other nearby puppets, props, performers or guests can also be used to determine the next action of the puppet.
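
For illustration, the pose gestures listed earlier ([0059]-[0078]) could be classified from the four bend sensors plus the IMU pitch roughly as follows. The bend threshold, normalization, and pose names are all assumptions invented for this sketch; the patent does not give numeric values.

def classify_pose(bend, forearm_pitch_degrees):
    # bend: normalized per-finger values, 0.0 (straight) to 1.0 (fully bent).
    bent = {finger for finger, value in bend.items() if value > 0.6}
    if bent == {"index"}:
        return "ok_pose"                 # only the index finger is bent
    if bent == {"index", "middle"}:
        return "double_ok_pose"
    if bent == {"middle", "ring", "pinky"}:
        return "one_finger_point"        # index extended, other fingers bent
    if bent == {"ring", "pinky"}:
        return "two_finger_point"
    if bent == {"index", "middle", "ring", "pinky"}:
        # Fist and thumbs-up both bend all four fingers; the IMU pitch separates them.
        return "thumbs_up_pose" if forearm_pitch_degrees > 0 else "fist_pose"
    return None

print(classify_pose({"index": 0.1, "middle": 0.9, "ring": 0.9, "pinky": 0.9}, 0.0))
# -> one_finger_point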

[00110] Gestures are designed to be natural to the puppeted character and simple for a performer within the costume to execute given the limited mobility within the costume. These range from simple hand waves to merely striking a pose, such as, for example, hand up, hand down, one-finger point, two-finger point, OK sign, etc. No sound bite is triggered until the mouth actuator 706/sensor for the character’s mouth is used to open the mouth. This provides a safety net, so that the performer will not accidentally trigger an unintentional sound bite, and also ensures synchronization between the spoken dialog and the puppeting of the mouth from the start of the sound bite.

[00111] What has been described is a new and improved gesture control system for a puppeted walk-around character and a performer for engaging directly with guests, overcoming the limitations and disadvantages inherent in the related art.

[00112] Although the present invention has been described with a degree of particularity, it is understood that the present disclosure has been made by way of example and that other versions are possible. As various changes could be made in the above description without departing from the scope of the invention, it is intended that all matter contained in the above description or shown in the accompanying drawings shall be illustrative and not used in a limiting sense. The spirit and scope of the appended claims should not be limited to the description of the preferred versions contained in this disclosure.

All features disclosed in the specification, including the claims, abstract, and drawings, and all the steps in any method or process disclosed, may be combined in any combination, except combinations where at least some of such features and/or steps are mutually exclusive. Each feature disclosed in the specification, including the claims, abstract, and drawings, can be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise. Thus, unless expressly stated otherwise, each feature disclosed is one example only of a generic series of equivalent or similar features.

[00113] Any element in a claim that does not explicitly state "means" for performing a specified function or "step" for performing a specified function should not be interpreted as a "means" or "step" clause as specified in 35 U.S.C. § 112.