

Title:
METHODS AND SYSTEMS FOR COMMUNICATION AND INTERACTION USING 3D HUMAN MOVEMENT DATA
Document Type and Number:
WIPO Patent Application WO/2022/026630
Kind Code:
A1
Abstract:
Described herein are methods and systems for using three-dimensional human movement data as an interactive and synesthetic means of communication that allows body language to be shared between and among individuals and groups, permitting never-before-seen means of expressivity and sharing, and forming the basis for a novel type of media having numerous applications, for example as part of or to enhance the application of psychedelic-assisted therapy, especially where such therapy incorporates augmented or virtual reality.

Inventors:
HASHKES SARAH (US)
HOE MATTHEW (US)
Application Number:
PCT/US2021/043580
Publication Date:
February 03, 2022
Filing Date:
July 28, 2021
Assignee:
RADIX MOTION INC (US)
International Classes:
G06K9/00; G06T13/40
Foreign References:
US20180336714A12018-11-22
US8407756B22013-03-26
US9325936B22016-04-26
US20200184701A12020-06-11
CN111443619A2020-07-24
US9974478B12018-05-22
Attorney, Agent or Firm:
PECHENIK, Graham (US)
Claims:
CLAIMS

What is claimed is:

1. A method for communication using 3D human movement data, the method comprising the steps of: a. capturing 3D human movement input from a sender; b. creating a 3D movement data package from the 3D human movement input; c. sending the 3D movement data package to a recipient device; and d. rendering a 3D movement object on the recipient device, from the 3D movement data package.

2. The method of claim 1 further comprising the step of saving a 3D movement data file to permanent storage, comprising the 3D movement data package.

3. The method of claim 2 wherein the permanent storage contains a searchable movement database indexed based on the metadata of 3D movement files.

4. The method of claim 2 wherein the permanent storage contains a searchable movement database indexed based on movement data analysis of 3D movement files.

5. The method of claim 1 wherein the 3D movement data package created from the 3D human movement input reflects additional input from one or more additional input means.

6. The method of claim 5 wherein the additional input is voice input or input from the touchscreen of a smartphone or tablet device or the controls of a VR device.

7. The method of claim 5 wherein the additional input includes physiological, physiometric, or biometric data.

8. The method of claim 1 wherein the sender receives feedback from an output means.

9. The method of claim 8 wherein the feedback from an output means is visual feedback, auditory feedback, haptic feedback, or any combination thereof.

10. The method of claim 8 wherein the feedback from an output means is generated using higher-level features of the 3D human movement input.

11. The method of claim 10 wherein the higher-level features include smoothness of motion, range of motion, reaction time to a cue, gait size and speed, limb flexibility, and closeness of match to a predefined 3D movement.

12. The method of claim 1 wherein the 3D movement object on the recipient device is viewable to a recipient.

13. The method of claim 12 wherein the 3D movement object viewable to a recipient is interactive.

14. The method of claim 1 further comprising capturing 3D human movement input from at least one additional sender.

15. The method of claim 14 wherein the 3D movement object rendered on the recipient device is a combined 3D movement object, an amalgamated 3D movement object, or an average 3D movement object, said 3D movement object based on the captured 3D human movement input from the sender and the at least one additional sender.

16. The method of claim 1 further comprising rendering a 3D movement object on at least one additional recipient device.

17. The method of claim 1 further comprising the step of using the 3D movement data package to operate a mechanical apparatus.

18. A non-transitory computer-readable storage medium storing executable instructions that, when executed by a processor, cause the processor to perform steps comprising: a. capturing 3D human movement input from a sender; b. creating a 3D movement data package from the 3D human movement input; c. sending the 3D movement data package to a recipient device; d. receiving a 3D movement data message from a sending device; and e. rendering a 3D movement object from the 3D movement data message.

19. The non-transitory computer-readable storage medium of claim 18 further comprising the step of saving a 3D movement data file to permanent storage, comprising the 3D movement data package.

20. The non-transitory computer-readable storage medium of claim 18 further comprising the step of using the 3D movement data message to operate a mechanical apparatus.

21. A system for communication using 3D human movement data, comprising a processor and a non-transitory computer-readable storage medium storing executable instructions that, when executed by the processor, cause the processor to perform steps comprising: a. capturing 3D human movement input from a sender; b. creating a 3D movement data package from the 3D human movement input; c. sending the 3D movement data package to a recipient device; d. receiving a 3D movement data message from a sending device; and e. rendering a 3D movement object from the 3D movement data message.

22. The system of claim 21 further comprising the step of saving a 3D movement data file to permanent storage, comprising the 3D movement data package.

23. The system of claim 21 further comprising the step of using the 3D movement data message to operate a mechanical apparatus.

Description:
METHODS AND SYSTEMS FOR COMMUNICATION AND INTERACTION USING 3D HUMAN MOVEMENT DATA

CROSS REFERENCE

[0001] This application claims priority under 35 U.S.C. §119(e) to the U.S. Provisional Patent Application entitled “Methods and Systems for Communication using 3D Human Movement Data,” filed with the U.S. Patent and Trademark Office on July 28, 2020, and assigned Serial No. 63/057,873, which is incorporated by reference as if fully set forth herein.

FIELD OF THE INVENTION

[0002] Described herein are methods and systems for using three-dimensional human movement data as an interactive and synesthetic means of communication.

BACKGROUND OF THE INVENTION

[0003] Human beings are inherently social animals, for whom communication is both a fundamental feature and a fundamental need. Communication forms the foundation for human interaction, connection, and bonding.

[0004] Communication, generally defined, is the act of conveying meaning, through the use of mutually understood signs, symbols, and semiotic rules. While sometimes narrowly understood to refer to verbal and written language specifically, communication also includes non-linguistic modes of meaning transfer, such as eye movements, facial expressions, hand gestures, body postures, and the use of common physical space (taken together, “body language”). Indeed, the word communication comes from the Latin verb “communicare,” meaning broadly “to share.”

[0005] Body language in fact plays an outsize role in human communication. Studies have demonstrated that as much as 55 percent of human communication is based on body language. From the earliest age, humans mimic their parents, learn skills by copying others, and respond behaviorally and emotionally to the body language of others around them, even before they are able to understand and use verbal language.

[0006] In multiple areas of the brain responsible for processing movement and touch, humans have “mirror neurons” that fire both when a person acts and when a person observes the same action performed by another. Mirror neurons have been demonstrated to underpin the ability to understand that others have beliefs, desires, intentions, and perspectives that are different from one’s own (“theory of mind”), and to contribute to the human capacity for empathy. Empathy, the ability to understand and share someone else’s emotions, is an imperative ingredient of individual well-being, and a critical component of successful social interaction. According to the theory of “embodied cognition,” it also has been shown that physical experience is an irreducible aspect of human cognition, and that bodily movement and interaction in the context of a task or environment will impact an individual’s perceptions, emotions, and behaviors. Given such teachings, it is therefore believed that, in large part, the ability to use and observe body language in communication with others is necessary for individual and social flourishing.

[0007] Generally, advances in communication technology have focused only on addressing spatial and temporal limitations. For instance, from the first smoke signals, through the telegraph and telephone, to wireless and satellite technologies, advances have allowed communication with more and more recipients who are distant in space. And from the first cave paintings, through print, radio, and television, to the internet and social media, advances have allowed communication with more and more recipients who are also distant in time.

[0008] Despite these advances, human communication technologies remain substantially incomplete. For instance, while such advances permit widespread communication of verbal and written language, no current technologies allow communication of each aspect of body language. Thus, the ability for humans to fully express themselves — and to fully share their thoughts, feelings, emotions, and beliefs — remains unrealized. As increasing numbers of humans retreat behind screens, both at work and at home, there is especially great need for a further advance.

[0009] Several attempts to bridge this divide have been made. Video chat applications (e.g., Skype, FaceTime, Zoom) introduced a visual modality to communications that might otherwise have involved only voice or text. Messaging platforms allow sharing of emoticons and emojis as well as “animoji,” “bitmoji,” “memoji,” and the like, that permit some sharing of expressions, emotions, and other non-linguistic information. Social media platforms (e.g., Snapchat, Facebook, Instagram, TikTok) provide the ability to share videos and “stories” that combine visual communication with the expressive aspects of bitmoji and various “filters,” i.e., effects that augment facial or bodily movements, overlaid onto a video clip. It also has become common to share visual content and “memes” (e.g., image macros and animated gifs) to communicate emotions and other information that may not be easily conveyed linguistically.

[0010] However, none of these attempts solve all of the problems of prior communication technologies. Video chat applications, for example, generally are for synchronous communication, are used in ways that only exchange facial information, and even there have latency and bandwidth issues that make conveying emotional signals with facial expressions less robust and reliable. While facial expressions and some additional body language can be shared asynchronously (and even modified or enhanced) on messaging and social media platforms, these only can be shared as two-dimensional video files that are not interactive. And while some expressive visual information can be shared through “reaction gifs” and other memes, these rely on (and are in fact used because of) a predetermined vocabulary of symbolic meaning, rather than the unique and personal meaning of the sender.

[0011] Overcoming many of the limitations in the prior art, applicant herein discloses novel communication methods that utilize three-dimensional (3D) human movement data. These novel communication methods permit body language to be shared, deepening human connections through the emotion and empathy that such sharing represents and engenders. Although prior art systems exist to record 3D human movement data, the use of such data is typically for animation (e.g., movies, video games), and its purpose is ultimately to generate two-dimensional (2D) video. Applicant is unaware of any prior methods or systems that use 3D human movement data as a medium of communication, as a basis for a social communication platform, or in ways that allow the types of interactivity taught herein.

[0012] Applicant also discloses novel methods for synesthetic communication, which permit never-before-seen forms of expressivity and sharing, further intensifying human connections in new ways. While prior art virtual reality (VR) systems allow users to try on various “avatars,” some with different features or capabilities than humans (e.g., wings, tails, tentacles), such technologies never have been used in ways that connect VR systems with technology available on smartphones (e.g., mobile augmented reality (AR)), or that create experiences solely with mobile AR systems, involving communication with 3D human movement data, and multiple forms of interactivity with such data.

[0013] These novel methods for synesthetic communication are believed to increase neural prediction error in a way analogous to the increase in prediction error caused by consuming psychedelic substances, increasing the level of surprise in the brain, and resulting in greater neuroplasticity and learning.

[0014] The disclosed methods accordingly not only can be used as part of psychedelic experiences (including psychedelic-assisted therapy), and to improve and enhance such psychedelic experiences, but they also can teach individuals about psychedelic experiences without the individual having to consume a psychedelic substance, by demonstrating to an individual (including to a psychedelic naive individual) multiple aspects of what a psychedelic experience may be like.

[0015] Additionally, applicant discloses multiple novel methods and systems for using the 3D human movement data collected in the described communication methods, in fields such as artificial intelligence (AI), social, gaming, education, fitness, health, entertainment, research, and robotics. Through these methods and systems, applicant discloses how the 3D movement data itself forms a new resource and a new type of media with numerous significant applications.

INCORPORATION BY REFERENCE

[0016] Each patent, publication, and non-patent literature cited in the application is hereby incorporated by reference in its entirety as if each was incorporated by reference individually. Unless specifically stated otherwise, reference to any document herein is not to be construed as an admission that the document referred to or any underlying information in the document is prior art in any jurisdiction.

BRIEF SUMMARY OF THE INVENTION

[0017] The invention provides methods for communication using 3D human movement data. In preferred embodiments these methods allow for interactive and/or synesthetic communication.

[0018] In some embodiments, the methods for communication using 3D human movement data comprise the steps of: (a) capturing 3D human movement input from a sender; (b) creating a 3D movement data package from the 3D human movement input; (c) sending the 3D movement data package to a recipient device; and (d) rendering a 3D movement object on the recipient device, from the 3D movement data package.
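The four steps of paragraph [0018] can be sketched in code as follows. This is a minimal illustration only: all class and function names, and the representation of movement frames as per-joint 3D positions, are assumptions introduced here, not part of the disclosure.

```python
from dataclasses import dataclass, field

@dataclass
class MovementFrame:
    timestamp: float   # seconds from the start of capture
    joints: dict       # joint name -> (x, y, z) position

@dataclass
class MovementDataPackage:
    frames: list
    metadata: dict = field(default_factory=dict)

def capture_movement(raw_samples):
    """Step (a): turn raw sensor samples into timestamped 3D frames."""
    return [MovementFrame(t, joints) for t, joints in raw_samples]

def create_package(frames, sender_id):
    """Step (b): wrap captured frames into a 3D movement data package."""
    return MovementDataPackage(frames, {"sender": sender_id})

def send_package(package, recipient_inbox):
    """Step (c): deliver the package to a recipient device (here, a list)."""
    recipient_inbox.append(package)

def render_object(package):
    """Step (d): produce a renderable object; here, joint positions per frame."""
    return [frame.joints for frame in package.frames]

# Example flow from a sender to a recipient
inbox = []
samples = [(0.0, {"wrist": (0.1, 1.2, 0.3)}),
           (0.033, {"wrist": (0.12, 1.21, 0.3)})]
send_package(create_package(capture_movement(samples), "sender-101"), inbox)
rendered = render_object(inbox[0])
```

In a real implementation the inbox would be a network endpoint and the rendered object a 3D scene, but the data flow is the same.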

[0019] In some embodiments, these methods further comprise the step of saving a 3D movement data file to permanent storage, comprising the 3D movement data package. In some preferred embodiments, the permanent storage contains a searchable movement database indexed based on the metadata of 3D movement files, or based on movement data analysis of 3D movement files.
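The searchable movement database of paragraph [0019] can be realized with an inverted index over metadata fields. The schema and names below are illustrative assumptions; the disclosure does not prescribe a storage layout.

```python
from collections import defaultdict

class MovementDatabase:
    """A toy searchable movement database indexed on file metadata."""

    def __init__(self):
        self.files = {}                 # file id -> (movement data, metadata)
        self.index = defaultdict(set)   # (field, value) -> set of file ids

    def save(self, file_id, movement_data, metadata):
        self.files[file_id] = (movement_data, metadata)
        for key, value in metadata.items():
            self.index[(key, value)].add(file_id)

    def search(self, **criteria):
        """Return ids of files whose metadata matches every criterion."""
        matches = [self.index[(k, v)] for k, v in criteria.items()]
        return set.intersection(*matches) if matches else set()

db = MovementDatabase()
db.save("m1", [], {"sender": "alice", "filter": "peaceful"})  # frames omitted
db.save("m2", [], {"sender": "bob", "filter": "peaceful"})
found = db.search(filter="peaceful", sender="alice")
```

An index built from movement data analysis (claim 4) would work the same way, with computed features (e.g., a gait-speed bucket) taking the place of metadata fields.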

[0020] In some embodiments, the 3D movement data package created from the 3D human movement input reflects additional input from one or more additional input means, including voice input, input from the touchscreen of a smartphone or tablet device or the controls of a VR device, and physiological, physiometric, or biometric data.

[0021] In some embodiments, the sender receives feedback from an output means. The feedback may be visual feedback, auditory feedback, haptic feedback, or any combination thereof. In some embodiments, the feedback is generated using higher-level features of the 3D human movement input, which include smoothness of motion, range of motion, reaction time to a cue, gait size and speed, limb flexibility, and closeness of match to a predefined 3D movement.
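Two of the higher-level features named in paragraph [0021] can be computed from a joint's 3D trajectory as sketched below. The specific formulas (per-axis extent for range of motion; mean squared second difference as an inverse smoothness score) are simplifying assumptions chosen for illustration.

```python
def range_of_motion(positions):
    """Range of motion of one joint: per-axis extent of its 3D trajectory."""
    return tuple(max(p[i] for p in positions) - min(p[i] for p in positions)
                 for i in range(3))

def smoothness(positions, dt):
    """A simple smoothness score: mean squared second difference of position
    (an acceleration proxy); lower values mean smoother motion."""
    accel = [
        tuple((positions[k + 1][i] - 2 * positions[k][i] + positions[k - 1][i]) / dt**2
              for i in range(3))
        for k in range(1, len(positions) - 1)
    ]
    return sum(a[i] ** 2 for a in accel for i in range(3)) / len(accel)

# A wrist moving at constant velocity along x, sampled at 30 fps
wrist = [(0.0, 1.0, 0.0), (0.1, 1.0, 0.0), (0.2, 1.0, 0.0), (0.3, 1.0, 0.0)]
rom = range_of_motion(wrist)          # extent is 0.3 along x only
score = smoothness(wrist, dt=1 / 30)  # ~0 for constant-velocity motion
```

Features such as reaction time to a cue or closeness of match to a predefined movement would similarly reduce a trajectory to a scalar that can drive visual, auditory, or haptic feedback.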

[0022] In some embodiments, the 3D movement object is viewable to a recipient and optionally is also interactive.

[0023] In some embodiments, 3D human movement input is captured from at least one additional sender, including in some embodiments from groups of senders.

[0024] In some embodiments, the 3D movement object rendered on the recipient device is a combined 3D movement object, an amalgamated 3D movement object, or an average 3D movement object, said 3D movement object based on the captured 3D human movement input from the sender and the at least one additional sender, including in some embodiments from a group of senders.

[0025] In some embodiments, the 3D movement object is rendered on at least one additional recipient device, including in some embodiments on the recipient devices of a group of recipients.

[0026] In some embodiments, 3D movement data is used to operate a mechanical apparatus.

[0027] The invention also provides non-transitory computer-readable storage media storing executable instructions that, when executed by a processor, cause the processor to perform steps comprising methods such as described above.

[0028] The invention further provides systems for performing the steps of such methods.

[0029] These and other objects, features, improvements, and advantages of the present invention may be more clearly understood and appreciated from a review of the following detailed description of the disclosed embodiments and examples, and by reference to the appended claims. The foregoing summary has been made with the understanding that it is to be considered as a brief and general synopsis of only some of the objects and embodiments disclosed herein, is provided solely for the benefit and convenience of the reader, and is not intended to limit in any manner the scope, or range of equivalents, to which the appended claims are lawfully entitled.

BRIEF SUMMARY OF THE DRAWINGS

[0030] To further clarify various aspects of some embodiments of the present invention, a more particular description of the invention will be rendered by reference to the embodiments which are illustrated in the included figures. It will be understood and appreciated that the figures depict only certain exemplary implementations of the invention and are not to be considered limiting of its scope. As the figures are generally illustrated diagrammatically, or otherwise representationally, they are simply provided to help illuminate various concepts of the invention. Additional aspects of the invention are further elucidated and explained with greater specificity, but still by way of example only, in the detailed description, which shall be read with reference to the accompanying figures in which:

[0031] FIG. 1 is a block diagram illustrating an exemplary system architecture in which embodiments of the present invention may be implemented, and illustrating an exemplary flow from a sender to a recipient, according to an implementation. Where modules or steps are connected with arrows using dashed lines, they shall be considered optional to the exemplary implementation of the illustrated embodiment.

[0032] FIG. 2 is a flow diagram illustrating embodiments of the methods of communication using 3D human movement data of the present invention, illustrating an exemplary flow from a sender to a recipient, according to an implementation. Where modules or steps are connected with arrows illustrated using dashed lines, they shall be considered optional to the exemplary implementation of the illustrated embodiment.

[0033] FIG. 3 is a flow diagram illustrating embodiments of the methods of communication using 3D human movement data of the present invention, illustrating an exemplary flow from a sender to a recipient, from the perspective of the sender, according to an implementation.

[0034] FIG. 4 is a flow diagram illustrating embodiments of the methods of communication using 3D human movement data of the present invention, illustrating an exemplary flow from a sender to a recipient, from the perspective of the recipient, according to an implementation.

[0035] FIG. 5 is a block diagram illustrating an exemplary computing architecture comprising a backend application programming interface (API) and a client software development kit (SDK), illustrating some embodiments in which novel applications can store, query, access, and utilize the 3D human movement data described herein.

[0036] FIG. 6A is a representation of a screenshot from an exemplary implementation of the invention using mobile AR on an iPhone, illustrating the screen of a sender device, and illustrating a timepoint in the capture of 3D human movement data of sender comprising blowing a kiss, further illustrating a visual overlay of a graphical representation of the kiss being blown.

[0037] FIG. 6B is a representation of a screenshot from an exemplary implementation of the invention using mobile AR on an iPhone, illustrating the screen of a sender device, and illustrating a later timepoint in the capture of 3D human movement data of sender comprising blowing a kiss, further illustrating a visual overlay of a graphical representation of the kiss being blown.

[0038] FIG. 6C is a representation of a screenshot from an exemplary implementation of the invention using mobile AR on an iPhone, illustrating the screen of a sender device, the screen being that used to send the captured 3D human movement data to one or more recipients, optionally including message text.

[0039] FIG. 6D is a representation of a screenshot from an exemplary implementation of the invention using mobile AR on an iPhone, illustrating the screen of a recipient device, and illustrating a timepoint in the viewing of 3D human movement message from sender comprising blowing a kiss, and illustrating a visual overlay with a sent text message saying “Hi.”

[0040] FIG. 6E is a representation of a screenshot from an exemplary implementation of the invention using mobile AR on an iPhone, illustrating the screen of a recipient device, and illustrating a later timepoint in the viewing of 3D human movement message from sender comprising blowing a kiss, further illustrating a visual overlay of a graphical representation of the kiss being blown, and illustrating a visual overlay with a sent text message saying “Hi.”

[0041] FIG. 6F is a representation of a screenshot from an exemplary implementation of the invention using mobile AR on an iPhone, illustrating the screen of a recipient device, and illustrating another timepoint in the viewing of a 3D human movement message from sender comprising blowing a kiss, further illustrating a visual overlay of a graphical representation of the kiss having been caught by recipient (as part of the game “hearts”), and showing the text “Kisses Caught: 1.”

[0042] FIGS. 7A-7D are diagrams illustrating an exemplary implementation of the present invention, i.e., Example 6, in which 3D human movement data is captured, the human shape is segmented/separated from the background, the joints of the human form are identified, and then the 3D human movement data (including the identified joints) is combined with the segmented human form to be played back in augmented reality (AR) space at the receiving end, with 3D interactive effects and optionally games in accordance with an embodiment of the present invention.

[0043] FIG. 8 is a diagram illustrating four representations of screenshots demonstrating exemplary implementations of a “meme” ball UI being used to control various functions of a 3D movement data system, as discussed in Example 7, in accordance with an embodiment of the present invention.

[0044] FIGS. 9A-9B are diagrams illustrating representations of screenshots demonstrating exemplary implementations of VR environment/avatar features for a user, as discussed in Example 7, in accordance with an embodiment of the present invention.

[0045] FIGS. 10A-10B are diagrams illustrating representations of screenshots demonstrating exemplary implementations of UIs for selecting avatars, games, and other like features/settings, as discussed in Example 7, in accordance with embodiments of the present invention.

[0046] FIGS. 11A-11C are diagrams illustrating representations of screenshots demonstrating exemplary implementations wherein a recorded human form can appear to be present (e.g., as a “hologram”) in a real environment, as discussed in Example 6, in accordance with embodiments of the present invention.

DETAILED DESCRIPTION OF THE INVENTION

[0047] While the present invention is now further described in terms of particular embodiments, examples, and applications, and by reference to the exemplary embodiments that are depicted in the accompanying figures, this description is not intended in any way to limit its scope to any such embodiments, examples, and applications, and it will be understood that many modifications, substitutions, alternatives, changes, and variations in the described embodiments, examples, applications, and other details of the invention illustrated herein can be made by those skilled in the art without departing from the spirit of the invention, or the scope of the invention as described in the appended claims, including all equivalents to which they are lawfully entitled.

[0048] Various modifications, as well as a variety of uses in different applications, will be readily apparent to those skilled in the art, and the general principles defined herein may be applied to a wide range of aspects. Thus, the present invention is not intended to be limited to the aspects presented, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein. The description below is designed to make such embodiments apparent to a person of ordinary skill in the art, in that they shall be both readily cognizable and readily creatable without undue experimentation.

[0049] When introducing elements of the present invention or the embodiments thereof, the articles “a,” “an,” “the,” and “said” are intended to mean that there are one or more of the elements. Any reference to an element in the singular is therefore not intended to mean “one and only one” unless specifically so stated, but rather “one or more”; therefore, the term “or” standing alone, unless context demands otherwise, shall mean the same as “and/or.” The terms “comprising,” “including,” “such as,” and “having” are also intended to be inclusive and not exclusive (i.e., there may be other elements in addition to the recited elements). Thus, for example, the terms “including,” “may include,” and “include,” as used herein mean, and are used interchangeably with, the phrase “including but not limited to.” The word “exemplary” is used herein to mean “serving as an example, instance, or illustration.” Any aspect, embodiment, process, or implementation described herein as “exemplary” is therefore not to be construed as necessarily preferred or advantageous over others.

[0050] Among these various aspects and embodiments of the present invention are methods and systems for communication using 3D human movement data. Such methods and systems can be better understood by reference to the following examples.

EXAMPLE 1: Communication of 3D Movement Data from a Sender to a Recipient

[0051] Through this Example, it will be evident how in aspects of the invention, a sender can communicate with a recipient by sending the recipient 3D human movement data generated by the sender. Compared to prior technologies that have allowed a sender to communicate various other messages — e.g., text, voice, images, 2D video — embodiments of the invention allow, for the first time, a sender to communicate a message that consists of 3D human movement data.

[0052] For example, a sender can share a hug, blow a kiss, or show off a dance move. A parent can also share their child’s dance move, and thus it will be readily appreciated that for this Example, and in the other embodiments described and claimed, a “sender” is the human from whose movements the 3D movement data is generated. The sender, in other words, is the human who provides the 3D human movement input. The sender however need not also be the person operating the device to capture or transmit the 3D movement data.

[0053] Because an improvement of the present invention is the ability to communicate using 3D human movement data, it will be understood however that the sender is human. For convenience, the terms “3D movement data” and “3D human movement data” are therefore used interchangeably herein. Elsewhere, the word “human” also may be left out for convenience, without changing the meaning of a term. “3D movement data” includes data representing any form of human body language, such as eye movements, facial expressions, hand gestures, body postures, and the use of physical space.

[0054] It will be readily appreciated that “3D movement data” and “3D human movement data” also need not in all embodiments be the movement data of a “whole” human, but in some embodiments may be data captured from part of a human, such as a human torso, a human down to (and either including or not) the knee joints, a human down to (and either including or not) the ankle joints, a human all but the feet, and so forth, as will be easily understood.

[0055] Critically, the 3D movement data is not simply a 2D video capture of a 3D movement or series of movements; nor is it a 2D video conversion of 3D movement data that was captured. Instead, it is comprised of 3D movement data itself, as the following description will make clear.

[0056] FIG. 1 is a block diagram illustrating an exemplary system architecture in which embodiments of the present invention may be implemented, and further illustrating an exemplary flow from a sender to a recipient, according to this Example. Where modules or steps are connected with patterned arrows illustrated using dashed lines, they shall be considered optional; however, even where modules or steps are connected using solid arrows, they also may be considered optional, depending on the particular embodiment claimed, as will be readily appreciated by the fact that all figures are merely exemplary and not limiting.

[0057] In some embodiments taught by FIG. 1, a sender 101 desires to send a 3D movement data message to a recipient 115. In other embodiments, a sender 101 may send a 3D movement data message to multiple recipients; however, for sake of simplicity, reference generally shall be made to a single recipient, although it will be readily appreciated that an embodiment can be adapted to allow messages to be sent to multiple recipients, by reference to the teachings herein combined with the ordinary skill of the art.

[0058] It also will be appreciated that, besides teaching the communication of 3D movement data messages between a sender and a recipient, or between a sender and multiple recipients, among the improvements of the invention is its novel disclosure of a social platform based on 3D movement data, wherein many senders and recipients may share 3D movement data and interact with each other’s data. The invention further discloses the use of 3D movement data as a part of interactive asynchronous multiplayer movement games.

[0059] In preferred embodiments, 3D movement data is combined with or accompanied by computer-generated or sender-defined metadata or additional sender-defined data (such as an accompanying text message). Examples of computer-generated metadata include a unique message identifier, a sender identifier, a recipient identifier, and a time stamp.

[0060] In some embodiments, a sender may record a 3D movement data message only to be displayed to a recipient when specific criteria are met, e.g., not until and unless the recipient device is in a particular location (e.g., based on GPS coordinates, based on proximity to another device, within range of a specific Wi-Fi signal, and the like), or on a particular date and/or a particular time, which may be a specific pre-set date and/or time (e.g., 9:01 pm PDT on July 28, 2021), or a time defined by one or more sender- and/or receiver-defined parameters being satisfied (e.g., upon incarceration, incapacitation, disappearance, or death).

[0061] In some exemplary implementations, examples of sender-defined metadata may include information about: (1) the avatar, e.g., "skinOn" (is avatar on), "skinHueStart" (base color of avatar), "skinHueSize" (amount of variance in base color), "skinNoiseForce" (noise function of skin); (2) the particles, e.g., "particlesOn" (are particles on), "particlesDecaySpeed" (speed of particle decay), "particlesHueStart" (baseline color of particle), "particlesHueSize" (amount of variance from base color); (3) the filter chosen by the sender, e.g., "explosive" (filter based on movement data kinetic energy), "peaceful" (movement filter based on openness of body posture), and interactive auditory filters; (4) the prosocial game chosen by the sender, e.g., "hearts" (where a sender can blow 3D kisses and a recipient can catch them with a counter showing how many kisses were caught) or "follow" (where a recipient receives points for how well a sender’s movements are followed); and (5) other parameters, e.g., "remix" (should data be remixed with other data). The types of other parameters that may be considered for adoption are only limited by the imagination of the ordinary artisan.
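To make the combination of computer-generated and sender-defined metadata concrete, the following is a minimal, hypothetical sketch. The parameter names ("skinOn", "particlesDecaySpeed", etc.) come from the paragraph above; the function name, dict layout, and default values are invented for illustration and do not reflect any particular implementation.

```python
import time
import uuid

def build_metadata(sender_id, recipient_id, sender_params=None):
    """Combine computer-generated metadata with sender-defined parameters."""
    metadata = {
        # Computer-generated metadata: identifiers and time stamp
        "messageId": str(uuid.uuid4()),
        "senderId": sender_id,
        "recipientId": recipient_id,
        "timestamp": time.time(),
        # Sender-defined parameters (defaults here are invented examples)
        "skinOn": True,
        "skinHueStart": 0.0,
        "skinHueSize": 0.1,
        "particlesOn": True,
        "particlesDecaySpeed": 1.0,
        "filter": "peaceful",
        "game": "hearts",
        "remix": False,
    }
    if sender_params:
        metadata.update(sender_params)  # sender overrides take precedence
    return metadata
```

In this sketch, sender-supplied overrides (for example, choosing the "explosive" filter) simply replace the defaults before the metadata is attached to the 3D movement data package.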

[0062] Together, the 3D movement data of the sender, along with its metadata and any additional data, form a “3D movement data package.” A 3D movement data package may reside on volatile or non-transitory computer-readable media, but when transmitted, is also referred to as a “3D movement data message.” A 3D movement data message therefore comprises a 3D movement data package, optionally including any metadata necessary for the file transfer format, and optionally compressed or otherwise modified as appropriate to accomplish the transfer. A “3D movement data message” as used herein thus also means a message transmitted from sender to recipient, comprising the 3D movement data of the sender.

[0063] In the embodiments now described, it shall be assumed that sender 101 misses the recipient 115 and, rather than sending a text message saying “I miss you” or sending a heart emoji via text message, desires to express her feelings by sending a unique 3D movement data message representing her own 3D movement of blowing a kiss. In these embodiments, the desire of sender 101 can be instantiated through the use of a sender device 116 and a receiver device 118, both having such functionality as set forth in FIG. 1 and now described. (Or, in embodiments where messages are sent to multiple recipients, to multiple receiver devices 118.)

[0064] In these embodiments, sender device 116 will have a 3D motion capture means 103, processing means 105, recording means 106, and sending means 108. Optionally, sender device 116 may have additional input means 102, output means 104, and permanent storage 107.

[0065] In these embodiments, receiver device 118 will have a receiving means 110, a 3D motion rendering means 112, and an output means 114. Optionally, receiver device 118 may have permanent storage 111 and input means 113.

[0066] Additionally, an optional permanent storage 109 may be utilized that is physically separate from permanent storage 107 of sender device 116 and permanent storage 111 of receiver device 118, such as a cloud storage device on cloud 117 or another suitable remote storage device. Although each permanent storage will be understood to be physically separate in the embodiments of FIG. 1, each permanent storage may be operationally or functionally coupled so as to communicate with each other and transfer data. A “3D movement data file” comprises a “3D movement data package” when on permanent storage (i.e., is a 3D movement data package, optionally together with the metadata specific to the file format, such as the file header, and optionally compressed or otherwise modified as appropriate for storage).

[0067] In some embodiments, sender device 116 and receiver device 118 may be a portable device such as an Apple iPhone running iOS, a handset running Android, or any other suitable smartphone, tablet, or personal computing device. In other embodiments, 116 or 118 may be a VR device or system. At the time of filing, such devices and systems include the Oculus Quest VR, the Oculus Rift S, the Sony PlayStation®VR, the HTC Vive Cosmos, the Valve Index, Windows Mixed Reality headsets, and others. In yet other embodiments, 116 or 118 may be desktop systems or console systems.

[0068] It will be readily appreciated that the methods of the present invention are not directed towards, or limited by, any particular hardware. Although system and design requirements may vary, it will be understood that software embodying the invention can be implemented on different hardware without reliance on teachings outside of this disclosure or outside of the general knowledge of one of skill in the art.

[0069] In some embodiments, sender 101 uses as sender device 116 an Apple smartphone or tablet capable of mobile AR, for instance, an iOS device with an A12 chip. In these embodiments, sender 101 provides 3D human movement input which is captured by the 3D motion capture means 103. Suitable 3D motion capture means include the body-tracking functionality in the ARKit framework on device 116, which recognizes and tracks a person’s movements using an iOS device’s rear camera.

[0070] In the Examples and embodiments described herein, reference may be made to FIGS. 6A-6F, which show representations of screenshots from an iOS device of sender 611 providing the 3D human movement input of blowing a kiss, which kiss additionally takes 3D virtual form as heart 612. More specifically, 3D movement object 613 is generated based on sender 611’s movements as shown in screenshots 610 and 620 in FIGS. 6A and 6B, respectively; transmitted as shown in screenshot 630 of FIG. 6C; and then received and rendered on a recipient device as shown in screenshots 640, 650, and 660 of FIGS. 6D, 6E, and 6F, respectively. Screenshots 610 and 620 in FIGS. 6A and 6B, respectively, also show control menu 614, with control buttons 615 (“Back to Inbox”) and 616 (“Recording”), the latter of which is selected in order to begin the capturing and creating process. Screenshot 620 in FIG. 6B shows a later part of the sender’s 3D movement input, where sender 611 raises her arm, as reflected by the raised hand 623 of 3D movement object 613, thereby sending off the virtual heart 612 to the recipient.

[0071] Together, the rear camera and the ARKit framework discussed above, along with the other hardware and software necessary for them to perform their functions, therefore comprise a suitable 3D motion capture means 103, but one of skill will recognize that other suitable 3D motion capture means include comparable hardware and software configurations on other portable devices such as those running Android, on desktop and console systems, and in VR systems.

[0072] While motion capture may be accomplished with a camera (such as a smartphone camera, or a depth camera utilizing Intel® Real Sense™ or similar technology), 3D motion capture means also include optical (including active, passive, and semi-passive), inertial (e.g., gyroscopes, accelerometers), mechanical, and magnetic systems, as well as systems implemented using Wi-Fi (e.g., WiCapture, WiTrack) or Ultra-WideBand (UWB) technology.

[0073] Other suitable 3D motion capture means 103 include volumetric video capture means. In some embodiments, volumetric video capture means include the use of multiple cameras (and camera perspectives), digital graphics processing, photogrammetry, and other multi-sensor and/or computation-based approaches used in combination to generate volumetric 3D video. For example, volumetric video data can be captured using a mesh-based approach, e.g., a 3D triangle mesh as with the geometry used for computer games and visual effects, or using a point-based approach, e.g., volumetric 3D data represented by points/particles in 3D space carrying attributes such as color and size. Exemplary volumetric video capture means include or use the HOLOSYS™ Volumetric Video Capture System (4Dviews); Mixed Reality Capture Studio, Kinect 4 Azure, and Azure Kinect Developer Kit (DK) (Microsoft); Aspect 3D (Level Five Supplies Ltd.); Depthkit Studio; Mantis Vision handheld 3D scanners, 3D Studio 3iosk, and Echo software kit; IO Industries volumetric cameras, sensors, and software; and Intel® Real Sense™.

[0074] Using the iOS device’s rear camera, the 3D movement input of sender 101 is captured. It will be readily appreciated that any 3D movement input, and the underlying movement or series of movements that it represents (in the exemplary screenshots of FIGS. 6A-6F, the underlying movements are of sender 611 blowing a kiss), will have a start and end time. How such start and end times are selected represents a design choice left to the ordinary artisan, but may for example be implemented based on user activation (e.g., screen taps or button presses, voice initiation and termination), with timers including countdown timers, or through motion analysis software (i.e., using software to automatically determine the start time and/or end time by analyzing features of the movement itself), and the like, and ultimate start and end times may be altered with post-capture editing.

[0075] It also should be readily appreciated that each 3D movement input is in fact a series of timepoints, from the start time through the end time, the total number of which is determined by the particular frame rate. Typical frame rates for motion capture systems include 30 frames per second (fps) and 60 fps, but depending on the system and its use, may be lower, such as 24 fps, or higher, such as 100 fps, 120 fps, 160 fps, 200 fps, 400 fps, or even 10,000 fps and above.

[0076] Three-dimensional motion capture generally is the process of tracking motion in 3D and converting it to data. In some embodiments, a suitable 3D motion capture means 103 tracks the motion in 3D of sender 101, by capturing joint positions and rotations across time. Herein, the term “joint” shall have its ordinary meaning in the field of motion capture, i.e., a potential point of articulation on a skeleton model. While joints thus may correspond to anatomical joints, they may also simply represent a portion of a model that can be moved or deformed in some way.

[0077] In the embodiments that use ARKit, joints may include: (1) torso joints, i.e., the hip joint, which is the root of the ARKit joint hierarchy, and seven spine joints; (2) head and neck joints, i.e., four neck joints extending from the spine, as well as joints for controlling the head, eyes and eyelids, nose, chin, and jaw; (3) arm and shoulder joints, i.e., three joints representing the shoulder, elbow, and wrist; (4) leg and foot joints, i.e., joints for moving the upper legs, lower legs, feet, and toes; and (5) hand joints, i.e., the thumbs, which each have four joints, and the eight fingers, each comprising five joints, all descending from the hand joint.

[0078] Depending on system and design requirements, different joints can be selected, and it should be understood that the ultimate selection of joints, and the choice of total number of joints, will be for the ordinary artisan as part of the implementation of the invention using the practice of ordinary skill. It should be readily appreciated that, while expressivity may increase with greater numbers of joints, there is no specific minimum number required by the invention.

[0079] Joint positions and rotations can be captured as 3D movement data in various forms. Joint positions are typically represented by a coordinate system in 3D that uses +y for up, +z for forward, and +x for right, but other systems are possible. Rotations in 3D can be represented, for example, by Euler angles (i.e., roll, pitch, yaw), or more preferably by quaternions. In 3D space, any rotation or sequence of rotations of a coordinate system about a fixed point is equivalent to a single rotation by a given angle θ about a fixed axis that runs through the fixed point. Quaternions encode this axis-angle representation in four numbers, and can be used to apply the corresponding rotation to a position vector, representing a point relative to the origin in 3D space. Accordingly, each joint can be represented by a 3D position vector {x, y, z} and its quaternion, at each timepoint or frame of a 3D movement input. For motion captured at 30 fps, one second of motion at each joint would thus be captured as a set of 30 such representations.
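The axis-angle and quaternion relationship described above can be illustrated with a short, self-contained sketch. The function names and tuple layout (w, x, y, z) are illustrative choices, not drawn from any particular implementation; a unit quaternion q rotates a position vector v via the standard identity v' = q v q*.

```python
import math

def quat_from_axis_angle(axis, angle):
    """Build the unit quaternion for a rotation by `angle` about `axis`."""
    ax, ay, az = axis
    n = math.sqrt(ax * ax + ay * ay + az * az)
    s = math.sin(angle / 2.0) / n
    return (math.cos(angle / 2.0), ax * s, ay * s, az * s)

def quat_mul(a, b):
    """Hamilton product of two quaternions (w, x, y, z)."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw * bw - ax * bx - ay * by - az * bz,
            aw * bx + ax * bw + ay * bz - az * by,
            aw * by - ax * bz + ay * bw + az * bx,
            aw * bz + ax * by - ay * bx + az * bw)

def rotate(v, q):
    """Rotate position vector v by unit quaternion q: v' = q v q*."""
    w, x, y, z = q
    conj = (w, -x, -y, -z)
    _, rx, ry, rz = quat_mul(quat_mul(q, (0.0, *v)), conj)
    return (rx, ry, rz)

# A 90-degree rotation about +y carries the +x axis to -z
q = quat_from_axis_angle((0.0, 1.0, 0.0), math.pi / 2.0)
rotated = rotate((1.0, 0.0, 0.0), q)
```

Applying the same per-joint rotation at each captured frame is how a time series of quaternions animates a skeleton model's joint positions.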

[0080] Simultaneous with or subsequent to their capture, the captured 3D movement data can be extracted, combined with other data including metadata, compressed, modified, manipulated, or otherwise processed by processing means 105, to create a 3D movement data package. In embodiments using ARKit, processing means 105 may be a software application programmed to communicate with the ARKit framework so as to obtain captured 3D movement data therefrom. The design of such software applications will be understood to be within the practice of ordinary skill, but as an example, in some preferred embodiments the software application may be built using the Unity game engine developed by Unity Technologies, a cross-platform engine that supports development for numerous platforms across mobile, desktop, consoles, and VR.

[0081] Captured 3D movement data can be processed so that data are only extracted for specific joints, whether selected by a user or by the designer. Captured 3D movement data also can be processed, for example, to reduce the frame rate (e.g., by only selecting half of the frames). And as discussed in greater detail in Example 2, processing can combine captured 3D movement data with information relating to avatars, filters, games, or other sender-selected parameters and data.
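The frame-rate reduction mentioned above (selecting half of the frames) can be sketched in a few lines. The function name and the per-frame data layout are hypothetical; only the selection strategy reflects the text.

```python
def downsample(frames, factor=2):
    """Keep every `factor`-th frame of a captured sequence."""
    return frames[::factor]

# One second captured at 60 fps becomes 30 frames after downsampling
frames_60fps = [{"t": i / 60.0} for i in range(60)]
frames_30fps = downsample(frames_60fps)
```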

[0082] A 3D movement data package, as defined above, represents the 3D movement data of the sender, along with its metadata and any additional data. It moreover will be in a format suitable for sending (i.e., as a 3D movement data message) or storing (i.e., as a 3D movement data file). And as further discussed below, a 3D movement data package is also in a format suitable for ultimately rendering to an output as a viewable 3D movement object, viewable to a recipient, thereby accomplishing a goal of some embodiments of the present invention.

[0083] In some embodiments, the 3D positional vectors and 4D quaternions captured by 3D motion capture means 103 are further processed by processing means 105 to compress them so they take up less memory, transfer faster and use less bandwidth, or otherwise use less computing resources. Various suitable data compression algorithms will be known to one of ordinary skill. In one embodiment, the vectors and quaternions are compressed to three decimal points and concatenated into strings, with each string mapped to a particular body position or rotation. To further elucidate this embodiment, sample strings, representing the position and rotation of the head and hands for two frames (i.e., at two timepoints), are as follows:

"rHandPos": " 1.068: 0.951:-0.683: 1.068: 0.951:-0.683:1.069: 0.955"

"rHandAng": "-0.37 :-0.091: 0.543:-0.748:-0.369:-0.097:0.544:-0.747" "lHandPos ": " 0.232: 0.762:-0.781: 0.232: 0.762:-0.78: 0.233: 0.762"

"lHandAng": "-0.452:-0.473: 0.085:-0.751:-0.453:-0.473:0.083:-0.751" "headPos" : " 0.469: 1.081:-1.035: 0.468: 1.081:-1.035:0.467: 1.081"

"headAng" : "-0.Oil:-0.358: 0.015:-0.934:-0.Oil:-0.359:0.015:-0.933"

[0084] After a 3D movement data package is created, as above, it can be sent as a 3D movement data message and/or stored for later retrieval. For either, the 3D movement data package is first stored to volatile memory by recording means 106. Depending on the embodiment, the processing means 105 and the recording means 106 may comprise the same hardware, software, or combination of hardware and software, or may be separate modules, and processing and recording may take place simultaneously, sequentially (e.g., where processing and recording are of the entirety of a 3D movement data package), or alternatingly (e.g., where processing and recording are of separate frames or portions of a 3D movement data package), and in any order.

[0085] In embodiments where a 3D movement data package is permanently stored, it may be stored on local permanent storage 107 on the sender device 116, on local permanent storage 111 on the recipient device 118, and/or on remote permanent storage 109, such as cloud storage in cloud 117. In some embodiments, for example, sender 101 may choose to store a sent movement data message. “Permanent storage” should be understood to mean any storage device or collection of devices that retains data when unpowered, such as a hard drive or solid-state drive (SSD) (i.e., “persistent” as opposed to “volatile” memory).

[0086] A 3D movement data package may be stored as a 3D movement data file in any suitable format that allows for storage and retrieval of data, including relational databases using tabular relations (e.g., SQL), non-relational databases (e.g., NoSQL), standard motion capture data formats (e.g., Biovision Hierarchy Animation .bvh files), and others. In some embodiments, the permanent storage is a dedicated 3D movement data server, which may additionally store 2D media and other data. It will be understood that stored data may optionally be aggregated, indexed, compressed, or otherwise modified, and may be extractable or retrievable for use in other processes or by other systems, as may be further elucidated by reference to Example 5.

[0087] A 3D movement data package may be sent as a 3D movement data message between sender device 116 and recipient device 118 using any suitable sending means 108 and receiving means 110. Such sending means and receiving means include those means capable of sending and/or receiving over cellular networks (e.g., 3G CDMA/GSM, 4G LTE, 5G NR), over Wi-Fi, over Bluetooth, over AirPlay, by mobile broadband, by wired internet, or by any other communications or file transfer protocols known in the art. In some embodiments, sender device 116 and receiver device 118 may be hard-wired or otherwise directly connected. In other embodiments, it will be understood that a 3D movement data message need not be sent directly from a sending means 108 to a receiving means 110, but may be transferred between any number of intermediary hardware and/or software modules, network devices, or servers, e.g., as may reside on cloud 117.

[0088] It also will be understood that the 3D movement data package may be compressed, encrypted, or otherwise altered, either by sending means 108 before sending, or by an intermediary module, device, or server during transmission. If a 3D movement data message is received by receiving means 110 in a format that is compressed, encrypted, or otherwise altered, it will be within the practice of ordinary skill to decompress, decrypt, or otherwise return to renderable format such 3D movement data message.

[0089] Once received by recipient device 118, a 3D movement data message may be viewed by recipient 115 (or, in some embodiments, received by more than one recipient device 118 and/or viewed by more than one recipient 115). It also may be stored on permanent storage 111. A 3D movement data package may be stored before and/or after it is viewed, and storage may be by default software rule or by user selection. For instance, recipient 115 may not be available or may not wish to view a 3D movement data message immediately, and thus it may be saved by the decision of recipient device 118 or recipient 115 for later viewing. Or, recipient 115 may view it immediately, and then decide to store it permanently for repeat viewing, e.g., in a “saved” folder or a “favorites” folder. One of skill will understand that many design choices involving storage 111 (and 107 and 109) are possible, and within the practice of ordinary skill.

[0090] As shown in FIGS. 6D-6F, a favorites folder or the like is used in a preferred embodiment, so that the recipient may save special 3D movement data files like a child’ s cutest dance, a partner’s hug, or a friend’s secret handshake. More specifically, the recipient may select the “Favorites” heart-shaped button 641 in control menu 642 as shown in screenshots 640 and 650 of FIGS. 6D and 6E, respectively, in order to store the received 3D movement data message.

[0091] In some embodiments, viewing a 3D movement data package is made possible with 3D motion rendering means 112 and output means 114. A suitable 3D motion rendering means 112 is any hardware, software, or hardware/software combination (whether as a single module or combination of modules) that is capable of rendering a 3D movement data package as a 3D movement object, regardless of the specific technical basis on which such rendering is performed (e.g., whether rendering is generated ahead of time (pre-rendered) or in real-time, regardless of choice of specific rendering algorithm, etc.).

[0092] Many rendering algorithms are known to ordinary artisans, and software used for rendering may employ any number of different techniques to obtain a final animation. For instance, in embodiments that capture 3D movement data using a time series of positional vectors and quaternions to represent joints across time t, a suitable 3D motion rendering means 112 will be able to recreate a skeleton model comprising those joints, in like positions. That time series of 3D movement data is used to animate the skeleton model, using mathematical processes known in the art, such as inverse kinematics, combined with suitable computer animation techniques (e.g., skeletal animation or “rigging,” and “skinning”).

[0093] In one preferred embodiment, to create an aesthetically balanced distribution of particles, a custom particle engine is implemented on a graphics processing unit (GPU). In this embodiment, rather than spawning a particle evenly across a polygon mesh (i.e., the collection of vertices, edges, and faces that defines the shape of an object), different distributions are calculated at run time. Each particle saves its barycentric coordinates and references to its nearest vertices. A new “spawn position” is then calculated, by first skinning the surrounding vertices in reference to their bone transform/weights, and then placing the spawn position using its stored barycentric coordinates. However, various other rendering and animation techniques can be utilized without departing from the scope of the invention.
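The barycentric spawn-position calculation above can be illustrated on the CPU with a small sketch: a particle stores barycentric weights relative to its triangle's three (already-skinned) vertices, and its spawn position is the weighted combination of those vertex positions. The function name and data layout are hypothetical; the patent's preferred embodiment performs this on a GPU.

```python
def spawn_position(vertices, bary):
    """Compute a spawn position from three (x, y, z) skinned vertex
    positions and barycentric weights (b0, b1, b2) summing to 1."""
    return tuple(
        sum(b * v[i] for b, v in zip(bary, vertices)) for i in range(3)
    )

# Equal weights place the particle at the triangle's centroid
tri = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
center = spawn_position(tri, (1 / 3, 1 / 3, 1 / 3))
```

Because the weights are stored per particle, re-evaluating this combination after the vertices move with the skeleton keeps each particle "attached" to its spot on the deforming mesh.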

[0094] Where, in certain embodiments, the captured 3D movement data is processed to reduce the frame rate (e.g., from 60 fps to 30 fps), or where a higher rendering frame rate is otherwise desired, 3D motion rendering means 112 may utilize an interpolation algorithm to smooth the data. It will be appreciated that 3D movement data generally can be rendered by a 3D motion rendering means 112 using numerous variations in style and practice, depending on system and design requirements.
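One common interpolation approach, sketched below under the assumption of the position-plus-quaternion representation described earlier, is to linearly interpolate positions and spherically interpolate (slerp) rotations when synthesizing in-between frames, e.g., rendering 30 fps data at 60 fps. The function names are illustrative; the invention does not prescribe a particular interpolation algorithm.

```python
import math

def lerp(p0, p1, t):
    """Linear interpolation between two vectors at parameter t in [0, 1]."""
    return tuple(a + (b - a) * t for a, b in zip(p0, p1))

def slerp(q0, q1, t):
    """Spherical linear interpolation between unit quaternions (w, x, y, z)."""
    dot = sum(a * b for a, b in zip(q0, q1))
    if dot < 0.0:  # flip one quaternion to take the shorter arc
        q1, dot = tuple(-c for c in q1), -dot
    if dot > 0.9995:  # nearly parallel: fall back to normalized lerp
        q = lerp(q0, q1, t)
        n = math.sqrt(sum(c * c for c in q))
        return tuple(c / n for c in q)
    theta = math.acos(dot)
    s0 = math.sin((1.0 - t) * theta) / math.sin(theta)
    s1 = math.sin(t * theta) / math.sin(theta)
    return tuple(s0 * a + s1 * b for a, b in zip(q0, q1))

# Midpoint of a joint position between two captured frames
mid = lerp((0.0, 1.0, 0.0), (1.0, 1.0, 0.0), 0.5)
```

Slerp keeps interpolated rotations on the unit sphere at constant angular velocity, avoiding the speed artifacts that naive component-wise interpolation of quaternions can introduce.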

[0095] A suitable 3D rendering means 112 for purposes of embodiments of the invention need only be minimally capable of outputting a 3D movement object, viewable to the recipient, that is a like representation of the 3D movement data package captured (although, as should be apparent, it also may be modified or altered, according to designer goals or user parameters). It is therefore contemplated that the ordinary artisan may implement the 3D rendering in a variety of ways, utilizing different particle systems, different reflection, scattering, and surface shading techniques, different color palettes and background images, different visual effects, and the like.

[0096] In some embodiments, the 3D motion rendering means 112 will optionally use sender-defined parameters, which may or may not be dynamically updated by a sender, sent as part of the 3D movement data message, to render the ultimate 3D movement object. As noted above, such parameters may include metadata indicating that the 3D movement data should be rendered using a particular avatar, having specific body modifications (e.g., wings, a tail, tentacles), incorporating photographic and video data (e.g., to render a 3D movement object having the sender’s own face and/or body, or another particular face or body), or the like. An ordinary artisan will appreciate that many solutions exist to permit such modifications to be made; for example, the data representing different avatars and other alterations can be locally or remotely stored, received from the sender, or obtained from a third-party server (including, as an example, customized avatars that may be offered for in-app purchase), but such solutions are design choices that can be made with ordinary skill.

[0097] Ultimately, the 3D movement data message is rendered so as to be viewable to recipient 115, using output means 114. Suitable output means will be understood to include the screen of recipient device 118, whether a smartphone, tablet, or other personal device, or a VR headset. In other embodiments, output means 114 may be (or may additionally include) a monitor, a television, a projection system, a holographic display, a stereo display or 3D display, or any other output screen which may be physically separate from but operationally or functionally coupled to recipient device 118.

[0098] Preferably, but optionally, sender 101 may view her own 3D movement input on output means 104. Suitable output means are understood to be those comparable to output means 114 (e.g., the screen of sender device 116, a VR headset, a monitor or TV, a projector or holographic display, a stereo display or 3D display, etc.). When 3D movement is rendered on output means 104, it will be understood that processing means 105 further includes suitable 3D motion rendering means, whether as hardware, software, or hardware/software combinations, comparable to 112. In certain preferred embodiments, the sender avatar is rigged as a mirrored puppet, allowing for real-time feedback of the sender’s own movements, as shown by example of 3D movement object 613 in screenshots 610 and 620 in FIGS. 6A and 6B, respectively.

[0099] In some embodiments, the 3D motion rendering performed by processing means 105 will optionally use sender-defined parameters, so that sender 101 is therefore able to select and try on different avatars, experiment with various filters and feedback, and otherwise set and change parameters and view and interact with her 3D movement input in real time, whether or not it is also being captured. Depending on the embodiment, various parameters can be determined based on the sender’s 3D movement; alternately, they can be determined by other input, or through choices made through an alternate input, using an optional additional input means 102, such as voice, the touchscreen of smartphone or tablet device 116, or controls of VR device 116.

[0100] Additional input means 102 also may include sensing means for responding to (i.e., providing feedback based on) or recording (along with 3D movement data, whether ultimately included in a 3D movement message or not) physiological, physiometric, or biometric data such as that relating to cardiovascular and pulmonary functions (e.g., pulse rate, heart rate variability (HRV), ECG traces, blood oxygenation, respiration rate, temperature or CO2 content of exhaled air, heart sounds, body resonance), brain activity (e.g., encephalography such as electroencephalography (EEG), quantitative EEG (qEEG), magnetoencephalography (MEG), electrocorticography (ECoG), functional magnetic resonance imaging (fMRI), positron emission tomography (PET), nuclear magnetic resonance (NMR), spectroscopy or magnetic resonance spectroscopy (MRS), single-photon emission computed tomography (SPECT), near infrared spectroscopy (NIRS), functional NIRS (fNIRS), or event-related optical signal (EROS)), electrodermal activity (e.g., skin conductance), and other such alternative input types.

[0101] With the description and definitions above now understood, reference is made to FIG. 2 to further understand various exemplary embodiments. Using FIG. 2, it again can be demonstrated how a sender, who wishes to send a 3D movement data message of her blowing a kiss to a recipient, may do so.

[0102] In a first step 201, the sender makes the physical movement of blowing a kiss.

[0103] In a second step 202, that human movement input is captured by sender device 210. As above, the step of capturing 3D movement input 202 may be implemented by a 3D motion capture means 103.

[0104] A 3D movement data package is created in a third step 203, which may be implemented using a processing means 105 and a recording means 106, according to the teachings above.

[0105] In a fourth step 204, that 3D movement data package is sent, which may be implemented using a sending means 108.

[0106] In an optional fifth step, the 3D movement data package may be stored on storage 205. Although styled as a “fifth” step, it will be understood that the 3D movement data package may be stored by sender device 210 before sending, may be stored by receiver device 212 after receiving, and/or may be stored by cloud 211 during transmission, and that storage 205 therefore may be local storage, remote storage, or a combination thereof (as with permanent storage 107, 109, and 111). In these embodiments, storage 205 refers to permanent storage, and it will be understood that even if never stored in such permanent storage, a 3D movement data package may nevertheless reside in volatile memory in one or more copies, at multiple locations, and at any step(s) in the methods here described. Devices 210 and 212, and cloud 211, shall be understood with reference to devices 116 and 118, and cloud 117, above.

[0107] In a sixth step 206, the 3D movement data package is received by recipient device 212, as implemented for example by a receiving means 110.

[0108] In a seventh step 207, the 3D movement data package is rendered, for example by a 3D motion rendering means 112.

[0109] In an eighth step 208, a viewable 3D movement object is output, for example on an output means 114. That viewable 3D movement object, in the example illustrated by FIGS. 6A-6F, is the sender blowing a kiss.

[0110] In a ninth step 209, the recipient views the 3D movement object 613 of the sender, and receives, in this exemplary embodiment, the shared kiss 612 as shown in screenshot 650 of FIG. 6E.
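The steps of FIG. 2 can be condensed into a short, purely illustrative sketch of the capture-to-render pipeline. Every function and field name below is invented; a real implementation would use a motion capture framework for step 202, a network transport for steps 204-206, and a rendering engine for steps 207-208, as described in the embodiments above.

```python
import json

def create_package(frames, metadata):
    """Step 203: combine captured frames and metadata into a package."""
    return {"metadata": metadata, "frames": frames}

def send(package):
    """Step 204: serialize the package for transmission as a message."""
    return json.dumps(package)

def receive(message):
    """Step 206: deserialize the received message back into a package."""
    return json.loads(message)

def render(package):
    """Steps 207-208: stand-in for rendering; returns the frame count."""
    return len(package["frames"])

# One second of hypothetical 30 fps capture flows sender-to-recipient
frames = [{"rHandPos": [1.068, 0.951, -0.683]} for _ in range(30)]
package = create_package(frames, {"senderId": "sender101",
                                  "filter": "peaceful"})
shown = render(receive(send(package)))
```

Serializing to a text format like JSON is only one of many design choices; as noted above, the message may equally be compressed, encrypted, or stored as a 3D movement data file along the way.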

EXAMPLE 2: Communication of Synesthetic Movement Data from Sender Perspective

[0111] Having disclosed various embodiments viewed in light of the overall process from sender to recipient, in this Example embodiments will be described in further detail from the perspective of a sender, by reference to the flow chart of FIG. 3.

[0112] In this Example, it will continue to be assumed that a sender wishes to send a recipient a 3D movement data message of her blowing a kiss. With that wish in mind, sender 301 opens the application on her sender device 116 to initiate the method. In this exemplary implementation, the application is understood to be stored and run on device 116, and to be operationally and functionally coupled with the hardware and software so comprising, and therefore together they comprise 3D motion capture means 103, processing means 105, recording means 106, permanent storage 107, and sending means 108, along with additional input means 102 and output means 104, all operating as above described.

[0113] Upon opening the application, sender 301 is first asked whether she has an existing account 302. Depending on her choice, she is able to create a new account 303 or log in using her existing account credentials 304. While logging in may be used as one means to identify and authenticate a sender, in other embodiments the sender may have the option to bypass log in (or, e.g., to log in as a “guest” or as “anonymous”), or a sender may be automatically logged in based on user authentication managed by another application (e.g., managed by Google or Facebook) or through the operating system (e.g., iOS), or via device authentication.

[0114] After optionally logging in (or otherwise being authenticated, if authentication is required), sender 301 may select an avatar 305, the choice of which may affect the 3D movement that is captured or provide additional feedback 308. Sender 301 is then presented with the choice of whether to record a 3D movement 306. Although by selecting “no,” the exemplary flow of FIG. 3 is shown to terminate, it is understood that sender 301 may nonetheless continue to interact with the application as discussed above. In some embodiments, sender may interact with the application for as long as she desires, and experiment with various movements and filters before sending a message.

[0115] More specifically, as illustrated in screenshots 640 and 650 of FIGS. 6D and 6E, respectively, when the user wishes to create a message, she can use the “Create Meu” button 643 in control menu 642. In the exemplary implementation herein, a “Meu” will be understood to be a “3D human movement data message” or a “3D movement data message.” By selecting “Create Meu” button 643 illustrated in screenshots 640, 650, and 660 of FIGS. 6D, 6E, and 6F, respectively, the user can then perform a 3D movement 307 to be captured. Sensory feedback 309 may be provided depending on sender movement data, in light of sender-defined parameters. Such feedback may be auditory, visual, haptic, or multimodal.

[0116] For instance, motion detection algorithms may be used to provide feedback by analyzing higher-level features of the 3D movement data in real time. Depending in part on choice of filters, feedback may be provided based on one or more specific higher-level features such as smoothness of motion, range of motion, reaction time to a cue, gait size and speed, limb flexibility, and closeness of match to a predefined 3D movement (using a suitable function to determine closeness of match or goodness of fit, or any one or more of such other higher-level features, as would be understood by one of ordinary skill in the art).

[0117] Non-limiting examples of how to calculate such higher-level features are as follows. Smoothness of motion may be determined based on the amount of trajectory or velocity adjustments during a specific movement, reflecting movement intermittency and movement coordination. Smoothness may also be calculated using mathematical analysis, wherein the smoothness of a function is a property measured by the number of continuous derivatives it has over some domain. Range of motion may be determined using the measurement of the amount of movement around a specific joint or body part (e.g., the extent of movement of a joint, measured in degrees of a circle). Reaction time to a cue may be determined as the time between a stimulus (the cue) and a response. Gait size may be determined based on the distance between successive points of initial contact of the same foot (i.e., stride length) or the distance between the point of initial contact of one foot and the point of initial contact of the opposite foot (step length). Gait speed may be determined based on the time one takes to walk a specified distance on a surface, or based on the rate in steps per minute (cadence). Limb flexibility may be determined based on the anatomical range of movement in a joint or series of joints (as compared to, e.g., an average or defined reference).
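A few of these feature calculations can be sketched as follows (a minimal illustration in Python; the simple finite-difference smoothness proxy and threshold-based reaction-time detection are assumptions for the sketch, not the only methods contemplated):

```python
def smoothness(positions, dt=1.0):
    """Proxy for smoothness of motion: mean absolute change in velocity
    along a 1D trajectory, reflecting the trajectory/velocity adjustments
    described above.  Lower values indicate smoother movement."""
    velocities = [(b - a) / dt for a, b in zip(positions, positions[1:])]
    accels = [abs(b - a) / dt for a, b in zip(velocities, velocities[1:])]
    return sum(accels) / len(accels)

def range_of_motion(angles_deg):
    """Extent of movement around a joint, in degrees of a circle."""
    return max(angles_deg) - min(angles_deg)

def reaction_time(cue_t, samples, threshold):
    """Time between a stimulus (the cue, at cue_t) and a response: the
    first (time, value) sample after the cue whose value exceeds the
    response threshold.  Returns None if no response is detected."""
    for t, value in samples:
        if t >= cue_t and value > threshold:
            return t - cue_t
    return None
```

A perfectly constant-velocity trajectory yields a smoothness score of zero, and larger scores as movement becomes more intermittent.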

[0118] All such higher-level features may be calculated based on a single determination or the mean of multiple such determinations, and may be averaged across multiple features (e.g., the mean of multiple reaction times, mean limb flexibility at a single limb or averaged across multiple limbs, range of motion at a single joint or averaged across arm joints, leg joints, all joints, and the like, as will be readily appreciated).

[0119] Closeness of match and goodness of fit functions include any one or more of, as well as such others as will be known to those in the art: Bayesian information criterion; Kolmogorov-Smirnov test; Cramér-von Mises criterion; Anderson-Darling test; Shapiro-Wilk test; Chi-square test; Akaike information criterion; Hosmer-Lemeshow test; Kuiper’s test; Kernelized Stein discrepancy; Zhang’s ZK, ZC and ZA tests; Moran test; Pearson’s chi-square test; and G-tests. Also contemplated are such regression analyses as coefficient of determination (the R-squared measure of goodness of fit), lack-of-fit sum of squares, reduced chi-square, regression validation, and Mallows’s Cp criterion.
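As a simpler geometric stand-in for the statistical tests listed above, closeness of match between a recorded movement and a predefined reference can be measured as a root-mean-square deviation over corresponding joint positions (a non-limiting sketch; equal-length, time-aligned trajectories are assumed):

```python
import math

def closeness_of_match(recorded, reference):
    """Root-mean-square deviation between a recorded trajectory and a
    predefined reference movement, both given as equal-length lists of
    (x, y, z) joint positions.  Lower values indicate a closer match."""
    assert len(recorded) == len(reference), "trajectories must be time-aligned"
    squared = [
        sum((r - s) ** 2 for r, s in zip(p, q))
        for p, q in zip(recorded, reference)
    ]
    return math.sqrt(sum(squared) / len(squared))

identical = [(0.0, 0.0, 0.0), (1.0, 1.0, 1.0)]
```

A match score of zero indicates an exact reproduction of the predefined 3D movement; a filter or game could compare the score against a tolerance to decide whether a gesture was performed.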

[0120] In some embodiments the movement data, or various higher-level features extracted from such movement data, may drive both the auditory and visual experience, connecting movement data to color, speed, fade, elasticity, and noise functions of the particles and avatar, as well as an interactive music system that changes based on the sender’s movements.

[0121] In one such embodiment, adaptive music is created in Fmod using the Fmod Unity plugin, which allows the movement data to change the music track parameters in real time. In preferred embodiments, music is composed specifically to support different filters, and further consists of loops and layers that fade in and out depending on the sender’s movements. For example, one filter (“peaceful”) uses the position of the hands to control cello and flute loops in the music, while a pose detection algorithm connects an open body posture to a musical “swell” overlay. Another filter (“explosive”) uses velocity measurements to control bass and drums, and an average velocity over longer periods to control other portions of the track.
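The movement-to-music mapping can be sketched as a pure function from movement features to track-parameter values (a non-limiting illustration; the parameter names and 0-to-1 ranges are assumptions for the sketch, not the actual Fmod parameter set):

```python
def music_parameters(hand_height, avg_velocity):
    """Map movement features to adaptive-music track parameters, in the
    spirit of the 'peaceful' (hands drive cello/flute loops) and
    'explosive' (velocity drives bass) filters described above.
    All parameter names and value ranges are illustrative."""
    def clamp(v):
        return max(0.0, min(1.0, v))

    return {
        "cello_volume": clamp(hand_height),         # hands raised -> cello swells
        "flute_volume": clamp(1.0 - hand_height),   # hands lowered -> flute leads
        "bass_volume": clamp(avg_velocity / 2.0),   # faster movement -> heavier bass
    }
```

In an engine-integrated implementation, these values would be pushed to the audio middleware each frame so that loops and layers fade in and out with the sender's movements.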

[0122] Haptic feedback is also provided in some embodiments, for instance vibrations may be activated when touching an avatar, communicating the sensation of physical presence. Such physical presence enhances the ability to play touch-based mirroring games asynchronously. [0123] In embodiments where the methods and systems of the present invention are used as part of psychedelic-assisted therapy, haptic feedback also permits a therapist and a patient to interact by touch.

[0124] A motion detection algorithm also can be utilized to detect closeness of match to a predefined 3D movement and thus recognize specific 3D movements representing different gestures or body language, and to create 3D visualizations that enhance them. Reference is made to FIGS. 6A and 6B, demonstrating sender 611 (or, equivalently, sender 301) blowing a kiss 612 to a recipient (i.e., recipient 313). More specifically, a motion detection algorithm is used to recognize when the sender’s hand extends out and up as shown in screenshot 620 of FIG. 6B from its previous position (i.e., at the mouth as shown in screenshot 610 of FIG. 6A), and to visually output heart(s) 612 which mimic and continue the hand’s movement from the mouth of the sender upward and outward (e.g., as in the prosocial game “hearts,” as above). Thus, when the sender 301 blows a kiss to recipient 313, not only will the recipient see the 3D movement itself, but the recipient will also see a 3D visualization that includes, for example, hearts coming out of the sender’s mouth, timed with her movements (and can further interact with that visualization, as below). Accordingly, it will be understood that any visualization or feedback may be output not only to the sender, but also may be saved as part of the 3D movement data package, and where such feedback is saved as part of the 3D movement data package, it also may be rendered or otherwise played for the recipient. Visualizations and feedback may also be output to a recipient by analyzing features of the 3D movement data of the sender during rendering on a recipient device.
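The blown-kiss recognition described above can be sketched as a simple state check over the hand's trajectory (a non-limiting illustration; the distance thresholds and the use of the y-axis as "up" are assumptions for the sketch):

```python
def detect_blown_kiss(hand_positions, mouth_position, touch_radius=0.1):
    """Recognize the gesture of FIGS. 6A-6B: the hand starts at the
    mouth, then extends up and away from it.  Positions are (x, y, z)
    tuples with y as the vertical axis.  Returns the frame index where
    the gesture completes, or None if no kiss was blown."""
    def dist(a, b):
        return sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5

    started = None
    for i, pos in enumerate(hand_positions):
        if dist(pos, mouth_position) < touch_radius:
            started = i                                   # hand at the mouth
        elif started is not None:
            moved_up = pos[1] > hand_positions[started][1]
            moved_out = dist(pos, mouth_position) > 3 * touch_radius
            if moved_up and moved_out:
                return i                                  # hand extended up and out
    return None
```

When the gesture is detected, the rendering layer can spawn heart particles at the detected frame, timed with the sender's movement.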

[0125] With reference again to FIG. 3, now that sender 301 has performed the desired 3D movement, she will be asked (according to this exemplary implementation) whether she wishes to include an additional message, such as a text message 310, when her 3D movement data message is transmitted. Should she wish to create such a message 311, it will be included with the 3D movement data message as described in detail above. Reference is made to FIG. 6C, illustrating screenshot 630, in which the user may use input box 631 and keyboard 632 to send an additional message (e.g., “Hi” 633). In other embodiments, additional messages could include virtual gifts (e.g., virtual flowers or puppies), which in some embodiments can be monetized as in-app purchases.

[0126] Sender 301 is next asked to which recipient addresses she wishes to send her 3D movement data message 312. It will be readily appreciated that the implementation of an address feature can utilize various identifiers including display names, actual names, email addresses, phone numbers, assigned IDs, or any other identifying information, and recipients can be individuals, or groups, including sender-defined groups (e.g., friends, family, team members, coworkers, etc.) or application-defined groups (e.g., current players of a particular game). For example, in the implementation demonstrated in FIG. 6C, sender 301 may choose to send a 3D movement data message to a sender-defined group (by selecting slider 634 under “Send to all friends”), to an application-defined group (by selecting slider 635 under “Send to Meu team”), or to a specific individual recipient (by entering it in box 636 displaying the grayed out text “Type here and click on friend” under the text “Type Friend Display Name or Email”). Finally, the 3D movement data message is then transmitted to the chosen recipient(s) 313.

[0127] In some embodiments, a gif creation tool also may be used to turn the 3D movement data message into a 2D gif, which may further include sender-defined filters and text. Such 2D gifs may be sent in addition to a 3D movement data message, or may be sent in the alternative, such as to recipients who do not have a suitable application yet installed to render the 3D message.

[0128] In yet other embodiments, rather than be transmitted to a device to be output to a screen, and viewable to a recipient, a 3D movement message will be transmitted to a device to be output so as to control a puppet, toy, robot, or similar physical device. In these embodiments, rather than be graphically rendered as an animation, the 3D movement data will be converted to control signals to operate a mechanical apparatus, using methods known to those of ordinary skill (e.g., mapping the captured motion of human joints to like joints of the mechanical apparatus, mapping other captured human movement features to the movement of the mechanical apparatus, and the like).
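The conversion of 3D movement data to control signals for a mechanical apparatus can be sketched as a joint-to-servo mapping with range clamping (a non-limiting illustration; the joint names, servo identifiers, and 0-180 degree servo range are assumptions for the sketch):

```python
def to_servo_commands(joint_rotations_deg, joint_map, servo_range=(0, 180)):
    """Map captured human joint rotations to commands for like joints of
    a mechanical apparatus (puppet, toy, robot), clamping each angle to
    the servo's mechanical range.  Unmapped joints are ignored."""
    lo, hi = servo_range
    commands = {}
    for joint, angle in joint_rotations_deg.items():
        if joint in joint_map:
            commands[joint_map[joint]] = max(lo, min(hi, angle))
    return commands

# Illustrative mapping from human joints to apparatus servos.
joint_map = {"right_elbow": "servo_3", "left_elbow": "servo_4"}
commands = to_servo_commands(
    {"right_elbow": 200.0, "left_elbow": 45.0, "neck": 10.0}, joint_map
)
```

Here the right elbow's 200-degree rotation is clamped to the apparatus's 180-degree limit, and the unmapped neck joint is dropped rather than sent to hardware.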

EXAMPLE 3: Communication of Synesthetic Movement Data from Recipient Perspective

[0129] Having disclosed various embodiments viewed in light of the overall process from sender to recipient, and in Example 2 from the perspective of a sender, in this Example embodiments will be described in further detail from the perspective of a recipient, by reference to the flow chart of FIG. 4. In the implementations taken as exemplary for purposes herein, the flow is now presumed to begin where Example 2 concluded.

[0130] Taking up where Example 2 left off, it is therefore understood that sender 401 has created a 3D movement message comprising, in the example illustrated in FIGS. 6A-6C, the sender blowing a kiss, and transmitted it to recipient 404. In the exemplary implementation of this Example, recipient 404 first receives a notification (e.g., a sound, vibration, badge, banner, etc.) on a receiver device, to alert recipient to new message 403. In some embodiments, recipient may immediately respond to the alert, and the new message may thus be transmitted from sender without being first stored in permanent storage 402. In other embodiments, recipient may not be aware of the alert, or may ignore the alert, and new message may be saved to storage 402 for later retrieval. In these embodiments, it is immaterial if storage 402 is on sender device, recipient device, another device, and/or the cloud.

[0131] To view new message 403, recipient 404 may first log in 405 to the appropriate application software installed (or be otherwise authenticated, if required). Although omitted from FIG. 4, it will be understood that without an existing account, recipient may have the option of first creating one before logging in, as in FIG. 3 (see 302, 303, 304). While logging in may be used as one means to identify and authenticate the proper recipient 404, in other embodiments a recipient may have the option to bypass log in (or, e.g., to log in as a “guest” or as “anonymous”), or a recipient may be automatically logged in based on user authentication managed by another application (e.g., managed by Google or Facebook) or through the operating system (e.g., iOS), or via device authentication.

[0132] Once optionally logged in (or otherwise authenticated, if authentication is required), recipient 404 may select a 3D movement data message for viewing 406. It is assumed that recipient will be able to select new message 403 for viewing, but depending on how many other new messages are ready for viewing, and depending on how many saved messages are available for reviewing, recipient may have a number of different messages that could be played, including a stored message 411 from storage 410. In the example illustrated by screenshot 640 in FIG. 6D, the recipient would select the “Get Meu From” button 644 in control menu 642 to select a message for viewing.

[0133] Having selected new message 403 for viewing 406, recipient 404 thus plays it as a 3D movement message 407. The 3D movement output that is viewed (i.e., the 3D movement object) corresponds to the 3D movement data package sent and recreates the 3D human movement input that was captured, therefore allowing a 3D human movement to be communicated (see, e.g., FIGS. 6D-6F). If a text message was included (310, 311), the recipient will be able to view the text message at one or more points during playback of the 3D movement message 407. In the example illustrated by FIGS. 6A-6F, the accompanying text message “Hi” entered as 633 by the sender as shown in screenshot 630 of FIG. 6C is displayed to the recipient as text “Hi” 645 in screenshot 640 of FIG. 6D.

[0134] In some preferred embodiments, recipient will be able to interact with the played message 408. For example, as new message 403 is of the sender blowing a kiss, in some described embodiments a parameter is set as part of the 3D movement data message, indicating that the prosocial game “hearts” is selected. In alternate embodiments, “hearts” could be offered as a selection to recipient 404 based on motion detection software running on the recipient device. In some embodiments, when the game “hearts” is played, recipient can utilize recipient’s own movement data, during viewing, to “catch” the blown kisses, represented by animated hearts, as they come toward recipient. In some embodiments, further feedback can be provided (e.g., haptic feedback when a kiss is caught), and a score can be displayed. In the example illustrated by FIGS. 6A-6F, the recipient’s number/score of received kisses is displayed as “Kisses Caught: 1” 661 in screenshot 660 of FIG. 6F. Recipient 404, in other embodiments, can activate other forces that interact with the avatar’s particle system, to create any number of novel types of combined action between sender and recipient.

[0135] In some embodiments, novel types of interactions that are impossible in the real world are possible. For example, by playing with scale as one of an avatar’s parameters, an extra layer of interpersonal communication is created, and body size becomes an expressive component of communication regardless of one’s own actual size. In some such embodiments, the scale of the 3D movement data will be manipulated, while keeping the proportions between the body parts of an avatar equal. In one such implementation, recipient 404 could therefore “miniaturize” the 3D movement object representation of sender 101 and thus “shrink” sender down, so that sender 101 could, e.g., dance on top of the palm of recipient 404. In other implementations, the proportions between the body parts of an avatar could be manipulated. In yet other implementations, scale and/or proportions could be manipulated, and when such techniques are used in combination with different avatars, and with different other techniques of the present disclosure, a variety of novel uses and applications will be readily envisioned.
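The scale manipulation described above, which shrinks or grows an avatar while keeping the proportions between body parts equal, amounts to a uniform scaling of joint positions about a root joint (a non-limiting sketch; the joint names and the "hips" root are assumptions for the sketch):

```python
def scale_avatar(joints, factor, root="hips"):
    """Uniformly scale all joint positions about the root joint, so the
    avatar is miniaturized (factor < 1) or enlarged (factor > 1) while
    every body-part proportion is preserved."""
    rx, ry, rz = joints[root]
    return {
        name: (rx + (x - rx) * factor,
               ry + (y - ry) * factor,
               rz + (z - rz) * factor)
        for name, (x, y, z) in joints.items()
    }

joints = {"hips": (0.0, 1.0, 0.0), "head": (0.0, 1.8, 0.0), "hand": (0.4, 1.4, 0.0)}
mini = scale_avatar(joints, 0.1)   # "shrink" the sender to a tenth of her size
```

Applying a non-uniform factor per joint instead would manipulate the proportions between body parts, as the alternative implementations contemplate.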

[0136] In some embodiments, size will be manipulated toward therapeutic ends. For example, a patient undergoing psychedelic-assisted therapy such as described below may be given an avatar with, e.g., different body parts or proportions, as part of a therapeutic protocol to manage the distress or symptoms of one or more body dysmorphic disorders.

[0137] After new message 403 is played 407, recipient 404 has the option to save the message 409 to storage 410 and/or send a reply message 412. If recipient 404 chooses to send a reply message 412, it will be understood that the recipient now becomes a sender 301, and the exemplary process of this implementation repeats (see FIG. 3). In the example illustrated by the screenshots of FIGS. 6A-6F, the recipient’s options in control menu 642 as shown in screenshot 640 of FIG. 6D include saving the received message via the heart-shaped “Favorites” button 641 or sending a 3D movement data message as a reply via “Create Meu” button 643.

EXAMPLE 4: 3D Movement Data as Part of a Novel Social Communication Platform

[0138] Besides embodiments where a 3D movement data message is transmitted from one sender to one recipient, and besides those additional embodiments where a 3D movement data message is transmitted from one sender to multiple recipients (including a defined group or class of recipients), yet further embodiments exist where 3D movement data messages are transmitted between multiple senders and multiple recipients (including defined groups or classes thereof).

[0139] In these further embodiments, it will be readily appreciated how 3D movement data does not only form a novel medium of communication, but also forms the basis for a novel social communication platform. For example, transmission of 3D movement data messages between groups of senders and recipients permits the creation of novel 3D-enhanced social interactions, such as interactive and/or asynchronous events, games, contests, dance-offs, parties, and the like.

[0140] Such 3D-enhanced social interactions also will include group classes for yoga, movement, dance, boxing, martial arts, or other exercise, for instance where the teacher and students can share and interact with each other’s physical movements and utilize novel forms of feedback, facilitating skill acquisition and training.

[0141] For example, in some embodiments, students are able to embody a dance teacher’s virtual avatar and learn to dance “inside of them” to acquire their moves and techniques. Additional 3D-enhanced social interactions will include using musical instruments as part of musical instruction, facilitating learning, especially with instruments demanding a high degree of motor control, such as the drums. For instance, a student can embody a drum teacher’s arms, hands, legs, and feet, and receive haptic feedback to help guide the student’s movements.

[0142] Training of other skills involving difficult motor control also can be facilitated (e.g., sign language, juggling), and it will be readily appreciated how benefits directly flow from the ability to embody the 3D movements of others, and allow others to embody one’s own 3D movements, especially with additional audio, visual, and haptic feedback, and more especially with additional synesthetic capabilities unique to this new form of interaction. Moreover, interaction with captured 3D movement data can involve different visualization methods, feedback types, and playback speeds, and 3D models can also be frozen in space with no movement to permit deep study and show negative space. In these and such other exemplary implementations, it will also be appreciated that such embodiments also will allow for improved learning and training when done asynchronously. Other possibilities for fitness and education, which should now be within the contemplation of an ordinary artisan, are legion.

[0143] Other suggestive examples, in the field of entertainment, include embodiment as entertainers, dancers, musicians, actors, extreme sports figures, athletes, or as novel avatars having unique affordances in an immersive social play environment such as a scavenger hunt.

[0144] Additional suggestive examples, in the field of mental health and emotional well being, include embodiment practices that provide feedback about one’s body to make oneself feel safer therein (reducing symptoms of depression, anxiety, or post-traumatic stress disorder). In some such examples, the methods and systems of the invention are advantageously used as part of psychedelic-assisted therapy, to enhance and accelerate the treatment process.

[0145] Additional embodiment practices will be used to break down implicit biases or reduce discrimination (e.g., by embodying different people having different characteristics). Yet further embodiment practices will be useful for academic research, e.g., through experiments designed to measure human movement data in response to specific triggers or cues, or to study the effects of embodiment on any of the above classes of activities. For instance, research can be done to compare which visualization methods, feedback types, playback speeds, and the like have the best outcomes and lead to the fastest acquisition or greatest retention of skills (and further, such research may even be done with large sets of such data, as in Example 5 below.)

[0146] As the above exemplary implementations demonstrate, 3D-enhanced interpersonal and social interactions will be curated or designed in any number of novel ways, for any number of never-before-seen applications, and the limit resides only in the imagination of an ordinary artisan armed with knowledge of this disclosure.

[0147] While some implementations will have specific purposes or goals in mind, other implementations will be purely for entertainment, exploration, and play. For instance, in some examples, a single avatar may have different body parts (joints, limbs, etc.) that are combined and mapped to different users (e.g., one user operates the right leg, another the left leg, another the right arm, another the left arm, etc.). In such embodiments, for example, a method may comprise the steps of: capturing 3D human movement input from at least one sender; creating a (combined) 3D movement data package from the (aggregate) 3D human movement input of the at least one sender (by use of any of numerous means of combining, amalgamating, aggregating, and/or averaging such input as will be known to those in the art); sending the (combined) 3D movement data package to a recipient device; and rendering a (combined) 3D movement object on the recipient device, from the (combined) 3D movement data package.

[0148] In some embodiments, 3D movement data will be combined from any number of multiple users, to create an amalgam of a shared movement. In such embodiments, for example, a method may comprise the steps of: capturing 3D human movement input from at least one sender; creating an (amalgamated) 3D movement data package from the (aggregate) 3D human movement input of the at least one sender (by use of any of numerous means of combining, amalgamating, aggregating, and/or averaging such input as will be known to those in the art); sending the (amalgamated) 3D movement data package to a recipient device; and rendering an (amalgamated) 3D movement object on the recipient device, from the (amalgamated) 3D movement data package.

[0149] In other examples, a single avatar may have the movement of each of its joints be rendered by taking the mathematical average of a set of users’ joints (e.g., a group of friends waves or dances, and the movement is the average of all of their movements). In such embodiments, for example, a method may comprise the steps of: capturing 3D human movement input from at least one sender; creating an (average) 3D movement data package from the (aggregate) 3D human movement input of the at least one sender (by use of any of numerous means of combining, amalgamating, aggregating, and/or averaging such input as will be known to those in the art); sending the (average) 3D movement data package to a recipient device; and rendering an (average) 3D movement object on the recipient device, from the (average) 3D movement data package.
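The per-joint averaging described in this example can be sketched directly (a non-limiting illustration; it is assumed for the sketch that all users supply the same joints at the same frame times):

```python
def average_movement(user_frames):
    """Render one avatar whose joints are the mathematical average of a
    set of users' joints, per frame (e.g., a group of friends waves, and
    the rendered movement is the average of all of their movements)."""
    averaged = []
    for frames in zip(*user_frames):          # one frame per user, in step
        joints = frames[0].keys()
        averaged.append({
            j: tuple(
                sum(f[j][axis] for f in frames) / len(frames)
                for axis in range(3)
            )
            for j in joints
        })
    return averaged

user_a = [{"hand": (0.0, 1.0, 0.0)}]
user_b = [{"hand": (1.0, 2.0, 0.0)}]
group_wave = average_movement([user_a, user_b])
```

Weighted sums or other aggregation functions could be substituted for the plain mean to realize the "combined" and "amalgamated" variants of the preceding paragraphs.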

[0150] Accordingly, in these exemplary implementations, one or more 3D objects rendered on the recipient device is a combined 3D movement object, an amalgamated 3D movement object, or an average 3D movement object, said 3D movement object based on the captured 3D human movement input from the sender and the at least one additional sender.

[0151] Moreover, even apart from the benefits of such novel forms of interaction to learning and mental well-being, simply increasing daily activity and caloric output by sending 3D movement data messages that utilize the entire body will have significant benefits on human health (indeed, it has been estimated that sending or receiving ten 3D movement data messages burns 50 calories more than the same number of regular social network messages).

EXAMPLE 5: Use of 3D Movement Data by Other Systems and Processes

[0152] Besides using 3D movement data as a novel means of communication between individuals and groups, and as a novel social communication platform, as described above, 3D movement data also can be aggregated, indexed, compressed, and stored, and made extractable and retrievable for use in other systems and processes. No other system known to applicant provides cloud storage for indexing and querying 3D human movement data captured from consumer VR motion capture devices or mobile phone cameras.

[0153] In some embodiments, stored 3D movement data will be used to train machine learning models (in all preferred embodiments, only with explicit user consent). Machine learning is an application of AI that provides systems the ability to automatically learn and improve from experience without being explicitly programmed, for instance in applications where it is difficult or infeasible to develop conventional algorithms to perform needed tasks. Machine learning algorithms build a mathematical model based on sample data, known as “training data.” Stored 3D movement data therefore will provide novel and valuable training data for machine learning applications.

[0154] Such data, for example, will be used to train AI to understand human body language, so that computers can better understand and respond to human emotion and intention. For instance, correlations between 3D movement data and user choice of emotional avatars, filters, and other parameters will be utilized to train machine learning models to classify human emotions (i.e., sentiment analysis). Such data will also be used to develop novel models to improve health tracking, early disease detection, and other medical uses, and to improve computer vision.
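One minimal form such a trained model could take is a nearest-centroid classifier over movement features (a non-limiting sketch; the emotion labels and the two features, average velocity and range of motion, are assumptions for the sketch, and any training would use only data gathered with explicit user consent as noted above):

```python
def train_centroids(samples):
    """Compute one centroid per emotion label from labeled feature
    vectors (e.g., (average velocity, range of motion)) extracted from
    stored 3D movement data."""
    centroids = {}
    for label, vectors in samples.items():
        n = len(vectors)
        centroids[label] = tuple(
            sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))
        )
    return centroids

def classify(centroids, vector):
    """Assign the label whose centroid is nearest to the feature vector."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist(centroids[label], vector))

centroids = train_centroids({
    "joyful": [(2.0, 90.0), (2.2, 110.0)],   # fast, expansive movement
    "calm":   [(0.3, 20.0), (0.5, 30.0)],    # slow, contained movement
})
```

A production system would instead train a richer model on large labeled datasets, but the pipeline shape, features extracted from 3D movement data mapped to emotion labels, is the same.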

[0155] For example, in some embodiments, 3D movement data is used to enhance and accelerate the treatment process in psychedelic-assisted therapy, and/or to enable such psychedelic-assisted therapy to be scaled up and brought to larger groups of patients with fewer therapists, reduced demands on therapist time, and/or other efficiencies as will be appreciated.

[0156] Other contemplated uses for which large sets of 3D human movement data will have novel and significant applications include computational statistics, data mining and “knowledge discovery in databases” (KDD), predictive analytics, user behavior analytics, and generally such applications within computer science, statistics, and data analytics that have the overall goal of using large and complex data sets and intelligent methods to extract information.

[0157] It will be readily understood and appreciated that the concepts, methods, and systems of the examples and embodiments herein may be implemented in numerous ways, with reference only to the teachings of the present disclosure and the general knowledge of the art. In an exemplary implementation, specific movement data channels can be built into an API to allow easy use and widespread adoption. In such an implementation, the API provides and defines a set of functions and procedures (e.g., defines the kinds of calls or requests that can be made, how to make them, the data formats that should be used, the conventions to follow, etc.) to allow the creation of other applications that access the features and data described in this disclosure.
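The store-and-query portion of such an API can be sketched with an in-memory stand-in (a non-limiting illustration; the method names echo the Store/Query blocks of the architecture but are assumptions for the sketch, not a published interface):

```python
class MovementAPI:
    """In-memory stand-in for a backend movement-data API: store 3D
    movement data packages with user-defined tags, then query record
    IDs by tag."""
    def __init__(self):
        self._store = []

    def store_movement_data(self, package, tags):
        """Persist a movement data package with its tags; return its ID."""
        record_id = len(self._store)
        self._store.append({"id": record_id, "package": package,
                            "tags": set(tags)})
        return record_id

    def query_movement_data(self, tag):
        """Return the IDs of all stored packages carrying the given tag."""
        return [r["id"] for r in self._store if tag in r["tags"]]

api = MovementAPI()
kiss_id = api.store_movement_data({"frames": []}, ["kiss", "greeting"])
wave_id = api.store_movement_data({"frames": []}, ["wave", "greeting"])
```

A real backend would add authentication, persistence, and the movement-analysis indexing described below, but client code would interact with it through calls of this shape.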

[0158] One such exemplary computing architecture (comprising a backend API and a client SDK), and the interactions between the multiple software intermediaries therein, is shown by the block diagram in FIG. 5. Specifically, Client SDK 510 comprises a simplified movement data recording and processing SDK Record Movement Data 511, Store Movement API Integration 512 which allows movement data to be sent and to be stored on the server with user-defined tags, Query Movement Data API Integration 513 which allows movement data to be queried based on user-defined tags, and Rendering Engine 514 which renders the movement data using a custom avatar engine. Backend API 520 comprises Data Analyzing and Indexing 521 which indexes a database based on movement data analysis and user input; Store Movement Data 522 which stores user-created movement data captured using Client SDK 510, and associates it with relevant metadata provided by the user so the data is searchable; and Query Movement Data 523 which provides Client SDK 510 with movement data whereby relevant data can be queried based on user-defined tags and through data analysis.

EXAMPLE 6: Communication of 3D Movement Data with 3D Interactive Effects & Games

[0159] FIGS. 7A-7D are diagrams illustrating an exemplary implementation of the invention, in which 3D human movement data is captured from video of a woman leaping for joy: the human shape is separated from the ambient background; the joints and skeletal frame of the human form are identified and stored as part of the 3D human movement data; and the 3D human movement data is combined with the segmented human form to be played back in augmented reality (AR) space at the receiving end, along with 3D interactive effects, in accordance with an embodiment of the present invention. In some such embodiments, a “hologram” effect may be created.

[0160] For example, and as demonstrated in FIGS. 11A-11C, the recorded human form can be played back in a “mixed reality” environment where the recording is, e.g., rendered over a live capture of the real environment (for instance, using the back camera of a smartphone). Using such means, a recorded human form can appear to be present (e.g., as a hologram) in a real environment. In the example of FIGS. 11A-11C, demonstrated using screenshot representations 1101-1103, the sender blows kisses to the recipient, and the recipient may catch those kisses (see 1103 “Kisses Caught: 1”), as similarly described above in Example 3.

[0161] In FIG. 7A, there is video input of a woman 700 jumping for joy in a room with furniture, including a bed, a couch, and a chair as examples. FIGS. 7B and 7C illustrate the capture of 3D human movement data. Specifically, in FIG. 7B, the human form/shape 725 is segmented/isolated and separated from background 721, while in FIG. 7C, the joint positions, as indicated by exemplary joint 731, and the rotations thereof, are captured over time from the video of woman 700 jumping for joy, thereby providing movement data in regard to skeleton model/frame 734.
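The capture of FIGS. 7B and 7C, in which joint positions and rotations are sampled over time together with the segmented human form, suggests a data layout along the following lines. This schema is a hypothetical sketch only (field names, units, and the quaternion representation are assumptions, not taken from the disclosure):

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class JointSample:
    position: Tuple[float, float, float]          # assumed: metres, camera space
    rotation: Tuple[float, float, float, float]   # assumed: unit quaternion (w, x, y, z)

@dataclass
class MovementFrame:
    timestamp: float                              # seconds since capture start
    joints: Dict[str, JointSample]                # e.g., joint 731 by name

@dataclass
class MovementCapture:
    skeleton: List[str]                           # joint names of skeleton model/frame 734
    frames: List[MovementFrame] = field(default_factory=list)
    silhouette_ref: str = ""                      # segmented human form 725 (e.g., file path)

# Two frames of a jump: the wrist joint rises between samples.
cap = MovementCapture(skeleton=["wrist"])
cap.frames.append(MovementFrame(0.00, {"wrist": JointSample((0.1, 1.0, 0.3), (1, 0, 0, 0))}))
cap.frames.append(MovementFrame(0.03, {"wrist": JointSample((0.1, 1.4, 0.3), (1, 0, 0, 0))}))
rise = cap.frames[1].joints["wrist"].position[1] - cap.frames[0].joints["wrist"].position[1]
print(round(rise, 2))  # 0.4
```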

[0162] On the receiving side (or, equivalently, when playing back), the segmented human form/shape 741 and the joint positions/rotations 742 are used to create a 3D human movement object, as shown in FIG. 7D. Various 3D interactive effects and games may be used in accordance with embodiments of the present invention. Three different examples of this are shown in FIG. 7D, in relation to a smartphone screen. In 743, the 3D movement object is reproduced with glitter/particles moving in relation to the 3D movement object; in 744, the 3D movement object is reproduced with a background image of the Eiffel Tower; and, in 745, the 3D movement object is reproduced with hearts (which may optionally be moving and changing shape) in relation to the 3D movement object.

EXAMPLE 7: 3D Movement Data and VR Environment and Avatar Features
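The 3D interactive effects of FIG. 7D (paragraph [0162]), in which glitter/particles 743 or hearts 745 move "in relation to" the 3D movement object, could illustratively be produced by jittering effect particles around the current joint positions. The function below is a hypothetical sketch; the spread parameter, seeding, and particle count are assumptions for reproducibility, not part of the disclosure:

```python
import random
from typing import List, Optional, Tuple

def spawn_particles(joint_positions: List[Tuple[float, float, float]],
                    per_joint: int = 3,
                    spread: float = 0.05,
                    rng: Optional[random.Random] = None) -> List[Tuple[float, float, float]]:
    """Scatter effect particles (glitter, hearts, ...) around each joint position."""
    rng = rng or random.Random(0)  # seeded only so this sketch is reproducible
    particles = []
    for (x, y, z) in joint_positions:
        for _ in range(per_joint):
            particles.append((x + rng.uniform(-spread, spread),
                              y + rng.uniform(-spread, spread),
                              z + rng.uniform(-spread, spread)))
    return particles

joints = [(0.0, 1.0, 0.0), (0.2, 1.5, 0.0)]   # two joints of the rendered movement object
glitter = spawn_particles(joints)
print(len(glitter))  # 6
```

Re-running this per frame against the evolving joint positions makes the particles track the 3D movement object during playback.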

[0163] In general, FIGS. 8, 9A-9B, and 10A-10B illustrate exemplary implementations of user interfaces (UIs) in VR/AR environments in accordance with embodiments of the present invention.

[0164] FIG. 8 illustrates four exemplary implementations of a “meme” ball UI being used to control various functions of a 3D movement data system in accordance with an embodiment of the present invention without the need for controller button (or like) input. In this embodiment, the user’s hand avatar 810 can manipulate a “meme” ball 820 in the VR environment in order to control various functions, such as recording, playback, sending/transmitting, and other like actions. In the top left of FIG. 8, hand avatar 810 is moving “meme” ball 820 to the proper receptacle/hole within the VR environment in order to initiate recording of a new Meu (“REC”). In the top right of FIG. 8, hand avatar 810 has moved “meme” ball 820 into the proper receptacle/hole within the VR environment in order to redo the recording of the Meu (“Redo R”). In the bottom of FIG. 8 (left and right-hand sides), hand avatar 810 is moving “meme” ball 820 to the proper receptacle/hole within the VR environment in order to initiate playback of a Meu (“Playback”).
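One way the "meme" ball interaction of FIG. 8 could be realized without controller button input is a proximity test: releasing ball 820 inside a receptacle's capture radius triggers that receptacle's function. The coordinates, radius, and function name below are purely illustrative assumptions:

```python
from typing import Dict, Tuple

# Hypothetical receptacle centres in the VR environment, keyed by action label.
RECEPTACLES: Dict[str, Tuple[float, float, float]] = {
    "REC": (-0.5, 1.2, 0.4),
    "Redo R": (0.5, 1.2, 0.4),
    "Playback": (0.0, 0.8, 0.4),
}

def action_for_drop(ball_pos: Tuple[float, float, float],
                    radius: float = 0.15) -> str:
    """Return the action whose receptacle captures the dropped ball, else ''."""
    for action, centre in RECEPTACLES.items():
        dist = sum((b - c) ** 2 for b, c in zip(ball_pos, centre)) ** 0.5
        if dist <= radius:
            return action
    return ""

print(action_for_drop((0.52, 1.18, 0.42)))  # Redo R
print(action_for_drop((10.0, 10.0, 10.0)))  # (empty string: no receptacle hit)
```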

[0165] FIGS. 9A and 9B illustrate avatar and VR environment features being used by user 900 wearing VR headset 905 and seeing the VR environmental features as indicated. FIG. 9A illustrates how the VR environment according to an embodiment of the present invention adapts to the user's height (and optionally other body characteristics), thereby encouraging user 900 to stretch and otherwise move their body, as illustrated by arrows 910. Moreover, the VR environment according to this embodiment of the present invention requires user 900 to move in order to implement certain commands, such as sending a message, which requires user 900 to extend their hand, as indicated by movement 915 in FIG. 9A.

[0166] FIG. 9B indicates how user 900 may change avatar features, as well as games and other settings, in the VR environment according to an embodiment of the present invention. In FIG. 9B, the actual body of the user 900 wearing the VR headset 905 is shown on the left-hand side of the drawing, while a mirror representation/avatar 920 of user 900 as they appear in the VR environment is shown on the right-hand side along with the manipulatable UI objects 930, including a slider UI, whereby controller button input is not needed.

[0167] FIGS. 10A and 10B illustrate exemplary implementations of UIs for selecting game and avatar features/settings in a 3D movement data system in accordance with embodiments of the present invention.

[0168] FIG. 10A is a representation of a screenshot 1001 from an iOS device, illustrating a UI by which a user may choose an emotion and get a game designed around that emotion according to an embodiment of the present invention. Screenshot 1001 shows control menu 1010 (similar in certain aspects to control menus 614 and 642 in FIGS. 6A-6F) having control buttons 1015 (“Back To Inbox”) and 1016 (“Record”), as well as see-through selection menu 1020 superimposed on top of the underlying image/video, featuring a top row 1030 of avatar/image choices and a bottom row 1040 of emotion choices. More specifically, the avatar/image choice row 1030 has choices of avatars/images for the user to select from, including a user 3D movement data avatar (“Meu”) 1031 in accordance with the present invention, a cat image 1033, an owl image 1035, etc., continuing out of the screen to the right where further avatar/images may be scrolled to and selected, while emotion choice row 1040 has choices of emotions for the user to select from, including Love 1042, Creativity 1044, Joy 1046, etc., continuing out of the screen to the right where further emotion choices may be scrolled to and selected.

[0169] In the emotion choice row 1040 of selection menu 1020, Love 1042 and Creativity 1044 have been selected by the user, as indicated by the checkmark in a circle to their lower right-hand side. Thus, the game to be generated (or any other activity/function to be generated) will be designed around the user-selected emotions of Love 1042 and Creativity 1044. Similarly, in the avatar/image choice row 1030 of selection menu 1020, the cat avatar/image 1033 has been selected by the user, as indicated by the checkmark in a circle to its lower right-hand side. Thus, as also shown in screenshot 1001, cat avatar/image 1050 is seen superimposed over the human subject in the image/video, mimicking the arms-spread gesture being made by the human subject.

[0170] FIG. 10B illustrates a VR environment 1002 in which an AR UI may be used to select and control various settings/features for, e.g., avatar/images and games, in a manner similar to that shown in FIG. 8, according to an embodiment of the present invention. Like the exemplary implementation of the UI in FIG. 8, the user may use and manipulate “meme” ball and other images/holograms in VR environment 1002 to scroll through, select, edit, and otherwise change avatar images/holograms (similarly to the choosing of avatar/images on the mobile phone screen of FIG. 10A) and game attributes (similar to the manipulation/control of avatar features, as well as games and other settings, in the VR environments of FIGS. 8 and 9A-9B).

[0171] In an embodiment of the present invention, a user records 3D movement data to be played back for, e.g., family and friends, after the user’s death. In one embodiment, the 3D movement data is such that the 3D message, representation, and/or hologram is interactive, i.e., programmed to provide responsive communication to each of the family and friends. In some embodiments, the 3D message, representation, and/or hologram will be uniquely tailored to each recipient and/or uniquely tailored to other characteristics such as time, date or location.

EXAMPLE 8: Use of 3D Movement Data in Psychedelic-Assisted Therapy

[0172] Some embodiments, such as the exemplary embodiments of Example 8, are implementations in the field of psychedelic-assisted therapy or psychedelic-assisted psychotherapy (PAP).

[0173] Psychedelic-assisted therapy, broadly, includes a range of related approaches that involve at least one session where one or more patients (interchangeably, “subject” or “client,” and it will be understood that a “patient” need not be diagnosable or diagnosed with any disorder, and will include individuals seeking PAP or psychedelic experiences for individual betterment or general improvement of mental health, or simply for experiential value or “fun”) is administered a psychedelic and is monitored, supported, and/or otherwise engaged by one or more trained facilitators or mental health professionals while under the effects of the psychedelic. See, e.g., Schenberg E.E. (2018). Psychedelic-Assisted Psychotherapy: A Paradigm Shift in Psychiatric Research and Development. Frontiers in Pharmacology, 9, 733. https://doi.org/10.3389/fphar.2018.00733; Tullis, P. (Jan. 28, 2021). The Rise of Psychedelic Psychiatry, Nature, vol. 589, pp. 506-509; Olson D.E. (2021). The Promise of Psychedelic Science. ACS Pharmacology & Translational Science, 4(2), 413-415. https://doi.org/10.1021/acsptsci.1c00071.

[0174] “Psychedelics” will be understood to include those chemical compounds that are agonists of serotonin 5-HT2A receptors and generally understood as “hallucinogens” or psychedelics by those of ordinary skill, such as tryptamines (e.g., psilocybin, psilocin, DMT), phenethylamines (e.g., mescaline, 2C-B and other “2C-x” compounds), and lysergamides (e.g., LSD), as well as substances containing them such as ayahuasca, peyote, San Pedro, and “magic” mushrooms. Substances besides these “classic psychedelics,” such as 3,4-methylenedioxymethamphetamine (MDMA), 5-MeO-DMT, ibogaine, ketamine, salvinorin A, nitrous oxide, and numerous others, which have hallucinogenic, “entheogenic,” “entactogenic” or “empathogenic,” dissociative, and other effects, and which are also used in “psychedelic”-assisted therapy, will also be appreciated to be “psychedelics” in the context herein, as will single enantiomers and enantiomeric mixtures; salts and solid forms such as polymorphs, hydrates, solvates, and co-crystals; deuterated and halogenated versions; and prodrugs, metabolites, analogs, and derivatives of any of the above, including combinations thereof, and further including novel chemical compounds or NCEs having similar structures and/or effects.

[0175] Protocols have been developed for the standardization of procedures to be used with PAP, such as the provision of psychological support. See, e.g., Johnson, M.; Richards, W.; and Griffiths, R., Human hallucinogen research: guidelines for safety, J. Psychopharmacol. 22, 603-620 (2008); and Mithoefer, M.; Mithoefer, A.; Jerome, L.; Ruse, J.; Doblin, R.; Gibson, E.; Ot’alora M., A MANUAL FOR MDMA-ASSISTED PSYCHOTHERAPY IN THE TREATMENT OF POSTTRAUMATIC STRESS DISORDER (2015), published by the Multidisciplinary Association for Psychedelic Studies (MAPS); Guss, J., Krause, R., & Sloshower, J. (Aug. 13, 2020). The Yale Manual for Psilocybin-Assisted Therapy of Depression (using Acceptance and Commitment Therapy as a Therapeutic Frame) https://doi.org/10.31234/osf.io/u6v9y; Tai, S. J., Nielson, E. M., Lennard-Jones, M., Johanna Ajantaival, R. L., Winzer, R., Richards, W. A., Reinholdt, F., Richards, B. D., Gasser, P., & Malievskaia, E. (2021). Development and Evaluation of a Therapist Training Program for Psilocybin Therapy for Treatment-Resistant Depression in Clinical Research. Frontiers in Psychiatry, 12, 586682. https://doi.org/10.3389/fpsyt.2021.586682. However, it will be readily appreciated that such protocols and procedures are merely exemplary of the types that may be utilized.

[0176] Typically, PAP comprises one or more psychedelic dosing (drug administration) session(s), one or more preparation sessions before the one or more psychedelic dosing session(s), and one or more integration sessions after the psychedelic dosing session(s). Optionally, there may be an initial screening session to determine the patient’s suitability for PAP, as well as one or more sessions to provide a regimen of after-care and/or relapse management after the integration session(s) (whether either type of session is necessary depends on the mental health condition being treated, the outcome(s) of the dosing and other sessions, etc., as would be understood by one of ordinary skill in the art). It will be readily appreciated that the number and relative timing and order of the sessions will be chosen based on the therapeutic goal(s), the protocol(s) or clinical manual(s) followed, the psychedelic(s) used, the characteristics of the patient(s) and the disorder(s) to be treated (or improvements in mental health sought), and such other characteristics as will be readily appreciated by those of ordinary skill in the art.

[0177] In implementations directed to PAP and related therapies, the methods and systems for communication using 3D human movement data according to embodiments of the present invention may be used to provide psychological support to patients during one or more of the screening session(s), preparation session(s), psychedelic dosing session(s), integration session(s), and/or after-care/relapse management session(s). Moreover, such methods and systems for communication using 3D human movement data may be used to provide the patient with a consistent, controlled, and calm environment during PAP dosing sessions, and/or to customize and optimize the patient’s PAP experience.

[0178] In implementations directed to PAP and related therapies, the methods and systems for communication using 3D human movement data according to embodiments of the present invention may be used to provide remote connections and interactions between the therapists (or facilitators, “guides,” clinical psychologists, psychiatrists, other trained medical professionals, and the like) monitoring/overseeing the PAP or related therapy and the patient(s) of that PAP or related therapy, and/or between and among the patients themselves.

[0179] For example, in one embodiment, a group of (i.e., two or more) patients will interact between and among themselves during a PAP session by, e.g., sharing gestures and/or performing physical exercises as a group, according to the teachings herein.

[0180] In one embodiment, a therapist overseeing a PAP dosing session will remotely “attend” the PAP dosing session with the patient, and will through sending or sharing 3D movement data provide psychological support. This advance over the art will reduce the need for specially trained therapists who can provide high-quality care to patients as part of PAP, and/or reduce the burden on individual such therapists.

[0181] For example, in some embodiments a single therapist will provide psychological support or other care to multiple patients across space and/or time. In some such embodiments, a single therapist will provide care to multiple patients who are “separate” from one another (i.e., who are unaware of the presence of each other, as if a single therapist is in the “rooms” of multiple patients all at the same time). In other such embodiments, a single therapist will provide care to multiple patients who are “together,” for example in a group preparation session, group drug-administration session, or group integration session (i.e., if all such patients are together in a single “room” or other virtual space or location). “Together” will be understood to mean that the patients are aware of the presence of each other (e.g., are able to see and optionally interact with each other’s avatars), not necessarily that all are in the same room or location in physical space, or even necessarily that all are together during the same time, as some patients’ presences may in certain embodiments be pre-recorded (e.g., as stored 3D movement data).

[0182] In some embodiments, different 3D movement data of a therapist will be recorded and saved, e.g., to permanent storage. The pre-recorded 3D movement data of a therapist will thereafter be available to be used with one or more patients (e.g., used non-contemporaneously or asynchronously), and will be so used, minimizing or eliminating the need for the therapist to interact with the patient(s) at one or more times during PAP.

[0183] In one exemplary embodiment, a patient undergoing PAP may experience anxiety-provoking perceptual changes or physical sensations. It is believed that the practice of reassuring physical contact or therapeutic touch such as “arm holding” by a therapist may reduce anxiety in some such situations (if such contact or touch is agreed to by a patient). Arm holding is where, upon a patient’s request, the therapist will place a hand on the patient’s wrist, arm, hand, or shoulder, as a way of helping the patient feel more secure during PAP. This may occur, e.g., during a preparation or psychological support session, during a drug administration session, or during an integration session.

[0184] Accordingly, in some embodiments, a therapist may send 3D movement data that is received by the patient as the therapist holding the hand, arm, or shoulder of the patient. Haptic feedback is provided in some embodiments, for instance vibrations may be activated when the therapist touches a patient’s avatar, communicating to the patient the sensation of physical presence and contact, and causing the patient to experience reduced anxiety.

[0185] In embodiments where pre-recorded 3D movement data of a therapist is used with a patient, the 3D movement data of the patient can be monitored to determine when psychological support such as arm holding may be beneficial, and the pre-recorded touch can be provided to the patient’s avatar at such times, effectuating psychological support, the trigger for such provision being, e.g., any pre-determined trigger or cue or one based on AI, machine learning, or other like analysis of the patient’s 3D movement data and/or other data, or aggregate patient 3D movement data and/or other data. For example, in some embodiments, such other data includes physiological, physiometric, and/or biometric data as disclosed herein.
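A minimal sketch of one such pre-determined trigger from paragraph [0185] follows, assuming (purely hypothetically) that a need for support manifests as elevated per-frame wrist displacement in the patient's 3D movement data. The feature, threshold, and function name are all illustrative, and an AI/ML classifier of the kind the text mentions could replace the heuristic:

```python
from typing import List, Tuple

def needs_support(wrist_track: List[Tuple[float, float, float]],
                  threshold: float = 0.2) -> bool:
    """Flag when mean per-frame wrist displacement exceeds a set threshold,
    at which point pre-recorded therapist touch could be played to the
    patient's avatar (threshold and feature are assumptions of this sketch)."""
    if len(wrist_track) < 2:
        return False
    total = 0.0
    for a, b in zip(wrist_track, wrist_track[1:]):
        total += sum((bi - ai) ** 2 for ai, bi in zip(a, b)) ** 0.5
    mean_step = total / (len(wrist_track) - 1)
    return mean_step > threshold

calm = [(0.0, 1.0, 0.0), (0.01, 1.0, 0.0), (0.02, 1.0, 0.0)]
agitated = [(0.0, 1.0, 0.0), (0.5, 1.2, 0.0), (0.0, 0.8, 0.3)]
print(needs_support(calm), needs_support(agitated))  # False True
```

In practice such a monitor would also draw on the physiological, physiometric, and/or biometric data the paragraph references, rather than movement data alone.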

[0186] In various aspects as will be appreciated from the teachings herein, a therapist can provide many different forms of reassuring physical contact with one or more patients undergoing PAP, according to the methods and systems of the invention.

[0187] In various aspects as will be appreciated from the teachings herein, a therapist can provide many different forms of psychological support involving non-verbal communication with one or more patients undergoing PAP, according to the disclosed methods and systems.

[0188] In implementations directed to PAP and related therapies, the methods and systems for communication using 3D human movement data according to embodiments of the present invention may be used as part of a process to capture and store movements of patient(s) during PAP sessions in order to identify, track, and/or define characteristic movements associated with negative or difficult experiences in PAP and related therapies, such that the defined characteristic movement markers are used to predict and prevent such negative or difficult experiences. Similarly, such methods and systems may be used to identify, track, and/or define characteristic movements associated with positive or good experiences and outcomes in PAP and related therapies, and the defined characteristic movement markers are used to predict and guide patient(s) into having a positive or good experience or outcome. Moreover, such methods and systems for communication using 3D human movement data may be used to play back gestures and/or characteristic movements as a therapeutic and/or teaching aid for the patient(s) during sessions of PAP and related therapies, or for the facilitator or medical professional.

[0189] In implementations directed to PAP and related therapies, the methods and systems for communication using 3D human movement data according to embodiments of the present invention may be used to provide a digital platform for administering PAP and related therapies which is scalable from individual one-on-one PAP sessions up to widespread and general usage of PAP by the general public. For example, multiple patients can interact with the 3D movement data of a single facilitator or therapist, and/or a single facilitator or therapist can interact with the 3D movement data of multiple patients.

[0190] In some embodiments, the methods and systems of the present invention will be used to prepare one or more patients for PAP, or to educate one or more patients about PAP or any aspect(s) thereof. For example, a patient can interact with one or more 3D movement objects, any of which may or may not be pre-recorded and saved to storage, to understand what a psychedelic experience or the experience of PAP is like, and to get a deeper understanding thereof. In some embodiments, for instance, a patient will interact with multiple 3D movement objects stored together in one or more saved module(s) for purposes of providing a preparatory and/or educational learning experience about PAP or psychedelic experiences generally. In some embodiments, the 3D movement data of one or more patients will be used to determine or optimize one or more aspects of their PAP or psychedelic experience, as discussed herein.

[0191] Having now described various embodiments of the present invention, the following is provided to further clarify the scope of the disclosure. First, it should be noted that the steps or stages of a method, process, or algorithm described in connection with embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of both and/or other components. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, means, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the spirit or scope of this disclosure.

[0192] Thus, the various illustrative components, blocks, modules, means, and steps described in connection with the embodiments disclosed herein may be implemented or performed with a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. In some implementations, processors may be communication processors or other such processors specifically designed for implementing functionality in communication devices or other mobile or portable devices.

[0193] A software module may reside in RAM memory, flash memory, ROM memory, EPROM or EEPROM memory, registers, hard drive, an SSD, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC, which may reside in a user terminal. The processor and the storage medium also may reside as discrete components in a user terminal.

[0194] Some embodiments of the present invention may include computer software and/or computer hardware/software combinations configured to implement one or more methods or functions associated with the present invention such as those described herein. These embodiments may be in the form of modules implementing functionality in software and/or hardware software combinations.

[0195] Embodiments may also take the form of a computer storage product with a computer-readable medium having computer code thereon for performing various computer-implemented operations, such as operations related to functionality as described herein. The media and computer code may be those specially designed and constructed for the purposes of the present invention, or they may be of the kind well known and available to those having skill in the computer software arts, or they may be a combination of both.

[0196] Examples of computer-readable media within the spirit and scope of the present invention include SSDs, magnetic media such as hard drives; optical media such as CD-ROMs, DVDs and holographic devices; magneto-optical media; and hardware devices that are specially configured to store and execute program code, such as programmable microcontrollers, ASICs, programmable logic devices (PLDs), and ROM and RAM devices.

[0197] Examples of computer code may include machine code, such as produced by a compiler or other machine code generation mechanisms, scripting programs, PostScript programs, and/or other code or files containing higher-level code that are executed by a computer using an interpreter or other code execution mechanism. Computer code may be comprised of one or more modules executing a particular process or processes to provide useful results, and the modules may communicate with one another via means known or developed in the art. For example, some embodiments of the invention may be implemented using assembly language, Java, C, C#, C++, scripting languages, and/or other programming languages and software development tools as are known or developed in the art. Other embodiments of the invention may be implemented in hardwired circuitry in place of, or in combination with, machine-executable software instructions.

[0198] While the methods described and illustrated herein may include particular steps or stages, it should be apparent that other processes including fewer, more, or different stages than those described and shown are also within the spirit and scope of the present invention. The methods and associated components, blocks, modules, means, and steps shown herein should therefore be understood as being provided for purposes of illustration, not limitation. It should be further understood that the specific order or hierarchy of steps or stages in the methods disclosed are only exemplary approaches. Based upon design preferences, the specific order or hierarchy of steps in the methods may be rearranged while remaining within the spirit and scope of the present disclosure. The accompanying method claims present elements of the various steps in a sample order, and are not meant to be limited to the specific order or hierarchy presented.

[0199] The foregoing description, for purposes of explanation, uses specific nomenclature to provide a thorough understanding of the invention. However, it will be apparent to one skilled in the art that specific details are not required in order to practice the invention. Thus, the foregoing description of specific embodiments of the invention is presented for purposes of illustration and description. It is not intended to be exhaustive or to limit the invention to the precise forms disclosed; obviously, many modifications and variations are possible in view of the above teachings. The embodiments were chosen and described in order to best explain the principles of the invention and its practical applications, and to thereby enable others skilled in the art to best utilize the invention and various embodiments with various modifications as are suited to the particular use contemplated. Accordingly, the scope of the invention shall be defined solely by the following claims and their equivalents.