

Title:
MEDITATION SYSTEMS AND METHODS
Document Type and Number:
WIPO Patent Application WO/2023/187660
Kind Code:
A1
Abstract:
Systems and methods of changing a level of at least one user attribute through real-time adaptation of auditory attributes presented to the user such that, over a time period, the user's level of the at least one user attribute changes during use of the system/method. A system can include a sensing platform configured to monitor one or more user attributes of a user, an auditory platform configured to present one or more auditory attributes to the user based at least in part on one or more of the user attributes, a user interface configured to receive user data representative of one or more of the user attributes and send auditory data representative of one or more of the auditory attributes, and a processing platform configured to process the user data and/or the auditory data.

Inventors:
HAMILTON ADAM TY (AU)
WILLIAMS ANTHONY JAMES (AU)
Application Number:
PCT/IB2023/053097
Publication Date:
October 05, 2023
Filing Date:
March 28, 2023
Assignee:
ESCAPIST TECH PTY LTD (AU)
International Classes:
A61M21/02; A61B5/00; A61B5/0205; A61B5/024; A61B5/11; A61B5/1455; A61B5/16; A61B5/256; A61B5/28; A61B5/291; A61B5/372; A61M21/00; G06N20/00; G16H20/70; H04R1/10
Foreign References:
US20200029881A1 (2020-01-30)
US20170339484A1 (2017-11-23)
US20190222918A1 (2019-07-18)
Claims:
CLAIMS

What is claimed is:

1. A system comprising: a sensing platform configured to monitor one or more user attributes of a user; and an auditory platform configured to present one or more auditory attributes to the user based at least in part on one or more of the user attributes.

2. The system of Claim 1 further comprising a user interface configured to: receive user data representative of one or more of the user attributes; and send auditory data representative of one or more of the auditory attributes.

3. The system of Claim 2 further comprising a processing platform configured to process the user data and/or the auditory data.

4. The system of Claim 3 further comprising an analyzing platform configured to analyze the user data for one or more physiological state indications of the user.

5. The system of Claim 4 further comprising a machine learning platform configured to learn one or more preferences of the user.

6. The system of Claim 5, wherein the machine learning platform is further configured to adjust the auditory data.

7. A system comprising: a sensing platform comprising a first sensor configured to monitor one or more user attributes of a user; and an auditory platform comprising headphones configured to present one or more auditory attributes to the user based at least in part on one or more of the user attributes; wherein the first sensor is selected from the group consisting of an EEG sensor, an ECG sensor, an accelerometer, a microphone, a thermometer, and an oximeter; wherein the user attributes are selected from the group consisting of user movements and user physiology; and wherein the auditory attributes are selected from the group consisting of volume, tone, bass, rhythm and tempo.

8. The system of Claim 7, wherein the user attribute is stress; and wherein the system is configured to lower a level of the stress of the user through real-time adaptation of the auditory attributes presented to the user such that, over a time period, the user’s stress level lowers during use of the system.

9. The system of Claim 7, wherein the first sensor is located in the headphones.

10. The system of Claim 7, wherein the sensing platform comprises at least a second sensor different than the first sensor; wherein at least one of the first and at least second sensors are located in the headphones.

11. The system of Claim 10, wherein at least one of the first and at least second sensors are located distal the headphones.

12. The system of Claim 10, wherein at least one of the first and at least second sensors are located distal the headphones, in one or more of a bracelet, necklace, and/or a head strap.

13. A system comprising: a sensing platform configured to monitor one or more user attributes of a user; an auditory platform configured to present one or more auditory attributes to the user based at least in part on one or more of the user attributes; a user interface configured to: receive user data representative of one or more of the user attributes; and send auditory data representative of one or more of the auditory attributes; and a processing platform configured to process the user data and/or the auditory data; wherein the sensing platform comprises at least a first sensor configured to monitor one or more of the user attributes; wherein the auditory platform comprises headphones; and wherein the system is configured to change a level of at least one of the user attributes through real-time adaptation of the auditory attributes presented to the user such that, over a time period, the user’s level of the at least one user attribute changes during use of the system.

14. The system of Claim 13 further comprising an analyzing platform configured to analyze the user data for one or more physiological state indications of the user.

15. The system of Claim 14 further comprising a machine learning platform configured to learn one or more preferences of the user.

16. The system of Claim 15, wherein the machine learning platform is further configured to adjust the auditory data.

17. The system of Claim 13, wherein the headphones are noise cancelling headphones.

18. The system of Claim 13, wherein the sensing platform comprises the first sensor and at least a second sensor; and wherein the user attributes are selected from the group consisting of breathing rate, heart rate, brainwave activities, temperature, and oxidation levels.

19. The system of Claim 13 further comprising a communication platform configured to transmit at least a portion of the user data remote from the user.

20. A method of changing a level of at least one user attribute comprising: monitoring one or more user attributes of a user; presenting one or more auditory attributes to the user based at least in part on one or more of the user attributes; and adapting one or more of the auditory attributes presented to the user such that, over a time period, the user’s level of the at least one user attribute changes during the method.

21. The method of Claim 20, wherein at least one user attribute is stress; and wherein a level of the user’s stress lowers in response to the adapted one or more of the auditory attributes.

22. The method of Claim 20, wherein monitoring comprises monitoring with a sensing platform comprising at least a first sensor; wherein presenting comprises presenting with an auditory platform configured to present the one or more auditory attributes to the user based at least in part on the one or more of the user attributes.

23. The method of Claim 22 further comprising: receiving user data representative of one or more of the user attributes; and sending auditory data representative of one or more of the auditory attributes.

Description:
MEDITATION SYSTEMS AND METHODS

CROSS-REFERENCE TO RELATED APPLICATIONS

[0001] Not Applicable

STATEMENT REGARDING FEDERALLY SPONSORED RESEARCH OR DEVELOPMENT

[0002] Not Applicable

THE NAMES OF THE PARTIES TO A JOINT RESEARCH AGREEMENT

[0003] Not Applicable

SEQUENCE LISTING

[0004] Not Applicable

STATEMENT REGARDING PRIOR DISCLOSURES BY THE INVENTOR OR A JOINT INVENTOR

[0005] Not Applicable

BACKGROUND OF THE DISCLOSURE

1. Field of the Invention

[0006] The present invention relates to the field of audio signal generation and bioelectrical signal detection. More particularly, the present invention relates to systems and methods for collecting bioelectrical signals such as movement, breathing rate, heart rate, and brainwave activity, and utilizing the data to generate audio signals in noise-cancelling headphones.

2. Description of Related Art

[0007] Noise-cancelling headphones have become increasingly popular due to their ability to reduce unwanted ambient sounds, making them ideal for use in noisy environments such as airplanes, trains, and buses. However, current noise-cancelling headphones only cancel out external noise and do not provide any additional functionality.

[0008] There are also several bioelectrical signals that can be detected from a user, such as movement, breathing rate, heart rate, and brainwave activity. These signals can be used to determine a user’s physiological state, and have several applications in healthcare, fitness, and entertainment. However, there are currently no noise-cancelling headphones that utilize these signals for audio generation.

[0009] An object of the present invention is to provide systems and methods for collecting bioelectrical signals such as movement, breathing rate, heart rate, and brainwave activity, and utilizing the data to generate audio signals in noise-cancelling headphones.

BRIEF SUMMARY OF THE INVENTION

[0010] Regular meditation can positively contribute to our overall wellness, but many find it difficult to make it part of a routine they can keep. The present invention is a science-based meditation technology that helps make every meditation enjoyable and rewarding, so that the user can be calm, centered, and ready to take on whatever the day throws at them.

[0011] In an exemplary embodiment, the present invention is a premium, noise-cancelling headphone system that is exceptional for listening to music and podcasts, that also includes built-in biometric sensors and is configured to be used with an integrated app that uniquely enables the measurement of pulse, heart-rate variability, breathing and neuro rhythm (EEG brain waves) to accurately determine the user's state of mind and dynamically alter (if needed) what the user hears. Using responsive 3D audio, the present invention coaches the user to lower stress and quiet the mind, at will.

[0012] In an exemplary embodiment, the present invention includes systems and methods for collecting bioelectrical signals from a user and utilizing at least a portion of the data to generate audio signals in noise cancelling headphones. The system can comprise one or more sensors that can be connected to the user in a wired, wireless, direct contact or indirect contact fashion, including one or more EEG sensors, ECG sensors, accelerometers, and microphones. The data collected from one or more of the sensors is used to generate one or more bioelectrical signal datasets.

[0013] The bioelectrical signal datasets are then utilized to produce sound data sets and audio signals via the noise cancelling headphones. The audio signals can be conditioned based on the bioelectrical signals, such as changing the tempo or rhythm of the audio signal based on the user’s heart rate, or adjusting the volume based on the user’s breathing rate.

[0014] The system also includes multipurpose headphones for all uses, meaning that the headphones can be used both for noise cancellation and for generating audio signals based on the bioelectrical signals collected from the user.

[0015] In an exemplary embodiment, the present invention is a system comprising a sensing platform configured to monitor one or more user attributes of a user and an auditory platform configured to present one or more auditory attributes to the user based at least in part on one or more of the user attributes.

[0016] The system can further comprise a user interface configured to receive user data representative of one or more of the user attributes and send auditory data representative of one or more of the auditory attributes.

[0017] The system can further comprise a processing platform configured to process the user data and/or the auditory data.

[0018] The system can further comprise an analyzing platform configured to analyze the user data for one or more physiological state indications of the user.

[0019] The system can further comprise a machine learning platform configured to learn one or more preferences of the user. The machine learning platform can further be configured to adjust the auditory data.

[0020] In an exemplary embodiment, the present invention is a system comprising a sensing platform comprising a first sensor configured to monitor one or more user attributes of a user and an auditory platform comprising headphones configured to present one or more auditory attributes to the user based at least in part on one or more of the user attributes, wherein the first sensor is selected from the group consisting of an EEG sensor, an ECG sensor, an accelerometer, a microphone, a thermometer, and an oximeter, wherein the user attributes are selected from the group consisting of user movements and user physiology, and wherein the auditory attributes are selected from the group consisting of volume, tone, bass, rhythm and tempo.

[0021] The user attribute can be stress, and the system configured to lower a level of the stress of the user through real-time adaptation of the auditory attributes presented to the user such that, over a time period, the user’s stress level lowers during use of the system.

[0022] The first sensor can be located in the headphones.

[0023] The sensing platform can comprise at least a second sensor different than the first sensor, wherein at least one of the first and at least second sensors are located in the headphones.

[0024] At least one of the first and at least second sensors can be located distal the headphones. At least one of the first and at least second sensors can be located in one or more of a bracelet, necklace, and/or a head strap.

[0025] In an exemplary embodiment, the present invention is a system comprising a sensing platform configured to monitor one or more user attributes of a user, an auditory platform configured to present one or more auditory attributes to the user based at least in part on one or more of the user attributes, a user interface configured to receive user data representative of one or more of the user attributes and send auditory data representative of one or more of the auditory attributes, and a processing platform configured to process the user data and/or the auditory data, wherein the sensing platform comprises at least a first sensor configured to monitor one or more of the user attributes, wherein the auditory platform comprises headphones, and wherein the system is configured to change a level of at least one of the user attributes through real-time adaptation of the auditory attributes presented to the user such that, over a time period, the user’s level of the at least one user attribute changes during use of the system.

[0026] The system can further comprise an analyzing platform configured to analyze the user data for one or more physiological state indications of the user.

[0027] The system can further comprise a machine learning platform configured to learn one or more preferences of the user. The machine learning platform can further be configured to adjust the auditory data.

[0028] The headphones can be noise cancelling headphones.

[0029] The sensing platform can comprise the first sensor and at least a second sensor and the user attributes can be selected from the group consisting of breathing rate, heart rate, brainwave activities, temperature, and oxidation levels.

[0030] The system can further comprise a communication platform configured to transmit at least a portion of the user data remote from the user.

[0031] In an exemplary embodiment, the present invention is a method of changing a level of at least one user attribute comprising monitoring one or more user attributes of a user, presenting one or more auditory attributes to the user based at least in part on one or more of the user attributes, and adapting one or more of the auditory attributes presented to the user such that, over a time period, the user’s level of the at least one user attribute changes during the method.

[0032] At least one of the attributes can be stress, and a level of the user’s stress can be lowered in response to the adapted one or more of the auditory attributes.

[0033] Monitoring can comprise monitoring with a sensing platform comprising at least a first sensor.

[0034] Presenting can comprise presenting with an auditory platform configured to present the one or more auditory attributes to the user based at least in part on the one or more of the user attributes.

[0035] The method can further comprise receiving user data representative of one or more of the user attributes and sending auditory data representative of one or more of the auditory attributes.

[0036] Other aspects and features of exemplary embodiments of this disclosure will become apparent to those of ordinary skill in the art, upon reviewing the following description of specific, exemplary embodiments of this disclosure in concert with the various figures. While features of this disclosure may be discussed relative to certain exemplary embodiments and figures, all exemplary embodiments of this disclosure can include one or more of the features discussed in this application. While one or more exemplary embodiments may be discussed as having certain advantageous features, one or more of such features may also be used with the other various exemplary embodiments discussed in this application. In similar fashion, while exemplary embodiments may be discussed below as system or method exemplary embodiments, it is to be understood that such exemplary embodiments can be implemented in various devices, systems, and methods. As such, discussion of one feature with one exemplary embodiment does not limit other exemplary embodiments from possessing and including that same feature.

BRIEF DESCRIPTION OF THE DRAWINGS

[0037] The accompanying Figures, which are incorporated in and constitute a part of this specification, illustrate several aspects described below.

[0038] FIG. 1 depicts a block diagram of illustrative computing device architecture 100, according to an example implementation.

[0039] FIG. 2 shows a diagram of an exemplary embodiment of a user meditating and employing a mobile device and a wearable device according to a preferred embodiment of the present invention.

[0040] FIG. 3 shows a diagram of an embodiment of a user meditating and employing a mobile device and a wearable device, where the wearable device includes an EEG sensor band and a heart sensor according to a preferred embodiment of the present invention.

[0041] FIGS. 4A, 4B and 4C are exemplary embodiments of elements of the present invention, including a headphone system and a band sensor for ease of forehead mounting.

[0042] FIG. 5 illustrates an exemplary embodiment of the present invention, according to a preferred embodiment.

[0043] FIGS. 6A-6C illustrate exemplary components of the present invention, according to exemplary embodiments, and use indications.

[0044] FIGS. 7A-7D illustrate exemplary components of the present invention, according to preferred embodiments, and use indications.

[0045] FIG. 8 shows the headphone system of FIG. 5 in zoom.

[0046] FIGS. 9-11 are zoomed illustrations of FIGS. 7B-7D, respectively.

[0047] FIG. 12 is another exemplary embodiment of the present invention.

[0048] FIGS. 13-24 are various exemplary headphone systems of the present invention, including varying views.

[0049] FIG. 25 illustrates an exemplary embodiment of the present invention in use.

DETAILED DESCRIPTION OF THE INVENTION

[0050] To facilitate an understanding of the principles and features of the various exemplary embodiments of the invention, various illustrative exemplary embodiments are explained below. Although exemplary embodiments of the invention are explained in detail, it is to be understood that other exemplary embodiments are contemplated. Accordingly, it is not intended that the invention is limited in its scope to the details of construction and arrangement of components set forth in the following description or illustrated in the drawings. The invention is capable of other exemplary embodiments and of being practiced or carried out in various ways. Also, in describing the exemplary embodiments, specific terminology will be resorted to for the sake of clarity.

[0051] It must also be noted that, as used in the specification and the appended claims, the singular forms “a,” “an” and “the” include plural references unless the context clearly dictates otherwise. For example, reference to a component is intended also to include composition of a plurality of components. References to a composition containing “a” constituent is intended to include other constituents in addition to the one named.

[0052] Also, in describing the exemplary embodiments, terminology will be resorted to for the sake of clarity. It is intended that each term contemplates its broadest meaning as understood by those skilled in the art and includes all technical equivalents which operate in a similar manner to accomplish a similar purpose.

[0053] Ranges may be expressed herein as from “about” or “approximately” or “substantially” one particular value and/or to “about” or “approximately” or “substantially” another particular value. When such a range is expressed, other exemplary embodiments include from the one particular value and/or to the other particular value.

[0054] Similarly, as used herein, “substantially free” of something, or “substantially pure”, and like characterizations, can include both being “at least substantially free” of something, or “at least substantially pure”, and being “completely free” of something, or “completely pure”.

[0055] By “comprising” or “containing” or “including” is meant that at least the named compound, element, particle, or method step is present in the composition or article or method, but does not exclude the presence of other compounds, materials, particles, method steps, even if the other such compounds, material, particles, method steps have the same function as what is named.

[0056] It is also to be understood that the mention of one or more method steps does not preclude the presence of additional method steps or intervening method steps between those steps expressly identified. Similarly, it is also to be understood that the mention of one or more components in a composition does not preclude the presence of additional components than those expressly identified.

[0057] The materials described as making up the various elements of the invention are intended to be illustrative and not restrictive. Many suitable materials that would perform the same or a similar function as the materials described herein are intended to be embraced within the scope of the invention. Such other materials not described herein can include, but are not limited to, for example, materials that are developed after the time of the development of the invention.

[0058] The present invention is a method and system for collecting and utilizing bioelectrical signals for audio generation in noise cancelling headphones. The system includes multiple sensors that can be connected to the user, such as EEG sensors, ECG sensors, an accelerometer, and a microphone. These sensors collect data on the user’s physiology and movement, including breathing rate, heart rate, brainwave activities, temperature, and oxidation levels, among others.

[0059] The collected data is used to generate one or more bioelectrical signal datasets, which are then utilized to produce sound data sets and audio signals via the noise cancelling headphones. The audio signals can be conditioned based on the bioelectrical signals, such as changing the tempo or rhythm of the audio signal based on the user’s heart rate, or adjusting the volume based on the user’s breathing rate.
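
By way of illustration only, the following Python sketch shows one way such conditioning could be expressed in software. The resting rate, target breathing rate, linear mappings, and clamping limits are assumptions for the example and are not specified by the disclosure.

```python
# Illustrative sketch only: hypothetical mappings from bioelectrical readings to
# auditory attributes. The constants below are assumptions, not disclosed values.

def tempo_from_heart_rate(heart_rate_bpm: float, resting_bpm: float = 60.0) -> float:
    """Slow the playback tempo slightly as heart rate rises above an assumed resting rate."""
    excess = max(0.0, heart_rate_bpm - resting_bpm)
    # Reduce tempo by up to 20% as heart rate climbs 40 bpm above resting.
    return max(0.8, 1.0 - 0.005 * excess)

def volume_from_breathing_rate(breaths_per_min: float,
                               target_breaths_per_min: float = 6.0) -> float:
    """Lower the volume gently as breathing slows toward an assumed meditative target."""
    ratio = target_breaths_per_min / max(breaths_per_min, 1.0)
    # Clamp to a comfortable range so the audio never disappears or spikes.
    return min(1.0, max(0.4, ratio))

if __name__ == "__main__":
    print(tempo_from_heart_rate(85.0))        # -> 0.875x playback speed
    print(volume_from_breathing_rate(12.0))   # -> 0.5 of full volume
```

The intent of the sketch is simply to show auditory attributes (tempo, volume) being derived from user attributes (heart rate, breathing rate); an actual implementation could use different attributes, curves, or limits.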

[0060] The system also includes multipurpose headphones for all uses, including those beyond dynamic control for meditation. The headphones can be used both for noise cancellation and for generating audio signals based on the bioelectrical signals collected from the user.

[0061] The bioelectrical signal datasets can be analyzed in real-time or stored for future analysis. The analysis can be used to determine the user’s physiological state, such as their level of stress or relaxation, and can be used to adjust the audio signals accordingly. For example, if the user’s stress levels are high, the audio signals can be adjusted to promote relaxation, such as by playing calming music or sounds.
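
Illustratively, the analyze-and-adjust behavior described above might resemble the following sketch, which assumes a hypothetical normalized stress score in the range 0 to 1 computed from heart rate and breathing rate; the weighting, thresholds, and audio programs are invented for the example and are not taken from the disclosure.

```python
# Illustrative sketch only: a real-time loop that estimates a stress score from
# sensor readings and picks calmer audio as the estimate rises. All weights,
# thresholds, and program names are assumptions.
import random
import time

def estimate_stress(heart_rate_bpm: float, breaths_per_min: float) -> float:
    """Combine two readings into a crude 0..1 stress score (assumed weighting)."""
    hr_component = min(1.0, max(0.0, (heart_rate_bpm - 55.0) / 60.0))
    br_component = min(1.0, max(0.0, (breaths_per_min - 6.0) / 18.0))
    return 0.6 * hr_component + 0.4 * br_component

def select_audio(stress: float) -> str:
    """Pick a calmer program as the stress estimate rises."""
    if stress > 0.7:
        return "slow ambient soundscape"
    if stress > 0.4:
        return "calming music at reduced tempo"
    return "user's preferred playlist"

def run_session(seconds: int = 3) -> None:
    for _ in range(seconds):
        heart_rate = random.uniform(60, 100)   # placeholder for live sensor data
        breathing = random.uniform(8, 20)      # placeholder for live sensor data
        stress = estimate_stress(heart_rate, breathing)
        print(f"stress={stress:.2f} -> {select_audio(stress)}")
        time.sleep(1)

if __name__ == "__main__":
    run_session()
```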

[0062] In an exemplary embodiment of the invention, the system includes a user interface that allows the user to control the audio signals generated by the headphones. The user interface can be in the form of a mobile application or a physical control panel on the headphones.

[0063] In another embodiment of the invention, the system includes a machine learning algorithm that can learn the user’s preferences over time and adjust the audio signals accordingly. For example, if the user consistently responds positively to certain types of music or sounds, the algorithm can learn this and adjust the audio signals to match the user’s preferences.

[0064] In yet another embodiment of the invention, the system includes a communication module that allows the user’s bioelectrical signal datasets to be transmitted to a remote device, such as one used by a healthcare provider or fitness coach. This allows the user’s physiological state to be monitored remotely, and can be used for a variety of applications, such as remote health monitoring or fitness coaching.
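
As a rough, non-authoritative sketch of the preference learning described in paragraph [0063], the following keeps a running average of an assumed per-session relaxation response for each audio program and selects the best-performing one; the disclosure does not prescribe a particular learning algorithm, and the program names and scores below are hypothetical.

```python
# Illustrative sketch only: simple running-average preference learning.
# "Relaxation response" could be, e.g., the drop in an estimated stress score
# over a session; the metric is an assumption for the example.
from collections import defaultdict

class PreferenceLearner:
    def __init__(self) -> None:
        self.totals = defaultdict(float)   # summed response per audio program
        self.counts = defaultdict(int)     # sessions observed per audio program

    def record(self, program: str, relaxation_response: float) -> None:
        """Log how well the user responded to a given audio program."""
        self.totals[program] += relaxation_response
        self.counts[program] += 1

    def best_program(self, default: str = "calming music") -> str:
        """Return the program with the highest average response so far."""
        if not self.counts:
            return default
        return max(self.counts, key=lambda p: self.totals[p] / self.counts[p])

learner = PreferenceLearner()
learner.record("rainfall", 0.30)
learner.record("binaural tones", 0.12)
learner.record("rainfall", 0.25)
print(learner.best_program())   # -> "rainfall"
```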

[0065] The present invention has multiple applications across healthcare, fitness, and entertainment. In healthcare, the system can be used for remote health monitoring, as well as for biofeedback training for stress management and relaxation. In fitness, the system can be used for real-time monitoring of the user’s physical activity and physiological state, as well as for personalized coaching based on the user’s bioelectrical signals. In entertainment, the system can be used to enhance the user’s listening experience by adjusting the audio signals based on the user’s physiological state.

[0066] In some instances, a computing device may be referred to as a mobile device, mobile computing device, a mobile station (MS), terminal, cellular phone, cellular handset, personal digital assistant (PDA), smartphone, wireless phone, organizer, handheld computer, desktop computer, laptop computer, tablet computer, set-top box, television, appliance, game device, medical device, display device, or some other like terminology. In other instances, a computing device may be a processor, controller, or a central processing unit (CPU). In yet other instances, a computing device may be a set of hardware components.

[0067] The user interface, headphones and/or other components of the present invention that include presence-sensitive input can be a device that accepts input by the proximity of a finger, a stylus, or an object near the device. A presence-sensitive input device may also be a radio receiver (for example, a Wi-Fi receiver) and processor which is able to infer proximity changes via measurements of signal strength, signal frequency shifts, signal to noise ratio, data error rates, and other changes in signal characteristics. A presence-sensitive input device may also detect changes in an electric, magnetic, or gravity field.

[0068] A presence-sensitive input device may be combined with a display to provide a presence-sensitive display. For example, a user may provide an input to a computing device by touching the surface of a presence-sensitive display using a finger. In another example implementation, a user may provide input to a computing device by gesturing without physically touching any object. For example, a gesture may be received via a video camera or depth camera.

[0069] In some instances, a presence-sensitive display may have two main attributes. First, it may enable a user to interact directly with what is displayed, rather than indirectly via a pointer controlled by a mouse or touchpad. Secondly, it may allow a user to interact without requiring any intermediate device that would need to be held in the hand. Such displays may be attached to computers, or to networks as terminals. Such displays may also play a prominent role in the design of digital appliances such as a personal digital assistant (PDA), satellite navigation devices, mobile phones, and video games. Further, such displays may include a capture device and a display.

[0070] Various aspects described herein may be implemented using standard programming or engineering techniques to produce software, firmware, hardware, or any combination thereof to control a computing device to implement the disclosed subject matter. A computer-readable medium may include, for example: a magnetic storage device such as a hard disk, a floppy disk or a magnetic strip; an optical storage device such as a compact disk (CD) or digital versatile disk (DVD); a smart card; and a flash memory device such as a card, stick or key drive, or embedded component. Additionally, it should be appreciated that a carrier wave may be employed to carry computer-readable electronic data including those used in transmitting and receiving electronic data such as electronic mail (e-mail) or in accessing a computer network such as the Internet or a local area network (LAN). Of course, a person of ordinary skill in the art will recognize many modifications may be made to this configuration without departing from the scope or spirit of the claimed subject matter.

[0071] FIG. 1 depicts a block diagram of illustrative computing device architecture 100, according to an example implementation. Certain aspects of FIG. 1 may be embodied in a computing device (for example, a mobile computing device). As desired, embodiments of the disclosed technology may include a computing device with more or less of the components illustrated in FIG. 1. It will be understood that the computing device architecture 100 is provided for example purposes only and does not limit the scope of the various embodiments of the present disclosed systems, methods, and computer-readable mediums.

[0072] The computing device architecture 100 of FIG. 1 includes a CPU 102, where computer instructions are processed, and a display interface 106 that acts as a communication interface and provides functions for rendering video, graphics, images, and text on the display. According to certain embodiments of the disclosed technology, the display interface 106 may be directly connected to a local display, such as a touch-screen display associated with a mobile computing device. In another example embodiment, the display interface 106 may be configured for providing data, images, and other information for an external/remote display that is not necessarily physically connected to the mobile computing device. For example, a desktop monitor may be utilized for mirroring graphics and other information that is presented on a mobile computing device. According to certain embodiments, the display interface 106 may wirelessly communicate, for example, via a Wi-Fi channel or other available network connection interface 112 to the external/remote display.

[0073] In an example embodiment, the network connection interface 112 may be configured as a communication interface and may provide functions for rendering video, graphics, images, text, other information, or any combination thereof on the display. In one example, a communication interface may include a serial port, a parallel port, a general-purpose input and output (GPIO) port, a game port, a universal serial bus (USB), a micro-USB port, a high-definition multimedia (HDMI) port, a video port, an audio port, a Bluetooth port, a near-field communication (NFC) port, another like communication interface, or any combination thereof.

[0074] The computing device architecture 100 may include a keyboard interface 104 that provides a communication interface to a keyboard. In one example embodiment, the computing device architecture 100 may include a presence-sensitive display interface 107 for connecting to a presence-sensitive display. According to certain embodiments of the disclosed technology, the presence-sensitive display interface 107 may provide a communication interface to various devices such as a pointing device, a touch screen, a depth camera, etc., which may or may not be associated with a display.

[0075] The computing device architecture 100 may be configured to use an input device via one or more input/output interfaces (for example, the keyboard interface 104, the display interface 106, the presence-sensitive display interface 107, the network connection interface 112, the camera interface 114, the sound interface 116, etc.) to allow a user to capture information into the computing device architecture 100. The input device may include a mouse, a trackball, a directional pad, a track pad, a touch-verified track pad, a presence-sensitive track pad, a presence-sensitive display, a scroll wheel, a digital camera, a digital video camera, a web camera, a microphone, a sensor, a smartcard, and the like. Additionally, the input device may be integrated with the computing device architecture 100 or may be a separate device. For example, the input device may be an accelerometer, a magnetometer, a digital camera, a microphone, and an optical sensor.

[0076] Example embodiments of the computing device architecture 100 may include an antenna interface 110 that provides a communication interface to an antenna; a network connection interface 112 that provides a communication interface to a network. According to certain embodiments, a camera interface 114 is provided that acts as a communication interface and provides functions for capturing digital images from a camera. According to certain embodiments, a sound interface 116 is provided as a communication interface for converting sound into electrical signals using a microphone and for converting electrical signals into sound using a speaker. According to example embodiments, a random-access memory (RAM) 118 is provided, where computer instructions and data may be stored in a volatile memory device for processing by the CPU 102.

[0077] According to an example embodiment, the computing device architecture 100 includes a read-only memory (ROM) 120 where invariant low-level system code or data for basic system functions such as basic input and output (I/O), startup, or reception of keystrokes from a keyboard are stored in a non-volatile memory device. According to an example embodiment, the computing device architecture 100 includes a storage medium 122 or other suitable type of memory (e.g., RAM, ROM, programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), magnetic disks, optical disks, floppy disks, hard disks, removable cartridges, flash drives), where files including an operating system 124, application programs 126 (including, for example, a web browser application, a widget or gadget engine, and/or other applications, as necessary), and data files 128 are stored. According to an example embodiment, the computing device architecture 100 includes a power source 130 that provides an appropriate alternating current (AC) or direct current (DC) to power components. According to an example embodiment, the computing device architecture 100 includes a telephony subsystem 132 that allows the device 100 to transmit and receive sound over a telephone network. The constituent devices and the CPU 102 communicate with each other over a bus 134.

[0078] According to an example embodiment, the CPU 102 has appropriate structure to be a computer processor. In one arrangement, the CPU 102 may include more than one processing unit. The RAM 118 interfaces with the computer bus 134 to provide quick RAM storage to the CPU 102 during the execution of software programs such as the operating system, application programs, and device drivers. More specifically, the CPU 102 loads computer-executable process steps from the storage medium 122 or other media into a field of the RAM 118 in order to execute software programs. Data may be stored in the RAM 118, where the data may be accessed by the CPU 102 during execution. In one example configuration, the device architecture 100 includes at least 125 MB of RAM and 256 MB of flash memory.

[0079] The storage medium 122 itself may include a number of physical drive units, such as a redundant array of independent disks (RAID), a floppy disk drive, a flash memory, a USB flash drive, an external hard disk drive, thumb drive, pen drive, key drive, a High-Density Digital Versatile Disc (HD-DVD) optical disc drive, an internal hard disk drive, a Blu-Ray optical disc drive, or a Holographic Digital Data Storage (HDDS) optical disc drive, an external mini-dual inline memory module (DIMM) synchronous dynamic random access memory (SDRAM), or an external micro-DIMM SDRAM. Such computer readable storage media allow a computing device to access computer-executable process steps, application programs and the like, stored on removable and non-removable memory media, to off-load data from the device or to upload data onto the device. A computer program product, such as one utilizing a communication system may be tangibly embodied in storage medium 122, which may comprise a machine-readable storage medium.

[0080] According to one example embodiment, the term computing device, as used herein, may be a CPU, or conceptualized as a CPU (for example, the CPU 102 of FIG. 1). In this example embodiment, the computing device may be coupled, connected, and/or in communication with one or more peripheral devices, such as a display. In another example embodiment, the term computing device, as used herein, may refer to a mobile computing device, such as a smartphone or tablet computer. In this example embodiment, the computing device may output content to its local display and/or speaker(s). In another example embodiment, the computing device may output content to an external display device (e.g., over Wi-Fi) such as a TV or an external computing system.

[0081] In some embodiments of the disclosed technology, the computing device may include any number of hardware and/or software applications that are executed to facilitate any of the operations. In some embodiments, one or more I/O interfaces may facilitate communication between the computing device and one or more input/output devices. For example, a universal serial bus port, a serial port, a disk drive, a CD-ROM drive, and/or one or more user interface devices, such as a display, keyboard, keypad, mouse, control panel, touch screen display, microphone, etc., may facilitate user interaction with the computing device. The one or more I/O interfaces may be utilized to receive or collect data and/or user instructions from a wide variety of input devices. Received data may be processed by one or more computer processors as desired in various embodiments of the disclosed technology and/or stored in one or more memory devices.

[0082] One or more network interfaces may facilitate connection of the computing device inputs and outputs to one or more suitable networks and/or connections; for example, the connections that facilitate communication with any number of sensors associated with the system. The one or more network interfaces may further facilitate connection to one or more suitable networks; for example, a local area network, a wide area network, the Internet, a cellular network, a radio frequency network, a Bluetooth enabled network, a Wi-Fi enabled network, a satellite-based network, any wired network, any wireless network, etc., for communication with external devices and/or systems.

[0083] FIG. 2 illustrates a user 200 meditating and employing a presence-sensitive input device comprising a user interface 210, which in an exemplary embodiment can be a mobile device 210, and wearable devices 220, 230, 240, each of which may employ presence-sensitive inputs. The user 200 can use one or more of the devices 210, 220, 230, 240 to meditate. As shown, the mobile device 210 can be embodied as a smartphone running a mobile app depicting a graphical user interface with a plurality of action icons that are concurrently displayed in order to receive a user input, such as a touch input, a sound input, an image input, a shake input, or others. When activated, the action icons lead to different menus or pages within the mobile app for a presentation of the meditation content and a reception of the user input, as disclosed herein. For example, the pages can present a plurality of meditation contents, where the icons correspond to the pages in a one-to-one manner and the pages correspond to the meditation contents in a one-to-one manner.

[0084] The wearable devices embody multiple forms, such as a headphone system 220, a headband or skin patch 230, and/or a wristband or bracelet 240.

[0085] Sensing technology can be integrated in any of the devices 220, 230, 240. The sensing technology is preferably non-invasive and senses parameters of interest. For example, pulse, heart rate variability, respiration, oximetry, and other parameters can be sensed by the present invention. Further, the present invention can use sensed parameter data to calculate other important parameters that cannot be directly measured but nonetheless are useful for accurately determining a user’s state of mind and dynamically altering (if needed) what the user hears. This feedback loop enables the present invention to continually monitor user parameters and adjust delivery of audio in order to coach the user to lower stress and quiet the mind.
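
As one hedged example of a parameter calculated from sensed data rather than measured directly, the sketch below computes a common heart-rate-variability metric (RMSSD) from inter-beat intervals, such as might be obtained from a pulse or ECG sensor; the choice of metric, units, and sample values is an assumption for illustration only.

```python
# Illustrative sketch only: deriving heart-rate variability (RMSSD, in ms) from
# inter-beat (RR) intervals. The sample intervals below are made up.
import math

def rmssd(rr_intervals_ms: list[float]) -> float:
    """Root mean square of successive differences between heartbeats (milliseconds)."""
    if len(rr_intervals_ms) < 2:
        raise ValueError("need at least two inter-beat intervals")
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Example: intervals in milliseconds between successive heartbeats.
print(round(rmssd([812.0, 845.0, 790.0, 830.0]), 1))   # -> 43.6
```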

[0086] The present invention can incorporate sensing for any physiological signal, from cardiac monitoring and SpO2 to body temperature, pulse rate, respiration rate and blood pressure.

[0087] The headphone system 220, whether wired or wireless, whether circum-aural, supra-aural, open, semi-open, semi-closed, closed back, ear-fitting, or a headset, can host one or more sensors. The headphone system 220 can include a headband, or a cord, or an earpiece, such as an ear cup, an ear bud, an ear pad, an earphone, an in-ear piece, or a pivoting earphone, any of which can host one or more sensors, whether externally or internally, such as via fastening, adhering, mating, or others. In some embodiments, headphone system 220 includes noise-cancelling hardware or software.

[0088] For example, the headband or the skin patch 230 can host, whether on a flexible or rigid portion thereof, one or more sensors, whether externally or internally, such as via fastening, adhering, mating, or others.

[0089] For example, the wristband or the bracelet 240 can similarly host, whether on a flexible or rigid portion thereof, one or more sensors, whether externally or internally, such as via fastening, adhering, mating, or others.

[0090] In some exemplary embodiments, one or more of the devices 220, 230, 240 can communicate with the mobile device 210. In some exemplary embodiments, one or more of the devices 220, 230, 240 can communicate with one or more of the other devices 220, 230, 240. The communication can be in a wired or wireless manner, such as via a radio communication technique, an optical communication technique, an infrared communication technique, a sound communication technique, or others.

[0091] FIG. 3 shows a diagram of an embodiment of the user 200 meditating and employing the mobile device 210 of FIG. 2, where headband 230 is an ECG and/or an EEG sensing band, and wristband or bracelet 240 comprises, for example, a heart sensor, a pH sensor, or other biometric sensing technology, with each device configured for wired or wireless communication if a particular set-up requires same.

[0092] FIGS. 4A, 4B and 4C illustrate exemplary embodiments of the present invention. FIGS. 4A and 4C show details of an exemplary headphone system 220, while FIG. 4B illustrates the wearing of the headphone system 220 and removable headband 230.

[0093] FIG. 5 illustrates an exemplary embodiment of the present invention. The present invention can include an intuitive user interface that allows the user to interact rapidly, naturally, and without frustration. Power and mode buttons are shown on the right. The top and bottom buttons in each group of three extend further (sit higher) than the middle buttons. Details on the buttons can be used to provide the user with additional tactile feedback. The size, shape, orientation, number and other aspects of the controls/buttons are variable.

[0094] FIGS. 6A, 6B and 6C illustrate another exemplary embodiment of the present invention. The present invention, as shown in FIGS. 6A-6C, has, for example, a silverish color and metallic finish. The panels on the earcup housings have hinges inside for as perfect a fit to the ears as possible. Under the headband of the headphone system 220, an elastic secondary headband can be placed comprising, for example, three EEG sensors. This arrangement offers a customizable fit for various head sizes. The elastic secondary headband can be flexible in order to rotate to measure signals at, for example, the C3-Cz-C4, P3-Pz-P4, and/or F3-Fz-F4 electrode positions.
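
For illustration, a relative alpha-band power estimate from a single EEG channel (often treated as a relaxation indicator) might be computed as sketched below; the sampling rate, band limits, and use of SciPy's Welch power spectral density estimate are assumptions for the example rather than requirements of the disclosure.

```python
# Illustrative sketch only: relative alpha-band (8-12 Hz) power of one EEG channel,
# using an assumed 256 Hz sampling rate and a Welch power spectral density estimate.
import numpy as np
from scipy.signal import welch

def relative_alpha_power(eeg: np.ndarray, fs: float = 256.0) -> float:
    """Fraction of 1-40 Hz EEG power that falls in the 8-12 Hz alpha band."""
    freqs, psd = welch(eeg, fs=fs, nperseg=int(fs * 2))
    broadband = psd[(freqs >= 1) & (freqs <= 40)].sum()
    alpha = psd[(freqs >= 8) & (freqs <= 12)].sum()
    return float(alpha / broadband) if broadband > 0 else 0.0

# Synthetic example: 10 seconds of noise plus a 10 Hz rhythm.
fs = 256.0
t = np.arange(0, 10, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)
print(round(relative_alpha_power(eeg, fs), 2))
```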

[0095] The ear cushions can be elliptically shaped so as to wrap from the inside out, enabling extra thickness for comfort. Essential oil marks can also be placed on the cushions around the temple locations. An LED indicator can provide visual feedback of the headphone status.

[0096] FIGS. 7A-7D illustrate exemplary components of the present invention, according to preferred embodiments, and use indications. FIG. 8 shows the headphone system of FIG. 5 in zoom. FIGS. 9-11 are zoomed illustrations of FIGS. 7B-7D, respectively. FIG. 12 is another exemplary embodiment of the present invention. FIGS. 13-24 are various exemplary headphone systems of the present invention, including varying views. FIG. 25 illustrates an exemplary embodiment of the present invention in use.

[0097] Numerous characteristics and advantages have been set forth in the foregoing description, together with details of structure and function. While the invention has been disclosed in several forms, it will be apparent to those skilled in the art that many modifications, additions, and deletions, especially in matters of shape, size, and arrangement of parts, can be made therein without departing from the spirit and scope of the invention and its equivalents as set forth in the following claims. Therefore, other modifications or exemplary embodiments as may be suggested by the teachings herein are particularly reserved as they fall within the breadth and scope of the claims here appended.