

Title:
SMART WEARABLE DEVICES AND METHODS FOR OPTIMIZING OUTPUT
Document Type and Number:
WIPO Patent Application WO/2015/127062
Kind Code:
A1
Abstract:
Smart wearable devices and methods for output optimization are presented in which the smart wearable device receives input from one or more sensors, including input related to the user's biological characteristics. This input is used to determine an optimal output form. If the determined output form is different from the smart wearable device's native or default output form, the smart wearable device transcribes the output into the optimal output form using a transcription engine.

Inventors:
TANAKA NOBUO (US)
ELGORT VLADIMIR (US)
DANIELSON JACELYN (US)
KALACHEV ANTON (US)
WONG JOHN (US)
DACOSTA BEHRAM (US)
BHAT UDUPI RAMANATH (US)
COPERE LUDOVIC (US)
KATAOKA MASAKI (JP)
Application Number:
PCT/US2015/016597
Publication Date:
August 27, 2015
Filing Date:
February 19, 2015
Assignee:
SONY CORP (JP)
SONY CORP AMERICA (US)
International Classes:
A61B5/332; G08B23/00; G16H40/67
Foreign References:
US20130135108A1 (2013-05-30)
US20120059230A1 (2012-03-08)
US20090069642A1 (2009-03-12)
US20020019584A1 (2002-02-14)
US20130119255A1 (2013-05-16)
Other References:
See also references of EP 3092631A4
Attorney, Agent or Firm:
FUJII, Harold T. (Intellectual Property Department, 16530 Via Esprillo, MZ 103, San Diego, California, US)
Claims:
CLAIMS

What is claimed is:

1. A smart wearable device, the device comprising:

(a) a housing, wherein the housing encases components of a wearable smart device;

(b) one or more sensors, wherein at least one sensor is a biological sensor configured to acquire biological input;

(c) one or more output forms;

(d) a memory;

(e) one or more communications interfaces;

(f) a processor; and

(g) programming residing in a non-transitory computer readable medium, wherein the programming is executable by the computer processor and configured to:

(i) receive input from the one or more sensors, wherein the input may be acquired automatically or manually entered by a user and wherein at least some of the input is related to the user's biology;

(ii) use the received input to determine an optimal output form, wherein at least some of the input used to determine the optimal output form is related to the user's biology; and

(iii) if the device's native output form is not already in the determined optimal output form, transcribe the output into the determined optimal output form using one or more transcription engines.

2. The device of claim 1, further comprising:

one or more environmental sensors, wherein at least one environmental sensor is configured to acquire contextual input and wherein said programming is further configured to:

receive input from the one or more environmental sensors, wherein at least some of the input used to determine the optimal output form is related to the context in which the smart wearable device is operating.

3. The device of claim 1, wherein said programming is further configured to:

transmit the optimized output to another smart wearable or non-wearable device, wherein the other smart wearable device or non-wearable device is configured to convey the optimal output form to the user.

4. The device of claim 3, wherein said programming is further configured to:

use information inferred from third party data sources or past personal preferences to determine the optimal output form.

5. The device of claim 1, wherein the one or more communications interfaces are selected from the group consisting of a wired communications interface, a wireless communications interface, a cellular communications interface, a WiFi communications interface, a near field communications interface, an infrared communications interface, a ZigBee communications interface, a Z-Wave communications interface and a Bluetooth communications interface.

6. The device of claim 1, wherein said programming is further configured to:

select an optimal combination of transcription engines using embedded dedicated intelligence and processing algorithms, wherein the selection is made based on input from the user sensed in real-time and the user's characteristics.

7. The device of claim 1, wherein said programming is further configured to:

(a) receive a feedback input from the user to rate the quality of the determined optimal output form; and

(b) use the feedback input as a learning parameter to iteratively improve its determination of optimal output forms.

8. The device of claim 1, wherein the transcription engine is accessed by the smart wearable device through a stand-alone wireless connection or tethered through a wireless-enabled non-wearable device.

9. The device of claim 1, wherein the transcription engine is a natively-embedded application or queried remotely through a cloud-based access.

10. The device of claim 1, wherein one or more transcription engines are selected from the group consisting of a text to speech and speech to text engine, a natural language processing engine, an image generating engine, a sound generating engine, a vibration generating engine, a smell generating engine and an integrated third party application programming interface.

11. The device of claim 1, wherein the smart wearable device has a platform selected from the group consisting of hand worn devices, finger worn devices, wrist worn devices, head worn devices, arm worn devices, leg worn devices, ankle worn devices, foot worn devices, toe worn devices, watches, eyeglasses, rings, bracelets, necklaces, articles of jewelry, articles of clothing, shoes, hats, contact lenses, and gloves.

12. A computer implemented method for determining the most optimal output form from a smart wearable device, the method comprising:

(a) providing a smart wearable device, the smart wearable device comprising:

(i) a housing, wherein the housing encases components of a wearable smart device;

(ii) one or more sensors, wherein at least one sensor is a biological sensor configured to acquire biological input;

(iii) one or more output forms;

(iv) a memory;

(v) one or more communications interfaces; and

(vi) a processor;

(b) receiving input from the one or more sensors associated with a smart wearable device, wherein at least one sensor is a biological sensor configured to acquire biological input and wherein the input may be acquired automatically or manually entered by a user;

(c) using the received input to determine an optimal output form, wherein at least some of the input used to determine the optimal output form is related to the user's biology; and

(d) if the output information is not already in the determined optimal output form, transcribing output information into the determined optimal output form using one or more transcription engines;

(e) wherein said method is performed by executing programming on at least one computer processor, said programming residing on a non-transitory medium readable by the computer processor.

13. The method of claim 12, further comprising:

receiving input from one or more environmental sensors associated with the smart wearable device, wherein at least one environmental sensor is configured to acquire contextual input and wherein at least some of the input used to determine the optimal output form is related to the context in which the smart wearable device is operating.

14. The method of claim 12, wherein the one or more communications interfaces are selected from the group consisting of a wired communications interface, a wireless communications interface, a cellular communications interface, a WiFi communications interface, a near field communications interface, an infrared communications interface, a ZigBee communications interface, a Z-Wave communications interface and a Bluetooth communications interface.

15. The method of claim 12, further comprising:

selecting an optimal combination of transcription engines using embedded dedicated intelligence and processing algorithms, wherein the selection is made based on input from the user sensed in real-time and the user's characteristics.

16. The method of claim 12, further comprising:

(a) receiving a feedback input from the user to rate the quality of the determined optimal output form; and

(b) using the feedback input as a learning parameter to iteratively improve its determination of optimal output forms.

17. The method of claim 12, wherein the transcription engine is accessed by the smart wearable device through a stand-alone wireless connection or tethered through a wireless-enabled non-wearable device.

18. The method of claim 12, wherein the transcription engine is a natively-embedded application or queried remotely through a cloud-based access.

19. The method of claim 12, wherein one or more transcription engines are selected from the group consisting of a text to speech and speech to text engine, a natural language processing engine, an image generating engine, and an integrated third party application programming interface.

20. The method of claim 12, wherein the smart wearable device has a platform selected from the group consisting of hand worn devices, finger worn devices, wrist worn devices, head worn devices, arm worn devices, leg worn devices, ankle worn devices, foot worn devices, toe worn devices, watches, eyeglasses, rings, bracelets, necklaces, articles of jewelry, articles of clothing, shoes, hats, contact lenses, and gloves.

Description:
SMART WEARABLE DEVICES AND METHODS FOR OPTIMIZING OUTPUT

CROSS-REFERENCE TO RELATED APPLICATIONS

[0001] This application claims priority to, and the benefit of, U.S. provisional patent application serial number 61/943,837 filed on February 24, 2014, incorporated herein by reference in its entirety.

INCORPORATION-BY-REFERENCE OF COMPUTER PROGRAM APPENDIX

[0002] Not Applicable

NOTICE OF MATERIAL SUBJECT TO COPYRIGHT PROTECTION

[0003] A portion of the material in this patent document is subject to copyright protection under the copyright laws of the United States and of other countries. The owner of the copyright rights has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the United States Patent and Trademark Office publicly available file or records, but otherwise reserves all copyright rights whatsoever. The copyright owner does not hereby waive any of its rights to have this patent document maintained in secrecy, including without limitation its rights pursuant to 37 C.F.R. § 1.14.

BACKGROUND

[0004] 1. Field of the Technology

[0005] This technology pertains generally to smart wearable devices and more specifically to smart wearable devices that use sensorial input to optimize output.

[0006] 2. Discussion

[0007] Smart wearable devices are extremely limited and rigid in the way they output information, recommendations and feedback to the user. The devices have either a very basic output interface attached to them (such as a screen, audio speaker or motor actuator) or they rely on an external mobile application (installed on a smartphone or tablet, for instance) or a Web interface for a richer, more graphical output. This can make the operation of smart wearable devices difficult for some people because they are required to learn another user interface and/or language paradigm and may even have to rely on the use of an external device (such as a smartphone) in order to get the full potential from their device. Accordingly, this can limit the desire to use smart wearable devices. For example, children may not be able to read or understand textual information and may prefer to have a device display information in pictograms, videos or with entertaining icons.

[0008] Users of smart wearable devices may not be able to understand the raw information, such as the number of steps taken in a day or body temperature, that is output by current wearable devices. Disabled people are excluded from using some of the most current wearable devices as well: blind people cannot get visual feedback from smart-watches, deaf people cannot hear audible feedback from smart glasses, tetraplegic people cannot feel the haptic feedback from their personal trackers, and so on. Therefore, it is desirable to have a smart wearable device that can determine the optimal output form for a specific user.

BRIEF SUMMARY

[0009] An aspect of the present disclosure is smart wearable devices and methods for output optimization. In one exemplary embodiment, a smart wearable device receives input from one or more sensors, including input related to the user's biological characteristics. This input can be used to determine an optimal output form. If the determined output form is different from the smart wearable device's native or default output form, the smart wearable device may transcribe the output into the optimal output using a transcription engine. Examples of transcription engines include, but are not limited to, a text to speech and speech to text engine, a natural language processing engine, an image generating engine, a sound generating engine, a vibration generating engine, a smell generating engine and an integrated third party application programming interface.

[0010] Further aspects of the technology will be brought out in the following portions of the specification, wherein the detailed description is for the purpose of fully disclosing preferred embodiments of the technology without placing limitations thereon.

BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWING(S)

[0011] The technology described herein will be more fully understood by reference to the following drawings which are for illustrative purposes only:

[0012] FIG. 1 is a schematic diagram of an embodiment of a smart wearable network described herein.

[0013] FIG. 2 is a functional block diagram of an embodiment of a smart wearable device described herein.

[0014] FIG. 3 is a schematic diagram illustrating an embodiment of a smart wearable device optimizing output given specific input related to a user.

[0015] FIG. 4 is a flow diagram of an exemplary method of a smart wearable device optimizing output given specific input related to a user.

DETAILED DESCRIPTION

[0016] The present disclosure generally pertains to wearable devices that are capable of, for example, performing an action based on one or more biological or physiological characteristics of the user wearing the device. Using one or more sensors, a processor, and code executable on the processor, a wearable device can be configured to sense and process characteristics that include, but are not limited to, a wearer's physical characteristics such as gender, weight, height, body temperature, skin temperature, heart rate, respiration, blood sugar level, blood glucose level, stress/fatigue, galvanic skin response, ingestion (protein), digestion rate, metabolic rate, blood chemistry, sweat, core and skin temperature, vital signs, eye dryness, tooth decay, gum disease, energy storage, calorie burn rate, mental alertness, cardiac rhythm, sleep patterns, caffeine content, vitamin content, hydration, blood oxygen saturation, blood cortisol level, blood pressure, cholesterol, lactic acid level, body fat, protein level, hormone level, muscle mass, pH, etc. Such conditions may also include, but are not limited to, position (e.g., prone, upright), movement, or physical state (e.g., sleeping, exercising), etc.

[0017] A wearable device may include one or more output devices that include, but are not limited to, haptic output devices (e.g., offset motors, electroactive polymers, capacitive voltage generators, Peltier temperature elements, contracting materials, Braille coding actuators), telemetry devices, visual devices, audible devices, and other output devices.

[0018] A wearable device may include artificial intelligence so that the device can learn and adapt to the wearer. The device may be configured to accurately discriminate between erroneous (accidental, unintended, etc.) and valid sensory inputs, thereby developing accurate conclusions about a wearer's physical state or characteristics (e.g., the device does not interpret a wearer rolling over in their sleep as the wearer exercising). The device may also include one or more cameras or other visual sensors for facial, user, or other image recognition. A wearable device may also be configured to transmit information to and/or retrieve information from a wearer's digital health history.

[0019] A wearable device may be configured to output information to a user, to another wearable device, to a non-wearable device, or to a network according to the particular features and function of the device.

[0020] A. Generalized System Implementation.

[0021] FIG. 1 illustrates a generalized networked infrastructure (e.g., system) 100 that includes a network 102. The network could, for example, be a local area network or a wide area network such as the Internet. One or more smart wearable devices 104-1 through 104-n according to embodiments of the technology described herein may be enabled to communicate with the network 102 through a wired or wireless connection 106. Further, one or more of the smart wearable devices may be enabled to communicate with another smart wearable device through the network 102 or by means of a direct wired or wireless connection 108.

[0022] One or more of the smart wearable devices 104-1 through 104-n also may be enabled to communicate with one or more non-wearable devices 110-1 through 110-n. The non-wearable devices, which are beyond the scope of this disclosure, may be any conventional "smart" device with a processor, associated operating system, and communications interface. Examples of non-wearable devices include Smartphones, tablet computers, laptop computers, desktop computers, and set top boxes. Any of the non-wearable devices may be of a type enabled to communicate with an external device through a wired or wireless connection. In that case, one or more of the smart wearable devices may be enabled to communicate with one or more of the non-wearable devices by means of a direct wired or wireless connection 112. Further, one or more of the non-wearable devices may be of a type enabled to communicate with the network 102 through a standard wired or wireless connection 114. In that case, one or more of the smart wearable devices may be enabled to communicate with one or more of the non-wearable devices through the network 102.

[0023] One or more servers 116-1 through 116-n may be provided in a client-server configuration and connected to the network by means of a wired or wireless connection 118. The servers may include standalone servers, cluster servers, networked servers, or servers connected in an array to function like a large computer. In that case, one or more of the smart wearable devices may be enabled to communicate with one or more of the servers.

[0024] FIG. 2 illustrates a generalized embodiment of a smart wearable device according to the technology described herein. It will be appreciated that the embodiment shown may be modified or customized to enable performing the functions described herein. In the exemplary embodiment shown, the smart wearable device includes an "engine" 200 having a processor 202, memory 204, and application software code 206. The processor 202 can be any suitable conventional processor. The memory 204 may include any suitable conventional RAM type memory and/or ROM type memory with associated storage space for storing the application programming code 206.

[0025] A conventional wired or wireless communications module 208 (e.g., transmitter or receiver or transceiver) may be included as needed for performing one or more of the functions of the smart wearable device described herein. Examples of wireless communication capabilities that can be provided include, but are not limited to, Bluetooth, Wi-Fi, infrared, cellular, ZigBee, Z-Wave and near field communication. One or more conventional interfaces or controllers 210 may also be provided if needed. Examples of interfaces or controllers include, but are not limited to, analog to digital converters, digital to analog converters, buffers, etc.

[0026] The device may include at least one input 212 for a biological or physiological sensor for providing input to the device to perform one or more of the functions described herein. Sensor inputs 214-1 through 214-n for optional sensors may be included as well. These optional input sensors may include, but are not limited to, accelerometers, temperature sensors, altitude sensors, motion sensors, position sensors, and other sensors to perform the function(s) described herein. One or more conventional interfaces or controllers 216 may be provided if needed for the sensors. Examples of interfaces or controllers include, but are not limited to, analog to digital converters, digital to analog converters, buffers, etc.

[0027] Additionally, the device may include one or more outputs 218-1 through 218-n to drive one or more output devices (and include those output devices). These output devices may include, but are not limited to, haptic output devices, telemetry devices, visual devices, audible devices, and other output devices to perform the functions described herein. One or more conventional interfaces or controllers 220 may be provided if needed for the output devices. Examples of interfaces or controllers include, but are not limited to, analog to digital converters, digital to analog converters, buffers, etc.

[0028] A user input 222 may be provided according to the functions described herein. The user input may, for example, initiate one or more functions, terminate one or more functions, or intervene in a running process. The user input can be any conventional input device, including but not limited to, manual switches, touch sensors, magnetic sensors, proximity sensors, etc. One or more conventional interfaces or controllers 224 may be provided if needed for the user input. Examples of interfaces or controllers include, but are not limited to, analog to digital converters, digital to analog converters, buffers, etc.

[0029] Depending on the function(s) described herein, the engine 200 may also include a feedback loop 226 for machine learning or other adaptive functions. The feedback loop may also provide for device calibration.

[0030] It will be appreciated that a smart wearable device as described herein would necessarily include a housing or carrier for the above-described components. It will further be appreciated that, as used herein, the term "smart wearable device" means a device that would be worn or otherwise associated with the body of a user and be "connected" to the user by means of at least one sensor for sensing one or more biological or physiological conditions of the user.

[0031] The particular form of the housing or carrier (i.e., wearable platform) can vary according to choice and suitability for performing the functions described herein. Examples of wearable platforms include, but are not limited to, hand worn devices, finger worn devices, wrist worn devices, head worn devices, arm worn devices, leg worn devices, ankle worn devices, foot worn devices, toe worn devices, watches, eyeglasses, rings, bracelets, necklaces, articles of jewelry, articles of clothing, shoes, hats, contact lenses, gloves, etc.

[0032] It will further be appreciated that the input sensors and output devices may be integrated into the wearable platform, or may be external to the wearable platform, as is desired and/or suitable for the function(s) of the smart wearable device.

[0033] B. Smart Wearable Device and Methods for Output Optimization.

[0034] A smart wearable device that can automatically or semi-automatically translate, transcribe, render or otherwise adapt its output from its "native form" to another type (or multiple types) of output form which can be more easily, quickly or deeply understood (and acted upon) by the specific user is described herein. This includes, but is not limited to: transforming text into a dynamically-generated picture or video, transforming visual output into audio or haptics (or vice versa), reducing the complexity of the information (or increasing it in the case it is to be read by professionals such as health providers), etc. The smart wearable device can determine which output form is optimal for a particular user by analyzing the sensor input from the user. Such input can be acquired automatically or may be manually entered into the device by the user.

[0035] Referring now to FIG. 3, a schematic diagram 300 is shown in which a wearable smart device 104-1 may include sensors 214-1, 214-n for acquiring input, including but not limited to, biological sensors 212 that are configured to collect input related to biological characteristics of the user 302 (see also FIG. 2). Such biological characteristics may be, but are not limited to, age, gender, education level, mental status, health conditions, etc. The smart wearable device may also use third party information, such as information from social media or e-mail messages, to optimize an output form. Optionally, past personal preferences, such as a favorite type of output given a specific situation or saved manual preset output forms, may be used to optimize a particular output form. In the example embodiment shown in FIG. 3, the smart wearable device may automatically acquire input 304 from one of its sensors, such as a biological sensor 212. A user may also enter any desired input manually into the smart wearable device, such as personal characteristics or schedules. The smart wearable device can then use this input 304 to determine the optimal form for the device's output. The optimal output form selected by the smart wearable device may be, but is not limited to, images 306, sound, such as music tones or voices 308, haptic signals 310 and lights 312, or a combination of these output form examples.
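By way of a non-limiting illustration only, the following Python sketch shows one way such a rule-based selection of an output form from sensed and manually entered user characteristics might be expressed. The profile fields, the select_output_form function, and the numeric thresholds are simplified assumptions introduced here for illustration and are not taken from this disclosure.

# Illustrative sketch only: a simplified, rule-based choice of output form
# from sensed and manually entered user characteristics (paragraph [0035]).
# All field names and thresholds are hypothetical.
from dataclasses import dataclass
from typing import Optional

@dataclass
class UserProfile:
    age: Optional[int] = None              # may be entered manually by the user
    can_read: bool = True
    visually_impaired: bool = False
    hearing_impaired: bool = False
    preferred_form: Optional[str] = None   # saved manual preset, if any

def select_output_form(profile: UserProfile, ambient_light_lux: float) -> str:
    """Return one of 'text', 'image', 'audio', or 'haptic'."""
    if profile.preferred_form:                       # a saved past preference wins
        return profile.preferred_form
    if profile.visually_impaired:                    # visual feedback is not usable
        return "haptic" if profile.hearing_impaired else "audio"
    if (profile.age is not None and profile.age < 7) or not profile.can_read:
        return "image"                               # pictograms instead of text
    if ambient_light_lux < 5.0:                      # dark environment: avoid a bright display
        return "audio"
    return "text"                                    # fall back to the device's native form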

[0036] If the native or default output form is determined to be different than the determined optimal output form, the smart wearable device may transcribe the output information into the optimal output form using one or more transcription engines. This transcription engine may be accessed by the smart wearable device through a stand-alone wireless connection or tethered through a wireless-enabled non-wearable device, for example. The transcription engine may also be a natively-embedded application or queried remotely through a cloud-based access. The smart wearable device may select a specific transcription engine, such as a text to speech and speech to text engine, a natural language processing engine, an image generating engine, a sound generating engine, a vibration generating engine, a smell generating engine or an integrated third party application programming interface, such as a medical dictionary, foreign language dictionary, or sign language directory. The image generating engine can combine a set of basic patterns and images (either stored locally or remotely) into a visual image/video output. For example, the device could pull the user's Facebook picture, extract the user's face, and assemble it with a colored background to visually indicate positive (or negative) feedback.
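A minimal sketch of dispatching output through a selected transcription engine, under the assumption of a simple engine registry, might look as follows. The TRANSCRIPTION_ENGINES mapping, the transcribe function, and the placeholder engine bodies are hypothetical stand-ins for the engines described above, not an actual implementation.

# Illustrative sketch only: route output through a transcription engine when
# the native form differs from the determined optimal form. The registry and
# call signatures are assumptions; trivial lambdas stand in for real engines.
from typing import Callable, Dict

TRANSCRIPTION_ENGINES: Dict[str, Callable[[str], bytes]] = {
    "audio": lambda text: text.encode("utf-8"),          # stand-in for a text to speech engine
    "image": lambda text: text.encode("utf-8"),          # stand-in for an image generating engine
    "haptic": lambda text: b"\x01" * min(len(text), 8),  # stand-in for a vibration generating engine
}

def transcribe(native_output: str, native_form: str, optimal_form: str) -> bytes:
    """Transcribe the output only if the optimal form differs from the native form."""
    if optimal_form == native_form:
        return native_output.encode("utf-8")
    engine = TRANSCRIPTION_ENGINES.get(optimal_form)
    if engine is None:
        raise ValueError("no transcription engine available for form: " + optimal_form)
    return engine(native_output)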

[0037] Referring back to FIG. 3, once the output has been optimized, the smart wearable device may convey the optimized output to the user 302 itself or it may transmit 316 the optimized output to one or more non-wearable devices 110-1, 110-n or another smart wearable device 104-n and that device may then convey the optimized output to the user 302.

[0038] Referring now to FIG. 4, a flow diagram 400 is shown, which illustrates how one embodiment of the smart wearable device and methods may be used to optimize its output. The smart wearable device may receive input from a sensor that may be internal or external to the smart wearable device 410. Although at least one of the sensors may be biological, acquiring biological input from the user, other sensors may also be used to collect input, such as an environmental sensor that may collect input related to the context in which the output will be conveyed 420. The smart wearable device may use this received input to determine an optimal output form 430. If the smart wearable device's native or default output form is different from the determined optimal output form, the smart wearable device may then transcribe the output into the optimal form using one or more transcription engines 440. Once the optimal output is achieved, the smart wearable device may convey the optimized output to a user itself 450 or the smart wearable device may transmit the optimized output to an alternative device 460 and that device may then convey the optimized output 450.
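Tying the blocks of FIG. 4 together, a purely illustrative sketch of the overall flow is given below. It assumes the select_output_form and transcribe helpers from the earlier sketches and a hypothetical device object whose methods (read_biological_sensors, read_ambient_light, supports, convey, transmit_to_paired_device) stand in for the sensor, output and communications interfaces; none of these names is taken from the device programming itself.

# Illustrative sketch only: the flow of FIG. 4, assuming the helpers defined
# in the earlier sketches and a hypothetical `device` object.
def optimize_and_deliver(device, native_output: str, native_form: str = "text") -> None:
    # Blocks 410/420: receive biological and environmental (contextual) input.
    profile = device.read_biological_sensors()        # hypothetical device call
    ambient_light_lux = device.read_ambient_light()   # hypothetical device call
    # Block 430: determine the optimal output form from the received input.
    optimal_form = select_output_form(profile, ambient_light_lux)
    # Block 440: transcribe only if the native form is not already optimal.
    payload = transcribe(native_output, native_form, optimal_form)
    # Blocks 450/460: convey the optimized output directly, or transmit it to
    # another wearable or non-wearable device that can convey it to the user.
    if device.supports(optimal_form):
        device.convey(payload, optimal_form)
    else:
        device.transmit_to_paired_device(payload, optimal_form)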

[0039] A smart wearable device may reduce barriers to using the device by providing outputs specifically tailored to the user. Additionally, it will enable a single model of device to be used in a variety of ways and by a broader population and may also make wearable devices' outputs (especially in the case of health/fitness monitoring) useful for both the consumer wearer ("B2C" output type) as well as potential healthcare professionals ("B2B" output).

[0040] In one embodiment, the smart wearable device may measure, via GPS or other mechanism, the distance travelled by a user during a run. If the wearer has a personal trainer helping the wearer train for a marathon, for example, the distance information can be communicated to the trainer's wearable or non-wearable device in the optimized format of a map displaying the running route. The map information can provide richer detail to the trainer who can use this information to develop better training routines for the wearer in training.

[0041] In another embodiment, the watch and biological sensor components of the smart wearable device can measure the pulse rate of the wearer. Instead of displaying the pulse rate data on the screen, the smart wearable device can communicate the pulse rate to the wearer aurally or with a haptic output. Although communicating a wearer's actual heart rate by haptic feedback may be overwhelming (e.g. 140 beats per minute in haptic feedback or a tone sounding 140 times in one minute), the wearable device can be programmed to determine which of two or three bands the wearer's pulse rate fits within and generate a tone or haptic response specific to that particular band. An example of the pulse bands could be: < 100 beats/min; 100-120 beats/min; 130-140 beats/min; >140 beats/min. In some cases, the aural mode may be optimum because the user has selected this mode as the preferred communication mode (rather than looking at the watch display). On the other hand, the programming could determine that the device is in a low light environment and automatically switch the device to an aural or haptic output mode so that the bright display doesn't distract the wearer or unnecessarily drain power from the battery by running the bright display.
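As a purely illustrative sketch, the banding described above might be coded as follows. The band edges follow the example in this paragraph (with the gap between 120 and 130 beats/min closed here for simplicity), and the encoding of one short pulse per band level is an assumption made for illustration.

# Illustrative sketch only: collapse a measured pulse rate into a coarse band
# and emit a band-specific cue rather than one pulse per heartbeat.
def pulse_band(bpm: float) -> int:
    """Map a pulse rate to a band index (0 = lowest band)."""
    if bpm < 100:       # < 100 beats/min
        return 0
    if bpm <= 120:      # 100-120 beats/min
        return 1
    if bpm <= 140:      # up to 140 beats/min (the 120-130 gap in the text's
        return 2        # example is closed here purely for illustration)
    return 3            # > 140 beats/min

def band_cue(bpm: float) -> list:
    """One short pulse per band level, e.g. band index 2 -> three 0.2 s pulses."""
    return [0.2] * (pulse_band(bpm) + 1)

# Example: a wearer running at 138 beats/min receives three short haptic pulses
# (or three tones) instead of 138 individual beats.
assert band_cue(138.0) == [0.2, 0.2, 0.2]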

[0042] In another embodiment of the smart wearable device, the user could rate the output optimization decision that the smart wearable device has made and the smart wearable device may then improve its automated output transcription.
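The disclosure does not specify a particular learning method for using such ratings; as one hypothetical sketch, the device could keep a running score per output form from the wearer's ratings and prefer highly rated forms in subsequent decisions, for example:

# Illustrative sketch only: an exponential moving average per output form is
# an assumption, not the application's actual learning mechanism.
from collections import defaultdict

class OutputFormLearner:
    """Keep a running score per output form from user ratings and bias
    future selections toward the forms the wearer has rated highly."""

    def __init__(self, smoothing: float = 0.3):
        self.scores = defaultdict(lambda: 0.5)   # neutral prior for unseen forms
        self.smoothing = smoothing

    def record_rating(self, form: str, rating: float) -> None:
        """rating is in [0, 1]; 1.0 means the wearer liked the chosen form."""
        old = self.scores[form]
        self.scores[form] = (1.0 - self.smoothing) * old + self.smoothing * rating

    def rank(self, candidate_forms: list) -> list:
        """Order candidate output forms by learned score, best first."""
        return sorted(candidate_forms, key=lambda f: self.scores[f], reverse=True)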

[0043] Embodiments of the present technology may be described with reference to flowchart illustrations of methods and systems according to embodiments of the technology, and/or algorithms, formulae, or other computational depictions, which may also be implemented as computer program products. In this regard, each block or step of a flowchart, and combinations of blocks (and/or steps) in a flowchart, algorithm, formula, or computational depiction can be implemented by various means, such as hardware, firmware, and/or software including one or more computer program instructions embodied in computer-readable program code logic. As will be appreciated, any such computer program instructions may be loaded onto a computer, including without limitation a general purpose computer or special purpose computer, or other programmable processing apparatus to produce a machine, such that the computer program instructions which execute on the computer or other programmable processing apparatus create means for implementing the functions specified in the block(s) of the flowchart(s).

[0044] Accordingly, blocks of the flowcharts, algorithms, formulae, or computational depictions support combinations of means for performing the specified functions, combinations of steps for performing the specified functions, and computer program instructions, such as embodied in computer-readable program code logic means, for performing the specified functions. It will also be understood that each block of the flowchart illustrations, algorithms, formulae, or computational depictions and combinations thereof described herein, can be implemented by special purpose hardware-based computer systems which perform the specified functions or steps, or combinations of special purpose hardware and computer-readable program code logic means.

[0045] Furthermore, these computer program instructions, such as embodied in computer-readable program code logic, may also be stored in a computer-readable memory that can direct a computer or other programmable processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the block(s) of the flowchart(s). The computer program instructions may also be loaded onto a computer or other programmable processing apparatus to cause a series of operational steps to be performed on the computer or other programmable processing apparatus to produce a computer-implemented process such that the instructions which execute on the computer or other programmable processing apparatus provide steps for implementing the functions specified in the block(s) of the flowchart(s), algorithm(s), formula(e), or computational depiction(s).

[0046] It will further be appreciated that "programming" as used herein refers to one or more instructions that can be executed by a processor to perform a function as described herein. The programming can be embodied in software, in firmware, or in a combination of software and firmware. The programming can be stored local to the device in non-transitory media, or can be stored remotely such as on a server, or all or a portion of the programming can be stored locally and remotely. Programming stored remotely can be downloaded (pushed) to the device by user initiation, or automatically based on one or more factors, such as, for example, location, a timing event, detection of an object, detection of a facial expression, detection of location, detection of a change in location, or other factors. It will further be appreciated that, as used herein, the terms processor, central processing unit (CPU), and computer are used synonymously to denote a device capable of executing the programming and communicating with input/output interfaces and/or peripheral devices.

[0047] From the discussion above it will be appreciated that the technology can be embodied in various ways, including but not limited to the following:

[0048] 1. A smart wearable device, the device comprising: (a) a housing, wherein the housing encases components of a wearable smart device; (b) one or more sensors, wherein at least one sensor is a biological sensor configured to acquire biological input; (c) one or more output forms; (d) a memory; (e) one or more communications interfaces; (f) a processor; and (g) programming residing in a non-transitory computer readable medium, wherein the programming is executable by the computer processor and configured to: (i) receive input from the one or more sensors, wherein the input may be acquired automatically or manually entered by a user and wherein at least some of the input is related to the user's biology; (ii) use the received input to determine an optimal output form, wherein at least some of the input used to determine the optimal output form is related to the user's biology; and (iii) if the device's native output form is not already in the determined optimal output form, transcribe the output into the determined optimal output form using one or more transcription engines.

[0049] 2. The device of any preceding embodiments, further comprising: one or more environmental sensors, wherein at least one environmental sensor is configured to acquire contextual input and wherein said programming is further configured to: receive input from the one or more environmental sensors, wherein at least some of the input used to determine the optimal output form is related to the context in which the smart wearable device is operating.

[0050] 3. The device of any preceding embodiments, wherein said programming is further configured to: transmit the optimized output to another smart wearable or non-wearable device, wherein the other smart wearable device or non-wearable device is configured to convey the optimal output form to the user.

[0051] 4. The device of any preceding embodiments, wherein said programming is further configured to: use information inferred from third party data sources or past personal preferences to determine the optimal output form.

[0052] 5. The device of any preceding embodiments, wherein the one or more communications interfaces are selected from the group consisting of a wired communications interface, a wireless communications interface, a cellular communications interface, a WiFi communications interface, a near field communications interface, an infrared communications interface, a ZigBee communications interface, a Z-Wave communications interface and a Bluetooth communications interface.

[0053] 6. The device of any preceding embodiments, wherein said programming is further configured to: select an optimal combination of transcription engines using embedded dedicated intelligence and processing algorithms, wherein the selection is made based on input from the user sensed in real-time and the user's characteristics.

[0054] 7. The device of any preceding embodiments, wherein said programming is further configured to: (a) receive a feedback input from the user to rate the quality of the determined optimal output form; and (b) use the feedback input as a learning parameter to iteratively improve its determination of optimal output forms.

[0055] 8. The device of any preceding embodiments, wherein the transcription engine is accessed by the smart wearable device through a stand-alone wireless connection or tethered through a wireless-enabled non-wearable device.

[0056] 9. The device of any preceding embodiments, wherein the transcription engine is a natively-embedded application or queried remotely through a cloud-based access.

[0057] 10. The device of any preceding embodiments, wherein one or more transcription engines are selected from the group consisting of a text to speech and speech to text engine, a natural language processing engine, an image generating engine, a sound generating engine, a vibration generating engine, a smell generating engine and an integrated third party application programming interface.

[0058] 11. The device of any preceding embodiments, wherein the smart wearable device has a platform selected from the group consisting of hand worn devices, finger worn devices, wrist worn devices, head worn devices, arm worn devices, leg worn devices, ankle worn devices, foot worn devices, toe worn devices, watches, eyeglasses, rings, bracelets, necklaces, articles of jewelry, articles of clothing, shoes, hats, contact lenses, and gloves.

[0059] 12. A computer implemented method for determining the most optimal output form from a smart wearable device, the method comprising: (a) providing a smart wearable device, the smart wearable device comprising: (i) a housing, wherein the housing encases components of a wearable smart device; (ii) one or more sensors, wherein at least one sensor is a biological sensor configured to acquire biological input; (iii) one or more output forms; (iv) a memory; (v) one or more communications interfaces; and (vi) a processor; (b) receiving input from the one or more sensors associated with a smart wearable device, wherein at least one sensor is a biological sensor configured to acquire biological input and wherein the input may be acquired automatically or manually entered by a user; (c) using the received input to determine an optimal output form, wherein at least some of the input used to determine the optimal output form is related to the user's biology; and (d) if the output information is not already in the determined optimal output form, transcribing output information into the determined optimal output form using one or more transcription engines; (e) wherein said method is performed by executing programming on at least one computer processor, said programming residing on a non-transitory medium readable by the computer processor.

[0060] 13. The method of any preceding embodiments, further comprising: receiving input from one or more environmental sensors associated with the smart wearable device, wherein at least one environmental sensor is configured to acquire contextual input and wherein at least some of the input used to determine the optimal output form is related to the context in which the smart wearable device is operating.

[0061] 14. The method of any preceding embodiments, wherein the one or more communications interfaces are selected from the group consisting of a wired communications interface, a wireless communications interface, a cellular communications interface, a WiFi communications interface, a near field communications interface, an infrared communications interface, a ZigBee communications interface, a Z-Wave communications interface and a Bluetooth communications interface.

[0062] 15. The method of any preceding embodiments, further comprising: selecting an optimal combination of transcription engines using embedded dedicated intelligence and processing algorithms, wherein the selection is made based on input from the user sensed in real-time and the user's characteristics.

[0063] 16. The method of any preceding embodiments, further comprising: (a) receiving a feedback input from the user to rate the quality of the determined optimal output form; and (b) using the feedback input as a learning parameter to iteratively improve its determination of optimal output forms.

[0064] 17. The method of any preceding embodiments, wherein the transcription engine is accessed by the smart wearable device through a stand-alone wireless connection or tethered through a wireless-enabled non-wearable device.

[0065] 18. The method of any preceding embodiments, wherein the transcription engine is a natively-embedded application or queried remotely through a cloud-based access.

[0066] 19. The method of any preceding embodiments, wherein one or more transcription engines are selected from the group consisting of a text to speech and speech to text engine, a natural language processing engine, an image generating engine, and an integrated third party application programming interface.

[0067] 20. The method of any preceding embodiments, wherein the smart wearable device has a platform selected from the group consisting of hand worn devices, finger worn devices, wrist worn devices, head worn devices, arm worn devices, leg worn devices, ankle worn devices, foot worn devices, toe worn devices, watches, eyeglasses, rings, bracelets, necklaces, articles of jewelry, articles of clothing, shoes, hats, contact lenses, and gloves.

[0068] Although the description above contains many details, these should not be construed as limiting the scope of the technology but as merely providing illustrations of some of the presently preferred embodiments of this technology. Therefore, it will be appreciated that the scope of the present technology fully encompasses other embodiments which may become obvious to those skilled in the art, and that the scope of the present technology is accordingly to be limited by nothing other than the appended claims, in which reference to an element in the singular is not intended to mean "one and only one" unless explicitly so stated, but rather "one or more." All structural, chemical, and functional equivalents to the elements of the above-described preferred embodiment that are known to those of ordinary skill in the art are expressly incorporated herein by reference and are intended to be encompassed by the present claims. Moreover, it is not necessary for a device or method to address each and every problem sought to be solved by the present technology, for it to be encompassed by the present claims. Furthermore, no element, component, or method step in the present disclosure is intended to be dedicated to the public regardless of whether the element, component, or method step is explicitly recited in the claims. No claim element herein is to be construed under the provisions of 35 U.S.C. 112 unless the element is expressly recited using the phrase "means for" or "step for".