


Title:
VIRTUALIZED USER-INTERFACE DEVICE
Document Type and Number:
WIPO Patent Application WO/2022/020452
Kind Code:
A1
Abstract:
A virtualized transformative user-interface device includes a monolithic non-transformable body having a plurality of outside-facing surfaces and a plurality of display panels respectively disposed along at least some of the plurality of outside-facing surfaces. The device detects whether mechanical user input forces are consistent with an intent to move or transform the virtualized transformative user-interface device, and in response to a positive determination, the device emulates transformative movement, via changing graphical images on the plurality of display panels.

Inventors:
FILIN MAXIM (US)
OSIPOV ILYA (US)
ORLOV SEMYON
Application Number:
PCT/US2021/042549
Publication Date:
January 27, 2022
Filing Date:
July 21, 2021
Assignee:
CUBIOS INC (US)
International Classes:
G06F3/01; G06F3/044; G06F3/0484; G06F3/14
Foreign References:
US20120302303A12012-11-29
US20190358549A12019-11-28
US20180311566A12018-11-01
US20180161668A12018-06-14
US20140223378A12014-08-07
Attorney, Agent or Firm:
BRAGINSKY, Vadim (US)
Claims:

1. A virtualized transformative user-interface device, comprising: a monolithic non-transformable body having a plurality of outside-facing surfaces; a plurality of display panels respectively disposed along at least some of the plurality of outside-facing surfaces; a plurality of sensors arranged in communication with the display panels such that at least one of the plurality of sensors responds by changing its electrical state whenever a mechanical force is applied proximate a corresponding at least one of the display panels; and processor circuitry operative to:

(a) cause displays of a first set of graphical images on the plurality of display panels;

(b) read a set of electrical states of the plurality of sensors;

(c) determine whether the set of electrical states represent mechanical user input forces consistent with an intent to move or transform the virtualized transformative user-interface device, and

(d) in response to a positive determination in (c), cause displays of a second set of graphical images on the plurality of display panels to emulate transformative movement of a transformative device.

2. The device of claim 1, wherein the second set of graphical images is animated to provide an appearance of movement of a transformative user-interface device.

3. The device of claim 1, wherein at least one of the plurality of sensors is arranged in mechanical communication with one of the plurality of display panels to facilitate detection of a mechanical force applied to the display panel.

4. The device of claim 1, further comprising an accelerometer operatively coupled with the processor circuitry and operative to facilitate detection of movement of the device.

5. The device of claim 1, further comprising a gyroscope operatively coupled with the processor circuitry and operative to facilitate detection of rotation of the device.

6. The device of claim 1, further comprising a video camera operatively coupled with the processor circuitry.

7. The device of claim 1, further comprising a speaker operatively coupled with the processor circuitry.

8. The device of claim 1, further comprising a vibration actuator operatively coupled with the processor circuitry and operative to emulate transformative movement of a transformative user- interface device in response to the positive determination in (c).

9. The device of claim 1, wherein at least one of the sensors is a force sensor.

10. The device of claim 1, wherein at least one of the sensors is a resistive touch sensor.

11. The device of claim 1, wherein at least one of the sensors is an optical sensor.

12. The device of claim 1, wherein at least one of the sensors is a capacitive proximity sensor.

13. The device of claim 1, wherein the device is generally shaped as a cube comprising six display panels disposed on its faces.

14. The device of claim 1, wherein at least two of the plurality of sensors are arranged on a sensor plate immediately adjacent to a corresponding display panel of the plurality of display panels such that the sensors are operative to detect mechanical force applied to the display panel.

15. The device of claim 1, wherein at least a portion of the sensors are integrated into a cube edge defined by two adjacent display panels of the plurality of display panels.

16. The device of claim 1, further comprising: a core; and a plurality of axles, each axle of the plurality having a proximal end and a distal end; wherein: the distal end is arranged in mechanical communication with a display panel of the plurality of display panels; the proximal end is arranged in mechanical communication with the core; and sensors from the plurality are arranged to facilitate detection of force acting between each axle of the plurality of axles and the core.

17. The device of claim 16, wherein the core comprises sleeves arranged to receive the proximal ends of the axles from the plurality of axles.

18. The device of claim 16, wherein the device is generally shaped as a cube comprising six display panels disposed on its faces.

19. The device of claim 16, wherein the core comprises at least a portion of the sensors.

20. A method for operating a virtualized transformative user-interface device, the method comprising: providing a monolithic non-transformable device having a plurality of outside-facing surfaces and a plurality of display panels respectively disposed along at least some of the plurality of outside-facing surfaces; autonomously causing displays of a first set of graphical images on the plurality of display panels; autonomously reading a set of electrical states of a plurality of sensors of the device; autonomously determining whether the set of electrical states represent mechanical user input forces consistent with an intent to move or transform the virtualized transformative user-interface device; and in response to a positive determination in the determining, autonomously causing displays of a second set of graphical images on the plurality of display panels to emulate transformative movement of a transformative device.

Description:
VIRTUALIZED USER-INTERFACE DEVICE

PRIOR APPLICATION

[0001] This Application claims the benefit of U.S. Provisional Application Serial No. 63/054,272, filed July 21, 2020, entitled “VOLUMETRIC MIXED REALITY DEVICE,” the disclosure of which is incorporated by reference herein.

TECHNICAL FIELD

[0002] Aspects of the embodiments relate generally to electronics, transducer, and data-processing technologies and, more particularly, to handheld computing devices comprising user-interfaces.

BACKGROUND

[0003] Augmented- or mixed-reality games available in the market generally use either a camera or geolocation as real-world inputs. Under the first approach, a game processes video camera images of its surrounding “real” environment and superimposes additional, “virtual” elements on them. For example, a cell phone game called “Mosquitos,” released circa 2004, displayed a phone camera image on the screen of the phone and overlaid images of giant mosquitoes on it; the player’s objective was to shoot the mosquitoes using superimposed crosshairs.

[0004] Under the second approach, geolocation is used to combine virtual objects with the geography or topography of the real world. In late 2016, the multiplayer “Pokemon Go” game gained a significant following; this game employs both techniques, namely, it superimposes virtual objects on camera-captured images and links events and objects to the real-world map using geolocation.

[0005] Recently, significant developments have occurred in so-called “Transreality Puzzles”, a subset of mixed-reality devices, whereby a user interacts with a transformable input device physically via positioning, slanting, or turning its elements, thus affecting events in virtual space, with virtual objects being correlated to physical ones.

[0006] Virtual objects in transreality puzzles may be displayed on a separate display, such as a monitor or a wearable VR/AR headset, communicatively coupled to the transformable input device, with the latter receiving mechanical inputs from the user. In some configurations, virtual objects may be displayed, manipulated, and transformed on a display or a plurality of displays placed on the outside surfaces of the transformable input device itself.

[0007] The unique experience delivered by such transreality puzzles is based on integrating active three-dimensional fine-motor user inputs with purposely-engineered sensory, visual and haptic feedback.

[0008] Presently available transreality gaming devices comprise multiple moving parts requiring mutual rotations or positional shifts, leading to significant production costs and to limited reliability stemming from mechanical movement, contamination of moving surfaces, and electrical-connection complexity, as well as to hazards posed by small mechanical parts.

BRIEF DESCRIPTION OF THE DRAWINGS

[0009] In the drawings, which are not necessarily drawn to scale, like numerals may describe similar components in different views. Like numerals having different letter suffixes may represent different instances of similar components. Some embodiments are illustrated by way of example, and not limitation, in the figures of the accompanying drawings.

[0010] FIG. 1 is a perspective-view diagram illustrating a virtualized transformative user-interface device according to some embodiments.

[0011] FIG. 2A is a process flow diagram illustrating basic functionality of the device of FIG. 1 according to some examples.

[0012] FIG. 2B is a diagram illustrating exemplary user interactions with a volumetric mixed reality device according to some embodiments.

[0013] FIG. 3 is a simplified exploded-view diagram of a display sensor according to an example.

[0014] FIG. 4 is a simplified perspective-view diagram illustrating some interior components of a device according to some embodiments.

[0015] FIG. 5 is a simplified perspective-view diagram illustrating a core for use in a device according to some embodiments.

[0016] FIG. 6 is a front elevational-view diagram illustrating a sleeve for use in a device according to some embodiments.

[0017] FIG. 7 is a simplified perspective-view diagram illustrating a device according to some examples.

[0018] FIGS. 8A-8D are perspective-view diagrams illustrating various devices in accordance with principles described in the present disclosure.

[0019] FIG. 9 is a simplified perspective-view diagram illustrating an example of user interaction with a volumetric mixed reality device according to some embodiments.

SUMMARY

[0020] According to some aspects of this disclosure, a virtualized transformable user-interface electronic device is built as a robust monolithic body with no actual externally-observable moving parts. Advantageously, these aspects may mitigate some limitations of transreality puzzle-type devices.

[0021] A virtualized transformative user-interface device according to one type of embodiment includes a monolithic non-transformable body having a plurality of outside-facing surfaces and a plurality of display panels respectively disposed along at least some of the plurality of outside-facing surfaces. The device detects whether mechanical user input forces are consistent with an intent to move or transform the virtualized transformative user-interface device, and in response to a positive determination, the device emulates transformative movement via changing graphical images on the plurality of display panels.

[0022] In some embodiments, the device has a cubic form factor with six flat-panel displays on its exterior faces, and includes additional sensor or actuator components supporting user interactivity, e.g., a microphone, accelerometer, temperature sensor, light sensor, camera, vibration actuator, or speaker.

[0023] In some embodiments, user input may be collected via touch force sensor arrays either integrated directly with the displays (touch screens) or provided on a sensor plate with force sensors placed underneath the displays. In related embodiments, force sensors may be placed in device edges defined by the displays on its faces, or on the core of the device mechanically connected to faces or edges using axles. In all cases, the quantity, locations, and arrangement of the force sensors facilitate automated determination of the magnitude and direction of mechanical force applied to the device faces and edges by a user engaged interactively with the device.

[0024] Additional sensory inputs and processing may combine multi-modal sensing, e.g., camera, accelerometer, and pressure, with processing that combines such inputs to discern gestures, mimicry, and other higher-level user behaviors.

DETAILED DESCRIPTION

[0025] The present disclosure relates to a volumetric mixed-reality electronic device appearing from the exterior as a robust, monolithic body with no external moving parts.

[0026] A virtualized transformative user-interface device according to embodiments described herein is equipped with data-processing circuitry, such as a microprocessor-based system that includes a processor core, memory, non-volatile data storage, input/output circuitry, and interface circuitry that is coupled to input devices (such as sensors), and output devices such as displays, or a sound system (e.g., amplifier, speaker). The microprocessor-based system may comprise a central processing unit (CPU), graphics processing unit (GPU), digital signal processor (DSP), or a combination of various types of processor architectures to support the types of algorithms utilized to implement the device’s functionality. In addition, suitable interface circuitry, such as analog-to-digital converter (ADC) or digital-to-analog converter (DAC) circuitry, may be included and suitably coupled to the processing circuitry. In one type of embodiment, some, most, or all of the aforementioned components may be combined in a System-on-a-Chip (SoC) integrated circuit.

[0027] Further, the microprocessor-based system may include data-communications circuitry, such as a WiFi modem and radio compliant with an IEEE 802.11 standard, an IEEE 802.15 standard (“Bluetooth”), a 3GPP standard (LTE/5G), or the like, suitably interfaced with a processor circuit. Further, the device may include suitable connectors, flex or other cabling, power sources, and related electronics such as a system bus or other internal interface between various hardware devices.

[0028] Instructions executable by the microprocessor-based system may be stored in non-volatile memory, such as flash EEPROM or other suitable storage media. Certain processing may be carried out locally on board the device, whereas other processing may be carried out remotely on a peer device or on one or more servers consistent with a cloud-computing model, utilizing inputs collected locally by the device and transmitting processed results based on that input to the device. In related embodiments, certain instructions may be transmitted to the device from a server, peer device, or other source, to be executed locally on the device.

[0029] In some implementations, certain machine-learning-based algorithms are executed on the device, which are dynamically tuned under a supervised-learning regime, an unsupervised-learning regime, a reinforcement-learning regime, or a combination of such paradigms. Accordingly, certain parameters (e.g., weights, offsets, loss function, etc.) may be developed and adjusted over time, whether locally by the device, or remotely (e.g., in the cloud) and transmitted to the device.
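As a non-limiting illustration of that split between remote training and local execution, a device might periodically fetch updated parameters and load them into its on-device gesture classifier; the endpoint URL, parameter format, and classifier interface below are assumptions made for this sketch and are not part of this disclosure.

    # Hypothetical sketch: training runs remotely; the device only applies
    # updated parameters received from a server. The URL, JSON layout, and
    # classifier.load_weights interface are assumptions, not the actual system.
    import json
    import urllib.request

    MODEL_URL = "https://example.com/device-model.json"  # placeholder endpoint

    def refresh_gesture_model(classifier):
        """Fetch updated parameters and load them into the on-device classifier."""
        with urllib.request.urlopen(MODEL_URL, timeout=5) as resp:
            params = json.load(resp)
        classifier.load_weights(params["weights"], offsets=params["offsets"])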

[0030] A mixed reality electronic device according to some embodiments has an electronic display, which may be a thin-film transistor (TFT) device, e.g., an active-matrix video display such as an LCD, LED, OLED, or similar, situated at the outer surface of the device. In some configurations the device has a plurality of electronic displays situated at its outer surfaces. The display(s) present content, such as movable graphical objects with which the user may interact via manipulation of the device, hand gestures, or some combination thereof. For instance, the user may produce readable input by pressing or swiping on or over the display(s) with a finger. This input may be interpreted as directions for moving the graphical object over the surface of the display as part of solving a puzzle, or transforming a transformable geometric shape.
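As a non-limiting illustration, a swipe might be translated into movement of a graphical object along a display grid roughly as follows; the function name, coordinate convention, and minimum swipe distance are assumptions made for this sketch.

    # Hypothetical sketch: map a swipe (start/end points in pixels) to a one-cell
    # move of an on-screen object in the dominant swipe direction.
    def apply_swipe(obj_pos, swipe_start, swipe_end, min_distance=30):
        """Return the object's new grid position after a swipe gesture."""
        dx = swipe_end[0] - swipe_start[0]
        dy = swipe_end[1] - swipe_start[1]
        if max(abs(dx), abs(dy)) < min_distance:
            return obj_pos  # too short to count as a swipe
        if abs(dx) >= abs(dy):
            step = (1, 0) if dx > 0 else (-1, 0)
        else:
            step = (0, 1) if dy > 0 else (0, -1)
        return (obj_pos[0] + step[0], obj_pos[1] + step[1])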

[0031] Further, the virtualized transformative user-interface device may be equipped with one or more sensors to detect other physical actions of the user according to some embodiments.

[0032] In some embodiments the device comprises an accelerometer, the output of which is provided to a processor programmed to detect user gestures, including shaking and similar actions. Further, some embodiments of the device utilize an accelerometer (e.g., gyroscope) to detect device orientation and movement in space.

[0033] An advanced feature according to some embodiments includes recognition of user hand gestures. The hand gestures may be detected through a number of presently existing, or future-arising, technologies, including, but not limited to, strain or pressure sensing, resistive touch sensing, capacitive sensing, or optical detection. A principle shared by these various sensing techniques is a change in the sensor’s physical state, typically its electrical properties, when a human hand or finger is placed in direct contact with, or in proximity of, the sensing device. The change in electrical properties of the sensing device is processed to be interpreted by the processor-based circuitry of the device.
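As a non-limiting illustration, a shake gesture might be flagged from a stream of accelerometer samples roughly as follows; the window length and threshold are assumed values for this sketch, not the disclosed detection algorithm.

    # Hypothetical sketch: flag a shake when enough recent samples deviate
    # strongly from gravity. Threshold and window size are assumed values.
    from collections import deque

    GRAVITY = 9.81          # m/s^2
    SHAKE_THRESHOLD = 15.0  # m/s^2 deviation from gravity (assumed)
    WINDOW = 20             # number of recent samples considered (assumed)

    recent = deque(maxlen=WINDOW)

    def on_accel_sample(ax, ay, az):
        """Process one accelerometer sample; return True when a shake is detected."""
        magnitude = (ax * ax + ay * ay + az * az) ** 0.5
        recent.append(abs(magnitude - GRAVITY))
        return sum(1 for d in recent if d > SHAKE_THRESHOLD) >= WINDOW // 2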

[0034] In related embodiments, one or more video cameras may be incorporated into a device and adapted to detect the position of the user’s eyes. This input may be used in many ways, including, but not limited to, control of the displayed content, as well as energy saving and battery-life extension through dimming of surfaces not visible to the user.

[0035] Furthermore, the device may include instructions to integrate inputs from the various sensors, recognize the pattern of particular user gestures, and correlate the gestures to a set of allowed transformations of the content displayed on the electronic display or displays.

[0036] Further, in some embodiments the device comprises an output to provide sensory feedback other than visual, e.g., audio circuits and speakers, or a vibration motor and its controls for haptic feedback.

[0037] Referring to FIG. 1, an embodiment of a virtualized transformative user-interface device is illustrated. Device 0100 is generally shaped like a cube, with display touch sensors 0110 (e.g., force-sensitive input assemblies, capacitive sensors, membrane sensors, etc.) disposed on each of its faces (“display sensors” hereinafter). Only the display touch sensors disposed on faces directly visible to the viewer are shown, for clarity. The device is built as a robust monolithic body with no moving parts (as observable from the outside), which mitigates certain drawbacks of mechanically-transformable devices. Although the device has no externally-observable moving parts, it may have moving parts internally, such as vibration actuators, gyroscopes or other accelerometers, piezoelectric sensors or actuators, certain microelectromechanical (MEMS) sensors or actuators, a speaker, a microphone, or the like.

[0038] Referring to FIG. 2A, basic functionality of the device according to an example embodiment is illustrated as a flow diagram. The basic operation of this example includes: storing current video content and displaying it on a display or a plurality of displays at 210; integrating input from a plurality of sensors at 202; recognizing the user’s gesture at 204 based on the input at 202 and on classification criteria applied at 203; and recognizing, at 206, the user-intended transformation of the video content displayed on the display or plurality of displays, which may be accomplished, in part, by comparing the interpreted gesture of the user (or series of gestures) to stored patterns of sensory responses, which may be part of the classification criteria at 203. At 208, the current video content may be transformed and displayed.
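As a non-limiting illustration, the loop of FIG. 2A might be sketched as follows; every function and object name here (run_ui_loop, classifier.classify, content.apply, and so on) is a hypothetical placeholder rather than the disclosed firmware.

    # Hypothetical sketch of the FIG. 2A processing loop. The display, sensor,
    # classifier, and content interfaces are assumed for illustration only.
    def run_ui_loop(displays, sensors, classifier, content):
        while True:
            # 210: display the current video content on every panel
            for panel in displays:
                panel.show(content.frame_for(panel))

            # 202: integrate input from the plurality of sensors
            readings = [s.read() for s in sensors]

            # 203/204: recognize the user's gesture against classification criteria
            gesture = classifier.classify(readings)

            # 206: map the recognized gesture to an allowed content transformation
            transform = content.transformation_for(gesture)

            # 208: transform the current content; it is displayed on the next pass
            if transform is not None:
                content.apply(transform)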

[0039] Referring to FIG. 2B, an example of game content is illustrated according to an example. Display touch sensors 0110 are used as sensors to detect user interaction with content on the displays. The content comprises movable graphical objects 0112 and 0114 intended to cause a user to interact with them through hand gestures, including, but not limited to, pressing on object 0112 with a finger, moving the object 0114 over the surface of the display, or the like. Content scripts in some embodiments of the present disclosure include moving objects as part of solving a puzzle, or transforming virtualized transformable geometric shapes into various emulated configurations.

[0040] In some embodiments the displays 0130 are 3D displays. The device may be configured to create an illusion of a volumetric object with objects placed inside.

[0041] FIG. 3 is an exploded view of a display touch sensor 0110 illustrating principal components supporting its functionality according to an example. The touch sensor 0110 of FIG. 3 includes cover glass 0120, an active-matrix display array (AMD) 0130 with an input flex circuit cable 0132 connecting AMD 0130 to the driving electronics, and a sensor plate 0140 with sensors 0142 disposed on it and a flex cable 0144. In some embodiments, sensor plate 0140 comprises four distinct sensor devices 0142. In some embodiments, the sensor devices are tensoresistors, i.e., electronic components whose electrical resistance depends on the mechanical force applied (e.g., piezoresistors). When pressure is applied to the movable object 0112 as shown in FIG. 1, the four tensoresistors disposed as illustrated in FIG. 3 provide four measurements of force magnitude at known sensor locations. Measuring these values in real time provides enough information to construct a digital representation of the hand’s mechanical action, i.e., the point of application, direction, and magnitude of the mechanical force applied to the surface of the device through, e.g., a finger or a palm touching the display sensor.
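As a non-limiting illustration, the total force and its point of application could be estimated from four such readings as a force-weighted centroid; the sensor coordinates, units, and linear model below are assumptions made for this sketch, not the actual calibration of the device.

    # Hypothetical sketch: estimate normal-force magnitude and point of
    # application from four tensoresistor readings at assumed corner positions.
    SENSOR_POSITIONS = [(-1.0, -1.0), (1.0, -1.0), (1.0, 1.0), (-1.0, 1.0)]

    def estimate_touch(forces):
        """forces: four readings (e.g., newtons), ordered to match SENSOR_POSITIONS."""
        total = sum(forces)
        if total <= 0.0:
            return None  # no touch detected
        # Point of application approximated as the force-weighted centroid
        x = sum(f * px for f, (px, _) in zip(forces, SENSOR_POSITIONS)) / total
        y = sum(f * py for f, (_, py) in zip(forces, SENSOR_POSITIONS)) / total
        return total, (x, y)

For example, estimate_touch([1.0, 1.0, 3.0, 3.0]) reports a total force of 8.0 applied off-centre toward the pair of sensors reading 3.0.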

[0042] Recognizing user intent based on dynamic measurements collected using sensors, including individual variations in fine motor movements, presents significant challenges. This complexity has recently been addressed by means of artificial intelligence (hereinafter “AI”) and deep learning (hereinafter “DL”) as techniques to train the software to recognize patterns in datasets derived from the collected input signals. At present, one available type of implementation for executing AI/DL functions is a Tensor Processing Unit (TPU). In some embodiments of the present disclosure, the hardware executing the AI and DL training, which may be a computationally-intensive process, is remote from the device, such as in the cloud, and communicates with the device via communications circuitry such as the examples described above.

[0043] Another embodiment of the present disclosure is depicted in FIGS. 4-6. A device 0100 comprises a core 0166 and a plurality of axles 0160. Each of the plurality of axles 0160 comprises a distal end 0162 and a proximal end 0164. The distal end 0162 is mechanically linked to display 0130, and the proximal end 0164 is mechanically linked to the core 0166, as illustrated in FIG. 4. One embodiment of core 0166 is shown in FIG. 5. Core 0166 comprises six sleeves 0180, each sleeve aligned orthogonally with a corresponding display 0130. A front elevational view of sleeve 0180 is shown in FIG. 6 according to an example. Each sleeve 0180 comprises a plurality of sensors 0182 and a plurality of axle retainer members 0184. Each of the plurality of axle retainer members 0184 comprises a sensor surface 0188 adjacent sensor 0182, and an axle retainer surface 0186 adjacent the proximal end 0164 of the axle 0160. The axle retainer member 0184 is arranged to transmit force from the axle 0160 mechanically coupled to display 0130. This example embodiment includes four sensors and four axle retainer members per axle, though other configurations are also contemplated. The whole arrangement is intended to detect, in real time, the magnitude and direction of the force that the user applies to each screen.
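As a non-limiting illustration, the four per-axle readings could be resolved into a force vector acting on the corresponding display face roughly as follows; the sensor ordering and the simple linear decomposition are assumptions made for this sketch.

    # Hypothetical sketch: decompose four sensor readings around one axle into
    # a per-face force vector. Ordering (left, right, bottom, top) is assumed.
    def face_force_from_axle(readings):
        """readings: (left, right, bottom, top) forces around one axle, in newtons."""
        left, right, bottom, top = readings
        # Lateral components from the imbalance of opposing sensor pairs
        fx = right - left
        fy = top - bottom
        # Push toward the core approximated by the common-mode level
        fz = (left + right + bottom + top) / 4.0
        return fx, fy, fz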

[0044] In other related embodiments of the present disclosure the number of display screens differs from six, and the number of axles per active matrix display screen may differ as well. Further, retaining the axles, transmitting force from axle to sensor, and collecting enough measurements from the sensors to determine force magnitude and direction may require different numbers of sensors and axle retainer members per axle.

[0045] In some embodiments of the present disclosure, sensors may be disposed along the edges of the device, as shown in FIG. 7. In this exemplary embodiment the device is of generally cubical shape, and two sensors are disposed per edge. In other embodiments (not shown) there may be more or fewer sensors situated along each of the edges.

[0046] One application for volumetric mixed reality devices of the present disclosure is emulating user experiences similar to those provided by transformable electronic devices, using a monolithic, non-transformable electronic device such as one described in accordance with any of the foregoing embodiments.

[0047] The images displayed on the electronic display or displays emulate the appearance and functionality of a transformable input device with elements that could be re-positioned, slanted, or turned.

[0048] In one embodiment, as shown in FIGS. 8A-8B, a volumetric mixed reality device may be configured to emulate a 2x2x2-cubelet transreality puzzle such as in the examples disclosed in Russian Federation Patent No. RU2644313C1, the disclosure of which is incorporated by reference herein.

[0049] Other related embodiments may emulate, e.g., a 3x3x3 Rubik’s Cube-style transreality puzzle as disclosed in U.S. Patent No. 8,465,356 (not shown), or a 4x4x4 puzzle as shown in FIGS. 8C-8D.

[0050] In some embodiments, the sensors built into the non-transformable devices of FIGS. 8A-8D, or similar devices, are arranged to detect the user’s physical actions, typically hand gestures, intended to transform the device if it were transformable. Accordingly, the images displayed on the display screens are animated in response to the sensor inputs to emulate transformative movement of a transformative device. For instance, the animation may depict re-positioning, slanting, pushing, compressing, or turning the virtualized movable elements.

[0051] Furthermore, a set of instructions may be programmed into the device to integrate the inputs from the multiple sensors, recognize the pattern of a particular user gesture, and correlate the gesture to a predefined transformation of the virtualized transformative device.

[0052] FIG. 9 shows an example of a gesture employed to interact with a virtualized transformative device which comprises sensors 0190 incorporated into its edges. The generally cubic form of the device defines three axes of symmetry, each connecting the centers of opposing cube faces. One of the three axes is X1-X2, connecting points X1 and X2, which are the geometric centers of an opposing pair of device faces.

[0053] The user applies force on the edges of the device as if to transform an actual transformative device by rotating one group of four virtual cubelets around axis X1-X2 relative to the other group of four virtual cubelets, as shown by arrows A1 and A2. This gesture, reflecting an intent to transform the virtualized transformable device, manifests as rotational momenta, shown as A1 and A2, applied at the points where the user’s hands contact the device, with the resulting forces measured by sensors 0190.
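As a non-limiting illustration, such a twist could be recognized by comparing the torques that the measured contact forces produce about axis X1-X2 on the two halves of the device; the data layout, threshold, and sign convention below are assumptions made for this sketch, not the disclosed recognition algorithm.

    # Hypothetical sketch: detect opposing torques about the X1-X2 axis from
    # edge-sensor readings given as (position, force) 3-vectors, with the
    # device centre as the origin and the axis given as a unit vector.
    def cross(a, b):
        return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    def detect_twist_about_axis(edge_readings, axis, threshold=0.2):
        """Return +1 or -1 for a twist of one half relative to the other, else 0."""
        torque_a = torque_b = 0.0
        for pos, force in edge_readings:
            t = dot(cross(pos, force), axis)   # torque of this contact about the axis
            if dot(pos, axis) >= 0.0:          # sensor lies in the X1 half
                torque_a += t
            else:                              # sensor lies in the X2 half
                torque_b += t
        # A twist gesture appears as opposing torques on the two halves
        if torque_a > threshold and torque_b < -threshold:
            return +1
        if torque_a < -threshold and torque_b > threshold:
            return -1
        return 0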

[0054] Accordingly, although the device of FIG. 9 is not actually transformative in a mechanical sense, the device is programmed to emulate mechanical transformation, providing visual and, optionally, other sensory feedback to the user that depicts such mechanical transformation.

[0055] Thus, a virtualized transformative electronic device differs substantially from the actual mechanically-transformable input devices employed in transreality puzzles in that it is not transformable, i.e., it is incapable of having its parts repositioned, slanted, or turned.

[0056] The term “virtualized” in the present context means that the device is operative to receive users’ dynamic input and provide visual and, optionally, other sensory feedback without significant relative movement of rotatable or shiftable parts, such as the rotating cubelets in the Rubik’s cube-type puzzles referenced above, or joysticks.

[0057] The above description is intended to be illustrative, and not restrictive. For example, the above-described examples (or one or more aspects thereof) may be used in combination with others. Other embodiments may be used, such as by one of ordinary skill in the art upon reviewing the above description. The Abstract is provided to allow the reader to quickly ascertain the nature of the technical disclosure. It is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. Also, in the above Detailed Description, various features may be grouped together to streamline the disclosure. However, the claims may not set forth every feature disclosed herein, as embodiments may feature a subset of said features. Further, embodiments may include fewer features than those disclosed in a particular example. Thus, the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separate embodiment. The scope of the embodiments disclosed herein is to be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled.