

Title:
AREA SCANNING AND IMAGE PROJECTION
Document Type and Number:
WIPO Patent Application WO/2017/127078
Kind Code:
A1
Abstract:
An example system includes a scanner to scan an area and a computing unit to identify a projection surface in the scanned area based on at least one criteria. The at least one criteria is related to an image selected by a user and the scanned area. The example system also includes a projector unit attachable to the scanner and to project the image to the projection surface. The projector unit receives the image from the computing unit and instructions to project the image onto the projection surface.

Inventors:
MEHANDJIYSKY DIMITRE (US)
HOGGARTH MARCUS (GB)
GODFREYWOOD JACK (GB)
MASSARO KEVIN (US)
CAO ZHENG (US)
PLUMBE CIAN (GB)
Application Number:
PCT/US2016/014281
Publication Date:
July 27, 2017
Filing Date:
January 21, 2016
Assignee:
HEWLETT PACKARD DEVELOPMENT CO LP (US)
International Classes:
G06T19/00; G06F3/00; H04N5/225; H04N5/74
Domestic Patent References:
WO2012023004A1 (2012-02-23)
Foreign References:
US20150222842A1 (2015-08-06)
US20100315491A1 (2010-12-16)
US20120057174A1 (2012-03-08)
US20090141196A1 (2009-06-04)
Attorney, Agent or Firm:
WASSON, Robert, D. et al. (US)
Claims:
CLAIMS

What is claimed is:

1. A system, comprising:

a scanner to scan an area;

a computing unit to identify a projection surface in the scanned area based on at least one criteria, the at least one criteria being related to an image selected by a user and the scanned area; and

a projector unit attachable to the scanner and to project the image to the projection surface,

wherein the projector unit receives the image from the computing unit and instructions to project the image onto the projection surface.

2. The system of claim 1, wherein the scanner and the projector unit are located in a housing unit.

3. The system of claim 2, wherein the processor is in the housing unit with the scanner and the projector unit.

4. The system of claim 1, wherein the processor is externally attached to the scanner and the projector unit.

5. The system of claim 1, wherein the scanner provides mapping of objects present in the scanned area.

6. The system of claim 5, wherein the mapping of the objects comprises location, dimension and color of the objects.

7. The system of claim 6, wherein the at least one criteria includes the location, dimension and color of the objects.

8. The system of claim 1, wherein the scanner is a camera.

9. The system of claim 1, wherein the scanner is to rotate up to 360 degrees to scan the area, and the projector unit is to rotate up to 360 degrees to project the image.

10. The system of claim 1, wherein the processor is to request a confirmation from the user when the projection surface is identified.

11. The system of claim 1, wherein the image on the projection surface is manipulated based on input from the user.

12. A system, comprising:

a 3D camera to scan an area;

a computing unit communicatively attachable to the camera to:

receive an image from a user,

receive data related to the scanned area, and

identify a projection surface based on the image and the data related to the scanned area; and

a projector to project the image onto the projection surface.

13. The system of claim 12, wherein the processor is in a mobile device such as a mobile phone, tablet or phablet.

14. A processor-implemented method, comprising:

scanning, by a scanner, an area;

receiving, by a processor, an image selected by a user;

identifying, by a processor, a projection surface in the scanned area based on at least one criteria related to the received image and the scanned area; and

displaying, by a projector unit, the image on the projection area.

15. The method of claim 14, further comprising performing alignment and calibration methods to refine the projection surface.

Description:
AREA SCANNING AND IMAGE PROJECTION

BACKGROUND

[0001] Augmented reality (AR) includes a direct or indirect view of a physical, real-world environment whose elements are augmented by computer-generated digital information such as text, graphics, sound, etc. In AR, the real-world environment of a user can be interactive and/or digitally manipulated. Systems that can be used to provide AR utilize various technologies including, but not limited to, optical imaging and optical projection technology that can collect information about, and then augment, a real-world environment.

BRIEF DESCRIPTION OF THE DRAWINGS

[0002] For a detailed description of various examples, reference will now be made to the accompanying drawings in which:

[0003] Figure 1 is a block diagram of an example system in accordance with the principles disclosed herein;

[0004] Figure 2 is a schematic view of the system of Figure 1 in accordance with the principles disclosed herein; and

[0005] Figure 3 is a flowchart of an example method executable by a system of Figure 1 in accordance with the principles disclosed herein.

NOTATION AND NOMENCLATURE

[0006] Certain terms are used throughout the following description and claims to refer to particular system components. As one skilled in the art will appreciate, computer companies may refer to a component by different names. This document does not intend to distinguish between components that differ in name but not function. In the following discussion and in the claims, the terms "including" and "comprising" are used in an open-ended fashion, and thus should be interpreted to mean "including, but not limited to ...." Also, the term "couple" or "couples" is intended to mean either an indirect or direct connection. Thus, if a first device couples to a second device, that connection may be through a direct electrical or mechanical connection, through an indirect electrical or mechanical connection via other devices and connections, through an optical electrical connection, or through a wireless electrical connection. As used herein, the term "approximately" means plus or minus 10%. In addition, as used herein, the phrase "user input device" refers to any suitable device for providing an input by a user into an electrical system such as, for example, a mouse, keyboard, a hand (or any finger thereof), a stylus, a pointing device, etc.

DETAILED DESCRIPTION

[0007] The following discussion is directed to various examples of the disclosure. Although one or more of these examples may be preferred, the examples disclosed should not be interpreted, or otherwise used, as limiting the scope of the disclosure, including the claims. In addition, one skilled in the art will understand that the following description has broad application, and the discussion of any example is meant only to be descriptive of that example, and is not intended to intimate that the scope of the disclosure, including the claims, is limited to that example.

[0008] Referring now to Figure 1, a system 100 in accordance with the principles disclosed herein is shown. In this example, system 100 generally comprises a subsystem 110, which comprises a scanner (e.g., a camera 160) and a projector unit 180 and is communicatively connected to a computing device 150. Computing device 150 may comprise any suitable computing device while still complying with the principles disclosed herein. For example, in some implementations, the device 150 may comprise an electronic display, a smartphone, a tablet, a phablet, an all-in-one computer (i.e., a display that also houses the computer's board), a smart watch, or some combination thereof. In one implementation, the subsystem 110 and the computing device 150 may be connected (as shown in Figure 1). In another implementation, the subsystem 110 may be located in the computing device 150. Further, in one implementation, the subsystem 110 may comprise a display. In other implementations, the display may be in the computing device 150. For example, a user of the system 100 may use the display in the subsystem 110 to interact with the system 100. In another example, the user may use the display in the computing device 150 to control the camera 160 and the projector unit 180.

[0009] The projector unit 180 may comprise any suitable digital light projector assembly for receiving data from the computing device 150 and projecting an image or images that correspond with that input data. The projector may comprise a laser scan projector, a digital light processing (DLP) projector, or a liquid crystal on silicon (LCoS) projector, which are advantageously compact and power-efficient projection engines capable of multiple display resolutions and sizes, such as, for example, standard XGA (1024 x 768) resolution with a 4:3 aspect ratio or standard WXGA (1280 x 800) resolution with a 16:10 aspect ratio. The projector unit 180 is further connected to the computing device 150 in order to receive data therefrom for producing light and images. The projector unit 180 may be connected to the computing device 150 through any suitable type of connection while still complying with the principles disclosed herein. For example, in some implementations, the projector unit 180 is electrically coupled to the computing device 150 through an electric conductor, WI-FI, BLUETOOTH®, an optical connection, an ultrasonic connection, or some combination thereof. Any suitable wireless (or wired) electrical coupling may be used between the subsystem 110 and the computing device 150 (if they are not in one unit), such as, for example, WI-FI, BLUETOOTH®, ultrasonic, electrical cables, electrical leads, electrical spring-loaded pogo pins with magnetic holding force, or some combination thereof, while still complying with the principles disclosed herein.

[0010] In one example, the projector unit 180 may comprise a pico projector. Such a pico projector may be used with a mobile device, such as a mobile phone. For example, the projector unit 180 may be a pico projector, and the computing device 150 may be a mobile phone. In such an example, the pico projector device may be detachably attached to a mobile phone without a connecting wire. A mini projecting module associated with the pico projector can be installed in the phone to show an image from the phone on a larger screen outside. In another example, it may not be necessary for the phone to carry the pico projector module. Applications of the pico projector module to the phone may be hot spots. Further, the pico projector device used with the computing device 150 may include: a housing; a connector, partially exposed from the housing, for connecting with the mobile phone; and a pico projector module, provided in the housing, for projecting an image under control of the computing device 150 based on an image signal received from the computing device 150 through the connector.

[0011] During operation of system 100, the projector unit 180 projects an image onto a projection surface 190. The projection surface 190 may comprise any suitable surface in an environment and may not be limited in size and shape. More specifically, the projection surface may be a wall in a bedroom, a counter in a kitchen, an area under a bed (as will be described in more detail in reference to Figure 3), a desk in an office, and/or the like. In one implementation, the projection surface 190 may be selected based on a set of criteria. Such criteria may include size, texture, presence of obstacles, and the like. In some implementations, the selection of a surface may be declined if the criteria are not met. For example, the camera 160 may be searching for a flat surface, and/or an open area with no obstacles, and/or an area with a specific set of dimensions, and/or the like. In another example, a surface that may be matched with a theme may be searched. More specifically, a user may select a specific theme, and the system 100 searches for a surface suitable for the theme of the game. For example, the user may select scary as the theme. Accordingly, the system 100 may select a surface under an object (e.g., the space under a bed, as will be described in more detail in reference to Figure 3). The search continues until an area that meets the required criteria is identified. In other implementations, various alignment and calibration methods (e.g., keystone correction) may be applied to modify the projection surface to meet the criteria required for the qualification of the projection surface. For example, if obstacles (e.g., objects, shadows) are detected within the projection surface, various methods consistent with the manner described herein may be applied to qualify the area as a suitable projection surface in view of the criteria. If the surface is found to be uneven or otherwise unsuitable, various methods of alignment and calibration (e.g., keystone correction) may be applied. In another implementation, the background of the identified projection surface may optionally be digitally removed within the resulting image projected onto the surface.

[0012] As described in more detail above, during operation of system 100, the camera 160 scans the surroundings of the system 100 and identifies a suitable projection area (e.g., the projection surface 190) for the projector unit 180 to project data received from the computing device 150. The camera 160 may be a 3D image camera. The camera 160 is a networkable camera, and the data/information collected by the camera 160 can be provided to the computing device 150 via a wireless connection. In one implementation, the camera 160 scans the surroundings in a 360° panorama to provide up to a 360° field of view. More specifically, a full panoramic view may be provided with electronic panning and point-and-click zoom to allow almost instantaneous movement between widely spaced points of interest. Furthermore, the camera 160 may comprise longer-range, narrow field of view optics to zoom in on specific areas of interest. The camera 160 may also be implemented, for example, as a binocular-type vision system, such as a portable handheld or head/helmet mounted device, to provide a panoramic wide field of view.
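
Although the disclosure contains no source code, a minimal Python sketch may help illustrate the kind of criteria-based surface selection described in paragraphs [0011] and [0012]. The Surface fields, the thresholds, and the "scary" theme rule below are illustrative assumptions, not part of the patent:

    from dataclasses import dataclass
    from typing import List, Optional


    @dataclass
    class Surface:
        width_m: float        # measured width of the candidate surface, in meters
        height_m: float       # measured height of the candidate surface, in meters
        is_flat: bool         # whether the scan reported a flat region
        obstacle_count: int   # obstacles detected within the region
        under_object: bool    # e.g., space under a bed or a table


    def select_surface(surfaces: List[Surface],
                       min_width: float,
                       min_height: float,
                       theme: Optional[str] = None) -> Optional[Surface]:
        """Return the first scanned surface that meets the criteria, or None
        so that the caller can keep scanning."""
        for s in surfaces:
            if s.obstacle_count > 0 or not s.is_flat:
                continue   # decline surfaces that fail the basic criteria
            if s.width_m < min_width or s.height_m < min_height:
                continue   # too small for the selected image
            if theme == "scary" and not s.under_object:
                continue   # themed selection, e.g., the space under a bed
            return s
        return None


    candidates = [Surface(2.0, 1.5, True, 0, False), Surface(1.0, 0.4, True, 0, True)]
    print(select_surface(candidates, min_width=1.2, min_height=0.8))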

[0013] In one implementation, the camera 160 can identify the acquired images in terms of their importance by algorithms entered into evaluation and monitoring modules, which may be in the computing device 150 or the camera 160, and standard positions stored therein. This will be described in more detail in reference to Figure 3. In another implementation, the camera 160 may be operable during day and night conditions by utilizing technologies including thermal imagers. In some other implementations, the camera 160 may comprise a plurality of cameras.

[0014] In one implementation, the data being projected may comprise still or moving images, videos, web pages (e.g., weather, email, and social media), applications (e.g., music player, instant messenger, photo/video application, and home system control panel), or a user interface of the computing device 150. Further, in other examples, the data may be dynamic. More specifically, the data may provide augmentation in semantic context with environmental elements (e.g., a live direct or indirect view of a physical, real-world environment whose elements are supplemented by computer-generated sensory input such as sound, video, graphics, or GPS data). In one example, the projection surface may be the kitchen wall, and the data projected may be related to a recipe. In such an example, the projection unit may project data related to the ingredients of a recipe (e.g., 1 cup of milk, 2 eggs) onto the projection surface (e.g., a wall or counter in a kitchen) that contains the physical objects or pictures of the ingredients (e.g., milk, eggs). The text "1 cup" may be projected next to the milk carton on the kitchen counter, and/or the text "2 eggs" may be projected next to the picture of eggs on the kitchen wall.
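
As an illustration only, the recipe example above could be approximated by pairing detected objects with text labels and offsetting each label's projection point; the object names, coordinates, and offset below are hypothetical, not values disclosed in the patent:

    detected_objects = {
        "milk carton": (120, 45),    # (x, y) location on the scanned map, in cm
        "eggs picture": (300, 150),
    }

    recipe_labels = {
        "milk carton": "1 cup",
        "eggs picture": "2 eggs",
    }

    OFFSET_X_CM = 10   # place each label slightly to the right of its object

    for name, (x, y) in detected_objects.items():
        label = recipe_labels.get(name)
        if label:
            # A real system would send this placement to the projector unit;
            # here the intended placement is only printed.
            print(f"Project '{label}' at ({x + OFFSET_X_CM}, {y}) next to the {name}")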

[0015] In one implementation, the camera 160 may communicate the identification of the projection surface 190 to the computing device 150 to instruct the projection unit 180 to project in the identified area. In another implementation, the camera 160 may communicate the identification of the projection surface directly to the projection unit 180, which, as a result, projects the data received from the computing device 150 onto the identified area. In another implementation, the computing device 150 may choose to communicate the selection of the projection surface 190 to a user and request input (e.g., confirmation) from the user before proceeding with the projection of any data. If the user chooses to reject the selected area, the camera may rescan the surroundings of the system 100 to identify another projection surface based on the same or different criteria. In these examples, the user may communicate with the computing device 150 via gesture and/or voice commands. To support this, the camera 160 in the system 100 may be utilized. Moreover, the system 100 may comprise a microphone or similar device that is arranged to receive sound inputs (e.g., voice) from the user during operation. It should be noted that while a camera is discussed in this specific implementation, other types of scanners may be incorporated in the system 100.
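
The confirm-or-rescan flow described above might be sketched as follows; every helper function here is a stand-in for behavior the disclosure attributes to the camera, computing device, and projector unit, not a real API:

    import random


    def scan_area():
        # Stand-in for the camera scan: returns candidate surfaces with a
        # made-up "flat" attribute.
        return [{"id": random.randint(1, 100), "flat": random.choice([True, False])}]


    def identify_surface(candidates):
        # Stand-in for the computing device's criteria check.
        flat = [s for s in candidates if s["flat"]]
        return flat[0] if flat else None


    def ask_user_confirmation(surface):
        # Stand-in for a confirmation given via voice or gesture.
        return True


    def project(image, surface):
        print(f"Projecting {image} onto surface {surface['id']}")


    def run_projection_session(image, max_attempts=5):
        for _ in range(max_attempts):
            surface = identify_surface(scan_area())
            if surface is None:
                continue   # nothing suitable yet; rescan the surroundings
            if ask_user_confirmation(surface):
                project(image, surface)
                return surface
            # the user rejected the selection; rescan with the same or new criteria
        return None


    run_projection_session("crocodile image")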

[0016] Referring now to Figure 2, a system 200 in accordance with the principles disclosed herein is shown. Similar to the subsystem 110 discussed in reference to Figure 1, the system 200 generally comprises a projector unit and a camera, which are placed in a housing unit as shown in Figure 2. In this implementation, the housing unit is a portable rectangular box. The projector unit and the camera are substantially hidden inside the housing unit when the subsystem 110 is viewed from a viewing surface. In other implementations, the housing unit may be any suitable structure for supporting the components while still complying with the principles disclosed herein. The system 200 may come in any shape and size. Moreover, in some implementations, the projector unit and the camera comprise a tilt mechanism (e.g., a hinge) that includes an axis of rotation such that the projector unit and the camera may rotate up to 360 degrees. In one example, the system 200 rotates to attain an optimal viewing angle for the camera or an optimal projection angle for the projector unit. The projector unit may rotate to project images, interfaces, games, and the like onto surrounding surfaces. The camera may rotate to record the surrounding space in 3D. More specifically, as described in more detail in reference to Figure 1, the camera scans the surroundings of the system 200 to identify the location of a suitable projection surface 210 for the system 200. In one implementation, the camera may be a sensor bundle, which includes a plurality of sensors and/or cameras to measure and/or detect various parameters occurring on or near the surface 210 during operation. For example, the bundle includes an ambient light sensor, a camera (e.g., a color camera), a depth sensor or camera, and a three-dimensional (3D) user interface sensor. The ambient light sensor is arranged to measure the intensity of light of the environment surrounding system 200, in order to, in some implementations, adjust the camera's and/or sensor's exposure settings, and/or adjust the intensity of the light emitted from other sources throughout the system such as, for example, the projector unit. The camera may, in some instances, comprise a color camera which is arranged to take either a still image or a video of an object and/or document disposed in the surrounding areas. The depth sensor generally indicates when a 3D object is in the surrounding areas. In particular, the depth sensor may sense or detect the presence, shape, contours, motion, and/or the 3D depth of an object (or specific feature(s) of an object) placed in the surrounding areas during operation. Thus, in some implementations, the sensor may employ any suitable sensor or camera arrangement to sense and detect a 3D object and/or the depth values of each pixel (whether infrared, color, or other) disposed in the sensor's field of view (FOV). For example, in some implementations the sensor may comprise a single infrared (IR) camera sensor with a uniform flood of IR light, a dual IR camera sensor with a uniform flood of IR light, structured light depth sensor technology, time-of-flight (TOF) depth sensor technology, or some combination thereof. The user interface sensor includes any suitable device or devices (e.g., a sensor or camera) for tracking a user input device such as, for example, a hand, stylus, pointing device, etc. In some implementations, the sensor includes a pair of cameras which are arranged to stereoscopically track the location of a user input device (e.g., a stylus) as it is moved by a user about the surrounding areas. In other examples, the sensor may also or alternatively include an infrared camera(s) or sensor(s) that is arranged to detect infrared light that is either emitted or reflected by a user input device. It should further be appreciated that the bundle may comprise other sensors and/or cameras either in lieu of or in addition to the sensors previously described.

[0017] In addition, as will be explained in more detail below, each of the sensors within the bundle is communicatively coupled to the computing device such that data generated within the bundle may be transmitted to the computing device, and commands issued by the computing device may be communicated to the sensors during operations. As explained above, any suitable electrical and/or communicative coupling may be used to couple the sensor bundle to the computing device such as, for example, an electric conductor, WI-FI, BLUETOOTH®, an optical connection, an ultrasonic connection, or some combination thereof. In this specific example, the computing device is positioned in the system 200 along with the camera and the projector. In other implementations, the computing device may be detachable from the system 200.

[0018] The surface 210 may comprise any suitable area determined by the computing device in order to allow the user to interact with software being executed by the computing device that the system 200 is in communication with. The suitable surface 210 may be determined based on predefined criteria (e.g., conditions, algorithms). For example, the suitable surface 210 may be determined by the type of image the user chooses to project. For example, it may be defined that if the user chooses to project an animal, the animal image is to be projected under a detected object in the surrounding space. In particular, in some examples, the sensors within the camera may detect an object in the surrounding area and identify its location, dimensions, and in some instances, color. Such information about the object may be used with the predetermined criteria when identifying the suitable surface to project the image chosen by the user. Such information gathered by the camera may then be routed to the processor in the computing unit. Thereafter, the processor directs the projector unit to project the image onto the surface 210.

[0019] In one implementation, the user selects an image for projection. In this example, the user selects to project an image of a crocodile 220. The camera scans the surrounding space and records empty spaces, occupied spaces, and objects within the occupied spaces, as described in more detail above. Then, the objects in the surrounding space (e.g., bed, table, door, etc.) can be mapped. Accordingly, based on the predefined criteria, the system may identify an object 230 that meets the criteria (e.g., a certain size and shape, a certain location) for where the image of the crocodile 220 is to be projected. Then, the system instructs the projector unit to project the image 220 under the identified object 230. In some implementations, the surface 210 may utilize known touch-sensitive technologies such as, for example, resistive, capacitive, acoustic wave, infrared, strain gauge, optical, acoustic pulse recognition, or some combination thereof while still complying with the principles disclosed herein. Accordingly, the user may be able to interact with the object being projected onto the surface. In other examples, the user may interact with the object being projected via the computing device. In another example, the user may use augmented reality eyewear to interact with the object being projected.
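
A hedged sketch of this crocodile example: scanned objects are recorded with a location, dimensions, and color, and the image is placed under the first object that satisfies the size criteria. All field names and values below are illustrative assumptions rather than disclosed details:

    from dataclasses import dataclass
    from typing import List, Optional, Tuple


    @dataclass
    class MappedObject:
        name: str
        location: Tuple[float, float]   # (x, y) of the object's base, in cm
        width_cm: float
        height_cm: float
        clearance_cm: float             # open space underneath the object
        color: str


    def place_under_object(objects: List[MappedObject],
                           image_width_cm: float,
                           image_height_cm: float) -> Optional[Tuple[float, float]]:
        """Return an (x, y) projection point under the first suitable object."""
        for obj in objects:
            if obj.width_cm >= image_width_cm and obj.clearance_cm >= image_height_cm:
                x, y = obj.location
                return (x, y - image_height_cm)   # place the image in the space below
        return None


    room = [
        MappedObject("table", (50, 80), 120, 75, 60, "brown"),
        MappedObject("bed", (300, 40), 200, 50, 30, "white"),
    ]
    print(place_under_object(room, image_width_cm=90, image_height_cm=40))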

[0020] In some examples, the system 200 may project a plurality of images onto a plurality of projection surfaces. Each image on each surface may be controlled by one user or by different users. These systems may communicate with one another either directly or via the computing units that they are communicatively connected to. More specifically, the system 200 may be used to create a shared digital workspace or gaming space for remote collaboration between one or more users. Another system (with a projector unit and a camera) may be communicatively linked to the system 200 through any suitable connection such as, for example, an electric conductor, WI-FI, BLUETOOTH®, an optical connection, an ultrasonic connection, or some combination thereof, such that information and/or data may pass freely between the systems. During collaboration between users, images may be projected on the projection surfaces associated with the systems. Further, one user may interact with the projected image on the projection surface of a first system by pointing with a hand (or other object). The camera or sensors within the bundle may sense this interaction in the manner previously described and then capture an image or video of the hand, which is then projected onto the surface of the second system such that the user of the second system may view the interaction of the hand of the user of the first system. During collaboration between the users, digital content that is generated by software executing on either computing device may be displayed on both surfaces, via the projector units, such that both users may each view and interact with the shared digital content in a cooperative fashion.
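
As a purely illustrative sketch, the relay of a captured interaction from a first system to a second system could be modeled as follows; the System class and its methods are hypothetical stand-ins, not a disclosed interface:

    class System:
        def __init__(self, name):
            self.name = name
            self.peer = None

        def link(self, other):
            # Stand-in for the WI-FI, BLUETOOTH, or optical link between systems.
            self.peer, other.peer = other, self

        def capture_interaction(self):
            # Stand-in for the sensor bundle capturing an image of the user's hand.
            return f"hand image captured by {self.name}"

        def project(self, content):
            print(f"{self.name} projects: {content}")

        def share_interaction(self):
            content = self.capture_interaction()
            if self.peer is not None:
                self.peer.project(content)   # shown on the remote projection surface


    first, second = System("system A"), System("system B")
    first.link(second)
    first.share_interaction()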

[0021] It should be noted that in other implementations, the system 200 may be utilized in many other areas including education, film, gaming, healthcare, fashion, and the like. Other examples may be provided while still complying with the principles disclosed herein.

[0022] In some implementations, the system 200 may be portable. For example, the user may choose to use the system 200 as a wearable accessory. More specifically, the user may choose to carry the system around in an environment (e.g., a home). Further, the computing unit may be a portable device that is attached to or inside of the system 200 and that moves with the user (e.g., a mobile device, tablet, smart watch, and the like). In other examples, the system 200 may have a permanent location in an environment (e.g., a room in a house). In either example, the system 200 maintains a connection with the computing unit. When the operation of the system 200 is initiated, the system 200 may confirm an active connection with the computing device, scan the surroundings and identify a projection surface, and project the image from the computing device when provided.

[0023] Referring now to Figure 3, a flowchart of an example method executable by a system similar to the systems 100-200 described in reference to Figures 1-2 is shown in accordance with the principles disclosed herein. At block 310, the camera scans the surrounding area for location awareness. More specifically, the camera maps the environment (e.g., the room, the surroundings up to a certain distance) and determines unoccupied (empty) and occupied areas, including the objects within the occupied areas. Further, the camera may determine the location, size, and color of the scanned objects. At block 320, the system receives an image selected by a user. More specifically, the selection of the image may comprise identifying a category of images (e.g., animals) or a specific animal (e.g., a dog). Further, the selection of the image may comprise choosing a theme (e.g., game, movie, book, etc.) for the image. At block 330, the system identifies a projection space based on at least one criteria related to the mapping of the area and the image selected by the user. At block 340, a projector unit in the system projects the image. In further implementations, the system may capture input related to the projected image. In one implementation, such input may be from a user, may be captured via a microphone or a camera, and may be communicated to the computing device. The computing device may update the image based on the input and transmit it back to the system to be projected on the projection space.
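
An end-to-end sketch of blocks 310-340, written with simple placeholder helpers; none of the function names, data structures, or values below are disclosed in the patent:

    def scan_and_map():
        # Block 310: scan the area and map unoccupied and occupied regions.
        return {"empty_regions": [(0, 0, 100, 100)], "objects": ["bed", "table"]}


    def receive_user_image():
        # Block 320: receive an image (or theme/category) selected by the user.
        return {"name": "dog", "theme": "animals"}


    def identify_projection_space(area_map, image):
        # Block 330: pick a space based on criteria from the map and the image.
        return area_map["empty_regions"][0] if area_map["empty_regions"] else None


    def project(image, space):
        # Block 340: the projector unit displays the image in the chosen space.
        print(f"Projecting {image['name']} at {space}")


    def run():
        area_map = scan_and_map()
        image = receive_user_image()
        space = identify_projection_space(area_map, image)
        if space is not None:
            project(image, space)


    run()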

[0024] The above discussion is meant to be illustrative of the principles and various embodiments of the present invention. Numerous variations and modifications will become apparent to those skilled in the art once the above disclosure is fully appreciated. It is intended that the following claims be interpreted to embrace all such variations and modifications.