Title:
LIGHT PAINTING LIVE VIEW
Document Type and Number:
WIPO Patent Application WO/2014/035642
Kind Code:
A1
Abstract:
Methods and apparatus, including computer program products, for a light painting live view. A method includes, in a device comprising at least a processor, a memory, a display and a camera device having an on-screen viewfinder, accessing the camera, capturing individual frames of footage, each of the captured frames being displayed through the on-screen viewfinder in cumulative succession, rendering the captured frames on a graphical processing unit (GPU), sending the captured frames through a shader program, generating at least two images, a first image saved to the memory and a second image displayed on the display, and rendering the first image into the second image to generate a final image.

Inventors:
WARNBERG RYAN HARRISON (US)
MCSWAIN MICHELLE KIRSTIN (US)
Application Number:
PCT/US2013/054454
Publication Date:
March 06, 2014
Filing Date:
August 12, 2013
Assignee:
MRI LIGHTPAINTING LLC (US)
International Classes:
G03B15/02
Foreign References:
US20090273686A12009-11-05
US20070031062A12007-02-08
CN102497508A2012-06-13
Other References:
ROGGE, LAUREN: "Integration of visual effects into the Virtual Video Camera system", 16 December 2009 (2009-12-16). Retrieved from the Internet [retrieved on 2013-10-27].
Attorney, Agent or Firm:
KOZIK, Kenneth, F. (321 Summer Street, Boston, MA, US)
Claims:
1. A method comprising:

in a device comprising at least a processor, a memory, a display and a camera device having an on-screen viewfinder, accessing the camera;

capturing individual frames of footage, each of the captured frames being displayed through the on-screen viewfinder in cumulative succession;

rendering the captured frames on a graphical processing unit (GPU);

sending the captured frames through a shader program;

generating at least two images, a first image saved to the memory and a second image displayed on the display; and

rendering the first image into the second image to generate a final image.

2. The method of claim 1 further comprising:

compressing the final image; and

converting the compressed final image to a Joint Photographic Experts Group (JPEG) file.

3. The method of claim 2 further comprising projecting the JPEG file on the display.

4. The method of claim 1 wherein the shader program receives input from the camera and outputs a progress frame.

5. The method of claim 1 wherein the device is a smartphone or tablet computer.

6. The method of claim 1 wherein the device is selected from the group consisting of a DSLR camera, a smartphone, a tablet computer, a personal data assistant, a digital television, a computer, a laptop, a device with an integrated digital camera, a wearable device, and a device with a digital camera.

7. The method of claim 1 wherein generating the at least two images comprises:

an image/name stage; and

a blending modes stage.

8. The method of claim 7 wherein the image/name stage comprises:

storing image data coming from the camera in a buffer in the memory;

using the stored image as an input to the shader program; and

outputting an intermediate image from the shader program to the display, the intermediate image blended with the input images.

9. The method of claim 8 wherein the blending modes stage comprises:

blending pixels of the intermediate output image with pixels of the input image; and

replacing previous values of pixels with pixels of the intermediate output image.

10. The method of claim 8 wherein the input is an OpenGL texture.

11. A method comprising:

in a device comprising at least a processor, a memory, a display and a camera device having an on-screen viewfinder, executing a light painting live view process in conjunction with the camera to provide a long exposure camera that displays a creation of an exposure in real time.

12. The method of claim 11 wherein the device is a smartphone or tablet computer.

13. The method of claim 11 wherein the device is selected from the group consisting of a DSLR camera, a smartphone, a tablet computer, a personal data assistant, a digital television, a computer, a laptop, a device with an integrated digital camera, a wearable device, and a device with a digital camera.

14. An apparatus comprising:

a processor;

a memory;

a display; and

a camera device having an on-screen viewfinder;

the memory comprising a light painting live view process, the light painting live view process comprising:

accessing the camera;

capturing individual frames of footage, each of the captured frames being displayed through the on-screen viewfinder in cumulative succession;

rendering the captured frames on a graphical processing unit (GPU);

sending the captured frames through a shader program;

generating at least two images, a first image saved to the memory and a second image displayed on the display; and

rendering the first image into the second image to generate a final image.

15. The apparatus of claim 14 wherein the light painting live view process further comprises:

compressing the final image; and

converting the compressed final image to a Joint Photographic Experts Group (JPEG) file.

16. The apparatus of claim 15 wherein the light painting live view process further comprises projecting the JPEG file on the display.

17. The apparatus of claim 14 wherein the shader program receives input from the camera and outputs a progress frame.

18. The apparatus of claim 14 wherein generating the at least two images comprises:

an image/name stage; and

a blending modes stage.

19. The apparatus of claim 18 wherein the image/name stage comprises:

storing image data coming from the camera in a buffer in the memory;

using the stored image as an input to the shader program; and

outputting an intermediate image from the shader program to the display, the intermediate image blended with the input images.

20. The apparatus of claim 19 wherein the blending modes stage comprises:

blending pixels of the intermediate output image with pixels of the input image; and

replacing previous values of pixels with pixels of the intermediate output image.

21. The apparatus of claim 19 wherein the input is an OpenGL texture.

Description:
LIGHT PAINTING LIVE VIEW

CROSS REFERENCE TO RELATED APPLICATIONS

[001] This application claims the benefit of U.S. Provisional Application No. 61/693,795, filed August 28, 2012. The disclosure of the prior application is considered part of and is incorporated by reference in the disclosure of this application.

BACKGROUND OF THE INVENTION

[002] The present invention generally relates to devices having a camera feature, and more particularly to a light painting live view.

[003] Like cameras, smartphones, such as the Apple iPhone®, Samsung Galaxy®, Blackberry Q10® and the like, and tablet computers running, for example, Google's Android® operating system (O/S) and Apple's iOS® O/S, include among their features built-in cameras for taking photos. Applications executing in the smartphones and tablet computers enable control of the built-in cameras, including light painting.

[004] In general, light painting is a photographic technique, often performed at night or in a dark area, where a photographer can introduce different lighting elements during a single long exposure photograph. Light painting enables the capture of light trails, light graffiti tags, and so forth.

SUMMARY OF THE INVENTION

[005] The following presents a simplified summary of the innovation in order to provide a basic understanding of some aspects of the invention. This summary is not an extensive overview of the invention. It is intended neither to identify key or critical elements of the invention nor to delineate the scope of the invention. Its sole purpose is to present some concepts of the invention in a simplified form as a prelude to the more detailed description that is presented later.

[006] The present invention provides methods and apparatus, including computer program products, for a light painting live view.

[007] In general, in one aspect, the invention features a method including, in a device including at least a processor, a memory, a display and a camera device having an on-screen viewfinder, accessing the camera, capturing individual frames of footage, each of the captured frames being displayed through the on-screen viewfinder in cumulative succession, rendering the captured frames on a graphical processing unit (GPU), sending the captured frames through a shader program, generating at least two images, a first image saved to the memory and a second image displayed on the display, and rendering the first image into the second image to generate a final image.

[008] In another aspect, the invention features a method including, in a device including at least a processor, a memory, a display and a camera device, executing a light painting live view process in conjunction with the camera to provide a long exposure camera that displays a creation of an exposure in real time.

[009] In still another aspect, the invention features an apparatus including a processor, a memory, a display, and a camera device having an on-screen viewfinder, the memory including a light painting live view process, the light painting live view process including accessing the camera, capturing individual frames of footage, each of the captured frames being displayed through the on-screen viewfinder in cumulative succession, rendering the captured frames on a graphical processing unit (GPU), sending the captured frames through a shader program, generating at least two images, a first image saved to the memory and a second image displayed on the display, and rendering the first image into the second image to generate a final image.

[0010] These and other features and advantages will be apparent from a reading of the following detailed description and a review of the associated drawings. It is to be understood that both the foregoing general description and the following detailed description are explanatory only and are not restrictive of aspects as claimed.

BRIEF DESCRIPTION OF THE DRAWINGS

[0011] The invention will be more fully understood by reference to the detailed description, in conjunction with the following figures, wherein:

[0012] FIG. 1 is a block diagram of an exemplary smartphone.

[0013] FIG. 2 is a flow diagram of an exemplary light painting live view process.

DETAILED DESCRIPTION

[0014] The subject innovation is now described with reference to the drawings, wherein like reference numerals are used to refer to like elements throughout. In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the present invention. It may be evident, however, that the present invention may be practiced without these specific details. In other instances, well- known structures and devices are shown in block diagram form in order to facilitate describing the present invention.

[0015] As used in this application, the terms "component," "system," "platform," and the like can refer to a computer-related entity or an entity related to an operational machine with one or more specific functionalities. The entities disclosed herein can be either hardware, a combination of hardware and software, software, or software in execution. For example, a component may be, but is not limited to being, a process running on a processor, a processor, an object, an executable, a thread of execution, a program, and/or a computer. By way of illustration, both an application running on a server and the server can be a component. One or more components may reside within a process and/or thread of execution and a component may be localized on one computer and/or distributed between two or more computers. Also, these components can execute from various computer readable media having various data structures stored thereon. The components may communicate via local and/or remote processes such as in accordance with a signal having one or more data packets (e.g., data from one component interacting with another component in a local system, distributed system, and/or across a network such as the Internet with other systems via the signal).

[0016] In addition, the term "or" is intended to mean an inclusive "or" rather than an exclusive "or." That is, unless specified otherwise, or clear from context, "X employs A or B" is intended to mean any of the natural inclusive permutations. That is, if X employs A, X employs B, or X employs both A and B, then "X employs A or B" is satisfied under any of the foregoing instances. Moreover, articles "a" and "an" as used in the subject specification and annexed drawings should generally be construed to mean "one or more" unless specified otherwise or clear from context to be directed to a singular form.

[0017] As shown in FIG. 1, an exemplary device 10 includes at least a processor 15, a memory 20, a display unit 25, a camera 30 and a graphical processing unit (GPU) 35. Example devices 10 include DSLR cameras, smartphones, tablet computers, personal data assistants, digital televisions, computers, laptops, devices with an integrated digital camera such as the Nintendo® DS, wearable devices, devices with a digital camera, and so forth. The GPU 35 is an electronic circuit designed to rapidly manipulate and alter memory 20 to accelerate the creation of images in a frame buffer intended for output to the display unit 25.

[0018] The memory 20 can include at least an operating system (O/S) 40, such as Windows®, Linux®, Google's Android®, Apple's iOS®, or a proprietary OS, and a light painting live view process 100.

[0019] Light painting is a photographic technique in which exposures are made by moving a hand-held light source or by moving the camera. The term light painting also encompasses images lit from outside the frame with hand-held light sources. By moving the light source, the light can be used to selectively illuminate parts of the subject or to "paint" a picture by shining it directly into the camera lens. Light painting requires a slow shutter speed, usually a second or more. Light painting can take on the characteristics of a quick pencil sketch.

[0020] Light painting by moving the camera, also called camera painting, is the antithesis of traditional photography. At night, or in a dark room, the camera can be taken off the tripod and used like a paintbrush. An example is using the night sky as the canvas, the camera as the brush and cityscapes (amongst other light sources) as the palette. Putting energy into moving the camera by stroking lights, making patterns and laying down backgrounds can create abstract artistic images.

[0021] Light painting can also be done interactively using a webcam. The painted image can be seen while it is being drawn by using a monitor or projector.

[0022] Another technique used in the creation of light art is the projection of images on to irregular surfaces (faces, bodies, buildings, and so forth), in effect "painting" them with light. A photograph or other fixed portrayal of the resulting image is then made.

[0023] The light painting live view process 100 executes in conjunction with the camera 30 to provide a long exposure camera that displays the creation of the exposure in real time.

[0024] The device 10 can support a variety of applications, such as a telephone application, a video conferencing application, an e-mail application, an instant messaging application, a blogging application, a web browsing application, a digital music player application, and/or a digital video player application.

[0025] The light painting live view process 100 is a light painting application. In light painting, a user can use a light source to draw shapes and patterns in front of a camera set to a long exposure. The light painting live view process 100 enables the user behind the camera 30 within the device 10 to watch the shapes or patterns that are being created, as they are being created. In prior approaches, the user had to wait until the end of the exposure to see what had been made or created.

[0026] As shown in FIG. 2, the light painting live view process 100 accesses (105) the camera, which captures individual frames of footage, each of the captured frames displayed on a viewfinder in cumulative succession.

[0027] While frames are being captured by the camera, the light painting live view process 100 renders (110) the captured frames on a graphical processing unit (GPU); this rendering provides the user-facing camera "viewfinder" feature of the light painting live view process 100.

[0028] For every frame that is being captured to create an image, the light painting live view process 100 also sends (115) the frame through a shader program (also referred to as a vertex and fragment program) on the graphical processing unit (GPU). In general, a shader is a computer program that is used to do shading, produce special effects and/or do post-processing. Shaders calculate rendering effects on graphics hardware with a high degree of flexibility. Most shaders are coded for a graphics processing unit (GPU), though this is not a strict requirement. The position, hue, saturation, brightness, and contrast of all pixels, vertices, or textures used to construct a final image can be altered on the fly, using algorithms defined in the shader, and can be modified by external variables or textures introduced by the program calling the shader.

[0029] Sending (115) the captured frames through the shader creates two images, one image saved (120) to the device's memory and the other image displayed (125) by the light painting live view process 100 for the user to see as if they were watching a video. The light painting live view process 100 uses frames from the camera as the input of the shader program and a progress frame as the output of the shader program. Through additive blending, one image is rendered (130) into the other by the light painting live view process 100, i.e., the image that is being drawn progressively is rendered to the display.
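By way of illustration only, the per-frame flow just described might look as follows in host code. This is a minimal sketch in C against the OpenGL ES 2.0 API; the helpers upload_camera_frame and draw_fullscreen_quad are hypothetical placeholders for platform-specific code, and GL_MAX blending on ES 2.0 requires the EXT_blend_minmax extension (on desktop OpenGL or ES 3.0 the token is simply GL_MAX).

    #include <GLES2/gl2.h>
    #include <GLES2/gl2ext.h>   /* for GL_MAX_EXT (EXT_blend_minmax) */

    /* Hypothetical helpers, not part of OpenGL: platform code that copies the
     * camera's raw frame into a GL texture and draws a screen-filling quad. */
    extern void upload_camera_frame(GLuint input_texture);
    extern void draw_fullscreen_quad(void);

    /* One iteration of the capture loop: blend the newest camera frame into
     * the accumulated painting (an FBO-backed texture), then show progress. */
    void light_painting_frame(GLuint program, GLuint input_texture,
                              GLuint accum_fbo, GLuint accum_texture,
                              float brightness)
    {
        upload_camera_frame(input_texture);     /* raw image -> input image */

        glUseProgram(program);                  /* the fragment shader */
        glUniform1f(glGetUniformLocation(program, "u_brightness"), brightness);

        /* Pass 1: render into the intermediate output image with GL_MAX
         * blending, so each pixel keeps the brightest value seen so far. */
        glBindFramebuffer(GL_FRAMEBUFFER, accum_fbo);
        glEnable(GL_BLEND);
        glBlendEquation(GL_MAX_EXT);
        glBindTexture(GL_TEXTURE_2D, input_texture);
        draw_fullscreen_quad();

        /* Pass 2: draw the accumulated image to the default framebuffer (the
         * screen) so the user watches the exposure build up in real time. */
        glBindFramebuffer(GL_FRAMEBUFFER, 0);
        glDisable(GL_BLEND);
        glBindTexture(GL_TEXTURE_2D, accum_texture);
        draw_fullscreen_quad();
    }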

[0030] Once the user signals the light painting live view process 100 to stop, the light painting live view process 100 converts (135) the image that has been rendered into memory to a Joint Photographic Experts Group (JPEG) file and projects (140) the JPEG file as a final image on the display.
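A hedged sketch of this readback step follows: glReadPixels is the standard GL call for copying the accumulated image out of the frame buffer, while write_jpeg is a hypothetical stand-in for whatever JPEG encoder the platform provides (for example libjpeg, or a platform image API).

    #include <GLES2/gl2.h>
    #include <stdlib.h>

    /* Hypothetical encoder, not part of OpenGL: compresses RGBA pixels to a
     * JPEG file for storage and display. */
    extern void write_jpeg(const unsigned char *rgba, int width, int height);

    /* Read back the finished painting from the accumulation FBO and hand the
     * pixels to a JPEG encoder. */
    void save_final_image(GLuint accum_fbo, int width, int height)
    {
        unsigned char *pixels = malloc((size_t)width * height * 4);

        glBindFramebuffer(GL_FRAMEBUFFER, accum_fbo);
        glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);

        write_jpeg(pixels, width, height);  /* compress, then store/display */
        free(pixels);
    }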

[0031] As described above, a user initiates the light painting live view process 100, which generates a home screen graphical user interface (GUI). The GUI includes a main navigation bar that includes a pictorial rendering of a small camera. When the small camera is tapped, the light painting live view process 100 opens up to the camera built into the device. The camera screen appears as though it's a video screen, ready for capture. The navigation bar shows a button to tap to begin image capture.

[0032] A video capture session is initiated and anything that passes in front of the camera will leave a trail, similar to a long exposure on a single-lens reflex/digital single-lens reflex (SLR/DSLR) camera. The difference is that the user sees the trail as it is created, in real time, like a mixture of a stop motion video and an Etch-A-Sketch®.

[0033] This is viewed through the viewfinder of the light painting live view process 100, which is a screen that accesses the forward-facing camera on the device. Anything viewed by that camera is seen through the light painting live view process 100 viewfinder.

[0034] Exposures can be set for one second, or they can run as long as the user has memory in their device to store the image/video data. The exposure can also be stopped by tapping the same button used to start the exposure.

[0035] The user can move their camera around to capture trails, or they can make their own trails with a light of their own.

[0036] For every frame that is being captured to create the image, the captured frame is sent through a shader program into the GPU.

[0037] A GL_MAX blend operation, which specifies how source and destination colors are combined, is responsible for producing the light painting, but to control the output a fragment shader program is used. The fragment shader is run on each pixel of an image, producing for each input pixel a corresponding output pixel. The fragment shader supports an "Ambient Light Amount" feature of the capture settings. By taking a brightness parameter between 0 and 1, the fragment shader enables throttling the effect of light input on the painting.

[0038] The following is one example of fragment shader source code:

    precision mediump float;

    varying vec2 v_uv;
    uniform sampler2D u_diffuseTexture;
    uniform float u_brightness;

    void main(void)
    {
        // sample color
        vec4 color = texture2D(u_diffuseTexture, v_uv);

        // calculate luminance intensity
        float lumIntensity = color.x * 0.299 + color.y * 0.587 + color.z * 0.114;

        // clamp and exaggerate luminance intensity
        lumIntensity = min(1.0, lumIntensity);
        lumIntensity = lumIntensity * lumIntensity;
        lumIntensity = max(u_brightness, lumIntensity);

        // output final color
        gl_FragColor = color * lumIntensity;
    }
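For context, a host program might compile and link the shader above as sketched below. This is an assumption-laden outline (error handling mostly omitted) using only standard OpenGL ES 2.0 calls; build_program and compile_shader are names introduced here for illustration, not part of any library. The "Ambient Light Amount" setting then maps to a single glUniform1f call on u_brightness each frame, as in the earlier loop sketch.

    #include <GLES2/gl2.h>

    /* Compile one shader stage from source; returns 0 on failure. */
    static GLuint compile_shader(GLenum type, const char *src)
    {
        GLuint shader = glCreateShader(type);
        GLint ok = 0;
        glShaderSource(shader, 1, &src, NULL);
        glCompileShader(shader);
        glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);
        return ok ? shader : 0;
    }

    /* Link the vertex and fragment stages into the program used above. */
    GLuint build_program(const char *vertex_src, const char *fragment_src)
    {
        GLuint program = glCreateProgram();
        glAttachShader(program, compile_shader(GL_VERTEX_SHADER, vertex_src));
        glAttachShader(program, compile_shader(GL_FRAGMENT_SHADER, fragment_src));
        glLinkProgram(program);
        return program;
    }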

[0059] The light painting live view process 100 then generates images in stages:

[0060] Image Stages/Names Stage

[0061] 1. Raw Image - this is the image data coming from the device's video camera, frame-by-frame, stored in a buffer managed by the operating system.

[0062] 2. Input Image - this is the image used as an input to the fragment shader program, stored in an OpenGL texture. A texture is an OpenGL Object that contains one or more images that all have the same image format. A texture can be used in two ways. It can be the source of a texture access from a shader, or it can be used as a render target. The raw image is copied into the input image.

[0063] 3. Intermediate Output Image - this is the output of the fragment shader program, stored in an OpenGL texture. The input image is rendered into the intermediate output image, using a custom OpenGL frame buffer backed by an OpenGL texture. In general, frame buffer objects are a mechanism for rendering to images other than the default OpenGL frame buffer. They are OpenGL Objects that allow you to render directly to textures, as well as blitting from one frame buffer to another. (A code sketch of stages 2 and 3 follows this list.)

[0064] 4. Preview Image - this is the output of the fragment shader program, shown on the device's display. The input image is rendered to the screen, using the default OpenGL frame buffer backed by the device's display.

[0065] 5. Output Image - this is the output of copying and compressing the data from the intermediate output image to a JPEG representation. The output image may be saved to the device's camera roll, shared via email, Facebook® or Twitter®, or uploaded to a server.
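The following sketch illustrates stages 2 and 3 in code, as referenced above. It assumes OpenGL ES 2.0 and RGBA camera data; create_input_texture and create_accum_fbo are illustrative names, not part of OpenGL.

    #include <GLES2/gl2.h>

    /* Stage 2: create the input-image texture and copy one raw camera frame
     * into it ("rgba" stands in for the OS-managed buffer of stage 1). */
    GLuint create_input_texture(const unsigned char *rgba, int width, int height)
    {
        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        /* Clamp and linear-filter; clamping is required for non-power-of-two
         * textures on OpenGL ES 2.0. */
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, rgba);
        return tex;
    }

    /* Stage 3: build the custom frame buffer object backed by a texture, so
     * the shader's output (the intermediate output image) can be sampled. */
    GLuint create_accum_fbo(GLuint backing_texture)
    {
        GLuint fbo;
        glGenFramebuffers(1, &fbo);
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                               GL_TEXTURE_2D, backing_texture, 0);
        if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
            return 0;   /* incomplete attachment; treat as an error */
        glBindFramebuffer(GL_FRAMEBUFFER, 0);
        return fbo;
    }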

[0066] Through additive blending, one image is rendered into the other in the order laid out above. The image that is being drawn progressively is rendered to the display.

[0067] Blending Modes Stage

[0068] To produce a light painting, the pixels of the intermediate output image are blended with the pixels of the input image. The output of that blending process is then used to replace the previous value of each pixel of the intermediate output image.

[0069] The OpenGL blend mode "GL_MAX" is used to blend the pixels. The maximum of the two pixel values is the output of the operation.

[0070] The following describes the effect of the GL_MAX blend mode on pixel values (taken from the OpenGL documentation at http://www.opengl.org/sdk/docs/man/xhtml/glBlendEquation.xml), where the subscripts s, d and r denote the source, destination and result components:

    Mode      RGB Components            Alpha Component
    GL_MAX    Rr = max(Rs, Rd)          Ar = max(As, Ad)
              Gr = max(Gs, Gd)
              Br = max(Bs, Bd)
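Conceptually, the table above amounts to a per-component maximum. A purely illustrative CPU-side equivalent for one RGBA pixel is shown below; in the process described here the GPU applies this rule via the blend equation, so this code is only for exposition.

    /* Conceptual CPU-side equivalent of GL_MAX for one RGBA pixel: every
     * result component is the larger of the source and destination values. */
    void blend_max_pixel(const unsigned char src[4], unsigned char dst[4])
    {
        for (int i = 0; i < 4; i++)
            dst[i] = src[i] > dst[i] ? src[i] : dst[i];
    }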

[0080] When all done, the output image is displayed. As noted in stage 5 above, the output image is the JPEG representation of the intermediate output image, and it may be saved to the device's camera roll, shared via email, Facebook® or Twitter®, or uploaded to a server.

[0082] While the above describes a particular order of operations performed by certain embodiments of the invention, it should be understood that such order is exemplary, as alternative embodiments may perform the operations in a different order, combine certain operations, overlap certain operations, or the like. References in the specification to a given embodiment indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic.

[0083] While given components of the system have been described separately, one of ordinary skill will appreciate that some of the functions may be combined or shared in given instructions, program sequences, code portions, and the like.

[0084] The foregoing description does not represent an exhaustive list of all possible implementations consistent with this disclosure or of all possible variations of the implementations described. A number of implementations have been described. Nevertheless, it will be understood that various modifications may be made without departing from the spirit and scope of the systems, devices, methods and techniques described here. Accordingly, other implementations are within the scope of the following claims.
