
Title:
METHOD AND APPARATUS FOR PRODUCING SPECIAL EFFECTS IN DIGITAL PHOTOGRAPHY
Document Type and Number:
WIPO Patent Application WO/2013/153252
Kind Code:
A1
Abstract:
Images are taken in a series with differing focus settings so that in at least one first image, a target object is in focus and in at least one second image, objects neighboring the target object are out of focus so as to blur the neighboring objects. The at least one first and second images are combined into a combined image with: a first sub-image formed from a corresponding portion of the at least one first image independently of a corresponding portion of the at least one second image; a second sub-image formed from a corresponding portion of the at least one second image independently of a corresponding portion of the at least one first image; and a third sub-image between the first sub-image and the second sub-image formed by merging pixels of matching position from the at least one first image and from the at least one second image.

Inventors:
NENONEN PETRI (FI)
VARTIAINEN MARKUS (FI)
SUKSI MATTI (FI)
ILMONIEMI MARTTI (FI)
Application Number:
PCT/FI2012/050363
Publication Date:
October 17, 2013
Filing Date:
April 13, 2012
Assignee:
NOKIA CORP (FI)
NENONEN PETRI (FI)
VARTIAINEN MARKUS (FI)
SUKSI MATTI (FI)
ILMONIEMI MARTTI (FI)
International Classes:
G02B7/28; H04N5/262; G06T7/00; H04N5/232
Foreign References:
US 2009/0040321 A1 (2009-02-12)
US 2006/0098970 A1 (2006-05-11)
US 2009/0160963 A1 (2009-06-25)
US 2011/0193980 A1 (2011-08-11)
US 2012/0057070 A1 (2012-03-08)
Attorney, Agent or Firm:
NOKIA CORPORATION et al. (Jussi Jaatinen, Keilalahdentie 4, Espoo, FI)
Claims:
WHAT IS CLAIMED IS:

1. An apparatus, comprising:

an interface configured to exchange information with a camera unit; and

a processor configured to cause taking images in a series and with differing focus settings with the camera unit so that in at least one first image, a target object is in focus and in at least one second image, objects neighboring the target object are imaged out of focus so as to cause blur in the neighboring objects;

the processor being further configured to combine the at least one first image and the at least one second image to form a combined image so that:

a first sub-image is formed for the combined image from a corresponding portion of the at least one first image independently of a corresponding portion of the at least one second image;

a second sub-image is formed for the combined image from a corresponding portion of the at least one second image independently of a corresponding portion of the at least one first image; and

a third sub-image between the first sub-image and the second sub-image is formed for the combined image by merging pixels of matching position from the at least one first image and from the at least one second image.

2. The apparatus of claim 1, wherein the target object is defined as an object appearing in the focused depth of the camera unit at a given moment of time.

3. The apparatus of claim 2, wherein the given moment of time is the time when a user expresses a desire to take a photograph with a simulated diorama effect.

4. The apparatus of any of the preceding claims, wherein the processor is further configured to form a gradual transition between the first sub-image and the second sub-image in the image by blending images of different focus settings with gradient weights about a border region surrounding the desired at least one image object.

5. The apparatus of any of the preceding claims, wherein the processor is further configured to form a gradual transition between the first sub-image and the second sub-image by mixing pixels of two or more images with varying weights so that pixels closer to the target object are formed with higher weight from an image in which the target object is in focus and pixels farther from the target object are formed with greater weight from an image in which the second sub-image is out of focus.

6. The apparatus of any of the preceding claims, wherein the processor is further configured to receive focusing information from the camera unit.

7. The apparatus of claim 6, wherein the processor is further configured to determine the in-focus image from the focusing information.

8. The apparatus of claim 6 or 7, wherein the processor is further configured to determine in-focus image blocks based on the focusing information.

9. The apparatus of any of the preceding claims, wherein the processor is further configured to apply an edge-preserving smoothing filter on the image at the target image object.

10. The apparatus of any of the preceding claims, wherein the processor is further configured to enhance colors of the formed image.

11. The apparatus of any of the preceding claims, wherein the apparatus is built into a camera unit.

12. A method comprising:

exchanging information with a camera unit;

causing taking images in a series and with differing focus settings with the camera unit so that in at least one first image, a target object is in focus and in at least one second image, objects neighboring the target object are imaged out of focus so as to cause blur in the neighboring objects;

combining the at least one first image and the at least one second image to form a combined image so that:

a first sub-image is formed for the combined image from a corresponding portion of the at least one first image independently of a corresponding portion of the at least one second image;

a second sub-image is formed for the combined image from a corresponding portion of the at least one second image independently of a corresponding portion of the at least one first image; and

a third sub-image between the first sub-image and the second sub-image is formed for the combined image by merging pixels of matching position from the at least one first image and from the at least one second image.

13. The method of claim 12, wherein the target object is defined as an object appearing in the focused depth of the camera unit at a given moment of time.

14. The method of claim 13, wherein the given moment of time is the time when a user expresses a desire to take a photograph with a simulated diorama effect.

15. The method of any of claims 12 to 14, further comprising forming a gradual transition between the first sub-image and the second sub-image in the image by blending images of different focus settings with gradient weights about a border region surrounding the desired at least one image object.

16. The method of any of claims 12 to 14, further comprising forming a gradual transition between the first sub-image and the second sub-image by mixing pixels of two or more images with varying weights so that pixels closer to the target object are formed with higher weight from an image in which the target object is in focus and pixels farther from the target object are formed with greater weight from an image in which the second sub-image is out of focus.

17. The method of any of claims 12 to 16, further comprising receiving focusing information from the camera unit.

18. The method of claim 17, further comprising determining the in-focus image from the focusing information.

19. The method of claim 17 or 18, further comprising determining in-focus image blocks based on the focusing information.

20. The method of any of claims 12 to 19, further comprising applying an edge-preserving smoothing filter on the image at the target image object.

21. The method of any of claims 12 to 20, further comprising enhancing colors of the formed image.

22. A computer program comprising:

code for causing exchanging information with a camera unit;

code for causing taking images in a series and with differing focus settings with the camera unit so that in at least one first image, a target object is in focus and in at least one second image, objects neighboring the target object are imaged out of focus so as to cause blur in the neighboring objects; and

code for combining the at least one first image and the at least one second image to form a combined image so that:

a first sub-image is formed for the combined image from a corresponding portion of the at least one first image independently of a corresponding portion of the at least one second image;

a second sub-image is formed for the combined image from a corresponding portion of the at least one second image independently of a corresponding portion of the at least one first image; and

a third sub-image between the first sub-image and the second sub-image is formed for the combined image by merging pixels of matching position from the at least one first image and from the at least one second image;

when executed by an apparatus.

23. The computer program of claim 22, further comprising computer code for causing performing the method of any of claims 13 to 21, when executed by the apparatus.

24. A memory medium comprising the computer program of claim 22.

Description:
METHOD AND APPARATUS FOR PRODUCING SPECIAL EFFECTS IN DIGITAL PHOTOGRAPHY

TECHNICAL FIELD

[0001] The present application generally relates to producing special effects in digital photography.

BACKGROUND

[0002] In photography, images are formed with very different objectives. Sometimes it is desired that the so-called depth of field (DOF) is very long, extending from the nearest objects to the farthest ones; this is often the case in landscape images. Often, however, the photographer desires to compress the DOF so as to emphasize some image objects. Typically, a shallower DOF is created by using a larger lens aperture (i.e. a smaller f-number). While every lens has exactly one perfectly sharp focal plane, at a distance that depends on the focus setting and other properties of the camera, blur builds up gradually and does not become perceivable within the DOF.

[0003] In portrait and macro images, the DOF is typically shortened by using relatively large lens apertures and short shooting distances. Some professional photographers have also taken portrait images using very special objectives that can tilt and shift in relation to the camera's exposure frame of film or image sensor. Such objectives, referred to as tilt and shift objectives, are among the most expensive objectives. Tilt and shift objectives also enable photographing tall buildings so that the tops of the buildings do not seem to lean towards each other. While this geometric error can also be corrected by digital processing, the use of a tilt-shift objective with correct settings results in higher accuracy by removing the need to stretch image areas.

[0004] Tilt and shift objectives are also sometimes used to produce a so-called diorama effect or "diorama illusion", in which a life-size object or scene is made to look like a photograph of a miniature scale model. In miniature model photographs, it is easy to produce strong blur in front of and behind the focal plane because of the basic laws of optics and the short distance to the target. In life-size photography, the lens aperture cannot be increased in proportion to the increase in distances compared to macro imaging. Hence, when an image is taken e.g. from a helicopter or a tall building, the DOF is far greater than in macro imaging. Tilt-shift objectives, however, produce a wedge-shaped DOF when the objective is tilted.

SUMMARY

[0005] Various aspects of examples of the invention are set out in the claims.

[0006] According to a first example aspect of the present invention, there is provided an apparatus comprising:

[0007] an interface configured to exchange information with a camera unit;

[0008] a processor configured to cause taking images in a series and with differing focus settings with the camera unit so that in at least one first image, a target object is in focus and in at least one second image, objects neighboring the target object are imaged out of focus so as to cause blur in the neighboring objects;

[0009] the processor being further configured to combine the at least one first image and the at least one second image to form a combined image so that:

[0010] a first sub-image is formed for the combined image from a corresponding portion of the at least one first image independently of a corresponding portion of the at least one second image;

[0011] a second sub-image is formed for the combined image from a corresponding portion of the at least one second image independently of a corresponding portion of the at least one first image; and

[0012] a third sub-image between the first sub-image and the second sub-image is formed for the combined image by merging pixels of matching position from the at least one first image and from the at least one second image.

[0013] The target object may be defined as an object appearing in the focused depth of the camera unit at a given moment of time. The given moment of time may be the time when a user expresses a desire to take a photograph with a simulated diorama effect.

[0014] The processor may be further configured to form a gradual transition between the first sub-image and the second sub-image in the image by blending images of different focus settings with gradient weights about a border region surrounding the desired at least one image object. The processor may be configured to form a gradual transition between the first sub-image and the second sub-image by mixing pixels of two or more images with varying weights so that pixels closer to the target object are formed with higher weight from an image in which the target object is in focus and pixels farther from the target object are formed with greater weight from an image in which the second sub-image is out of focus.

[0015] According to a second example aspect of the present invention, there is provided a method comprising:

[0016] exchanging information with a camera unit;

[0017] causing taking images in a series and with differing focus settings so that in at least one first image, a target object is in focus and in at least one second image, objects neighboring the target object are imaged out of focus so as to cause blur in the neighboring objects;

[0018] combining the at least one first image and the at least one second image to form a combined image so that:

[0019] a first sub-image is formed for the combined image from a corresponding portion of the at least one first image independently of a corresponding portion of the at least one second image;

[0020] a second sub-image is formed for the combined image from a corresponding portion of the at least one second image independently of a corresponding portion of the at least one first image; and

[0021] a third sub-image between the first sub-image and the second sub-image is formed for the combined image by merging pixels of matching position from the at least one first image and from the at least one second image.

[0022] According to a third example aspect of the present invention, there is provided a computer program comprising:

[0023] code for causing exchanging information with a camera unit;

[0024] code for causing taking images in a series and with differing focus settings so that in at least one first image, a target object is in focus and in at least one second image, objects neighboring the target object are imaged out of focus so as to cause blur in the neighboring objects; and

[0025] code for combining the at least one first image and the at least one second image to form a combined image so that:

[0026] a first sub-image is formed for the combined image from a corresponding portion of the at least one first image independently of a corresponding portion of the at least one second image;

[0027] a second sub-image is formed for the combined image from a corresponding portion of the at least one second image independently of a corresponding portion of the at least one first image; and

[0028] a third sub-image between the first sub-image and the second sub-image is formed for the combined image by merging pixels of matching position from the at least one first image and from the at least one second image;

[0029] when executed by an apparatus.

[0030] According to a fourth example aspect of the present invention, there is provided a memory medium comprising the computer program of the third example aspect.

[0031] According to a fifth example aspect of the present invention, there is provided a method comprising:

[0032] taking images in a series with differing focus settings so that in at least one first image, a target object is in focus and in at least one second image, objects neighboring the target object are out of focus so as to blur the neighboring objects;

[0033] combining the at least one first and second images into a combined image with:

[0034] a first sub-image formed from a corresponding portion of the at least one first image independently of a corresponding portion of the at least one second image;

[0035] a second sub-image formed from a corresponding portion of the at least one second image independently of a corresponding portion of the at least one first image; and

[0036] a third sub-image between the first sub-image and the second sub-image formed by merging pixels of matching position from the at least one first image and from the at least one second image.

[0037] Any foregoing memory medium may comprise a digital data storage such as a data disc or diskette, optical storage, magnetic storage, holographic storage, opto-magnetic storage, phase-change memory, resistive random access memory, magnetic random access memory, solid-electrolyte memory, ferroelectric random access memory, organic memory or polymer memory. The memory medium may be formed into a device without other substantial functions than storing memory or it may be formed as part of a device with other functions, including but not limited to a memory of a computer, a chip set, and a sub assembly of an electronic device.

[0038] Different non-binding example aspects and embodiments of the present invention have been illustrated in the foregoing. The foregoing embodiments are used merely to explain selected aspects or steps that may be utilized in implementations of the present invention. Some embodiments may be presented only with reference to certain example aspects of the invention. It should be appreciated that corresponding embodiments may apply to other example aspects as well.

BRIEF DESCRIPTION OF THE DRAWINGS

[0039] For a more complete understanding of example embodiments of the present invention, reference is now made to the following descriptions taken in connection with the accompanying drawings in which:

[0040] Fig. 1 shows a schematic system for use as a reference with which some example embodiments of the invention can be explained;

[0041] Fig. 2 shows a block diagram of an apparatus of an example embodiment of the invention;

[0042] Fig. 3 shows a block diagram of a camera unit of an example embodiment of the invention;

[0043] Fig. 4 shows a flow chart illustrating basic operations in a process according to an example embodiment;

[0044] Fig. 5 shows an example of an image with a focus grid illustrating focus measurement blocks of an autofocus unit;

[0045] Fig. 6 shows the image of Fig. 5 with a target area in focus;

[0046] Fig. 7 shows an image taken from the view of Fig. 5 with non-target area out of focus;

[0047] Fig. 8 shows weight factors for the blurred part of the image and the smooth transition of the weight factors;

[0048] Fig. 9 shows a final image in which the non-focus blurred surroundings are merged with the crisp image of the target object and the result is color enhanced; and

[0049] Fig. 10 shows a schematic diagram illustrating forming of the smooth transition between the image of the target image object and its blurred surroundings.

DETAILED DESCRIPTION OF THE DRAWINGS

[0050] An example embodiment of the present invention and its potential advantages are understood by referring to Figs. 1 through 10 of the drawings.

[0051] Fig. 1 shows a schematic system 100 for use as a reference with which some example embodiments of the invention can be explained. The system 100 comprises a device 110 such as a camera phone or a digital camera having a camera unit 120 with a field of view 130. The system 100 further comprises a display 140. Fig. 1 also shows a target image object 150 that is being imaged by the camera unit 120. Fig. 1 also shows two other image objects: a proximate object 160 and a distant object 170 both clearly at a spatial distance with relation to the target image object 150.

[0052] The three objects in Fig. 1 will be used to describe different example embodiments which enable simulating of a diorama effect. Some of these embodiments employ only circuitries within the camera unit 120 while some other embodiments use circuitries external to the camera unit 120. Before further explaining the operations, let us introduce some example structures with which at least some of the described example embodiments can be implemented.

[0053] Fig. 2 shows a block diagram of an apparatus 200 of an example embodiment of the invention. The apparatus 200 is suited for operating as the device 110. The apparatus 200 comprises a communication interface 220, a processor 210 coupled to the communication interface 220, and a memory 240 coupled to the processor 210. The memory 240 comprises a work memory and a non-volatile memory such as a read-only memory, flash memory, or optical or magnetic memory. In the memory 240, typically at least initially in the non-volatile memory, there is stored software 250 operable to be loaded into and executed by the processor 210. The software 250 may comprise one or more software modules and can be in the form of a computer program product, i.e. software stored on a memory medium. The apparatus 200 further comprises a camera unit 260 and a viewfinder 270, each coupled to the processor 210.

[0054] It shall be understood that any coupling in this document refers to functional or operational coupling; there may be intervening components or circuitries in between coupled elements.

[0055] The communication interface module 220 is configured to provide local communications over one or more local links. The links may be wired and/or wireless links. The communication interface 220 may further or alternatively implement telecommunication links suited for establishing links with other users or for data transfer (e.g. using the Internet). Such telecommunication links may be links using any of: wireless local area network links, Bluetooth, ultra-wideband, cellular or satellite communication links. The communication interface 220 may be integrated into the apparatus 200 or into an adapter, card or the like that may be inserted into a suitable slot or port of the apparatus 200. While Fig. 2 shows one communication interface 220, the apparatus may comprise a plurality of communication interfaces 220.

[0056] The processor 210 is, for instance, a central processing unit (CPU), a microprocessor, a digital signal processor (DSP), a graphics processing unit, an application specific integrated circuit (ASIC), a field programmable gate array, a microcontroller or a combination of such elements. Figure 2 shows one processor 210, but the apparatus 200 may comprise a plurality of processors.

[0057] As mentioned in the foregoing, the memory 240 may comprise volatile and non-volatile memory, such as a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), a random-access memory (RAM), a flash memory, a data disk, an optical storage, a magnetic storage, a smart card, or the like. In some example embodiments, only volatile or only non-volatile memory is present in the apparatus 200. Moreover, in some example embodiments, the apparatus comprises a plurality of memories. In some example embodiments, various elements are integrated. For instance, the memory 240 can be constructed as a part of the apparatus 200 or inserted into a slot, port, or the like. Further still, the memory 240 may serve the sole purpose of storing data, or it may be constructed as a part of an apparatus serving other purposes, such as processing data. Similar options are conceivable for various other elements.

[0058] A skilled person appreciates that in addition to the elements shown in Figure 2, the apparatus 200 may comprise other elements, such as microphones and displays, as well as additional circuitry such as further input/output (I/O) circuitries, memory chips, application-specific integrated circuits (ASIC), and processing circuitry for specific purposes such as source coding/decoding circuitry, channel coding/decoding circuitry, and ciphering/deciphering circuitry. Additionally, the apparatus 200 may comprise a disposable or rechargeable battery (not shown) for powering the apparatus when an external power supply is not available.

[0059] It is also useful to realize that the term apparatus is used in this document with varying scope. In some of the broader claims and examples, the apparatus may refer to only a subset of the features presented in Fig. 2, or even be implemented without any one of the features of Fig. 2. In one example embodiment, the term apparatus refers to the processor 210, with an input for the processor 210 configured to receive information from the camera unit and an output for the processor 210 configured to provide information to the camera unit for adjusting focus settings.

[0060] Fig. 3 shows a block diagram of a camera unit 260 of an example embodiment of the invention. The camera unit 260 comprises an objective 261, an autofocus unit 262 configured to adjust focusing of the objective 261, an optional mechanical shutter 263, an image sensor 264 and an input and/or output 265. The camera unit 260 is configured in one example embodiment to output autofocus information from the autofocus unit 262. In one example embodiment, the camera unit is also configured to receive through the I/O 265 instructions e.g. from the processor 210 for the autofocus unit 262.

[0061] The camera unit 260 further comprises, in one example embodiment, an effect processor 266 communicatively connected to the autofocus unit 262 and to the image sensor 264. When implemented, the effect processor 266 can enable simulating the diorama effect within the camera unit 260. The effect processor can be any type of processor, such as any of the alternatives described with reference to Fig. 2.

[0062] Fig. 4 shows a flow chart illustrating basic operations in a process according to an example embodiment. First, two or more images are captured with different focus settings so that:

• In step 410, one image is taken with the target area, i.e. the target image object 150, in focus. This can be arranged as a normal autofocus operation with an autofocus target spot on the target image object 150. For better detaching the surrounding parts of the image, the focus can be driven slightly closer to the camera while still keeping the target image object 150 within the depth of field when the target image object 150 is far. Thus, the objects behind the target image object 150 become more blurred. On the other hand, when the target image object 150 is near the camera, the focus may be set slightly behind the target image object 150.

• In step 420, another image is taken with significant blur, e.g. the maximum possible blur, in the other areas, i.e. in the other image objects 160, 170. For instance, the autofocus unit 262 brings the objective 261 to its closest possible focus, i.e. macro focus, to blur the distant object 170.

• In an optional step 430, one or more additional images are taken in an example embodiment with different focus settings for causing significant blur in other image objects. For instance, the autofocus unit 262 moves the objective 261 to its most distant possible focus, i.e. infinity focus, to blur objects in the macro range. In Fig. 1, the proximate object 160 becomes blurred in such an image.
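The capture sequence of steps 410 to 430 can be sketched as follows. This is only an illustration: the camera object with its `set_focus`/`capture` methods and `MACRO_FOCUS`/`INFINITY_FOCUS` constants is a hypothetical stand-in for a real camera driver interface, not an API from this application.

```python
def capture_focus_series(camera, target_focus_distance):
    """Capture one in-focus image of the target plus one or two
    deliberately defocused images (steps 410-430). The `camera`
    object is a hypothetical driver interface."""
    images = []

    # Step 410: focus on the target object (normal autofocus result).
    camera.set_focus(target_focus_distance)
    images.append(("target_in_focus", camera.capture()))

    # Step 420: drive the lens to its closest (macro) focus so that
    # distant objects are maximally blurred.
    camera.set_focus(camera.MACRO_FOCUS)
    images.append(("macro_blur", camera.capture()))

    # Optional step 430: drive the lens to infinity focus so that
    # near (macro-range) objects are blurred as well.
    camera.set_focus(camera.INFINITY_FOCUS)
    images.append(("infinity_blur", camera.capture()))

    return images
```

The same loop generalizes to more than three focus settings; only the list of focus positions changes.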

[0063] The autofocus unit 262 is used to measure, and provide for further use, focus values from a focus block grid or another multiple-block arrangement of focus value blocks for each of the captured images. In one example embodiment, the autofocus unit 262 produces 440 focus measurements indicative of how well the pixels inside each block are in focus, e.g. based on contrast between adjacent pixels, and gives focus values for each block. These focus measurements can also be suited for normal autofocus operations.
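A per-block focus measure of the kind described in paragraph [0063] could look roughly like the sketch below. The grid size and the exact metric (mean absolute difference between horizontally adjacent pixels) are illustrative assumptions; the source only says the measure is based on contrast between adjacent pixels.

```python
import numpy as np

def block_focus_values(gray, grid=(8, 8)):
    """Per-block focus measure for a grayscale image: the mean absolute
    contrast between horizontally adjacent pixels inside each block of
    a grid. Sharper (higher-contrast) blocks yield higher values."""
    h, w = gray.shape
    rows, cols = grid
    bh, bw = h // rows, w // cols
    values = np.zeros(grid)
    for r in range(rows):
        for c in range(cols):
            block = gray[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw].astype(float)
            # Contrast between horizontally adjacent pixels.
            values[r, c] = np.abs(np.diff(block, axis=1)).mean()
    return values
```

Comparing these values across the captured images indicates, block by block, which image is sharpest at that position.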

[0064] The areas to be blurred are selected 450 based on the areas in focus, by comparing the focus values in the captured images and based on the spatial location of the areas. Often, the target object 150 resides somewhere in the middle of the image. In one example embodiment, the target object is identified on locking focus as the object at which a focus setting point is directed.

[0065] A diorama effect image is formed by merging 460 the image having the target area in focus and one or more of the other captured images. In an example embodiment, the merging employs pixelwise weighted averaging to smooth the boundaries between the target image object 150 and other parts of the merged image. At its simplest, two images are merged, i.e. one with the target in focus and a most blurred one (typically taken in the macro range).
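For the simplest two-image case, the pixelwise weighted merge of step 460 can be sketched as below; this is a minimal illustration, assuming the images are already aligned NumPy arrays and the weight map has been computed per pixel.

```python
import numpy as np

def merge_images(in_focus, blurred, weight):
    """Pixelwise weighted average of two aligned images (step 460).
    `weight` holds, per pixel, the weight of the in-focus image:
    1.0 inside the target area, 0.0 far from it, and intermediate
    values in the transition zone."""
    # Broadcast the 2-D weight map over color channels if needed.
    w = weight[..., None] if in_focus.ndim == 3 else weight
    return (w * in_focus + (1.0 - w) * blurred).astype(in_focus.dtype)
```

With more than two source images, the same idea extends to a per-pixel weighted sum whose weights add up to one.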

[0066] The weight is selected for each pixel so that:

[0067] The area detected to be in the target area has weight 1.0 for the image in focus and weight 0.0 for the blurred images;

[0068] The area spatially far from the target area, as determined based on the focus measurements, has weight 1.0 for the most blurred image and weight 0.0 for the other images. The spatial distance considered to be far is, depending on the embodiment, e.g. based on a predetermined threshold, or computed or adjusted dynamically based on the areas detected to be in focus in each image. The most blurred images can be determined simply from the autofocus measurements and the used focus settings, as the ones where the difference between the correct focus setting and the used focus setting has the greatest blurring impact.

[0069] In one example embodiment, additional image processing operations are applied 470 to result image for enhancing miniature appearance. These additional image processing operations can include one or more of the following:

• emphasizing contrast, saturation and colors over the entire image; and

• applying a moderate edge-preserving smoothing filter on the in-focus target area.
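The whole-image contrast and saturation emphasis of step 470 could be sketched as follows. The particular formulas and gain values are illustrative assumptions, not taken from the source, which does not specify how the enhancement is computed.

```python
import numpy as np

def enhance_miniature_look(rgb, saturation=1.3, contrast=1.15):
    """Boost saturation and contrast over the whole image (step 470).
    `rgb` is a float array in [0, 1]; the gain values are illustrative."""
    img = rgb.astype(float)
    # Saturation: push each channel away from the per-pixel gray value.
    gray = img.mean(axis=2, keepdims=True)
    img = gray + saturation * (img - gray)
    # Contrast: push values away from middle gray.
    img = 0.5 + contrast * (img - 0.5)
    return np.clip(img, 0.0, 1.0)
```

Gains slightly above 1.0 give the exaggerated, toy-like color typical of diorama photographs without clipping most pixels.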

[0070] Fig. 5 shows an example of an image with a focus grid illustrating focus measurement blocks of an autofocus unit.

[0071] Fig. 6 shows the image of Fig. 5 with a target area in focus. Focus blocks corresponding to the target image object 150 are shown as a target image object grid 610. The focus values are stored.

[0072] Fig. 7 shows an image taken from the view of Fig. 5 with the non-target area out of focus. Fig. 7 also shows the used autofocus block in the upper left corner at a proximate branch of a tree; this example is thus taken with macro focus. Once the image is taken, the focus values are stored for the focused blocks. In another example embodiment, the focus values are stored for all the blocks. The focus values can subsequently be used for selecting the image from which a given block is taken into the final image, so as to obtain a target image with sharply focused objects and relatively strong blur around them.

[0073] Fig. 8 shows weight factors for the blurred part of the image and the smooth transition of the weight factors. The weight factor 1 area is represented by solid black. In the transition or gradient zone, the region surrounding the target image object 150 passes through darkening shades of grey to black, representing the smooth change from the crisp target image object 150 to the blurred surroundings.

[0074] Fig. 9 shows a final image in which the out-of-focus blurred surroundings are merged with the crisp image of the target object 150 and the result is color enhanced. The final image has a good diorama effect with realistic blur that naturally depends on the spatial positions of the objects in the scene, independent of the objects' location within the image frame. Moreover, creating the blur by use of the autofocus unit 262 produces the blur without heavy computational operations. Only the smoothing of the boundary regions requires some combining of pixel values, and the number of pixels concerned is far smaller than the total number of pixels in the image.

[0075] Fig. 10 shows a schematic diagram illustrating forming of the smooth transition between the image of the target image object 150 and its blurred surroundings. A portion of the target image object grid 610 is shown on an illustration of the target image object 150. The weight for each pixel of an in focus image is calculated with a smoothing function:

[0076] W = f(d_x, d_y), wherein W is the weight, and d_x and d_y are the distances from the nearest reference point that corresponds to the target image object 150. The reference points can refer to, for example, a centerline of the focus blocks that cover the desired image. Alternatively, the reference points can refer to borderlines of the focus blocks. The choice of the reference points can be taken into account in the smoothing function f. For instance, when the centerline of the focus blocks is used for defining the reference points, the weight W of a given pixel of the in-focus image can be computed as:

W = max(1 − (max(d_x ; d_y) · scale factor) / focus block width ; 0)    (1)

[0077] when at least one of d_x, d_y is greater than 0, wherein max(value1 ; value2) refers to the greater of value1 and value2, the focus block width is the width of the focus block (the focus blocks being square) expressed in common units with d_x and d_y, and the scale factor is a factor used to determine the width of a smoothing zone. The scale factor is e.g. 0.35, so that the edges of the desired image remain fairly sharp and full blur is reached at about the distance of one focus block width from the border of the outermost focus blocks of the target image object 150. However, with the edge-preserving smoothing filter applied, the boundary region of the target image object 150 can be slightly blurred without excessive subjective impairment of the image, and thus the scale factor can also be greater, e.g. 0.5 to 0.75.
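A minimal sketch of this weight computation, under the assumption that formula (1) reads W = max(1 − max(d_x, d_y) · scale_factor / focus_block_width, 0):

```python
def in_focus_weight(dx, dy, block_width, scale_factor=0.35):
    # Full weight on the reference points; the weight then decays with
    # the larger of the two distances and is clamped at zero.
    # The exact form of formula (1) is an assumption here.
    return max(1.0 - max(dx, dy) * scale_factor / block_width, 0.0)
```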

[0078] The weight for the blurred image in the gradient zone should be 1 − W at each pixel so that the overall brightness of the image remains unchanged. It is also understood that the weight function used in the foregoing example is only one example; in another example, the function is any function selected so that the weight of the in-focus image pixels decreases with increasing distance from the in-focus area, so that the pixels of the blurred image become more prevalent than the pixels of the in-focus image. In another example, linear weighting is applied. For example,

W_x = min(1 ; max(0 ; (d_2 − d_x) / (d_2 − d_1)))    (2)

W_y = min(1 ; max(0 ; (d_4 − d_y) / (d_4 − d_3)))    (3)

W = min(W_x ; W_y)    (4), wherein

[0079] parameters d_1 to d_4 are distances from the centerline of the target object blocks to the exterior border of the target object blocks and further to the border of the blurred area, as shown in Fig. 10, and the function min(value1 ; value2) produces the smaller of its arguments (value1 and value2).
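The linear weighting of formulas (2)-(4), together with the 1 − W blend from paragraph [0078], can be sketched as follows; the clamped-ratio reading of (2) and (3) is an assumption based on the stated roles of d_1 to d_4:

```python
def clamp01(x):
    return min(1.0, max(0.0, x))

def linear_weight(dx, dy, d1, d2, d3, d4):
    # Weight 1 up to the target-block border (d1, d3), falling linearly
    # to 0 at the border of the blurred area (d2, d4); formulas (2)-(4).
    wx = clamp01((d2 - dx) / (d2 - d1))
    wy = clamp01((d4 - dy) / (d4 - d3))
    return min(wx, wy)

def blend_pixel(sharp, blurred, w):
    # Per-pixel merge: weight W from the in-focus image,
    # weight 1 - W from the blurred image.
    return w * sharp + (1.0 - w) * blurred
```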

[0080] Without in any way limiting the scope, interpretation, or application of the claims appearing below, a technical effect of one or more of the example embodiments disclosed herein is that miniature-lookalike photos can be produced without necessarily using any special hardware. Also, heavy computation can be avoided by using optical blurring with the autofocus unit. Another technical effect of one or more of the example embodiments disclosed herein is that both near and far objects can be excluded from the in-focus target area with automatic masking that requires low computational complexity. Another technical effect of one or more of the example embodiments disclosed herein is that the photographs need not be carefully designed as when using a tilt-shift lens, where the camera orientation combined with the tilt-shift settings determines which objects appear crisp and which become blurred. Yet another technical effect of one or more of the example embodiments disclosed herein is that the blur obtained is very natural and close to the effect of using a real tilt-shift objective.

[0081] Embodiments of the present invention may be implemented in software, hardware, application logic or a combination of software, hardware and application logic. The software, application logic and/or hardware may reside on the camera unit, a host device that uses the camera unit or even on a plug-in module. In an example embodiment, the application logic, software or an instruction set is maintained on any one of various conventional computer-readable media. In the context of this document, a "computer-readable medium" may be any media or means that can contain, store, communicate, propagate or transport the instructions for use by or in connection with an instruction execution system, apparatus, or device, with two examples of a suitable apparatus being described and depicted in Figs. 2 and 3. A computer-readable medium may comprise a computer-readable storage medium that may be any media or means that can contain or store the instructions for use by or in connection with an instruction execution system, apparatus, or device, such as a computer.

[0082] If desired, the different functions discussed herein may be performed in a different order and/or concurrently with each other. Furthermore, if desired, one or more of the previously described functions may be optional or may be combined.

[0083] Although various aspects of the invention are set out in the independent claims, other aspects of the invention comprise other combinations of features from the described embodiments and/or the dependent claims with the features of the independent claims, and not solely the combinations explicitly set out in the claims.

[0084] It is also noted herein that while the foregoing describes example embodiments of the invention, these descriptions should not be viewed in a limiting sense. Rather, there are several variations and modifications which may be made without departing from the scope of the present invention as defined in the appended claims.