Title:
METHOD AND APPARATUS FOR IMAGE PROCESSING
Document Type and Number:
WIPO Patent Application WO/2019/120592
Kind Code:
A1
Abstract:
Method and apparatus for processing a digital image file, comprising providing a digital image including image data for a plurality of pixels; detecting an area within the digital image; recording an identification of the detected area in a metadata field associated with the digital image file, which metadata field is dedicated to identifying an image area.

Inventors:
EKSTRÖM JÖRGEN (SE)
EKBERG BJÖRN (SE)
Application Number:
PCT/EP2017/084565
Publication Date:
June 27, 2019
Filing Date:
December 22, 2017
Assignee:
SONY MOBILE COMMUNICATIONS INC (JP)
SONY MOBILE COMM AB (SE)
International Classes:
G11B27/034; G06T7/10; G06T7/60
Domestic Patent References:
WO2007103883A22007-09-13
Foreign References:
US20090022473A12009-01-22
EP1463318A12004-09-29
US20150172563A12015-06-18
US20070274519A12007-11-29
Other References:
None
Attorney, Agent or Firm:
NEIJ & LINDBERG AB (SE)
Claims:
CLAIMS

1. Method for processing a digital image file, comprising

providing a digital image including image data for a plurality of pixels;

detecting an area within the digital image;

recording an identification of the detected area in a metadata field associated with the digital image file, wherein the metadata field is dedicated to identifying an image area.

2. The method of claim 1, comprising

determining pixel data associated with said detected area.

3. The method of claim 1 or 2, comprising

identifying pixel data of a number of pixels forming part of a contour of said detected area.

4. The method of any preceding claim, wherein said identification data includes an identification of a shape of said detected area.

5. The method of any preceding claim, wherein detecting an area includes processing the digital image to detect at least one predetermined area shape.

6. The method of any preceding claim, wherein the steps of detecting an area and recording an identification are automatically carried out by a processing device.

7. The method of any preceding claim, wherein detecting an area includes

displaying the digital image on a display;

detecting a user-selected area shape defining said detected area.

8. The method of claim 7, further including presenting one or more selectable area shapes on the display.

9. The method of any preceding claim, including

detecting a modification of the detected area shape.

10. The method of any of claims 6-9, including

identifying pixel data associated with the detected area shape.

11. The method of any preceding claim, wherein said identification includes pixel data for a number of successive contour intersection points.

12. The method of any preceding claim, comprising

determining a perspective of the detected area;

recording identification of said perspective in a metadata field of the digital image.

13. The method of any preceding claim, comprising

recording an identification of a right to insert alternative image data in said detected area, in a metadata field of the digital image.

14. The method of any preceding claim, wherein providing a digital image includes capturing an image with an imaging device.

15. The method of any preceding claim, comprising

storing the metadata with the recorded identification.

16. The method of any preceding claim, comprising

providing alternative image data in said detected area;

storing the image data file.

17. A computer program product comprising program code, which code may be executed by a data processing device for processing a digital image file including image data for a plurality of pixels, to provide a digital image of the digital image file;

detect an area within the digital image;

record an identification of the detected area in a metadata field associated with the digital image file in an image storage, which metadata field is dedicated to identifying an image area.

18. The computer program product of claim 17, comprising code which is executable to record an identification of a right to insert alternative image data in said detected area, in a metadata field of the digital image.

19. The computer program product of any of the preceding claims 17-18, comprising code which is executable to carry out any of the steps of claims 1-16.

20. A digital image processing apparatus, comprising

data memory for storing the computer program code of the computer program product of any of the preceding claims 17-19; and

a processing device configured to execute said computer program code.

21. The digital image processing apparatus of claim 20, wherein said data memory is a non-volatile memory.

Description:
METHOD AND APPARATUS FOR IMAGE PROCESSING

Technical field

A method and an apparatus for processing a digital image are provided, whereby the digital image is processed to be conveniently adapted for further post-processing. More specifically, a digital image may be processed to control and simplify management of insertion of alternative image data in the digital image.

Background

Digital image processing is a term widely used to denote adaptation of a digital image. The purpose and indeed the methods may vary, and include objective or subjective improvement of the image, so as to modify e.g. sharpness, color, saturation, blurring and other properties of the image. Such properties may be effects caused upon capturing the image or digitalization of an analog image, such as the quality of the camera used to capture the image, or the conditions at the time of capturing, or later.

One field of digital image processing relates to providing additional or alternative image data in a digital image. In such a process, a digital object may be added to a digital image, e.g. by pasting the digital object into the image. In such image processing, the content of the digital image may be modified so as to visibly include a quite different picture. As an example, a person or other object may be added to or removed from a digital image, which may entirely change the context of the image.

Today, an extensive selection of digital image processing software and apparatuses is available on the market. This allows both professional photographers and consumers to modify digital images to their needs or expectations.

Summary

A problem associated with image processing is the management and control of further augmentation of a digital image. This relates both to the extended possibilities of modifying images, and to rights with regard to a digital image. An owner or rights holder of a digital image may want to limit or prohibit the extent to which the image may be modified by image processing, or control under which circumstances this is done. Various solutions targeting this object are provided herein.

According to one aspect, a method is provided for processing a digital image file, comprising

providing a digital image including image data for a plurality of pixels;

detecting an area within the digital image;

recording an identification of the detected area in a metadata field associated with the digital image file, which metadata field is dedicated to identifying an image area.

In one embodiment, the method comprises determining pixel data associated with a detected area.

In one embodiment, the method comprises identifying pixel data of a number of pixels forming part of a contour of said detected area.

In one embodiment, said identification data includes an identification of a shape of said detected area.

In one embodiment, detecting an area includes processing the digital image to detect at least one predetermined area shape.

In one embodiment, the steps of detecting an area and recording an identification are automatically carried out by a processing device.

In one embodiment, detecting an area includes

displaying the digital image on a display;

detecting a user-selected area shape defining said detected area.

In one embodiment, the method comprises presenting one or more selectable area shapes on the display.

In one embodiment, the method comprises detecting a modification of the detected area shape.

In one embodiment, the method comprises identifying pixel data associated with the detected area shape.

In one embodiment, said identification includes pixel data for a number of successive contour intersection points.

In one embodiment, the method comprises determining a perspective of the detected area; and recording identification of said perspective in a metadata field of the digital image.

In one embodiment, the method comprises recording an identification of a right to insert alternative image data in said detected area, in a metadata field of the digital image.

In one embodiment, providing a digital image includes capturing an image with an imaging device.

In one embodiment, the method comprises storing the metadata with the recorded identification.

In one embodiment, the method comprises providing alternative image data in said detected area; and storing the image data file.

According to a second aspect, a computer program product comprising program code is provided, which code may be executed by a data processing device for processing a digital image file including image data for a plurality of pixels, to

provide a digital image of the digital image file;

detect an area within the digital image;

record an identification of the detected area in a metadata field associated with the digital image file in an image storage, which metadata field is dedicated to identifying an image area.

In one embodiment, the computer program product comprises code which is executable to

record an identification of a right to insert alternative image data in said detected area, in a metadata field of the digital image.

In one embodiment, the computer program product comprises code which is executable to carry out any of the method steps outlined above.

According to a third aspect, a digital image processing apparatus is provided, comprising

data memory for storing the computer program code of the computer program product of any of the preceding embodiments above; and

a processing device configured to execute said computer program code.

In one embodiment, said data memory is a non-volatile memory.

Brief description of the drawings

Exemplary embodiments will be described below with reference to the drawings, in which

Fig. 1 schematically illustrates a method for processing a digital image according to an embodiment;

Fig. 2 schematically illustrates entities associated with an apparatus for processing an image according to an embodiment;

Figs 3A-3C schematically illustrate a digital image in various steps of a method for processing an image according to an embodiment;

Fig. 4 schematically illustrates a step of processing a digital image according to an embodiment;

Fig. 5 schematically illustrates modification of a digital image based on stored digital image data according to an embodiment;

Fig. 6 schematically illustrates modification of a digital image based on stored digital image data according to another embodiment; and

Fig. 7 schematically illustrates metadata fields of a digital image file according to various embodiments.

Detailed description

The invention will be described more fully hereinafter with reference to the accompanying drawings, in which embodiments of the invention are shown. This invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art.

It will be understood that, when an element is referred to as being "connected" to another element, it can be directly connected to the other element, or intervening elements may be present. In contrast, when an element is referred to as being "directly connected" to another element, there are no intervening elements present. Like numbers refer to like elements throughout. It will furthermore be understood that, although the terms first, second, etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first element could be termed a second element, and, similarly, a second element could be termed a first element, without departing from the scope of the present invention. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.

Well-known functions or constructions may not be described in detail for brevity and/or clarity. Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of this specification and the relevant art, and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.

Embodiments of the invention are described herein with reference to schematic illustrations of idealized embodiments of the invention. As such, variations from the shapes and relative sizes of the illustrations as a result, for example, of manufacturing techniques and/or tolerances, are to be expected. Thus, embodiments of the invention should not be construed as limited to the particular shapes and relative sizes of regions illustrated herein but are to include deviations in shapes and/or relative sizes that result, for example, from different operational constraints and/or from manufacturing constraints. Thus, the elements illustrated in the figures are schematic in nature and their shapes are not intended to illustrate the actual shape of a region of a device and are not intended to limit the scope of the invention.

Various embodiments disclosed herein are associated with processing a digital image file, wherein the digital image file includes a digital image, defined by at least image data for a plurality of pixels. This image data may include values of e.g. luminance and color, for individual pixels or fields of pixels, according to the established art. The digital image file may further include metadata associated with the digital image. Such metadata may comprise text information pertaining to an image file, and may in one embodiment be embedded into the digital image file. In another embodiment, the metadata may be contained in a separate file that is associated with the digital image file, such as a sidecar file. In various embodiments, the image metadata may include details relevant to the image itself as well as information about its production. Some metadata may be generated automatically by the device capturing the image. Additional metadata may be added after capture and edited through dedicated software or general image editing software. Metadata may also be added directly on some digital cameras.

The image metadata may comprise technical metadata, such as camera details and settings, e.g. aperture, shutter speed, ISO number, focal depth, dots per inch (DPI), etc. Other automatically generated metadata may include camera brand and model, the date and time when the image was created, and the GPS location where it was created. The image metadata may also comprise descriptive metadata, which may be added manually through imaging software by the photographer or someone managing the image. This may include the name of the image creator, keywords related to the image, captions, titles and comments, among many other possibilities. The image metadata may further include administrative metadata, which may be added manually or automatically. Such metadata may include usage and licensing rights, restrictions on reuse, and contact information for the owner of the image. Several standardized metadata formats exist, including the IPTC Information Interchange Model (IIM), the Extensible Metadata Platform (XMP), the Exchangeable Image File Format (Exif), the Dublin Core Metadata Initiative (DCMI) and the Picture Licensing Universal System (PLUS).
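The embedded-versus-sidecar distinction above can be sketched with a minimal example. Every field name below is an illustrative assumption for the sketch, not a tag defined by IPTC, XMP or Exif:

```python
import json

# Illustrative metadata record for an image file, grouped into the
# technical / descriptive / administrative categories described above.
metadata = {
    "ImageFile": "IMG_0001.jpg",
    "Technical": {"Aperture": "f/2.8", "ISO": 100, "DPI": 300},
    "Descriptive": {"Creator": "J. Doe", "Keywords": ["wall", "poster"]},
    "Administrative": {"UsageRights": "editorial-only"},
}

# Serialize as a sidecar file stored next to the image file
# (e.g. IMG_0001.json), rather than embedding it in the image itself.
sidecar = json.dumps(metadata, indent=2)
```

A real implementation would instead write one of the standardized formats listed above; the JSON sidecar merely shows the separate-file option.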

Fig. 1 broadly discloses an embodiment of a method according to the invention.

In a step 10, the method may comprise providing a digital image including image data. In various embodiments, this may be carried out by retrieving the digital image from an image data memory, or by capturing an image with an imaging device, such as a digital camera. The digital image may comprise image data for a plurality of pixels.

In a step 20, the method may include detecting an area within the digital image. This step may in various embodiments be carried out automatically or semi- automatically by executing image processing software, e.g. configured for pattern recognition, to detect borders of an area within the image, such as by means of edge detection. Detecting an area may include processing the digital image to detect at least one predetermined area shape. Furthermore, the step of detecting an area may alternatively or additionally involve detecting user interaction, so as to e.g. select a detected area, or to modify or choose an area shape, size or position in the image.
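As a rough sketch of step 20, automatic area detection can be reduced to finding the bounding box of a binary mask produced by an edge or pattern detector. The mask and helper below are assumptions for illustration, not the detection algorithm of the disclosure:

```python
def detect_area(mask):
    """Return the bounding box (top, left, bottom, right) of the set
    pixels in a binary mask -- a stand-in for the border detection
    (e.g. edge detection) described in the text."""
    rows = [r for r, row in enumerate(mask) if any(row)]
    cols = [c for c in range(len(mask[0])) if any(row[c] for row in mask)]
    if not rows:
        return None  # no area detected
    return (rows[0], cols[0], rows[-1], cols[-1])

# A toy mask where a detector has marked a candidate area:
mask = [
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
]
# detect_area(mask) -> (1, 1, 2, 3)
```

A detected box like this could then be confirmed, modified or discarded through the user interaction mentioned above.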

In various embodiments, detecting an area may include determining an identification of the detected area. In one embodiment, the identification of the detected area may include one or more of: position data associated with a position of the detected area within the digital image, shape data associated with the detected area, border data associated with the detected area, and perspective data associated with a depth angle or projection of the image portion within the detected area. In various embodiments, the step of detecting an area within the digital image may include determining pixel data associated with a detected area. This may include identifying pixel data of a number of pixels forming part of a contour of said detected area. Such pixel data may include intersection points of a contour encompassing or defining the detected area.

In a step 30, the method may comprise recording an identification of the detected area in a metadata field associated with the digital image file, which metadata field is dedicated to identifying an image area. The step of recording may include automatically entering or documenting data determined with respect to the detected area, in one or more dedicated data fields of a metadata file or metadata record. In various embodiments, the identification may include pixel data for a number of successive contour intersection points, such that an order of pixels defining a contour of a detected area is determined. The identification may alternatively, or additionally, include identification data associated with a shape of the area. Step 30 may be combined with a step of storing the metadata with the recorded identification, in a separate data file or embedded in the digital image file. In various embodiments, the identification may include a definition of a shape and a number of pixels or position points referring to positions in the image. In one exemplary embodiment, n pixels and an identification of a polygon with n sides will define the size, shape and position of the polygon area, such as a three-sided triangle, a four-sided tetragon etc. In one embodiment, the identification may include a definition of a circle or an oval, with three or more pixels or image positions defined. In various embodiments, the identification recorded in the metadata may alternatively, or additionally, include a definition of a magnitude of side lengths or cross-section in combination with an identification of the shape type. In various embodiments, the identification may include a definition of a perspective associated with the detected area. This may be related to rotation within a plane of the image. In one embodiment, the definition of perspective may be associated with a depth angle of the imaged area, as will be outlined with reference to Fig. 5.
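A minimal sketch of step 30 follows, recording the ordered contour intersection points, optionally combined with a shape type, in a dedicated field. The dictionary layout and field names are assumptions for illustration:

```python
def record_area(metadata, corners, shape=None):
    """Record an identification of a detected area in a metadata
    record: ordered contour intersection points (pixel coordinates),
    optionally combined with a shape-type identification."""
    entry = {"Area": list(corners)}
    if shape is not None:
        entry["Shape"] = shape
    metadata.setdefault("DetectedAreas", []).append(entry)
    return metadata

# Four corner pixels recorded in order, between which straight
# borders define a rectangular area:
meta = record_area({}, corners=[(10, 20), (10, 80), (50, 80), (50, 20)],
                   shape="rectangle")
```

Storing this record in a sidecar file, or embedding it in the image file, would correspond to the storing step mentioned above.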

Fig. 2 illustrates, by way of example, various entities associated with an apparatus for processing an image according to an embodiment. It should be understood that the drawing of Fig. 2 depicts functional entities or elements rather than physical units or objects. In one aspect, Fig. 2 illustrates an image processing device comprising at least a data processing device 201, and a data memory 202 for storing computer program code. In various embodiments, the data memory is a non-volatile memory. The data processing device 201 may comprise a central processing unit, and may operate as a control unit by executing computer program code stored in the data memory 202.

The apparatus may include or be connected to an image data storage 204, configured for storing digital image files and/or metadata associated with such digital image files. The image data storage 204 may comprise or be connectable to a remote storage by means of a communication link. As such, the image data storage 204 may partly or fully be accessible to the processing device 201 through a wireless or wired data link over the Internet or a local network. The apparatus may further include or be connectable to an imaging device 205, such as a digital camera. A display 203 may further be included in or connectable to the apparatus, for displaying digital images generated from digital image files. In various embodiments, the apparatus may form part of a computer device, a mobile phone or a tablet, and may as such include all of the entities functionally depicted in Fig. 2.

In various embodiments, a computer program product comprising program code is provided, which code may be executed by a data processing device such as processing device 201 for processing a digital image file including image data for a plurality of pixels, to provide a digital image of the digital image file, detect an area within the digital image, and record an identification of the detected area in a metadata field associated with the digital image file in an image storage, which metadata field is dedicated to identifying an image area.

Figs 3A to 3C schematically illustrate an example of processing of a digital image according to various embodiments according to the process of Fig. 1, which may be carried out by means of a computer program product executed by an apparatus as shown in Fig. 2.

In Fig. 3A, a digital image 30 is shown, which may be provided from an image data memory 204 or from an imaging device 205. In this example, the image depicts a person standing beside a poster or billboard 31.

In Fig. 3B, an area 32 within the image 30 is detected. In one embodiment, this may be accomplished by executing computer program code of a computer program product, configured to detect edges or borders in the image. In various embodiments, the computer program product may be configured to detect, or to single out by filtering, edges of areas of predetermined shapes, such as closed areas defined primarily or entirely by substantially straight lines, such as the poster 31 in the image 30.

In one embodiment, detection of the area 32 may be accomplished by detecting user selection of that area. This may include presenting the image 30 on a display 203, and detecting input of intersection points or borders by a user, e.g. by using a cursor control or touch input directly on a touch-sensitive display 203.

In one embodiment, detecting the area 32 may include executing computer program code to detect and present detected corner positions or borders representing a contour of an area, and detecting user interaction to modify or select the detected area 32.

In Fig. 3B, the detected area 32 is illustrated by means of a dashed line, which is one exemplary way of presenting the detected area 32. Alternative embodiments of presenting the detected area may include deleting or substituting the area inside its contour with a blank space, or otherwise marking it by e.g. shading or coloring.

Fig. 7 illustrates an exemplary representation of some metadata fields for image metadata associated with an image 30, according to various embodiments. It should be noted that in different embodiments only some of the metadata fields may be included, and in some embodiments, metadata values may be recorded in only some of the metadata fields. The metadata may include fields 71 providing a definition of the type of metadata, such as a tag identification. The metadata for an image 30 may include a large number of metadata fields, in addition to those shown in Fig. 7. For one or more of the tags, values 72 may be recorded associated with the image 30.

In various embodiments, one or more metadata fields are dedicated to identifying an image area 32. Examples of such metadata fields are provided in Fig. 7.

One exemplary metadata field is an area tag 711. The corresponding value field 712 may be populated to record pixel data or other position data representing a position within the digital image 30. The value may include identified pixel data of a number of pixels forming part of a contour of the detected area 32, such as four pixels or positions (i,j);(k,l);(m,n);(o,p), representing coordinates within the digital image 30. In this context, an area 32 within the digital image 30 is a selected portion of the digital image 30, and the area tag is thus not populated to define the outer limits of the digital image 30. In various embodiments, the area value field 712 may be the only included or populated field to record metadata for the detected area 32. The area value may thus identify the detected area 32, such that the recorded pixels represent intersection points provided in order, between which straight borders define the detected area 32.
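With the area value recorded as ordered intersection points joined by straight borders, membership of a pixel in the area can be tested with a standard ray-casting check. The coordinates below are illustrative stand-ins for the recorded positions:

```python
def inside(area, x, y):
    """Ray-casting test: does pixel (x, y) fall within the polygon
    whose ordered intersection points are recorded in the area value
    field? Straight borders between successive points are assumed."""
    n, hit = len(area), False
    for i in range(n):
        (x1, y1), (x2, y2) = area[i], area[(i + 1) % n]
        # Toggle on each border crossed by a horizontal ray from (x, y).
        if (y1 > y) != (y2 > y) and x < x1 + (x2 - x1) * (y - y1) / (y2 - y1):
            hit = not hit
    return hit

# Four ordered corner positions defining a rectangular area:
area = [(1, 1), (5, 1), (5, 4), (1, 4)]
# inside(area, 3, 2) -> True; inside(area, 0, 0) -> False
```

Such a test is what lets later processing treat the recorded area as a window for insertion or modification of image data.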

One exemplary metadata field is a shape tag 721. The corresponding value field 722 may be populated to record an identification of the area 32 with regard to its shape. In one embodiment, this may include a type of shape, preferably selected from a number of predetermined selectable shape types, such as e.g. circle, oval or polygon, or a more specific identification of shape, e.g. rectangle. In one embodiment, the shape value field 722 may further include an identification of size. In various embodiments, an identification of form and size may be combined with pixel or position values, either recorded as Pos in the shape value field 722, or in a separate field such as the area value field 712.
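One way to hold the shape value described above is a small record combining form, size and position. The layout is an assumption for the sketch, not a defined metadata schema:

```python
from dataclasses import dataclass

@dataclass
class ShapeValue:
    """Illustrative layout of the shape value field 722: a shape type,
    optional side lengths, and an anchor position (Pos) in the image."""
    form: str             # e.g. "rectangle", "oval", "polygon"
    size: tuple = ()      # e.g. side lengths or cross-section, in pixels
    pos: tuple = (0, 0)   # anchor pixel (row, column)

# A 120 x 80 pixel rectangle anchored at row 10, column 20:
s = ShapeValue(form="rectangle", size=(120, 80), pos=(10, 20))
```

Recording form and size this way, with position either here or in a separate area value field, mirrors the two alternatives described in the text.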

One exemplary metadata field is an area perspective tag 731. The corresponding value field 732 may be populated to record an identification of perspective of the detected area. In one embodiment, this field 732 may be used to record a depth angle, projection data, transform data or the like, associated with the detected area 32. For a shape such as e.g. a trapezoid, the perspective value field 732 may define a perspective between the non-parallel sides of the detected area 32. In another embodiment, the perspective value field 732 may simply include an identification flag, indicating whether or not the detected area 32 is angled, meaning that the area 32 is not parallel to the image plane of the digital image. Embodiments related to the area perspective tag will be described with reference to Fig. 5.

One metadata field that may be associated with the detected area 32 is a usability tag 741. The corresponding value field 742 may be populated to record an identification of a right to insert alternative image data in said detected area. The value field 742 may include an identification flag, indicating whether or not the detected area 32 is usable by users to modify the detected area portion 32. In various embodiments, a value field 742 may alternatively, or additionally, include a reference to terms for such use, which may be legal, administrative or financial, or a combination thereof.
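Consulting the usability flag before permitting insertion can be sketched as below; the record layout mirrors the hypothetical fields above and the names are not standardized tags:

```python
def may_insert(metadata, area_index):
    """Consult the usability value (field 742 in the text) of a
    detected area before permitting alternative image data to be
    inserted. Absent or false flags deny insertion."""
    entry = metadata["DetectedAreas"][area_index]
    return bool(entry.get("Usable", False))

# An area whose owner has flagged it as usable, with reference terms:
rights_meta = {"DetectedAreas": [{
    "Area": [(0, 0), (0, 9), (9, 9), (9, 0)],
    "Usable": True,
    "Terms": "insertion licensed for commercial use",
}]}
# may_insert(rights_meta, 0) -> True
```

Editing software could run this check before exposing the area as a modifiable window.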

Returning to Fig. 3B, once the area 32 is detected, an identification of the detected area may be recorded in a metadata field associated with the digital image file, such as in one or more fields indicated in Fig. 7. A user may subsequently want to modify the digital image 30, to modify or substitute image data within the area 32. The possibility to use the detected area, and possibly the right to do so, may be determined by accessing the metadata file associated with the digital image 30, which may be embedded within the digital image file of the digital image 30. Such access to the metadata conveniently provides the borders of the area 32, e.g. by identification of contour borders or intersection points, size and position, such that the detected area may be used as a window in which insertion or modification of image data may be performed.

Fig. 3C schematically illustrates a further processed digital image 30, in which the image data within the detected area 32 has been substituted with other image data. In various embodiments, the image metadata may also comprise a field or tag, for which a value may be recorded, identifying whether modification or substitution of the image data has been made, either generally or specifically associated with an area identified in the metadata.

Fig. 4 illustrates an embodiment related to the step of detecting an area in a digital image, which at least partly involves user interaction. In one embodiment, automatic detection of areas may identify several identifiable area borders in a digital image. In such an embodiment, user interaction may be detected to sense selection of one or more of those areas, to be defined as the usable detected area. In one embodiment, the computer program product or the processing device may be configured to present usable shapes 41, 42, 43 on a display 203, concurrently with presenting a digital image 40. Detection of an area within a digital image, to be recorded in metadata, may be carried out by a user who is an owner or rights owner associated with the image, or by any user if the image is not protected or associated with any limited use. A user may be allowed to select a shape, and place it over the digital image 40 to define an area. In the exemplary embodiment, a user has selected a rectangular shape, and placed it over a portion of the digital image 40 to define a detected area 411. Selection of shape may e.g. be carried out by using cursor control input, or by touch and drag on a touch-sensitive display 203. The computer program product may also allow a user to modify a selected shape, e.g. by pulling corner positions, to define the limits of the detected area, so as to cover a desired image portion.

Fig. 5 illustrates an embodiment where a digital image 502 includes a detected area 503, indicated by dashed lines in the drawing and defined in metadata associated with the image 502. In this image 502, the detected area 503 is an area portion of a wall as seen from a perspective angle. As such, the detected area is a planar area which is not parallel to the image plane. In one embodiment, this may be recorded in the metadata associated with the image 502, such as in a perspective value field 732. This may e.g. be recorded by automatically measuring and storing the angle between the upper and lower limits of the trapezoid shape of the area 503. In another embodiment, corner intersection points or pixels of the detected area 503 may be recorded in e.g. an area value field 712 in the metadata, whereas an identification is further recorded in the metadata that the shape is rectangular. This may be recorded in a shape value field 722, or in a field specifically dedicated to indicating that the area represents a surface seen from a particular perspective angle, such as value field 732.

Fig. 5 illustrates further processing of an image 502, in which a detected area 503 is recorded in metadata. The further processing involves providing alternative image data 501 to be included in the detected area 503. As an example, a company or user may want to insert a logo or advertisement 501 in the image 502, thus processing the original image to appear as though the inserted image data was in fact provided on the wall shown in the picture of digital image 502. In various embodiments, recorded metadata associated with the image may define or restrict whether such insertion is allowed, as described. Furthermore, by applying an embodiment of the computer program product or apparatus for processing a digital image, as outlined herein, the detected area 503 may be substituted with the alternative image data 501. In the embodiment of Fig. 5, metadata associated with a perspective of the detected area 503 is thereby accessed from the metadata, and the alternative image data is adapted to comply with the accessed perspective data. As schematically illustrated in Fig. 5, any text or shapes in the alternative image data will then be modified to comply with the detected perspective. This alternative image data or image 501 may e.g. be automatically modified to the recorded perspective data by linear morphing or stretching, to accommodate the outer limits of the alternative image 501 to the borders of the detected area 503, and linearly compressing or stretching the represented image between the area borders. This may also include reforming the alternative image data 501 to be inserted by adding or removing pixels to different extents throughout the alternative image data 501, since in the exemplary embodiment the visible image will occupy more pixels at the right end than the left end when angled, relative to the original alternative image data 501. The processing as exemplified in Fig. 5 may also include storing the image data file including the modified image data. This may also include recording an identification in the metadata that the image has been modified, or even identifying that particularly the detected area is modified. The metadata may be stored embedded within the image data file.
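The "linear morphing or stretching" of the alternative image toward the borders of the detected area can be illustrated with bilinear interpolation of the area's corners; this is a sketch under the assumption that the area is given as four corner points (a full implementation would typically use a projective transform and resample every pixel):

```python
def warp_point(u, v, quad):
    """Map a normalized source coordinate (u, v) in [0, 1] x [0, 1]
    onto a quadrilateral by bilinear interpolation of its corners.

    `quad` lists the detected area's corners as
    [top-left, top-right, bottom-right, bottom-left].  Sampling the
    alternative image at (u, v) and writing the pixel at the returned
    position stretches its outer limits onto the area borders, so more
    destination pixels are spent where the quad is taller.
    """
    tl, tr, br, bl = quad
    # Interpolate along the upper and lower limits of the trapezoid.
    top = (tl[0] + u * (tr[0] - tl[0]), tl[1] + u * (tr[1] - tl[1]))
    bottom = (bl[0] + u * (br[0] - bl[0]), bl[1] + u * (br[1] - bl[1]))
    # Then interpolate vertically between the two edges.
    return (top[0] + v * (bottom[0] - top[0]),
            top[1] + v * (bottom[1] - top[1]))
```

The four source corners land exactly on the four area corners, and interior points are compressed or stretched linearly between the area borders, matching the behavior described above.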

Fig. 6 schematically illustrates an embodiment where an area 602, which is substantially identifiable as having a predetermined shape, is detectable within a digital image 601, but where that predetermined shape is incomplete or disrupted. A computer program product may be configured to detect this disruption and accommodate for it. In one example, a generally rectangular shape 603 of the area 602 may be automatically detected by e.g. edge detection and pattern recognition, or a rectangular shape may be user-selected and shaped to adapt to the generally rectangular borders 603 seen in the image 601, as described above. The computer program product may further be configured to detect a deviation from the generally rectangular shape, and calculate e.g. a number of intersection points to define this disruption. This way, a detected area 602 may be identified, and recorded in metadata of the image, as described with reference to Fig. 7. When a user subsequently executes a computer program product to further add alternative image data 604 into the image 601, the alternative image data may not only be resized and potentially angled to fit the generally rectangular shape 603 of the area 602, but also be filtered by cropping the alternative image 604 to be inserted, to adapt to the disruption and accommodate for it. The computer program product may further be executed to insert the filtered alternative image data at the place of the detected area 602. As seen in the example of Fig. 6, this may create the effect that the inserted image data in fact occupies the area of the generally detected shape 603, but is partly obscured by an object.
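The cropping of the alternative image to accommodate the disruption can be sketched as a masked composite over the detected area; the function name and the list-of-lists pixel grids are hypothetical simplifications of real image buffers:

```python
def composite_with_occlusion(base, insert, mask):
    """Insert alternative image data, skipping occluded pixels.

    `base` and `insert` are equal-sized 2-D pixel grids covering the
    detected area; `mask` is True where the predetermined shape is
    disrupted by a foreground object.  Keeping the original pixel at
    masked positions crops the inserted image so it appears to sit
    behind the obscuring object.
    """
    return [
        [base[r][c] if mask[r][c] else insert[r][c]
         for c in range(len(base[r]))]
        for r in range(len(base))
    ]
```

In practice the mask would be derived from the intersection points recorded for the disruption; here it is simply supplied by the caller.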

The embodiments described herein may be applied to still images or to digital video. Furthermore, metadata storage may be done at capturing time or at rendering time. In other words, in one embodiment the detection of an area and the recording of an identification of that area may be carried out automatically upon capturing the image. Also, a predetermined setting in a digital imaging apparatus used for capturing the image may prescribe whether such area detection is to be active. A setting may also configure the digital imaging apparatus to automatically record a metadata field 742 related to usability or allowability of a detected fill area, for modification or insertion of alternative image data within that area. Various embodiments provide e.g. filtering of ads depending on dimensions of an area in a picture. Advertising is often done outside or around content and photos. By augmenting the photos, ads or any other information could be included in the photo at a later point. This disclosure provides solutions for annotating digital images with data about where external information, ads or any other content could be augmented into the image. As an example, when a photo is taken by a user, areas of this photo could be defined, either manually or automatically, which could be used as ad space. By including this information e.g. in the photo's XMP or Exif data, anyone would be able to augment the photo by utilizing this data.
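Embedding the annotation in the photo's metadata could, for instance, be sketched as a small XMP-style packet; the namespace and property names below are hypothetical, and a real implementation would follow the Adobe XMP specification and write the packet into the image file's XMP segment (or mirror it in Exif):

```python
import xml.etree.ElementTree as ET

def build_xmp_area_packet(corners, usable=True):
    """Serialize a detected ad-space area as a minimal XMP-style fragment.

    `corners` lists (x, y) pixel positions of the detected area;
    `usable` mirrors the usability/allowability field described above.
    The namespace is a made-up placeholder, not a registered schema.
    """
    NS = "http://example.com/ns/adarea/1.0/"   # hypothetical namespace
    root = ET.Element("{%s}DetectedArea" % NS)
    ET.SubElement(root, "{%s}Corners" % NS).text = ";".join(
        "%d,%d" % (x, y) for x, y in corners)
    ET.SubElement(root, "{%s}Usable" % NS).text = str(usable).lower()
    return ET.tostring(root, encoding="unicode")
```

Anyone parsing the packet back out of the file can then locate the area and decide, from the usability flag, whether augmentation is permitted.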

One example of carrying out an embodiment may include:

1. User takes a photo of a soccer game. In the background there is a billboard with some ad/info from company X.

2. The billboard is marked (by user or by pattern recognition), and its coordinates and perspective are calculated and inserted into the photo's XMP or Exif data.

3. User posts the image to his online photo album, run by company Y.

4. User (or someone else) visits the album. Y augments an ad into the billboard.
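Step 4 implies a rights check before augmentation; a minimal sketch, assuming the metadata has already been parsed into a dict, with a hypothetical usability key mirroring field 742:

```python
def may_augment(metadata):
    """Gate augmentation on the recorded usability field.

    Company Y only augments the billboard area if the photo's metadata
    both identifies a detected area and marks it as usable.  The dict
    keys here are illustrative stand-ins for the parsed XMP/Exif fields.
    """
    area = metadata.get("detected_area")
    return bool(area) and metadata.get("usable_742", False)
```

Absent either the area record or an affirmative usability flag, the photo is served unmodified.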

Embodiments of methods, computer program products and image processing apparatuses have been described by reference to the drawings, which serve as an explanation of how the invention may be put into practice, but these examples shall not be construed as limitations of the invention, as set out in the claims.




 