


Title:
METHOD AND APPARATUS FOR A MULTI-CAMERA UNIT
Document Type and Number:
WIPO Patent Application WO/2018/158494
Kind Code:
A1
Abstract:
There are disclosed various methods, apparatuses and computer program products for a multi-camera unit. In some embodiments the method comprises receiving at least two streams of images of a 360 degree view captured by at least a first multi-camera unit and a second multi-camera unit, wherein the first multi-camera unit and the second multi-camera unit capture images of the same scene and the second multi-camera unit is at least partially visible in the content captured by the first multi-camera unit; and replacing the presence of the second multi-camera unit in the scene captured by the first multi-camera unit based on captured content of at least the first multi-camera unit and the second multi-camera unit. The replacing comprises utilizing information of mutual location and orientation of cameras of the first multi-camera unit and the second multi-camera unit and information of at least one of the location and orientation of at least two multi-camera units.

Inventors:
AFLAKI BENI, Payman (Atomikatu 1 F 23, Tampere, 33720, FI)
ROIMELA, Kimmo (Annalankatu 17 C 9, Tampere, 33710, FI)
AKSU, Emre Baris (Virrenkatu 3 D 30, Tampere, 33800, FI)
Application Number:
FI2018/050047
Publication Date:
September 07, 2018
Filing Date:
January 23, 2018
Assignee:
NOKIA TECHNOLOGIES OY (Karaportti 3, Espoo, 02610, FI)
International Classes:
G06T5/00; G03B37/04; G06K9/00; G06T1/00; G06T5/50; G06T7/70; G06T15/00; G06T19/20; H04N5/247; H04N13/10; H04N13/20
Attorney, Agent or Firm:
NOKIA TECHNOLOGIES OY et al. (Ari Aarnio, IPR DepartmentKarakaari 7, Espoo, 02610, FI)
Claims:
CLAIMS

1. A method comprising:

receiving at least two streams of images of a 360 degree view captured by at least a first multi-camera unit and a second multi-camera unit, wherein the first multi-camera unit and the second multi-camera unit capture images of the same scene and the second multi-camera unit is at least partially visible in the content captured by the first multi-camera unit;

replacing the presence of the second multi-camera unit in the scene captured by the first multi-camera unit based on captured content of at least the first multi-camera unit and the second multi-camera unit;

wherein the replacing comprises utilizing information of mutual location and orientation of cameras of the first multi-camera unit and the second multi-camera unit and information of at least one of the location and orientation of at least two multi-camera units.

2. The method according to claim 1, wherein the replacing comprises utilizing the scene captured from at least a third available camera unit, wherein the third camera unit captures the same scene as the first multi-camera unit and the second multi-camera unit.

3. The method according to claim 2, wherein the scene captured by at least a third available camera unit is a scene captured by a third multi-camera unit capturing three-dimensional images.

4. The method according to claim 2, wherein the scene captured by at least a third available camera unit is a scene captured by a camera capturing two-dimensional images.

5. The method according to any of the claims 1 to 4 comprising:

copying content captured by the second multi-camera unit to the respective location in the captured content of the first multi-camera unit.

6. The method according to any of the claims 1 to 5 comprising:

determining the location where the second multi-camera unit is visible in a content captured by the first multi-camera unit;

examining which image captured by a camera of the second multi-camera unit comprises at least a part of the view blocked by the second multi-camera unit in the content captured by the first multi-camera unit; and

replacing pixels of the determined location in the content captured by the first multi-camera unit with pixels of the image captured by the camera of the second multi-camera unit.

7. The method according to any of the claims 1 to 6 further comprising:

zooming the image captured by the camera of the second multi-camera unit before replacing the pixels.

8. The method according to any of the claims 1 to 5 comprising:

replacing an area smaller than the area covered by the second multi-camera unit with pixels of one or more images captured by the second multi-camera unit; and

replacing the remaining area covered by the second multi-camera unit by interpolating pixels of one or more images captured by the second multi-camera unit and pixels of one or more images captured by the first multi-camera unit.

9. The method according to any of the claims 1 to 8, wherein the captured content is two-dimensional, the method further comprising:

transforming the two-dimensional content to a three-dimensional scene;

making corrections and changes in the three-dimensional reconstruction; and

back-projecting the three-dimensional reconstruction to a two-dimensional representation.

10. The method according to any of the claims 1 to 9 further comprising:

obtaining depth information of the scene; and

utilizing the depth information in determining at least one of the following:

mutual distance of at least the first multi-camera unit and the second multi-camera unit;

which image captured by the second multi-camera unit comprises at least a part of the view blocked by the second multi-camera unit in the content captured by the first multi-camera unit.

11. An apparatus comprising at least one processor; and at least one memory including computer program code, the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus to perform at least the following:

receive at least two streams of images of a 360 degree view captured by at least a first multi-camera unit and a second multi-camera unit, wherein the first multi-camera unit and the second multi-camera unit capture images of the same scene and the second multi-camera unit is at least partially visible in the content captured by the first multi-camera unit;

replace the presence of the second multi-camera unit in the scene captured by the first multi-camera unit based on captured content of at least the first multi-camera unit and the second multi-camera unit;

wherein the replacing comprises utilizing information of mutual location and orientation of cameras of the first multi-camera unit and the second multi-camera unit and information of at least one of the location and orientation of at least two multi-camera units.

12. The apparatus according to claim 11, said at least one memory including computer program code configured to, with the at least one processor, cause the apparatus to: utilize the scene captured from at least a third available camera unit, the third camera unit capturing the same scene as the first multi-camera unit and the second multi-camera unit.

13. The apparatus according to claim 12, wherein the scene captured by at least a third available camera unit is a scene captured by a third multi-camera unit capturing three-dimensional images.

14. The apparatus according to claim 12, wherein the scene captured by at least a third available camera unit is a scene captured by a camera capturing two-dimensional images.

15. The apparatus according to any of the claims 11 to 14, said at least one memory including computer program code configured to, with the at least one processor, cause the apparatus to:

copy content captured by the second multi-camera unit to the respective location in the captured content of the first multi-camera unit.

16. The apparatus according to any of the claims 11 to 15, said at least one memory including computer program code configured to, with the at least one processor, cause the apparatus to:

determine the location where the second multi-camera unit is visible in a content captured by the first multi-camera unit;

examine which image captured by a camera of the second multi-camera unit comprises at least a part of the view blocked by the second multi-camera unit in the content captured by the first multi-camera unit; and

replace pixels of the determined location in the content captured by the first multi-camera unit with pixels of the image captured by the camera of the second multi-camera unit.

17. The apparatus according to any of the claims 11 to 16, said at least one memory including computer program code configured to, with the at least one processor, cause the apparatus to:

zoom the image captured by the camera of the second multi-camera unit before replacing the pixels.

18. The apparatus according to any of the claims 11 to 15, said at least one memory including computer program code configured to, with the at least one processor, cause the apparatus to:

replace an area smaller than the area covered by the second multi-camera unit with pixels of one or more images captured by the second multi-camera unit; and

replace the remaining area covered by the second multi-camera unit by interpolating pixels of one or more images captured by the second multi-camera unit and pixels of one or more images captured by the first multi-camera unit.

19. The apparatus according to any of the claims 11 to 18, wherein the captured content is two-dimensional, said at least one memory including computer program code configured to, with the at least one processor, cause the apparatus to:

transform the two-dimensional content to a three-dimensional scene;

make corrections and changes in the three-dimensional reconstruction; and

back-project the three-dimensional reconstruction to a two-dimensional representation.

20. The apparatus according to any of the claims 11 to 19, said at least one memory including computer program code configured to, with the at least one processor, cause the apparatus to:

obtain depth information of the scene; and

utilize the depth information in determining at least one of the following:

mutual distance of at least the first multi-camera unit and the second multi-camera unit;

which image captured by the second multi-camera unit comprises at least a part of the view blocked by the second multi-camera unit in the content captured by the first multi-camera unit.

21. An apparatus comprising:

means for receiving at least two streams of images of a 360 degree view captured by at least a first multi-camera unit and a second multi-camera unit, wherein the first multi-camera unit and the second multi-camera unit capture images of the same scene and the second multi-camera unit is at least partially visible in the content captured by the first multi-camera unit;

means for replacing the presence of the second multi-camera unit in the scene captured by the first multi-camera unit based on captured content of at least the first multi-camera unit and the second multi-camera unit;

wherein the replacing comprises utilizing information of mutual location and orientation of cameras of the first multi-camera unit and the second multi-camera unit and information of at least one of the location and orientation of at least two multi-camera units.

22. A computer readable storage medium stored with code thereon for use by an apparatus, which when executed by a processor, causes the apparatus to perform:

receive at least two streams of images of a 360 degree view captured by at least a first multi-camera unit and a second multi-camera unit, wherein the first multi-camera unit and the second multi-camera unit capture images of the same scene and the second multi-camera unit is at least partially visible in the content captured by the first multi-camera unit;

replace the presence of the second multi-camera unit in the scene captured by the first multi-camera unit based on captured content of at least the first multi-camera unit and the second multi-camera unit;

wherein the replacing comprises utilizing information of mutual location and orientation of cameras of the first multi-camera unit and the second multi-camera unit and information of at least one of the location and orientation of at least two multi-camera units.

23. The computer readable storage medium according to claim 22 stored with code thereon for use by the apparatus, which when executed by a processor, causes the apparatus to perform:

utilize the scene captured from at least a third available camera unit, the third camera unit capturing the same scene as the first multi-camera unit and the second multi-camera unit.

24. The computer readable storage medium according to claim 23, wherein the scene captured by at least a third available camera unit is a scene captured by a third multi-camera unit capturing three-dimensional images.

25. The computer readable storage medium according to claim 23, wherein the scene captured by at least a third available camera unit is a scene captured by a camera capturing two-dimensional images.

26. The computer readable storage medium according to any of the claims 22 to 25 stored with code thereon for use by the apparatus, which when executed by a processor, causes the apparatus to perform:

copy content captured by the second multi-camera unit to the respective location in the captured content of the first multi-camera unit.

27. The computer readable storage medium according to any of the claims 22 to 26 stored with code thereon for use by the apparatus, which when executed by a processor, causes the apparatus to perform:

determine the location where the second multi-camera unit is visible in a content captured by the first multi-camera unit;

examine which image captured by a camera of the second multi-camera unit comprises at least a part of the view blocked by the second multi-camera unit in the content captured by the first multi-camera unit; and

replace pixels of the determined location in the content captured by the first multi-camera unit with pixels of the image captured by the camera of the second multi-camera unit.

28. The computer readable storage medium according to any of the claims 22 to 27 stored with code thereon for use by the apparatus, which when executed by a processor, causes the apparatus to perform:

zoom the image captured by the camera of the second multi-camera unit before replacing the pixels.

29. The computer readable storage medium according to any of the claims 22 to 26 stored with code thereon for use by the apparatus, which when executed by a processor, causes the apparatus to perform:

replace an area smaller than the area covered by the second multi-camera unit with pixels of one or more images captured by the second multi-camera unit; and

replace the remaining area covered by the second multi-camera unit by interpolating pixels of one or more images captured by the second multi-camera unit and pixels of one or more images captured by the first multi-camera unit.

30. The computer readable storage medium according to any of the claims 22 to 29 stored with code thereon for use by the apparatus, which when executed by a processor, causes the apparatus to perform:

transform the two-dimensional content to a three-dimensional scene;

make corrections and changes in the three-dimensional reconstruction; and

back-project the three-dimensional reconstruction to a two-dimensional representation.

31. The computer readable storage medium according to any of the claims 22 to 30 stored with code thereon for use by the apparatus, which when executed by a processor, causes the apparatus to perform:

obtain depth information of the scene; and

utilize the depth information in determining at least one of the following:

mutual distance of at least the first multi-camera unit and the second multi-camera unit;

which image captured by the second multi-camera unit comprises at least a part of the view blocked by the second multi-camera unit in the content captured by the first multi-camera unit.

Description:
Method and Apparatus for a Multi-Camera Unit

TECHNICAL FIELD

[0001] The present invention relates to a method for a multi-camera unit, an apparatus for a multi-camera unit, and a computer program for a multi-camera unit.

BACKGROUND

[0002] This section is intended to provide a background or context to the invention that is recited in the claims. The description herein may include concepts that could be pursued, but are not necessarily ones that have been previously conceived or pursued. Therefore, unless otherwise indicated herein, what is described in this section is not prior art to the description and claims in this application and is not admitted to be prior art by inclusion in this section.

[0003] 360-degree viewing camera devices with multiple lenses per viewing direction are becoming more and more popular and affordable for both consumer and professional usage. Moreover, such multi-camera captured scenes can be reconstructed in three dimensions (3D) if the camera location and pose information is known. Such a reconstruction's quality and coverage may depend on the distribution of the cameras and their capture capabilities.

[0004] A multi-camera unit comprises two or more cameras capable of capturing images and/or video. The cameras may be positioned in different ways with respect to each other. For example, in a two-camera unit the cameras may be located at a short distance from each other and they may view in the same direction, so that the two-camera unit can provide a stereo view of the environment. In another example, the multi-camera unit may comprise more than two cameras which are located in an omnidirectional manner. Hence, the viewing angle of such a multi-camera unit may even be 360°. In other words, the multi-camera unit may be able to view practically all around itself.

[0005] Each camera of the multi-camera unit may produce image and/or video information, i.e. visual information. The visual information captured by the different cameras may be combined to form an output image and/or video. For that purpose an image processor may use so-called extrinsic parameters of the multi-camera unit, such as the orientation and relative position of the cameras, and possibly intrinsic parameters of the cameras, to control the image warping operations which may be needed to provide a combined image in which details captured with different cameras are properly aligned. In other words, two or more cameras may capture at least partly the same areas of the environment, and the combined image should be formed so that the same areas from images of different cameras are located at the same location.

[0006] In a scenario where a plurality of multi-camera units are used to capture a scene, considering the 360° capturing zone of each unit, one or more of the multi-camera units is in the viewing field of another. This may bring a sub-optimal viewing experience to a user, who, simply by looking around, observes the other cameras capturing the content around them.

SUMMARY

[0007] Various embodiments provide a method and apparatus for a multi-camera unit. In accordance with an embodiment, there is provided a method for a multi-camera unit. In accordance with an embodiment, areas in images captured by a first multi-camera unit that are blocked by a second multi-camera unit are modified on the basis of images captured by the blocking, second multi-camera unit.
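The core replacement idea can be reduced to a mask-based composite once the blocking unit's content has been aligned to the first unit's viewpoint. The sketch below shows only that final compositing step (the function name and toy frames are illustrative, not from the filing); the alignment itself relies on the units' mutual locations and orientations, as the claims state.

```python
import numpy as np

def replace_occluded(frame_a, patch_b, mask):
    """Return frame_a with the masked pixels (where the second unit is
    visible) replaced by the co-located pixels of patch_b, i.e. content
    from the blocking unit assumed to be already warped into frame_a's
    viewpoint."""
    out = frame_a.copy()
    out[mask] = patch_b[mask]
    return out

# Toy 4x4 single-channel frames:
frame_a = np.zeros((4, 4), dtype=np.uint8)       # view containing the occluder
patch_b = np.full((4, 4), 200, dtype=np.uint8)   # aligned replacement content
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True                            # where the occluder appears
result = replace_occluded(frame_a, patch_b, mask)
print(result[1, 1], result[0, 0])  # 200 0
```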

[0008] Various aspects of examples of the invention are provided in the detailed description.

[0009] According to a first aspect, there is provided a method comprising:

receiving at least two streams of images of a 360 degree view captured by at least a first multi-camera unit and a second multi-camera unit, wherein the first multi-camera unit and the second multi-camera unit capture images of the same scene and the second multi-camera unit is at least partially visible in the content captured by the first multi-camera unit;

replacing the presence of the second multi-camera unit in the scene captured by the first multi-camera unit based on captured content of at least the first multi-camera unit and the second multi-camera unit;

wherein the replacing comprises utilizing information of mutual location and orientation of cameras of the first multi-camera unit and the second multi-camera unit and information of at least one of the location and orientation of at least two multi-camera units.

[0010] According to a second aspect, there is provided an apparatus comprising at least one processor; and at least one memory including computer program code, the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus to perform at least the following:

receive at least two streams of images of a 360 degree view captured by at least a first multi-camera unit and a second multi-camera unit, wherein the first multi-camera unit and the second multi-camera unit capture images of the same scene and the second multi-camera unit is at least partially visible in the content captured by the first multi-camera unit;

replace the presence of the second multi-camera unit in the scene captured by the first multi-camera unit based on captured content of at least the first multi-camera unit and the second multi-camera unit;

wherein the replacing comprises utilizing information of mutual location and orientation of cameras of the first multi-camera unit and the second multi-camera unit and information of at least one of the location and orientation of at least two multi-camera units.

[0011] According to a third aspect, there is provided an apparatus comprising at least one processor; and at least one memory including computer program code, the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus to perform at least the following:

means for receiving at least two streams of images of a 360 degree view captured by at least a first multi-camera unit and a second multi-camera unit, wherein the first multi-camera unit and the second multi-camera unit capture images of the same scene and the second multi-camera unit is at least partially visible in the content captured by the first multi-camera unit;

means for replacing the presence of the second multi-camera unit in the scene captured by the first multi-camera unit based on captured content of at least the first multi-camera unit and the second multi-camera unit;

wherein the replacing comprises utilizing information of mutual location and orientation of cameras of the first multi-camera unit and the second multi-camera unit and information of at least one of the location and orientation of at least two multi-camera units.

[0012] According to a fourth aspect, there is provided a computer readable storage medium stored with code thereon for use by an apparatus, which when executed by a processor, causes the apparatus to perform:

receive at least two streams of images of a 360 degree view captured by at least a first multi-camera unit and a second multi-camera unit, wherein the first multi-camera unit and the second multi-camera unit capture images of the same scene and the second multi-camera unit is at least partially visible in the content captured by the first multi-camera unit;

replace the presence of the second multi-camera unit in the scene captured by the first multi-camera unit based on captured content of at least the first multi-camera unit and the second multi-camera unit;

wherein the replacing comprises utilizing information of mutual location and orientation of cameras of the first multi-camera unit and the second multi-camera unit and information of at least one of the location and orientation of at least two multi-camera units.

BRIEF DESCRIPTION OF THE DRAWINGS

[0013] For a more complete understanding of example embodiments of the present invention, reference is now made to the following descriptions taken in connection with the accompanying drawings in which:

[0014] Figure 1a shows an example of a multi-camera unit as a simplified block diagram, in accordance with an embodiment;

[0015] Figure 1b shows a perspective view of a multi-camera unit, in accordance with an embodiment;

[0016] Figure 2a illustrates an example in which a first multi-camera unit captures a scene where a part of a view of the first multi-camera unit is blocked by a second multi-camera unit, in accordance with an embodiment;

[0017] Figure 2b illustrates an example in which a first multi-camera unit captures a scene where a part of a view of the first multi-camera unit is blocked by at least a second multi-camera unit and in which also a third multi-camera unit is viewing the same scene, in accordance with an embodiment;

[0018] Figure 2c illustrates an example in which a first multi-camera unit captures a scene where a part of a view of the first multi-camera unit is blocked by at least a second multi-camera unit and in which also a third camera unit is viewing the same scene, in accordance with an embodiment;

[0019] Figure 3a illustrates an example of an image captured by the first multi-camera unit of the setup of Figure 2a in which the second multi-camera unit is visible, in accordance with an embodiment;

[0020] Figure 3b illustrates the image of Figure 3a modified so that the area where the second multi-camera unit was visible is replaced with image information based on at least an image captured by the second multi-camera unit, in accordance with an embodiment;

[0021] Figure 4a illustrates another example of an image captured by the first multi-camera unit of the setup of Figure 2a in which the second multi-camera unit is visible, in accordance with an embodiment;

[0022] Figure 4b illustrates the image of Figure 4a modified so that the area where the second multi-camera unit was visible is partly replaced with image information based on an image captured by the second multi-camera unit and partly with image information based on images captured by both the first multi-camera unit and the second multi-camera unit, in accordance with an embodiment;

[0023] Figures 5a-5c show yet another example of eliminating a second multi-camera unit from an image captured by the first multi-camera unit, in accordance with an embodiment;

[0024] Figure 6 shows a flowchart of a method of correcting captured images, in accordance with an embodiment;

[0025] Figure 7 shows a flowchart of a method of a three-dimensional reconstruction from multiple views of a first multi-camera unit, in accordance with another embodiment;

[0026] Figure 8 shows a simplified block diagram of a system comprising a plurality of multi-camera units, in accordance with an embodiment;

[0027] Figure 9 shows a schematic block diagram of an exemplary apparatus or electronic device;

[0028] Figure 10 shows an apparatus according to an example embodiment;

[0029] Figure 11 shows an example of an arrangement for wireless communication comprising a plurality of apparatuses, networks and network elements.

DETAILED DESCRIPTION OF SOME EXAMPLE EMBODIMENTS

[0030] The following embodiments are exemplary. Although the specification may refer to "an", "one", or "some" embodiment(s) in several locations, this does not necessarily mean that each such reference is to the same embodiment(s), or that the feature only applies to a single embodiment. Single features of different embodiments may also be combined to provide other embodiments.

[0031] Figure 1a illustrates an example of a multi-camera unit 100, which comprises two or more cameras 102. In this example the number of cameras 102 is eight, but it may also be less or more than eight. Each camera 102 is located at a different location in the multi-camera unit and may have a different orientation with respect to the other cameras 102. As an example, the cameras 102 may have an omnidirectional constellation, so that the unit has a 360° viewing angle in 3D space. In other words, such a multi-camera unit 100 may be able to see in every direction of a scene, so that each spot of the scene around the multi-camera unit 100 can be viewed by at least one camera 102.

[0032] Without losing generality, any two cameras 102 of the multi-camera unit 100 may be regarded as a pair of cameras 102. Hence, a multi-camera unit of two cameras has only one pair of cameras, a multi-camera unit of three cameras has three pairs of cameras, a multi-camera unit of four cameras has six pairs of cameras, etc. Generally, a multi-camera unit 100 comprising N cameras 102, where N is an integer greater than one, has N(N-1)/2 pairs of cameras 102. Accordingly, images captured by the cameras 102 at a certain time may be considered as N(N-1)/2 pairs of captured images.
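The pair count above is the binomial coefficient "N choose 2"; a quick check in plain Python (illustrative, not part of the filing):

```python
def camera_pairs(n: int) -> int:
    """Number of distinct camera pairs in an n-camera unit: n choose 2."""
    return n * (n - 1) // 2

# The counts quoted in the text:
assert camera_pairs(2) == 1
assert camera_pairs(3) == 3
assert camera_pairs(4) == 6
# The eight-camera unit of Figure 1a yields 28 image pairs per capture instant:
print(camera_pairs(8))  # 28
```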

[0033] The multi-camera unit 100 of Figure 1a may also comprise a processor 104 for controlling the operations of the multi-camera unit 100. There may also be a memory 106 for storing data and computer code to be executed by the processor 104, and a transceiver 108 for communicating with, for example, a communication network and/or other devices in a wireless and/or wired manner. The multi-camera unit 100 may further comprise a user interface (UI) 110 for displaying information to the user, for generating audible signals and/or for receiving user input. However, the multi-camera unit 100 need not comprise each feature mentioned above, or may comprise other features as well. For example, there may be electric and/or mechanical elements for adjusting and/or controlling the optics of the cameras 102 (not shown).

[0034] The multi-camera unit 100 of Figure 1a may also comprise devices 128 to calculate ranging information, i.e. the depth of the scene. Such sensors enable the device to calculate the respective depth information of the scene content as seen from the multi-camera unit. Such information results in a depth map and may be used in the subsequent processes described in this application.

[0035] A depth map image may be considered to represent the values related to the distance of the surfaces of the scene objects from a reference location, for example a view point of an observer. A depth map image is an image that may include per-pixel depth information or any similar information. For example, each sample in a depth map image represents the distance of the respective texture sample or samples from the plane on which the camera lies. In other words, if the z axis is along the shooting axis of the cameras (and hence orthogonal to the plane on which the cameras lie), a sample in a depth map image represents the value on the z axis.

[0036] Since depth map images are generated containing a depth value for each pixel in the image, they can be depicted as gray-level images or images containing only the luma component. Alternatively, chroma components of the depth map images may be set to a pre-defined value, such as a value indicating no chromaticity, e.g. 128 in typical 8-bit chroma sample arrays, where a zero chromaticity level is arranged into the middle of the value range. Alternatively, chroma components of depth map images may be used to contain other picture data, such as any type of monochrome auxiliary picture, for example alpha planes.

[0037] In the cases where a multi-camera unit (a multi-camera device) is in use, another approach to represent the depth values of different views in the stereoscopic or multiview case is to report the disparity between pixels of each view to the adjacent view instead of the actual depth values. The following equation shows how depth values are converted to disparity:

D = f * l * ( (d / (2^N - 1)) * (1/Znear - 1/Zfar) + 1/Zfar )

where:

D = disparity value

f = focal length of the capturing camera

l = translational difference (baseline) between cameras

d = depth map value

N = number of bits representing the depth map values

Znear and Zfar are the distances of the closest and farthest objects in the scene to the camera, respectively (mostly available from the content provider).
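A common form of this depth-to-disparity conversion, consistent with the quantities defined above, can be sketched in Python (the function name and sample parameter values are illustrative, not from the filing):

```python
def depth_to_disparity(d, f, l, n_bits, z_near, z_far):
    """Convert a quantized depth-map sample d (0 .. 2**n_bits - 1) to a
    disparity value: recover the inverse real-world depth 1/Z by linear
    de-quantization between 1/z_far and 1/z_near, then scale by the
    focal length f times the camera baseline l."""
    inv_z = (d / (2 ** n_bits - 1)) * (1.0 / z_near - 1.0 / z_far) + 1.0 / z_far
    return f * l * inv_z

# Endpoints: d = 0 corresponds to the farthest plane (disparity f*l/z_far),
# d = 2**n_bits - 1 to the nearest plane (disparity f*l/z_near).
print(depth_to_disparity(0, 800.0, 0.1, 8, 1.0, 100.0))
print(depth_to_disparity(255, 800.0, 0.1, 8, 1.0, 100.0))
```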

[0038] The semantics of depth map values may for example include the following:

- Each luma sample value in a coded depth view component represents an inverse of real-world distance (Z) value, i.e. 1/Z, normalized in the dynamic range of the luma samples, such as to the range of 0 to 255, inclusive, for 8-bit luma representation. The normalization may be done in a manner where the quantization of 1/Z is uniform in terms of disparity.
- Each luma sample value in a coded depth view component represents an inverse of real-world distance (Z) value, i.e. 1/Z, which is mapped to the dynamic range of the luma samples, such as to the range of 0 to 255, inclusive, for 8-bit luma representation, using a mapping function f(1/Z) or table, such as a piece-wise linear mapping. In other words, depth map values result from applying the function f(1/Z).
- Each luma sample value in a coded depth view component represents a real-world distance (Z) value normalized in the dynamic range of the luma samples, such as to the range of 0 to 255, inclusive, for 8-bit luma representation.
- Each luma sample value in a coded depth view component represents a disparity or parallax value from the present depth view to another indicated or derived depth view or view position.
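The first of these semantics (1/Z normalized so that the quantization is uniform in terms of disparity) can be sketched as follows; the helper name is illustrative:

```python
def z_to_luma(z, z_near, z_far, n_bits=8):
    """Quantize a real-world distance Z into a luma sample such that the
    quantization of 1/Z is uniform (uniform in terms of disparity), per
    the first semantic above. Returns an integer in 0 .. 2**n_bits - 1;
    the nearest object maps to the largest luma value."""
    max_v = (1 << n_bits) - 1
    # Normalize 1/Z linearly between 1/z_far (-> 0) and 1/z_near (-> max_v).
    t = (1.0 / z - 1.0 / z_far) / (1.0 / z_near - 1.0 / z_far)
    return round(t * max_v)
```

Under this mapping an object at Znear receives the maximum luma value and an object at Zfar receives zero.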

[0039] Figure 1a also illustrates some operational elements which may be implemented, for example, as computer code in the software of the processor, in hardware, or both. An occlusion determination element 114 may determine which areas of a panorama image are blocked (occluded) by other multi-camera unit(s); a 2D to 3D converting element 116 may convert 2D images to 3D images and vice versa; and an image reconstruction element 118 may reconstruct images so that occluded areas are reconstructed using image information of the blocking multi-camera unit 100. In accordance with an embodiment, the multi-camera units 100 comprise a location determination unit 124 and an orientation determination unit 126, wherein these units may provide the location and orientation information to the system. The location determination unit 124 and the orientation determination unit 126 may also be implemented as one unit. The operation of the elements will be described later in more detail. It should be noted that there may also be other operational elements in the multi-camera unit 100 than those depicted in Figure 1a and/or some of the above mentioned elements may be implemented in some other part of a system than the multi-camera unit 100.

[0040] Figure 1b shows as a perspective view an example of an apparatus comprising the multi-camera unit 100. In Figure 1b seven cameras 102a—102g can be seen, but the multi-camera unit 100 may comprise even more cameras which are not visible from this perspective. Figure 1b also shows two microphones 112a, 112b, but the apparatus may also comprise only one microphone or more than two microphones.

[0041] In accordance with an embodiment, the multi-camera unit 100 may be controlled by another device (not shown), wherein the multi-camera unit 100 and the other device may communicate with each other and a user may use a user interface of the other device for entering commands, parameters, etc. and the user may be provided information from the multi-camera unit 100 via the user interface of the other device.

[0042] Some terminology regarding the multi-camera unit 100 will now be briefly described. A camera space, or camera coordinates, stands for a coordinate system of an individual camera 102, whereas a world space, or world coordinates, stands for a coordinate system of the multi-camera unit 100 as a whole. An optical flow may be used to describe how objects, surfaces, and edges in a visual scene move or transform when an observing point moves from the location of one camera to the location of another camera. In fact, there need not be any actual movement; it may be determined virtually how the view of the scene might change when a viewing point is moved from one camera to another camera. A parallax can be regarded as a displacement or difference in the apparent position of an object when it is viewed along two different lines of sight. The parallax may be measured by the angle or semi-angle of inclination between those two lines.

[0043] Intrinsic parameters 120 may comprise, for example, focal length, image sensor format, and principal point. Extrinsic parameters 122 denote the coordinate system transformations from 3D world space to 3D camera space. Equivalently, the extrinsic parameters may be used to define the position of a camera center and camera's heading in world space.
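As a brief illustration of how the intrinsic and extrinsic parameters work together, the following sketch projects a world-space point into pixel coordinates with the standard pinhole camera model; the matrices and numeric values are illustrative, not taken from the document:

```python
import numpy as np

def project_point(world_point, K, R, t):
    """Project a 3D world-space point to pixel coordinates: the extrinsic
    parameters (R, t) transform world space to camera space, and the
    intrinsic matrix K (focal length, principal point) maps camera space
    to the image plane."""
    p_cam = R @ world_point + t      # world space -> camera space
    uvw = K @ p_cam                  # camera space -> homogeneous pixel coords
    return uvw[:2] / uvw[2]          # perspective divide

# Illustrative parameters: focal length 500 px, principal point (320, 240),
# camera at the world origin looking along +z.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.zeros(3)
uv = project_point(np.array([0.0, 0.0, 2.0]), K, R, t)  # point on the optical axis
```

A point on the optical axis projects to the principal point (320, 240) regardless of its distance, which is a quick sanity check for the parameter conventions.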

[0044] Figure 8 is a simplified block diagram of a system 800 comprising a plurality of multi-camera units 130, 140, 150. It should be noted here that the different multi-camera units are referred to with different reference numbers for clarity, although each multi-camera unit 130, 140, 150 may have similar elements to the multi-camera unit 100 of Figure 1a. Furthermore, the individual cameras of each multi-camera unit 130, 140, 150 will be referred to by different reference numerals 132, 132a—132g, 142, 142a—142g, 152, 152a—152g, although each camera may be similar to the cameras 102a—102g of the multi-camera unit 100 of Figure 1a. The reference numerals 132, 142, 152 will be used when any of the cameras of the multi-camera unit 130, the multi-camera unit 140, and the multi-camera unit 150 is referred to, respectively. Correspondingly, reference numerals 132a—132g, 142a—142g, 152a—152g will be used when a particular camera of the multi-camera unit 130, the multi-camera unit 140, and the multi-camera unit 150 is referred to, respectively. Although Figure 8 only depicts three multi-camera units 130, 140, 150, the system may have two multi-camera units 130, 140 or more than three multi-camera units. It is assumed that the system 800 has information about the location and orientation of each of the multi-camera units 130, 140, 150 of the system. The location and orientation information may have been stored into a camera database 810. This information may have been entered manually or the system 800 may comprise elements which can determine the location and orientation of each of the multi-camera units 130, 140, 150 of the system. If the location and/or the orientation of any of the multi-camera units 130, 140, 150 changes, the changed location and/or orientation information may be updated in the camera database 810. The system 800 may be controlled by a controller 802, which may be a server or another appropriate element capable of communicating with the multi-camera units 130, 140, 150 and the camera database 810.

[0045] In accordance with an embodiment, the location and/or the orientation of the multi-camera units 130, 140, 150 may not be stored into the database 810 but only into each individual multi-camera unit 130, 140, 150. Hence, the location and/or the orientation of the multi-camera units 130, 140, 150 may be requested from the multi-camera units 130, 140, 150 when needed. As an example, if the first multi-camera unit 130 needs to know the location and orientation of the second multi-camera unit 140, the first multi-camera unit 130 may request that information from the second multi-camera unit 140. If some information regarding the second multi-camera unit 140 is still needed, the first multi-camera unit 130 may request the missing information from the controller 802, for example.

[0046] In the following, an example method of correcting captured images in a system 800 comprising a plurality of multi-camera units 130, 140, 150 will be described in more detail with reference to the flow diagram of Figure 6. Figure 2a illustrates an example in which a first multi-camera unit 130 captures a scene where a part of a view of the first multi-camera unit 130 is blocked by a second multi-camera unit 140. In this illustration, one or more of the cameras 132 of the first multi-camera unit 130 have a view of the scene 204 in which the second multi-camera unit 140 appears.

[0047] Each multi-camera unit 130, 140 knows the intrinsic/extrinsic parameters of the cameras 132, 142 mounted in the multi-camera unit 130, 140 (block 602 in Figure 6). These cameras 132, 142 will also be called internal cameras 132, 142 in this specification. The mutual location and orientation of the internal cameras 132, 142 of the same multi-camera unit 100 will normally remain the same. These parameters and the orientation information of the multi-camera unit 100 may be used to determine the views of the individual cameras 132a—132g, 142a—142g of the multi-camera unit 130, 140 (block 604).

[0048] The first multi-camera unit 130 may obtain information of other multi-camera units of the system from the camera database 810. Also the second multi-camera unit 140 and possible other multi-camera units 100 may obtain information of other multi- camera units of the system from the camera database 810.

[0049] The first multi-camera unit 130 may obtain information of the location of the other multi-camera units 140, 150 e.g. from the camera database 810 (block 606) and use this information to determine in which directions, with respect to the first multi-camera unit 130, there are other multi-camera units 100. On the basis of the location information, the first multi-camera unit 130 may then determine the locations in the views of the cameras 132 of the first multi-camera unit 130 which are blocked by another multi-camera unit 100. Such areas may also be called occluded areas.
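The direction test described above might be sketched as follows, under the simplifying assumptions of a conical field of view and a unit-length viewing direction; the function name and parameters are illustrative:

```python
import math

def camera_sees_unit(cam_position, cam_direction, half_fov_deg, other_position):
    """Return True if another multi-camera unit lies within a camera's
    field of view. cam_direction is assumed to be a unit vector derived
    from the camera's extrinsic parameters; positions come from a camera
    database. This is a simplified angular test against a conical field
    of view; a real system would intersect the other unit's physical
    extent with the full view frustum."""
    to_other = [o - c for o, c in zip(other_position, cam_position)]
    dist = math.sqrt(sum(x * x for x in to_other))
    # Angle between the viewing direction and the direction to the other unit.
    cos_angle = sum(a * b for a, b in zip(cam_direction, to_other)) / dist
    angle_deg = math.degrees(math.acos(max(-1.0, min(1.0, cos_angle))))
    return angle_deg <= half_fov_deg
```

A unit straight ahead is reported as visible; a unit perpendicular to the viewing direction is not.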

[0050] In some embodiments, the location of at least one of the multi-camera units changes during the capturing process. Such changes may be communicated between the multi-camera units so that each unit can keep track of the location and direction information of the other multi-camera units and have that information available at all times.

[0051] For example, the first multi-camera unit 130 may determine the scene (view) a first camera 132a of the first multi-camera unit 130 sees and compare it to the location of the second multi-camera unit 140 (block 608). If the comparison reveals that the second multi-camera unit 140 is within the scene of the first camera 132a (i.e. a picture of the second multi-camera unit 140 will appear in an image captured by the first camera 132a), the first multi-camera unit 130 may decide to perform reconstruction of the occluded area in the view of the first camera 132a (blocks 610, 612, 614 in Figure 6). Correspondingly, the first multi-camera unit 130 may perform a similar determination for the other cameras 132b—132g of the first multi-camera unit 130, and if the second multi-camera unit 140 or another multi-camera unit 100 is within a view of any of the other cameras 132b—132g, reconstruction of occluded areas of the views may be performed. The reconstruction of occluded areas of the views also means that the blocking multi-camera unit 140 will not be visible in the reconstructed image, i.e. the picture of the blocking multi-camera unit 140 will also be removed from the image.

[0052] Similar analyses may be performed by other multi-camera units 100 of the system 800 (block 616).

[0053] Figure 2a illustrates an example where an occluded area 208 (depicted as a cross-hatched area) is due to the second multi-camera unit 140. The reference numeral 206 illustrates the area of the scene 204 which is not visible to the first multi-camera unit 130.

[0054] The first multi-camera unit 130 may also use intrinsic camera parameters of the first multi-camera unit 130 to determine the individual cameras 132 which are viewing at least partly towards the second multi-camera unit 140 and have at least partly blocked view. This information may be used in the reconstruction process where the occluded area is constructed from image information from the blocking multi-camera unit 100, which is the second multi-camera unit 140 in the example of Figure 2a. In the following example embodiment the second multi-camera unit 140 will be used as an example of the blocking multi-camera unit 100.

[0055] Removal of the blocking effect of a multi-camera unit 140 may be performed in different ways. In accordance with an embodiment, the first multi-camera unit 130 may capture one or more images by the cameras 132 of the first multi-camera unit 130. These images may be in a two-dimensional (2D) format. The occlusion determination element 114 may determine locations in the images in which another multi-camera unit 140, 150 is visible. Then, the occlusion determination element 114 may determine which multi-camera unit is in the image. Such determination is performed based on the awareness of each multi-camera device regarding the location of the other multi-camera devices. The first multi-camera unit 130 may then try to obtain images from at least the second multi-camera unit 140 so that the images are captured by those cameras 142 of the second multi-camera unit 140 which are viewing the occluded scene 206. The images are received by the first multi-camera unit 130, in which this image information may be used to reconstruct the occluded area(s).

[0056] In accordance with an embodiment, it may be determined that the same scene is also captured by a third multi-camera unit 150 and/or yet another multi-camera unit. Therefore, the first multi-camera unit 130 may decide to use image information captured from more than one other multi-camera unit 140, 150 to render the occluded area 206. An example of this is depicted in Figure 2b.

[0057] In accordance with an embodiment, it may be determined that the same scene may also be captured by a camera which is not a multi-camera unit but a camera only capable of capturing two-dimensional images. Therefore, the first multi-camera unit 130 may decide to use image information captured by and received from the second multi-camera unit 140 and the third, two-dimensional image capturing camera 160 to render the occluded area 206. An example of this is depicted in Figure 2c. In other words, such a two-dimensional image capturing camera 160 may be used instead of or in addition to one or more of the possible third, fourth etc. multi-camera units.

[0058] In accordance with an embodiment, the image reconstruction element 118 replaces pixels within the occluded area in the original image with pixels from the images captured by the second multi-camera unit 140 corresponding to the occluded area. This is illustrated in Figures 3a and 3b. Figure 3a illustrates an example of an image captured by the first multi-camera unit 130 of the setup of Figure 2a; and Figure 3b illustrates the image of Figure 3a modified so that the area where the second multi-camera unit 140 was visible is replaced with image information from the second multi-camera unit 140, in accordance with an embodiment. The cross-hatched area 304 in Figure 3b illustrates the reconstructed area.

[0059] The replacement may utilize the images captured by at least the second multi- camera unit 140 directly to replace the occluded parts of the image from the first multi- camera unit 130. In another embodiment, the images from the second multi-camera unit 140 are upsampled prior to filling the occluded parts of the images from the first multi-camera unit 130.
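A minimal sketch of the direct pixel replacement described above, assuming the second unit's content has already been warped or upsampled into the first camera's image grid; images are represented as nested lists purely for illustration:

```python
def fill_occluded(image, occlusion_mask, replacement):
    """Replace occluded pixels of `image` with the corresponding pixels
    of `replacement` (content obtained from the second multi-camera
    unit, assumed already aligned and resampled to the same resolution).
    All three inputs are 2D row-major grids of the same shape; the mask
    holds booleans marking the occluded area."""
    return [[rep if occ else pix
             for pix, occ, rep in zip(img_row, mask_row, rep_row)]
            for img_row, mask_row, rep_row in zip(image, occlusion_mask, replacement)]
```

Only pixels flagged in the mask change; the non-occluded content of the first unit's image is left untouched.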

[0060] In accordance with another embodiment, the image reconstruction element 118 uses in the reconstruction process pixels both from the original image captured by the first multi-camera unit 130 and from the images captured by the second multi-camera unit 140 corresponding to the occluded area. The second multi-camera unit 140 is closer to the captured scene 204 than the first multi-camera unit 130 and hence its content covers fewer pixels than the corresponding view captured from the first multi-camera unit 130. The pixels in the area 408, which covers fewer pixels than the view captured from the first multi-camera unit 130, are replaced with pixels from the images captured by the second multi-camera unit 140, and the gap 406 between the smaller area 408 and the non-occluded area 410 may be filled by interpolating the pixels between the smaller area 408 and the non-occluded area 410. This is illustrated in Figures 4a and 4b. Figure 4a illustrates an image captured by a camera 132 of the first multi-camera unit 130, and Figure 4b illustrates the image of Figure 4a modified so that the area where the second multi-camera unit 140 is visible is partly replaced with image information based on an image captured by one or more cameras 142 of the second multi-camera unit 140 and partly with image information based on images captured by both the first multi-camera unit 130 and the second multi-camera unit 140, in accordance with an embodiment.
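The interpolation between the replaced smaller area 408 and the non-occluded area 410 can be sketched in one dimension as a linear blend across the gap 406; a real implementation would interpolate in two dimensions:

```python
def fill_gap(row, gap_start, gap_end):
    """Fill row[gap_start:gap_end] (the gap 406) by linearly blending
    the last pixel of the replaced inner area 408 (row[gap_start - 1])
    and the first pixel of the non-occluded area 410 (row[gap_end]).
    A one-dimensional simplification of the interpolation above."""
    left, right = row[gap_start - 1], row[gap_end]
    out = list(row)
    n = gap_end - gap_start
    for i in range(n):
        w = (i + 1) / (n + 1)          # blend weight grows across the gap
        out[gap_start + i] = (1 - w) * left + w * right
    return out
```

For a row whose boundary values are 10 and 50 and a three-pixel gap, the gap is filled with 20, 30 and 40.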

[0061] In accordance with yet another embodiment, the information from the second multi-camera unit 140 may be obtained based on the information of more than one camera 142 of the second multi-camera unit 140. In such a scenario, the content from more than one camera may be stitched to create the best presentation from the viewing direction of the camera 132 of the first multi-camera unit 130.

[0062] In accordance with yet another embodiment, the image reconstruction element 118 uses in the reconstruction process pixels of the images captured by the second multi-camera unit 140 corresponding to the occluded area so that a zooming operation is performed, whereby a zoomed out version of the second multi-camera unit's 140 content is presented in the first multi-camera unit's 130 view to compensate for the physical distance between the multi-camera units 130, 140. The zooming out may be relative to the distance/orientation difference between the first and second multi-camera units 130, 140.
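A toy sketch of the distance-relative zoom-out, under a pinhole assumption (apparent size scales with 1/Z) and ignoring orientation differences and lens distortion; both helper names are illustrative:

```python
def zoom_factor(dist_first_to_scene, dist_second_to_scene):
    """Scale factor for pasting the second unit's content into the first
    unit's view: under a pinhole model apparent size is proportional to
    1/Z, so content captured closer to the scene must be zoomed out by
    the ratio of the two camera-to-scene distances."""
    return dist_second_to_scene / dist_first_to_scene

def zoom_out_row(row, factor):
    """Nearest-neighbour resampling of one pixel row by factor < 1, as a
    crude stand-in for the zoom-out operation."""
    n = max(1, int(len(row) * factor))
    return [row[min(len(row) - 1, int(i / factor))] for i in range(n)]
```

If the second unit is at half the first unit's distance to the scene, its content is shrunk to half size before being pasted into the occluded area.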

[0063] It may happen that the second multi-camera unit 140 is blocking the views of more than one camera 132 of the first multi-camera unit 130. Similar operations may be performed for images from each such camera 132 of the first multi-camera unit 130.

[0064] It may also be possible that more than one multi-camera unit 100 is blocking a view of a camera 132 of the first multi-camera unit 130. Hence, location and/or orientation information and images from each such multi-camera unit 100 may be obtained and utilized in the reconstruction process.

[0065] Taking into account the viewing direction, the distance between cameras, and the camera intrinsic/extrinsic parameters, the blocked part of each internal camera on a first multi-camera unit 130 may be covered by the captured content from one or more internal cameras of the second multi-camera unit 140.

[0066] The reconstruction operation can be performed by only using two-dimensional features, or by first transforming two-dimensional images to a three-dimensional scene, making corrections and changes in the three-dimensional reconstruction and then back-projecting to a two-dimensional representation for each occluded camera 132.

[0067] The reconstruction operation can also be performed taking into account the ranging information (depth information) of the scene. Such depth information may be utilized to synthesize a view from the available images of the second multi-camera unit 140 to be well aligned with the viewing direction of the camera 132 from the first multi-camera unit 130.
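A minimal one-row depth-image-based rendering (forward warping) sketch of how depth information can be used to align the second unit's image with the first camera's viewing direction; a real DIBR implementation also resolves occlusion conflicts and fills holes:

```python
def forward_warp_row(row, depth_row, f, baseline):
    """Synthesize one row of a horizontally shifted view from a texture
    row and its per-pixel depth, shifting each pixel by its disparity
    D = f * baseline / Z. Pixels warped outside the row are dropped and
    unwritten positions remain zero (holes), which is why real DIBR
    needs hole filling."""
    out = [0.0] * len(row)
    for x, (pix, z) in enumerate(zip(row, depth_row)):
        d = round(f * baseline / z)
        if 0 <= x + d < len(row):
            out[x + d] = pix
    return out
```

With constant depth the whole row simply shifts by one disparity value, leaving a hole at the edge.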

[0068] In yet another embodiment, the occluded area of the views from the camera 132 of the first multi-camera unit 130 may be reconstructed not only from the cameras of the first and second multi-camera units 130, 140, but also from the content captured by cameras of a third multi-camera unit 150. In such a scenario, a rendering algorithm may be used to render the required view to replace the occluded area based on the image information of the first, second and third multi-camera units. The depth information may also be used to render the views between the images from different multi-camera units taking into account any depth image based rendering (DIBR) algorithms.

[0069] It should be noted that the rendering is not limited to utilizing a limited number of cameras from a limited number of multi-camera units. Such rendering may utilize the information of one or more cameras from the same multi-camera unit, or several cameras from more than one multi-camera unit. The selection of cameras and multi-camera units depends on the viewing direction of the camera 132 of the first multi-camera unit, the location and direction of the occluding multi-camera unit 140, and the location and direction of cameras from other available multi-camera units.

[0070] Moreover, a volumetric three-dimensional scene representation generated by multiple multi-camera units 100 may be used to identify the occluding multi-camera units 100 and possible connected peripherals (tripods, cables, etc.). The information from the blocking multi-camera unit 100 can be utilized to fill in the occluded volume. The voxels related to the blocking multi-camera unit 100 may then be erased from the three-dimensional scene volumetric model. A final back projection operation to the occluded camera 102 of the multi-camera unit 100 provides the viewport without the occluding multi-camera unit 100. Hence, the blocking multi-camera unit 100 is removed from the scene after its lens views are utilized to fill in the occluded regions of the blocked multi-camera unit 100.

[0071] In another embodiment, illustrated in Figures 5a—5c and in the flow diagram of Figure 7, a three-dimensional reconstruction from multiple views of a first multi-camera unit 130 is generated.

[0072] The following algorithm may be run to find the blocking multi-camera unit 140 and remove it from the scene. The characteristics of the individual multi-camera units and the locations of the multi-camera units 130, 140 may be known (block 702).

[0073] A three-dimensional reconstruction from the multiple multi-camera units 130, 140 is generated (block 704 in Figure 7), resulting in three-dimensional geometry comprising modeling primitives such as points, polygons, or voxels. In the following description, only voxels will be used, but any of the other three-dimensional primitives can also be used.

[0074] The content required to perform the three-dimensional reconstruction process is not limited to the images captured by the first multi-camera unit 130 and the second multi-camera unit 140. The above described reconstruction process may also be performed based on more than two multi-camera units. As illustrated in Figure 2b, the content captured from a third multi-camera unit 150 or other available multi-camera units may be used in the said reconstruction process in addition to or instead of the content captured by the second multi-camera unit 140. For example, content captured by both the second multi-camera unit 140 and the third multi-camera unit 150 may be combined to fill the occluded area of the first multi-camera unit 130.

[0075] The selection of multi-camera units to be used in the said reconstruction process depends on the location and orientation of the camera 132 in the first multi-camera unit 130 (defining the viewing direction of the camera 132 of the first multi-camera unit 130) and also the location and orientation of any other cameras in the available multi-camera units.

[0076] The location of the blocking multi-camera unit 140 may be determined (block 706) by making use of the camera pose and initial camera registration data. It is assumed here that each multi-camera unit 130, 140, 150 knows its substantially exact position in space, and these positions are communicated between all available multi-camera units 130, 140, 150.

[0077] When the blocking multi-camera unit 140 has been determined, it is selected (block 708) and the voxel elements connected with it may be extended (block 710) until a ground plane (illustrated with 506 in Figure 5c) or a connected non-camera related peripheral is reached (block 712). An example of a camera-related peripheral is a tripod 502 on the ground plane 506 on which the blocking multi-camera unit 140 may be positioned. Another example of a camera-related peripheral is a rod on another concrete surface (not shown) to which the blocking multi-camera unit 140 may be attached. The term non-camera related peripheral means an object which does not belong to the multi-camera unit's setup but rather may be a part of the scene to be imaged.
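Blocks 708-712 (selecting the blocking unit's voxels and extending the selection until the ground plane) resemble a flood fill over the occupied-voxel set; the following is an illustrative sketch using integer voxel coordinates, not a description of the patented algorithm's exact data structures:

```python
from collections import deque

def select_blocking_volume(occupied, seed, ground_z):
    """Starting from a voxel known to belong to the blocking multi-camera
    unit, collect all 6-connected occupied voxels (the unit plus attached
    camera-related peripherals such as a tripod), stopping at the ground
    plane z == ground_z. `occupied` is a set of (x, y, z) integer voxel
    coordinates; the returned set is the volume to delete."""
    selected, queue = set(), deque([seed])
    while queue:
        v = queue.popleft()
        # Skip already-selected, empty, or ground-plane voxels.
        if v in selected or v not in occupied or v[2] <= ground_z:
            continue
        selected.add(v)
        x, y, z = v
        for n in ((x + 1, y, z), (x - 1, y, z), (x, y + 1, z),
                  (x, y - 1, z), (x, y, z + 1), (x, y, z - 1)):
            queue.append(n)
    return selected
```

Disconnected scene geometry is never reached by the flood fill, so only the blocking unit and its attached peripherals end up in the selected volume.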

[0078] The selected volume may be deleted from the three-dimensional scene (block 714).

[0079] The regions removed from the non-peripheral areas of the scene (e.g. holes in the ground plane when the tripod 502 is removed) may be inpainted (block 716).
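The inpainting of block 716 can be sketched as a simple diffusion-style fill that repeatedly replaces each hole pixel with the mean of its neighbours; this is a minimal illustration, not a production inpainting algorithm:

```python
def inpaint_holes(img, holes, iterations=100):
    """Fill the pixels listed in `holes` (a set of (y, x) coordinates,
    e.g. the hole left in the ground plane after the tripod voxels are
    deleted) by repeatedly replacing each hole pixel with the mean of
    its in-bounds 4-neighbours, while known pixels stay fixed. Repeated
    iterations diffuse the surrounding values into larger holes."""
    h, w = len(img), len(img[0])
    out = [list(row) for row in img]
    for (y, x) in holes:
        out[y][x] = 0.0                  # start holes from a neutral value
    for _ in range(iterations):
        for (y, x) in holes:
            neigh = [out[ny][nx]
                     for ny, nx in ((y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1))
                     if 0 <= ny < h and 0 <= nx < w]
            out[y][x] = sum(neigh) / len(neigh)
    return out
```

For a single hole pixel surrounded by a uniform ground plane the fill converges to the surrounding value after the first iteration.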

[0080] The volume may be projected back to the blocked multi-camera unit's 140 lenses to obtain a two-dimensional back-projection (block 718).

[0081] A panoramic video frame may be created where the picture of the blocking multi-camera unit 140 is removed (block 720), and the created panoramic video (block 722) may be stored and/or provided for further processing.

[0082] The three-dimensional reconstruction step can be performed by applying multi- view geometry and photogrammetry techniques, for example.

[0083] In accordance with an embodiment, the inpainting may be performed in a two-dimensional scene instead of the above mentioned three-dimensional scene. In this option, the inpainting may be performed on the two-dimensional back-projection.

[0084] Images processed by the system 800 and/or the multi-camera units 100 may be still images, a stream of still images, images of a video, etc.

[0085] The following describes in further detail suitable apparatus and possible mechanisms for implementing the embodiments of the invention. In this regard reference is first made to Figure 9 which shows a schematic block diagram of an exemplary apparatus or electronic device 50 depicted in Figure 10, which may incorporate a transmitter according to an embodiment of the invention.

[0086] The electronic device 50 may for example be a mobile terminal or user equipment of a wireless communication system. However, it would be appreciated that embodiments of the invention may be implemented within any electronic device or apparatus which may require transmission of radio frequency signals.

[0087] The apparatus 50 may comprise a housing 30 for incorporating and protecting the device. The apparatus 50 further may comprise a display 32 in the form of a liquid crystal display. In other embodiments of the invention the display may be any suitable display technology suitable to display an image or video. The apparatus 50 may further comprise a keypad 34. In other embodiments of the invention any suitable data or user interface mechanism may be employed. For example the user interface may be implemented as a virtual keyboard or data entry system as part of a touch-sensitive display. The apparatus may comprise a microphone 36 or any suitable audio input which may be a digital or analogue signal input. The apparatus 50 may further comprise an audio output device which in embodiments of the invention may be any one of: an earpiece 38, speaker, or an analogue audio or digital audio output connection. The apparatus 50 may also comprise a battery 40 (or in other embodiments of the invention the device may be powered by any suitable mobile energy device such as solar cell, fuel cell or clockwork generator). The term battery discussed in connection with the embodiments may also be one of these mobile energy devices. Further, the apparatus 50 may comprise a combination of different kinds of energy devices, for example a rechargeable battery and a solar cell. The apparatus may further comprise an infrared port 41 for short range line of sight communication to other devices. In other embodiments the apparatus 50 may further comprise any suitable short range communication solution such as for example a Bluetooth wireless connection or a USB/FireWire wired connection.

[0088] The apparatus 50 may comprise a controller 56 or processor for controlling the apparatus 50. The controller 56 may be connected to memory 58 which in embodiments of the invention may store both data and/or may also store instructions for implementation on the controller 56. The controller 56 may further be connected to codec circuitry 54 suitable for carrying out coding and decoding of audio and/or video data or assisting in coding and decoding carried out by the controller 56.

[0089] The apparatus 50 may further comprise a card reader 48 and a smart card 46, for example a universal integrated circuit card (UICC) reader and a universal integrated circuit card for providing user information and being suitable for providing authentication information for authentication and authorization of the user at a network.

[0090] The apparatus 50 may comprise radio interface circuitry 52 connected to the controller and suitable for generating wireless communication signals for example for communication with a cellular communications network, a wireless communications system or a wireless local area network. The apparatus 50 may further comprise an antenna 60 connected to the radio interface circuitry 52 for transmitting radio frequency signals generated at the radio interface circuitry 52 to other apparatus(es) and for receiving radio frequency signals from other apparatus(es).

[0091] In some embodiments of the invention, the apparatus 50 comprises a camera 42 capable of recording or detecting images.

[0092] With respect to Figure 11, an example of a system within which embodiments of the present invention can be utilized is shown. The system 10 comprises multiple communication devices which can communicate through one or more networks. The system 10 may comprise any combination of wired and/or wireless networks including, but not limited to a wireless cellular telephone network (such as a global systems for mobile communications (GSM), universal mobile telecommunications system (UMTS), long term evolution (LTE) based network, code division multiple access (CDMA) network etc.), a wireless local area network (WLAN) such as defined by any of the IEEE 802.x standards, a Bluetooth personal area network, an Ethernet local area network, a token ring local area network, a wide area network, and the Internet.

[0093] For example, the system shown in Figure 11 shows a mobile telephone network 11 and a representation of the internet 28. Connectivity to the internet 28 may include, but is not limited to, long range wireless connections, short range wireless connections, and various wired connections including, but not limited to, telephone lines, cable lines, power lines, and similar communication pathways.

[0094] The example communication devices shown in the system 10 may include, but are not limited to, an electronic device or apparatus 50, a combination of a personal digital assistant (PDA) and a mobile telephone 14, a PDA 16, an integrated messaging device (IMD) 18, a desktop computer 20, a notebook computer 22, a tablet computer. The apparatus 50 may be stationary or mobile when carried by an individual who is moving. The apparatus 50 may also be located in a mode of transport including, but not limited to, a car, a truck, a taxi, a bus, a train, a boat, an airplane, a bicycle, a motorcycle or any similar suitable mode of transport.

[0095] Some or further apparatus may send and receive calls and messages and communicate with service providers through a wireless connection 25 to a base station 24. The base station 24 may be connected to a network server 26 that allows communication between the mobile telephone network 11 and the internet 28. The system may include additional communication devices and communication devices of various types.

[0096] The communication devices may communicate using various transmission technologies including, but not limited to, code division multiple access (CDMA), global systems for mobile communications (GSM), universal mobile telecommunications system (UMTS), time divisional multiple access (TDMA), frequency division multiple access (FDMA), transmission control protocol-internet protocol (TCP-IP), short messaging service (SMS), multimedia messaging service (MMS), email, instant messaging service (IMS), Bluetooth, IEEE 802.11, Long Term Evolution wireless communication technique (LTE) and any similar wireless communication technology. A communications device involved in implementing various embodiments of the present invention may communicate using various media including, but not limited to, radio, infrared, laser, cable connections, and any suitable connection. In the following some example implementations of apparatuses utilizing the present invention will be described in more detail.

[0097] Although the above examples describe embodiments of the invention operating within a wireless communication device, it would be appreciated that the invention as described above may be implemented as a part of any apparatus comprising a circuitry in which radio frequency signals are transmitted and received. Thus, for example, embodiments of the invention may be implemented in a mobile phone, in a base station, in a computer such as a desktop computer or a tablet computer comprising radio frequency communication means (e.g. wireless local area network, cellular radio, etc.).

[0098] In general, the various embodiments of the invention may be implemented in hardware or special purpose circuits or any combination thereof. While various aspects of the invention may be illustrated and described as block diagrams or using some other pictorial representation, it is well understood that these blocks, apparatus, systems, techniques or methods described herein may be implemented in, as non- limiting examples, hardware, software, firmware, special purpose circuits or logic, general purpose hardware or controller or other computing devices, or some combination thereof.

[0099] Embodiments of the inventions may be practiced in various components such as integrated circuit modules. The design of integrated circuits is by and large a highly automated process. Complex and powerful software tools are available for converting a logic level design into a semiconductor circuit design ready to be etched and formed on a semiconductor substrate.

[0100] Programs, such as those provided by Synopsys, Inc. of Mountain View, California and Cadence Design, of San Jose, California automatically route conductors and locate components on a semiconductor chip using well established rules of design as well as libraries of pre-stored design modules. Once the design for a semiconductor circuit has been completed, the resultant design, in a standardized electronic format (e.g., Opus, GDSII, or the like) may be transmitted to a semiconductor fabrication facility or "fab" for fabrication.

[0101] The foregoing description has provided by way of exemplary and non-limiting examples a full and informative description of the exemplary embodiment of this invention. However, various modifications and adaptations may become apparent to those skilled in the relevant arts in view of the foregoing description, when read in conjunction with the accompanying drawings and the appended claims. However, all such and similar modifications of the teachings of this invention will still fall within the scope of this invention.

[0102] In the following some examples will be provided.

[0103] According to a first example, there is provided a method comprising:

receiving at least two streams of images of a 360 degree view captured by at least a first multi-camera unit and a second multi-camera unit, wherein the first multi-camera unit and the second multi-camera unit capture images of the same scene and the second multi-camera unit is at least partially visible in the content captured by the first multi-camera unit;

replacing the presence of the second multi-camera unit in the scene captured by the first multi-camera unit based on captured content of at least the first multi-camera unit and the second multi-camera unit;

wherein the replacing comprises utilizing information of mutual location and orientation of cameras of the first multi-camera unit and the second multi-camera unit and information of at least one of the location and orientation of at least two multi-camera units.

[0104] In some embodiments the method comprises utilizing the scene captured from at least a third available camera unit, wherein the third camera unit captures the same scene as the first multi-camera unit and the second multi-camera unit.

[0105] In some embodiments of the method the scene captured by at least a third available camera unit is a scene captured by a third multi-camera unit capturing three-dimensional images.

[0106] In some embodiments of the method the scene captured by at least a third available camera unit is a scene captured by a camera capturing two-dimensional images.

[0107] In some embodiments the method comprises:

copying content captured by the second multi-camera unit to the respective location in the captured content of the first multi-camera unit.

[0108] In some embodiments the method comprises:

determining the location where the second multi-camera unit is visible in a content captured by the first multi-camera unit;

examining which image captured by a camera of the second multi-camera unit comprises at least a part of the view blocked by the second multi-camera unit in the content captured by the first multi-camera unit; and

replacing pixels of the determined location in the content captured by the first multi-camera unit with pixels of the image captured by the camera of the second multi-camera unit.
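The three steps above can be sketched as follows. This is a minimal illustration, not the application's implementation: it assumes both captures are already aligned equirectangular grids, that the occlusion mask (the "determined location") is given, and that the pixel offset derived from the units' mutual location and orientation is known.

```python
def replace_occluded_region(first_view, second_view, mask, offset):
    """Replace the pixels where the second unit is visible in the first
    unit's panorama with co-located pixels from the second unit's capture.

    first_view, second_view: H x W grids (lists of rows) of pixel values,
        assumed to be aligned equirectangular projections (an assumption).
    mask: H x W grid of booleans, True where the second unit occludes the
        scene in the first unit's content.
    offset: (dy, dx) shift aligning the two panoramas, derived from the
        mutual location and orientation of the two units (hypothetical).
    """
    height, width = len(first_view), len(first_view[0])
    dy, dx = offset
    result = [row[:] for row in first_view]
    for y in range(height):
        for x in range(width):
            if mask[y][x]:
                # Wrap horizontally (360-degree view), clamp vertically.
                sy = min(max(y + dy, 0), height - 1)
                sx = (x + dx) % width
                result[y][x] = second_view[sy][sx]
    return result
```

The horizontal wrap reflects the 360 degree nature of the panorama: a source column shifted past the right edge re-enters from the left.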

[0109] In some embodiments the method comprises:

zooming the image captured by the camera of the second multi-camera unit before replacing the pixels.

[0110] In some embodiments the method comprises:

replacing an area smaller than the area covered by the second multi-camera unit with pixels of one or more images captured by the second multi-camera unit; and

replacing the remaining area covered by the second multi-camera unit by interpolating pixels of one or more images captured by the second multi-camera unit and pixels of one or more images captured by the first multi-camera unit.
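The two-part replacement above can be sketched with a simple cross-fade. The fixed blend weight and the precomputed inner and border masks are assumptions for illustration; a real blend might ramp the weight with distance from the patch centre.

```python
def fill_with_blended_border(first_view, second_view,
                             inner_mask, border_mask, alpha=0.5):
    """Fill the inner occluded area directly from the second unit's capture
    and interpolate a surrounding border ring between both captures, so the
    patch blends into the first unit's panorama without a hard seam.

    alpha: weight of the second unit's pixels in the border ring
        (a hypothetical fixed value).
    """
    height, width = len(first_view), len(first_view[0])
    result = [row[:] for row in first_view]
    for y in range(height):
        for x in range(width):
            if inner_mask[y][x]:
                # Area smaller than the occluded region: copy directly.
                result[y][x] = second_view[y][x]
            elif border_mask[y][x]:
                # Remaining area: interpolate pixels from both captures.
                result[y][x] = (alpha * second_view[y][x]
                                + (1.0 - alpha) * first_view[y][x])
    return result
```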

[0111] In some embodiments the method comprises:

transforming the two-dimensional content to a three-dimensional scene;

making corrections and changes in the three-dimensional reconstruction; and

back-projecting the three-dimensional reconstruction to a two-dimensional representation.
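The transform-edit-back-project round trip can be sketched for a single pixel. The equirectangular coordinate conventions below are assumptions for illustration, not details from the application; the point is that a pixel plus a depth value maps to a 3-D point, which can be edited and then projected back to 2-D.

```python
import math

def unproject(u, v, depth, width, height):
    """Map an equirectangular pixel (u, v) with a depth value to a 3-D
    point (the 2-D -> 3-D transformation step; hypothetical conventions)."""
    lon = (u / width) * 2.0 * math.pi - math.pi       # -pi .. pi
    lat = math.pi / 2.0 - (v / height) * math.pi      # +pi/2 .. -pi/2
    x = depth * math.cos(lat) * math.sin(lon)
    y = depth * math.sin(lat)
    z = depth * math.cos(lat) * math.cos(lon)
    return x, y, z

def project(x, y, z, width, height):
    """Back-project a 3-D point to equirectangular pixel coordinates
    (the 3-D -> 2-D back-projection step)."""
    depth = math.sqrt(x * x + y * y + z * z)
    lon = math.atan2(x, z)
    lat = math.asin(y / depth)
    u = (lon + math.pi) / (2.0 * math.pi) * width
    v = (math.pi / 2.0 - lat) / math.pi * height
    return u, v
```

Corrections and changes (for example, removing the second unit's geometry) would be applied to the 3-D points between these two steps.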

[0112] In some embodiments the method comprises:

obtaining depth information of the scene; and

utilizing the depth information in determining at least one of the following:

mutual distance of at least the first multi-camera unit and the second multi-camera unit;

which image captured by the second multi-camera unit comprises at least a part of the view blocked by the second multi-camera unit in the content captured by the first multi-camera unit.
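The two depth-assisted determinations above can be sketched as follows. The position estimates and per-camera optical-axis vectors are hypothetical inputs; the camera-selection rule (maximum dot product with the blocked viewing direction) is one plausible criterion, not one mandated by the application.

```python
import math

def mutual_distance(position_a, position_b):
    """Distance between two multi-camera units, given 3-D position
    estimates recovered with the help of depth information."""
    return math.dist(position_a, position_b)

def best_covering_camera(blocked_direction, camera_axes):
    """Select which camera of the second unit most likely captures the part
    of the view blocked by that unit in the first unit's content: the
    camera whose optical axis (a unit vector) aligns best with the blocked
    viewing direction, measured by the dot product."""
    def alignment(item):
        _, axis = item
        return sum(a * b for a, b in zip(blocked_direction, axis))
    name, _ = max(camera_axes.items(), key=alignment)
    return name
```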

[0113] According to a second example, there is provided an apparatus comprising at least one processor; and at least one memory including computer program code, the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus to perform at least the following:

receive at least two streams of images of a 360 degree view captured by at least a first multi-camera unit and a second multi-camera unit, wherein the first multi-camera unit and the second multi-camera unit capture images of the same scene and the second multi-camera unit is at least partially visible in the content captured by at least the first multi-camera unit;

replace the presence of the second multi-camera unit in the scene captured by the first multi-camera unit based on captured content of the first multi-camera unit and the second multi-camera unit;

wherein the replacing comprises utilizing information of mutual location and orientation of cameras of the first multi-camera unit and the second multi-camera unit and information of at least one of the location and orientation of at least two multi-camera units.

[0114] In some embodiments of the apparatus, said at least one memory including computer program code configured to, with the at least one processor, cause the apparatus to:

utilize the scene captured from at least a third available multi-camera unit, the third multi-camera unit capturing the same scene as the first multi-camera unit and the second multi-camera unit.

[0115] In some embodiments of the apparatus, said at least one memory including computer program code configured to, with the at least one processor, cause the apparatus to:

copy content captured by the second multi-camera unit to the respective location in the captured content of the first multi-camera unit.

[0116] In some embodiments of the apparatus, said at least one memory including computer program code configured to, with the at least one processor, cause the apparatus to:

determine the location where the second multi-camera unit is visible in a content captured by the first multi-camera unit;

examine which image captured by a camera of the second multi-camera unit comprises at least a part of the view blocked by the second multi-camera unit in the content captured by the first multi-camera unit; and

replace pixels of the determined location in the content captured by the first multi-camera unit with pixels of the image captured by the camera of the second multi-camera unit.

[0117] In some embodiments of the apparatus, said at least one memory including computer program code configured to, with the at least one processor, cause the apparatus to:

zoom the image captured by the camera of the second multi-camera unit before replacing the pixels.

[0118] In some embodiments of the apparatus, said at least one memory including computer program code configured to, with the at least one processor, cause the apparatus to:

replace an area smaller than the area covered by the second multi-camera unit with pixels of one or more images captured by the second multi-camera unit; and

replace the remaining area covered by the second multi-camera unit by interpolating pixels of one or more images captured by the second multi-camera unit and pixels of one or more images captured by the first multi-camera unit.

[0119] In some embodiments of the apparatus, said at least one memory including computer program code configured to, with the at least one processor, cause the apparatus to:

transform the two-dimensional content to a three-dimensional scene;

make corrections and changes in the three-dimensional reconstruction; and

back-project the three-dimensional reconstruction to a two-dimensional representation.

[0120] In some embodiments of the apparatus, said at least one memory including computer program code configured to, with the at least one processor, cause the apparatus to:

obtain depth information of the scene; and

utilize the depth information in determining at least one of the following:

mutual distance of at least the first multi-camera unit and the second multi-camera unit;

which image captured by the second multi-camera unit comprises at least a part of the view blocked by the second multi-camera unit in the content captured by the first multi-camera unit.

[0121] According to a third example, there is provided an apparatus comprising:

means for receiving at least two streams of images of a 360 degree view captured by at least a first multi-camera unit and a second multi-camera unit, wherein the first multi-camera unit and the second multi-camera unit capture images of the same scene and the second multi-camera unit is at least partially visible in the content captured by the first multi-camera unit;

means for replacing the presence of the second multi-camera unit in the scene captured by the first multi-camera unit based on captured content of at least the first multi-camera unit and the second multi-camera unit;

wherein the replacing comprises utilizing information of mutual location and orientation of cameras of the first multi-camera unit and the second multi-camera unit and information of at least one of the location and orientation of at least two multi-camera units.

[0122] In some embodiments of the apparatus the scene captured by at least a third available camera unit is a scene captured by a third multi-camera unit capturing three-dimensional images.

[0123] In some embodiments of the apparatus the scene captured by at least a third available camera unit is a scene captured by a camera capturing two-dimensional images.

[0124] In some embodiments the apparatus comprises:

means for copying content captured by the second multi-camera unit to the respective location in the captured content of the first multi-camera unit.

[0125] In some embodiments the apparatus comprises:

means for determining the location where the second multi-camera unit is visible in a content captured by the first multi-camera unit;

means for examining which image captured by a camera of the second multi-camera unit comprises at least a part of the view blocked by the second multi-camera unit in the content captured by the first multi-camera unit; and

means for replacing pixels of the determined location in the content captured by the first multi-camera unit with pixels of the image captured by the camera of the second multi-camera unit.

[0126] In some embodiments the apparatus comprises:

means for zooming the image captured by the camera of the second multi-camera unit before replacing the pixels.

[0127] In some embodiments the apparatus comprises:

means for replacing an area smaller than the area covered by the second multi-camera unit with pixels of one or more images captured by the second multi-camera unit; and

means for replacing the remaining area covered by the second multi-camera unit by interpolating pixels of one or more images captured by the second multi-camera unit and pixels of one or more images captured by the first multi-camera unit.

[0128] In some embodiments the apparatus comprises:

means for transforming the two-dimensional content to a three-dimensional scene;

means for making corrections and changes in the three-dimensional reconstruction; and

means for back-projecting the three-dimensional reconstruction to a two-dimensional representation.

[0129] In some embodiments the apparatus comprises:

means for obtaining depth information of the scene; and

means for utilizing the depth information in determining at least one of the following:

mutual distance of at least the first multi-camera unit and the second multi-camera unit;

which image captured by the second multi-camera unit comprises at least a part of the view blocked by the second multi-camera unit in the content captured by the first multi-camera unit.

[0130] According to a fourth example, there is provided a computer readable storage medium stored with code thereon for use by an apparatus, which when executed by a processor, causes the apparatus to perform:

receive at least two streams of images of a 360 degree view captured by at least a first multi-camera unit and a second multi-camera unit, wherein the first multi-camera unit and the second multi-camera unit capture images of the same scene and the second multi-camera unit is at least partially visible in the content captured by the first multi-camera unit;

replace the presence of the second multi-camera unit in the scene captured by the first multi-camera unit based on captured content of at least the first multi-camera unit and the second multi-camera unit;

wherein the replacing comprises utilizing information of mutual location and orientation of cameras of the first multi-camera unit and the second multi-camera unit and information of at least one of the location and orientation of at least two multi-camera units.

[0131] In some embodiments the computer readable storage medium is stored with code thereon for use by the apparatus, which when executed by a processor, causes the apparatus to perform:

utilize the scene captured from at least a third available camera unit, the third camera unit capturing the same scene as the first multi-camera unit and the second multi-camera unit.

[0132] In some embodiments of the computer readable storage medium the scene captured by at least a third available camera unit is a scene captured by a third multi-camera unit capturing three-dimensional images.

[0133] In some embodiments of the computer readable storage medium the scene captured by at least a third available camera unit is a scene captured by a camera capturing two-dimensional images.

[0134] In some embodiments the computer readable storage medium is stored with code thereon for use by the apparatus, which when executed by a processor, causes the apparatus to perform:

copy content captured by the second multi-camera unit to the respective location in the captured content of the first multi-camera unit.

[0135] In some embodiments the computer readable storage medium is stored with code thereon for use by the apparatus, which when executed by a processor, causes the apparatus to perform:

determine the location where the second multi-camera unit is visible in a content captured by the first multi-camera unit;

examine which image captured by a camera of the second multi-camera unit comprises at least a part of the view blocked by the second multi-camera unit in the content captured by the first multi-camera unit; and

replace pixels of the determined location in the content captured by the first multi-camera unit with pixels of the image captured by the camera of the second multi-camera unit.

[0136] In some embodiments the computer readable storage medium is stored with code thereon for use by the apparatus, which when executed by a processor, causes the apparatus to perform:

zoom the image captured by the camera of the second multi-camera unit before replacing the pixels.

In some embodiments the computer readable storage medium is stored with code thereon for use by the apparatus, which when executed by a processor, causes the apparatus to perform:

replace an area smaller than the area covered by the second multi-camera unit with pixels of one or more images captured by the second multi-camera unit; and

replace the remaining area covered by the second multi-camera unit by interpolating pixels of one or more images captured by the second multi-camera unit and pixels of one or more images captured by the first multi-camera unit.

[0137] In some embodiments the computer readable storage medium is stored with code thereon for use by the apparatus, which when executed by a processor, causes the apparatus to perform:

transform the two-dimensional content to a three-dimensional scene;

make corrections and changes in the three-dimensional reconstruction; and

back-project the three-dimensional reconstruction to a two-dimensional representation.

In some embodiments the computer readable storage medium is stored with code thereon for use by the apparatus, which when executed by a processor, causes the apparatus to perform:

obtain depth information of the scene; and

utilize the depth information in determining at least one of the following:

mutual distance of at least the first multi-camera unit and the second multi-camera unit;

which image captured by the second multi-camera unit comprises at least a part of the view blocked by the second multi-camera unit in the content captured by the first multi-camera unit.