

Title:
METHOD AND DEVICE FOR GENERATING AN IMAGE
Document Type and Number:
WIPO Patent Application WO/2021/165375
Kind Code:
A1
Abstract:
A method and system are provided for generating an image from one or more cameras in a camera array matrix. In one embodiment, the method comprises increasing redundancy between images to be captured by the camera matrix by rotating the direction of any cameras disposed in the upper and lower rows of said matrix by a 90 degree angle around the roll axis. Any cameras disposed at the corners of the matrix are then rotated by an angle that is less than 90 degrees around the roll axis. Subsequently, the locations of central cameras are determined and analyzed so that they can be rotated and disposed in a manner that provides both horizontal and vertical compensation for any redundancies.

Inventors:
BABON FREDERIC (FR)
BOISSON GUILLAUME (FR)
LANGLOIS TRISTAN (FR)
Application Number:
PCT/EP2021/053982
Publication Date:
August 26, 2021
Filing Date:
February 18, 2021
Assignee:
INTERDIGITAL CE PATENT HOLDINGS SAS (FR)
International Classes:
H04N5/232; G03B37/04; H04N5/225; H04N5/247
Foreign References:
US20150278988A1 (2015-10-01)
US9699379B1 (2017-07-04)
US9497380B1 (2016-11-15)
Attorney, Agent or Firm:
INTERDIGITAL (FR)
Claims:
CLAIMS:

1. A method comprising generating an image by rotating at least one camera disposed in upper and lower rows of a camera matrix by a 90 degree angle around the roll axis; rotating at least one camera disposed at the corners of said matrix by an angle that is less than 90 degrees around the roll axis; determining the location of central cameras disposed in said matrix and analyzing their degree of redundancy; and rotating and/or moving said central cameras both horizontally and vertically based on redundancy in the central position of said array.

2. A system for capturing an image comprising: means for rotating at least one camera disposed in upper and lower rows of a camera matrix by a 90 degree angle; means for rotating at least one camera disposed at corners of said matrix by an angle that is less than 90 degrees; a processor configured for determining the location of central cameras disposed in said matrix and analyzing their degree of redundancy; and means for moving said central cameras both horizontally and vertically so as to compensate for any redundancy in the central position of said array.

3. The method of claim 1 or system of claim 2 wherein said central cameras are horizontally disposed to eliminate any image redundancy in the central portion.

4. The method of claim 1 or 3 or system of claim 2 or 3 wherein said central cameras are vertically disposed to increase the density of information provided from the upper and lower parts of the camera matrix.

5. The method of claim 1 further comprising capturing a first image from at least one of said cameras in said camera array and a second image from a second camera in said camera array adjacent to said first camera in said array.

6. The method of claim 5 comprising obtaining images from said first and second cameras, wherein said cameras have an overlapping field of view (FOV).

7. The system of claim 2 wherein at least a first image is captured from at least one of said cameras in said camera array and a second image is captured from a second camera in said camera array adjacent to said first camera in said array.

8. The system of claim 7 wherein said first and second cameras have an overlapping field of view (FOV).

9. The method of claim 6 or system of claim 8 wherein said first image and said second image overlap at least in one area.

10. The method of claim 9 or system of claim 9 further comprising comparing said first and second images and analyzing their overlapping portions to remove all redundancies.

11. The method of claim 1 or any of 3-6 further comprising synchronously capturing a supplemental image from all said cameras in said array.

12. The system of claim 2 or any of 6-10 further comprising means for synchronously capturing a supplemental image from all said cameras in said array.

13. The method of claim 11 further comprising comparing said synchronously captured images and analyzing their overlapping portions to remove all redundancies.

14. The method of claim 1 or any of claims 3-6 further comprising stitching portions of said images after redundancies are removed to generate a final image with higher resolution than each of the images captured by each individual camera in said array.

15. The system of claim 1 or 6-10 and 12 further comprising means for stitching portions of said images after redundancies are removed to generate an image with higher resolution than each of the images captured by each individual camera in said array.

Description:
METHOD AND DEVICE FOR GENERATING AN IMAGE

TECHNICAL FIELD

[0001] The present embodiments relate generally to image generation and processing and more particularly to techniques for optimizing redundancy of images for camera arrays.

BACKGROUND

[0002] Conventional cameras capture light from a three-dimensional scene on a two-dimensional sensor device sensitive to visible light. The light sensitive technology used in such imaging devices is often based on semiconductor technology capable of converting photons into electrons, such as, for example, charge coupled devices (CCD) or complementary metal oxide semiconductor (CMOS) devices. A digital image photosensor, for example, typically includes an array of photosensitive cells, each cell being configured to capture incoming light. A 2D image providing spatial information is obtained from a measurement of the total amount of light captured by each photosensitive cell of the image sensor device. While the 2D image can provide information on the intensity and the color of the light at spatial points of the photosensor(s), no information is provided on the direction of the incoming light.

[0003] Other types of cameras have recently been developed that provide a richer and more image intensive product. One such camera is a light field camera. Light field cameras allow real content to be captured from various points of view. The two major families of light-field cameras are the matrix of cameras and the plenoptic camera. A matrix of cameras can be replaced by a single camera which is used to perform many acquisitions from various points of view; the light-field being captured is therefore limited to a static scene. With plenoptic cameras, an array of micro-lenses is located between the main lens and the sensor. The micro-lenses produce micro-images which correspond to various points of view. The matrix of micro-images collected by the sensor can be transformed into so-called sub-aperture images which are equivalent to the acquisition obtained with a matrix of cameras. The proposed invention is described considering a matrix of cameras, but would apply equally well to the set of sub-aperture images extracted from a plenoptic camera.
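As an illustrative aside (not part of the application), the rearrangement of plenoptic micro-images into sub-aperture images mentioned above can be sketched as follows, assuming an idealized square micro-lens grid with one square micro-image per micro-lens and ignoring vignetting and hexagonal layouts; the function name and array shapes are hypothetical.

```python
import numpy as np

def extract_subaperture_images(raw, microimage_size):
    """Rearrange an idealized plenoptic raw capture into sub-aperture views.

    raw:             2D array of shape (H, W), tiled with square micro-images,
                     each micro-image being microimage_size x microimage_size pixels.
    microimage_size: number of pixels behind each micro-lens (assumed square).

    Returns an array of shape (s, s, H//s, W//s): one sub-aperture image per
    pixel position under the micro-lenses.
    """
    h, w = raw.shape
    s = microimage_size
    h_sub, w_sub = h // s, w // s
    # Crop to a whole number of micro-images, then split each micro-image into
    # its s x s pixels; gathering pixel (u, v) of every micro-image yields the
    # (u, v) sub-aperture image.
    blocks = raw[: h_sub * s, : w_sub * s].reshape(h_sub, s, w_sub, s)
    return blocks.transpose(1, 3, 0, 2)

# Toy usage: a 6x6 "sensor" with 2x2 pixels per micro-lens gives 4 views of 3x3 pixels.
views = extract_subaperture_images(np.arange(36, dtype=float).reshape(6, 6), 2)
print(views.shape)  # (2, 2, 3, 3)
```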

[0004] Light field images may be recorded using one of the following system categories: plenoptic cameras or camera arrays. Camera arrays are more flexible in terms of field of view and angle coverage. A classic camera array setup consists of compact cameras mounted on a metal frame, positioned in a common plane and pointing in the same direction. Output images from the camera arrays are often processed to compute depth images and synthesized images from a virtual point of view. The computation of these images relies on the level of redundancy between all the input images, and in many instances the prior art suffers from a lack of redundancy that ultimately results in missing parts in the production of the final images.

[0005] Consequently, it is desirable to provide techniques that provide adequate redundancy between the input and output results without causing missing information in the final processed and produced images.

SUMMARY

[0006] A method and system are provided for generating an image from one or more cameras in a camera array matrix. In one embodiment, the method comprises rotating the direction of any cameras disposed in the upper and lower rows of said matrix by a 90 degree angle around the roll axis, for example, based on a degree of redundancy between captured images. One or more cameras disposed at the corners of the matrix are rotated by an angle that is less than 90 degrees around the roll axis. Subsequently, the locations of central cameras are determined and analyzed so that they can be rotated and disposed in a manner that provides both horizontal and vertical compensation for any redundancies.

BRIEF DESCRIPTION OF THE DRAWINGS

[0007] Different embodiments will now be described, by way of example only, and with reference to the following drawings in which:

[0008] Figure 1 is an illustration of a camera sensor format comparison with these and other setups;

[0009] Figure 2 provides an example of an input image according to an embodiment;

[0010] Figure 3 provides a synthesized image according to an embodiment with all cameras of an array in a same neutral roll position;

[0011] Figure 4 provides an example of a smart device;

[0012] Figures 5A and 5B provide a perspective view of a synthetic scene in a quick shading and a wireframe format;

[0013] Figure 6 shows a conventional (classic) 4x4 camera array format used in the prior art;

[0014] Figure 7 provides 4x4 redundancy at 0.5m;

[0015] Figure 8 provides 4x4 redundancy at 1m;

[0016] Figure 9 provides 4x4 redundancy at 2m;

[0017] Figure 10 provides 4x4 redundancy at 3m;

[0018] Figure 11 is an illustration of 4x4 output as RGB images;

[0019] Figure 12 is an illustration of 4x4 with depth estimation;

[0020] Figure 13 provides an example of a point cloud illustration of a scene comprising several objects;

[0021] Figure 14 is an example illustration of a Z-Roll optimization using a 4x4 camera array structure;

[0022] Figure 15 is an example illustration of a Z-roll redundancy at 0.5m;

[0023] Figure 16 is an illustration of a Z-roll redundancy at 1m;

[0024] Figure 17 is an illustration of a Z-roll redundancy at 2m;

[0025] Figure 18 is an illustration of a Z-roll redundancy at 3m;

[0026] Figure 19 is an illustration of z-roll rendered RGB images;

[0027] Figure 20 is an illustration of a z-roll depth estimation as provided by the camera images and a depth estimation software (to compute the depth maps);

[0028] Figure 21 provides the final result of the z-roll 3D point cloud;

[0029] Figure 22 is an illustration of a chart having the spacing comparisons with differently arranged embodiments;

[0030] Figure 23 is an illustration providing an off-centered and z-roll configuration;

[0031] Figure 24 is an illustration providing the off-centered redundancy at 0.5m;

[0032] Figure 25 is an illustration providing the off-centered redundancy at 1m;

[0033] Figure 26 is an illustration providing the off-centered redundancy at 2m;

[0034] Figure 27 is an illustration providing the off-centered redundancy at 3m;

[0035] Figure 28 is an illustration of the result for rendered RGB images;

[0036] Figure 29 is an illustration for the depth estimation as applied;

[0037] Figure 30 is an illustration of a final point cloud corresponding to the off-centered + z-roll configuration as produced using the RGB and depth images;

[0038] Figure 31 is a flowchart illustration of one embodiment; and

[0039] Figure 32 is a diagram illustrating a hardware configuration of a device in which various embodiments can be implemented.

DETAILED DESCRIPTION

[0040] Light-field image and video processing offers a much richer variety of image manipulation possibilities compared to traditional 2D images. However, capturing high-quality light-fields is challenging because a huge amount of data has to be captured and managed. Often, a number of different views, provided in high dynamic range with excellent color and resolution, need to be combined together. In addition, 2D images have to be manipulated so that they can be projected into a three-dimensional space. In digital images, this involves providing a grid-like plane representative of pixels. For every visible point in space, a 2D image often provides the intensity of one or multiple pixels. In addition, other principles associated with stereoscopic image manipulation have to be considered, such as providing two different views of a scene: the depth impression is given to a user's eyes by presenting slightly shifted images (parallax) to the left and the right eye. These requirements greatly enhance the visual experience, but they also significantly increase the volume of data that has to be captured, managed, stored and recovered.

[0041] Light field images may be recorded using one of the following system categories: plenoptic cameras or camera arrays. Camera arrays are more flexible in terms of field of view and angle coverage. Classic camera array setups are often positioned in a common plane and pointing in the same direction, and they contain camera sensors with commonly used aspect ratios. In many prior art setups the aspect ratios vary between 4/3 and 3/2 for photography and 4/3, 16/9 and 2.39/1 for video, but in alternate settings other less common formats may exist that provide for a different setup (1/1, 2/1, etc.). Figure 1 is an illustration of a camera sensor format comparison. These formats are adapted to photo printing and display on screens.

[0042] In recent years, with the advent of stereoscopic and even virtual and augmented reality applications, the contents produced by camera arrays are rendered on head mounted displays for 360 degree viewing. In such cases, the camera aspect ratio is constrained in such a way as to maximize the vertical and horizontal covering of the captured scene. In these and other similar applications, cameras in arrays are arranged in a variety of ways: along one horizontal row, in multiple rows and columns (rectangular or square shapes), all in one plane or convergent/divergent. However, in most conventional cases, the rotation along the roll axis remains the same. It might seem obvious to always keep the same neutral roll position to keep all images consistent, but this approach has many drawbacks.

[0043] Output images from the camera arrays are often processed to compute depth images and synthesized images from a virtual point of view. The computation of these images relies on the level of redundancy between all the input images, and it can be observed that a lack of redundancy results in missing parts or artifacts in the synthesized images. Figures 2 and 3 provide an example of this. Figure 2 provides an example of an input image, while Figure 3 provides a synthesized image with all cameras in the same neutral roll position. As shown, some information is missing in Figure 3.

[0044] In one embodiment, as will be presently discussed, if the cameras are rotated along the roll axis, the redundancy can be better distributed within the final synthesized image. As will be discussed, two new camera array architectures can be presented that have the same external dimensions but with specific camera orientations and positions to optimize the redundancy between images and thus improve the scene coverage of the synthesized images. Prior to discussing these new architectures, however, some background information should be discussed.
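A minimal sketch, not taken from the application, of why a roll rotation redistributes coverage: rotating a camera about its optical (roll) axis leaves the viewing direction unchanged while turning the wide sensor axis toward the vertical. The axis conventions below are assumptions for illustration only.

```python
import numpy as np

def roll_matrix(angle_deg):
    """Rotation about the camera's optical (roll / z) axis."""
    a = np.radians(angle_deg)
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

right   = np.array([1.0, 0.0, 0.0])   # wide (horizontal) sensor axis
forward = np.array([0.0, 0.0, 1.0])   # optical axis (viewing direction)

R = roll_matrix(90.0)
print(np.round(R @ forward, 6))   # [0. 0. 1.] -> viewing direction is unchanged
print(np.round(R @ right, 6))     # [0. 1. 0.] -> the wide sensor axis now spans the vertical
```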

[0045] Figure 4 provides an example of a smart device. In this particular embodiment, for ease of understanding, a smartphone is used, but in alternate embodiments other smart devices can be utilized.

[0046] In a first generation of smartphones that included dual cameras (around 2007), the goal was to record 3D photos. In later generation smartphones (circa 2014), devices with two cameras were provided with an objective of improving low-light performance. Another objective was to edit bokeh and take full advantage of the possibilities it provided. Soon dual camera devices became standard, and by 2018 many smart devices, including smartphones with 3 or 4 cameras, were introduced. New trends will provide camera array setups in smart devices such as smartphones with many more (e.g. 16) cameras. Thus, optimized redundancy for these camera arrays is becoming an increasingly important concept.

[0047] Figures 5A and 5B provide a perspective view of a synthetic scene in a quick shading and a wireframe format. To evaluate the performance of the different rig array configurations, a synthetic scene has been modeled in a 3D rendering application (Cinema 4D). This scene contains multiple objects placed at distances varying between 1.4 and 5.1 meters and with different angular rotations relative to the camera array. The textures of the objects are deliberately not solid colors, to ease the passive depth estimation.

[0048] Figure 6 shows a conventional (classic) 4x4 camera array format used in the prior art; the reference camera array (4x4 shape) was arranged such that there were 4 cameras per row and 4 cameras per column. In one embodiment, this can be used as a baseline calculation for any number of cameras in a camera array. This can also be applied as a good example for a 4x4 camera array, with the understanding that the values obtained in some embodiments may not always be the same because some cameras may be different. However, in many instances, the cameras used in this format often possess similar characteristics.

[0049] Returning to the previous discussion, this shape is sometimes considered the basic brick of the camera matrix market. In order to simulate a 16 image format, a particular distance (often 7cm) is kept between camera centers (often a 21cm distance between opposite cameras). Conventionally, many of the cameras used in this format possess the following characteristics:

Focal length: 8mm
Camera resolution: 1920x1080 pixels
Horizontal and vertical angle of view: 62° / 37°
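As a quick plausibility check (my own arithmetic, not values from the application), the quoted angles of view are consistent with a roughly 16:9 sensor behind the 8mm lens, using the standard pinhole relation width = 2·f·tan(FOV/2); the sensor dimensions themselves are not stated in the text and are derived here only for illustration.

```python
import math

f_mm = 8.0                       # focal length given above
h_fov_deg, v_fov_deg = 62.0, 37.0

width_mm  = 2.0 * f_mm * math.tan(math.radians(h_fov_deg) / 2.0)
height_mm = 2.0 * f_mm * math.tan(math.radians(v_fov_deg) / 2.0)

print(f"implied sensor: {width_mm:.1f} x {height_mm:.1f} mm "
      f"(aspect ratio {width_mm / height_mm:.2f}, 16/9 = {16/9:.2f})")
# -> implied sensor about 9.6 x 5.4 mm, aspect ratio ~1.80 (close to the 1920x1080, 16:9 resolution)
```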

[0050] Figures 7-10 provide different embodiments and examples of the classic 4x4 camera arrangement at different redundancy distances. As illustrated, Figure 7 provides the classic 4x4 redundancy at 0.5m, while Figure 8 provides the classic 4x4 redundancy at 1m, Figure 9 provides the classic 4x4 redundancy at 2m and finally Figure 10 provides the classic 4x4 redundancy at 3m.
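A rough sketch (my own arithmetic, not figures reproduced from the application) of why the redundancy illustrated in Figures 7-10 grows with distance: with a fixed 7cm baseline between neighbouring cameras, the fraction of one camera's footprint that is also seen by its neighbour approaches one as the subject moves away. The 62° angle of view and 7cm baseline are the values assumed above for the simulated rig.

```python
import math

def overlap_fraction(fov_deg, baseline_m, distance_m):
    """Fraction of one camera's footprint at a given distance that is also
    covered by a parallel neighbouring camera offset by `baseline_m`."""
    footprint = 2.0 * distance_m * math.tan(math.radians(fov_deg) / 2.0)
    return max(0.0, footprint - baseline_m) / footprint

H_FOV, BASELINE = 62.0, 0.07     # values assumed for the simulated 4x4 rig
for d in (0.5, 1.0, 2.0, 3.0):
    print(f"{d} m: {overlap_fraction(H_FOV, BASELINE, d):.0%} horizontal overlap")
# 0.5 m: ~88%, 1 m: ~94%, 2 m: ~97%, 3 m: ~98%
```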

[0051] In all these arrangements, the redundancy between cameras is dependent on the distance between the cameras and the subject. Multiple simulations were performed to evaluate the level of redundancy of the classic 4x4 camera array. In each case, the redundancy is particularly concentrated in the central portion of the captured scene, lacking details on the borders, and the coverage of this array configuration is constrained by the camera sensor format (here landscape only).

[0052] This camera array structure can be used to obtain the 16 RGB images as direct output from the 16 cameras here, but in alternate embodiments any number of cameras can be included. A depth estimation software uses these images and a calibration file as input and then compares, for each pixel, the RGB value from each camera to compute the depth. The result is provided in the illustrations of Figures 11 and 12. As shown, Figure 11 is an illustration of the classic 4x4 output as RGB images and Figure 12 is an illustration of the classic 4x4 with depth estimation. The RGB and depth estimation images are then taken as input to produce a point cloud with the following condition: a 3D point is projected in space only if a minimum of two views estimate it at the same position within a 0.5 pixel maximum distance.

[0053] Figure 13 provides a point cloud illustration of a scene comprising several objects. To extend the coverage of the captured scene while keeping the global size of the camera array, one embodiment proposes to rotate one or more cameras on the roll axis. Considering the landscape 16:9 format of the cameras, an aim is to increase the global vertical field of view. A 90 degree rotation can be applied to the cameras in the upper and the lower rows. In order to avoid a sudden break in the point cloud between the upper, middle and lower vertical parts, the corner cameras are not rotated by 90 degrees. A 45 degree rotation for these 4 corner cameras permits a rounder overall shape to be obtained with a better 3D space distribution. The following illustration helps to better understand the global shape of this Z-roll optimized camera structure.
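Before turning to the illustration of Figure 14, the roll assignment just described for a 4x4 rig (90 degrees for the non-corner cameras of the upper and lower rows, 45 degrees for the four corners, neutral elsewhere) might be expressed as in the sketch below; this is one possible encoding of the scheme, not code from the application.

```python
def z_roll_angle(row, col, n_rows=4, n_cols=4):
    """Roll angle (degrees) for the camera at (row, col) in the Z-roll layout:
    corners get 45 degrees, the rest of the top and bottom rows 90 degrees,
    and the middle rows keep the neutral roll position."""
    on_top_or_bottom = row in (0, n_rows - 1)
    on_left_or_right = col in (0, n_cols - 1)
    if on_top_or_bottom and on_left_or_right:
        return 45.0          # corner camera
    if on_top_or_bottom:
        return 90.0          # non-corner camera of the upper / lower row
    return 0.0               # central rows stay in the neutral roll position

for r in range(4):
    print([z_roll_angle(r, c) for c in range(4)])
# [45.0, 90.0, 90.0, 45.0]
# [0.0, 0.0, 0.0, 0.0]
# [0.0, 0.0, 0.0, 0.0]
# [45.0, 90.0, 90.0, 45.0]
```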

[0054] The Z-Roll optimization using a 4x4 camera array structure is provided for ease of understanding in Figure 14. To better visualize the impact on the level of redundancy, the same simulations at distances between 0.5 meter and 3 meters were done, as illustrated in Figures 15 through 18 (Figure 15: Z-roll redundancy at 0.5m; Figure 16: Z-roll redundancy at 1m; Figure 17: Z-roll redundancy at 2m; and Figure 18: Z-roll redundancy at 3m). Figures 19 and 20 provide the z-roll rendered RGB images and the z-roll depth estimation, respectively, as provided by the camera images and a depth estimation software (to compute the depth maps). The result in this case is 32 images (16 RGB + 16 depth) that are then used to produce the corresponding point cloud with the same condition as before (a 3D point is kept only if a 0.5 pixel maximum difference is evaluated between two views). Figure 21 provides the final result of the z-roll 3D point cloud as illustrated. With a better distributed redundancy, there is a worthwhile difference in terms of 3D space coverage. It can be seen that some parts of this point cloud are sparser than in the classic 4x4 configuration (the television and its support, for example). This is due to the z-rolled cameras reducing the redundancy of the central portion. The next embodiment configuration can help to address this issue.
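The point cloud condition restated above (a 3D point is kept only if at least two views re-project it within 0.5 pixel of the observed position) could be checked along the following lines; the pinhole camera model, the dictionary layout and the `keep_point` name are illustrative assumptions rather than the application's depth-estimation software.

```python
import numpy as np

def keep_point(point_3d, cameras, max_px_dist=0.5, min_views=2):
    """Keep a candidate 3D point only if at least `min_views` cameras re-project
    it within `max_px_dist` pixels of the pixel that generated it.

    `cameras` is a list of dicts with an illustrative pinhole model:
      K (3x3 intrinsics), R (3x3 rotation), t (3,) translation,
      and 'observed' -- the pixel at which that camera located the point.
    """
    consistent = 0
    for cam in cameras:
        p_cam = cam["R"] @ point_3d + cam["t"]          # world -> camera coordinates
        if p_cam[2] <= 0:                               # behind the camera: ignore
            continue
        uvw = cam["K"] @ p_cam
        reprojected = uvw[:2] / uvw[2]                  # perspective division -> pixel
        if np.linalg.norm(reprojected - cam["observed"]) <= max_px_dist:
            consistent += 1
        if consistent >= min_views:
            return True
    return False

# Toy check: two identical cameras observing the point exactly where it projects.
K = np.array([[1000.0, 0.0, 960.0], [0.0, 1000.0, 540.0], [0.0, 0.0, 1.0]])
point = np.array([0.1, 0.0, 2.0])
cam = {"K": K, "R": np.eye(3), "t": np.zeros(3), "observed": np.array([1010.0, 540.0])}
print(keep_point(point, [cam, dict(cam)]))   # True
```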

[0055] A further embodiment relates to an off-centered and z-roll optimized 4x4 camera array. To counterbalance the loss of detail in the central part of the point cloud when some cameras are rotated, while maintaining the extended coverage of the captured scene and keeping the global size of the camera array constant, some central cameras are off-centered. The intuition behind this idea is that the redundancy between cameras is still mostly concentrated in the center of the camera array. An example of an off-centered camera structure is depicted below. In this example, the four central cameras are shifted by 3 centimeters horizontally and 2 centimeters vertically. The horizontal direction is prioritized to compensate for the weakened redundancy in the central portion, but the vertical direction is also slightly off-centered to increase the density of 3D points in the upper and lower parts of the point cloud.
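One way the off-centering just described could be expressed is sketched below, assuming the nominal 7cm pitch from the earlier 4x4 example; the text gives only the 3cm horizontal and 2cm vertical magnitudes, so the outward direction of the shift and the coordinate conventions here are assumptions.

```python
def camera_position(row, col, pitch_m=0.07, shift_x_m=0.03, shift_y_m=0.02,
                    n_rows=4, n_cols=4):
    """Nominal grid position (x, y) of camera (row, col); the four central cameras
    of a 4x4 rig are additionally offset by (shift_x_m, shift_y_m).  The outward
    direction of the offset is an assumption -- the text only gives magnitudes."""
    x, y = col * pitch_m, row * pitch_m
    if row in (1, 2) and col in (1, 2):                       # the four central cameras
        centre_x = (n_cols - 1) * pitch_m / 2.0
        centre_y = (n_rows - 1) * pitch_m / 2.0
        x += shift_x_m if x > centre_x else -shift_x_m        # horizontal shift (prioritised)
        y += shift_y_m if y > centre_y else -shift_y_m        # smaller vertical shift
    return round(x, 3), round(y, 3)

print([camera_position(1, c) for c in range(4)])
# [(0.0, 0.07), (0.04, 0.05), (0.17, 0.05), (0.21, 0.07)]
```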

[0056] Figure 22 provides a chart illustrating the spacing comparisons between the differently arranged embodiments. Figure 23 provides an off-centered and z-roll configuration. The shift of the four cameras results in a redundancy that is spread horizontally and vertically. This new camera configuration can be compared to the original and to the z-roll only configurations by viewing the simulations of redundancy between cameras. The gain seems minor, but it is more noticeable in the final point cloud.

[0057] As before, for comparison purposes, Figures 24 to 27 provide the rendering redundancies at different distances: Figure 24 provides the off-centered redundancy at 0.5m; Figure 25 at 1m; Figure 26 at 2m and Figure 27 at 3m. Applying these modifications to the camera array model in the 3D rendering application, the RGB images shown in Figure 28 emerge, and once the depth estimation is also applied, the illustration of Figure 29 is provided.

[0058] In Figure 30, the final point cloud corresponding to this off-centered + z-roll configuration is produced using the RGB and depth images. As can be seen, there is much improvement. With this latest configuration, the benefits of an extended field of view are obtained while at the same time the details in the central portion of the image are preserved. This camera array is built upon the same camera elements and keeps the same external dimensions as the classic 4x4 camera array characterized earlier in this description, while having better image acquisition properties. In the output depth images, there are observable improvements around the borders.

[0059] The embodiment above that provides for the z-roll rotated and off-centered configurations can be applicable to all sorts of camera arrays, including smartphones or other smart devices. In addition, there is no need to rotate a camera array to capture a portrait, and new possibilities can be explored in recording a video scene that starts with a vertical subject (portrait) and ends with a rather horizontal environment (landscape). One additional advantage is that the camera array’s outer dimensions remain constant. The image redundancy can be spread to get a more constant depth estimation and an extended virtual field of view.

[0060] Figure 31 is a flowchart illustration of one embodiment that is used to generate an image from one or more cameras in a camera array matrix. In step 3100, the redundancy between images to be captured by the camera matrix is reduced by rotating the direction of any cameras disposed in the upper and lower rows of said matrix by a 90 degree angle. When any degrees are discussed, it should be noted that they are meant to convey approximate measures, so a 90 degree angle is deemed to be substantially 90 degrees, and the like. In step 3200, any cameras disposed at the corners of the matrix are rotated by an angle that is less than 90 degrees. In step 3300, the locations of central cameras disposed in the matrix are determined and their degree of redundancy is analyzed. Based on this determination and analysis, in step 3400, the central cameras are then rotated and disposed so as to be horizontally prioritized to compensate for any image redundancy and vertically disposed to increase the density of the upper and lower parts of the camera matrix.

[0061] Figure 32 is a diagram illustrating a hardware configuration of a device in which one or more embodiments of the present disclosure can be implemented. Although it is depicted in Figure 32 that a device 5 includes a camera 1, such as a lightfield camera 1 (or 1A, which will be explained in a later section of this description), a lightfield camera 1 can be configured separately from a device 5. A device 5 can be any device such as, for example, desktop or personal computers, smartphones, smartwatches, tablets, mobile phones, portable/personal digital assistants ("PDAs"), and other devices that facilitate communication of information between end-users and a lightfield camera 1. A lightfield camera 1 can also have a configuration of a device 5 inside.

[0062] The device 5 comprises the following elements, which are connected to each other by a bus 54 of addresses and data that also transports a clock signal: a processor 51 (or CPU), a non-volatile memory of ROM (Read Only Memory) type 52, a Random Access Memory or RAM 53, a radio interface (RX) 56, an interface 55 (TX) adapted for the transmission of data, a lightfield camera 1, an MMI (Man Machine Interface) 58 (I/F appli) adapted for displaying information for a user and/or inputting data or parameters.

It is noted that the term "register" or "store" used in the description of memories 52 and 53 designates, in each of the memories mentioned, a memory zone of low capacity as well as a memory zone of large capacity (enabling a whole program to be stored in such memories, or all or part of the data representing data received and decoded, for such memories).

[0063] The ROM 52 comprises a program "prog". The algorithms implementing the steps of the method specific to the present disclosure and described below are stored in the ROM 52 memory and are associated with the device 5 implementing these steps. When powered up, the processor 51 loads and runs the instructions of these algorithms. The RAM 53 notably comprises, in a register and/or memory, the operating program of the processor 51 responsible for switching on the device 5, reception parameters (for example parameters for modulation, encoding, MIMO (Multiple Input Multiple Output), recurrence of frames), transmission parameters (for example parameters for modulation, encoding, MIMO, recurrence of frames), incoming data corresponding to the data received and decoded by the radio interface 56, decoded data formed to be transmitted at the interface to the application 58, and parameters of the primary lens 10 and/or information representative of the centers of the micro-images formed by the microlenses of the microlens array. Other structures of the device 5 than those described with respect to Figure 32 are compatible with the present disclosure. In particular, according to various alternative embodiments, the device 5 may be implemented according to a purely hardware realization, for example in the form of a dedicated component (for example an ASIC (Application Specific Integrated Circuit), FPGA (Field-Programmable Gate Array) or VLSI (Very Large Scale Integration)) or of several electronic components embedded in an apparatus, or even in the form of a mix of hardware elements and software elements. The radio interface 56 and the interface 55 are adapted for the reception and transmission of signals according to one or several telecommunication standards such as IEEE 802.11 (Wi-Fi), standards compliant with the IMT-2000 specifications (also called 3G), 3GPP LTE (also called 4G), and IEEE 802.15.1 (also called Bluetooth). According to an alternative embodiment, the device 5 does not include any ROM but only RAM, where the algorithms implementing the steps of the method specific to the present disclosure are stored in the RAM.

[0064] Some processes implemented by embodiments may be computer implemented. Accordingly, such elements may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a "circuit", "module" or "system". Furthermore, such elements may take the form of a computer program product embodied in any tangible medium of expression having computer usable program code embodied in the medium.

[0065] Since at least some elements can be implemented in software, the present invention can be embodied as computer readable code for provision to a programmable apparatus on any suitable carrier medium. A tangible carrier medium may comprise a storage medium such as a floppy disk, a CD-ROM, a hard disk drive, a magnetic tape device or a solid state memory device and the like. A transient carrier medium may include a signal such as an electrical signal, an electronic signal, an optical signal, an acoustic signal, a magnetic signal or an electromagnetic signal, e.g. a microwave or RF signal.

[0066] Although the present invention has been described hereinabove with reference to specific embodiments, the present invention is not limited to the specific embodiments, and modifications will be apparent to a person skilled in the art which lie within the scope of the present invention.

[0067] Many further modifications and variations will suggest themselves to those versed in the art upon making reference to the foregoing illustrative embodiments, which are given by way of example only and which are not intended to limit the scope of the invention, that being determined solely by the appended claims. In particular, the different features from different embodiments may be interchanged, where appropriate.