
Title:
IMAGE STABILIZING DEVICE OF THE MEMS TYPE, IN PARTICULAR FOR IMAGE ACQUISITION USING A DIGITAL-IMAGE SENSOR
Document Type and Number:
WIPO Patent Application WO/2007/031569
Kind Code:
A2
Abstract:
A device for stabilizing images acquired by a digital-image sensor (5) includes a motion-sensing device (15, 16, 17, 18), for detecting quantities (ΔφP, αY, ΔφY) correlated to pitch and yaw movements of the digital-image sensor (5), and a processing unit (14), connectable to the digital-image sensor (5) for receiving a first image signal (IMG) and configured for extracting a second image signal (IMG') from the first image signal (IMG) on the basis of the quantities (ΔφP, αY, ΔφY) detected by the motion-sensing device (15, 16, 17, 18). The motion-sensing device (15, 16, 17, 18) includes a first accelerometer (15) and a second accelerometer (16).

Inventors:
PASOLINI FABIO (IT)
FONTANELLA LUCA (IT)
Application Number:
PCT/EP2006/066387
Publication Date:
March 22, 2007
Filing Date:
September 14, 2006
Assignee:
ST MICROELECTRONICS SRL (IT)
PASOLINI FABIO (IT)
FONTANELLA LUCA (IT)
International Classes:
H04N5/232
Domestic Patent References:
WO1990009077A1 (1990-08-09)
Foreign References:
DE19942900A1 (2000-05-04)
US4448510A (1984-05-15)
EP0773443A1 (1997-05-14)
US20050179784A1 (2005-08-18)
Other References:
Lacquet, B. M. et al.: "A CMOS compatible micromachined yaw-rate sensor", Industrial Electronics, 1998, Proceedings, ISIE '98, IEEE International Symposium on, Pretoria, South Africa, 7-10 July 1998, IEEE, New York, NY, USA, vol. 1, 7 July 1998, pages 327-329, XP010296014, ISBN 0-7803-4756-0
Attorney, Agent or Firm:
JORIO, Paolo et al. (Via Viotti 9, Torino, IT)
Claims:

C L A I M S

1. A stabilizer device of images acquired by a digital-image sensor (5) comprising: motion-sensing means (15, 16, 17, 18), for detecting quantities (ΔφP, αY, ΔφY) correlated to pitch and yaw movements of said digital-image sensor (5); and a processing unit (14), connectable to said digital-image sensor (5) for receiving a first image signal (IMG) and configured for extracting a second image signal (IMG') from said first image signal (IMG) on the basis of said quantities (ΔφP, αY, ΔφY); characterized in that said motion-sensing means (15, 16, 17, 18) comprise a first accelerometer (15) and a second accelerometer (16).

2. The device according to Claim 1, wherein said first accelerometer (15) and said second accelerometer (16) are oscillating-beam accelerometers of the MEMS type.

3. The device according to Claim 2, wherein said first accelerometer (15) comprises two respective beams (20) constrained so as to oscillate about respective first rotation axes (R1), parallel to one another and staggered with respect to centroids (G) of said beams (20) of said first accelerometer (15).

4. The device according to Claim 3, wherein said second accelerometer (16) comprises two respective beams (20) constrained so as to oscillate about respective second rotation axes (R2) perpendicular to said first rotation axes (R1) and staggered with respect to centroids (G) of said beams (20) of said second accelerometer (16).

5. The device according to Claim 4, wherein each of said beams (20) of said first accelerometer (15) and of said second accelerometer (16) is capacitively coupled to a respective first electrode (25) and to a respective second electrode (26), so that oscillations of said beams (20) cause differential capacitive unbalancing (δC1A, δC1B, δC1C, δC1D, δC2A, δC2B, δC2C, δC2D) between said beams (20), the respective said first electrode (25) and the respective said second electrode (26).

6. The device according to Claim 5, comprising a third accelerometer (17), of a biaxial type, having a first detection axis (X), parallel to said first rotation axes (R1) of the beams (20) of said first accelerometer (15), and a second detection axis (Y) parallel to said second rotation axes (R2) of the beams (20) of said second accelerometer (16).

7. The device according to Claim 6, wherein said first accelerometer (15), said second accelerometer (16), and said third accelerometer (17) are integrated in a single semiconductor chip (19).

8. The device according to any one of Claims 5-7, wherein said motion-sensing means (15, 16, 17, 18) comprise a pre-processing stage (18), which is connected to said first accelerometer (15) and to said second accelerometer (16) for receiving sensing signals representing said differential capacitive unbalancing (δC1A, δC1B, δC1C, δC1D, δC2A, δC2B, δC2C, δC2D) and is configured for determining said quantities (ΔφP, αY, ΔφY) on the basis of said differential capacitive unbalancing (δC1A, δC1B, δC1C, δC1D, δC2A, δC2B, δC2C, δC2D).

9. The device according to Claim 8, wherein said pre-processing stage (18) is configured for determining said quantities (ΔφP, αY, ΔφY) on the basis of said differential capacitive unbalancing (δC1A, δC1B, δC1C, δC1D, δC2A, δC2B, δC2C, δC2D) selectively using one of a first mode and a second mode, on the basis of an orientation signal (SXY) supplied by said third accelerometer (17), wherein, in said first mode, said quantities (ΔφP, αY, ΔφY) are determined so as to reject angular accelerations (α) acting on said first accelerometer (15) and linear accelerations (AL) acting on said second accelerometer (16), and wherein, in said second mode, said quantities (ΔφP, αY, ΔφY) are determined so as to reject linear accelerations (AL) acting on said first accelerometer (15) and angular accelerations (α) acting on said second accelerometer (16).

10. The device according to Claim 9, wherein, in said first mode, a first quantity ΔφP and a second quantity αY are determined, respectively, according to the equations

ΔφP = K1[(δC1A - δC1B) + (δC1D - δC1C)]   (1)

αY = K2[(δC2A - δC2B) - (δC2D - δC2C)]   (2)

where K1 and K2 are two constants and, moreover,

δC1A and δC1D are the capacitive variations between the beams (20) of said first accelerometer (15) and the first electrodes (25) of said first accelerometer (15); δC1B and δC1C are the capacitive variations between the beams (20) of said first accelerometer (15) and the second electrodes (26) of said first accelerometer (15); δC2A and δC2D are the capacitive variations between the beams (20) of said second accelerometer (16) and the first electrodes (25) of said second accelerometer (16); and δC2B and δC2C are the capacitive variations between the beams (20) of said second accelerometer (16) and the second electrodes (26) of said second accelerometer (16).

11. The device according to Claim 10, wherein, in said second mode, said first quantity ΔφP and said second quantity αY are determined, respectively, according to the following equations:

ΔφP = K2[(δC2A - δC2B) + (δC2D - δC2C)]   (3)

αY = K1[(δC1A - δC1B) - (δC1D - δC1C)]   (4)

12. The device according to any one of Claims 9-11, wherein said orientation signal (SXY) has a first value when said third accelerometer (17) is oriented so that the force of gravity acts prevalently along said first detection axis (X), and a second value when said third accelerometer (17) is oriented so that the force of gravity acts prevalently along said second detection axis (Y).

13. The device according to any one of the preceding claims, wherein said quantities (ΔφP, αY, ΔφY) comprise a variation of pitch angle (ΔφP), a yaw acceleration (αY) and a variation of yaw angle (ΔφY).

14. An apparatus for the acquisition of digital images, comprising a digital-image sensor (5) and an image stabilizer device (10) according to any one of the preceding claims.

Description:

"IMAGE STABILIZING DEVICE OF THE MEMS TYPE, IN PARTICULAR FOR IMAGE ACQUISITION USING A DIGITAL-IMAGE SENSOR"

TECHNICAL FIELD

The present invention relates to an image stabilizing device, in particular for image acquisition using a digital-image sensor.

BACKGROUND ART

As is known, shots taken using non-professional portable apparatuses, such as camcorders or digital cameras, either stand-alone or incorporated in telephone apparatuses, suffer from flickering caused by small movements of the operator. In particular, portable apparatuses are supported only by the hands of the operator, and the lack of a firm point of rest makes it practically impossible to keep the framing stable. The resulting image is hence unstable and consequently unpleasant to the eye. The same problem also affects cameras during the acquisition of single images: a movement can render the acquisition imprecise, especially for long exposure times.

The use of image stabilizers has thus been proposed. By appropriate processing, in digital apparatuses it is possible to "cut out" a portion (hereinafter referred to as "usable frame") of the image effectively acquired (hereinafter referred to as "complete image"). Only the usable frame is made available for display, whereas an outer frame is eliminated from the complete image. Stabilizer devices enable estimation of the movements of the equipment and recalculation of the co-ordinates of the usable frame, so as to compensate for the movements and render the image stable.
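By way of illustration only (the array names and offset convention below are assumptions, not taken from the patent), this cropping principle can be sketched in a few lines of Python:

```python
import numpy as np

def extract_usable_frame(complete_image, dx, dy, margin):
    """Cut a usable frame out of the complete image.

    complete_image: array holding the full image acquired by the sensor.
    dx, dy:         compensating offsets, in pixels, of the usable frame
                    with respect to its nominal, centred position.
    margin:         width, in pixels, of the outer frame that is discarded.
    """
    h, w = complete_image.shape[:2]
    # Nominal top-left corner of the usable frame, shifted by the offsets
    # and kept inside the complete image.
    top = int(np.clip(margin + dy, 0, 2 * margin))
    left = int(np.clip(margin + dx, 0, 2 * margin))
    return complete_image[top:top + h - 2 * margin,
                          left:left + w - 2 * margin]
```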

Image stabilizers of a first type are based upon the content of the images to be stabilized. After identification of reference elements in a scene, the displacement of the apparatus and the position of the usable frame are estimated by comparing the positions of the reference elements in successive frames. Systems of this type are not satisfactory when the framed scene contains elements that are actually moving, such as, for example, a person who is walking.

According to a different solution, image stabilizers include gyroscopes, which measure the angular velocity of the apparatus with respect to axes transverse to an optical axis thereof (normally, two axes perpendicular to one another and to the optical axis). Rotations about these axes in fact cause the greatest disturbance. By temporal integration of the data detected by the gyroscopes, it is possible to trace back to the instantaneous angular position of the optical axis of the apparatus and, from there, to the position of the centre of the usable frame. The image can then be corrected accordingly. In this way, the stabilization of the image is independent of its content. Gyroscopes, however, absorb a lot of power, because they use a mass that must be kept constantly in oscillatory or rotational motion. Their use is hence disadvantageous in battery-supplied devices, because it markedly limits their autonomy.
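Purely to make this prior-art principle concrete, the temporal integration mentioned above can be sketched as follows (hypothetical names, simple rectangular integration; not part of the patent text):

```python
def angular_position(rate_samples, dt, phi0=0.0):
    """Integrate gyroscope angular-rate samples (rad/s), taken every dt
    seconds, to trace back to the instantaneous angular position (rad)
    of the optical axis."""
    positions = []
    phi = phi0
    for omega in rate_samples:
        phi += omega * dt          # rectangular (Euler) integration
        positions.append(phi)
    return positions
```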

DISCLOSURE OF INVENTION

The aim of the present invention is to provide an image stabilizer device that is free from the above-mentioned drawbacks.

According to the present invention, a stabilizer device of images acquired by a digital-image sensor device is provided, as defined in Claim 1.

BRIEF DESCRIPTION OF THE DRAWINGS

For a better understanding of the invention, an embodiment thereof is now described purely by way of non-limiting example and with reference to the attached drawings, wherein:

- Figure 1 is a right side view of a digital camera in a first operating configuration;

- Figure 2 is a front view of the camera of Figure 1;

- Figure 3 is a simplified block diagram of the camera of Figure 1;

- Figures 4a-4c are front views of an image sensor incorporated in the camera of Figures 1 and 2, in different operating configurations;

- Figure 5 is a block diagram of an image stabilizer device according to the present invention, incorporated in the camera of Figure 1;

- Figure 6a is a right view of the camera of Figure 1 and shows a movement of pitch in the first operating configuration;

- Figure 6b is a bottom view of the camera of Figure 1 and shows a movement of yaw in the operating configuration of Figure 6a;

- Figure 6c is a right view of the camera of Figure 1 and shows a movement of pitch in a second operating configuration, in which the camera is rotated substantially through 90° about a horizontal axis with respect to the first operating configuration;

- Figure 6d is a bottom view of the camera in the second operating configuration of Figure 6c and shows a movement of yaw in the second operating configuration;

- Figure 7 is a cross-sectional view through a first portion of a semiconductor chip incorporating the image stabilizer device of Figure 5, taken along line VII-VII of Figure 8;

- Figure 8 is a front view of the first portion of the semiconductor chip of Figure 7;

- Figures 9 and 10 schematically show the responses of a component incorporated in the image stabilizer device of Figure 5 to linear and, respectively, angular accelerations;

- Figure 11 shows a cross-section through a second portion of the semiconductor chip of Figures 7 and 8, taken along line XI-XI of Figure 8;

- Figure 12 is a front view of the second portion of the semiconductor chip of Figure 11;

- Figure 13 is a front view of the semiconductor chip of Figures 7, 8, 10 and 11, in the first operating configuration;

- Figure 14 is a more detailed block diagram of a first portion of the image stabilizer device of Figure 5;

- Figure 15 is a front view of the semiconductor chip of Figures 7, 8, 10, 11 and 13, in the second operating configuration; and

- Figure 16 is a more detailed block diagram of a second portion of the image stabilizer device of Figure 5.

BEST MODE FOR CARRYING OUT THE INVENTION

With reference to Figures 1-3, a digital camera 1, adapted for shooting digital films, comprises a body 2, a lens 3, a digital-image sensor 5, a non-volatile storage unit 6, a display 8, and an image stabilizer device 10.

The body 2 comprises a base 2a, normally facing downwards, and houses inside it the image sensor 5, the storage unit 6, and the image stabilizer device 10.

The image sensor 5 is, for example, a CCD or CMOS sensor and is arranged perpendicular to an optical axis OA of the lens 3. Furthermore, the optical axis OA intercepts the centre of the image sensor 5. Note that a sensitive portion 5a of the image sensor 5 has a rectangular shape (see Figures 4a and 4b) and, during use of the camera 1, is normally arranged in a "landscape" configuration or in a "portrait" configuration. More precisely, in the "landscape" configuration (Figure 4a), the larger sides L1 of the sensitive portion 5a are substantially horizontal and the smaller sides L2 are frequently, but not necessarily, vertical; in the "portrait" configuration (Figure 4b), the smaller sides L2 are substantially horizontal and the larger sides L1 are frequently, but not necessarily, vertical. With reference to the orientation of the larger sides L1 and of the smaller sides L2 of the sensitive portion 5a of the sensor 5, by "yaw" movements (and angles) are meant rotations (and angles) of the optical axis OA about a yaw axis parallel to whichever of the larger sides L1 and the smaller sides L2 are less inclined with respect to the vertical. In particular, in the "landscape" configuration, a yaw movement is a rotation of the optical axis about a yaw axis parallel to the smaller sides L2; in the "portrait" configuration, instead, the yaw axis is parallel to the larger sides L1. By "pitch" movements (and angles) are meant rotations of the optical axis OA about a pitch axis perpendicular to the yaw axis (and to the optical axis OA itself). Consequently, in the "landscape" configuration, the pitch axis is parallel to the larger sides L1, whereas in the "portrait" configuration the pitch axis is parallel to the smaller sides L2.

With reference to Figures 3 and 4c, the stabilizer device 10 receives from the image sensor 5 a first image signal IMG regarding a complete image 11 detected by the image sensor 5 itself, and generates a second image signal IMG' regarding a usable frame 12 obtained from the complete image 11 and stabilized. The second image signal IMG' is supplied to the storage unit 6 and to the display 8.

As illustrated in Figure 5, the stabilizer device 10 comprises a processing unit 14, a first accelerometer 15, a second accelerometer 16, and a third accelerometer 17, all of a microelectromechanical type and preferably integrated in a single semiconductor chip 19 (see also Figures 13 and 15). Furthermore, the stabilizer device 10 includes a pre-processing stage 18, which supplies variations of a pitch angle ΔφP and variations of a yaw angle ΔφY (Figures 6a, 6b, which refer to the "landscape" configuration, and Figures 6c, 6d, which refer to the "portrait" configuration, in which the base 2a of the body 2 faces sideways rather than downwards; for simplicity, Figures 6a-6d show only the sensitive portion 5a of the image sensor 5 and, moreover, in Figure 6d the stabilizer device 10 is not illustrated) on the basis of the signals detected by the first, second, and third accelerometers 15, 16, 17.

The processing unit 14 receives the first image signal IMG and extracts the second image signal IMG' therefrom, using the variations of the pitch angle ΔφP and the variations of the yaw angle ΔφY. In practice, the processing unit 14 is configured for determining displacements of the body 2 and of the optical axis OA on the basis of the variations of the pitch angle ΔφP and of the yaw angle ΔφY, for positioning the usable frame 12 within the complete image 11 so as to compensate for the detected displacements of the body 2 and of the optical axis OA, and for generating the second image signal IMG' on the basis of the portion of the complete image 11 corresponding to the usable frame 12.

As illustrated in Figures 7 and 8, the first accelerometer 15 has a specularly symmetrical oscillating-beam structure. In greater detail, the first accelerometer 15 comprises two beams 20 of semiconductor material, constrained to a substrate 21 of the semiconductor chip 19 by torsional return springs 22, fixed to respective anchorages 23. The torsional springs 22 are shaped so that the beams 20 are free to oscillate about respective first rotation axes R1 in response to external stresses. In particular, the first rotation axes R1 are parallel to one another and to the surface 21a of the substrate 21, and perpendicular to the longitudinal axes L of the beams 20 and to the optical axis OA. The longitudinal axes L of the beams 20 are mutually aligned at rest. The first rotation axes R1 intercept the longitudinal axes L at points staggered with respect to the centroids G of the respective beams 20, dividing each beam into a larger portion 20a, containing the respective centroid G, and a smaller portion 20b.

At rest, the beams 20 are arranged in a specularly symmetrical way with respect to one another. In the embodiment of the invention described herein, the beams 20 have their respective smaller portions 20b facing one another, whereas the larger portions 20a project outwards in opposite directions. In the absence of external stresses, moreover, the torsional springs 22 tend to keep the beams 20 parallel to the surface 21a of the substrate 21.

A first electrode 25 and a second electrode 26 are associated to each beam 20, and are housed in the substrate 21 (insulated therefrom) in positions that are symmetrical with respect to the respective first rotation axes R1. The larger portion 20a and the smaller portion 20b of each beam 20 are capacitively coupled with the respective first electrode 25 and the respective second electrode 26 and form first and second capacitors 27, 28, having variable capacitance. In this way, a rotation of a beam 20 about the respective first rotation axis R1 causes a corresponding differential capacitive unbalancing between the first capacitor 27 and the second capacitor 28 associated thereto. In Figures 7 and 8, the capacitances of the first capacitors 27 are designated by C1A and C1D, respectively, whereas the capacitances of the second capacitors 28 are designated by C1B and C1C, respectively.

In the presence of linear accelerations AL having a component perpendicular to the surface 21a of the substrate 21 (in practice, parallel to the optical axis OA), the two beams 20 are subject to rotations of equal amplitude, one in a clockwise direction and one in a counterclockwise direction (Figure 9). Consequently, the capacitances of both of the first capacitors 27 increase (decrease) by an amount +δC (-δC), whereas the capacitances of both of the second capacitors 28 decrease (increase) by an amount -δC (+δC). The variations are hence of equal absolute value and of opposite sign. Instead, when the semiconductor chip 19 is subjected to a rotational acceleration α, both of the beams 20 undergo rotations in the same direction, whether clockwise or counterclockwise (Figure 10). Consequently, for one of the beams 20, the capacitance of the first capacitor 27 increases by an amount +δC and the capacitance of the second capacitor 28 decreases by an amount -δC, while, on the contrary, for the other beam 20 the capacitance of the first capacitor 27 decreases by the amount -δC, and the capacitance of the second capacitor 28 increases by the amount +δC.
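The behaviour just described is what the combinations used later in Eqs. (1)-(4) exploit: summing the two differential unbalancings of one accelerometer isolates linear accelerations, while subtracting them isolates angular accelerations. A small numerical sketch (hypothetical helper function, unit capacitance variation) illustrates this:

```python
def combine(dC_A, dC_B, dC_C, dC_D):
    """Combine the four capacitance variations of one oscillating-beam
    accelerometer (A/B: first and second capacitor of one beam,
    D/C: first and second capacitor of the other beam)."""
    sum_term = (dC_A - dC_B) + (dC_D - dC_C)    # responds to linear, rejects angular
    diff_term = (dC_A - dC_B) - (dC_D - dC_C)   # responds to angular, rejects linear
    return sum_term, diff_term

dC = 1.0  # arbitrary unit variation

# Linear acceleration (Figure 9): both first capacitors increase,
# both second capacitors decrease.
print(combine(+dC, -dC, -dC, +dC))   # -> (4.0, 0.0)

# Angular acceleration (Figure 10): one beam's first capacitor increases
# while the other beam's first capacitor decreases.
print(combine(+dC, -dC, +dC, -dC))   # -> (0.0, 4.0)
```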

Capacitance variations δC1A, δC1B, δC1C, δC1D are detectable by means of a sense interface 30 having terminals connected to the first electrodes 25, to the second electrodes 26, and to the beams 20 (through the substrate 21, the anchorages 23, and the torsional springs 22, which are made of semiconductor material).

The second accelerometer 16 has a structure identical to that of the first accelerometer 15 and is rotated by 90° with respect thereto, as illustrated in Figures 11 and 12. More precisely, the beams 20 of the second accelerometer 16 are free to oscillate, in response to external stresses, about second rotation axes R2 parallel to one another and perpendicular to the first rotation axes R1 and to the optical axis OA. The second rotation axes R2 also intercept the longitudinal axes L of the respective beams 20 at points staggered with respect to the centroids G. For the second accelerometer 16 (Figure 11), the capacitances of the first capacitors 27 are designated by C2A and C2D, whereas the capacitances of the second capacitors 28 are designated by C2B and C2C; the corresponding capacitance variations are designated by δC2A, δC2B, δC2C, δC2D. The response of the second accelerometer 16 to linear accelerations AL perpendicular to the second rotation axes R2 and to angular accelerations about axes parallel to the second rotation axes R2 is altogether similar to the response of the first accelerometer 15 to linear accelerations AL perpendicular to the first rotation axes R1 and to angular accelerations about axes parallel to the first rotation axes R1 (as represented in Figures 9 and 10).

The semiconductor chip 19 is mounted in the body 2 of the camera 1 so that, in the absence of external stresses, the beams 20 of the first accelerometer 15 and of the second accelerometer 16 are perpendicular to the optical axis OA (Figure 13). Furthermore, when the optical axis OA and the base 2a of the body 2 are horizontal, the first rotation axes R1 of the first accelerometer 15 are horizontal, whereas the second rotation axes R2 of the second accelerometer 16 are vertical.

The third accelerometer 17 (Figure 13) is of a biaxial type with comb-fingered electrodes, as illustrated schematically in Figure 13, and is a low-resolution accelerometer. The third accelerometer 17 has a first detection axis X and a second detection axis Y, both perpendicular to the optical axis OA. Furthermore, the first detection axis X is parallel to the first rotation axes R1 of the beams 20 of the first accelerometer 15, whereas the second detection axis Y is parallel to the second rotation axes R2 of the beams 20 of the second accelerometer 16. In practice, according to how it is oriented, the third accelerometer 17 is able to discriminate along which one of the first detection axis X and the second detection axis Y the force of gravity prevalently acts, and is thus able to provide an indication of how the body 2, the optical axis OA, and the semiconductor chip 19 (whose relative positions are constant) are oriented. An orientation signal SXY, of a logic type, supplied by a sense interface (not illustrated in detail) of the third accelerometer 17, is sent to the pre-processing stage 18 (see Figure 5).
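A minimal sketch of how such a logic orientation signal could be derived from the two measured components (hypothetical function name and return values; thresholding and filtering omitted):

```python
def orientation_signal(a_x, a_y):
    """Derive a logic orientation signal S_XY from the accelerations
    measured along the first detection axis X and the second detection
    axis Y of the third accelerometer.

    Gravity acting prevalently along Y -> "landscape" (first value);
    gravity acting prevalently along X -> "portrait"  (second value).
    """
    return "landscape" if abs(a_y) >= abs(a_x) else "portrait"
```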

Figure 14 shows in greater detail the pre-processing stage 18, which comprises a first computation module 31, a selector module 32, and an integrator module 33. The first computation module 31 is connected to the first accelerometer 15 and to the second accelerometer 16 for receiving sensing signals representing the capacitance variations δC1A, δC1B, δC1C, δC1D, δC2A, δC2B, δC2C, δC2D of the respective first capacitors 27 and second capacitors 28 (see also Figures 7 and 12). The first computation module 31 is moreover configured to calculate the variations of the pitch angle ΔφP and a yaw acceleration αY on the basis of the capacitance variations δC1A, δC1B, δC1C, δC1D, δC2A, δC2B, δC2C, δC2D, selectively according to one of two modes, depending on whether the camera 1 is used in the "landscape" configuration or in the "portrait" configuration. The selection of the calculation mode is made by the selector module 32 on the basis of the orientation signal SXY supplied by the third accelerometer 17.

In practice, when the camera is in the "landscape" use configuration, the force of gravity acts prevalently on the second detection axis Y, and the orientation signal SXY has a first value. In this case, the first calculation mode of the first computation module 31 is selected, in which the first accelerometer 15 is used as an inclinometer for measuring variations of the pitch angle ΔφP, and the second accelerometer 16 is used as a rotational accelerometer for determining the angular accelerations due to the variations of the yaw angle ΔφY (yaw accelerations αY; in this case, the yaw axis is parallel to the second detection axis Y). The calculation is carried out according to the equations:

sin ΔφP ≅ ΔφP = K1[(δC1A - δC1B) + (δC1D - δC1C)]   (1)

αY = K2[(δC2A - δC2B) - (δC2D - δC2C)]   (2)

where K1 and K2 are coefficients of proportionality.

As regards Eq. (1), a pitch movement of the camera 1 in the "landscape" configuration modifies the effect of the force of gravity on the first accelerometer 15 and is equivalent, in practice, to a linear acceleration AL directed perpendicularly to the surface 21a of the substrate 21 (as in the example of Figure 9). Furthermore, the variation of the effect of the force of gravity is proportional to the sine of the variation of the pitch angle ΔφP. For small oscillations, as in the present application, the approximation sin ΔφP ≅ ΔφP is however justified. Alternatively, the first computation module 31 of Figure 14 calculates the arcsine instead of using the approximation. For the first accelerometer 15, Eq. (1) amplifies the capacitive variation due to linear accelerations AL and selectively rejects the effects of angular accelerations α due to rotations (in particular, those following upon variations of the yaw angle ΔφY). With reference to Figure 9, in which the effects of a linear acceleration AL are illustrated, from Eq. (1) we obtain

ΔφP = K1[(δC - (-δC)) + (δC - (-δC))] = 4K1δC

In the case of angular accelerations α, illustrated in Figure 10, we obtain instead:

ΔφP = K1[(δC - (-δC)) + (-δC - δC)] = 0
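As an aside, the small-angle approximation invoked above for Eq. (1) can be checked numerically (illustrative values only):

```python
import math

for phi_deg in (0.5, 1.0, 2.0, 5.0):
    phi = math.radians(phi_deg)
    rel_err = (phi - math.sin(phi)) / phi
    print(f"{phi_deg:>4} deg: sin(phi) = {math.sin(phi):.6f}, "
          f"relative error = {rel_err:.4%}")
# Even at 5 degrees the relative error of sin(phi) ~= phi is only about
# 0.13 %, so treating the linear combination (1) directly as the
# pitch-angle variation is justified for small oscillations.
```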

For the second accelerometer 16, instead, Eq. (2) amplifies the effects of the angular accelerations due to the yaw accelerations αY and selectively rejects the linear accelerations perpendicular to the surface 21a of the substrate 21. Again, with reference to Figure 10, from Eq. (2) we obtain

αY = K2[(δC - (-δC)) - (-δC - δC)] = 4K2δC

whereas, in the case of Figure 9 (effect of linear accelerations AL), we have

αY = K2[(δC - (-δC)) - (δC - (-δC))] = 0

In practice, then, the first accelerometer 15 senses only linear accelerations or forces having a component parallel to the optical axis OA and perpendicular to the second detection axis Y (yaw axis), and is used as an inclinometer for evaluating the variations of the pitch angle ΔφP. The second accelerometer 16 is selectively sensitive to the angular accelerations and is used as a rotational accelerometer for determining the yaw accelerations αY.

When the camera 1 is in the "portrait" configuration, the force of gravity acts prevalently along the first detection axis X of the third accelerometer 17, and the orientation signal SXY has a second value.

In this case, the second calculation mode of the first computation module 31 is selected, in which the second accelerometer 16 is used as an inclinometer for measuring variations of the pitch angle ΔφP, and the first accelerometer 15 is used as a rotational accelerometer for determining the angular accelerations caused by the variations of the yaw angle ΔφY (yaw accelerations αY; in this case, the yaw axis coincides with the first detection axis X). In practice, the second detection axis Y is substantially horizontal, as illustrated in Figure 15. The calculation is carried out according to the following equations:

sin ΔφP ≅ ΔφP = K2[(δC2A - δC2B) + (δC2D - δC2C)]   (3)

αY = K1[(δC1A - δC1B) - (δC1D - δC1C)]   (4)

In practice, the functions performed by the first and second accelerometers 15, 16 are swapped on the basis of the information supplied by the third accelerometer 17. Consequently, the first accelerometer 15 is selectively sensitive to the angular accelerations about the yaw axis (first detection axis X) and rejects the linear accelerations. Instead, the second accelerometer 16 selectively rejects the angular accelerations and reacts to the linear accelerations and to the forces having a component parallel to the optical axis and perpendicular to the first detection axis X.
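Under the assumptions already used above (dictionary-style access to the capacitance variations and symbolic constants K1, K2, all hypothetical), the swap between the two calculation modes can be summarized as follows:

```python
def compute_pitch_and_yaw(dC1, dC2, s_xy, K1, K2):
    """Compute the pitch-angle variation and the yaw acceleration from the
    capacitance variations of the first accelerometer (dC1) and of the
    second accelerometer (dC2), each given as a dict with keys "A"-"D".

    s_xy is the orientation signal: "landscape" selects Eqs. (1)-(2),
    "portrait" selects Eqs. (3)-(4).
    """
    sum1 = (dC1["A"] - dC1["B"]) + (dC1["D"] - dC1["C"])    # accel. 15, linear part
    diff1 = (dC1["A"] - dC1["B"]) - (dC1["D"] - dC1["C"])   # accel. 15, angular part
    sum2 = (dC2["A"] - dC2["B"]) + (dC2["D"] - dC2["C"])    # accel. 16, linear part
    diff2 = (dC2["A"] - dC2["B"]) - (dC2["D"] - dC2["C"])   # accel. 16, angular part

    if s_xy == "landscape":
        delta_phi_P = K1 * sum1    # Eq. (1): accelerometer 15 as inclinometer
        alpha_Y = K2 * diff2       # Eq. (2): accelerometer 16 as rotational accelerometer
    else:
        delta_phi_P = K2 * sum2    # Eq. (3): accelerometer 16 as inclinometer
        alpha_Y = K1 * diff1       # Eq. (4): accelerometer 15 as rotational accelerometer
    return delta_phi_P, alpha_Y
```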

Returning to Figure 14, the values of the yaw acceleration αY determined by the first computation module 31 are supplied to the integrator module 33, which integrates them twice to trace back to the variations of the yaw angle ΔφY.
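A minimal discrete-time sketch of this double integration (hypothetical names, simple Euler integration; in a real implementation filtering and drift compensation would also be needed):

```python
def yaw_angle_variations(alpha_samples, dt):
    """Integrate yaw-acceleration samples (rad/s^2), taken every dt seconds,
    twice: first to the yaw angular rate, then to the per-sample variation
    of the yaw angle."""
    omega = 0.0               # yaw angular rate (rad/s)
    variations = []
    for alpha in alpha_samples:
        omega += alpha * dt            # first integration: rate
        variations.append(omega * dt)  # second integration: angle variation
    return variations
```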

Figure 16 shows, in greater detail, the processing unit 14, which comprises a second computation module 35 and an image-processing module 36. The second computation module 35 receives from the pre-processing stage 18 the variations of the pitch angle ΔφP and the variations of the yaw angle ΔφY and accordingly calculates compensated co-ordinates Xc, Yc of the usable frame 12 (Figure 4a), so as to compensate for the pitch and yaw movements and stabilize the corresponding image. Stabilization is carried out according to criteria that are in themselves known.
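One plausible way of turning the angle variations into compensated co-ordinates is sketched below; the focal length, pixel pitch and sign convention are illustrative assumptions, since the patent only states that the compensation follows criteria in themselves known:

```python
def compensated_coordinates(x_c, y_c, delta_phi_P, delta_phi_Y,
                            focal_length_mm, pixel_pitch_mm):
    """Shift the usable-frame co-ordinates so as to compensate a pitch-angle
    variation and a yaw-angle variation (small-angle approximation)."""
    pixels_per_radian = focal_length_mm / pixel_pitch_mm
    # A yaw rotation displaces the scene horizontally on the sensor,
    # a pitch rotation displaces it vertically.
    x_c -= delta_phi_Y * pixels_per_radian
    y_c -= delta_phi_P * pixels_per_radian
    return x_c, y_c
```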

The image-processing module 36 receives the first image signal IMG from the image sensor 5, and the compensated co-ordinates Xc, Yc of the usable frame 12 from the second computation module 35. On the basis of these compensated co-ordinates Xc, Yc, the image-processing module 36 extracts the usable frame 12 from the complete image 11 (Figures 4a and 4b) and generates the stabilized second image signal IMG'.

The stabilizer device according to the invention is mainly advantageous because accelerometers are used. Image stabilization can then be performed on the basis of the detected movement, rather than of the content of the image itself, and, moreover, power absorption is minimal and in any case much lower than the consumption of gyroscope-based stabilizer devices. Consequently, autonomy is also improved, and the stabilizer device according to the invention is particularly suited for integration in appliances for which power absorption is a critical factor, such as, for example, cellphones equipped with a camera. The stabilizer device described is moreover advantageous because the accelerometers used are simple and robust and, moreover, can be integrated in a single semiconductor chip. This feature, too, renders the stabilizer device suitable for incorporation in cellphones and other appliances of small dimensions.

Finally, it is evident that modifications and variations can be made to the stabilizer device described herein, without departing from the scope of the present invention, as defined in the annexed claims. In particular, instead of oscillating-beam accelerometers, rotational accelerometers or linear accelerometers with comb-fingered electrodes may be used. In the first case, two rotational accelerometers with rotation axes perpendicular to one another and to the optical axis are sufficient. In the second case, two pairs of linear accelerometers with comb-fingered electrodes are necessary, arranged so as to differentially react to the accelerations directed along two axes perpendicular to one another and to the optical axis.