

Title:
PASSIVE-OPTOELECTRONIC RANGEFINDING
Document Type and Number:
WIPO Patent Application WO/1991/004458
Kind Code:
A1
Abstract:
A rangefinder has a linear array of pairs of photo-sensitive devices (40) mounted as two rows of photo-sensitive devices, viewing the same scene. Each photo-sensitive device is responsive to the intensity of illumination of a small patch of the scene. The outputs from the photo-sensitive devices of the first row are weighted by a first angular sensitivity function g(α). The outputs from the photo-sensitive devices of the second row are weighted by a second angular sensitivity function g'(α), which is the angular spatial derivative of the function g(α). A spatial function h'(T), which is a translational function, is applied to the weighted outputs of the photo-sensitive devices of the first row, and a second spatial function h(T), which is the integral of the function h'(T), is applied to the weighted outputs of the photo-sensitive devices of the second row. The doubly weighted outputs of the photo-sensitive devices of each row are summed, and the ratio of the two summed, doubly weighted output signals is obtained using a comparator (56). This ratio signal is indicative of the range or proximity of an object in the scene. In a rangefinder for blind persons, each photo-sensitive device (40) is a four-quadrant photodiode, the weighting functions are applied using banks of resistors (52), and the output signal from the comparator is converted into a frequency signal which drives an audio signal generator (58).

Inventors:
STANGE FREIDRICH GERT (AU)
SRINIVASAN MANDYAM VEERAMBUDI (AU)
DALCZYNSKI JAN (AU)
Application Number:
PCT/AU1990/000423
Publication Date:
April 04, 1991
Filing Date:
September 14, 1990
Assignee:
UNIV AUSTRALIAN (AU)
International Classes:
G01S11/12; (IPC1-7): A61F9/08; G01C3/00; G01S11/12
Foreign References:
DE3743696A11989-06-29
EP0069938A11983-01-19
FR2347032A11977-11-04
US3945023A1976-03-16
Other References:
PATENT ABSTRACTS OF JAPAN, P-219, page 93; & JP,A,58 095 206 (FUJI DENKI SEIZO K.K.), 6 June 1983.
PATENT ABSTRACTS OF JAPAN, P-237, page 44; & JP,A,58 142 215 (CANON K.K.), 24 August 1983.
Attorney, Agent or Firm:
Duncan, Alan David (1 Little Collins Street Melbourne, VIC 3000, AU)
Claims:
CLAIMS
1. A method of generating a signal indicative of the distance of an object from an observation position, said method being characterised by the steps: (a) providing at said observation position a linear array of a plurality of pairs of photosensitive devices, each pair of photosensitive devices comprising a first photosensitive device and a second photosensitive device; each photosensitive device having an output which is proportional to the intensity of illumination of the device; (b) mounting said linear array of photosensitive devices so that: (i) the first photosensitive devices of the linear array form a first row of photosensitive devices; and the second photosensitive devices of the linear array form a second row of photosensitive devices; (ii) the first and second photosensitive devices of any pair within the linear array are responsive to the optical signals from the same small patch of a scene containing said object; and (iii) adjacent pairs of photosensitive detectors in the linear array are responsive to adjacent small patches of the scene; (c) applying a first angular sensitivity weighting function to each photosensitive device in said first row; (d) applying a second angular sensitivity weighting function to each photosensitive device in said second row, said second angular sensitivity weighting function being the angular spatial derivative of the first angular sensitivity weighting function; (e) applying a first spatial weighting function to the outputs of the photosensitive devices of said first row; (f) applying a second spatial weighting function to the outputs of the photosensitive devices of said second row, the second spatial weighting function being the integral of said first spatial weighting function; (g) adding the output signals of the photosensitive devices of said first row to obtain a first weighted sum; (h) adding the output signals of the photosensitive devices of said second row to obtain a second weighted sum; and (i) producing a ratio signal indicative of the ratio of the second weighted sum to the first weighted sum, said ratio signal providing an indication of the range of an object within said scene.
2. A method as defined in claim 1, in which each of said photosensitive devices (40) comprises a cluster of photodiodes or charge coupled devices.
3. A method as defined in claim 1, in which each of said photosensitive devices (40) is a four-quadrant photodiode.
4. A method as defined in any preceding claim, further characterised by the step of converting the ratio signal into a tone signal by feeding the ratio signal to a voltage-to-frequency converter (57), the output of which is connected to an audio signal generator (58).
5. A method as defined in claim 4, including the step of switching off said tone signal when the weighted sum which is used as the denominator in producing said ratio signal has a value which is lower than a predetermined minimum value.
6. Apparatus for providing an indication of the range of an object within a scene, said apparatus comprising: (a) a linear array of a plurality of pairs of photosensitive devices, each pair of photosensitive devices comprising a first photosensitive device closely adjacent to a second photosensitive device, each photosensitive device having an output which is proportional to the intensity of illumination of the device; the pairs of devices being so mounted that the first photosensitive devices constitute a first row of photosensitive devices and the second photosensitive devices constitute a second row of photosensitive devices; the first and second photosensitive devices of any pair of devices being responsive to the optical signals from the same small patch of said scene; the pairs of photosensitive detectors within the linear array being responsive to adjacent small patches of said scene; (b) respective first weighting means connected to the outputs of each photosensitive device of said first row, each of said first weighting means applying a first angular sensitivity weighting function to the output of its associated photosensitive device; (c) respective second weighting means connected to the outputs of each photosensitive device of said second row, each of said second weighting means applying a second angular sensitivity weighting function to the output of its associated photosensitive device, the second angular weighting function being the angular spatial derivative of the first angular sensitivity weighting function; (d) third weighting means applying a first spatial weighting function to the outputs of the photosensitive devices of said first row; (e) fourth weighting means applying a second spatial weighting function to the outputs of the photosensitive devices of said second row, said second spatial weighting function being the integral of said first spatial weighting function; (f) first adding means for adding the outputs of the photosensitive devices of said first row, to obtain a first weighted sum signal; (g) second adding means for adding the outputs of the photosensitive devices of said second row, to obtain a second weighted sum signal; and (h) comparator means connected to the outputs of said first and second adding means, for generating a ratio signal indicative of the ratio of said second weighted sum signal to said first weighted sum signal, said ratio signal being indicative of the range of said object from said apparatus.
7. Apparatus as defined in claim 6, in which each of said photosensitive devices (40) comprises a cluster of photodiodes or charge coupled devices.
8. Apparatus as defined in claim 7, in which each of said photosensitive devices is a four-quadrant photodiode.
9. Apparatus as defined in claim 6, claim 7 or claim 8, in which a respective lens (43) is interposed between each photosensitive device (40) and said scene, each photosensitive device being located in the focal plane of its associated lens.
10. Apparatus as defined in any one of claims 6 to 9, in which each pair of photosensitive devices is created by providing a single photosensitive device, the output signal of the single photosensitive device being connected, independently, to said first weighting means and said second weighting means.
11. Apparatus as defined in any one of claims 6 to 10, in which each of said first, second, third and fourth weighting means comprises a respective bank of resistors, the values of the resistors (52) in each bank of resistors being selected to produce a desired weighting function.
12. Apparatus as defined in claim 11, including a respective preamplifier (51) interposed between each output of each of said photosensitive devices (40) and a respective resistor (52) of said first and second weighting means.
13. Apparatus as defined in any one of claims 6 to 12, in which said first adding means and said second adding means are each a summing differential amplifier.
14. Apparatus as defined in any one of claims 6 to 13, in which said comparator means (56) is an analogue divider.
15. Apparatus as defined in any one of claims 6 to 14, including a voltage-to-frequency converter (57) receiving the ratio signal generated by said comparator means, the output of said voltage-to-frequency converter being input to an audio signal generator (58).
16. Apparatus as defined in claim 15 including disconnection means (59) responsive to the output of whichever of said first and second adding means (55) generates the denominator signal for said comparator means (56), said disconnection means being connected to deactivate said audio signal generator (58) when said denominator signal has a value which is lower than a predetermined minimum value.
17. A method of generating a signal indicative of the distance of an object from an observation position, substantially as hereinbefore described with reference to the accompanying drawings.
18. Apparatus for providing an indication of the range of an object within a scene, substantially as hereinbefore described with reference to the accompanying drawings.
Description:
PASSIVE OPTOELECTRONIC RANGEFINDING

Technical Field

This invention concerns optoelectronic rangefinders. More particularly, it concerns a method and apparatus for measuring the range (distance) of an object which utilises passive optical means, as opposed to active arrangements requiring the motion of either the object or the detector.

Background Art

A number of techniques for measuring the range or distance of an object in the visual environment have been developed. Those techniques require computers to perform the necessary image analyses.

Almost all of the computer-vision techniques that have been developed for "passive" measurement of the range of objects in a scene rely upon the principle of stereopsis. That is, the range or distance of the object is inferred from the disparity of the position of the object in two images of the scene, each image being taken from a different viewpoint. This approach is similar to that of the human visual system, which processes the images on the two retinae to determine the distance of objects. In the computer-vision system, however, the calculation of the range of an object from the disparity of position in two images has proved to be a computationally intensive process which (as noted by R A Jarvis in his article entitled "Sensors, distance measurement", published in the International Encyclopedia of Robotics: Applications and Automation, John Wiley, New York, 1988) is as yet impossible to solve in real time for anything but the most simplified images.

Other workers have investigated this stereopsis problem of establishing which feature in one image corresponds to a given feature in the other image. Such work has been reported, for example, by (i) D Marr and T Poggio in their paper entitled "A computational theory of human stereo vision", which was published in the Proceedings of the Royal Society (London), volume B-204, pages 301 to 308, 1979, (ii) J E W Mayhew and J P Frisby in their paper entitled "Psychophysical and computational studies towards a theory of human stereopsis", which was published in Artificial Intelligence, volume 17, pages 349 to 385, 1981, and (iii) N C Griswold and C P Yeh in their paper entitled "A new stereo vision model based upon the binocular fusion concept", which was published in Computer Vision, Graphics and Image Processing, volume 41, pages 153 to 171, 1988. The conclusion reached by all these workers is that, in computational terms, the solution of this problem, which has been called the "correspondence problem", is a non-trivial task.

A ranging technique which involves the measurement of geometrical parallax of intensity gradients within a scene has recently been proposed by K Skifstad and R Jain. Their system is described in their paper entitled "Range estimation from intensity gradient analysis", which was published in Machine Vision and Applications, volume 2, pages 81 to 102, 1989. Skifstad and Jain use a moving video camera to obtain a series of snapshots or frames of a scene as the video camera moves along a translational axis at a constant velocity. A powerful computer analyses the images of objects in the scene, and from the temporal intensity gradients that are observed, calculates the ranges of those objects. The advantage of this technique is that the ranges of several objects are obtained simultaneously. The disadvantages include (a) a lack of portability, due to the size of the powerful digital computer; (b) the fact that the indication of range can be obtained only at the end of the "sweep" by the camera, after all of the frames captured by the sweep have been transmitted to the digital computer and processed by the computer; and (c) the fact that the accuracy of this technique depends upon, inter alia, the mechanical accuracy with which the camera can be moved at a uniform velocity, in a straight line, with its frame axis always pointing in the same direction.

Clearly there is a need (and there has been a need for a considerable time) for an accurate, portable rangefinder, which can provide an essentially instantaneous indication of the distance of an object.

Disclosure of the Present Invention

It is an object of the present invention to satisfy the above-mentioned need for an accurate, portable rangefinder which provides an essentially instantaneous readout of the range of an object in a scene.

This objective is achieved by providing an optoelectronic device having an array of a plurality of pairs of optoelectronic sensors or photodetectors (such as photodiodes or CCD devices) which are positioned and used to measure the translational stationary intensity gradient over an object within a scene or field of view.

Two rows of photo-responsive devices are mounted adjacent to each other, so that each photo-responsive device of one row is adjacent to, views the same small patch of the scene as, and forms a pair of devices with the corresponding photo-responsive device of the second row.

The outputs of the devices of one of the rows (the first row) in the linear array are weighted by an angular sensitivity profile characterised by the function g(α). In addition, the weighted outputs of the first row of photo-sensitive devices are further weighted by a spatial weighting function h'(T), which is a translational function applied to the outputs of the first row. Thus the sum of the outputs of the photo-responsive devices in the first row is the sum of the doubly weighted outputs of those devices.

Similarly, the outputs of the photo-responsive devices in the second row of the linear array are also weighted, by an angular sensitivity profile characterised by the function g'(α). The function g'(α) is the angular spatial derivative of the function g(α). In addition, the angularly weighted outputs of the photo-sensitive devices of the second row are further weighted by a spatial weighting function h(T), which is the spatial integral of the weighting function h'(T). Thus the sum of the outputs of the photo-responsive devices of the second row of devices in the linear array is the sum of the doubly weighted outputs of those devices.

The present inventors have determined, by an extension of the generalised gradient schemes for the measurement of image motion, which were recently developed by M V Srinivasan, that the ratio of the weighted outputs of the photo-responsive devices in the first and second rows of the linear array provides an indication of either the distance (range) of an object which is in the field of view of the linear array, or the proximity (the reciprocal of the range) of that object, depending upon which weighted output is the numerator and which is the denominator in determining the ratio. The generalised gradient schemes are described in the paper by M V Srinivasan entitled "Generalised gradient schemes for the measurement of two-dimensional image motion", which is shortly to be published in the journal Biological Cybernetics. The contents of that paper, which were included in the specification of Australian patent application No PJ 6394 filed 15 September 1989, are included in the present specification by this cross-reference thereto.

Thus, according to the present invention, there is provided a method of generating a signal indicative of the distance of an object from an observation position which comprises the steps:

(a) providing at said observation position a linear array of a plurality of pairs of photo-sensitive devices, each pair of photo-sensitive devices comprising a first photo-sensitive device and a second photo-sensitive device; each photo-sensitive device having an output which is proportional to the intensity of illumination of the device;

(b) mounting said linear array of photo-sensitive devices so that:

(i) the first photo-sensitive devices of the linear array form a first row of photo-sensitive devices; and the second photo-sensitive devices of the linear array form a second row of photo-sensitive devices;

(ii) the first and second photo-sensitive devices of any pair within the linear array are responsive to the optical signals from the same small patch of a scene containing said object; and

(iii) adjacent pairs of photo-sensitive detectors in the linear array are responsive to adjacent small patches of the scene;

(c) applying a first angular sensitivity weighting function to each photo-sensitive device in said first row;

(d) applying a second angular sensitivity weighting function to each photo-sensitive device in said second row, said second angular sensitivity weighting function being the angular spatial derivative of the first angular sensitivity weighting function;

(e) applying a first spatial weighting function to the outputs of the photo-sensitive devices of said first row;

(f) applying a second spatial weighting function to the outputs of the photo-sensitive devices of said second row, the second spatial weighting function being the integral of said first spatial weighting function;

(g) adding the output signals of the photo-sensitive devices of said first row to obtain a first weighted sum;

(h) adding the output signals of the photo-sensitive devices of said second row to obtain a second weighted sum; and

(i) producing a ratio signal indicative of the ratio of the second weighted sum to the first weighted sum, said ratio signal providing an indication of the range of an object within said scene.

Also according to the present invention, there is provided apparatus for providing an indication of the range of an object within a scene, said apparatus comprising:

(a) a linear array of a plurality of pairs of photo-sensitive devices, each pair of photo-sensitive devices comprising a first photo-sensitive device closely adjacent to a second photo-sensitive device, each photo-sensitive device having an output which is proportional to the intensity of illumination of the device; the pairs of devices being so mounted that the first photo-sensitive devices constitute a first row of photo-sensitive devices and the second photo-sensitive devices constitute a second row of photo-sensitive devices; the first and second photo-sensitive devices of any pair of devices being responsive to the optical signals from the same small patch of said scene; the pairs of photo-sensitive detectors within the linear array being responsive to adjacent small patches of said scene;

(b) respective first weighting means connected to the outputs of each photo-sensitive device of said first row, each of said first weighting means applying a first angular sensitivity weighting function to the output of its associated photo-sensitive device;

(c) respective second weighting means connected to the outputs of each photo-sensitive device of said second row, each of said second weighting means applying a second angular sensitivity weighting function to the output of its associated photo-sensitive device, the second angular weighting function being the angular spatial derivative of the first angular sensitivity weighting function;

(d) third weighting means applying a first spatial weighting function to the outputs of the photo-sensitive devices of said first row;

(e) fourth weighting means applying a second spatial weighting function to the outputs of the photo-sensitive devices of said second row, said second spatial weighting function being the integral of said first spatial weighting function; (f) first adding means for adding the outputs of the photo-sensitive devices of said first row, to obtain a first weighted sum signal;

(g) second adding means for adding the outputs of the photo-sensitive devices of said second row, to obtain a second weighted sum signal; and

(h) comparator means connected to the outputs of said first and second adding means, for generating a ratio signal indicative of the ratio of said second weighted sum signal to said first weighted sum signal, said ratio signal being indicative of the range of said object from said apparatus.

It will be apparent that the two photo-sensitive devices comprising each pair of the linear array view the same patch of the visual scene, but have their outputs weighted by different weighting functions. Thus, each pair of photo-sensitive devices produces two output signals, each of which is subsequently weighted by different optical weighting functions.

In general, each photo-sensitive device will be in the form of a cluster of photodiodes or charge-coupled devices, the individual components of which sample the intensity of adjacent regions of the visual scene. One form of opto-electronic device which has been used successfully by the present inventors to realise the photo-sensitive detector pair is a four-quadrant photodiode, manufactured by Reticon (catalogue No UV-140BQ-4). However, any suitable cluster or combination of photo-responsive elements may be used as a photo-sensitive device of the present invention.

If the photo-sensitive devices are photodiode elements, the signal processing may be performed using analogue techniques. If the elements of the photo-sensitive device are charge coupled devices (CCDs), digital signal processing will be required.

The features of the present invention will be explained in more detail in the following description, in which an embodiment of the present invention will be described (by way of example), and in which reference will be made to the accompanying drawings.

Brief Description of the Drawings

Figure 1 is a drawing illustrating the basic concepts required for an analysis of the measurement of the range of an object from an observation point.

Figure 2 is a modified form of Figure 1, which is useful to illustrate the analysis concepts when a linear array of sensors is used instead of a discrete sensor for the measurement of the range of an object from the observation axis.

Figure 3 illustrates how a four-quadrant photodiode array may be used as one of the photo-sensitive devices of the present invention.

Figure 4 illustrates a linear array of photo-sensitive devices, in combination with respective lenses, for use in the present invention, with an indication of the pairs of outputs from each device, weighted by functions of translational position.

Figure 5 is a block diagram showing one embodiment of the electronic circuitry that may be used with the array of photo-sensitive devices shown in Figure 4, to apply weighting functions to the outputs of the photo-sensitive devices.

Figure 6 shows the sensitivity functions g(α).h'(T) (left column) and g'(α).h(T) (right column), measured separately for each individual photo-sensitive device in the linear array.

Figure 7 is a perspective sketch of a test arrangement for devices constructed in accordance with the present invention.

Figure 8 is a print-out of signals generated by a prototype of the present invention when tested using the arrangement of Figure 7.

Detailed Description of the Present Invention

In the aforementioned paper by M V Srinivasan (incorporated into this specification by the cross-reference to it), the classical "gradient model" for the measurement of image velocity is developed further.

In the gradient model, in a one-dimensional system, it is assumed that a scene has a spatial intensity profile which is described by the function f(x). If this scene moves at a uniform and constant velocity V, its spatiotemporal intensity profile, denoted by I(x,t), will be described by the function f(x - V.t). The partial derivatives of I(x,t) with respect to space and time are

∂I(x,t)/∂x = f'(x - V.t)    (1)

and

∂I(x,t)/∂t = -V.f'(x - V.t)    (2)

Dividing Equation (2) by Equation (1) produces the relationship

V = - (∂I/∂t) / (∂I/∂x)    (3)

Thus the velocity at a point in the moving image is given by the ratio of the temporal and spatial derivatives of the intensity at that point. Note that this way of calculating the velocity is independent of the intensity profile of the scene, and is valid for all profiles except one that corresponds to a uniform and structureless surface, in which case the relationship of Equation (3) gives an erratic signal corresponding to the indeterminate quantity (zero/zero).
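The following short Python sketch (not part of the original specification) illustrates Equations (1) to (3) numerically; the intensity profile f, the evaluation point and the step sizes are assumptions chosen purely for illustration.

```python
import numpy as np

# Numerical illustration of Equations (1)-(3): for a scene translating at velocity V, the
# ratio of the temporal and spatial intensity derivatives at a point recovers V.
V = 1.5                                                  # true velocity of the scene
f = lambda u: np.exp(-u**2) + 0.3 * np.sin(5.0 * u)      # assumed intensity profile f(x)
I = lambda x, t: f(x - V * t)                            # moving image I(x,t) = f(x - V.t)

x0, dx, dt = 0.2, 1e-4, 1e-4                             # evaluation point and step sizes
dI_dx = (I(x0 + dx, 0.0) - I(x0 - dx, 0.0)) / (2 * dx)   # spatial derivative, Equation (1)
dI_dt = (I(x0, dt) - I(x0, -dt)) / (2 * dt)              # temporal derivative, Equation (2)

print("estimated velocity:", -dI_dt / dI_dx, "true velocity:", V)   # Equation (3)
```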

M V Srinivasan, in his aforementioned paper, describes how the conventional gradient model for measurement of image velocity can be generalised to include spatial and temporal filtering in the "front-end" of the model. In particular, by

(a) viewing the scene by two separate channels; (b) including, in one channel (channel 1) a spatial filter characterised by a weighting function g(x), which does not extend to infinitely large distances;

(c) including, in the other channel (channel 2), a spatial filter characterised by a weighting function g'(x) that is the spatial derivative of g(x);

(d) passing the output of the spatial filter of channel 2 through a temporal filter characterised by an impulse response i(t);

(e) passing the output of the spatial filter of channel 1 through a temporal filter characterised by an impulse response i' (t), which is the temporal derivative of the impulse response of the other filter;

Srinivasan shows that the velocity of motion is given by the ratio of the outputs of channels 1 and 2 for any set of spatial and temporal filters which satisfy the minor constraints he requires.

He then proceeds to show that, in general, the velocity of a two-dimensional object translating in two dimensions can be determined unambiguously by combining the outputs of two sets of filters having similar relationships to those described above.
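As an illustration only, the one-dimensional form of the scheme just described can be checked numerically as sketched below. The spatial filter g, the scene profile f, the choice of the simplest admissible temporal filters (an identity for i(t) and a temporal derivative for i'(t)) and all numerical values are assumptions made for the purpose of the sketch, not values taken from the Srinivasan paper.

```python
import numpy as np

# One-dimensional numerical illustration of the generalised gradient scheme described above.
# Channel 1: spatial filter g(x), then a temporal derivative (the filter i'(t)).
# Channel 2: spatial filter g'(x), with the temporal filter i(t) taken as the identity.
V = 0.8                                       # true image velocity
x = np.linspace(-1.0, 1.0, 2001)              # spatial sampling grid
dx = x[1] - x[0]
dt = 1e-3                                     # small time step for the temporal derivative

g = 1.0 + np.cos(np.pi * x)                   # channel-1 spatial filter, zero at the ends
g_prime = np.gradient(g, dx)                  # channel-2 spatial filter: derivative of g
f = lambda u: np.exp(-4.0 * u**2) + 0.2 * np.sin(3.0 * u)   # assumed scene intensity profile
I = lambda t: f(x - V * t)                    # moving image I(x,t) sampled on the x grid

out1 = (np.sum(g * I(dt)) - np.sum(g * I(-dt))) * dx / (2 * dt)   # channel 1 output
out2 = np.sum(g_prime * I(0.0)) * dx                              # channel 2 output

print("estimated velocity:", out1 / out2, "true velocity:", V)
```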

The present inventors have investigated whether the use of weighting functions can also simplify the measurement of the distance of an object from an observation location. They have discovered that it can be shown that the distance of an object from an observation point is equal to the ratio of (a) the local angular gradient of the intensity profile, as seen at the observation point, to (b) the local translational gradient of the profile, measured along a line parallel to the plane of the object. The essential steps in this analysis are reproduced below.

Figure 1 shows a one-dimensional object 10 which has an intensity profile in the translational dimension T, described by the function f(T). The object 10 is a distance R from the observation axis 11. The axis of the object 10 is assumed to be parallel to the observation axis 11. Thus the line 12 passing through the (arbitrary) origin of the observation axis along the direction α = 0 intersects the object axis at the origin of the object axis (that is, at T = 0). Thus, if the object is viewed from the origin of the observation axis, the bearing α of a point on the object is related to its distance T from the origin of the object axis by the relationship

T = R.tan α

which, at small values of α, simplifies to

T = R.α    (4)

The angular derivative of the intensity profile (as seen from the origin of the observation axis) is given by

df/dα = (df/dT).(dT/dα)

Using Equation (4), the above relationship can be rewritten

df/dα = R.(df/dT)

Thus,

R = (df/dα) / (df/dT) = (Angular derivative) / (Translational derivative)

In other words, the distance to the object is given by the ratio of the local angular derivative to the local translational derivative. A simple way to approximate the angular derivative is to use the difference between the outputs of two photodetectors located at T = 0, one looking along the direction α = 0 and the other along a direction deviating by a small amount Δα. A simple way to approximate the translational derivative is to use the difference between the outputs of two photodetectors, both looking along the same direction (say, α = 0), separated by a distance ΔT along the observation axis. Thus, with reference to Figure 1, the range R is obtained by computing the quantity

R ≈ [(I₂ - I₁)/Δα] / [(I₃ - I₁)/ΔT]

where it is assumed that the derivative of f(T) is constant in the interval over which I₁, I₂ and I₃ are measured.
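A minimal numerical sketch of this three-detector approximation follows (not part of the original disclosure); the object profile f, the true range and the offsets Δα and ΔT are illustrative assumptions.

```python
import numpy as np

# Finite-difference illustration of R = (df/d_alpha)/(df/dT) using three detector readings.
R_true = 2.0                    # actual distance from the observation axis to the object
d_alpha = 1e-3                  # small angular offset between the two viewing directions
dT = 1e-3                       # small separation of the detectors along the observation axis
f = lambda T: 1.0 + 0.4 * np.sin(30.0 * T)      # assumed intensity profile f(T) of the object

I1 = f(0.0)                     # detector at T = 0 looking along alpha = 0
I2 = f(R_true * d_alpha)        # detector at T = 0 looking along alpha = d_alpha
I3 = f(dT)                      # detector at T = dT looking along alpha = 0

R_est = ((I2 - I1) / d_alpha) / ((I3 - I1) / dT)
print("estimated range:", R_est, "true range:", R_true)
```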

Figure 2 illustrates a more generalised version of the observation situation shown in Figure 1. Consider two rows of photodetectors arranged along the observation axis 11 of Figure 2. In practice, the two rows of photodetectors will each comprise a finite number of discrete detectors, but for convenience of the analysis, each row will be treated as a continuum of infinitesimally small photodetectors. (The analysis is similar, but less convenient, for a finite number of discrete detectors.)

The object 10 in Figure 2 has a spatial intensity profile f(T). Assume that in one of the rows of photodetectors, each photodetector possesses an angular sensitivity profile characterised by the function g(α), shown by a polar plot in Figure 2.

The directional (or angular) intensity profile of the object, for small values of α, is given by the function f(R.α + T), and the output of this photodetector is obtained as the integral of this profile, weighted by the angular sensitivity of the photodetector, and is given by

∫_{α=-α₀}^{α₀} g(α).f(R.α + T).dα

If the output signals of the photodetectors in this row are weighted by a spatial function h'(T), which is defined along the observation axis of Figure 2, then the output O₁ from this entire row of photodetectors is obtained as the sum of the outputs of the individual detectors, weighted by the spatial weighting function h'(T). Thus

O₁ = ∫_{T=-T₀}^{T₀} h'(T).dT ∫_{α=-α₀}^{α₀} f(R.α + T).g(α).dα    (5)

Integrating by parts with respect to T, Equation (5) can be written

O₁ = ∫_{α=-α₀}^{α₀} g(α).dα [h(T).f(R.α + T)]_{T=-T₀}^{T₀} - ∫_{α=-α₀}^{α₀} g(α).dα ∫_{T=-T₀}^{T₀} h(T).f'(R.α + T).dT    (6)

Assuming that the row of detectors extends symmetrically from the origin to distances ±T₀ on either side, that h(T₀) = h(-T₀) = 0, and that g(α₀) = g(-α₀) = 0, then the quantity in square brackets is zero at each limit and Equation (6) reduces to

O₁ = - ∫_{α=-α₀}^{α₀} g(α).dα ∫_{T=-T₀}^{T₀} h(T).f'(R.α + T).dT    (7)

Assume now that the photodetectors of the other (second) row of photodetectors each have an angular sensitivity profile characterised by the function g'(α), which is the angular spatial derivative of g(α). Assume also that the output signal of each photodetector in the second row is weighted by a spatial weighting function h(T), which is the integral of the spatial weighting function used in the first row of photodetectors. Both g'(α) and h(T) are shown in Figure 2. By a similar process to that applied to the detectors of the first row, the output O₂ of the second row of photodetectors will be given by

O₂ = -R ∫_{α=-α₀}^{α₀} g(α).dα ∫_{T=-T₀}^{T₀} h(T).f'(R.α + T).dT    (8)

Dividing Equation (8) by Equation (7) gives:

O₂ / O₁ = R

This means that the ratio of the weighted outputs of the two rows corresponds to the range of the object. Thus, object range can be measured as the ratio of the output signals derived from two rows of photodetectors, 1 and 2, provided the angular sensitivity of each photodetector in row 2 is the angular derivative of that in row 1, and that the spatial weighting function used for summing the outputs of the detectors in row 1 is the spatial derivative of that used for summing the outputs of row 2.

Alternatively, the proximity of an object, defined as the reciprocal of its range, can be obtained by computing the ratio O₁/O₂.
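The result O₂/O₁ = R can be checked numerically, as in the following sketch (purely illustrative and not part of the original disclosure); the raised-cosine forms chosen for g(α) and h(T), the object intensity profile f and all numerical values are assumptions.

```python
import numpy as np

# Numerical check of O2/O1 = R, with the continuum of photodetectors replaced by a dense,
# finely sampled row.  Row 1 uses g(alpha) and h'(T); row 2 uses g'(alpha) and h(T).
R_true = 3.0                                     # range of the object (m)
alpha0, T0 = 0.05, 0.02                          # angular and translational half-widths
alpha = np.linspace(-alpha0, alpha0, 401)        # angular samples within each detector
T = np.linspace(-T0, T0, 201)                    # detector positions along the row (m)
d_alpha, dT = alpha[1] - alpha[0], T[1] - T[0]

g = 1.0 + np.cos(np.pi * alpha / alpha0)                         # g(alpha), zero at +/-alpha0
g_prime = -(np.pi / alpha0) * np.sin(np.pi * alpha / alpha0)     # g'(alpha), its derivative
h = 1.0 + np.cos(np.pi * T / T0)                                 # h(T), zero at +/-T0
h_prime = -(np.pi / T0) * np.sin(np.pi * T / T0)                 # h'(T), its derivative

f = lambda u: 1.0 + 0.5 * np.sin(5.0 * u)        # assumed intensity profile of the object

AA, TT = np.meshgrid(alpha, T)                   # alpha varies along columns, T along rows
patch = f(R_true * AA + TT)                      # intensity seen in direction alpha from T

O1 = np.sum(h_prime[:, None] * g[None, :] * patch) * d_alpha * dT   # row 1 weighted sum
O2 = np.sum(h[:, None] * g_prime[None, :] * patch) * d_alpha * dT   # row 2 weighted sum

print("O2/O1 =", O2 / O1, "   true range R =", R_true)
```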

To translate this result into operative hardware and produce a rangefinder, the conceptually simplest solution is to create two rows of photodetectors or photo-sensitive devices, each detector or device comprising a large number of very small sensors, each sensor being adapted to view a small portion of a patch of the scene that is monitored by the detector or device. One major disadvantage of this approach is that a substantial digital computer would be required to apply the weighting functions, to perform the output additions, and to make the ratio determinations.

In practical rangefinders, a less accurate approximation to the ideal solution is obtained by using a pair of linear arrays of photo-sensitive devices, each device comprising a cluster of sensors adapted to monitor different regions of the patch of the scene that is monitored by the device. The application of an angular sensitivity weighting function can be effected by suitable choice of the shapes of the photo-sensitive areas of the detectors, and by suitably connecting the outputs of the sensors to a bank of resistors. The translational sensitivity weighting functions can be applied by electronically weighting the outputs of the sensor arrays, using a bank of resistors connected to differential amplifiers.

To construct a prototype of the rangefinding apparatus of the present invention, the present inventors have successfully used commercially available four-quadrant photodiode arrays (Reticon UV-140BQ-4) with square photo-sensitive elements, tilted 27° to the horizontal (arctan 0.5 is approximately 27°). The arrangement of one four-quadrant photodiode array is shown in Figure 3. In this configuration, the angular sensitivity of each element in the horizontal plane to a vertical bar is a good approximation to one half period of a sine function. An odd angular sensitivity weighting function, g'(α), that approximates to one full period of a sine function is obtained by assigning weights of +1 and -1 to the left and right elements, respectively. An even angular sensitivity weighting function, g(α), that approximates to one period of a raised cosine function is obtained by assigning weights of 1, 2, 2 and 1 to the outputs of the four elements as indicated in Figure 3. With this approach, the error in the approximating function is everywhere less than 7 per cent of the peak sensitivity.

A convenient way of applying the required spatial sensitivity weighting functions to a linear array of photo-sensitive devices is to electronically apply a translational triangular function as h(T) to one array, and its derivative h'(T), a bipolar pair, to the other array. The application of such translational weighting functions to a linear array is easily performed electronically (for example, using a bank of resistors connected to differential amplifiers).
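By way of illustration only, one plausible discrete realisation of such a pair of translational weighting functions for a row of five devices is sketched below; the weight values are assumptions, not the resistor ratios used in the prototype.

```python
import numpy as np

# A triangular spatial weighting h(T) for five devices and its discrete derivative h'(T),
# which forms the bipolar pair mentioned above.
h = np.array([0.0, 1.0, 2.0, 1.0, 0.0])    # triangular h(T), applied to the second row
h_prime = np.gradient(h)                   # -> [ 1.,  1.,  0., -1., -1.]  bipolar h'(T), first row
print(h_prime)
```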

The prototype rangefinder of the present invention was produced as a visual aid for blind persons, and the dimensions of the prototype were chosen with this purpose in mind. It will be appreciated, however, that the dimensions of the elements of rangefinders will depend, in general, upon the use to which the rangefinder is to be put, and the components used in the rangefinder.

A Reticon UV-140BQ-4 four-quadrant photodiode array has an active area of 2mm x 2mm, covering four elements, as shown in Figure 3. In the prototype rangefinder, as shown in Figure 4, two linear arrays of five such four-quadrant detectors 40 are mounted at one end of a set of cylindrical apertures 42 bored in an aluminium block 41. The cylindrical apertures 42 were each 5 mm in diameter and 50 mm long. The central axes of the cylindrical apertures were spaced a distance of 9 mm. A respective lens 43, having a diameter of 5 mm and a focal length of 50 mm, is mounted at the end of each cylindrical aperture remote from the detectors 40. The linear array of detectors thus produced had an angular field of view of 4° and a near point (the closest distance visible to all five of the sensors in an array) of 500 mm.

The way in which the angular and translational weighting functions are applied to the outputs of the detectors is also indicated - schematically - in Figure 4.

Figure 5 shows the electronic arrangements of the prototype device. A respective preamplifier 51 is connected to the output of each sensor of each four-quadrant photodiode 40 to convert the photocurrents from the individual sensors into voltages. The voltages produced by the preamplifiers 51 are fed to respective resistors 52 of a bank of resistors 53, to apply the angular sensitivity weighting functions g(α) and g'(α) as indicated above. The translational weighting functions are then applied to the currents through the resistors 52, and the thus-weighted currents are fed into the inputs of a pair of summing differential amplifiers 55, which also serve as current-to-voltage converters.

The outputs of the differential amplifiers 55 are, respectively, signals indicative of the summed, weighted outputs O₁ and O₂. These outputs can be processed in a variety of ways to produce a ratio signal indicative of the range or proximity of an object viewed by the linear arrays of photodiodes 40. In the prototype device, as shown in Figure 5, the output signals O₁ and O₂ from the differential amplifiers 55 are fed to an analogue divider 56 as numerator and denominator, respectively.

The output signal from the divider 56, which represents the proximity of an object, is fed via a voltage-to-frequency converter 57 to a pair of headphones or another audio signal generator 58, to produce a tone signal, the pitch or frequency of which is proportional to the proximity of the object. The prototype device thus produced was a compact, fully analogue, hand-held rangefinder with an audio output.

One electronic modification to this arrangement was made to overcome the problem that arises when the denominator signal approaches zero, which can cause erroneous values of proximity to be indicated (see the above discussion of the underlying theory of the present invention). The simple expedient of using a switch activated by a signal from a level detector 59 (see Figure 5) to switch the tone signal off when the denominator signal to the comparator 56 falls below a predetermined threshold value effectively overcomes this problem.
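A minimal software analogue of this output stage (the divider, the voltage-to-frequency conversion and the low-denominator mute) is sketched below; the threshold, the scale factor and the example signal values are illustrative assumptions rather than figures from the prototype.

```python
def tone_frequency(o1, o2, denominator_threshold=0.05, hz_per_unit=1000.0):
    """Output stage of Figure 5 in software form: divider 56 (proximity = O1/O2),
    voltage-to-frequency converter 57, and the level-detector mute 59.
    All numerical values here are illustrative assumptions."""
    if abs(o2) < denominator_threshold:     # denominator too small: level detector 59 trips
        return None                         # tone switched off (audio generator 58 muted)
    proximity = o1 / o2                     # analogue divider 56: proximity of the object
    return hz_per_unit * proximity          # converter 57: pitch proportional to proximity


print(tone_frequency(o1=0.5, o2=1.0))       # e.g. a 500 Hz tone for proximity 0.5
print(tone_frequency(o1=0.5, o2=0.01))      # None: tone muted when the denominator is small
```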

It will be appreciated that each four-quadrant photodetector of the single array of photodiodes 40 shown in Figures 4 and 5 produces, in effect, two outputs, each obtained by applying a different set of weights to the four quadrants.

Various tests of the performance of the prototype rangefinder device produced by the present inventors have been undertaken.

As a preliminary test, the angular sensitivity functions of each array were measured separately by using a pattern, positioned 1000 mm from the device, consisting of a vertical 0.5 mm wide white bar on a black background. This bar was moved across the field of view of the device, with all but one of the sensor arrays covered in turn, and the angular and translational signals were recorded at each position of the bar. The measurements obtained are shown in Figure 6. It will be seen that the even functions approximate a raised cosine, as expected, and the odd functions a sine. An imperfection, consisting of an offset of the baseline, is evident in the even functions. This is almost certainly due to the presence of scattered light inside the cylindrical apertures of the device, despite the use of a matt black coating on the walls of the apertures. A further reduction of scattered light would be desirable in production models of the device.

A qualitative assessment of the device was made by having an observer point the device at various visual features in the environment, both indoors and outdoors. It was found that the pitch of the tone produced by the device was a fairly reliable indicator of the proximity of the visual feature, and that the readout was largely independent of the spatial intensity profile of the feature.

In a further test of the device, the prototype rangefinder was mounted on the rotatable table 70 in the arrangement of objects shown in Figure 7. Each object of the arrangement of Figure 7 is identified by an initial letter. A "map" produced by analysing the proximity signals generated by a series of to-and-fro scans of the prototype device to monitor the objects in the arrangement of Figure 7 is reproduced as Figure 8. It will be seen that consistent indications of proximity of objects, suitable to provide a warning to a blind person of the presence and range of the objects (including corners of the room - objects A and M), are produced by the prototype device.

It will be readily apparent that although specific examples of the present invention have been illustrated and described above, modifications to, and variations of, those examples can be made without departing from the present inventive concept.

Applications of the Invention

In addition to the sensing and alerting device for use by blind persons, described above, the present invention may be used in other applications, including (a) hand-held electronic measuring tapes; and

(b) warning devices for signalling the possibility of a collision with an object to the driver of a reversing motor vehicle.

In addition, by incorporating a pair of temporal filters into a rangefinder in accordance with the present invention, the apparent angular velocity of an object can be measured, in addition to its range, using the generalised-gradient techniques for the measurement of image velocity that are described in the aforementioned paper by M V Srinivasan. Thus, a modified version of the device can be used in a camera not only for the automatic control of focus, but also for the automatic control of shutter speed to reduce blurring due to motion of the image.

These examples of other uses of the present invention are not intended to be exhaustive.