

Title:
OPTICAL READER SYSTEM FOR EXTRACTING INFORMATION IN A DIGITAL IMAGE
Document Type and Number:
WIPO Patent Application WO/2008/154611
Kind Code:
A2
Abstract:
A method of using a hand held image reader comprises imaging a target, such as a package, positioned such that three surfaces of the target appear in the captured image. A processor may then identify or determine from the image outer edges of the package. For instance, the processor may identify three or more outer edges. The processor may then identify or determine from the image corners of the target, corners being representative of the intersection of edges. The processor may then determine from the edges and corners dimensions of the package, such as height, width or depth.

Inventors:
HUSSEY ROBERT M (US)
LI JINGQUAN (US)
HNATOW JUSTIN
LONGCARE ANDREW (US)
DELOGE STEPHEN P (US)
KOZIOL THOMAS J (US)
MONTORO JAMES (US)
KOSECKI JAMES C (US)
MEIER TIMOTHY P (US)
ANDERSON DONALD GORDON (US)
HAWLEY THOMAS (US)
POMERLEAU ARON C (US)
HIGGINS PAUL C (US)
Application Number:
PCT/US2008/066615
Publication Date:
December 18, 2008
Filing Date:
June 11, 2008
Assignee:
HONEYWELL INT INC (US)
HUSSEY ROBERT M (US)
LI JINGQUAN (US)
HNATOW JUSTIN
LONGCARE ANDREW (US)
DELOGE STEPHEN P (US)
KOZIOL THOMAS J (US)
MONTORO JAMES (US)
KOSECKI JAMES C (US)
MEIER TIMOTHY P (US)
ANDERSON DONALD GORDON (US)
HAWLEY THOMAS (US)
POMERLEAU ARON C (US)
HIGGINS PAUL C (US)
International Classes:
G06K7/10
Domestic Patent References:
WO1999060467A1, 1999-11-25
Foreign References:
US6698656B2, 2004-03-02
KR20020016322A, 2002-03-04
KR20040093710A
US5001658A, 1991-03-19
US20030194112A1, 2003-10-16
Other References:
None
See also references of EP 2165289A4
Attorney, Agent or Firm:
PATENT SERVICES CENTRE (101 Columbia Road, Morristown, NJ, US)
Claims:

CLAIMS

1. A hand held optical reader comprising: means for projecting a target onto an object; means for reading information bearing indicia on the object; means for obtaining an image of the object; and means for extracting further information from the image of the object.

2. The reader of claim 1 wherein the further information comprises one or more of the group consisting of dimensions of an object, salient features, and license plate data.

3. A method for collecting data about a form comprising the steps of: using an image reader, obtaining an image of a form having data and non-data areas; identifying edges of data areas on the form; discarding non-data areas outside the edges of the data areas and transmuting a rectangle bounded by the edges into a rectified image of the data areas.

4. The method of claim 3 comprising the further step of correcting angular distortion.

5. The method of claim 3 comprising the further step of obtaining first and second images of different resolutions; searching the image of lower resolution to identify edges; determining the length, orientation and location of the edges in the lower resolution image; and identifying the corresponding edges in the higher resolution image.

6. An apparatus for collecting data about a form comprising: means for obtaining an image of a form having data and non-data areas; means for identifying edges of data areas on the form; means for discarding non-data areas outside the edges of the data areas; and means for transmuting a rectangle bounded by the edges into a rectified image of the data areas.

7. The apparatus of claim 6 further comprising means for correcting angular distortion.

8. The apparatus of claim 6 further comprising: means for obtaining first and second images of different resolutions; means for searching the image of lower resolution to identify edges; means for determining the length, orientation and location of the edges in the lower resolution image; and means for identifying the corresponding edges in the higher resolution image.

9. A method for collecting data about a form comprising the steps of: capturing and storing a sample image of a form bearing reference features to assist orienting images of the form; and identifying and storing one or more salient features of the form and the locations of the salient features on the form.

10. A method for processing data about a form in accordance with previously stored reference features or salient features, comprising: projecting a target from the image recorder onto the form to align the form with the image recorder; capturing an image of the form; locating the reference features in the captured image; orienting the image of the form to a desired orientation using the reference features and other salient features; and processing the salient features found in the image.

11. The method of claim 10 further comprising correcting distortion of the image by comparing the portions of the captured reference features to the stored reference features.

12. The method of claim 10 wherein the salient features include one or more of the group consisting of check boxes and signatures.

13. The method of claim 10 wherein the check box images are processed to identify whether or not a box is checked.

14. The method of claim 10 wherein the salient signature box is corrected for distortion and stored.

15. The method of claim 10 wherein the target comprises a central crosshair and one or more brackets indicating the corner of a captured image.

16. The method of claim 10 wherein the reference features include information bearing indicia.

17. The method of claim 10 further comprising the step of simultaneously processing multiple salient features.

18. An apparatus for collecting data about a form comprising: means for capturing and storing a sample image of a form bearing reference features to assist orienting images of the form; and means for identifying and storing one or more salient features of the form and the locations of the salient features on the form.

19. An apparatus for processing data about a form in accordance with previously stored reference features or salient features, comprising: means for projecting a target from the image recorder onto the form to align the form with the image recorder; means for obtaining an image of the form; means for locating the reference features in the captured image; means for orienting the image of the form to a desired orientation using the reference features and other salient features; and means for processing the salient features found in the image.

20. The apparatus of claim 19 further comprising means for correcting distortion of the image by comparing the portions of the captured reference features to the stored reference features.

21. The apparatus of claim 19 wherein the salient features include one or more of the group consisting of check boxes and signatures.

22. The apparatus of claim 21 wherein the check box images are processed to identify whether or not a box is checked.

23. The apparatus of claim 21 wherein the salient signature box is corrected for distortion and stored.

24. The apparatus of claim 19 wherein the target comprises a central crosshair and one or more brackets indicating the corner of a captured image.

25. The apparatus of claim 19 wherein the reference features include information bearing indicia.

26. The apparatus of claim 19 further comprising means for simultaneously processing multiple salient features.

27. A system for generating and retrieving data about license plates comprising: a portable data acquisition device comprising: a housing; a camera in said housing for capturing an image of the license plate; a processor disposed in the housing for receiving the data signals representative of the captured image; first software means for analyzing graphic codes; second software means disposed in the processor including optical character recognition means for analyzing the image of the license plate to determine the alphanumeric data on the license plate; and means for recognizing the license plates of more than one jurisdiction.

28. The system of claim 27 for generating and retrieving data about license plates further comprising: means for wirelessly transmitting data signals representative of the alphanumeric data of the license plate to one or more remote databases; and means for wirelessly receiving said transmitted alphanumeric signals.

29. The system of claim 27 for generating and retrieving data about license plates further comprising: means for accessing records in the one or more remote databases to find alert data corresponding to the license plate; and means for wirelessly returning said alert data to the portable data acquisition device.

30. The system of claim 27 for generating and retrieving data about license plates wherein the second software means further comprises means for saving the image of the license plate.

31. A reader for reading graphic codes comprising: a housing; a camera in said housing for capturing an image of the license plate; a processor disposed in the housing for receiving the data signals representative of the captured image; first software means for analyzing graphic codes; second software means disposed in the processor including optical character recognition; means for analyzing the image of the license plate to determine the alphanumeric data on the license plate; and means for recognizing the license plates of more than one jurisdiction.

32. The reader of claim 31 further comprising means for wirelessly transmitting data signals representative of the alphanumeric data of the license plate to one or more remote databases.

33. The reader of claim 31 wherein the second software means further comprises means for saving an image of the license plate.

34. A method for capturing two or more measurable parameters of an object comprising: weighing an object on a scale; transmitting data signals representative of the weight of the object on the scale; receiving the data signals representative of the weight of the object on the scale; and generating data signals representative of dimensions of the object.

35. The method of claim 34 comprising the further step of calculating costs of shipping the object by weight and by volume and determining the lesser or greater of the two costs.

36. A system for capturing two or more measurable parameters of an object comprising: a scale having a surface for receiving an object and having apparatus for determining the weight of the received object; means for transmitting data signals representative of the weight of the object on the scale; means for receiving the data signals representative of the weight of the object on the scale; means for generating data signals representative of dimensions of the object; and means for calculating the volume of the object.

37. The system of claim 36 wherein the means for generating data signals representative of the dimensions of the object comprises: a portable data acquisition device comprising: a housing; a camera in said housing for capturing an image of an object on a substrate bearing indicia representative of known dimensions or known locations with respect to each other, said image comprising a plurality of data signals; a processor disposed in the housing for receiving the data signals representative of the captured image; and software means disposed in the processor for analyzing the image data signals to identify edges of the object and for calculating the length of the edges in accordance with the dimensional indicia captured in the image.

38. The system of claim 36 wherein the means for generating data signals representative of the dimensions of the object comprises: a housing; a camera in said housing for capturing an image of an object on a substrate bearing indicia representative of known dimensions, said image comprising a plurality of data signals; means disposed in said housing for transmitting data signals representative of the captured image; and a remote receiver for receiving the transmitted image data signals and having a processor and software means for analyzing the image data signals to identify edges of the object and for calculating the length of the edges in accordance with the dimensional indicia captured in the image.

39. The system of claim 36 wherein the means for generating data signals representative of the dimensions of the object comprises: a substrate having parallel, opposite surfaces for supporting an object on one of said substrate surfaces; a light source for illuminating the other surface of the substrate; means for capturing an image of the object on the illuminated substrate; means for identifying edges of the object on the substrate; and means for calculating the length of the edges.

40. The apparatus of claim 39 wherein the substrate transmits incident light.

41. The apparatus of claim 39 wherein the substrate carries a grid pattern of known dimensions.

42. A method for detecting and measuring edges of an object comprising the steps of: disposing an object on one side of a substrate; illuminating the substrate; obtaining an image of the object; identifying edges of the object on the substrate; and calculating the length of the edges.

43. The method of claim 42 wherein the substrate is transmissive and the object is illuminated by light from a source on the other side of the substrate.

44. The method of claim 42 wherein the substrate carries a grid pattern of known dimensions.

45. The method of claim 42 wherein the source of light is disposed on the other side of the substrate opposite the object.

46. The method of claim 42 wherein the substrate has a retro-reflective surface on the same side of the substrate as the object and the source of light is disposed on the same side of the substrate as the object.

47. An apparatus for detecting and measuring edges of an object comprising: a substrate for supporting an object on one side of the substrate; a light source for illuminating the substrate; means for capturing an image of light reflected from the object; means for identifying edges of the object on the substrate; and means for calculating the length of the edges.

48. The apparatus of claim 47 wherein the substrate transmits incident light and the source of light is disposed on the other side of the substrate opposite the object.

49. The apparatus of claim 47 wherein the substrate has a retro-reflective surface on the same side of the substrate as the object and the source of light is disposed on the same side of the substrate as the object.

50. The apparatus of claim 47 wherein the substrate carries a grid pattern of known dimensions.

51. The apparatus of claim 47 further comprising means for weighing the object.

52. The apparatus of claim 47 further comprising means for generating and recording data signals representative of the dimensions of the object and the weight of the object.

53. A method for detecting and measuring edges of an object comprising the steps of: disposing an object on one surface of a substrate bearing indicia of known dimensions or known locations with respect to each other; capturing an image of the object and the indicia; identifying edges of the object; and calculating the length of the edges of the object.

54. A portable data acquisition device comprising: a housing; a camera in said housing for capturing an image of an object on a substrate bearing indicia representative of known dimensions or known locations with respect to each other, said image comprising a plurality of data signals; a processor disposed in the housing for receiving the data signals representative of the captured image; and software means disposed in the processor for analyzing the image data signals to identify edges of the object and for calculating the length of the edges in accordance with the dimensional indicia captured in the image.

55. A system for acquiring data about an object comprising: a housing; a camera in said housing for capturing an image of an object on a substrate bearing indicia representative of known dimensions or known locations with respect to each other, said image comprising a plurality of data signals; means disposed in said housing for transmitting data signals representative of the captured image; and a remote receiver for receiving the transmitted image data signals and having a processor and software means for analyzing the image data signals to identify edges of the object and for calculating the length of the edges in accordance with the dimensional indicia captured in the image.

56. A tape measuring apparatus comprising: a housing holding a roller and having an opening for passage of measuring tape; a measuring tape coiled on the roller, said tape having indicia representative of the length of the tape, said tape having one end extending through said opening; a monitor disposed proximate the opening for generating signals representative of the length of the tape withdrawn from the housing; and a transmitter for wirelessly sending signals representative of the distance the tape is withdrawn through said opening.

57. The tape measuring apparatus of claim 56 wherein the indicia on the tape comprise regions of sequential and alternating mechanical, optical, electric, magnetic or chemical properties and the monitor has means for detecting the alternating properties and counting the sequential and alternating regions to generate signals representative of the length of the tape withdrawn from the housing.

58. The tape measuring apparatus of claim 56 wherein the indicia on the tape comprise regions of sequential and alternating intensity or sequential and alternating color.

59. The tape measuring apparatus of claim 56 wherein the means for counting the number of indicia comprises a light source and a photodetector for detecting regions of sequential and alternating intensity or sequential and alternating color.

60. A system for measuring dimensions of an object comprising: a tape measuring apparatus comprising: a housing holding a roller and having an opening for passage of measuring tape; a measuring tape coiled on the roller, said tape having indicia representative of the length of the tape, said tape having one end extending through said opening; an indicia signal generator proximate the opening for generating signals representative of indicia passing through said opening as a length of the tape is withdrawn from the housing; means for generating one or more dimension signals representative of the cumulative length of the tape withdrawn from the housing and wirelessly transmitting said dimension signals; and a receiver for wirelessly receiving the dimension signals and for recording said signals in a memory.

61. The system of claim 60 wherein the indicia on the tape comprises regions of sequential and alternating mechanical, optical, electric, magnetic or chemical properties and the monitor has means for detecting the alternating properties and counting the sequential and alternating regions to generate signals representative of the length of the tape withdrawn from the housing.

62. The system of claim 60 wherein the indicia on the tape comprises regions of sequential and alternating intensity or sequential and alternating color.

63. The system of claim 60 wherein the indicia on the tape comprises regions of sequential and alternating intensity or sequential and alternating color.

64. The system of claim 60 wherein the means for counting the number of indicia comprises a light source and a photodetector for detecting regions of sequential and alternating intensity or sequential and alternating color.

65. The system of claim 60 wherein the monitor further comprises means for counting the number of indicia read by the monitor to determine the length of the tape extending beyond the opening.

66. A reader comprising: a tape measuring apparatus comprising: a housing holding a roller and having an opening for passage of measuring tape; a measuring tape coiled on the roller, said tape having indicia representative of the length of the tape, said tape having one end extending through said opening; an indicia signal generator disposed proximate the opening for generating signals representative of indicia passing through said opening as a length of the tape is withdrawn from the housing; means for generating one or more dimension signals representative of the cumulative length of the tape withdrawn from the housing and wirelessly transmitting said dimension signals; and means for recording the dimension signals.

67. The reader of claim 66 wherein the indicia on the tape comprise regions of sequential and alternating mechanical, optical, electric, magnetic or chemical properties and the monitor has means for detecting the alternating properties and counting the sequential and alternating regions to generate signals representative of the length of the tape withdrawn from the housing.

68. The reader of claim 66 wherein the indicia on the tape comprise regions of sequential and alternating intensity or sequential and alternating color.

69. The reader of claim 66 wherein the indicia on the tape comprise regions of sequential and alternating intensity or sequential and alternating color.

70. The reader of claim 66 wherein the indicia signal generator comprises a light source and a photodetector for detecting regions of sequential and alternating intensity or sequential and alternating color.

71. The reader of claim 66 further comprising a bar code reader or an image capture device or both.

72. A portable data acquisition device comprising: a housing; in said housing, means for acquiring data from indicia on a surface of an object; on one surface of said housing a computer mouse; and means for generating signal(s) representative of the distance traveled from start to stop of the mouse along the object.

73. The portable data acquisition device of claim 72 wherein the means for acquiring data from indicia on the surface of an object comprises a bar code reader or image capture device.

74. The portable data acquisition device of claim 72 wherein the computer mouse is an optical mouse and the data acquisition device comprises a common light source for acquiring data from the surface of an object and for operating the optical mouse.

75. The portable data acquisition device of claim 72 wherein the computer mouse comprises a cradle holding a wheel disposed on a shaft and an encoder proximate the shaft for generating signals representative of the motion of the wheel in a direction perpendicular to the shaft.

76. The portable data acquisition device of claim 72 wherein a guide extends from one surface of the housing to bear against an edge of the object to guide the portable acquisition device along said edge of said object.

77. The portable data acquisition device of claim 76 wherein the guide is a roller or a guide rail.

78. The portable data acquisition device of claim 72 further comprising means for recording the cumulative distance a mouse moves from a start point to a finish point.

79. The portable data acquisition device of claim 78 further comprising one or more buttons mounted on the housing and responsive to the manual operation to set the start and finish points of a distance measurement.

80. A portable data acquisition device comprising: a housing; in said housing, means for acquiring data from indicia on a surface of an object; and accelerometer means mounted in the housing and responsive to movement of the housing to calculate the distance the housing moves.

81. The portable data acquisition device of claim 80 further comprising a guide extending from one surface of the housing to bear against an edge of the object to guide the portable acquisition device along said edge of said object.

82. The portable data acquisition device of claim 81 wherein the guide is a roller or a guide rail.

Description:

OPTICAL READER SYSTEM FOR EXTRACTING INFORMATION IN A DIGITAL IMAGE

FIELD OF THE INVENTION

The present invention relates to indicia reading devices, and more particularly to a method of extracting information from an image provided by an indicia reading device including one or more parameters of an object bearing such image.

BACKGROUND

Indicia reading devices (also referred to as readers, bar code readers, etc.) typically read data represented by printed indicia (also referred to as symbols, symbologies, bar codes, graphic symbols, etc.). For instance, one type of symbol is an array of rectangular bars and spaces that are arranged in a specific way to represent elements of data in machine readable form. Optical indicia reading devices typically transmit light onto a symbol and receive light scattered and/or reflected back from a bar code symbol or indicia. The received light is interpreted by an image processor to extract the data represented by the symbol. Laser indicia reading devices typically utilize transmitted laser light.

One-dimensional (1D) optical bar code readers are characterized by reading data that is encoded along a single axis, in the widths of bars and spaces, so that such symbols can be read from a single scan along that axis, provided that the symbol is imaged with a sufficiently high resolution along that axis.

In order to allow the encoding of larger amounts of data in a single bar code symbol, a number of 1D stacked bar code symbologies have been developed which partition encoded data into multiple rows, each including a respective 1D bar code pattern, all or most all of which must be scanned and decoded, then linked together to form a complete message. Scanning still requires relatively higher resolution in one dimension only, but multiple linear scans are needed to read the whole symbol.

A class of bar code symbologies known as two dimensional (2D) matrix symbologies have been developed which offer orientation-free scanning and greater data densities and capacities than 1D symbologies. 2D matrix codes encode data as dark or light data elements within a regular polygonal matrix, accompanied by graphical finder, orientation and reference structures. Oftentimes an optical reader may be portable and wireless in nature thereby providing added flexibility. In these circumstances, such readers form part of a wireless network in which data collected within the terminals is communicated to a host computer situated on a hardwired backbone via a wireless link. For example, the readers may include a radio or optical transceiver for communicating with a network computer.

Conventionally, a reader, whether portable or otherwise, may include a central processor which directly controls the operations of the various electrical components housed within the bar code reader. For example, the central processor controls detection of keyboard entries, display features, wireless communication functions, trigger detection, and bar code read and decode functionality.

Efforts regarding such systems have led to continuing developments to improve their versatility, practicality and efficiency.

SUMMARY

The invention has a number of embodiments for solving a problem of limited information gathered by reader devices. While reader devices are useful to decode information in one or more graphic codes, a user often requires more data to handle a package bearing such graphic codes. For example, a user often desires knowledge of salient features on a form or the size and weight of the package that bears the graphic code.

The invention provides one or more embodiments that read conventional information bearing indicia (e.g., bar codes and Aztec codes) and use image analysis, in particular software familiar from machine vision technology, to extract information from an object using a hand held instrument. The information includes but is not limited to measurements of dimensions and weight of packages. Some embodiments measure one parameter and other embodiments measure two or more parameters. Some embodiments provide stand alone apparatus for acquiring data about one or more parameters. Some embodiments operate independent of networks and others rely upon client/server networks for full operation. These and other features will be shown with reference to the attached drawings and the following detailed disclosure.

BRIEF DESCRIPTION OF THE DRAWINGS

Fig. 1 is a perspective view of an exemplary PDA in accordance with the present invention.

Fig. 2 is a fragmentary partially cutaway side view of an exemplary PDA in accordance with the present invention.

Fig. 3 is a block schematic diagram of an exemplary PDA in accordance with the present invention.

Fig. 4 is a flowchart of an exemplary method of operating a PDA system in accordance with the present invention.

Fig. 5.1 is a block schematic diagram of an exemplary PDA system in accordance with the present invention.

Fig. 5.2 shows a reference prop and a table with details of the reference marks used for the prop.

Fig. 5.3 shows further details of the reference marks.

Figs. 5.4 and 5.5 show a form and a distorted form image, respectively.

Figs. 6A and 6B are exemplary diagrams of a measuring tape and a wireless measuring tape holder in accordance with the present invention.

Fig. 7 is a partially broken away view of an exemplary PDA having a tape measure in accordance with the present invention.

Fig. 8 is a partially broken away view of the back of a PDA having a ball-type computer mouse in accordance with the present invention.

Fig. 9 is a partially broken away view of the back of a PDA having an optical-type computer mouse in accordance with the present invention.

Figs. 10A-10D are views of a light box measurement apparatus in accordance with the present invention.

Fig. 11 is a view of a networked system with a scale in accordance with the present invention.

Fig. 12 is a view of a stand alone system with a scale in accordance with the present invention.

DETAILED DESCRIPTION


Reference will now be made to exemplary embodiments of the invention which are illustrated in the accompanying drawings. This invention, however, may be embodied in various forms and should not be construed as limited to the embodiments set forth herein. Rather, these representative embodiments are described in detail so that this disclosure will be thorough and complete, and will fully convey the scope, structure, operation, functionality, and potential applicability of the invention to those skilled in the art. Wherever possible, the same reference numbers will be used throughout the drawings to refer to the same or like parts.

Referring to Figs. 1 and 2, a personal data acquisition device or reader device, such as a personal digital assistant (PDA) 112, portable data terminal (PDT), hand held scanner or image reader, mobile phone, cellular phone, or other device may be a platform for an image reading assembly 114 having the capability for capturing and reading images, some of which may have symbol indicia provided therein. Personal Digital Assistants (PDAs) are typically defined as handheld devices used as a personal organizer, and having many uses such as reading information bearing indicia, calculating, use as a clock and calendar, playing computer games, accessing the Internet, sending and receiving E-mails, use as a radio or stereo, video recording, recording notes, use as an address book, and use as a spreadsheet. A plurality of buttons or keys 115 may be used to control operation of the PDA and the imaging reader assembly 114. A display 116 may be utilized to provide a graphical user interface (GUI).

PDAs may be equipped with the ability to query and receive and transmit data, such as information extracted from an image via a communication link, such as by radio link or wired link.

A PDT is typically an electronic device that is used to enter or retrieve data via wireless transmission (WLAN or WWAN) and may also serve as an indicia reader used in a store, warehouse, hospital, or in the field to access a database from a remote location.

The PDA 112 may be a Hand Held Products Dolphin® series or the like and may include a cradle connected to a computer by a cable or wireless connection to provide two-way data communication there between. The computer may be replaced with a different processing device, such as a data processor, a laptop computer, a modem or other connection to a network computer server, an internet connection, or the like. The PDA may include a display and keys mounted in a case to activate and control various features on the PDA. The display may be a touch screen LCD that allows the display of various icons representative of different programs available on the PDA which may be activated by finger pressure or the touch of a stylus. The display may also be used to show indicia, graphs, tabular data, animation, or the like.

Referring to Fig. 5, imaging reader assembly 114 may have an aiming pattern generator 130, illumination assembly 142, and imaging assembly 150. The aiming pattern generator is part of a reader. One or more LEDs generate light. Known optics shape the LED output into a cross that is projected onto the surface of a target package or form. The optics also form a corner to indicate on the target the limit of the image that can be captured by the reader. The projected aiming pattern remains on the target while the user captures the image.

In one embodiment the user programs the reader to store reference features and salient features of a form. Reference features may be cross hairs or other registration indicia printed on the form. The reference forms are known and data corresponding to the reference features are stored in a memory in the reader. The reader has software that compares the image data of the form to the stored reference features to locate the features. Once the reference features are located, the salient features can be found using simple geometry because the salient features are always in the same relative location on the form with respect to the reference features. Such reference features are more easily located by machine vision software because they are so distinctive. Once located, the reference features are used by the software to orient the image and find the salient features.

Examples of salient features include image bearing indicia or a graphic code to distinguish one form from another identical form. The type of unique code, bar code or Aztec code, can be identified as a salient feature and software in the reader can analyze images to identify the graphic code. Other salient features include check boxes. The user may record an image of the boxes and whether or not the boxes are checked. Another salient feature is a signature block. The user often wants to record an image of the signature. The pattern of the graphic code, the check boxes, the signature block and other salient form features are prestored in the memory of the reader for later identification on the target forms. Once the reader captures an image of the form, known software routines for pattern recognition analyze the captured image to identify the salient features in the form. Those salient features can be forwarded via a wired or wireless network link to a network server. The server may record the salient features in a network database that stores only the salient features relevant to the form. Since the form has large quantities of preprinted, known data, the user does not need to record the known, standard data for each form. Instead, the user has an efficient reader that recognizes salient features and records those and discards the rest of the image.

With reference to Figs. 5.4 and 5.5 there is shown a form 550. It has a number of reference markings, including squares 561, crosses 562, information bearing indicia (graphic symbols) 563, and an "X" for a signature line 564. A salient feature includes signature block 565. In Fig. 5.5, the form 550 is shown with perspective distortion. The invention uses the reference markings 561-564 to orient the image and thereby locate the salient feature signature block 565.
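
As an illustration of this orientation step only, the following Python sketch (not taken from the patent) assumes OpenCV and NumPy are available and that the pixel coordinates of the four reference marks have already been located in the captured image; the nominal mark positions, form size, and signature-block coordinates are invented for the example.

```python
# Hypothetical sketch: rectify a distorted form image using four located
# reference marks, then crop a salient feature at its known location.
import cv2
import numpy as np

def rectify_form(image, found_marks, form_size=(850, 1100)):
    """Warp the captured form so the four reference marks land at nominal positions."""
    w, h = form_size
    # Nominal (template) positions of the marks, in the same order as `found_marks`:
    # top-left, top-right, bottom-right, bottom-left.
    nominal = np.float32([[50, 50], [w - 50, 50], [w - 50, h - 50], [50, h - 50]])
    found = np.float32(found_marks)
    H = cv2.getPerspectiveTransform(found, nominal)
    return cv2.warpPerspective(image, H, (w, h))

def crop_salient(rectified, box):
    """Extract a salient feature (e.g., the signature block) at its stored location."""
    x, y, w, h = box
    return rectified[y:y + h, x:x + w]

# Example with made-up coordinates:
# form = cv2.imread("captured_form.png")
# flat = rectify_form(form, [(120, 90), (760, 130), (735, 1010), (95, 980)])
# signature = crop_salient(flat, (400, 900, 300, 120))
```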

Illumination and aiming light sources with different colors may be employed. For example, in one such embodiment the image reader may include white and red LEDs, red and green LEDs, white, red, and green LEDs, or some other combination chosen in response to, for example, the color of the symbols most commonly imaged by the image reader. Different colored LEDs may be each alternatively pulsed at a level in accordance with an overall power budget.

Aiming pattern generator 130 may include a power supply 131, light source 132, aperture 133 and optics 136 to create an aiming light pattern projected on or near the target which spans a portion of the receive optical system 150 operational field of view with the intent of assisting the operator to properly aim the reader at the bar code pattern that is to be read. A number of representative generated aiming patterns are possible and not limited to any particular pattern or type of pattern, such as any combination of rectilinear, linear, circular, elliptical, etc. figures, whether continuous or discontinuous, i.e., defined by sets of discrete dots, dashes and the like.

Generally, the aiming light source may comprise any light source which is sufficiently small or concise and bright to provide a desired illumination pattern at the target. For example, light source 132 for aiming generator 130 may comprise one or more LEDs 134, such as part number NSPG300A made by Nichia Corporation.

The light beam from the LEDs 132 may be directed towards an aperture 133 located in close proximity to the LEDs. An image of this back illuminated aperture 133 may then be projected out towards the target location with a lens 136. Lens 136 may be a spherically symmetric lens, an aspheric lens, a cylindrical lens or an anamorphic lens with two different radii of curvature on their orthogonal lens axes. Alternatively, the aimer pattern generator may be a laser pattern generator.

The light sources 132 may also be comprised of one or more laser diodes such as those available from Rohm. In this case a laser collimation lens (not shown in these drawings) will focus the laser light to a spot generally forward of the scanning head and approximately at the plane of the target T. This beam may then be imaged through a diffractive interference pattern generating element, such as a holographic element fabricated with the desired pattern in mind. Examples of these types of elements are known, commercially available items and may be purchased, for example, from Digital Optics Corp. of Charlotte, N.C. among others. Elements of some of these types and methods for making them are also described in U.S. Pat. Nos. 4,895,790 (Swanson); 5,170,269 (Lin et al.) and 5,202,775 (Feldman et al.), which are hereby incorporated herein by reference.

Illumination assembly 142 for illuminating target area T may include one or more power supplies 144, illumination sources 146 and illumination optics 148.

Imaging assembly 150 may have receive optics 152 and an image sensor 154.

The receive optics 152 has a focal point wherein parallel rays of light coming from infinity converge at the focal point. If the focal point is coincident with the image sensor, the target (at infinity) is "in focus". A target T is said to be in focus if light from points on the target is converged about as well as desired at the image sensor. Conversely, it is out of focus if light is not well converged. "Focusing" is the procedure of adjusting the distance between the receive optics and the image sensor to cause the target T to be approximately in focus.
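
For reference, the focusing relationship described here is the standard thin-lens equation (general optics, not language taken from this disclosure):

\[ \frac{1}{f} = \frac{1}{d_o} + \frac{1}{d_i} \]

where f is the focal length of the receive optics, d_o is the distance from the optics to the target T, and d_i is the distance from the optics to the image sensor. As d_o approaches infinity, d_i approaches f, which matches the statement that parallel rays from infinity converge at the focal point.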

The target may be any object or substrate and may bear a 1D or 2D bar code symbol or text or other machine readable indicia. A trigger 115 may be used for controlling full or partial operation of the PDA 112.

Image sensor 154 may be a two-dimensional array of pixels adapted to operate in a global shutter or full frame operating mode which is a color or monochrome 2D CCD, CMOS, NMOS, PMOS, CID, CMD, etc. solid state image sensor. This sensor contains an array of light sensitive photodiodes (or pixels) that convert incident light energy into electric charge. Solid state image sensors allow regions of a full frame of image data to be addressed. An exemplary CMOS sensor is model number MT9V022 from Micron Technology Inc. or model number VC5602V036 36CLCC from STMicroelectronics.

Further description of image sensor operation is provided in commonly owned U.S. Patent Application Serial Number 11/077,995 entitled "BAR CODE READING DEVICE WITH GLOBAL ELECTRONIC SHUTTER CONTROL" filed on March 11, 2005, which is hereby incorporated herein by reference in its entirety.

In a full frame (or global) shutter operating mode, the entire imager is reset before integration to remove any residual signal in the photodiodes. The photodiodes (pixels) then accumulate charge for some period of time (exposure period), with the light collection starting and ending at about the same time for all pixels. At the end of the integration period (time during which light is collected), all charges are simultaneously transferred to light shielded areas of the sensor. The light shield prevents further accumulation of charge during the readout process. The signals are then shifted out of the light shielded areas of the sensor and read out.

Features and advantages associated with incorporating a color image sensor in an imaging device, and other control features which may be incorporated in a control circuit are discussed in greater detail in U.S. Patent No. 6,832,725 entitled "An Optical Reader Having a Color Imager", incorporated herein by reference. It is to be noted that the image sensor 154 may read images with illumination from a source other than illumination source 146, such as by illumination from a source located remote from the PDA.

The output of the image sensor may be processed utilizing one or more functions or algorithms to condition the signal appropriately for use in further processing downstream, including being digitized to provide a digitized image of target T.

A microcontroller 160 may be utilized to control one or more functions and devices of the image reader assembly 114 wherein the particulars of the functionality of microcontroller 160 may be determined by or based upon certain parameters which may be stored in memory or firmware. One such function may be controlling the amount of illumination provided by illumination source 146 by controlling the output power provided by illumination source power supply 144.

An exemplary microcontroller 160 is a CY8C24223A made by Cypress Semiconductor Corporation, which is a mixed-signal array with on-chip controller devices designed to replace multiple traditional MCU-based system components with one single-chip programmable device. It may include configurable blocks of analog and digital logic, as well as programmable interconnects.

Microcontroller 160 may include a predetermined amount of memory 162 for storing firmware and data. The firmware may be a software program or set of instructions embedded in or programmed on the microcontroller which provides the necessary instructions for how the microcontroller operates and communicates with other hardware. The firmware may be stored in the flash ROM of the microcontroller as a binary image file and may be erased and rewritten. The firmware may be considered "semi-permanent" since it remains the same unless it is updated. This firmware update or load may be handled by a device driver.

The components in reader 112 may be connected by one or more buses 168 or data lines, such as an Inter-IC (I2C) bus, which is a control bus that provides a communications link between integrated circuits in a system. This bus may connect to a remote host computer, server, or processor in relatively close proximity, on or off the same printed circuit board as used by the imaging device. I2C is a two-wire serial bus with a software-defined protocol and may be used to link such diverse components as the image sensor 154, temperature sensors, voltage level translators, EEPROMs, general-purpose I/O, A/D and D/A converters, CODECs, and microprocessors/microcontrollers.

The functional operation of the host processor or local server 118 may involve the performance of a number of related steps, the particulars of which may be determined by or based upon certain parameters stored in memory 166 which may be any one of a number of memory types such as RAM, ROM, EEPROM, etc. In addition, some memory functions may be stored in memory 162 provided as part of the microcontroller 160.

One of the functions of the host processor 118 may be to decode machine readable symbols provided within the target or captured image. One dimensional symbols may include very large to ultra-small, Code 128, Interleaved 2 of 5, Codabar, Code 93, Code 11, Code 39, UPC, EAN, and MSI. Stacked 1D symbols may include PDF, Code 16K and Code 49. 2D symbols may include Aztec, Datamatrix, Maxicode, and QR-code. UPC/EAN bar codes are standard codes to mark retail products throughout North America, Europe and several other countries throughout the world. Decoding is a term used to describe the interpretation of a machine readable code contained in an image projected on the image sensor 154. The code has data or information encoded therein. Information respecting various reference decode algorithms is available from various published standards, such as by the International Standards Organization ("ISO").
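
As a rough illustration of the decode step only, the sketch below uses the open-source pyzbar library as a stand-in for the host processor's decoder; the file name and example output are hypothetical.

```python
# Illustrative only: decode 1D/2D symbols from a captured frame using pyzbar
# as a stand-in for the reader's own decode software.
from PIL import Image
from pyzbar.pyzbar import decode

def decode_symbols(path):
    """Return (symbology, data) pairs found in the image at `path`."""
    results = decode(Image.open(path))
    return [(r.type, r.data.decode("utf-8", errors="replace")) for r in results]

# Example: decode_symbols("package_label.png") might yield
# [("CODE128", "0123456789"), ("QRCODE", "https://example.com/track/42")]
```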

In an alternate example, information from the indicia may be preliminarily reviewed or analyzed utilizing software provided in on-board memory (i.e. 162 or other) on the reader 112 and processed by an on-board device such as microcontroller 160. The preliminary review would identify whether upgrade software is available and perhaps where to access it.

A communications module 180 provides a communication link from imaging reader 114 to other imaging readers or to other remote systems such as host processor 118, memory 166, communication network 120, or network computer 124.

A further detailed description of indicia reading operation is disclosed in commonly owned published United States Patent Application Publication No. 20030029917 entitled OPTICAL READER FOR IMAGING MODULE, United States Patent Application Publication No. 20030019934 entitled OPTICAL READER AIMING ASSEMBLY COMPRISING APERTURE, and United States Patent Application Publication No. 20040134989 entitled DECODER BOARD FOR AN OPTICAL READER UTILIZING A PLURALITY OF IMAGING FORMATS, which are hereby incorporated herein by reference.

The information bearing indicia with upgrade data may be considered sensitive information; it may therefore be required that the data be encrypted, wherein the information bearing indicia can be read, but the data in the information bearing indicia is encrypted. Encryption is the conversion of data into a form that cannot be easily understood by unauthorized people. A decrypting algorithm would be required to decrypt such data. Decryption is the process of converting encrypted data back into its original form, so it can be understood. Operation of the decrypting algorithm requires the use of a "key". Encryption key(s) may be secret keys, private keys, or public keys. This encryption key may be provided in the scanner firmware, the host device, in the encrypted barcode or in a separate barcode, which allows the user to decide whether to separate the encryption key from the data or combine them. Encryption keys may be associated by mathematical derivation, symmetry, or other relationship. Encryption keys may be updated by pushing the key to the scanner from the host device, or by scanner to scanner communication as discussed hereinbefore.

For example, the scanner may be able to recognize the information bearing indicia as an encrypted information bearing indicia by recognizing a unique unencrypted piece of a data string provided within the information bearing indicia. That same piece of data may also instruct the scanner where to look for the encryption key.

The information bearing indicia may be partially encrypted, which may allow the user only to read an unencrypted part of the information bearing indicia with any scanner. A data formatter may be utilized to strip encrypted data portions before further processing. If the encryption key matches the encrypted information bearing indicia and decoding is completed, the scanner will successfully "read" the data in the information bearing indicia.
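
The following Python sketch illustrates the partial-encryption idea in a hedged way: the payload layout, the "ENC:" marker, and the use of Fernet from the `cryptography` package are assumptions made for the example, not details taken from the disclosure.

```python
# Hedged sketch: a barcode payload with a clear-text prefix plus an encrypted
# portion, decrypted only when a matching key is available.
from typing import Optional, Tuple
from cryptography.fernet import Fernet, InvalidToken

MARKER = b"ENC:"  # hypothetical tag separating clear text from encrypted data

def read_indicia(payload: bytes, key: Optional[bytes]) -> Tuple[bytes, Optional[bytes]]:
    """Return (clear part, decrypted part or None) for a partially encrypted payload."""
    clear, _, secret = payload.partition(MARKER)
    if not secret or key is None:
        return clear, None                 # read only the unencrypted part
    try:
        return clear, Fernet(key).decrypt(secret)
    except InvalidToken:
        # Mismatched key: signal an "encryption protected" condition instead
        raise ValueError("encryption protected: key does not match indicia")

# Example: key = Fernet.generate_key(); token = Fernet(key).encrypt(b"upgrade data")
# read_indicia(b"PART-1234" + MARKER + token, key) -> (b"PART-1234", b"upgrade data")
```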

If a mismatch between the encryption key and the information bearing indicia is noticed, the scanner may have an "encryption protected" routine with a different sequence of LED blinking/beeps, different from an unsuccessful scanner read type situation.

Fig. 5 illustrates an exemplary scanning system configuration in accordance with one embodiment of the present invention, wherein a plurality of readers 112A, 112B are being operated or utilized in a remote location, such as in a warehouse or on a delivery truck. Each reader may be in communication (wired or wireless) with a communication network 120. The communication network 120 may be in communication with a remote/web server 134 through a wired or wireless connection for the transfer of information over a distance without the use of electrical conductors or "wires". The distances involved may be short (a few meters as in television remote control) or very long (thousands or even millions of kilometers for radio communications). Wireless communication may involve radio frequency communication. Applications may involve point-to-point communication, point-to-multipoint communication, broadcasting, cellular networks and other wireless networks. This may involve: cordless telephony such as DECT (Digital Enhanced Cordless Telecommunications); Cellular systems such as 0G, 1G, 2G, 3G or 4G; Short-range point-to-point communication such as IrDA or RFID (Radio Frequency Identification), Wireless USB, DSRC (Dedicated Short Range Communications); Wireless sensor networks such as ZigBee; Personal area networks such as Bluetooth or Ultra-wideband (UWB from WiMedia Alliance); Wireless computer networks such as Wireless Local Area Networks (WLAN), IEEE 802.11 branded as WiFi or HIPERLAN; or Wireless Metropolitan Area Networks (WMAN) and Broadband Fixed Access (BWA) such as LMDS, WiMAX or HIPERMAN.

The Internet is the worldwide, publicly accessible network of interconnected computer networks that transmit data by packet switching using the standard Internet Protocol (IP). It is a "network of networks" that consists of millions of smaller domestic, academic, business, and government networks, which together carry various information and services, such as electronic mail, online chat, file transfer, and the interlinked Web pages and other documents of the World Wide Web. The IP is a data-oriented protocol used for communicating data across a packet-switched internetwork, and may be a network layer protocol in the internet protocol suite and encapsulated in a data link layer protocol (e.g., Ethernet). As a lower layer protocol, the IP provides the service of communicable unique global addressing amongst computers to provide a service not necessarily available with a data link layer.

Ethernet provides globally unique addresses and may not be globally communicable (i.e., two arbitrarily chosen Ethernet devices will only be able to communicate if they are on the same bus). IP provides final destinations with data packets whereas Ethernet may only be concerned with the next device (computer, router, etc.) in the chain. The final destination and next device could be one and the same (if they are on the same bus) but the final destination could be remotely located. IP can be used over a heterogeneous network (i.e., a network connecting two computers can be any mix of Ethernet, ATM, FDDI, Wi-Fi, token ring, etc.) and does not necessarily affect upper layer protocols.

One or more PDAs may be outfitted with a communication module configured to communicate with other PDAs that have an appropriate type of communication module. One or more PDAs may be configured to communicate with a base unit 138 configured to interface between the PDA and the communication network.

In the case of a mobile hand held optical PDA hardwired to its individual base unit, this link between the PDA and base unit is fixed and permanent. In the case of a wireless mobile hand held optical PDA that communicates wirelessly with its individual base unit, this link can be made by programming the PDA with information identifying the particular base unit so the PDA directs its transmitted information to that base unit, or vice versa.

A Portable Data Terminal, or PDT, is typically an electronic device that is used to enter or retrieve data via wireless transmission (WLAN or WWAN) and may also serve as an indicia reader used in a store, warehouse, hospital, or in the field to access a database from a remote location.

The term "scan", "scanning" or "reading" used herein refers to reading or extracting data from information bearing indicia or symbol.

Referring to Fig. 4, an exemplary method of using a hand held image reader 112 is illustrated. A target is imaged in a step 310. The target may take on many forms, such as a package, box, container, etc. The image reader 112 and the target may be positioned such that three surfaces of the target appear in the captured image. A processor may then identify or determine (314) from the image outer edges of the package. For instance, the processor may identify three or more outer edges 304a-g. The processor may then identify or determine (318) from the image corners 306a-g of the target, corners being representative of the intersection of edges. The processor may then calculate or determine (320) from the edges and corners dimensions of the package, such as height, width or depth. This determination may be made by translating (324) image sensor pixels into true distance.
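
The following Python sketch is not the patent's algorithm; it only illustrates what the edge and corner steps (314, 318) might look like using OpenCV's Canny edge detector and probabilistic Hough transform, with thresholds invented for the example.

```python
# Rough sketch of edge and corner detection on a captured package image.
import cv2
import numpy as np

def find_edges_and_corners(image_path):
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    edges = cv2.Canny(gray, 50, 150)
    # Detect straight line segments approximating the package's outer edges.
    segments = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                               minLineLength=60, maxLineGap=10)
    lines, corners = [], []
    if segments is not None:
        for x1, y1, x2, y2 in segments[:, 0]:
            lines.append(((x1, y1), (x2, y2)))
            corners.extend([(x1, y1), (x2, y2)])   # endpoints approximate corners
    return lines, corners

def pixel_length(p, q):
    """Length of an edge in pixels; translated to true distance in a later step."""
    return float(np.hypot(q[0] - p[0], q[1] - p[1]))
```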

An exemplary method of performing this translation is to image the target on a surface or in an environment that has a pattern or other marks which includes indicators of known dimensions or known distances with respect to each other. In one embodiment, the indicators are black bars of known dimensions, square or rectangular, on a white surface. In another embodiment, the indicators are one or more concentric black rings or a central black dot surrounded by one or more concentric black rings. See, for example, Figs. 5.2 and 5.3. In Fig. 5.2 a prop 500 has three walls 510, 511, 512 at right angles to each other to form an inside corner 505 at the intersection of the planes of the three walls. The edges of the walls have one or more reference marks designated as Type I 501 and Type II 502. In operation, a rectangular package has one corner placed in inside corner 505 with the package end and side against walls 513 and 510. Now the package is oriented and the location of the package with respect to the markings 501, 502 can be determined by analysis of an image of the package and the prop. The table in Fig. 5.2 shows the type, number and locations of the reference markings. Details of the markings are shown in Fig. 5.3.

When an object, such as a rectangular package, is placed on a surface with indicators, the image of the object and the indicators is processed to identify the edges of the object using well-known edge detection image analysis techniques. The indicators provide standards for measuring the size of the pixels in the image. Oblique images suffer from perspective shortening, so portions of objects farther from the camera appear physically closer than they are. These errors are known and there are conventional software techniques that use geometry and trigonometry to scale the oblique image in order to measure the length of an edge of the object. In general, the average height of the camera is known and the distance between the camera and the object can be calculated using known parametric values of the camera.
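
A minimal sketch of the pixel-to-distance translation follows, assuming a reference mark of known size appears in the same image; the 25 mm mark width is an invented example, not a dimension from the prop of Fig. 5.2.

```python
# Convert pixel measurements to true distance using a reference mark of known size.
def scale_from_reference(mark_pixel_width, mark_true_width_mm=25.0):
    """Millimetres represented by one pixel near the reference mark."""
    return mark_true_width_mm / mark_pixel_width

def edge_length_mm(edge_pixel_length, mm_per_pixel):
    return edge_pixel_length * mm_per_pixel

# Example: a 25 mm square mark spanning 80 pixels gives 0.3125 mm/pixel,
# so a 640-pixel edge measures roughly 200 mm before oblique correction.
```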

Another exemplary method of performing this translation is to determine the distance from the target to the sensor and calculate using the known focal length of the lens of the imager.

Another exemplary method of performing this translation is to utilize two cameras with slightly different optical axis angles to the target and then calculate by triangulating the image in a manner analogous to human sight utilizing two eyes.

Another exemplary method of performing this translation is to project an aiming pattern on the target at a known angle. The processor may calculate dimensions by triangulating the length of the known dimension of the aiming pattern at specific distances with the imager optical axis.

Another exemplary method of performing this translation is to project an aiming pattern on the target and measure the time until the reflection of the aiming pattern reaches the imager, similar to a radar system.
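
A minimal sketch of the time-of-flight calculation implied by the radar analogy above; the round-trip time used is illustrative:

# Sketch of the time-of-flight idea: the round-trip time of the projected
# aiming pattern gives the range, as in a radar system. Purely illustrative.

SPEED_OF_LIGHT_M_PER_S = 299_792_458.0

def range_from_round_trip(round_trip_seconds):
    """Half the round-trip distance is the one-way range to the target."""
    return SPEED_OF_LIGHT_M_PER_S * round_trip_seconds / 2.0

if __name__ == "__main__":
    print(range_from_round_trip(6.67e-9))  # roughly 1 metre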

Another exemplary method of performing this translation is to project two approximately parallel aiming patterns onto the target with a known distance between them, which remains the same distance when reflected from the target.

An exemplary embodiment for an image reader system is to take an image of a vehicular license plate as discussed above in relation to Fig. 5. The image may be analyzed with optical character recognition to determine the alphanumeric characters of the plate. Once the letters and numbers are recognized, they may

be stored in memory, and compared to a lookup table to determine the vehicle owner. In an exemplary indicia reader network a reader scans a vehicle's state license plate. The reader output signal, containing the information in the alphanumeric characters, is wirelessly communicated to a local host which may decode the data message for further processing. The local host may be located in relatively close proximity, such as a toll booth or another vehicle, such as a police or other governmental agency vehicle. The local host may communicate the license data message to a remote server. The remote server may perform a variety of functions and responsibilities, such as decoding, or accessing information to compare the indicia information against information in government databases such as motor vehicle departments, customs, the justice department, the BATF, police departments, etc. The information retrieved from the databases may be vehicle or operator registration information, driving or other records. The remote server may be linked to another remote server or computer so that another person may provide remote help or service. The remote server may reference a third party database, cull information, make comparisons and determinations, alert establishment personnel and security, etc. and send back a result. The remote server may also record the information to another database for record keeping purposes.

If the reader is an optical reader, it may take an archival picture of the vehicle, operator, or passengers which may be saved by the remote server. Information read from indicia or the picture taken may be used to electronically complete various types of forms, such as traffic tickets, statutorily required forms, etc. The process of extracting the information from the picture might include OCR, a 2D barcode decoder such as a PDF417 decoder, or a matrix code decoder such as a Datamatrix, Aztec, or QR code decoder, etc.

Multiple indicia may be provided on the license plate in order to provide the capacity to read more information than is allowable in alphanumeric characters or a single indicia.

In another example, the scanner might read the vehicle operator's information from the indicia (such as a PDF417 bar code) on the operator's driver's license. The information may then be compared with information associated with the indicia on the vehicle license plate. Information from the operator's driver license may also be utilized to populate forms, such as traffic tickets, statutorily required forms, etc. Such a system would be more convenient while reducing processing time and the application error rate caused by incorrectly transcribed information. At the same time the scanner may be automatically changed to a picture taking mode, signal the operator to aim the scanner at the applicant, the driver's license, etc., and then take a picture. This picture could then be automatically added to or associated with a roadside transaction or stop, at toll booths, customs checkpoints, military checkpoints, airports, etc.

The reader may include a wireless transceiver, such as, for example, a wireless Bluetooth, IEEE 802.11b, ZigBee, or other standardized or proprietary RF device which may be configured to provide communications between the reader and the local host 118. The wireless transceiver may consist of an RF module and antenna (not shown) and is configured to engage in two-way communication with at least one other wireless transceiver. Another wireless transceiver may be located in the local host, which may be a stand-alone unit or physically incorporated into another host device such as a computer or similar device. The wireless transceiver may include an RF module and an antenna. The wireless transceiver may transmit decoded information to a wireless transceiver in the local host for secure transactions. The wireless communication protocol may be a secure protocol, such as the FIPS 140-2 standard.

The wireless devices may be configured for operation in a hostile environment and may be hermetically sealed units.

Information bearing indicia or alphanumeric characters may contain sensitive information such as component specifications, recipes or process data in a production environment, personal records, medical information in healthcare, social security numbers, biometrics, entrance and access keys, ticketing applications, or vouchers for discounts in retail, or the information bearing devices may be involved in transactions involving financial or private information. In these types of applications the data is generally at risk of being misused and/or used to perform criminal activity. A scanning system with security features may reduce such risks. For these applications it may be required that the data in an information bearing indicia be encrypted, wherein the information bearing indicia can be read, but the data in the information bearing indicia is encrypted. Encryption is the conversion of data into a form that cannot be easily understood by unauthorized people. A decrypting algorithm would be required to decrypt such data. Decryption is the process of converting encrypted data back into its original form, so it can be understood. Operation of the decrypting algorithm requires the use of a "key". Encryption key(s) may be secret keys, private keys, or public keys. The encryption key may be provided in the scanner firmware, the host device, the encrypted barcode, or a separate barcode, which allows the user to decide whether to separate the encryption key from the data or combine them. Encryption keys may be associated by mathematical derivation, symmetry, or other relationship. Encryption keys may be updated by pushing the key to the scanner from the host device, or by scanner to scanner communication as discussed hereinbefore.
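
A minimal sketch of the encrypt/decrypt flow, assuming a symmetric-key scheme; the disclosure does not name a particular cipher, so the third-party Python "cryptography" package and the helper names below are illustrative only:

# Minimal sketch of the encrypt/decrypt flow for indicia payloads, using a
# symmetric key. Fernet (from the third-party "cryptography" package) is used
# purely for illustration of keeping the key separate from the encoded data.
from cryptography.fernet import Fernet

def encode_payload(key: bytes, sensitive_data: bytes) -> bytes:
    """Encrypt the data that will be carried in the information bearing indicia."""
    return Fernet(key).encrypt(sensitive_data)

def decode_payload(key: bytes, indicia_payload: bytes) -> bytes:
    """Decrypt the payload after the indicia itself has been read and decoded."""
    return Fernet(key).decrypt(indicia_payload)

if __name__ == "__main__":
    key = Fernet.generate_key()        # could live in scanner firmware, the
                                       # host, or a separate barcode
    token = encode_payload(key, b"patient-id: 12345")
    print(decode_payload(key, token))  # b'patient-id: 12345'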

For example, the scanner may be able to recognize the information bearing indicia as an encrypted information bearing indicia by recognizing a unique unencrypted piece of a data string provided within the information bearing indicia. That same piece of data may also instruct the scanner where to look for the encryption key.

The information bearing indicia may be partially encrypted, which may allow the user to read only an unencrypted part of the information bearing indicia with any scanner. A data formatter may be utilized to strip encrypted data portions before further processing. If the encryption key matches the encrypted information bearing indicia and decoding is completed, the scanner will successfully "read" the data in the information bearing indicia.

If a mismatch between the encryption key and the information bearing indicia is detected, the scanner may run an "encryption protected" routine with a sequence of LED blinks/beeps different from that of an ordinary unsuccessful read.

Tape Measuring

Another exemplary embodiment is a measuring tape which knows the measurement distance and can communicate that information to a wirelessly connected device such as a PDT. This would allow a user to have a program on their PDT where a distance measurement input is required; the PDT would connect (for instance using Bluetooth) to the tape measure. The user would then pull out the tape so that one end was touching one end of the item to be measured and the base of the tape measure was at the other end of the item to be measured. The tape measure would "know" the distance based on how far the tape is stretched out. The user would then indicate (possibly with a button press on the tape measure) that the tape is in position, at which time the tape measure would send the measurement data over the wireless connection to the PDT, and the PDT would automatically use that data as the input to the PDT program.

Tape measures often use a stiff, curved metallic ribbon that can remain stiff and straight when extended, but retracts into a coil for convenient storage. This type of tape measure will have a floating tang on the end to aid measuring. The tang will float a distance equal to its thickness, to provide both inside and outside

measurements that are accurate. The tape extends from point to point, with the end-clip placed at the location one wants to measure from. Most tape measures have a clip (tang) that attaches to a fixed object to measure spans easily. Many steel blade tapes have tension-control brakes that lock the blade in place for measuring spans.

The tape itself could be segmented by lengths of black and white areas of a predetermined distance (say 1/8 inch or another desired distance resolution) repeating all the way up the tape. Fluctuations from black to white on the tape could be monitored optically as the reflectance changes and counted by a processing unit on the tape measure itself. By counting the number of transitions and multiplying by the length of a single segment, the unit could measure the distance.
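
A minimal sketch of this transition-counting approach, assuming an illustrative 1/8 inch segment length and reflectance threshold:

# Sketch of the transition-counting idea: monitor reflectance samples from
# the tape, count black/white transitions, and multiply by the known segment
# length. Threshold and segment length below are illustrative assumptions.

SEGMENT_LENGTH_INCHES = 0.125   # e.g. 1/8 inch per light or dark band
THRESHOLD = 0.5                 # reflectance cut-off between dark and light

def distance_from_reflectance(samples):
    """Count dark/light transitions in a reflectance stream and scale to inches."""
    states = [1 if s > THRESHOLD else 0 for s in samples]
    transitions = sum(1 for a, b in zip(states, states[1:]) if a != b)
    return transitions * SEGMENT_LENGTH_INCHES

if __name__ == "__main__":
    # Simulated pull of the tape past the optical monitor: 8 transitions.
    stream = [0.1, 0.9, 0.1, 0.9, 0.1, 0.9, 0.1, 0.9, 0.1]
    print(distance_from_reflectance(stream))  # -> 1.0 inch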

As for triggering and wireless communication of the data, a mechanical switch may trigger a port pin on the processor, and a radio such as Bluetooth may be used for the communication.

For example, Figs. 6A and 6B show a stand alone tape measure 600. The measuring tape 610 has sequential, alternating light and dark regions 614, 616. The length X 612 of each region is the same. As such, counting the sequential light and dark regions and multiplying by the known length X gives a distance measurement. The tape measure may be a stand alone apparatus 618 that has a coiled tape on a roller (not shown). The tape 610 has a tang 609 on one end to prevent the tape from traveling entirely into the holder housing. The tang 609 also provides a stop against the wall of a measured object for withdrawing the tape 610 through opening 630 in the housing. An optical monitor 622 senses the passage of the alternating light and dark areas of the tape 610. The holder has a wireless transmitter 620 for broadcasting data signals representative of each passage of a light or dark area. As an alternative, a processor onboard the holder may count the pulses sensed by the optical monitor and broadcast the

distance the tape is extended from the housing.

Turning to Fig. 7, there is another embodiment using a tape measure. In unit 712 the tape 716 is wound in a coil on a roller disposed inside the PDA. The tang 709 prevents the tape from retracting entirely within the housing of the PDA 712. Since the PDA 712 already has a light source and an onboard processor, they can be used to perform the sensing and counting functions described above in connection with the stand alone tape measure 600.

Computer Mice Measuring

Mechanical computer mice track their own movement as they are moved by the user. As such, they record velocities and X and Y positions. Mechanical computer mice are well known. Early mouse patents include opposing track wheels, U.S. Patent 3,541,541; ball and wheel devices, U.S. Patent 3,835,464; and ball and two rollers with spring, U.S. Patent 3,987,685. The ball mouse utilizes two rollers rolling against two sides of the ball. One roller detects the horizontal motion of the mouse and the other the vertical motion. The motion of these two rollers causes

two disc-like encoder wheels to rotate, interrupting optical beams to generate electrical signals. The mouse sends these signals to the computer system by means of connecting wires. The driver software in the system converts the signals into motion of the mouse pointer along X and Y axes on the screen.

The operating features of a mechanical mouse are also well known. They include moving the mouse to turn a ball located on the bottom of the mouse. X and Y rollers grip the ball and transfer movement to optical encoding disks that include light holes. Infrared LEDs shine through the disks and sensors gather the light pulses passing through the holes to convert them to X and Y velocities. Conventional computer programs such as Microsoft Paint and Microsoft PowerPoint permit users to click the mouse once to set a start point and a second time to set a finish point. Such operations draw lines of known lengths that are proportional to the distance between the points.

One exemplary embodiment of the invention using a mechanical computer mouse is illustrated in Fig. 8. A ball-type mouse has a ball 810 held in a cradle (not shown) on the bottom surface of a PDA 812. Optical wheels 815, 825 are oriented at right angles to each other. The wheels turn about their respective axes 814, 824 as they follow the motion of the ball 810. As such, the wheels turn clockwise and counterclockwise as indicated by the arrows 816, 826. A guide rail 860 extends from the bottom surface to keep the PDA aligned as it transits the length of an object, such as an edge of a rectangular package. As an alternative, a guide roller 862 keeps the PDA traveling in a straight line along an edge of the package. In operation, the wheels 815, 825 turn and generate signals that are received by the controller 160, which has a known program to calculate the distance traveled by the PDA 812.
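
A minimal sketch of converting wheel rotation into distance traveled, assuming an illustrative encoder resolution and wheel diameter; these values are not taken from the disclosure:

# Sketch of converting encoder output from a mouse-style wheel into a
# travelled distance. Counts-per-revolution and wheel diameter are
# illustrative assumptions.
import math

COUNTS_PER_REV = 360          # encoder resolution (assumed)
WHEEL_DIAMETER_IN = 0.75      # wheel diameter in inches (assumed)

def distance_from_counts(counts):
    """Distance = revolutions * wheel circumference."""
    revolutions = counts / COUNTS_PER_REV
    return revolutions * math.pi * WHEEL_DIAMETER_IN

if __name__ == "__main__":
    # 1530 encoder counts recorded along the edge of a package.
    print(round(distance_from_counts(1530), 2))  # about 10.01 inches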

While conventional mice could measure two directions, the invention may be practiced using a modified mouse that measures distance in only one direction. For example, the portable data acquisition device may have a cradle

that holds one wheel which protrudes beyond the housing. The wheel is mounted on a shaft, and an encoder proximate the shaft generates signals representative of the motion of the wheel in a direction perpendicular to the shaft.

Modern surface-independent optical mice work by using an optoelectronic sensor to take successive pictures of the surface on which the mouse operates. Optical mice embed powerful special-purpose image-processing chips in the mouse itself. This enables the mouse to detect relative motion on a wide variety of surfaces, translating the movement of the mouse into the movement of the pointer and eliminating the need for a special mouse-pad. Optical mice illuminate the surface that they track over using an LED or a laser diode. Changes between one frame and the next are processed by the image processing part of the chip and translated into movement on the two axes using an optical flow estimation algorithm. For example, the Avago Technologies ADNS-2610 optical mouse sensor processes 1512 frames per second; each frame consists of a rectangular array of 18x18 pixels, and each pixel can sense 64 different levels of gray. Optical mice work equally well with drawing programs such as Paint and PowerPoint.

An exemplary embodiment of the invention using an optical mouse is shown in Fig. 9. PDA 912 has a light source, such as a light emitting diode, which illuminates the surface. An image capture device 922, typically a CMOS imager, follows the motion of the mouse. Suitable programming known by those skilled in the art computes the travel of the mouse over the surface. A guide rail 960 extends from the bottom surface to keep the PDA aligned as it transits the length of an object, such as an edge of a rectangular package. As an alternative, a guide roller 962 keeps the PDA traveling in a straight line along an edge of the package. In operation, the CMOS imager provides data signals representative of the distance traveled by the PDA 912 to the processor 160, which has a known program to calculate the distance traveled by the PDA 912.

Those skilled in the art understand that conventional computer mice are imprecise and may not yield consistent and accurate readings. However, for purposes of the invention, the computer mice can be refined to add more precise shaft encoders and imagers. The mice can be tested against a reference standard and their readings can be normalized to such standard. For example, if the mouse was moved a standard 100 inches but measured only 90 inches, its readings could be scaled by a factor of 100/90, i.e., roughly 0.11 inch added to each measured inch.

Accelerometers

Modern accelerometers are often small micro electro-mechanical systems (MEMS), and are indeed the simplest MEMS devices possible, consisting of little more than a cantilever beam with a proof mass (also known as seismic mass) and some type of deflection sensing circuitry. Under the influence of gravity or acceleration, the proof mass deflects from its neutral position. The deflection is measured in an analog or digital manner. Another type of MEMS-based accelerometer contains a small heater at the bottom of a very small dome, which heats the air inside the dome to cause it to rise. A thermocouple on the dome determines where the heated air reaches the dome and the deflection off the center is a measure of the acceleration applied to the sensor.

Single-axis, dual-axis, and triple-axis models exist to measure acceleration as a vector quantity or just one or more of its components. MEMS accelerometers are available in a wide variety of measuring ranges, reaching up to thousands of g's.

Accelerometers can measure velocity and distance. Velocity is the first integral of acceleration and distance is the second integral. So long as one knows the time the mass undergoes motion, the distance between the start and

stop of motion is a relatively simple calculation and can be done automatically by a processor.
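
A minimal sketch of that double integration, using simple rectangular (Euler) steps; the sample rate and acceleration profile are illustrative assumptions:

# Sketch of the double integration mentioned above: integrate acceleration
# once for velocity and again for displacement. Values are illustrative.

def distance_from_acceleration(accel_samples, dt):
    """Twice-integrate acceleration samples (m/s^2) taken every dt seconds."""
    velocity = 0.0
    distance = 0.0
    for a in accel_samples:
        velocity += a * dt
        distance += velocity * dt
    return distance

if __name__ == "__main__":
    # 1 m/s^2 for 0.5 s, coast, then -1 m/s^2 for 0.5 s, sampled at 100 Hz.
    samples = [1.0] * 50 + [0.0] * 100 + [-1.0] * 50
    print(distance_from_acceleration(samples, 0.01))  # -> 0.75 m with this discretization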

See, for example, U.S. Patent No. 4,149,417, which is incorporated by reference. It shows a distance measuring arrangement with an accelerometer transducer of the force balance type in which a switch closed by the action of an acceleration force causes a constant current to flow in a coil to oppose the force and open the switch. The switch opens and closes in a repetitive cycle at the resonant frequency of the system, the closed switch gating clock pulses to integration means. The transducer may take two forms. In the first, the proportion of time that the switch is closed in each cycle is proportional to the acceleration value. Distance measurement comprises counting the clock pulses to provide a first stage of integration and summating the counter totals every 1000 clock pulses to give a distance traveled signal. In the second, the proportion of time that the switch is closed in each cycle is proportional to the square root of the acceleration value, and distance measuring comprises counting the clock pulses to give a signal proportional to the square root of distance traveled. In both cases, this signal may be compared with a stored predetermined value to provide a control function.

An exemplary embodiment of the invention using an accelerometer is shown as an alternate embodiment of the PDA of Fig. 9. The accelerometer 950 is mounted inside the housing of the PDA. In operation, it senses the motion and provides data signals to the processor 160, which has a known program to calculate the distance the PDA moves.

Edge Detection and Edge Measurement for Packages and Forms

Edge detection is a term used in image processing and computer vision, particularly within the areas of feature detection and feature extraction, to refer to algorithms which aim at identifying points in a digital image at which the image brightness changes sharply or, more formally, has discontinuities. The edges

extracted from a two-dimensional image of a three-dimensional scene can be classified as either viewpoint dependent or viewpoint independent. A viewpoint independent edge typically reflects inherent properties of the three-dimensional objects, such as surface markings and surface shape. A viewpoint dependent edge may change as the viewpoint changes, and typically reflects the geometry of the scene, such as objects occluding one another.

A typical edge might for instance be the border between a package and the substrate supporting the package. In contrast a line (as can be extracted by a ridge detector) can be a small number of pixels of a different color on an otherwise unchanging background. For a line, there will therefore usually be one edge on each side of the line. Edges play quite an important role in many applications of image processing, in particular for machine vision systems that analyze scenes of man-made objects under controlled illumination conditions. During recent years, however, substantial (and successful) research has also been made on computer vision methods that do not explicitly rely on edge detection as a preprocessing step.

Edge detection methods often differ in the types of smoothing filters that are applied and the way the measures of edge strength are computed. As many edge detection methods rely on the computation of image gradients, they also differ in the types of filters used for computing gradient estimates in the x- and y- directions. Once one has computed a measure of edge strength (typically the gradient magnitude), the next stage is to apply a threshold, to decide whether edges are present or not at an image point. The lower the threshold, the more edges will be detected, and the result will be increasingly susceptible to noise, and also to picking out irrelevant features from the image. Conversely, a high threshold may miss subtle edges, or result in fragmented edges.
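
A minimal sketch of the gradient-magnitude-and-threshold step, using central differences for the x and y gradient estimates; the threshold value is an illustrative assumption:

# Sketch of gradient-magnitude edge detection with a single threshold.
# Only NumPy is required; the threshold value is illustrative.
import numpy as np

def edge_map(image, threshold):
    """Return a boolean edge map from the gradient magnitude of a grayscale image."""
    img = image.astype(float)
    gy, gx = np.gradient(img)          # estimates of d/dy and d/dx
    magnitude = np.hypot(gx, gy)
    return magnitude > threshold

if __name__ == "__main__":
    # A dark square on a bright background: edges appear on the square's border.
    test = np.full((8, 8), 200.0)
    test[2:6, 2:6] = 20.0
    print(edge_map(test, threshold=50).astype(int))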

If the edge thresholding is applied to just the gradient magnitude image, the resulting edges will in general be thick, and some type of edge thinning post-processing is necessary. For edges detected with non-maximum suppression, however, the edge curves are thin by definition and the edge pixels can be linked into edge polygons by an edge linking (edge tracking) procedure. On a discrete grid, the non-maximum suppression stage can be implemented by estimating the gradient direction using first-order derivatives, then rounding off the gradient direction to multiples of 45 degrees, and finally comparing the values of the gradient magnitude in the estimated gradient direction.

A commonly used approach to handle the problem of appropriate thresholds for thresholding is by using thresholding with hysteresis. This method uses multiple thresholds to find edges. We begin by using the upper threshold to find the start of an edge. Once we have a start point, we then trace the path of the edge through the image pixel by pixel, marking an edge whenever we are above the lower threshold. We stop marking our edge only when the value falls below our lower threshold. This approach makes the assumption that edges are likely to be in continuous curves, and allows us to follow a faint section of an edge we have previously seen, without meaning that every noisy pixel in the image is marked down as an edge. Still, however, we have the problem of choosing appropriate thresholding parameters, and suitable thresholding values may vary over the image.
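
A minimal sketch of hysteresis thresholding applied to a gradient-magnitude image, using connected-component labeling from SciPy; the two threshold values are illustrative assumptions:

# Sketch of hysteresis thresholding: pixels above the high threshold seed
# edges, and connected pixels above the low threshold are kept.
import numpy as np
from scipy import ndimage

def hysteresis(magnitude, low, high):
    """Keep weak edge pixels only if they connect to a strong edge pixel."""
    weak = magnitude > low
    strong = magnitude > high
    labels, n = ndimage.label(weak)                  # connected weak regions
    keep = np.zeros(n + 1, dtype=bool)
    keep[np.unique(labels[strong])] = True           # regions containing a strong pixel
    keep[0] = False                                  # background label
    return keep[labels]

if __name__ == "__main__":
    mag = np.array([[0, 40, 90, 40, 0],
                    [0,  0,  0, 30, 0],
                    [0,  0,  0,  0, 0]], dtype=float)
    # The weak pixel (value 30) survives because it connects to the strong 90.
    print(hysteresis(mag, low=25, high=80).astype(int))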

Machine vision systems for identifying edges and processing images of objects to measure the lengths of edges are well known. See, for example, United States Patent 6,621,928, Inagaki et al., September 16, 2003, for Image edge detection method, inspection system, and recording medium. That patent shows an inspection system which includes a memory for storing image data provided by picking up an image of a workpiece, and a monitor for displaying the image data stored in the memory on a display screen with pixels arranged in an X-axis direction and a Y-axis direction perpendicular to the X-axis direction. It

also provides a control panel for setting a window with four sides along the X- or Y-axis direction on the display screen, and an edge detection section for integrating the lightness values of the pixels with respect to each pixel string arranged in the Y- or X-axis direction in the setup window. The system detects as an edge the position in the X- or Y-axis direction corresponding to the maximum value of the portion where the absolute value of the result of differentiating the integration result in the X- or Y-axis direction is equal to or greater than a threshold value.

Once the edges of a package are identified, the length of an edge can be calculated by counting the number of pixels that define the length of the edge and scaling the pixel count to provide a distance. Scale may be provided by indicia embedded in the surface defined by the edge or by other means, such as a light table with a grid pattern.

Another exemplary embodiment is to take an image of a target, such as a paper form (e.g. a shipping label or shipping form). Various types of data may automatically be collected by an image reader. The collection process involves the human operator placing a target, such as a form, in the field of view of an image reader. The operator may actuate a trigger on the image reader for any data type to be read, or the reader may automatically image the target. The data shown may include typed text, an IBI, such as a two-dimensional barcode encoding a label number, a signature, hand-written text, etc. An image reader may be placed on a stand for viewing a document which may be placed on a surface or platen, or the image reader may be pointed at the document.

An image reader may be used as a document scanner or camera, as well as an IBI reader for use in certain exemplary situations, such as a shipping label, wherein a shipping company may desire to keep electronic records of packages or documents or forms. Forms may be of many different sizes and shapes,

which may result in different image file sizes, some of which may be undesirably large. For example, a large document may take up the entire field of view of the image reader; however, a very small document may only take up a small portion of the imager field of view.

A method to minimize image file size may be to binarize the image (i.e. turn it into a 1 bit-per-pixel image so that each dot or pixel is either black or white instead of grayscale), and then compress the data using a lossless algorithm such as CCITT T.6 Group 4 compression.
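
A minimal sketch of that binarization step, with an illustrative fixed threshold; a real reader might well use an adaptive threshold, and the compression stage is omitted here:

# Sketch of binarizing a grayscale image to 1 bit per pixel.
import numpy as np

def binarize(gray_image, threshold=128):
    """Convert a grayscale image to 1 bit per pixel (1 = dark, 0 = light)."""
    return (np.asarray(gray_image) < threshold).astype(np.uint8)

if __name__ == "__main__":
    page = np.array([[250, 30, 240],
                     [20, 200, 15]], dtype=np.uint8)
    print(binarize(page))   # dark pixels map to 1, light pixels to 0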

It may not be desirable to retain an entire image after a target is imaged. In an exemplary embodiment, the size of an image may be reduced through a process referred to herein as image cropping or automatic image cropping. An exemplary image cropping process takes an image, examines that image to determine a region or regions of interest, and crops the image so that the resulting image only includes the region(s) of interest. The unwanted portions of the image are cut or cropped out of the image. Other exemplary embodiments of the invention may include correction of angular distortion or incorrect rotational orientation caused by improper location of the imager relative to the object being imaged. In another exemplary embodiment, other image processing may be utilized on the image taken for different effects, such as flattening of the image (i.e. adjusting to make the dark/light contrast uniform across the cropped image), or other filtering techniques which may help to make the resulting image more appealing. An exemplary embodiment for cropping an image may be to search at least two digitized images, one image taken at full or high resolution and one taken at reduced or lower resolution, for nominally straight edges within the image(s). These nominally straight edges may then be characterized in terms of length and direction (i.e. vectors). From a histogram of the directions, a determination may be made as to which edge orientation predominates. All edges not nominally parallel or perpendicular to the predominant orientation may be discarded. A group of edges that comprise a form may then be chosen by their proximity to the

center of the image and then their proximity to other remaining edge positions. The process may then transmute a rectangle bounding those edges into a rectified image.
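
A minimal sketch of the orientation-histogram and edge-filtering steps, with candidate edges represented as (length, angle) pairs; the bin size and angular tolerance are illustrative assumptions:

# Sketch of finding the predominant edge orientation from a length-weighted
# histogram of edge directions, then discarding edges that are neither
# nominally parallel nor perpendicular to it. Values are illustrative.

def predominant_orientation(edges, bin_deg=5):
    """edges: list of (length, angle_deg). Returns the dominant angle modulo 90."""
    bins = {}
    for length, angle in edges:
        b = round((angle % 90) / bin_deg) * bin_deg % 90
        bins[b] = bins.get(b, 0.0) + length          # weight bins by edge length
    return max(bins, key=bins.get)

def filter_edges(edges, dominant, tol_deg=10):
    """Keep only edges nominally parallel or perpendicular to the dominant angle."""
    kept = []
    for length, angle in edges:
        delta = abs((angle - dominant) % 90)
        if min(delta, 90 - delta) <= tol_deg:
            kept.append((length, angle))
    return kept

if __name__ == "__main__":
    candidates = [(120, 2), (80, 91), (60, 45), (100, 178), (90, 88)]
    dom = predominant_orientation(candidates)
    print(dom, filter_edges(candidates, dom))   # the 45-degree edge is discarded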

The digitized images may be binarized if they are captured as grayscale images. Binarization may be described as turning the pixels of an image from grayscale or pixels having multibit values to binary value pixels, so that each dot or pixel is either black (e.g. 1) or white (e.g. 0). The higher and lower resolution images may be derived or obtained from a single image capture taken by the image reader, viewed both at high resolution and at reduced or lower resolution. Full resolution may be considered the highest resolution. The searches for nominally straight edges may be done in succession. Both sets of straight edges may contribute to the same pool of candidate edges. Some edges may appear in both images, and contribute twice to the search.

An exemplary histogram analysis may consist of a series of one-dimensional slices along horizontal and vertical directions defined relative to the orientation of edges. In an embodiment, the value for each one-dimensional slice corresponds to the number of zero valued pixels along a pixel slice. An exemplary histogram analysis may provide a two-dimensional plot of the density of data element pixels in the image data. Edges may be determined with respect to a minimum density threshold for a certain number of sequential slices. In an embodiment, a histogram analysis searches inwardly along both horizontal and vertical directions until the pixel density rises above a predefined cut-off threshold. An exemplary embodiment for determining the region or regions of interest may be implemented depending on the complexity desired by a user. For example, the region or regions of interest may be determined by having a known template on the surface where the document or form to be imaged is placed. The exemplary template may have a known pattern such as evenly spaced dots, or a grid of some type, so that

placing a document on the grid breaks the pattern and reveals where the document is.

In an exemplary embodiment, once a predominant edge orientation is established, the image may be electronically rotated if, for example, an operator does not place a form properly square with the image reader when imaging the form.

In an exemplary embodiment, a processor looks at an image to determine a region or regions of interest and then crops the image (i.e. cutting portions of the image out) so that the resulting image only includes the region(s) of interest. An exemplary embodiment may comprise correction of angular distortion or rotational orientation caused by improper location or positioning of the imager relative to the object being imaged. An exemplary embodiment may comprise other image processing effects such as flattening of the image (i.e. adjusting to make the dark/light contrast uniform across the cropped image), and/or other filtering techniques which may help to make the resulting image look better to the human eye.

An exemplary embodiment may comprise determining congruent lines of activity for rotational adjustment. A transformation matrix may be utilized to orient the image.
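
A minimal sketch of applying such a transformation matrix for rotational adjustment, here rotating point coordinates about the image center by a detected skew angle; the angle and coordinates are illustrative assumptions:

# Sketch of using a 2x2 rotation matrix to re-orient image coordinates.
import math
import numpy as np

def rotation_matrix(angle_deg):
    """2x2 rotation matrix for the given angle in degrees."""
    a = math.radians(angle_deg)
    return np.array([[math.cos(a), -math.sin(a)],
                     [math.sin(a),  math.cos(a)]])

def rotate_points(points, angle_deg, center):
    """Rotate an array of (x, y) points about a center point."""
    r = rotation_matrix(angle_deg)
    return (np.asarray(points) - center) @ r.T + center

if __name__ == "__main__":
    corners = [(10.0, 10.0), (110.0, 10.0), (110.0, 60.0), (10.0, 60.0)]
    # Undo a detected 3-degree skew by rotating -3 degrees about the image center.
    print(rotate_points(corners, -3.0, center=np.array([64.0, 48.0])))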

Another exemplary embodiment is to use a reader stand relative to the area to be imaged for angular distortion correction.

In an exemplary embodiment, a programmed processor searches two binarized images, one full resolution and one with reduced resolution, for nominally straight edges, characterizing them in terms of length and direction. By a histogram of those directions it determines which orientation predominates, discarding all edges not nominally parallel or perpendicular to it. By proximity to the center,

and then to other remaining edge positions, it chooses a group of edges that may comprise a form. It then transmutes the rectangle bounding those edges into a rectified image. The two binarized images may be derived from a single captured image, viewed both at full resolution and at reduced resolution. The searches may be done in succession. Both sets of nominally straight edges may contribute to the same pool of candidate edges. Some edges may appear in both images, and contribute twice.

Another exemplary embodiment for finding the region, or regions, of interest is by mapping the energy of the image. High energy areas (i.e. areas with large pixel value variation in relatively close proximity, thus representing high contrast areas) might be considered regions of interest. After these high energy areas are established, an area within the image may be determined by including all of these regions of interest, and the image is cropped to that new area.
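
A minimal sketch of such energy mapping, using per-tile pixel variance as the energy measure; the tile size and threshold are illustrative assumptions:

# Sketch of the energy-mapping idea: treat high-variance tiles as regions of
# interest and crop to the bounding box that contains them.
import numpy as np

def crop_to_high_energy(image, tile=16, threshold=100.0):
    """Crop a grayscale image to the bounding box of high-variance tiles."""
    h, w = image.shape
    rows, cols = [], []
    for r in range(0, h - tile + 1, tile):
        for c in range(0, w - tile + 1, tile):
            if image[r:r + tile, c:c + tile].var() > threshold:
                rows.append(r)
                cols.append(c)
    if not rows:
        return image                                   # nothing interesting found
    r0, r1 = min(rows), max(rows) + tile
    c0, c1 = min(cols), max(cols) + tile
    return image[r0:r1, c0:c1]

if __name__ == "__main__":
    img = np.full((128, 128), 128.0)
    img[40:72, 40:72] = 10.0                           # a high-contrast patch
    print(crop_to_high_energy(img).shape)              # bounding box around the patch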

Another exemplary technique may include determining congruent lines of activity in the image for rotational adjustment.

Another exemplary embodiment is to provide a known location of a stand relative to the area to be imaged, wherein angular distortion may be determined and a correction applied.

Edge data may be utilized to establish predominant orthogonal horizontal and/or vertical directions, which may be interpreted as representing features of a form if the presence of a form has not been assumed. Once the predominant horizontal and/or vertical edges have been established, horizontal and/or vertical outer form boundaries may be established. In an exemplary embodiment, predominant horizontal edges may be established first, and then predominant vertical edges.

Referring to Figs. 10A-10C, there is shown another exemplary embodiment to aid the process of software-driven package dimensioning. This

embodiment 1010 has a light emitting surface 1014 that receives objects (packages) so that imager based products (or any package dimensioning product) can use the surface for determining package dimensions. This embodiment may or may not have a translucent film (either permanent or removable) to control the amount of reflection. This film may or may not be textured to aid in the desired contrast. The embodiment has the ability to control the amount and brightness of the light. The surface may or may not have a grid pattern applied to it to aid the edge detection techniques of the package dimensioning software. The device may or may not have the ability to exchange different types of light sources.

The apparatus 1010 has a light source or sources 1041 disposed in the lower portion of a housing. A surface 1014, usually translucent or transparent, covers the light source or sources 1041. A rectangular package 1030 is placed on the surface of cover 1014. In operation, the light source 1041 is turned on to generate light. Light striking the bottom surface of the package 1030 is reflected back toward the bottom of the apparatus 1010. A reader obtains an image of the illuminated package. A suitable processor and suitable software are used to process the data signals and provide detection of the edges of the bottom surface of the package 1030. By counting the number of pixels in the edges and knowing the scale of the pixels to the cover surface, one may calculate the length of the edges. As an alternative, the apparatus 1010 has indicia 1012 on the edges of the cover 1014. The indicia are evenly spaced, for example, one-eighth of an inch apart, and are in the same plane as the bottom of the package. Hence, an image that captures the bottom surface of the package and the indicia of the cover can be processed using the indicia to measure the edges. Alternatively, the surface can be coated with a retro-reflective material, and light striking the surface at a critical angle will be reflected back, leaving the package dark.

By using this embodiment, the edge of a box of any color will be enhanced in relation to its surroundings, making package dimension detection easier and more accurate. This embodiment can be used with any package dimensioning system.

Another exemplary embodiment is to image a form and recognize or determine certain check boxes within the target for a person to check or mark regarding the answers to certain questions or other information. The check boxes may be found or determined, and then a determination may be made as to whether those boxes have been checked or marked. The boxes which are marked may be correlated with a lookup table or other information to determine certain information, such as requirements, instructions, insurance information, time, etc.

Another exemplary embodiment is to image a form, and recognize or determine known features on a target, such as lines, text, logos, etc. and then locate check boxes based on the known or predetermined location.

Another exemplary embodiment is to image a form, and recognize, determine or locate places which can't be automatically read, and image that part for further image processing.

An exemplary embodiment uses a PDT to take an image of a package placed on a reference grid. Once the image is captured, image processing software may be used to determine if the image is of acceptable quality.

Another exemplary embodiment is to accept user input via the touch screen or the keyboard of the PDT to indicate the quality of the image or to input other data for inclusion in dimensioning calculations.

Another exemplary embodiment is to have the corners of the package in the image marked or noted utilizing a touch screen on the PDT. There are many possible embodiments for the proposed solution. One embodiment is a client-server architecture where a client application on the PDT is used to acquire an image and possibly provide additional input into the system. The client application can then wirelessly transmit the image and any additional data to a server application. The server application can then perform the dimensioning logic and provide feedback to the client application, for instance wirelessly via Bluetooth, 802.11, or another wireless communication methodology. Another embodiment is to have all the image processing and dimensioning software logic resident in the application running on the PDT, thereby eliminating the need for a server application for this function.

Weighing

Turning to Figs. 11 and 12, there are exemplary embodiments of an apparatus and a system for detecting the weight of a package in addition to taking its external dimensions. In Fig. 11, a package 1130 is placed on top of an RF enabled scale 1120. A PDT device 1112 running a client application takes an image of the box on the scale while the scale calculates the weight of the package. The client application may initiate the beginning of the weighing programmatically.

Both the scale 1120 and the PDT 1112 will communicate either by 802.11a/g/b or by some other standard wireless communication technology to the server application on the PC. The PDT will send the appropriate package dimensioning data to the server while the scale will send the weight of the package to the server.

The server 1140 will calculate the dimensions of the package from one or more of the other measurement techniques disclosed above. Abiding by the business

logic of any given company, the server 1140 will determine whether the package's shipping price will be calculated by weight or by volume. The server will calculate the appropriate rate and will either send the data to a printer 1142 to print the shipping label or send the shipping label to the PDT so that the PDT can send the shipping label information to the printer. (This depends on the environment and the physical setup of the infrastructure and equipment.)
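
A minimal sketch of such a weight-versus-volume rule, comparing the actual weight with a dimensional ("volumetric") weight and billing on the larger; the dimensional factor and rate are illustrative assumptions rather than any particular carrier's published values:

# Sketch of the weight-versus-volume business rule. Values are illustrative.

DIM_FACTOR_IN3_PER_LB = 139.0     # assumed dimensional factor (cubic inches per pound)
RATE_PER_LB = 0.85                # assumed rate in dollars per billable pound

def shipping_price(length_in, width_in, height_in, actual_weight_lb):
    """Bill on whichever is greater: actual weight or dimensional weight."""
    dim_weight = (length_in * width_in * height_in) / DIM_FACTOR_IN3_PER_LB
    billable = max(actual_weight_lb, dim_weight)
    return round(billable * RATE_PER_LB, 2)

if __name__ == "__main__":
    # A light but bulky 20 x 16 x 12 inch package weighing 8 lb:
    # the dimensional weight governs the price.
    print(shipping_price(20, 16, 12, 8))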

Referring to Fig. 12 another exemplary embodiment is to provide a stand alone PDT implementation.

A package 1230 is placed on top of an RF enabled scale 1220. A PDT 1212 running a software server application takes an image of the box 1230 on the scale 1220 while the scale calculates the weight of the package. The PDT application may initiate the beginning of the weighing programmatically.

Both the scale 1220 and the PDT 1212 will communicate either by 802.11a/g/b or by some other standard wireless communication technology; in this embodiment the server application runs on the PDT 1212. The PDT 1212 will calculate the dimensions of the package 1230 while the scale 1220 will send the weight of the package to the PDT 1212.

The server running on the PDT will abide by the business logic of the given company. The server will determine if the package's shipping price will be calculated by the weight or by the volume. The server will calculate the appropriate rate and will send the data to a printer 1242 to print the shipping label. In one embodiment, the printer is incorporated into the PDT 1212.

As the market for PDTs continues to grow, competition will be based on performance and features. The embodiments disclosed and claimed herein add more functions to the PDT, including measuring one or more parameters of packages, such as their dimensions and weight.

Many functions of electrical and electronic apparatus may be implemented in hardware (for example, hard-wired logic), in software (for example, logic encoded in a program operating on a general purpose processor), and in firmware (for example, logic encoded in a non-volatile memory that is invoked for operation on a processor as required). Substitution of one implementation of hardware, firmware and software for another implementation of the equivalent functionality using a different one of hardware, firmware and software may be considered. To the extent that an implementation may be represented mathematically by a transfer function, that is, a specified response is generated at an output terminal for a specific excitation applied to an input terminal of a "black box" exhibiting the transfer function, any implementation of the transfer function, including any combination of hardware, firmware and software implementations of portions or segments of the transfer function, may be considered.

It should be understood that the programs, processes, methods and apparatus described herein are not related or limited to any particular type of computer or network apparatus (hardware or software). Various types of general purpose or specialized computer apparatus may be used with or perform operations in accordance with the teachings described herein. While various elements of the preferred embodiments have been described as being implemented in software, in other embodiments hardware or firmware implementations may alternatively be used, and vice-versa. The illustrated embodiments are exemplary only, and should not be taken as limiting the scope of the present invention. For example, the steps of the flow diagrams may be taken in sequences other than those described, and more, fewer or other elements may be used in the block diagrams. In addition, unless applicants have expressly disavowed any subject matter within this application, no particular embodiment or subject matter is considered to be disavowed herein.

Those skilled in the art understand that machine vision technology has developed software for analyzing images to detect edges, to use reference markings on objects for orienting images and for removing perspective distortion. Indeed, all of the techniques described above can be implemented by those with skill in the art of machine vision technology and image analysis.