

Title:
BIOLOGICALLY-INSPIRED NETWORK GENERATION
Document Type and Number:
WIPO Patent Application WO/2019/134753
Kind Code:
A1
Abstract:
A system and method includes training of a first neural network, the first neural network defined by first metaparameters, evaluation of the performance of the trained first neural network, determination of a reward based on the performance, modification of the first metaparameters based on the reward to generate second metaparameters of a second neural network, the second neural network being larger than the first neural network, training of the second neural network, determination that the second neural network meets a performance goal, modification, in response to the determination that the second neural network meets a performance goal, of the second metaparameters to generate third metaparameters of a third neural network, the third neural network being smaller than the second neural network, evaluation of the performance of the third neural network, determination of a second reward based on the performance of the third neural network, modification of the third metaparameters to generate fourth metaparameters of a fourth neural network based on the second reward, and evaluation of the performance of the fourth neural network.

Inventors:
COMANICIU DORIN (US)
MA KAI (US)
Application Number:
PCT/EP2018/050320
Publication Date:
July 11, 2019
Filing Date:
January 08, 2018
Assignee:
SIEMENS HEALTHCARE GMBH (DE)
International Classes:
G06N3/04; G06N3/02; G06N3/08
Foreign References:
EP3065085A12016-09-07
US20160358070A12016-12-08
US9767557B12017-09-19
Other References:
BARRET ZOPH ET AL: "Neural Architecture Search with Reinforcement Learning", 15 February 2017 (2017-02-15), XP055444384, Retrieved from the Internet [retrieved on 20180125]
BOWEN BAKER ET AL: "Designing Neural Network Architectures using Reinforcement Learning", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 OLIN LIBRARY CORNELL UNIVERSITY ITHACA, NY 14853, 7 November 2016 (2016-11-07), XP080729990
XIAOLIANG DAI ET AL: "NeST: A Neural Network Synthesis Tool Based on a Grow-and-Prune Paradigm", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 OLIN LIBRARY CORNELL UNIVERSITY ITHACA, NY 14853, 6 November 2017 (2017-11-06), XP080834772
Claims:
WHAT IS CLAIMED IS:

1. A system comprising: a reinforcement learning network to: train a first neural network, the first neural network defined by first metaparameters; evaluate the performance of the trained first neural network; determine a reward based on the performance; modify the first metaparameters based on the reward to generate second metaparameters of a second neural network, the second neural network being larger than the first neural network; train the second neural network; determine that the second neural network meets a performance goal; in response to the determination that the second neural network meets a performance goal, modify the second metaparameters to generate third metaparameters of a third neural network, the third neural network being smaller than the second neural network; evaluate the performance of the third neural network; determine a second reward based on the performance of the third neural network; modify the third metaparameters to generate fourth metaparameters of a fourth neural network based on the second reward; and evaluate the performance of the fourth neural network.

2. A system according to Claim 1, wherein determination of the second reward is based on the performance of the third neural network and the size of the third neural network.

3. A system according to Claim 2, the reinforcement learning network further to: train the fourth neural network; evaluate the performance of the trained fourth neural network; determine a reward based on the performance of the trained fourth neural network; modify the fourth metaparameters based on the reward to generate fifth metaparameters of a fifth neural network, the fifth neural network being larger than the fourth neural network.

4. A system according to Claim 1, wherein modification of the first metaparameters to generate the second metaparameters comprises: selection from a list of actions to increase the size of a neural network.

5. A system according to Claim 4, wherein modification of the second metaparameters to generate the third metaparameters comprises: selection from a list of actions to reduce the size of a neural network.

6. A system according to Claim 1, the reinforcement learning network further to: train the fourth neural network; evaluate the performance of the trained fourth neural network; determine a reward based on the performance of the trained fourth neural network; modify the fourth metaparameters based on the reward to generate fifth metaparameters of a fifth neural network, the fifth neural network being larger than the fourth neural network.

7. A system according to Claim 6, the reinforcement learning network further to: determine that the fifth neural network meets a performance goal; and in response to the determination that the fifth neural network meets a performance goal, modify the fifth metaparameters to generate sixth metaparameters of a sixth neural network, the sixth neural network being smaller than the fifth neural network.

8. A method, comprising: training a first neural network, the first neural network defined by first metaparameters; evaluating the performance of the trained first neural network; determining a reward based on the performance; modifying the first metaparameters based on the reward to generate second metaparameters of a second neural network, the second neural network being larger than the first neural network; training the second neural network; determining that the second neural network meets a performance goal; in response to the determination that the second neural network meets a performance goal, modifying the second metaparameters to generate third metaparameters of a third neural network, the third neural network being smaller than the second neural network; evaluating the performance of the third neural network; determining a second reward based on the performance of the third neural network; modifying the third metaparameters to generate fourth metaparameters of a fourth neural network based on the second reward; and evaluating the performance of the fourth neural network.

9. A method according to Claim 8, wherein the determination of the second reward is based on the performance of the third neural network and the size of the third neural network.

10. A method according to Claim 9, further comprising: training the fourth neural network; evaluating the performance of the trained fourth neural network; determining a reward based on the performance of the trained fourth neural network; modifying the fourth metaparameters based on the reward to generate fifth metaparameters of a fifth neural network, the fifth neural network being larger than the fourth neural network.

11. A method according to Claim 8, wherein modifying the first metaparameters to generate the second metaparameters comprises: selecting from a list of actions to increase the size of a neural network.

12. A method according to Claim 11, wherein modifying the second metaparameters to generate the third metaparameters comprises: selecting from a list of actions to reduce the size of a neural network.

13. A method according to Claim 8, further comprising: training the fourth neural network; evaluating the performance of the trained fourth neural network; determining a reward based on the performance of the trained fourth neural network; modifying the fourth metaparameters based on the reward to generate fifth metaparameters of a fifth neural network, the fifth neural network being larger than the fourth neural network.

14. A method according to Claim 13, further comprising: determining that the fifth neural network meets a performance goal; and in response to the determination that the fifth neural network meets a performance goal, modifying the fifth metaparameters to generate sixth metaparameters of a sixth neural network, the sixth neural network being smaller than the fifth neural network.

15. A system comprising: a neural network defined by neural network metaparameters; and a reinforcement learning network to: a) modify the neural network metaparameters to add a layer to or to add nodes to a layer of the neural network until it is determined that the neural network meets a performance goal; b) after it is determined that the neural network meets the performance goal, modify the neural network metaparameters to remove a layer, to remove a node, or to directly connect nodes of different layers of the neural network until it is determined that a size and a performance of the neural network have reached an equilibrium; and c) repeat a) and b) until a predetermined metric is reached.

16. A system according to Claim 15, wherein the predetermined metric is based on a total elapsed time executing a), b) and c), a number of repetitions of a) and b), a size of the neural network, and/or a performance of the neural network.

17. A system according to Claim 15, wherein modification of the neural network metaparameters to add a layer to or to add nodes to a layer of the neural network comprises: determination of a reward based on the performance of the neural network; and modification of the neural network metaparameters to add a layer to or to add nodes to a layer based on the reward.

18. A system according to Claim 17, wherein modification of the neural network metaparameters to remove a layer, to remove a node, or to directly connect nodes of different layers of the neural network comprises: determination of a reward based on the performance and size of the neural network; and modification of the neural network metaparameters to remove a layer, to remove a node, or to directly connect nodes of different layers of the neural network based on the reward.

19. A system according to Claim 15, wherein modification of the neural network metaparameters to remove a layer, to remove a node, or to directly connect nodes of different layers of the neural network comprises: determination of a reward based on the performance and size of the neural network; and modification of the neural network metaparameters to remove a layer, to remove a node, or to directly connect nodes of different layers of the neural network based on the reward.

Description:
BIOLOGICALLY-INSPIRED NETWORK GENERATION

BACKGROUND

[0001] Neural networks are increasingly used to address complex problems or tasks.

Generally, a neural network is designed by a human architect and then trained based on known training data. The trained neural network is used to generate results based on test data and the accuracy of the results is evaluated. If the accuracy is acceptable, the neural network is deployed to address its intended problem or task.

[0002] The human architect determines the goal of the network, the structure of the network, the machine learning algorithms of the network, the training data and the test data. Each of these determinations is time-consuming and error-prone. Systems are desired to efficiently automate aspects of the development of a neural network.

BRIEF DESCRIPTION OF THE DRAWINGS

[0003] FIG. 1 is a block diagram illustrating generation of a network according to some embodiments;

[0004] FIG. 2 is a flow diagram of process to generate a network according to some embodiments;

[0005] FIG. 3 is a block diagram illustrating network training according to some embodiments;

[0006] FIG. 4 is a block diagram illustrating network growth according to some embodiments;

[0007] FIG. 5 is a block diagram illustrating network training according to some embodiments;

[0008] FIG. 6 is a block diagram illustrating network growth according to some embodiments;

[0009] FIG. 7 is a block diagram illustrating network training according to some embodiments;

[0010] FIG. 8 is a block diagram illustrating network evaluation according to some embodiments;

[0011] FIG. 9 is a block diagram illustrating network pruning according to some embodiments;

[0012] FIG. 10 is a block diagram illustrating network training according to some embodiments;

[0013] FIG. 11 is a block diagram illustrating network growth according to some embodiments; and

[0014] FIG. 12 illustrates a system according to some embodiments.

DETAILED DESCRIPTION

[0015] The following description is provided to enable any person in the art to make and use the described embodiments and sets forth the best mode contemplated for carrying out the described embodiments. Various modifications, however, will remain apparent to those in the art.

[0016] Some embodiments operate to control growth and pruning of a neural network via a second neural network. Generally, a control network trains a neural network and enlarges the neural network based on the training until a desired performance level is achieved. The control network then prunes the neural network while attempting to maintain the desired performance. The process repeats until the control network determines that generation of the neural network is complete.
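The grow-train-prune cycle described above can be sketched as a control loop. This is purely an illustrative sketch, not part of the claimed subject matter: the `train`, `evaluate`, `grow`, and `prune` callables are hypothetical stand-ins for the control network's operations, and the metaparameter representation is left abstract.

```python
def generate_network(metaparams, train, evaluate, grow, prune,
                     goal=0.9, max_cycles=10):
    """Alternate growth and pruning phases until max_cycles is exhausted.

    All four callables are hypothetical stand-ins; `evaluate` returns a
    performance score in [0, 1] and `goal` is the desired performance level.
    """
    for _ in range(max_cycles):
        # Growth phase: train, evaluate, and enlarge until the goal is met.
        while True:
            train(metaparams)
            if evaluate(metaparams) >= goal:
                break
            metaparams = grow(metaparams)
        # Pruning phase: shrink while the desired performance is maintained.
        while True:
            candidate = prune(metaparams)
            if evaluate(candidate) < goal:  # further pruning unacceptable
                break
            metaparams = candidate
    return metaparams
```

In this toy framing, the pruning phase stops at the last candidate that still meets the goal, which corresponds to the equilibrium test described later at S250.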

[0017] FIG. 1 is a block diagram illustrating system 100 according to some embodiments. System 100 includes reinforcement learning network 110 and neural network 120. Reinforcement learning network 110 automates aspects of the growth, training and pruning of neural network 120 as will be described below.

[0018] Reinforcement learning network 110 is generally configured to take actions with respect to neural network 120 so as to maximize a cumulative reward. The reward is determined based on an evaluation of the result of the actions as will be described below. The actions may be defined in actions 115 and may include actions which grow, prune or otherwise change neural network 120. Unlike standard supervised learning techniques, reinforcement learning does not rely on training data consisting of correct input/output pairs, and does not explicitly modify the network to correct sub-optimal actions.
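One minimal way to realize a reward-maximizing action selector is a tabular action-value update. This sketch is an assumption for illustration only; the patent does not specify the learning rule, and the action names used here are hypothetical.

```python
import random

class Controller:
    """Toy action-value controller: picks among discrete actions and
    nudges each action's value estimate toward observed rewards."""

    def __init__(self, actions, epsilon=0.1, alpha=0.5):
        self.q = {a: 0.0 for a in actions}  # value estimate per action
        self.epsilon = epsilon              # exploration rate
        self.alpha = alpha                  # learning rate

    def choose(self):
        # Explore with probability epsilon, otherwise exploit the best action.
        if random.random() < self.epsilon:
            return random.choice(list(self.q))
        return max(self.q, key=self.q.get)

    def update(self, action, reward):
        # Move the estimate a fraction alpha toward the observed reward.
        self.q[action] += self.alpha * (reward - self.q[action])
```

A fuller implementation (as in policy-gradient architecture search) would condition on the current network state; this table-based version only illustrates the reward feedback loop.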

[0019] Neural network 120 may comprise a collection of connected units (i.e., “neurons”) connected by “synapses” which transmit signals between neurons. A receiving (postsynaptic) neuron can process a signal and then signal downstream neurons. Neurons of neural network 120 may be organized in layers which may perform different kinds of transformations on their inputs. Neural network 120 may comprise a convolutional neural network including convolutional layers, pooling layers, fully-connected layers and normalization layers as is known in the art.

[0020] According to some embodiments, reinforcement learning network 110 trains neural network 120 and evaluates a performance of trained neural network 120. Based on the evaluation and a corresponding reward, network 110 may perform an action to enlarge network 120. The modified network 120 is trained and the cycle continues until network 110 determines to prune the as-then-configured network 120.

[0021] Network 110 performs an action to prune network 120, results of the modification are evaluated, and a reward is provided (or not provided) to network 110 based on the evaluation. A reward may be received if the pruning action shrinks network 120 while maintaining a particular level of performance of network 120. Network 110 may continue to prune network 120 and receive (or not receive) rewards, until it is determined to return to the cycle of enlarging and training described above. The process repeats until network 110 determines that generation of neural network 120 is complete.

[0022] FIG. 2 is a flow diagram of process 200 according to some embodiments. Process 200 and the other processes described herein may be performed using any suitable combination of hardware, software or manual means. Software embodying these processes may be stored by any non-transitory tangible medium, including a fixed disk, a floppy disk, a CD, a DVD, a Flash drive, or a magnetic tape. Embodiments are not limited to the examples described below.

[0023] Initially, metaparameters of a neural network are defined at S205. The metaparameters may define the number of layers and type of layers within the neural network, as well as the number of neurons within each layer. Metaparameters defining a layer type may define a type of network operation (e.g., convolution, pooling, activation, normalization) and corresponding parameters of the operations such as the number of convolutions in each layer, type of pooling and activation operations, etc.
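One possible encoding of such metaparameters (layer count, layer types, and per-layer options) is sketched below. The field names are assumptions for illustration; the patent does not prescribe a concrete data structure.

```python
from dataclasses import dataclass, field

@dataclass
class LayerSpec:
    """One layer's metaparameters (hypothetical encoding)."""
    op: str                  # e.g. "conv", "pool", "activation", "norm"
    width: int = 0           # number of nodes/filters, where applicable
    options: dict = field(default_factory=dict)  # e.g. {"stride": 1}

@dataclass
class Metaparameters:
    """Ordered layer specifications defining a network's structure."""
    layers: list

    @property
    def depth(self):
        return len(self.layers)

    @property
    def size(self):
        # A crude size proxy: total node/filter count across layers.
        return sum(layer.width for layer in self.layers)
```

Under this encoding, growth actions append or widen `LayerSpec` entries and pruning actions remove or narrow them.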

[0024] The metaparameters are defined at S205 based on the intended function of the neural network. For example, if the neural network is intended to recognize an object within an image, the metaparameters are defined in this regard. Such task-based definition of neural network metaparameters is known in the art.

[0025] Next, at S210, the neural network is trained based on training data as is known in the art. Training consists of determining values for parameters defined by the metaparameters which were defined at S205. For example, each type of node of each type of layer may be associated with a weighting parameter which determines the node's contribution to the network, and/or with one or more parameters which affect the processing performed by the node. Metaparameter-defined parameters of a neural network are also known in the art.

[0026] FIG. 3 illustrates training of neural network 300 at S210 according to some embodiments. At the commencement of S210, neural network 300 is defined by metaparameters MP1, depicted in FIG. 3.

[0027] During training, neural network 300 receives input data from training data 310. In a simple example, the input data consists of images and network 300 is tasked with identifying whether or not each image includes an image of a cat. Network 300 generates an output (e.g., “Y” or “N”) based on each input image. For each image, loss layer 320 compares the output with ground truth information (i.e., the image includes/does not include an image of a cat) received from training data 310. Based on the comparison, loss layer 320 generates a loss term indicative of the inaccuracy of neural network 300 with respect to the input data. The values of the parameters of network 300 are then modified based on the loss term in an attempt to minimize the loss. This process iterates until the loss term reaches an acceptable level, at which point neural network 300 is considered trained. At this point, neural network 300 is defined by metaparameters MP1T, which indicates that the parameters associated with metaparameters MP1 (which define the structure of network 300) have been trained.
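The loss-driven iteration described above can be reduced to a toy stand-in: a single parameter adjusted by gradient steps until the loss falls below a threshold. Real training of network 300 would use backpropagation over many parameters; this only illustrates the loop structure, and the learning rate and tolerance are assumed values.

```python
def train_until(loss_fn, grad_fn, w, lr=0.1, tol=1e-4, max_iter=10_000):
    """Iterate until the loss term reaches an acceptable level (tol)."""
    for _ in range(max_iter):
        if loss_fn(w) <= tol:     # acceptable loss: training complete
            break
        w -= lr * grad_fn(w)      # modify the parameter to reduce loss
    return w

# Example: fit a single weight toward a target of 3.0 with squared loss.
loss = lambda w: (w - 3.0) ** 2
grad = lambda w: 2.0 * (w - 3.0)
```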

[0028] The performance of the trained network is evaluated at S215. FIG. 4 illustrates a system for performance evaluation according to some embodiments. According to some embodiments, the FIG. 4 system also includes training data 310 and loss layer 320 coupled to network 300 as shown in FIG. 3. These elements are omitted from FIG. 4 for clarity. In this regard, the training process of FIG. 3 may be controlled by reinforcement learning network 400 at S210.

[0029] At S215, neural network 300 generates outputs based on validation data 410 as described with respect to S210. Evaluation unit 420 determines the performance of network 300 based on the outputs and the ground truth information received from validation data 410. At S220, it is determined whether the performance meets a performance goal. Initially, it will be assumed that the performance does not meet the performance goal. Accordingly, flow proceeds from S220 to S225.

[0030] A reward is determined based on the performance at S225. Evaluation unit 420 may determine no reward or a negative reward, which is passed to network 400. Network 400 performs an action of actions 405 based on the reward at S230. The action may be to modify the metaparameters of network 300 (to add a new layer of a particular type, to increase the number of nodes in a layer, etc.) based on the reward. The modified metaparameters, which increase the size of network 300, are depicted in FIG. 4 as metaparameters MP2.

[0031] Metaparameters which may be used during S230 include operations which may be added as layers to network 300. Such operations (and their parameters/options) may include convolution (stride, padding, dilation), pooling (max_pooling, avg_pooling, adaptive_pooling), non-linear activation (ReLU, SELU, LeakyReLU, sigmoid, Tanh), normalization (BatchNorm, InstanceNorm), and Dropout.
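The operations enumerated above can be written as a catalog from which a growth action is selected. The option names mirror the text; the dictionary structure and the `apply_grow_action` helper are assumptions for illustration.

```python
# Catalog of growth operations and their parameters/options (per the text).
GROW_ACTIONS = {
    "convolution": {"stride": 1, "padding": 0, "dilation": 1},
    "pooling": ["max_pooling", "avg_pooling", "adaptive_pooling"],
    "activation": ["ReLU", "SELU", "LeakyReLU", "sigmoid", "Tanh"],
    "normalization": ["BatchNorm", "InstanceNorm"],
    "dropout": {"p": 0.5},
}

def apply_grow_action(layers, op):
    """Append a layer of the chosen operation type (illustrative only)."""
    if op not in GROW_ACTIONS:
        raise ValueError(f"unknown operation: {op}")
    return layers + [op]
```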

[0032] Flow returns to S210 to train network 300. More specifically, the modified metaparameters MP2 are associated with parameters, and values of those parameters are determined at S210 via training as described above. FIG. 5 depicts training data 310 and loss layer 320 used to train network 300 at S210 based on metaparameters MP2, resulting in trained metaparameters MP2T.

[0033] The performance of the re-trained network is evaluated at S215, as illustrated in FIG. 6. Evaluation unit 420 determines the performance of network 300 based on the outputs and the ground truth information received from validation data 410. It will again be assumed that it is determined at S220 that the performance does not meet the performance goal. Accordingly, flow proceeds from S220 to S225.

[0034] A reward is determined based on the performance at S225, and network 400 performs an action to change network 300 based on the reward at S230. The action may be similar to or different from the action taken to change metaparameters MP1 to metaparameters MP2. According to some embodiments, the change may undo one or more prior changes and/or include other changes. The modified metaparameters are depicted in FIG. 7 as metaparameters MP3.

[0035] Flow again returns to S210 to train network 300 associated with metaparameters MP3. FIG. 7 depicts training data 310 and loss layer 320 used to train network 300 at S210 based on metaparameters MP3, resulting in trained metaparameters MP3T. The performance of the re-trained network is again evaluated at S215, as illustrated in FIG. 8. It will now be assumed that it is determined at S220 that the performance meets the performance goal, causing flow to proceed from S220 to S232.

[0036] It is determined at S232 whether a predetermined metric has been reached. The predetermined metric may be based on a total elapsed time, a number of executed network expansion cycles (per S210 through S230) and compression cycles (per below-described S235 through S255), a current size of the network, and/or a current performance of the network. Flow proceeds from S232 to S235 if it is determined that the predetermined metric has not been reached.
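A hedged reading of the S232 check: the run stops when any configured budget is exhausted or target is met. The threshold values below are arbitrary examples, not values from the text.

```python
def metric_reached(elapsed_s, cycles, size, performance,
                   max_s=3600, max_cycles=20, max_size=1_000_000,
                   target_perf=0.95):
    """True when any stopping condition of S232 is satisfied: elapsed time,
    expansion/compression cycle count, network size, or performance."""
    return (elapsed_s >= max_s
            or cycles >= max_cycles
            or size >= max_size
            or performance >= target_perf)
```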

[0037] At S235, the metaparameters of network 300 (i.e., metaparameters MP3T at this point of the present example) are modified to compress the network. Reinforcement learning network 400 may utilize one or more compression actions of actions 405 to modify metaparameters MP3T to metaparameters MP4 in order to compress network 300. According to some embodiments, compression may include reducing the depth (e.g., number of layers) and/or width (number of nodes in a layer) of network 300. Actions for reducing depth include creating direct connections between nodes of non-adjacent layers and eliminating intermediate nodes, while actions for reducing width include pruning nodes.
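The compression actions above can be sketched over a layer-width list: width is reduced by pruning a node from a layer, and depth by bypassing a layer so its neighbours connect directly. Both helper names and the list-of-widths representation are hypothetical.

```python
def prune_node(widths, layer_idx):
    """Width reduction: remove one node from the given layer.
    `widths` is a list of per-layer node counts."""
    out = list(widths)
    out[layer_idx] = max(0, out[layer_idx] - 1)
    return out

def bypass_layer(widths, layer_idx):
    """Depth reduction: drop a layer entirely, directly connecting the
    nodes of its neighbouring layers."""
    return widths[:layer_idx] + widths[layer_idx + 1:]
```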

[0038] Performance of the thus-modified network is evaluated at S240. Performance may be evaluated similarly as described with respect to S215, and as illustrated in FIG. 9. At S245, a reward is determined based on the evaluated performance and the current size of the network. The reward may be determined by evaluation unit 420. For example, a positive reward may be determined if the performance of the modified network has not decreased significantly and the size of the network has decreased. A negative reward may be determined if the performance of the modified network has decreased significantly and/or the size of the network has not decreased.
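The compression reward of S245 can be sketched as follows: positive when the size drops without a significant performance loss, negative otherwise. The tolerance is an assumed knob; the text does not quantify "significantly".

```python
def compression_reward(perf_before, perf_after, size_before, size_after,
                       tolerance=0.02):
    """Positive reward when the network shrank and performance did not
    decrease significantly; negative reward otherwise."""
    shrunk = size_after < size_before
    perf_ok = (perf_before - perf_after) <= tolerance
    if shrunk and perf_ok:
        return 1.0
    return -1.0
```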

[0039] At S250, it is determined whether a compression equilibrium has been reached. A compression equilibrium may represent a point at which further compression of the network would result in unacceptable network performance. Assuming that a compression equilibrium has not been reached, flow proceeds from S250 to S255 to modify the metaparameters based on the reward.

[0040] In some embodiments, reinforcement learning network 400 modifies metaparameters MP4 to metaparameters MP5 based on the reward. For example, if the reward was positive, the metaparameters may be modified to further compress the network. If the reward was negative, the prior modification may be reversed and a new modification may be implemented. Flow then returns to S240 to evaluate the performance of the modified network.

[0041] Flow therefore cycles through S240 to S255 until it is determined that a compression equilibrium has been reached. FIG. 9 illustrates a further modification to change metaparameters MP5 to metaparameters MP6, after which it is determined at S250 that a compression equilibrium has been reached. Flow therefore returns to S210 to train the current network.

[0042] FIG. 10 illustrates the training of network 300 defined by metaparameters MP6, in order to generate trained metaparameters MP6T. Flow then continues as described above to evaluate and grow network 300 at S215, S225 and S230. As illustrated in FIG. 11, the size of network 300 may be increased by modifying metaparameters MP6T to create metaparameters MP7.

[0043] Flow continues as described above to grow and shrink network 300 under control of network 400, until the determination at S232 is positive. At this point, a set of trained metaparameters defining network 300 has been generated. The thus-defined network may then be used to address problems and/or tasks as is known in the art.

[0044] FIG. 12 illustrates system 1 to execute process 200 according to some embodiments. Embodiments are not limited to system 1.

[0045] System 1 includes x-ray imaging system 10, scanner 20, control and processing system 30, and operator terminal 50. Generally, and according to some embodiments, X-ray imaging system 10 acquires two-dimensional X-ray images of a patient volume and scanner 20 acquires surface images of a patient. Control and processing system 30 controls X-ray imaging system 10 and scanner 20, and receives the acquired images therefrom. Control and processing system 30 processes the images. Such processing may be based on user input received by terminal 50 and provided to control and processing system 30 by terminal 50. The processed image may also be provided to a neural network generated according to the present embodiments.

[0046] Imaging system 10 comprises a CT scanner including X-ray source 11 for emitting X-ray beam 12 toward opposing radiation detector 13. Embodiments are not limited to CT data or to CT scanners. X-ray source 11 and radiation detector 13 are mounted on gantry 14 such that they may be rotated about a center of rotation of gantry 14 while maintaining the same physical relationship therebetween.

[0047] Radiation source 11 may comprise any suitable radiation source, including but not limited to a Gigalix™ x-ray tube. In some embodiments, radiation source 11 emits electron, photon or other type of radiation having energies ranging from 50 to 150 keV.

[0048] Radiation detector 13 may comprise any system to acquire an image based on received x-ray radiation. In some embodiments, radiation detector 13 is a flat-panel imaging device using a scintillator layer and solid-state amorphous silicon photodiodes deployed in a two-dimensional array. The scintillator layer receives photons and generates light in proportion to the intensity of the received photons. The array of photodiodes receives the light and records the intensity of received light as stored electrical charge.

[0049] In other embodiments, radiation detector 13 converts received photons to electrical charge without requiring a scintillator layer. The photons are absorbed directly by an array of amorphous selenium photoconductors. The photoconductors convert the photons directly to stored electrical charge. Radiation detector 13 may comprise a CCD or tube-based camera, including a light-proof housing within which are disposed a scintillator, a mirror, and a camera.

[0050] The charge developed and stored by radiation detector 13 represents radiation intensities at each location of a radiation field produced by x-rays emitted from radiation source 11. The radiation intensity at a particular location of the radiation field represents the attenuative properties of mass (e.g., body tissues) lying along a divergent line between radiation source 11 and the particular location of the radiation field. The set of radiation intensities acquired by radiation detector 13 may therefore represent a two-dimensional projection image of this mass.

[0051] To generate X-ray images, patient 15 is positioned on bed 16 to place a portion of patient 15 between X-ray source 11 and radiation detector 13. Next, X-ray source 11 and radiation detector 13 are moved to various projection angles with respect to patient 15 by using rotation drive 17 to rotate gantry 14 around cavity 18 in which patient 15 is positioned. At each projection angle, X-ray source 11 is powered by high-voltage generator 19 to transmit X-ray radiation 12 toward detector 13. Detector 13 receives the radiation and produces a set of data (i.e., a raw X-ray image) for each projection angle.

[0052] Scanner 20 may comprise a depth camera. Scanner 20 may acquire depth images as described above. A depth camera may comprise a structured light-based camera (e.g., Microsoft Kinect or ASUS Xtion), a stereo camera, or a time-of-flight camera (e.g., Creative TOF camera) according to some embodiments.

[0053] System 30 may comprise any general-purpose or dedicated computing system. Accordingly, system 30 includes one or more processors 31 configured to execute processor- executable program code to cause system 30 to operate as described herein, and storage device 40 for storing the program code. Storage device 40 may comprise one or more fixed disks, solid-state random access memory, and/or removable media (e.g., a thumb drive) mounted in a corresponding interface (e.g., a USB port).

[0054] Storage device 40 stores program code of system control program 41. One or more processors 31 may execute system control program 41 to perform process 200 according to some embodiments. System control program 41 may also or alternatively be executed to move gantry 14, to move table 16, to cause radiation source 11 to emit radiation, to control detector 13 to acquire an image, to control scanner 20 to acquire an image, and to perform any other function. In this regard, system 30 includes gantry interface 32, radiation source interface 33 and depth scanner interface 35 for communication with corresponding units of system 10.

[0055] Two-dimensional X-ray data acquired from system 10 or from external sources may be stored in data storage device 40 as training data 43 or validation data 44, in DICOM or another data format. Training data 43 and validation data 44 may also include three-dimensional CT images reconstructed from corresponding two-dimensional CT images as is known in the art. Training data 43 and validation data 44 may also comprise two-dimensional depth images acquired by scanner 20 or from external sources. In some embodiments, a two-dimensional depth image may be associated with a set of CT images, in that the associated image/frames were acquired at similar times while patient 15 was lying in substantially the same position.

[0056] Network metaparameters 45 may comprise sets of metaparameters defining a network as described above, while network parameters 46 comprise trained parameter values associated with particular network metaparameters 45.

[0057] Terminal 50 may comprise a display device and an input device coupled to system 30. Terminal 50 may display any acquired images or network output, and may receive user input for controlling display of the images, operation of imaging system 10, and/or the processing described herein. In some embodiments, terminal 50 is a separate computing device such as, but not limited to, a desktop computer, a laptop computer, a tablet computer, and a smartphone.

[0058] Each of system 10, scanner 20, system 30 and terminal 50 may include other elements which are necessary for the operation thereof, as well as additional elements for providing functions other than those described herein. Embodiments are not limited to a single system performing each of these functions. For example, system 10 may be controlled by a dedicated control system, with the acquired frames and images being provided to a separate image processing system over a computer network or via a physical storage medium (e.g., a DVD).

[0059] Those in the art will appreciate that various adaptations and modifications of the above-described embodiments can be configured without departing from the scope and spirit of the claims. Therefore, it is to be understood that the claims may be practiced other than as specifically described herein.