Title:
SEMI-AUTOMATED IMAGE ANNOTATION FOR MACHINE LEARNING
Document Type and Number:
WIPO Patent Application WO/2021/226296
Kind Code:
A1
Abstract:
Systems and methods for automatically labeling images for use in training a machine learning network include receiving a digital image; selecting a class of object to be detected in the digital image; training a machine learning model to identify the selected class of object in the digital image; and employing the machine learning model to identify an instance of an object from the selected class of object in the digital image and annotate the digital image to indicate the presence of the instance of the object and the region of the digital image containing the instance of the object. A list may be specified containing a plurality of classes, each class corresponding to an object of interest; the machine learning model is trained to identify a current class, and the received image is annotated based on the machine learning model of the current class.

Inventors:
GURBAXANI RAGHAV (US)
YU DAN (US)
HAIDER TOM (US)
MICHAHELLES FLORIAN (US)
Application Number:
PCT/US2021/031005
Publication Date:
November 11, 2021
Filing Date:
May 06, 2021
Assignee:
SIEMENS AG (DE)
SIEMENS CORP (US)
International Classes:
G06K9/62
Foreign References:
US20170144378A1 (2017-05-25)
Other References:
ADHIKARI BISHWO ET AL: "Faster Bounding Box Annotation for Object Detection in Indoor Scenes", 2018 7TH EUROPEAN WORKSHOP ON VISUAL INFORMATION PROCESSING (EUVIP), IEEE, 26 November 2018 (2018-11-26), pages 1 - 6, XP033499741, DOI: 10.1109/EUVIP.2018.8611732
Attorney, Agent or Firm:
BRINK, John, D., Jr. (US)
Claims:
CLAIMS

What is claimed is:

1. A computer-implemented method for automatically labeling images for use in training a machine learning network comprising: receiving a digital image; selecting a class of object to be detected in the digital image; training an object recognition model to identify the selected class of object in the digital image; and employing the object recognition model to identify an instance of an object from the selected class of object in the digital image and annotate the digital image to indicate the presence of the instance of the object and the region of the digital image containing the instance of the object.

2. The computer-implemented method of claim 1, further comprising: specifying a list containing a plurality of classes, each class corresponding to an object of interest; training the object recognition model to identify a current class; and annotating the received image based on the object recognition model of the current class.

3. The computer-implemented method of claim 2, further comprising: identifying, in the list containing the plurality of classes, a first group of classes from the plurality of classes containing classes where a pre-trained machine learning model already exists for identifying the corresponding object.

4. The computer-implemented method of claim 3, further comprising: identifying, in the list containing the plurality of classes, a second group of classes from the plurality of classes that do not have an existing pre-trained machine learning model for the object corresponding to the class in the second group of classes; and training an object recognition model to detect the object corresponding to the class in the second group of classes.

5. The computer-implemented method of claim 4, further comprising: training the object recognition model using one of: a one-shot object recognition model; and a few-shot object recognition model.

6. The computer-implemented method of claim 5, further comprising: when training the object recognition model with a few-shot detector, providing a plurality of reference images containing an object of interest to a user; instructing the user to draw a bounding box around the object of interest in the reference image; and training the network to detect the object of interest based on the reference images.

7. The computer-implemented method of claim 5, further comprising: when training the object recognition model with the one-shot detector and no reference image is available, presenting the received digital image to a user; instructing the user to draw a bounding box around an object of interest corresponding to the selected class of object to be detected; and training the network to detect the object of interest based on the received digital image.

8. The computer-implemented method of claim 5, further comprising: applying the trained object recognition model to the received digital image; and annotating the received digital image based on a determination of the trained object recognition model.

9. The computer-implemented method of claim 8 further comprising: training the object recognition model using the annotated digital image.

10. The computer-implemented method of claim 1, further comprising: identifying the selected class of object by defining a bounding box around a region of the received digital image; displaying the bounding box to a user; and receiving an input from the user indicating one of an acceptance, a rejection, and an adjustment to the bounding box.

11. A system for automatically labeling data in images for use in training a machine learning network comprising: a computer processor in communication with a non-transitory memory, the memory containing instructions that when executed by the computer processor cause the computer processor to: receive a digital image; select a class of object to be detected in the digital image; train an object recognition model to identify the selected class of object in the digital image; and employ the object recognition model to identify an instance of an object from the selected class of object in the digital image and annotate the digital image to indicate the presence of the instance of the object and the region of the digital image containing the instance of the object.

12. The system of claim 11, the memory containing instructions that when executed by the computer processor further cause the computer processor to: specify a list containing a plurality of classes, each class corresponding to an object of interest; train the object recognition model to identify a current class; and annotate the received image based on the object recognition model of the current class.

13. The system of claim 12, the memory containing instructions that when executed by the computer processor further cause the computer processor to: identify, in the list containing the plurality of classes, a first group of classes from the plurality of classes containing classes where a pre-trained machine learning model already exists for identifying the corresponding object.

14. The system of claim 13, the memory containing instructions that when executed by the computer processor further cause the computer processor to: identify, in the list containing the plurality of classes, a second group of classes from the plurality of classes that do not have an existing pre-trained machine learning model for the object corresponding to the class in the second group of classes; and train an object recognition model to detect the object corresponding to the class in the second group of classes.

15. The system of claim 14, the memory containing instructions that when executed by the computer processor further cause the computer processor to: train the object recognition model using one of: a one-shot object recognition model; and a few-shot object recognition model.

16. The system of claim 15, the memory containing instructions that when executed by the computer processor further cause the computer processor to: when training the object recognition model with a few-shot detector, provide a plurality of reference images containing an object of interest to a user; instruct the user to draw a bounding box around the object of interest in the reference image; and train the network to detect the object of interest based on the reference images.

17. The system of claim 15, the memory containing instructions that when executed by the computer processor further cause the computer processor to: when training the object recognition model with the one-shot detector and no reference image is available, present the received digital image to a user; instruct the user to draw a bounding box around an object of interest corresponding to the selected class of object to be detected; and train the network to detect the object of interest based on the received digital image.

18. The system of claim 15, the memory containing instructions that when executed by the computer processor further cause the computer processor to: apply the trained object recognition model to the received digital image; and annotate the received digital image based on a determination of the trained object recognition model.

19. The system of claim 18, the memory containing instructions that when executed by the computer processor further cause the computer processor to: train the object recognition model using the annotated digital image.

20. The system of claim 11, the memory containing instructions that when executed by the computer processor further cause the computer processor to: identify the selected class of object by defining a bounding box around a region of the received digital image; display the bounding box to a user; and receive an input from the user indicating one of an acceptance, a rejection, and an adjustment to the bounding box.

Description:
SEMI-AUTOMATED IMAGE ANNOTATION FOR MACHINE LEARNING

TECHNICAL FIELD

[0001] This disclosure relates to machine learning. More particularly, this disclosure relates to methods for training machine learning networks.

BACKGROUND

[0002] Obtaining labeled data is a fundamental step in machine learning for supervised learning problems. Acquiring the data and labeling it with ground truth labels is still largely manual and very time consuming. Despite recent advancements in machine learning, obtaining labeled data in sufficient quality and quantity is still considered a bottleneck. Embodiments of this disclosure provide improved methods for expediting the process of labeling images for machine learning, including machine learning techniques directed toward machine vision applications.

SUMMARY

[0003] Embodiments described in this disclosure include a computer-implemented method for automatically labeling images for use in training a machine learning network, wherein the method includes receiving a digital image; selecting a class of object to be detected in the digital image; training an object recognition model to identify the selected class of object in the digital image; and employing the object recognition model to identify an instance of an object from the selected class of object in the digital image and annotate the digital image to indicate the presence of the instance of the object and the region of the digital image containing the instance of the object.

[0004] Further embodiments include specifying a list containing a plurality of classes, each class corresponding to an object of interest; training the object recognition model to identify a current class; and annotating the received image based on the object recognition model of the current class. Additionally, the method may identify, in the list containing the plurality of classes, a first group of classes from the plurality of classes containing classes where a pre-trained machine learning model already exists for identifying the corresponding object. In some embodiments, the method further identifies, in the list containing the plurality of classes, a second group of classes from the plurality of classes that do not have an existing pre-trained machine learning model for the object corresponding to the class in the second group of classes; and trains an object recognition model to detect the object corresponding to the class in the second group of classes.

[0005] In alternative embodiments, the method may further train the object recognition model using one of: a one-shot object recognition model; and a few-shot object recognition model. According to some embodiments, when training the object recognition model with a few-shot detector, a plurality of reference images containing an object of interest is provided to a user, the user is instructed to draw a bounding box around the object of interest in the reference images, and the network is trained to detect the object of interest based on the reference images. In other embodiments, when training the object recognition model with the one-shot detector and no reference image is available, the received digital image is presented to a user, the user is instructed to draw a bounding box around an object of interest corresponding to the selected class of object to be detected, and the network is trained to detect the object of interest based on the received digital image. In alternative embodiments, the method further applies the trained object recognition model to the received digital image and annotates the received digital image based on a determination of the trained object recognition model. The object recognition model may then be trained using the annotated digital image. In another embodiment, the selected class of object is identified by defining a bounding box around a region of the received digital image; displaying the bounding box to a user; and receiving an input from the user indicating one of an acceptance, a rejection, and an adjustment to the bounding box.

[0006] Other embodiments describe a system including a computer processor in communication with a memory to perform the computer-implemented methods described hereinabove.

BRIEF DESCRIPTION OF THE DRAWINGS

[0007] The foregoing and other aspects of the present invention are best understood from the following detailed description when read in connection with the accompanying drawings. For the purpose of illustrating the invention, there is shown in the drawings embodiments that are presently preferred, it being understood, however, that the invention is not limited to the specific instrumentalities disclosed. Included in the drawings are the following Figures:

[0008] FIG. 1 is a conventional process flow for training a machine learning network.

[0009] FIG. 2 is an improved process flow for training a machine learning network.

[0010] FIG. 3 is a flow diagram for training a machine learning network according to aspects of embodiments of this disclosure.

[0011] FIG. 4 is a process flow diagram for a method of automatically labeling data for use in training a machine learning network according to aspects of embodiments in this disclosure.

[0012] FIG. 5 is a block diagram of a computer system that may perform aspects of training a machine learning network according to embodiments described in this disclosure.

DETAILED DESCRIPTION

[0013] Conventional machine vision models require large numbers of labeled images. These images are typically labeled manually by a human user. Labeling images for supervised machine learning is expensive and time consuming due to the amount of manual human labor required. An image must be displayed to a user. The user must identify objects of interest in the image and indicate the location of each object in the image. For example, the user may use a mouse or other input device to draw a bounding box around the object of interest. This allows the machine learning network to focus on the relevant pixels in the image for learning to recognize the object. Finally, the user must label the object of interest. By way of example, for an image containing a cat, the user must locate the cat in the image, highlight the area of the image containing the cat, then provide a label for the area to indicate that the focus area contains a “cat”.

[0014] Not surprisingly, for large datasets containing millions of images, this process requires thousands of man-hours of manual labor. The cost of this labor may average as much as two dollars per image. Depending on the application, the images may contain specialized information requiring skilled labor for object identification and labeling. This is true for a complex scene, such as a manufacturing plant, where the components are highly specific and the images may contain many components. An observer needs to be knowledgeable about the components and the plant to properly recognize and label the data in the image. In any case, the labeling of objects is highly repetitive and time-consuming. The task of labeling images may be outsourced, but this approach may raise privacy and security concerns, as the images may contain proprietary information that should not be disclosed to the general public. The costs of labeling data cannot be ignored and require careful consideration when evaluating the overall costs of implementing a machine learning application.

[0015] FIG. 1 is a flow diagram of a conventional process of labeling images for a machine learning process. As previously discussed, labeling of images is largely performed manually: a human tags many images, typically drawing a bounding box or segmentation map around the regions of interest. Once labeled, the data may then be used to train supervised learning algorithms. Data acquisition 110 involves providing images for data labeling. The provided images are data labeled 120 by a human user. The human user identifies one or more objects of interest in the image, tags the objects in the image by drawing a bounding box or segmentation mask, and identifies each object. The labeled images are then prepared 130 for input to a machine learning algorithm. The machine learning algorithm may include a neural network or any other technique for machine learning. The labeled images are then input to the machine learning algorithm for training.

[0016] The process of data labeling 120 may be approached in a number of ways. First, internal labeling may be used. For internal labeling, the company or organization utilizes internal resources, such as data scientists or domain experts, who manually label the images. Second, the data labeling 120 may be outsourced. Outsourcing provides the image data to a third party, which supplies the human resources required to label the data in the images. Finally, crowdsourcing is another approach to data labeling, where the labeling is provided as a service, such as an internet service, which distributes the data to a community and allows the members of that community to perform the data labeling.

[0017] FIG. 2 is a process flow that provides improved methods for training a machine learning network by addressing the challenges of providing quality labeled data for supervised learning. In general, the machine learning and processing are similar to the process described above with reference to FIG. 1. Methods according to embodiments of this disclosure focus on data labeling 220. According to embodiments of improved data labeling and training of a machine learning network, artificial intelligence is used to augment human input to provide labeled image data efficiently and effectively for training machine learning applications.

[0018] A system is provided that uses pre-trained machine learning models to automate labeling of images. This achieves novel methods for training machine learning models while minimizing the effort, time and costs associated with manual labeling techniques.

[0019] In embodiments, the user labels one or a few images. Based on this small sample of labeled images, the system learns which objects to mark in the images and automatically detects instances of the objects of interest in the images.

[0020] FIG. 3 is a flow diagram for labeling training data in images according to aspects of embodiments described in this disclosure.

[0021] Each class 303 corresponding to an object of interest to be identified in the image is determined. Initially, a user is asked to provide a list of all classes that appear in the set of images they would like to have labeled. The classes may be divided into known objects, for which a conventionally trained machine learning model is available, and unknown objects, for which a trained classification machine learning model is not yet available.
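
As an illustrative sketch only (the disclosure prescribes no particular implementation), the split between known and unknown classes can be expressed as a lookup against a library of pre-trained models; the library contents and names below are assumptions:

```python
# Illustrative sketch: partition the user-supplied class list into classes
# with an existing pre-trained detector ("known") and the rest ("unknown").
# KNOWN_MODEL_LIBRARY is a placeholder, not part of the disclosure.
KNOWN_MODEL_LIBRARY = {"car", "pedestrian", "bicycle"}

def partition_classes(requested_classes):
    """Split the requested classes into (known, unknown) groups."""
    known = [c for c in requested_classes if c in KNOWN_MODEL_LIBRARY]
    unknown = [c for c in requested_classes if c not in KNOWN_MODEL_LIBRARY]
    return known, unknown

known, unknown = partition_classes(["car", "conveyor_belt", "pedestrian"])
print(known)    # ['car', 'pedestrian']
print(unknown)  # ['conveyor_belt']
```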

[0022] Each image 301 is then processed: each novel image 305 is handled by the image labeling process 300 one at a time. For each image 301, the identified classes 303 are looped over according to the following flow:

[0023] If the current class 303 represents a known object that already has a trained classification model 307, the image is presented to a known object detector 309 module, that is, an existing detection/segmentation deep learning model that localizes and detects objects (at either the pixel or object level) and provides the detected classes. This module examines the image 305 for objects belonging to the current class.
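
One plausible realization of the known object detector 309, sketched here with a pre-trained torchvision Faster R-CNN standing in for whatever model the library actually holds (the model choice and score threshold are assumptions, not part of the disclosure):

```python
# Hedged sketch: suggest boxes for a known class using a pre-trained
# detector. torchvision's COCO-trained Faster R-CNN is a stand-in model.
import torch
import torchvision
from torchvision.models.detection import FasterRCNN_ResNet50_FPN_Weights

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights=weights)
model.eval()
categories = weights.meta["categories"]  # COCO class names

def detect_known(image_tensor, class_name, score_threshold=0.5):
    """Return suggested boxes (x0, y0, x1, y1) for `class_name`.

    `image_tensor` is a CHW float tensor scaled to [0, 1].
    """
    with torch.no_grad():
        output = model([image_tensor])[0]
    suggestions = []
    for box, label, score in zip(output["boxes"], output["labels"], output["scores"]):
        if categories[label] == class_name and score >= score_threshold:
            suggestions.append([round(v, 1) for v in box.tolist()])
    return suggestions
```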

[0024] A user is then provided with a suggestion on bounding boxes and/or a segmentation mask 311, which the user 312 may accept, decline, or adjust. The output is the final annotation of the current image for the current class, including the class identifier, a bounding box containing the object, and/or a segmentation mask identifying the region of interest in the image 305.
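
The accept/decline/adjust interaction might reduce to a sketch like the following; the console prompts and box format are illustrative assumptions:

```python
# Minimal sketch of the user review step: accept, reject, or adjust a
# suggested bounding box. Boxes are (x_min, y_min, x_max, y_max) pixels.
def review_box(image_id, box):
    """Return the accepted or adjusted box, or None if rejected."""
    print(f"Image {image_id}: suggested box {box}")
    choice = input("accept (a) / reject (r) / adjust (j)? ").strip().lower()
    if choice == "a":
        return box
    if choice == "r":
        return None
    # Adjust: the user types four replacement coordinates.
    coords = input("new box as 'x_min y_min x_max y_max': ").split()
    return tuple(int(v) for v in coords)
```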

[0025] If the current class being evaluated contains objects that do not have a corresponding trained object detector, the image is presented for unknown object detection 315 to either a one-shot object detector 317 or a few-shot object detector 321. The one-shot object detector 317 takes a reference image (not shown) of an object and aims at finding similar objects in the query image 305. Accordingly, if a reference image of the object to be detected is not available, a user is asked to provide an initial bounding box for the class currently being processed. The one-shot detector 317 may use machine vision techniques to recognize objects in the received image 305. For example, one methodology for object recognition is pattern recognition: a pattern associated with the object to be detected is used to analyze the image 305 and to identify possible instances of the object contained in the image 305. The bounding box is then drawn around the pattern detected by the one-shot detector 317. Other techniques for recognition may be used by the one-shot detector 317; in some embodiments, machine learning techniques, including but not limited to deep learning techniques, may be used. The contents of this initialized bounding box serve as a reference image. Once one or more reference images that contain the object currently of interest are encountered, the one-shot detector 317 may be used to provide the user with suggestions on the location of the objects of interest in subsequent images. In the case where a few reference images containing the object of interest are available, the few-shot detector 321 may be used. For the few-shot object detector 321, several reference images of an object, for example 10 or more, are used to train a network, which can then detect objects of the given class. That is, once enough reference images are identified for a given object, a network can be trained to suggest or provide bounding boxes containing the object of interest in subsequently processed images.
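
The pattern-recognition approach mentioned for the one-shot detector 317 could, for example, be realized with normalized cross-correlation template matching; OpenCV is an assumed stand-in here, and the confidence threshold is illustrative:

```python
# One possible one-shot detector: match a single grayscale reference patch
# against the query image and suggest the best-scoring location as a box.
import cv2

def one_shot_detect(image_gray, reference_gray, threshold=0.8):
    """Return a suggested box (x0, y0, x1, y1), or None if no match."""
    result = cv2.matchTemplate(image_gray, reference_gray, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    if max_val < threshold:
        return None  # no confident match; fall back to asking the user
    h, w = reference_gray.shape[:2]
    x, y = max_loc
    return (x, y, x + w, y + h)
```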

[0026] When a determination of object location is performed by the one-shot object detector 317 or the few-shot object detector 321, the suggested bounding box may be presented to a user 325. The user 325 looks at the suggested bounding box and may accept the current bounding area, reject the bounding box for not containing the object of interest, or adjust the bounding box to better capture the region of the image containing the object of interest. The user’s 325 determination 323 is then used to output the bounding box annotation 327 of the current image for the current class.

[0027] As an optional additional step, after defining the bounding boxes 327 for all objects in an image 305, the bounding boxes may be further refined using a refinement module 331. The refinement module 331 provides some preprocessing, or utilizes user input 333, to generate a finer polygon or segmentation mask from an object’s coarse bounding box. The output of the refinement process 329 is the final annotation of the current image for the current class, containing the accepted bounding boxes or segmentation masks 335. The final segmentation mask or bounding boxes 335 of an annotated image are used as reference images for future query images 337, either to train the few-shot detector 321 or to provide a ground truth for the one-shot detector 317.
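
A refinement module in the spirit of 331 could grow a segmentation mask out of a coarse box; the sketch below uses OpenCV's GrabCut as one stand-in technique, which the disclosure does not name:

```python
# Hedged sketch: refine a coarse bounding box into a binary foreground
# mask with GrabCut. The iteration count is illustrative.
import cv2
import numpy as np

def refine_box_to_mask(image_bgr, box, iterations=5):
    """Return a uint8 mask (1 = object) for the region inside `box`."""
    x0, y0, x1, y1 = box
    rect = (x0, y0, x1 - x0, y1 - y0)      # GrabCut wants (x, y, w, h)
    mask = np.zeros(image_bgr.shape[:2], np.uint8)
    bgd_model = np.zeros((1, 65), np.float64)
    fgd_model = np.zeros((1, 65), np.float64)
    cv2.grabCut(image_bgr, mask, rect, bgd_model, fgd_model,
                iterations, cv2.GC_INIT_WITH_RECT)
    # Keep pixels GrabCut marked as definite or probable foreground.
    fg = (mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD)
    return fg.astype(np.uint8)
```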

[0028] After a sufficient number of unknown objects 315 are labeled 335, the new labels may be used to train a classic object detection model 337, or to expand existing known object detection models 339. This further automates future image labeling work.
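
The promotion of a well-labeled class into the set of known detectors could be gated by a simple count of accepted annotations; the threshold and trainer interface below are assumptions:

```python
# Sketch of the promotion step: once a class has enough accepted labels,
# train a conventional detector and move the class to the known group.
PROMOTION_THRESHOLD = 200  # illustrative; the disclosure sets no number

def maybe_promote(class_name, annotations, known_library, train_detector):
    """Train a classic detector once a class has enough labeled examples."""
    examples = [a for a in annotations if a["class"] == class_name]
    if len(examples) < PROMOTION_THRESHOLD:
        return None
    model = train_detector(class_name, examples)  # caller-supplied trainer
    known_library.add(class_name)  # future images take the known-object path
    return model
```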

[0029] Embodiments of this disclosure provide improved methods of data labeling, including automatic labeling for all known objects (e.g., cars, pedestrians, etc.). For unknown objects, it is not required that a user manually draw bounding boxes or segmentation maps; foreground detection techniques already achieve that. For new, unseen objects, after one or a few instances are labeled, embodiments of this disclosure provide systems that mark all subsequent instances of that object. During labeling, systems according to embodiments described herein continuously learn and improve. This greatly reduces labeling time and effort. Systems and methods according to embodiments allow scaling to a large number of images.

[0030] Improvements provided by the embodiments herein greatly reduce the complicated configuration required when commissioning machine vision cameras, making it possible for automation engineers less experienced with machine vision to quickly commission a machine vision-driven, high-performance production system with little effort. Because of the lowered skill requirements, machine vision can be used for more complicated production jobs, as long as a model is available in the model library. This removes one of the biggest barriers to applying machine vision in different industries and helps boost growth of machine vision products by providing a technical solution that uses machine learning to automate the process of data labeling, which was formerly impracticable and required manual human intervention.

[0031] FIG. 4 is a process flow diagram for a method of automatically providing data labels for images according to aspects of embodiments of this disclosure. The process begins 401 by presenting a new image to be labeled. One or more classes of objects are identified. For each image to be labeled, each identified class is checked to determine whether the image contains that class of object. When all classes have been checked, the process moves to the next image 403. Otherwise, if there is a remaining class to be checked 405, the process for the current image continues. First, the current class is compared to a list of known object detection models. Known object detection models include conventionally trained machine learning networks that were previously trained to detect and label the known object class. If the current class is a known object, the image is provided to the pre-trained known object detector 409. The known object detector 409 detects any instance of the current class and draws a bounding box around each instance. Optionally, the suggested bounding boxes may be provided to a user 411. The user may review the bounding boxes and accept a bounding box if it is accurate, reject it if it does not include an object instance of the current class, or adjust it to fully contain the object with the minimum amount of background detail 415. The known object detector 409 outputs the suggested and reviewed bounding boxes to label the current image. The known object detector 409 is pre-trained and may include a segmentation module for generating a segmentation mask for the object. This may be included in the output 419 for labeling the current image.

[0032] If the current class is not known and there is no pre-trained object detector for the current class, then the image is provided to either a one-shot object detector or a few-shot object detector 413. Reference images containing the class of interest are identified by a user and provided to the few-shot detector, which examines the reference images and suggests bounding boxes for detected objects in the current image. When no reference image is available, a one-shot detector will use the current image as a reference image, attempt to discover objects in the image, and suggest bounding boxes associated with any detected objects. The suggested bounding boxes are presented to a user, who may accept a bounding box if it is accurate, reject it if it does not include an object instance of the current class, or adjust it to fully contain the object with the minimum amount of background detail 415. An additional refinement model may be used to accept user input 417. The refinement model allows the user to more specifically define the boundaries of a detected object to provide a segmentation mask for the object. The refined segmentation map is provided as output for the current image and current object class 419. The newly annotated image for the current class may be provided back 421 for training of the unknown object detector 413, allowing the system to update and learn and thereby increase the number of known object detectors. When there are no more classes to be checked 403 and the last image is reached 423, the data labeling is complete.
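
Putting the FIG. 4 flow together, the per-image, per-class routing might look like the sketch below; the detector interfaces and the review_box() helper echo the earlier illustrative snippets and are not defined by the disclosure:

```python
# End-to-end sketch of the labeling loop: route each class to the known
# detector or the one-/few-shot path, then pass suggestions to user review.
def label_dataset(images, classes, known_library, detectors, review_box):
    """Yield (image_id, class_name, box) for every accepted annotation.

    `detectors` maps 'known', 'one_shot', 'few_shot' to callables that
    each return a list of suggested boxes for one image.
    """
    references = {c: [] for c in classes}  # accepted boxes become references
    for image_id, image in images:
        for class_name in classes:
            if class_name in known_library:
                suggestions = detectors["known"](image, class_name)
            elif references[class_name]:
                suggestions = detectors["few_shot"](image, references[class_name])
            else:
                suggestions = detectors["one_shot"](image, class_name)
            for box in suggestions:
                accepted = review_box(image_id, box)
                if accepted is not None:
                    references[class_name].append(accepted)
                    yield (image_id, class_name, accepted)
```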

[0033] FIG. 5 illustrates an exemplary computing environment 500 within which embodiments of the invention may be implemented. Computers and computing environments, such as computer system 510 and computing environment 500, are known to those of skill in the art and thus are described briefly here.

[0034] As shown in FIG. 5, the computer system 510 may include a communication mechanism such as a system bus 521 or other communication mechanism for communicating information within the computer system 510. The computer system 510 further includes one or more processors 520 coupled with the system bus 521 for processing the information.

[0035] The processors 520 may include one or more central processing units (CPUs), graphical processing units (GPUs), or any other processor known in the art. More generally, a processor as used herein is a device for executing machine-readable instructions stored on a computer readable medium, for performing tasks and may comprise any one or combination of, hardware and firmware. A processor may also comprise memory storing machine-readable instructions executable for performing tasks. A processor acts upon information by manipulating, analyzing, modifying, converting, or transmitting information for use by an executable procedure or an information device, and/or by routing the information to an output device. A processor may use or comprise the capabilities of a computer, controller, or microprocessor, for example, and be conditioned using executable instructions to perform special purpose functions not performed by a general-purpose computer. A processor may be coupled (electrically and/or as comprising executable components) with any other processor enabling interaction and/or communication there-between. A user interface processor or generator is a known element comprising electronic circuitry or software or a combination of both for generating display images or portions thereof. A user interface comprises one or more display images enabling user interaction with a processor or other device.

[0036] Continuing with reference to FIG. 5, the computer system 510 also includes a system memory 530 coupled to the system bus 521 for storing information and instructions to be executed by processors 520. The system memory 530 may include computer readable storage media in the form of volatile and/or nonvolatile memory, such as read only memory (ROM) 531 and/or random-access memory (RAM) 532. The RAM 532 may include other dynamic storage device(s) (e.g., dynamic RAM, static RAM, and synchronous DRAM). The ROM 531 may include other static storage device(s) (e.g., programmable ROM, erasable PROM, and electrically erasable PROM). In addition, the system memory 530 may be used for storing temporary variables or other intermediate information during the execution of instructions by the processors 520. A basic input/output system 533 (BIOS) containing the basic routines that help to transfer information between elements within computer system 510, such as during start-up, may be stored in the ROM 531. RAM 532 may contain data and/or program modules that are immediately accessible to and/or presently being operated on by the processors 520. System memory 530 may additionally include, for example, operating system 534, application programs 535, other program modules 536 and program data 537.

[0037] The computer system 510 also includes a disk controller 540 coupled to the system bus 521 to control one or more storage devices for storing information and instructions, such as a magnetic hard disk 541 and a removable media drive 542 (e.g., floppy disk drive, compact disc drive, tape drive, and/or solid-state drive). Storage devices may be added to the computer system 510 using an appropriate device interface (e.g., a small computer system interface (SCSI), integrated device electronics (IDE), Universal Serial Bus (USB), or FireWire).

[0038] The computer system 510 may also include a display controller 565 coupled to the system bus 521 to control a display or monitor 566, such as a cathode ray tube (CRT) or liquid crystal display (LCD), for displaying information to a computer user. The computer system includes an input interface 560 and one or more input devices, such as a keyboard 562 and a pointing device 561, for interacting with a computer user and providing information to the processors 520. The pointing device 561, for example, may be a mouse, a light pen, a trackball, or a pointing stick for communicating direction information and command selections to the processors 520 and for controlling cursor movement on the display 566. The display 566 may provide a touch screen interface which allows input to supplement or replace the communication of direction information and command selections by the pointing device 561. In some embodiments, an augmented reality device 567 that is wearable by a user, may provide input/output functionality allowing a user to interact with both a physical and virtual world. The augmented reality device 567 is in communication with the display controller 565 and the user input interface 560 allowing a user to interact with virtual items generated in the augmented reality device 567 by the display controller 565. The user may also provide gestures that are detected by the augmented reality device 567 and transmitted to the user input interface 560 as input signals.

[0039] The computer system 510 may perform a portion or all of the processing steps of embodiments of the invention in response to the processors 520 executing one or more sequences of one or more instructions contained in a memory, such as the system memory 530. Such instructions may be read into the system memory 530 from another computer readable medium, such as a magnetic hard disk 541 or a removable media drive 542. The magnetic hard disk 541 may contain one or more datastores and data files used by embodiments of the present invention. Datastore contents and data files may be encrypted to improve security. The processors 520 may also be employed in a multi-processing arrangement to execute the one or more sequences of instructions contained in system memory 530. In alternative embodiments, hard-wired circuitry may be used in place of or in combination with software instructions. Thus, embodiments are not limited to any specific combination of hardware circuitry and software.

[0040] As stated above, the computer system 510 may include at least one computer readable medium or memory for holding instructions programmed according to embodiments of the invention and for containing data structures, tables, records, or other data described herein. The term “computer readable medium” as used herein refers to any medium that participates in providing instructions to the processors 520 for execution. A computer readable medium may take many forms including, but not limited to, non-transitory, non-volatile media, volatile media, and transmission media. Non-limiting examples of non-volatile media include optical disks, solid state drives, magnetic disks, and magneto-optical disks, such as magnetic hard disk 541 or removable media drive 542. Non-limiting examples of volatile media include dynamic memory, such as system memory 530. Non-limiting examples of transmission media include coaxial cables, copper wire, and fiber optics, including the wires that make up the system bus 521. Transmission media may also take the form of acoustic or light waves, such as those generated during radio wave and infrared data communications.

[0041] The computing environment 500 may further include the computer system 510 operating in a networked environment using logical connections to one or more remote computers, such as remote computing device 580. Remote computing device 580 may be a personal computer (laptop or desktop), a mobile device, a server, a router, a network PC, a peer device, or other common network node, and typically includes many or all of the elements described above relative to computer system 510. When used in a networking environment, computer system 510 may include modem 572 for establishing communications over a network 571, such as the Internet. Modem 572 may be connected to system bus 521 via user network interface 570, or via another appropriate mechanism.

[0042] Network 571 may be any network or system generally known in the art, including the Internet, an intranet, a local area network (LAN), a wide area network (WAN), a metropolitan area network (MAN), a direct connection or series of connections, a cellular telephone network, or any other network or medium capable of facilitating communication between computer system 510 and other computers (e.g., remote computing device 580). The network 571 may be wired, wireless, or a combination thereof. Wired connections may be implemented using Ethernet, Universal Serial Bus (USB), RJ-6, or any other wired connection generally known in the art. Wireless connections may be implemented using Wi-Fi, WiMAX, Bluetooth, infrared, cellular networks, satellite, or any other wireless connection methodology generally known in the art. Additionally, several networks may work alone or in communication with each other to facilitate communication in the network 571.

[0043] A graphical user interface (GUI), as used herein, comprises one or more display images, generated by a display processor and enabling user interaction with a processor or other device and associated data acquisition and processing functions. The GUI also includes an executable procedure or executable application. The executable procedure or executable application conditions the display processor to generate signals representing the GUI display images. These signals are supplied to a display device which displays the image for viewing by the user. The processor, under control of an executable procedure or executable application, manipulates the GUI display images in response to signals received from the input devices. In this way, the user may interact with the display image using the input devices, enabling user interaction with the processor or other device.

[0044] An executable application, as used herein, comprises code or machine-readable instructions for conditioning the processor to implement predetermined functions, such as those of an operating system, a context data acquisition system or other information processing system, for example, in response to user command or input. An executable procedure is a segment of code or machine-readable instruction, subroutine, or other distinct section of code or portion of an executable application for performing one or more particular processes. These processes may include receiving input data and/or parameters, performing operations on received input data and/or performing functions in response to received input parameters, and providing resulting output data and/or parameters.

[0045] The functions and process steps herein may be performed automatically or wholly or partially in response to user command. An activity (including a step) performed automatically is performed in response to one or more executable instructions or device operation without user direct initiation of the activity.