


Title:
USER INTERFACE ELEMENTS TO PRODUCE AND USE SEMANTIC MARKERS
Document Type and Number:
WIPO Patent Application WO/2024/049466
Kind Code:
A1
Abstract:
A system can define semantic markers that denote or indicate where various objects can be placed, how various objects can be assembled, or where various objects can be gripped by a robot. An example user interface can indicate to users how objects and grippers are intended to be placed or positioned.

Inventors:
MCDANIEL RICHARD GARY (US)
Application Number:
PCT/US2022/075653
Publication Date:
March 07, 2024
Filing Date:
August 30, 2022
Assignee:
SIEMENS AG (DE)
SIEMENS CORP (US)
International Classes:
G06F30/12; G05B19/418; G06F111/04
Domestic Patent References:
WO2022071933A12022-04-07
Foreign References:
US20200262073A12020-08-20
US20080010041A12008-01-10
Attorney, Agent or Firm:
BRAUN, Mark E. (US)
Claims:
CLAIMS

What is claimed is:

1. A computer-implemented method, the method comprising: generating a model of an automation system comprising a plurality of work components and an autonomous device configured to interact with the plurality of work components, the model comprising data objects representative of the autonomous device and the plurality of work components; attaching a semantic marker to a first work component of the plurality of work components; and based on the semantic marker, displaying an image of at least a portion of the autonomous device, the image indicating a behavior associated with the semantic marker.

2. The method as recited in claim 1, wherein the image is independent of an operation performed by the autonomous device.

3. The method as recited in claim 1, wherein the semantic marker defines a location at which the first work component is grasped, and the image defines a gripper in a position to grasp the first work component.

4. The method as recited in claim 1, wherein the semantic marker defines a location on the first work component at which a second work component is placed, and the image defines the second work component aligned with the location.

5. The method as recited in claim 1, wherein the semantic marker defines a location at which a plurality of work components can be snapped together so as to define an assembly, the method further comprising: generating a new object that defines the assembly.

6. The method as recited in claim 5, the method further comprising: responsive to a user actuation on one of the work components of the plurality of work components, moving the assembly and attaching a new semantic marker to the assembly.

7. A computing automation system, the computing automation system comprising: a processor; and a memory storing instructions that, when executed by the processor, cause the computing system to: generate a model of an automation system comprising a plurality of work components and an autonomous device configured to interact with the plurality of work components, the model comprising data objects representative of the autonomous device and the plurality of work components; attach a semantic marker to a first work component of the plurality of work components; and based on the semantic marker, display an image of at least a portion of the autonomous device, the image indicating a behavior associated with the semantic marker.

8. The system as recited in claim 7, wherein the semantic marker defines a location at which the first work component is grasped, and the image defines a gripper in a position to grasp the first work component.

9. The system as recited in claim 7, wherein the semantic marker defines a location on the first work component at which a second work component is placed, and the image defines the second work component aligned with the location.

10. The system as recited in claim 7, wherein the semantic marker defines a location at which a plurality of work components can be snapped together so as to define an assembly, the memory further storing instructions that, when executed by the processor, cause the system to: generate a new object that defines the assembly.

11. The system as recited in claim 10, the memory further storing instructions that, when executed by the processor, cause the system to: responsive to a user actuation on one of the work components of the plurality of work components, move the assembly and attach a new semantic marker to the assembly.

12. The system as recited in claim 7, wherein the image is independent of an operation performed by the autonomous device.

Description:
USER INTERFACE ELEMENTS TO PRODUCE AND USE SEMANTIC MARKERS

BACKGROUND

[0001] Autonomous operations, such as robotic grasping and manipulation, in unknown or dynamic environments present various technical challenges. When developing an automation application, numerous machines might interact with work products to transform, assemble, and otherwise work the materials to create the final products that are transported to their next stage. It is recognized herein that computer-aided design (CAD) tools for designing these processes often focus on the design of the work product itself, or the layout of the tools used to manipulate the work product, without describing the interaction that does the work. Instead, in many cases, individual machines are programmed using traditional languages to show how they react to input events at various levels of detail. It is further recognized herein that simulation systems can run these programs to produce an animation for what the machines would do under different circumstances, but the programs do not intrinsically describe the process that the machines perform.

BRIEF SUMMARY

[0002] Embodiments of the invention address and overcome one or more of the shortcomings or technical problems described herein by providing methods, systems, and apparatuses for enhancing user interface tools for various autonomous systems. For example, a system can define semantic markers that denote or indicate where various objects can be placed, how various objects can be assembled, or where various objects can be gripped by a robot. An example user interface can indicate to users how objects and grippers are intended to be placed or positioned.

[0003] In an example aspect, an automation computing system can generate a model of an automation system that includes a plurality of work components and an autonomous device configured to interact with the plurality of work components. The model can include data objects representative of the autonomous device and the plurality of work components. The computing system can attach a semantic marker to a first work component of the plurality of work components. Based on the semantic marker, the computing system can display an image of at least a portion of the autonomous device. The image can indicate a behavior associated with the semantic marker. In various examples, the image is independent of any specific operation performed by the autonomous device, such as plasma cutting or the like.

[0004] In an example, the semantic marker defines a location at which the first work component is grasped, and the image defines a gripper in a position to grasp the first work component. In another example, the semantic marker defines a location on the first work component at which a second work component is placed, and the image defines the second work component aligned with the location. In yet another example, the semantic marker defines a location at which a plurality of work components can be snapped together so as to define an assembly. The computing system can then generate a new object that defines the assembly. For example, responsive to a user actuation on one of the work components of the plurality of work components, the system can move the assembly and attach a new semantic marker to the assembly.

BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWINGS

[0005] The foregoing and other aspects of the present invention are best understood from the following detailed description when read in connection with the accompanying drawings. For the purpose of illustrating the invention, there is shown in the drawings embodiments that are presently preferred, it being understood, however, that the invention is not limited to the specific instrumentalities disclosed. Included in the drawings are the following Figures:

[0006] FIG. 1 shows an example user interface (UI) that defines a grip marker indicating a location on a work component that can be grasped by a gripper, in accordance with an example embodiment.

[0007] FIG. 2 shows the example UI with an image representative of the gripper that can grasp the work component at the location indicated by the grip marker, wherein the image defines the gripper in a closed position, in accordance with an example embodiment.

[0008] FIG. 3 shows the example UI with the image representative of the gripper that can grasp the work component at the location indicated by the grip marker, wherein the image defines the gripper in an open position, in accordance with an example embodiment.

[0009] FIG. 4 shows another example UI that defines a placement marker indicating a location on a work component or fixture in which another work component can be placed, in accordance with an example embodiment.

[0010] FIG. 5 shows the example UI of FIG. 4 with an image representative of the work object that can be placed at the location indicated by the placement marker, in accordance with an example embodiment.

[0011] FIG. 6 shows another example UI that defines assembly markers indicating locations at which work components can be assembled together so as to define an assembly, in accordance with an example embodiment.

[0012] FIG. 7 is another view of the example UI shown in FIG. 6, defining another assembly marker.

[0013] FIG. 8 is yet another example UI that defines multiple placement markers in accordance with another example embodiment.

[0014] FIG. 9 illustrates a computing environment within which embodiments of the disclosure may be implemented.

DETAILED DESCRIPTION

[0015] By way of background, it is recognized herein that graphical overlays are typically produced by a system and act as visual-only drawings that are intended to be interpreted by the user, and not by the system. In some computer-aided design (CAD) systems, two objects can be connected together so as to form a kind of permanent weld, in which moving a first object also moves the other object that is welded to (or grouped with) the first object. Some CAD tools allow the user to indicate a connection that joins a featureless frame object to another object, which can act as a position where other operations can occur. It is recognized herein that this is a generic kind of marker upon which the system does not perform much interpretation. That is, the user is simply using a mark to record a position, and its use exists solely in the user's imagination.

[0016] It is further recognized herein that, in some systems, objects are preprogrammed to have interactions with other objects. For example, a container object might automatically instantiate the materials that it may contain. This behavior is typically produced far ahead of time by the developers of the components, and the user has no means to change it beyond the parameters the original developers provided. In some cases, the user may produce a new custom component of their own to carry out new behaviors, but doing so involves programming and is not part of the tool's ordinary user interface. In such cases, specifying a new, unique application behavior using preprogrammed components is not possible, since they can only be used in the way they were designed.

[0017] In various embodiments described herein, however, semantic markers can denote various behaviors in various automation systems or applications. For example, a computing automation system can define a low-code programming environment that can be used to specify an automation application by using 3D graphical analogs of the components that comprise the automation application, and of the work materials that the automation application manages or modifies.

[0018] Referring generally to FIGs. 1 to 3, the computing automation system can define various user interfaces (UIs), for instance a UI 100 that allows users to attach semantic markers, for instance a grip marker 102, to material, equipment, and other objects, for instance a circuit board or work component 104. The UI 100 can define a palette that renders marker objects. In an example, when a user drags a given marker object (e.g., the grip marker 102) from the palette or when the marker object is otherwise moved, the UI 100 can display the marker object as a muted indication defining light or grey lines, dashes, or dots. When the marker object is dropped, in an example, the marker attaches to a surface of the 3D component that is behind the marker from the user’s viewpoint. By way of example, when the grip marker 102 is dropped, the system attaches the marker 102 to the surface of the circuit board component 104 that is rendered as a 3D component. The surface of the circuit board component 104 to which the marker 102 attaches is the surface viewed from a perspective in front of a display rendering the UI 100. The user may adjust the position and orientation of the marker 102 using graphical handles 106 of the marker 102. Additionally, or alternatively, the marker 102 may be associated with numerical properties corresponding to its position or orientation, and a user might change the position or orientation of the marker 102 by adjusting the numerical properties. Thus, markers can be attached to other objects and therefore can move with the objects when the objects are adjusted.
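
For illustration, the following Python sketch shows one possible way to organize the data objects described above: a marker carries a pose, can be attached to the surface of a component, and can be repositioned either graphically or by editing its numerical properties. The class names (Pose, Component, SemanticMarker) and their attributes are assumptions made for the sketch and are not taken from the disclosure.

```python
# Hypothetical sketch of the marker/component data model described above.
from dataclasses import dataclass, field


@dataclass
class Pose:
    position: tuple = (0.0, 0.0, 0.0)      # x, y, z in scene coordinates
    orientation: tuple = (0.0, 0.0, 0.0)   # roll, pitch, yaw in degrees


@dataclass
class Component:
    name: str
    pose: Pose = field(default_factory=Pose)
    attached: list = field(default_factory=list)  # markers and sub-components


@dataclass
class SemanticMarker:
    kind: str                  # e.g. "grip", "place", "assemble"
    pose: Pose = field(default_factory=Pose)
    parent: Component = None   # None means the marker floats in free space

    def attach_to(self, component: Component, surface_pose: Pose) -> None:
        """Attach the marker to the surface point where it was dropped."""
        self.parent = component
        self.pose = surface_pose
        component.attached.append(self)


# Dropping a grip marker onto the circuit board attaches it to that surface;
# editing the marker's numerical pose properties afterwards repositions it.
board = Component("circuit_board_104")
grip = SemanticMarker(kind="grip")
grip.attach_to(board, Pose(position=(0.05, 0.02, 0.0)))
grip.pose.orientation = (0.0, 0.0, 90.0)   # adjusted via numerical properties
```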

[0019] In an example, a marker that is dropped in an empty area having no objects nearby can become a mark in free space without being attached to another object. Free space markers can define various indications. For example, a grip marker in space may represent a pose for a robot associated with the grip marker. By way of example, in some cases, the user might park or drop an object (e.g., a marker) in space when the component to which the object will become attached is not yet ready, so that the user can attach the component to the object when, for example, further editing has been performed on the component.

[0020] Various UI operations (e.g., copy, paste, delete, etc.) can be affected by attaching an object (e.g., a marker) to a given component. By way of example, when a user selects a component to which one or more objects are attached, choosing to copy, cut, or delete that component (or the one or more objects) similarly affects the recursively connected objects as well. By way of further example, if a robot is deleted, a gripper attached to that robot is also deleted, and any markers attached to either the robot or the gripper are deleted. In various examples, the attached object may or may not be selected manually, and the attachment process is recursive, so any component attached to an attached object that is deleted is also deleted. Likewise, when copying objects, the full hierarchy of attached objects can be copied as well.
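
A minimal sketch of this recursive behavior, reusing the hypothetical Component/SemanticMarker model from the earlier sketch: deleting an object also removes everything attached to it (for example, a robot's gripper and any markers attached to either one), and copying duplicates the full attachment hierarchy.

```python
# Illustrative sketch of recursive delete/copy over the attachment hierarchy.
import copy


def delete_recursively(obj, scene: list) -> None:
    """Remove obj and everything attached to it, directly or indirectly."""
    for child in list(getattr(obj, "attached", [])):
        delete_recursively(child, scene)
    if obj in scene:
        scene.remove(obj)


def copy_recursively(obj):
    """Copy obj together with its full hierarchy of attached objects."""
    return copy.deepcopy(obj)
```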

[0021] Markers can define various types that designate corresponding behavior in the design application or system. For example, and without limitation, marker types may include grip markers, place markers, and assemble markers. In some cases, other marker types can be added to the automation system, for example, via extension libraries and user customization. Similar to other components, markers may have a different appearance when they are selected. For example, editing features can be indicated graphically when a marker is selected, and removed from the view or UI when other objects are being edited, to avoid clutter.
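
One possible (assumed) mechanism for contributing additional marker types via extension libraries is a simple registry, sketched below; the decorator and class names are illustrative only and are not described in the disclosure.

```python
# Hypothetical registry that extension libraries could use to add marker types.
MARKER_TYPES = {}


def register_marker_type(kind: str):
    """Class decorator that records a behavior class under a marker type name."""
    def wrapper(cls):
        MARKER_TYPES[kind] = cls
        return cls
    return wrapper


@register_marker_type("grip")
class GripMarkerBehavior:
    def preview(self, marker):
        """Draw the associated gripper image at the marker's pose."""
        ...


@register_marker_type("place")
class PlaceMarkerBehavior:
    def preview(self, marker):
        """Draw the material object to be placed at the marker's pose."""
        ...


# An extension library could register e.g. a "weld" marker type the same way.
```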

[0022] Grip markers, for instance the grip marker 102, can indicate where and how a material is intended to be held by equipment that can grasp other objects. By way of example, in robotics a gripper is a typical end-effector for a robot, but grip markers can generally be applied to any equipment that can grasp an object. In an example, the grip marker 102 can be attached to the material to be grasped (e.g., circuit board component 104) by dragging the grip marker 102 over the surface of the material. Alternatively, or additionally, a grip marker may be intentionally placed in space without attaching to a material. In such cases, the gripper location may be considered fixed in that pose and might be used, for example, as a hand-off position between robots or a waiting spot toward which the robot can move to prepare for a part in transit. A given material object may have more than one grip marker attached to it, so as to indicate different ways the material may be held. For example, multiple grip markers may be attached to a component to indicate how the component is held in different phases or aspects of a particular automation application process.

[0023] An object associated with a given grip marker may define a property to indicate what kind of gripper performs the desired grasping. The property may indicate a particular gripper in the application, or it may indicate a class of grippers that may be used. A class of grippers may include various grippers that perform grasping in the same manner as one another (e.g., suction-based or finger grippers), such that identical orientations and offsets may be applied. Once the kind or type of gripper is known, a 3D image of the gripper, for instance a gripper image 108, can be displayed by the system to indicate where in the environment the gripper is currently set. The image (e.g., gripper image 108) can indicate a behavior associated with the semantic marker (e.g., marker 102). In various examples, the image is independent of any specific operation performed by the autonomous device, such as plasma cutting or the like. In particular, FIGs. 1 to 3 illustrate how the grip marker 102 is first created (FIG. 1) and then adjusted (FIGs. 2 and 3). The gripper image 108 associated with the grip marker 102 can be rendered by the system. The gripper image 108 can represent the physical gripper associated with the grip marker 102. In an example, the gripper image is shown, for instance only shown, when the associated grip marker 102 is selected as a single object, so as to prevent screen clutter.
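
The gripper property described above could, for example, be resolved as sketched below; the Gripper class, the grasp_class attribute, and the matching rule are assumptions used only to illustrate matching either a specific gripper or a class of grippers that grasp in the same manner.

```python
# Hypothetical resolution of a grip marker's "gripper or gripper class" property.
from dataclasses import dataclass


@dataclass
class Gripper:
    name: str
    grasp_class: str            # e.g. "parallel_finger", "suction"


def compatible_grippers(marker_gripper_spec, grippers):
    """Return grippers satisfying the marker's gripper property.

    The spec is assumed to be either a specific gripper name or a class name,
    so the same orientation and offset apply to any match.
    """
    return [g for g in grippers
            if g.name == marker_gripper_spec or g.grasp_class == marker_gripper_spec]


cell = [Gripper("robot1_fingers", "parallel_finger"),
        Gripper("robot2_suction", "suction")]
print(compatible_grippers("parallel_finger", cell))   # any finger gripper matches
```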

[0024] In various examples, the gripper component represented by the gripper image 108 is associated with code for 3D models that can generate the image 108 of the gripper. By way of example, a given component can be associated with a specification that stipulates how to assemble models representative of the given component. The system or UI 100 may also include a component interface to create a model for the particular gripper that is represented by the grip marker 102. For example, the interface may be parameterized so that the gripper image 108 associated with the marker 102 can represent different property states. In particular, for example, the gripper image 108 associated with the marker 102 can be open so as to define an open gripper image 108a, or can be closed so as to define a closed gripper image 108b. The grip marker 102 can employ such an interface so as to adjust the property states (e.g., between open and closed) as the gripper image 108 approaches the component 104 and grasps the component 104.

[0025] With continuing reference to the examples illustrated in FIGs. 2 and 3, a user can manipulate the image 108 of the gripper to show the correct alignment of the gripper to the material being grasped (e.g., circuit board component 104). The gripper images 108 can define a first or transitory gripper image 110 and a second or destination gripper image 112. Referring to FIGs. 2 and 3, the destination gripper image 112 is shown grasping the material (component 104), and the transitory gripper image 110 illustrates a position of the gripper that is spaced from the material along the direction that the gripper approaches the material for grasping. In some examples, a user can, via the UI 100, manipulate the position and orientation of the gripper image 108 to place it precisely where it needs to go. For example, the gripper image 108 can further define graphical user interface (GUI) elements 112 that the user can manipulate to change the position of an object, for instance the gripper image 108. The image that is away from the material, for instance the transitory gripper image 110, may also be manipulated to show which direction the gripper moves to approach the material. Additionally, or alternatively, the user may also adjust parameters of the gripper in the property editor, in which case the gripper image 108 may adjust or move responsive to the adjusted properties. In some examples, the user can change how far open the gripper is for grasping or, for grippers that open and close, the positions defined by the transitory gripper image 110 (e.g., the distance the transitory image is from the material).
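
As an illustration of the destination and transitory images, the sketch below derives the transitory pose by offsetting the grasp position back along an assumed approach vector and exposes an open-fraction parameter for grippers that open and close; all names and default values are assumptions for the sketch.

```python
# Hypothetical parameters behind the destination and transitory gripper images.
from dataclasses import dataclass


@dataclass
class GripperPreview:
    position: tuple                        # grasp position from the grip marker
    approach: tuple = (0.0, 0.0, 1.0)      # unit vector the gripper moves along
    approach_distance: float = 0.10        # metres the transitory image sits back
    open_fraction: float = 1.0             # 1.0 = fully open, 0.0 = closed

    def destination_pose(self):
        """Pose of the gripper while grasping at the marker."""
        return self.position

    def transitory_pose(self):
        """Pose of the gripper before it descends toward the material."""
        x, y, z = self.position
        ax, ay, az = self.approach
        d = self.approach_distance
        return (x - ax * d, y - ay * d, z - az * d)


preview = GripperPreview(position=(0.05, 0.02, 0.0))
print(preview.transitory_pose())   # where the open gripper hovers before approach
preview.open_fraction = 0.0        # closed state once the part is grasped
```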

[0026] Additionally, or alternatively, the grip marker 102 can estimate and display how a robot or similar device with a gripper will reach a given object that it can grasp. For example, a user may select the grip marker 102 and query the grip marker 102 as to how robots in the given environment operate. Responsive to the query, the UI 100 can render an interface that illustrates how robots in the vicinity of the marker 102 extend to place their respective grippers in the location defined by the grip marker 102. If a given robot is out of range or otherwise cannot grasp the object (e.g., the configuration of joints does not permit the grasp because portions of the robot would collide with the environment or itself), the UI 100 can display information that indicates why the given robot cannot perform the grasp at the location of the grip marker 102. In some examples, the user can move the component (circuit board or work component 104) attached to the grip marker 102, or move the grip marker 102 itself, so that the UI 100 displays how the robot can change configurations or positions so as to reach new grasping positions indicated by the grip marker 102.
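
A highly simplified sketch of such a reachability query is shown below; a real system would run inverse kinematics and collision checks, whereas here an assumed maximum reach radius stands in for that analysis, and the returned text stands in for the explanation of why a robot cannot grasp at the marker.

```python
# Simplified, assumption-based stand-in for a robot reachability query.
import math


def reachability_report(robots, marker_position):
    """robots: iterable of (name, base_position, max_reach) tuples (assumed)."""
    report = {}
    for name, base, max_reach in robots:
        dist = math.dist(base, marker_position)
        if dist <= max_reach:
            report[name] = "can reach the grip marker"
        else:
            report[name] = f"out of range by {dist - max_reach:.2f} m"
    return report


robots = [("robot_a", (0.0, 0.0, 0.0), 0.85),
          ("robot_b", (2.0, 0.0, 0.0), 0.85)]
print(reachability_report(robots, (0.5, 0.3, 0.2)))
```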

[0027] Referring also to FIGs. 4 and 5, the computing automation system can define various user interfaces (UIs), for instance a UI 200 that allows users to attach placement markers, for instance a placement marker 202, to various fixtures, for instance a work component or fixture 204. The placement marker 202 can illustrate the position at which a work material object, for instance the work component 104, may be placed within the automation or design application or system. Placement markers can be used to represent a position in space or a position relative to another object, such as a carrying device for example. The placement marker 202 can differ from the grip marker 102 in that the object (e.g., component 206) located by the placement marker 202 is not attached to any automation equipment. By way of example, a robot can use its gripper to grasp a material object (e.g., work component 104) at the position indicated by the grip marker 102, and then relocate the material to the position indicated by the placement marker 202.
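
The grasp-then-relocate behavior can be illustrated as follows; the Robot class and its methods are assumptions standing in for whatever equipment interface the automation system exposes, and the poses are placeholder values.

```python
# Assumed robot interface illustrating: grasp at the grip marker's pose,
# then relocate the material to the placement marker's pose.
class Robot:
    def move_gripper_to(self, pose):
        print("moving gripper to", pose)

    def close_gripper(self):
        print("closing gripper")

    def open_gripper(self):
        print("opening gripper (material is left at the placement marker)")


def pick_and_place(robot, grip_pose, placement_pose):
    robot.move_gripper_to(grip_pose)        # pose taken from the grip marker
    robot.close_gripper()
    robot.move_gripper_to(placement_pose)   # pose taken from the placement marker
    robot.open_gripper()                    # placed material is not attached to equipment


pick_and_place(Robot(), (0.05, 0.02, 0.0), (0.40, 0.10, 0.05))
```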

[0028] In some cases, the computing automation system can determine whether a particular component defines a work material or another object. Alternatively, or additionally, such information may be encoded as part of the component object's type, or the user may designate a component to be a work material through the editor of the UI, or the like. The placement marker 202 can allow the user to designate which kind of material the marker 202 is being used to place. The marker 202 can be associated with 3D graphical images 206 of the material, for instance the component 104 as the graphical image 206, in the same way that the grip marker 102 can show the gripper represented by the gripper image 108. In some examples, a material object might not carry an indication that it is intended to be used as a material, such that the UI 200 can render a default 3D image for it. Alternatively, in some cases, a custom interface may be employed for a given material that defines, for example, a parameterized appearance. Furthermore, the location where the material is being placed might be associated with a particular state.

[0029] In various examples, the user may manipulate the image of the material (e.g., image of component 206) to align it precisely with the intended position indicated by the placement marker 202. For example, the user may manipulate both the position and orientation using the graphical interface or through direct editing with the property editor. In an example, referring in particular to FIG. 5, the placement marker 202 can be associated with multiple images 206 of the component 104, for instance a final position image 208a and a transitory position image 208b. The transitory position image 208b can illustrate the direction along which the material (component 104) travels to be placed in the final position indicated by the placement marker 202 or, alternatively, the direction in which it could be removed. The final position image 208a can illustrate the position of the component 104 when the component 104 is placed in the position indicated by the placement marker 202. The transitory position image 208b can be manipulated by the user, for instance graphically via the UI 200 or by adjusting properties related to the final position image 208a.

[0030] In another example, the placement marker can be associated with multiple distinct transitory or directional markers. In particular, for example, a distinct transitory marker can be placed at each step along a path defining interactions that a material follows to reach its final destination, or to be removed from its current location. A separate marker, or properties associated with the placement marker, can then be used to indicate how the path is used. By way of example, the material might be transported by being held in a firm grip, or the material might reach its position by being dropped or pushed. By having separate markers, the path can direct the automated device to employ multiple steps or interactions with the material.
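
The multi-step placement path could, for instance, be represented as a list of waypoints, each tagged with how the material is moved along that leg (held in a firm grip, dropped, or pushed); the encoding below is an assumption made for illustration.

```python
# Hypothetical encoding of a placement path built from transitory markers.
from dataclasses import dataclass


@dataclass
class PathStep:
    pose: tuple
    interaction: str   # assumed values: "grip", "drop", "push"


placement_path = [
    PathStep(pose=(0.40, 0.10, 0.20), interaction="grip"),   # carried in a firm grip
    PathStep(pose=(0.40, 0.10, 0.05), interaction="drop"),   # released above the slot
    PathStep(pose=(0.40, 0.10, 0.00), interaction="push"),   # pushed into final position
]

for step in placement_path:
    print(step.interaction, "->", step.pose)
```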

[0031] Referring also to FIGs. 6 and 7, the computing automation system can define various user interfaces (UIs), for instance a UI 300 that allows users to attach markers, for instance an assemble marker 302, to material, equipment, and other objects, for instance a circuit board housing 304. In various examples, the assemble marker 302 shows users how a work material is placed relative to another object. For example, the assemble marker 302 can be positioned on one of the work material objects that are part of an assembly of multiple work material objects.

[0032] In an example, a user can attach the assemble marker 302 to a base material, for instance the circuit board housing 304, and can select which material (e.g., the circuit board 104) is assembled with the base material. Thus, the UI 300 indicates where parts are attached to base materials, so as to define various assemblies. A user can interact with the graphical UI 300 or adjust properties associated with the circuit board housing 304 or the image of the circuit board 104, so as to adjust a position of the objects or parts (e.g., housing 304 and board 104) relative to each other. In an example, if the parts (e.g., housing 304 or board 104) are moved simultaneously, such as in two-handed robot interaction, then either part may be considered the base material. Alternatively, in some cases, the part that is stationary defines the base material, and the other part or parts that are moved into place define the assembled part. Assemble markers may also be combined with one or more placement markers for various complex assembly operations, such as for moving a part along a path while grasping the part, dropping a part at a particular location, or pushing the part along a path without grasping the part. By way of further example, a first assemble marker 302a can indicate how the circuit board 104 is added to the housing 304 (see FIG. 6), and a second assemble marker 302b can indicate how a cover 308 is attached to the housing 304 (see FIG. 7).

[0033] The markers described herein can indicate various semantics for automation applications. In various examples, the markers define user interface functions that can be specifically tailored to application editing and that would not otherwise be available. For example, the markers can indicate location information to provide snapping and attachment functions for the user interface. For example, referring to FIG. 8, if a given user assigns multiple placement markers 402 to an object used for carrying material (e.g., a tray), user interfaces, for instance a UI 400, can enable the user to put instances of the material objects into the tray via convenient dragging and snapping interactions. Material objects can also include grip markers 406. In various examples, the placement markers 402 define respective drag areas 408 indicated by circles, though it will be understood that the drag areas 408 can be alternatively shaped as desired, and all such drag areas are contemplated as being within the scope of this disclosure. For example, material objects 404 can be moved precisely from one position to another, for instance from one tray to another location, by dragging the objects 404 into the drag area 408 indicated by the respective placement marker. Furthermore, a material snapped in this way can become attached to that object in the same way a gripper becomes attached to a robot or the robot to a table. Thus, objects that are placed or contained in a given holder (e.g., tray 410) can be moved by moving the holder 410 that supports or contains the objects, without having to individually select and move each object.
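
The snapping interaction can be sketched as follows, reusing the hypothetical Pose/Component model from the earlier sketch and an assumed circular drag area: releasing a dragged material object inside a placement marker's drag area snaps it to the marker's pose and attaches it to the holder, so moving the tray moves its contents.

```python
# Hypothetical snap-on-release logic for placement markers with circular drag areas.
import math


def try_snap(material, placement_markers, tray, drag_radius=0.03):
    """Snap the dragged material to the first marker whose drag area contains it."""
    for marker in placement_markers:
        if math.dist(material.pose.position, marker.pose.position) <= drag_radius:
            material.pose = marker.pose          # snap precisely onto the marker
            tray.attached.append(material)       # material now moves with the tray
            return True
    return False                                 # released outside every drag area
```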

[0034] As described herein, the assemble marker 302 can also indicate position, such that a user can drag parts to snap together an assembly of parts based on the positions identified by the assemble markers 302 on the various parts. For example, when the base object is dragged, parts assembled to the base object can stay attached to the base object, such that the entire assembly moves and can be edited as one object or unit. In an example, the user interfaces can define an interface for denoting an assembled group of parts or objects as a new object. Consequently, in some cases, new placement and assemble markers can be generated for a new assembled part as a single entity, such that further grip and assemble markers can be added to the single entity.
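
Promoting an assembled group of parts to a single new object might be modeled as sketched below; the Assembly class and its methods are assumptions illustrating that the grouped parts move as one unit (reusing the earlier Component sketch) and that further grip, place, and assemble markers can then be attached to the assembly as one entity.

```python
# Hypothetical promotion of an assembled group of parts to a single new object.
from dataclasses import dataclass, field


@dataclass
class Assembly:
    name: str
    parts: list = field(default_factory=list)      # Component instances in the group
    attached: list = field(default_factory=list)   # new markers added to the assembly

    def move_by(self, dx, dy, dz):
        """Move every part together, so the assembly edits as one unit."""
        for part in self.parts:
            x, y, z = part.pose.position
            part.pose.position = (x + dx, y + dy, z + dz)


def group_as_assembly(name, parts):
    """Create the new object that represents the assembled group."""
    return Assembly(name=name, parts=list(parts))


# e.g. group_as_assembly("board_in_housing", [housing_304, board_104, cover_308])
```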

[0035] Without being bound by theory, markers described herein can indicate or represent relationships between application objects in an automation application or system, such that relationships and interactions can be visualized by users. As described herein, in accordance with various embodiments, responsive to user actuations, objects can be automatically grouped. Additionally, images of the positions of objects are displayed to allow users to visualize various grasps, placements, or assemblies, thereby allowing users to adjust various positions. For placement and assembly, the ability to automatically group objects that are not strictly attached to one another is beneficial. By way of example, a kit of parts to be assembled may be assigned to slots in a carrier. Since the type and position of each kit part is known, instances of those parts may be snapped into the kit by the user quickly. Further, the parts may be removed by dragging each away. When the carrier is moved, however, the parts in the carrier stay with it. Thus, the user does not need to perform selection on the individual parts and the whole stays consistent.

[0036] FIG. 9 illustrates an example of a computing environment that can include the simulation system within which embodiments of the present disclosure may be implemented. A computing environment 900 includes a computer system 910 that may include a communication mechanism such as a system bus 921 or other communication mechanism for communicating information within the computer system 910. The computer system 910 further includes one or more processors 920 coupled with the system bus 921 for processing the information.

[0037] The processors 920 may include one or more central processing units (CPUs), graphical processing units (GPUs), or any other processor known in the art. More generally, a processor as described herein is a device for executing machine-readable instructions stored on a computer readable medium, for performing tasks and may comprise any one or combination of, hardware and firmware. A processor may also comprise memory storing machine-readable instructions executable for performing tasks. A processor acts upon information by manipulating, analyzing, modifying, converting or transmitting information for use by an executable procedure or an information device, and/or by routing the information to an output device. A processor may use or comprise the capabilities of a computer, controller or microprocessor, for example, and be conditioned using executable instructions to perform special purpose functions not performed by a general purpose computer. A processor may include any type of suitable processing unit including, but not limited to, a central processing unit, a microprocessor, a Reduced Instruction Set Computer (RISC) microprocessor, a Complex Instruction Set Computer (CISC) microprocessor, a microcontroller, an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA), a System-on-a-Chip (SoC), a digital signal processor (DSP), and so forth. Further, the processor(s) 920 may have any suitable micro architecture design that includes any number of constituent components such as, for example, registers, multiplexers, arithmetic logic units, cache controllers for controlling read/write operations to cache memory, branch predictors, or the like. The micro architecture design of the processor may be capable of supporting any of a variety of instruction sets. A processor may be coupled (electrically and/or as comprising executable components) with any other processor enabling interaction and/or communication there-between. A user interface processor or generator is a known element comprising electronic circuitry or software or a combination of both for generating display images or portions thereof. A user interface comprises one or more display images enabling user interaction with a processor or other device.

[0038] The system bus 921 may include at least one of a system bus, a memory bus, an address bus, or a message bus, and may permit exchange of information (e.g., data (including computer-executable code), signaling, etc.) between various components of the computer system 910. The system bus 921 may include, without limitation, a memory bus or a memory controller, a peripheral bus, an accelerated graphics port, and so forth. The system bus 921 may be associated with any suitable bus architecture including, without limitation, an Industry Standard Architecture (ISA), a Micro Channel Architecture (MCA), an Enhanced ISA (EISA), a Video Electronics Standards Association (VESA) architecture, an Accelerated Graphics Port (AGP) architecture, a Peripheral Component Interconnects (PCI) architecture, a PCI-Express architecture, a Personal Computer Memory Card International Association (PCMCIA) architecture, a Universal Serial Bus (USB) architecture, and so forth.

[0039] Continuing with reference to FIG. 9, the computer system 910 may also include a system memory 930 coupled to the system bus 921 for storing information and instructions to be executed by processors 920. The system memory 930 may include computer readable storage media in the form of volatile and/or nonvolatile memory, such as read only memory (ROM) 931 and/or random access memory (RAM) 932. The RAM 932 may include other dynamic storage device(s) (e.g., dynamic RAM, static RAM, and synchronous DRAM). The ROM 931 may include other static storage device(s) (e.g., programmable ROM, erasable PROM, and electrically erasable PROM). In addition, the system memory 930 may be used for storing temporary variables or other intermediate information during the execution of instructions by the processors 920. A basic input/output system 933 (BIOS) containing the basic routines that help to transfer information between elements within computer system 910, such as during start-up, may be stored in the ROM 931. RAM 932 may contain data and/or program modules that are immediately accessible to and/or presently being operated on by the processors 920. System memory 930 may additionally include, for example, operating system 934, application programs 935, and other program modules 936. Application programs 935 may also include a user portal for development of the application program, allowing input parameters to be entered and modified as necessary.

[0040] The operating system 934 may be loaded into the memory 930 and may provide an interface between other application software executing on the computer system 910 and hardware resources of the computer system 910. More specifically, the operating system 934 may include a set of computer-executable instructions for managing hardware resources of the computer system 910 and for providing common services to other application programs (e.g., managing memory allocation among various application programs). In certain example embodiments, the operating system 934 may control execution of one or more of the program modules depicted as being stored in the data storage 940. The operating system 934 may include any operating system now known or which may be developed in the future including, but not limited to, any server operating system, any mainframe operating system, or any other proprietary or non-proprietary operating system.

[0041] The computer system 910 may also include a disk/media controller 943 coupled to the system bus 921 to control one or more storage devices for storing information and instructions, such as a magnetic hard disk 941 and/or a removable media drive 942 (e.g., floppy disk drive, compact disc drive, tape drive, flash drive, and/or solid state drive). Storage devices 940 may be added to the computer system 910 using an appropriate device interface (e.g., a small computer system interface (SCSI), integrated device electronics (IDE), Universal Serial Bus (USB), or FireWire). Storage devices 941, 942 may be external to the computer system 910.

[0042] The computer system 910 may perform a portion or all of the processing steps of embodiments of the invention in response to the processors 920 executing one or more sequences of one or more instructions contained in a memory, such as the system memory 930. Such instructions may be read into the system memory 930 from another computer readable medium of storage 940, such as the magnetic hard disk 941 or the removable media drive 942. The magnetic hard disk 941 (or solid state drive) and/or removable media drive 942 may contain one or more data stores and data files used by embodiments of the present disclosure. The data store 940 may include, but is not limited to, databases (e.g., relational, object-oriented, etc.), file systems, flat files, distributed data stores in which data is stored on more than one node of a computer network, peer-to-peer network data stores, or the like. The data stores may store various types of data such as, for example, skill data, sensor data, or any other data generated in accordance with the embodiments of the disclosure. Data store contents and data files may be encrypted to improve security. The processors 920 may also be employed in a multi-processing arrangement to execute the one or more sequences of instructions contained in system memory 930. In alternative embodiments, hard-wired circuitry may be used in place of or in combination with software instructions. Thus, embodiments are not limited to any specific combination of hardware circuitry and software.

[0043] As stated above, the computer system 910 may include at least one computer readable medium or memory for holding instructions programmed according to embodiments of the invention and for containing data structures, tables, records, or other data described herein. The term “computer readable medium” as used herein refers to any medium that participates in providing instructions to the processors 920 for execution. A computer readable medium may take many forms including, but not limited to, non-transitory, non-volatile media, volatile media, and transmission media. Non-limiting examples of non-volatile media include optical disks, solid state drives, magnetic disks, and magneto-optical disks, such as magnetic hard disk 941 or removable media drive 942. Non-limiting examples of volatile media include dynamic memory, such as system memory 930. Non-limiting examples of transmission media include coaxial cables, copper wire, and fiber optics, including the wires that make up the system bus 921. Transmission media may also take the form of acoustic or light waves, such as those generated during radio wave and infrared data communications.

[0044] Computer readable medium instructions for carrying out operations of the present disclosure may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++ or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present disclosure.

[0045] Aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, may be implemented by computer readable medium instructions.

[0046] The computing environment 900 may further include the computer system 910 operating in a networked environment using logical connections to one or more remote computers, such as remote computing device 980. The network interface 970 may enable communication, for example, with other remote devices 980 or systems and/or the storage devices 941, 942 via the network 971. Remote computing device 980 may be a personal computer (laptop or desktop), a mobile device, a server, a router, a network PC, a peer device or other common network node, and typically includes many or all of the elements described above relative to computer system 910. When used in a networking environment, computer system 910 may include modem 972 for establishing communications over a network 971, such as the Internet. Modem 972 may be connected to system bus 921 via user network interface 970, or via another appropriate mechanism.

[0047] Network 971 may be any network or system generally known in the art, including the Internet, an intranet, a local area network (LAN), a wide area network (WAN), a metropolitan area network (MAN), a direct connection or series of connections, a cellular telephone network, or any other network or medium capable of facilitating communication between computer system 910 and other computers (e.g., remote computing device 980). The network 971 may be wired, wireless, or a combination thereof. Wired connections may be implemented using Ethernet, Universal Serial Bus (USB), RJ-6, or any other wired connection generally known in the art. Wireless connections may be implemented using Wi-Fi, WiMAX, Bluetooth, infrared, cellular networks, satellite, or any other wireless connection methodology generally known in the art. Additionally, several networks may work alone or in communication with each other to facilitate communication in the network 971.

[0048] It should be appreciated that the program modules, applications, computer-executable instructions, code, or the like depicted in FIG. 9 as being stored in the system memory 930 are merely illustrative and not exhaustive and that processing described as being supported by any particular module may alternatively be distributed across multiple modules or performed by a different module. In addition, various program module(s), script(s), plug-in(s), Application Programming Interface(s) (API(s)), or any other suitable computer-executable code hosted locally on the computer system 910, the remote device 980, and/or hosted on other computing device(s) accessible via one or more of the network(s) 971, may be provided to support functionality provided by the program modules, applications, or computer-executable code depicted in FIG. 9 and/or additional or alternate functionality. Further, functionality may be modularized differently such that processing described as being supported collectively by the collection of program modules depicted in FIG. 9 may be performed by a fewer or greater number of modules, or functionality described as being supported by any particular module may be supported, at least in part, by another module. In addition, program modules that support the functionality described herein may form part of one or more applications executable across any number of systems or devices in accordance with any suitable computing model such as, for example, a client-server model, a peer-to-peer model, and so forth. In addition, any of the functionality described as being supported by any of the program modules depicted in FIG. 9 may be implemented, at least partially, in hardware and/or firmware across any number of devices.

[0049] It should further be appreciated that the computer system 910 may include alternate and/or additional hardware, software, or firmware components beyond those described or depicted without departing from the scope of the disclosure. More particularly, it should be appreciated that software, firmware, or hardware components depicted as forming part of the computer system 910 are merely illustrative and that some components may not be present or additional components may be provided in various embodiments. While various illustrative program modules have been depicted and described as software modules stored in system memory 930, it should be appreciated that functionality described as being supported by the program modules may be enabled by any combination of hardware, software, and/or firmware. It should further be appreciated that each of the above-mentioned modules may, in various embodiments, represent a logical partitioning of supported functionality. This logical partitioning is depicted for ease of explanation of the functionality and may not be representative of the structure of software, hardware, and/or firmware for implementing the functionality. Accordingly, it should be appreciated that functionality described as being provided by a particular module may, in various embodiments, be provided at least in part by one or more other modules. Further, one or more depicted modules may not be present in certain embodiments, while in other embodiments, additional modules not depicted may be present and may support at least a portion of the described functionality and/or additional functionality. Moreover, while certain modules may be depicted and described as sub-modules of another module, in certain embodiments, such modules may be provided as independent modules or as sub-modules of other modules.

[0050] Although specific embodiments of the disclosure have been described, one of ordinary skill in the art will recognize that numerous other modifications and alternative embodiments are within the scope of the disclosure. For example, any of the functionality and/or processing capabilities described with respect to a particular device or component may be performed by any other device or component. Further, while various illustrative implementations and architectures have been described in accordance with embodiments of the disclosure, one of ordinary skill in the art will appreciate that numerous other modifications to the illustrative implementations and architectures described herein are also within the scope of this disclosure. In addition, it should be appreciated that any operation, element, component, data, or the like described herein as being based on another operation, element, component, data, or the like can be additionally based on one or more other operations, elements, components, data, or the like. Accordingly, the phrase “based on,” or variants thereof, should be interpreted as “based at least in part on.”

[0051] Although embodiments have been described in language specific to structural features and/or methodological acts, it is to be understood that the disclosure is not necessarily limited to the specific features or acts described. Rather, the specific features and acts are disclosed as illustrative forms of implementing the embodiments. Conditional language, such as, among others, “can,” “could,” “might,” or “may,” unless specifically stated otherwise, or otherwise understood within the context as used, is generally intended to convey that certain embodiments could include, while other embodiments do not include, certain features, elements, and/or steps. Thus, such conditional language is not generally intended to imply that features, elements, and/or steps are in any way required for one or more embodiments or that one or more embodiments necessarily include logic for deciding, with or without user input or prompting, whether these features, elements, and/or steps are included or are to be performed in any particular embodiment.