Title:
METHOD FOR CORRECTING GRAMMATICAL ERRORS OF AN INPUT SENTENCE
Document Type and Number:
WIPO Patent Application WO/2013/191662
Kind Code:
A1
Abstract:
According to one aspect, there is provided a method for correcting grammatical errors of an input sentence, the method comprising: receiving the input sentence as a current hypothesis; generating a plurality of new hypotheses from the current hypothesis, each of the new hypotheses being a new sentence originating from the input sentence, with a portion of the input sentence being changed; analysing each of the new hypotheses to compute a score for each of the plurality of new hypotheses; comparing the scores of the plurality of new hypotheses; and generating an output sentence from the new hypotheses with the highest score.

Inventors:
DAHLMEIER DANIEL HERMANN RICHARD (SG)
NG HWEE TOU (SG)
Application Number:
PCT/SG2013/000261
Publication Date:
December 27, 2013
Filing Date:
June 24, 2013
Assignee:
UNIV SINGAPORE (SG)
International Classes:
G06F17/27
Foreign References:
US20090192787A12009-07-30
US7013262B22006-03-14
US6012075A2000-01-04
US6085206A2000-07-04
US7349840B22008-03-25
US20110313757A12011-12-22
Attorney, Agent or Firm:
SPRUSON & FERGUSON (ASIA) PTE LTD (1 Singapore, SG)
Claims:
CLAIMS

1. A method for correcting grammatical errors of an input sentence, the method comprising:

receiving the input sentence as a current hypothesis;

generating a plurality of new hypotheses from the current hypothesis, each of the new hypotheses being a new sentence originating from the input sentence, with a portion of the input sentence being changed;

analysing each of the new hypotheses to compute a score for each of the plurality of new hypotheses;

comparing the scores of the plurality of new hypotheses; and

generating an output sentence from the new hypotheses with the highest score.

2. The method of claim 1, wherein the portion of the input sentence being changed to generate each new hypothesis comprises an incremental change to the input sentence.

3. The method of claim 2, wherein the incremental change corresponds to a correction of a word or phrase in the input sentence.

4. The method of claim 3, wherein a noun phrase is corrected.

5. The method of claim 3 or 4, wherein a prepositional phrase is corrected.

6. The method of claims 3 to 5, wherein the singular or plural form of a noun is corrected.

7. The method of claims 3 to 6, wherein a verb is corrected.

8. The method of claims 3 to 7, wherein the spelling of a word is corrected.

9. The method of claims 2 to 8, wherein one or more punctuation marks in the input sentence are corrected.

10. The method of any one of the preceding claims, wherein the input sentence is an original sentence or the input sentence is a sentence generated from a previous hypothesis.

11. The method of any one of the preceding claims, wherein the score of each of the plurality of new hypotheses is based on a probability of the corresponding new sentence.

12. The method of claim 11, wherein the probability of the corresponding new sentence is determined from the product of a probability of each word in the corresponding new sentence, wherein the probability of each word is determined by its preceding words.

13. The method of claim 11 or 12, wherein an N-gram language model is used to calculate the probability of the corresponding new sentence.

14. The method of claims 1 to 10, wherein the score of each of the plurality of new hypotheses is based on a confidence level provided by one or more classifiers that each analyses a grammatical aspect of each corresponding new sentence.

15. The method of claim 14, wherein the confidence level provided by each of the one or more classifiers is derived from a weight assigned to each approximately matching portion of the respective new sentence found in each of the one or more classifiers to which a grammatical aspect of the respective new sentence is mapped.

16. The method of claim 15, wherein the weight assigned to each approximately matching portion of the respective new sentence found in each of the one or more classifiers depends on the syntax -and/or semantics of the new sentence for each of the plurality of new hypotheses.

17. The method of claim 16, wherein each of the weights in each of the one or more classifiers is obtained from training each of the one or more classifiers on a set of training sentences and one or more correct syntaxes and/or semantics for each of the training sentences.

18. The method of claim 17, wherein the set of training sentences and the one or more correct syntaxes and/or semantics for each of the training sentences is used to train the one or more classifiers to map the grammatical aspect of the respective new sentence to the appropriate one or more classifiers.

19. The method of claims 14 to 18, wherein the one or more classifiers comprises a classifier configured to analyse an article for each noun phrase in each of the plurality of new hypotheses.

20. The method of claims 14 to 19, wherein the one or more classifiers comprises a classifier configured to analyse a preposition for each prepositional phrase in each of the plurality of new hypotheses.

21. The method of claims 14 to 20, wherein the one or more classifiers comprises a classifier configured to analyse whether a noun in each of the plurality of new hypotheses is in singular or plural form.

22. The method of claims 14 to 21, wherein the one or more classifiers comprises a classifier configured to analyse a morphological form for a verb in each of the plurality of new hypotheses.

23. The method of claims 14 to 22, wherein computing the score for each of the plurality of new hypotheses comprises linearly combining the confidence levels provided by the one or more classifiers.

24. The method of any one of the preceding claims, wherein the plurality of new hypotheses are based on hypotheses with the highest score range from a previous iteration.

25. A computer system for correcting grammatical errors of an input sentence, the computer system comprising:

at least one processor; and

at least one memory including computer program code, the at least one memory and the computer program code configured to, with the at least one processor, cause the computer system at least to perform:

receiving the input sentence as a current hypothesis;

generating a plurality of new hypotheses from the current hypothesis, each of the new hypotheses being a new sentence originating from the input sentence, with a portion of the input sentence being changed;

analysing each of the new hypotheses to compute a score for each of the plurality of new hypotheses;

comparing the scores of the plurality of new hypotheses; and

generating an output sentence from the new hypotheses with the highest score.

Description:
Method for correcting grammatical errors of an input sentence

FIELD

[0001] Various embodiments relate to correcting grammatical errors of an input sentence, and find application in natural language processing, in particular in automatic grammatical error correction, with educational uses such as language learning.

BACKGROUND

[0002] Grammatical error correction has been recognized as an important application of natural language processing. This technology can particularly benefit learners of English as a foreign language. The dominant paradigm that underlies most grammar correction systems to date is supervised multi-class classification. A classifier is trained to predict a word from a confusion set of possible choices, given some feature representation of the surrounding sentence context. During test time, the classifier predicts the most likely word from the confusion set for each instance extracted from the test data. If the prediction differs from the observed word used by the writer and the classifier is sufficiently confident in its prediction, the observed word is replaced by the prediction.

[0003] This classification approach suffers from some serious drawbacks. For example, each classifier only corrects a very specific type of error and considers each instance independently of the rest of the sentence. This ignores dependencies between the words in the sentence. There is thus a need to perform joint inference over corrections of entire sentences which can contain multiple errors, instead of correcting each word independently.

SUMMARY

[0004] Embodiments of the invention provide a computer system and method for automatic correction of grammatical errors made in texts written by learners of a language. This enables correction of complete sentences, which can contain multiple and interacting errors.

[0005] Herein disclosed is a computer system that comprises a decoder model that performs a beam search over possible hypotheses (i.e. corrected versions of the sentence) to find the best possible correction for an input sentence. The search starts from the original input sentence. At each search step, a set of proposers generates new hypotheses by making an incremental change to the current hypothesis. A set of experts scores these hypotheses on criteria of grammatical correctness. These experts include discriminative classifiers for specific error types, such as article and preposition errors. The final score for a hypothesis is a linear combination of the expert scores according to the decoder model. The weights of the decoder model are trained on a development set of error-annotated sentences.

[0006] According to a first aspect, there is provided a method for correcting grammatical errors of an input sentence, the method comprising: receiving the input sentence as a current hypothesis; generating a plurality of new hypotheses from the current hypothesis, each of the new hypotheses being a new sentence originating from the input sentence, with a portion of the input sentence being changed; analysing each of the new hypotheses to compute a score for each of the plurality of new hypotheses; comparing the scores of the plurality of new hypotheses; and generating an output sentence from the new hypotheses with the highest score.

[0007] According to a second aspect, there is provided a computer system for correcting grammatical errors of an input sentence, the computer system comprising: at least one processor; and at least one memory including computer program code, the at least one memory and the computer program code configured to, with the at least one processor, cause the computer system at least to perform: receiving the input sentence as a current hypothesis; generating a plurality of new hypotheses from the current hypothesis, each of the new hypotheses being a new sentence originating from the input sentence, with a portion of the input sentence being changed; analysing each of the new hypotheses to compute a score for each of the plurality of new hypotheses; comparing the scores of the plurality of new hypotheses; and generating an output sentence from the new hypotheses with the highest score.

BRIEF DESCRIPTION OF THE DRAWINGS

[0008] Example embodiments of the invention will be better understood and readily apparent to one of ordinary skill in the art from the following written description, by way of example only, and in conjunction with the drawings. The drawings are not necessarily to scale, emphasis instead generally being placed upon illustrating the principles of the invention, in which:

[0009] Figure 1 shows a flowchart that illustrates a method, according to a first embodiment, for correcting grammatical errors of an input sentence.

[0010] Figure 2 shows a computer system, according to a second embodiment, for correcting grammatical errors of an input sentence.

DEFINITIONS

[0011] The following provides sample, but not exhaustive, definitions for expressions used throughout various embodiments disclosed herein.

[0012] The meaning of the phrase "input sentence" depends on the stage at which grammatical error correction is occurring. Thus the input sentence may be an original sentence or a starting sentence, which may contain any number of grammatical errors, including none. On the other hand, the input sentence may be a sentence generated from the hypothesis of a previous iteration, i.e. the original sentence having undergone one or more successive iterations of grammatical error correction.

[0013] The term "hypothesis" may mean a sentence that is tested for its grammatical correctness or whether its syntax and semantics are in accordance with rules or conventions that are established for the language (e.g. English) in which the sentence is written. The form of the hypothesis depends on the stage at which the sentence is being tested for correctness. At initiation, the hypothesis is identical to the original sentence. At a later stage, the hypothesis becomes a sentence that is generated from a previous sentence which may already have undergone one or more successive such sentence generations. At such later stages, the hypothesis thus originates from the input sentence, i.e. it is based on the input sentence and may be referred to throughout the present specification as a "new hypothesis/hypotheses".

[0014] The phrase "portion of the sentence" may mean a length of a sentence that encompasses only one word or spans over two or more words (i.e. a phrase) including punctuation, if any, that exists in the phrase.

[0015] The term "score" may mean a value, between 0.0 and 1.0, that measures a probability of the grammatical correctness of a hypothesis.

[0016] The phrase "incremental change" may mean a slight change to the input sentence, so that the sentence for each new hypothesis differs by a correction of one or more words or a phrase of the input sentence.

[0017] The term "classifier" may mean an algorithm programmed to determine whether it is configured to process a portion of each of the plurality of new hypotheses that is passed to the algorithm. Exemplary classifiers include an article classifier, a preposition classifier, a noun form classifier and a verb form classifier. The list may also include classifiers that are programmed to process further grammatical aspects of the language being analysed.

[0018] The phrase "confidence level" may mean a probability score provided by each classifier that provides a measure of the grammatical correctness of the hypothesis being analysed.

[0019] The term "syntax" may mean the order in which words in a sentence are arranged, whereby the order may or may not be in accordance with the rules or the conventions that are established for the language in which the sentence is written.

[0020] The term "semantics" may mean the meaning of the words in a sentence as well as the meaning that is composed of several combined words or all the words in a sentence.

[0021] The term "iteration" may mean that when generating a plurality of new hypotheses of a current hypothesis, the results from processing one or more previous input sentences to correct their respective grammatical errors are considered. In various embodiments, the plurality of new hypotheses that are generated may be based on earlier hypotheses, which are generated from one or more previous input sentences, having the highest score range.

DETAILED DESCRIPTION

[0022] Figure 1 shows a flowchart 100 that illustrates a method, according to a first embodiment, for correcting grammatical errors of an input sentence.

[0023] In step 102, the input sentence is received as a current hypothesis. In step 104, a plurality of new hypotheses is generated from the current hypothesis. Each of the new hypotheses is a new sentence originating from the input sentence, with a portion of the input sentence being changed.

[0024] In step 106, each of the new hypotheses is analysed to compute a score for each of the plurality of new hypotheses. In step 108, the scores of the plurality of new hypotheses are compared. In step 110, an output sentence from the new hypotheses with the highest score is generated.

[0025] By comparing the scores obtained for each of the plurality of new hypotheses and then forming an output sentence from the new hypothesis with the highest score, the grammatical correctness of every new sentence resulting from changing a portion of the input sentence is considered. The method disclosed above thus provides a means to take into account the dependencies between the words in the input sentence being corrected.

[0026] Embodiments of the invention comprise a beam-search decoder for grammatical error correction that combines the advantages of a classification approach with the ability to correct entire sentences with multiple and interacting errors. In such embodiments, the beam of the beam search conducted by the decoder determines the size of the search that the decoder performs, i.e. with reference to Figure 1, the number of new hypotheses that are kept under consideration while determining a sentence that is grammatically correct and has proper syntax and/or semantics. Starting from an original input sentence or a sentence generated from a previous hypothesis, the decoder performs a search over possible new hypotheses to find the best possible correction for the input sentence. The task of the decoder is to find the best hypothesis (i.e., corrected sentence) for a given input sentence.

[0027] To accomplish this, the decoder needs to be able to generate new hypotheses from current ones, and also to discriminate good hypotheses from bad ones. This is achieved by two groups of modules, called proposers and experts, respectively. Proposers take a hypothesis and output a set of new hypotheses, where each new hypothesis is the result of making an incremental change to the current hypothesis. Accordingly, proposers generate a plurality of new hypotheses from a current hypothesis. Experts subsequently score these hypotheses on particular aspects of grammaticality. Accordingly, experts analyse each of the new hypotheses to compute a score for each of the plurality of new hypotheses. This can be a general language model score, or scores on specific aspects such as article and preposition choice. The expert scores serve as features for the decoder. The final score for a hypothesis may be a linear combination of the expert scores according to the decoder model. The weights of the decoder model may be trained on a development set of error-annotated sentences. The modular design of the framework makes it easy to extend the decoder to new error categories by adding specific proposer and expert models.

PROPOSER MODULES

[0028] The proposer modules (also referred to as "proposers") generate new hypotheses from the current hypothesis. Because the space of all possible hypotheses is exponential, each proposer only makes a small incremental change to the current hypothesis in each step. Each change may correspond to a single correction of a word or phrase in the current hypothesis.

[0029] The following list shows examples of proposers for typical grammatical errors made by language learners. Additional proposers can be added to the framework to accommodate more error types.

• Article proposer: for each noun phrase (NP) in the hypothesis, propose two new hypotheses by changing the observed article. Possible article choices are "a/an", "the", and the null article.

• Preposition proposer: for each prepositional phrase (PP), propose a set of new hypotheses by changing the observed preposition. For each preposition, define a confusion set of possible correction choices.

• Noun category proposer: for each noun in the hypothesis, propose a new hypothesis by changing the noun form from singular to plural or vice versa.

• Verb inflection proposer: for each verb in the hypothesis, propose a set of new hypotheses by changing the verb's inflection, for example changing a verb in the base form to the third person singular form or vice versa.

• Punctuation proposer: propose new hypotheses by inserting missing commas, periods, and hyphens based on a set of rules.

• Spelling proposer: propose new hypotheses by replacing each word that is flagged by a spell checker with its corrected form as proposed by the spell checker.
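To make the proposer interface concrete, the following is a minimal Python sketch of an article proposer along the lines of the first bullet above. It is an illustration under stated assumptions, not the patent's implementation: a practical system would locate noun phrases with a syntactic parser rather than scanning tokens, and the bullet treats "a/an" as a single choice while the sketch enumerates the two forms separately for simplicity.

ARTICLES = ["a", "an", "the", ""]  # "" stands for the null article

def article_proposer(hypothesis):
    """Yield new hypotheses, each the result of one incremental change:
    replacing a single observed article with an alternative choice."""
    tokens = hypothesis.split()
    for i, token in enumerate(tokens):
        if token.lower() not in ("a", "an", "the"):
            continue  # crude stand-in for finding the article of an NP
        for alternative in ARTICLES:
            if alternative == token.lower():
                continue
            new_tokens = tokens[:i] + ([alternative] if alternative else []) + tokens[i + 1:]
            yield " ".join(new_tokens)

Each yielded sentence differs from the input by exactly one article, so it is the experts described next, not the proposer, that must judge which of the candidates is an improvement.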

EXPERT MODULES

[0030] The expert modules (also referred to as "experts") score each hypothesis on particular aspects of grammaticality. This helps the decoder to discriminate grammatically fluent hypotheses from non-fluent ones. Two types of expert models are employed. The first is a standard N-gram language model. An N-gram language model computes the probability of a word conditioned on the N-1 previous words. The probability of a sentence is the product of the probabilities of its words. The probability of the hypothesis under the language model may be used as a feature (see Equation (1) below). To avoid a bias towards shorter hypotheses, the probability is normalized by the length of the hypothesis. The intuition is that grammatical sentences should on average have higher probability than ungrammatical ones. The language model expert is not specialized for a particular type of error.

$\mathrm{score}_{lm}(h) = \frac{1}{|h|} \log P_{lm}(h)$  (1)

where h is a hypothesis sentence and |h| is the hypothesis length in tokens.
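As a concrete illustration of Equation (1), the sketch below computes the length-normalized log-probability of a hypothesis. The ngram_logprob callable is an assumed interface standing in for a trained N-gram language model; it is not part of the patent.

def lm_expert_score(hypothesis, ngram_logprob, n=3):
    """Length-normalized log-probability of a hypothesis (Equation (1)).

    ngram_logprob(context, word) is assumed to return log P(word | context)
    under a trained N-gram language model, where context holds the up to
    N-1 preceding words."""
    tokens = hypothesis.split()
    log_p = 0.0
    for i, word in enumerate(tokens):
        context = tuple(tokens[max(0, i - n + 1):i])
        log_p += ngram_logprob(context, word)
    # dividing by |h| avoids the bias towards shorter hypotheses noted above
    return log_p / len(tokens)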

[0031] The second type of experts is based on supervised classifiers. Each of these one or more classifiers analyses a grammatical aspect of each corresponding new sentence (from a respective new hypothesis) to provide a score for each of the plurality of new hypotheses, based on a confidence level provided by these one or more classifiers.

[0032] In one embodiment where a classifier is a statistical model that approximates a mapping from some input variables X to a set of discrete classes Y, the confidence level provided by each of the one or more classifiers is derived from a weight assigned to each approximately matching portion of the respective new sentence found in each of the one or more classifiers to which a grammatical aspect of the respective new sentence is mapped. The weight assigned to each approximately matching portion of the respective new sentence found in each of the one or more classifiers depends on the syntax and/or semantics of the new sentence for each of the plurality of new hypotheses.

[0033] To learn the mapping, a classifier is first trained on a set of examples of inputs and their correct classes. In more detail, each of the weights in each of the one or more classifiers is obtained from training each of the one or more classifiers on a set of training sentences and one or more correct syntaxes and/or semantics for each of the training sentences. The set of training sentences and the one or more correct syntaxes and/or semantics for each of the training sentences is used to train the one or more classifiers to map the grammatical aspect of the respective new sentence to the appropriate one or more classifiers. After training, the classifier can be used to predict the classes of new, unseen examples.

[0034] Supervised classifiers can be used for particular grammatical errors by letting the classifier predict the correct word for a particular sentence context. The sentence context is encoded in a set of features which forms the input X; the possible correction choices form the classes Y. For example, for preposition errors, a classifier can be trained to predict the correct preposition, given a feature representation of the surrounding context, e.g., the words to the left and right of the preposition.

[0035] The following list shows examples of expert classifiers for typical grammatical errors made by language learners. Natural language processing tools within these supervised classifiers automatically analyse the syntax of the hypothesis to determine which classifier is to be used for each portion of the hypothesis. Additional experts can be added to the framework to accommodate more error types.

• Article classifier expert: the classifier predicts the correct article (a/an, the, null article) for a noun phrase (NP).

• Preposition classifier expert: the classifier predicts the correct preposition for a prepositional phrase (PP).

• Noun number classifier expert: the classifier predicts whether a noun should be in the singular or plural form.

• Verb form classifier expert: the classifier predicts the correct morphological form (e.g., base form, third person singular, gerund) for a verb.
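Under the linear-classifier view described above, each expert can expose a per-class confidence score u^T f(x, y) for every instance it extracts from a hypothesis. The sketch below assumes a sparse dict representation for weights and features and an illustrative feature_map callable; neither is prescribed by the patent.

def dot(u, f):
    """Sparse dot product between a weight dict u and a feature dict f."""
    return sum(u.get(feature, 0.0) * value for feature, value in f.items())

def classifier_confidences(u, feature_map, instances):
    """Score every class in each instance's confusion set with u^T f(x, y).

    instances is a list of (context, observed_word, confusion_set) triples
    extracted from one hypothesis; returns (observed_word, {class: score})
    pairs in the format consumed by the feature computations below."""
    results = []
    for context, observed, confusion_set in instances:
        per_class = {y: dot(u, feature_map(context, y)) for y in confusion_set}
        results.append((observed, per_class))
    return results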

[0036] A classifier can output a numerical confidence score (a real number) for each class. The class with the highest score does not have to be the class that corresponds to the word choice observed in the hypothesis. For example, assume a hypothesis with the text "He leaves at the morning." The preposition classifier expert would predict a numerical score for each preposition that can take the position of "at". Assume that these scores are: at (0.1), for (-0.3), in (0.9), of (0.2), on (0.1), with (0.1). Two types of features may be defined based on the classifier output. The first feature, called average score, is the average score assigned to the word choice observed in the hypothesis (see Equation (2) below).

$\mathrm{score}_{avg}(h) = \frac{1}{n} \sum_{i=1}^{n} \mathbf{u}^{T} f(x_i^h, y_i^h)$  (2)

where u is the classifier weight vector, x_i^h and y_i^h are the feature vector and the hypothesis class, respectively, for the i-th of the n instances extracted from the hypothesis h, and f is a feature map that computes the expert classifier features. The average score reflects how much the expert model on average "likes" the hypothesis. In the present example, the average score would just be 0.1, the score assigned to the preposition "at". The intuition is that higher scores on average reflect more grammatical word choices.

[0037] The second feature, called delta score, is the difference between the score of the highest scoring class and the score assigned to the word choice observed in the hypothesis, maximized over all instances extracted from the hypothesis.

$\mathrm{score}_{delta}(h) = \max_{i} \left( \max_{y} \mathbf{u}^{T} f(x_i^h, y) - \mathbf{u}^{T} f(x_i^h, y_i^h) \right)$  (3)

The difference measures how much the model "disagrees" with the hypothesis. In this case, this would be 0.9 (in) - 0.1 (at) = 0.8. The intuition is that hypotheses that the model disagrees with are on average worse than hypotheses with small or no disagreement.
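Both features are easy to compute once the per-instance class scores are available. The sketch below assumes the (observed_word, {class: score}) format from the previous sketch and reproduces the worked example for "He leaves at the morning.".

def avg_and_delta(instance_scores):
    """Average-score and delta-score features (Equations (2) and (3))."""
    n = len(instance_scores)
    avg = sum(scores[observed] for observed, scores in instance_scores) / n
    delta = max(max(scores.values()) - scores[observed]
                for observed, scores in instance_scores)
    return avg, delta

# Worked example from the text above: one preposition instance.
scores = [("at", {"at": 0.1, "for": -0.3, "in": 0.9, "of": 0.2, "on": 0.1, "with": 0.1})]
print(avg_and_delta(scores))  # -> approximately (0.1, 0.8)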

[0038] Features may be added that count how many corrections of each type have already been performed for a hypothesis. For example, there may be a feature that counts how often the article "the" has been corrected to the article "a." Aggregated correction count features may also be added for each error category, e.g., how many article corrections have been applied in total. This allows the decoder to learn a bias against overcorrecting sentences and learn which types of corrections are better and which ones are worse.
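A sketch of how such count features might be collected follows; the (error_type, before, after) record format is an assumption made for illustration.

from collections import Counter

def correction_count_features(corrections):
    """Count features over the corrections already applied to a hypothesis.

    corrections is assumed to be a list of (error_type, before, after)
    records, e.g. ("article", "the", "a")."""
    counts = Counter()
    for error_type, before, after in corrections:
        counts[("specific", error_type, before, after)] += 1  # e.g. the -> a
        counts[("aggregate", error_type)] += 1  # e.g. all article corrections
    return counts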

DECODER MODEL

[0039] A decoder performs a search for the best possible correction. The decoder needs to decide which hypotheses to keep pursuing, which hypotheses to discard, and which hypothesis is finally the best correction. To achieve this, the decoder combines the features associated with each hypothesis into an overall hypothesis score. This may be done through a linear model, which has the form as described in Equation (4) below:

$s = \mathbf{w}^{T} f_{E}(h)$  (4)

where w is the decoder model weight vector and f_E is a feature map that computes the expert features for all experts in the set E for the hypothesis h. The weights w are tuned on a development set of error-annotated sentences using an optimization algorithm such as Minimum Error Rate Training or pairwise ranking optimization. The F1 measure between the decoder corrections and the gold error-annotated corrections may be optimized. The term "gold error-annotated correction" here refers to a reference correction provided by a human expert. Other optimization objectives are possible.
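Equation (4) amounts to a dot product between the tuned weights and the expert features for a hypothesis. The sketch below shows this with illustrative feature names that are not taken from the patent.

def decoder_score(w, expert_features):
    """Overall hypothesis score s = w^T f_E(h) (Equation (4))."""
    return sum(w.get(name, 0.0) * value for name, value in expert_features.items())

# Illustrative weights and features: a language-model score combined with
# the average and delta scores of a preposition classifier expert.
w = {"lm": 1.0, "prep_avg": 0.5, "prep_delta": -0.8}
features = {"lm": -2.1, "prep_avg": 0.1, "prep_delta": 0.8}
print(decoder_score(w, features))  # -> approximately -2.69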

[0040] Given a set of proposers, experts and a decoder model, the decoder may perform a beam search over possible hypothesis candidates to find the best hypothesis correction h for an input sentence e. The decoding process proceeds as follows. The decoder starts with the input sentence as the initial hypothesis, i.e., the initial hypothesis is that all words are correct. It then performs a beam search over the space of possible correction hypotheses. The search proceeds in iterations until the beam is empty or the maximum number of iterations has been reached. In every iteration, the decoder takes each hypothesis in the beam and generates new hypothesis candidates using all the available proposers. The hypotheses are evaluated by the expert models and scored using the decoder model. Hypotheses that have been explored before are not considered again to avoid cycles in the search.

[0041] As the search space grows exponentially, it is infeasible to perform an exhaustive search. Therefore, the search space may be pruned by only accepting the most promising hypotheses to the pool of hypotheses for future consideration. Thus, the plurality of new hypotheses, when performing grammatical error correction of an input sentence, may be based on hypotheses with the highest score range from a previous iteration. If a hypothesis has a higher score compared to the best hypothesis found so far in previous iterations, it is added to the pool. Otherwise, a simulated annealing strategy may be used where a hypothesis with a lower score can still be accepted with a certain probability which depends on the "temperature" of the system. The probability for a hypothesis with a lower score to be accepted decreases with the search as the temperature decreases. From all hypotheses in the pool, the top k hypotheses may be selected and added to the beam for the next search iteration.

THE BEAM-SEARCH DECODING ALGORITHM PSEUDO CODE

[0042] The pseudo code for the decoding algorithm is shown below.

decode(e, w, P, E, k, M, T, c)
// takes an input sentence e and finds the best possible correction h_best
e : original sentence
w : decoder model weight vector
P : a set of proposers
E : a set of experts
k : beam width parameter
M : maximum number of iterations
T : temperature for simulated annealing
c : cooling schedule for simulated annealing (0 < c < 1)

beam ← {e}
previous ← {e}
h_best ← e
s_best ← w^T f_E(h_best)
i ← 0
while beam not empty and i < M
do
    pool ← {}
    for all h in beam
    do
        for all p in P
        do
            for all h' in p.propose(h)
            do
                if h' in previous
                    continue
                end if
                previous ← previous ∪ {h'}
                s_h' ← w^T f_E(h')
                if accept(s_h', s_best, T)
                    pool ← pool ∪ {(h', s_h')}
                end if
            end for
        end for
    end for
    beam ← {}
    for all (h, s_h) in nbest(pool, k)
    do
        beam ← beam ∪ {h}
        if s_h > s_best
            h_best ← h
            s_best ← s_h
        end if
    end for
    T ← T * c
    i ← i + 1
end while
return h_best

accept(s_h, s_best, T)
// decide whether to accept a hypothesis with score s_h
delta ← s_h - s_best
if delta > 0
    return true
end if
if exp(delta / T) > random()
    return true
else
    return false
end if
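For readers who prefer executable code, the following is a direct Python transcription of the pseudo code above. The toy proposer and scoring function at the end stand in for the patent's proposers, experts and trained decoder weights; they are assumptions for illustration only.

import math
import random

def decode(e, score_fn, proposers, k=10, max_iterations=10, temperature=1.0, cooling=0.9):
    """Beam search over correction hypotheses, following the pseudo code.

    score_fn(h) stands in for the decoder score w^T f_E(h); proposers is a
    list of callables, each mapping a hypothesis to new hypotheses."""
    beam = [e]
    previous = {e}
    h_best, s_best = e, score_fn(e)
    T = temperature
    for _ in range(max_iterations):
        if not beam:
            break
        pool = []
        for h in beam:
            for propose in proposers:
                for h_new in propose(h):
                    if h_new in previous:
                        continue  # never revisit a hypothesis (avoids cycles)
                    previous.add(h_new)
                    s_new = score_fn(h_new)
                    if accept(s_new, s_best, T):
                        pool.append((h_new, s_new))
        pool.sort(key=lambda pair: pair[1], reverse=True)  # nbest(pool, k)
        beam = []
        for h, s in pool[:k]:
            beam.append(h)
            if s > s_best:
                h_best, s_best = h, s
        T *= cooling  # simulated-annealing cooling schedule
    return h_best

def accept(s_h, s_best, T):
    """Simulated-annealing acceptance test from the pseudo code."""
    delta = s_h - s_best
    if delta > 0:
        return True
    return math.exp(delta / T) > random.random()

# Toy illustration (not the patent's models): one proposer that swaps the
# preposition, and a scorer that prefers "in the morning".
def preposition_proposer(h):
    return [h.replace(" at ", " in "), h.replace(" at ", " on ")]

def toy_score(h):
    return 1.0 if " in the morning" in h else 0.0

print(decode("He leaves at the morning.", toy_score, [preposition_proposer], k=5))
# -> He leaves in the morning.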

[0043] Figure 2 shows a computer system 200, according to a second embodiment, for correcting grammatical errors of an input sentence 203. The computer system 200 comprises at least one processor 201 and at least one memory 214 including computer program code.

[0044] The at least one memory 214 and the computer program code are configured to, with the at least one processor 201, cause the computer system 200 at least to perform the following: i) receive the input sentence 203 as a current hypothesis (represented using the arrow labeled 202); ii) generate a plurality of new hypotheses (204A, 204B, ... 204N) from the current hypothesis, where each of the new hypotheses is a new sentence originating from the input sentence 203, with a portion of the input sentence 203 being changed; iii) analyse each of the new hypotheses (204A, 204B, ... 204N) to compute a score for each of the plurality of new hypotheses (204A, 204B, ... 204N); iv) compare the scores of the plurality of new hypotheses (204A, 204B, ... 204N); and v) generate an output sentence 210 from the new hypotheses (204A, 204B, ... 204N) with the highest score.

[0045] The foregoing description of the embodiments of the invention has been presented for the purpose of illustration; it is not intended to be exhaustive or to limit the invention to the precise forms disclosed. Persons skilled in the relevant art can appreciate that many modifications and variations are possible in light of the above disclosure.

[0046] Some portions of this description describe the embodiments of the invention in terms of algorithms and symbolic representations of operations on information. These algorithmic descriptions and representations are commonly used by those skilled in the data processing arts to convey the substance of their work effectively to others skilled in the art. These operations, while described functionally, computationally, or logically, are understood to be implemented by computer programs or equivalent electrical circuits, microcode, or the like. Furthermore, it has also proven convenient at times to refer to these arrangements of operations as modules, without loss of generality. The described operations and their associated modules may be embodied in software, firmware, hardware, or any combinations thereof.

[0047] Any of the steps, operations, or processes described herein may be performed or implemented with one or more hardware or software modules, alone or in combination with other devices. In one embodiment, a software module is implemented with a computer program product comprising a computer-readable medium containing computer program code, which can be executed by a computer processor for performing any or all of the steps, operations, or processes described.

[0048] Embodiments of the invention may also relate to an apparatus for performing the operations herein. This apparatus may be specially constructed for the required purposes, and/or it may comprise a general-purpose computing device selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a non-transitory, tangible computer readable storage medium, or any type of media suitable for storing electronic instructions, which may be coupled to a computer system bus. Furthermore, any computing systems referred to in the specification may include a single processor or may be architectures employing multiple processor designs for increased computing capability.

[0049] Embodiments of the invention may also relate to a product that is produced by a computing process described herein. Such a product may comprise information resulting from a computing process, where the information is stored on a non-transitory, tangible computer readable storage medium and may include any embodiment of a computer program product or other data combination described herein.

[0050] Finally, the language used in the specification has been principally selected for readability and instructional purposes, and it may not have been selected to delineate or circumscribe the inventive subject matter. It is therefore intended that the scope of the invention be limited not by this detailed description, but rather by any claims that issue on an application based hereon. Accordingly, the disclosure of the embodiments of the invention is intended to be illustrative, but not limiting, of the scope of the invention, which is set forth in the following claims.