

Title:
A METHOD AND SYSTEM FOR PROCESSING DATA USING AN AUGMENTED NATURAL LANGUAGE PROCESSING ENGINE
Document Type and Number:
WIPO Patent Application WO/2016/203231
Kind Code:
A1
Abstract:
The present invention relates to a method of processing data. The method includes the steps of: at least one processor tokenising text within at least one part of the data; and at least one processor matching tokenised text against an ontological framework using a plurality of rules to identify relevant topics within the data. A system is also disclosed.

Inventors:
LECOURT JEAN-PHILIPPE (GB)
LECOURT-ALMA JELTJE (GB)
BASDEVANT JEROME (ES)
DERAZE EMMA (GB)
Application Number:
PCT/GB2016/051784
Publication Date:
December 22, 2016
Filing Date:
June 15, 2016
Assignee:
EREVALUE LTD (GB)
International Classes:
G06F17/27; G06F17/30
Foreign References:
US20100074524A12010-03-25
US20150106157A12015-04-16
Other References:
PRAKASH M NADKARNI ET AL: "Natural language processing: an introduction", JOURNAL OF THE AMERICAN MEDICAL INFORMATICS ASSOCIATION (JAMIA), vol. 18, no. 5, 1 September 2011 (2011-09-01), US, pages 544 - 551, XP055298433, ISSN: 1067-5027, DOI: 10.1136/amiajnl-2011-000464
Attorney, Agent or Firm:
RATIONAL IP LIMITED (London EC2A 3AY, GB)
Claims:
Claims

1. A method of processing data, including:

at least one processor tokenising text within at least one part of the data; and

at least one processor matching tokenised text against an ontological framework using a plurality of rules to identify relevant topics within the data.

2. A method as claimed in claim 1, wherein at least some of the plurality of rules relate to grammar.

3. A method as claimed in any one of the preceding claims, wherein the relevance of each of the topics is assigned one of a plurality of relevancy levels.

4. A method as claimed in any one of the preceding claims, wherein the data is classified into one of a plurality of categories.

5. A method as claimed in claim 4, wherein each category is associated with at least one relevancy rule, and wherein each topic is assigned to one of a plurality of relevancy levels in accordance with the at least one associated relevancy rule.

6. A method as claimed in any one of claims 4 to 5, wherein the data is comprised of a plurality of parts and the category for the data defines the at least one part of the data that is to be tokenised.

7. A method as claimed in any one of the preceding claims, wherein the data is extracted from one of a plurality of sources.

8. A method as claimed in claim 7 when dependent on claim 4, wherein each source corresponds to one of the categories.

9. A method as claimed in any one of the preceding claims when dependent on claim 4, wherein the method is applied to a plurality of data and at least some of the plurality of data is classified within different categories.

10. A method as claimed in claim 9, further including a step of determining relevant topics across the plurality of data using a plurality of further rules.

11. A method as claimed in claim 10, wherein at least some of the plurality of further rules are associated with a specific category of the plurality of categories and are used in relation to data within that specific category.

12. A method as claimed in any one of the preceding claims when dependent on claim 4, wherein the categories include one or more selected from the set of sustainability reports, financial reports, security filings, integrated reports and sustainability web sites.

13. A method as claimed in any one of the preceding claims, wherein at least one of the plurality of rules specifies that only tokenised text within specified portions of the data is matched against the ontological framework.

14. A method as claimed in any one of the preceding claims, wherein the ontological framework comprises, at least, a plurality of topics, each associated with a plurality of terms.

15. A system for processing data, including: at least one processor configured for tokenising text within at least one part of the data, and matching tokenised text against an ontological framework using a plurality of rules to identify relevant topics within the data; and

at least one memory configured for storing the data, the ontological framework and the plurality of rules.

16. A method and system for processing data as herein described with reference to the Figures.

Description:
A Method And System For Processing Data Using An Augmented Natural Language Processing Engine

Field of Invention

The present invention is in the field of data processing. More particularly, but not exclusively, the present invention relates to processing data using an augmented natural language processing (NLP) engine.

Background

Unstructured data is widespread across multiple industries and across the Internet. Techniques exist for processing unstructured data into a form that may be of greater use. In relation to text, the key cluster of techniques used is known as Natural Language Processing (NLP).

A standard NLP technique for structuring data for analysis is tokenisation. Tokenisation breaks a text document down into words, phrases, or other meaningful elements called tokens. The number of occurrences of each token in the document can be used, along with hand-written rules, to guess the nature of the document and the topics it covers.
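By way of illustration only, a minimal Python sketch of this generic token-counting approach (the keyword lists, threshold and example text are hypothetical, not taken from any embodiment described below):

```python
import re
from collections import Counter

def guess_topics(text, keyword_lists, threshold=2):
    """Generic baseline: count token occurrences and apply a hand-written
    rule (a simple count threshold) per candidate topic."""
    tokens = re.findall(r"[a-z0-9']+", text.lower())   # naive tokenisation
    counts = Counter(tokens)
    return [topic for topic, words in keyword_lists.items()
            if sum(counts[w] for w in words) >= threshold]

# Hypothetical keyword lists and text, for illustration only.
keywords = {"emissions": ["emissions", "carbon", "co2"],
            "governance": ["board", "governance", "shareholders"]}
print(guess_topics("Carbon emissions fell as CO2 targets tightened.", keywords))
```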

The disadvantage of such a generic approach is that it treats all documents in the same broad way and, therefore, produces less accurate outcomes.

There is a desire for an improved data processing method and system which can be optimised for specific fields.

It is an object of the present invention to provide a method and system of processing data using an augmented natural language processing engine which overcomes the disadvantages of the prior art, or at least provides a useful alternative.

Summary of Invention

According to a first aspect of the invention there is provided a method of processing data, including:

at least one processor tokenising text within at least one part of the data; and at least one processor matching tokenised text against an ontological framework using a plurality of rules to identify relevant topics within the data.

At least some of the plurality of rules may relate to grammar.

The relevance of each of the topics may be assigned one of a plurality of relevancy levels (i.e. high, medium or low).

The data may be classified into one of a plurality of categories (for example, sustainability reports, financial reports, security filings, integrated reports, sustainability web sites).

Each category may be associated with at least one relevancy rule, and each topic may be assigned to one of a plurality of relevancy levels in accordance with the at least one associated relevancy rule. The data may be comprised of a plurality of parts and the category for the data may define the at least one part of the data that is to be tokenised.

The data may be extracted from one of a plurality of sources. Each source may correspond to one of the categories. The method may be applied to a plurality of data and at least some of the plurality of data is classified within different categories.

The method may further include a step of determining relevant topics across the plurality of data using a plurality of further rules.

At least some of the plurality of further rules may be associated with a specific category of the plurality of categories and are used in relation to data within that specific category.

At least one of the plurality of rules may specify that only tokenised text within specified portions of the data is matched against the ontological framework.

The ontological framework may comprise, at least, a plurality of topics, each associated with a plurality of terms.

According to a further aspect of the invention there is provided a system for processing data, including:

at least one processor configured for tokenising text within at least one part of the data, and matching tokenised text against an ontological framework using a plurality of rules to identify relevant topics within the data; and

at least one memory configured for storing the data, the ontological framework and the plurality of rules.

Other aspects of the invention are described within the claims.

Brief Description of the Drawings

Embodiments of the invention will now be described, by way of example only, with reference to the accompanying drawings in which:

Figure 1: shows a block diagram illustrating a system in accordance with an embodiment of the invention;

Figure 2: shows a flow diagram illustrating a method in accordance with an embodiment of the invention;

Figure 3: shows a table illustrating a portion of an exemplary ontology for use with an embodiment of the invention;

Figure 4: shows a flow diagram illustrating a method in accordance with an embodiment of the invention;

Figure 5: shows a document and table illustrating tokenisation of a sustainability document via application of a method in accordance with an embodiment of the invention;

Figure 6: shows a document and table illustrating annotation of a sustainability document via application of a method in accordance with an embodiment of the invention;

Figure 7: shows a CSV (comma-separated values) file generated from the annotated document of Figure 6 via a method in accordance with an embodiment of the invention;

Figure 8: shows a screenshot illustrating scoring of topics via a method in accordance with an embodiment of the invention;

Figure 9: shows a table illustrating clustering of topics via a method in accordance with an embodiment of the invention;

Figure 10: shows a document and table illustrating tokenisation of a 10K document via application of a method in accordance with an embodiment of the invention;

Figure 11: shows a document and table illustrating rules identifying sections via a method in accordance with an embodiment of the invention;

Figure 12: shows a document and table illustrating annotation of a 10K document via application of a method in accordance with an embodiment of the invention; and

Figure 13: shows a table illustrating an exemplary gazetteer extracted from an ontology used with an embodiment of the invention.

Detailed Description of Preferred Embodiments

The present invention provides a method and system of processing data using an augmented natural language processing engine. The inventors have been attempting to solve the problem of processing publicly available documents within a specific field, namely, Corporate Social Responsibility and Sustainability. In solving this problem, the inventors noticed that the different documents available fall into different categories. Furthermore, within the specific field, certain words, combinations of words and phrases are relevant to certain topics.

The inventors discovered that they can improve a natural language processing engine for processing documents in specific fields by constructing an ontology for that field where relevant words, combinations of words and phrases are associated with certain topics, and by using rules to detect or weight the occurrence of words and phrases within the documents. These rules may be assigned to each category such that documents within that category utilise those assigned rules in processing. This results in a method and system which can identify relevant topics within and across documents.

In Figure 1, a system 100 in accordance with an embodiment of the invention is shown.

The system 100 includes one or more processors 101 and a memory 102.

At least one of the processors 101 is configured to process data by tokenising text within the data.

At least one of the processors 101 is further configured to match the tokenised text against an ontological framework (e.g. as shown in Figure 3) using a plurality of rules to identify relevant topics within the data.

The memory 102 may be configured to store the data, the ontological framework, and the plurality of rules.

The system 100 may be a single apparatus or distributed across a plurality of apparatus and linked via a communications system or network.

Referring to Figure 2, a method 200 in accordance with an embodiment of the invention will be described. In step 201, a processor (e.g. processor 101) tokenises text within data. Tokenisation involves splitting text into terms and punctuation using a vocabulary and rules.

In step 202, a processor (e.g. processor 101) matches the tokenised text against an ontological framework using a plurality of rules to identify relevant topics within the data. At least part of the tokenised text may be matched against the terms within the ontological framework. Each matching term is associated with a topic such that occurrences of a matching term within the document are associated with a topic.
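By way of illustration only, a minimal Python sketch of steps 201 and 202, assuming the ontological framework is held as a simple mapping of terms to topics (the function names and the example ontology fragment are assumptions):

```python
import re
from collections import Counter

def tokenise(text):
    # Step 201 (simplified): split text into word tokens and punctuation.
    return re.findall(r"\w+|[^\w\s]", text.lower())

def count_occurrences(tokens, phrase_tokens):
    # Count how often the phrase's token sequence appears in the token stream.
    n = len(phrase_tokens)
    return sum(tokens[i:i + n] == phrase_tokens
               for i in range(len(tokens) - n + 1))

def match_topics(text, term_to_topic):
    # Step 202 (simplified): match tokenised text against ontology terms,
    # accumulating one count per occurrence under the term's topic.
    tokens = tokenise(text)
    topic_counts = Counter()
    for term, topic in term_to_topic.items():
        hits = count_occurrences(tokens, tokenise(term))
        if hits:
            topic_counts[topic] += hits
    return topic_counts

# Hypothetical ontology fragment (term -> topic), for illustration only.
ontology = {"long-term shareholder value": "Long-term shareholder value",
            "climate change": "Climate change"}
print(match_topics("Climate change planning protects long-term shareholder value.",
                   ontology))
```

In practice the tokeniser and matching would be considerably richer (part-of-speech tagging, grammar rules, section restrictions), as described in the detailed embodiments below.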

The plurality of rules may include any of the following rules:

a) identifying locations within the data to use tokenised text;

b) identifying locations within the data to define weights for the tokenised text; and

c) classifying the located topics on degree of importance.

The plurality of rules may be selected from a larger set of rules. The plurality of rules may be selected based upon a category for the data. For example, when the method is applied in relation to Corporate Social Responsibility and Sustainability, the data may be categorised into one of sustainability report, SEC filing, financial annual report, integrated report and Web sites, and each of the categories may be associated with a specific selection of rules.

The number of associations within the document for each topic is used to generate a ranking for the topics from the ontology. One or more of the plurality of rules may identify whether the associations are used to contribute to the ranking based upon the location of the matched terms (e.g. only terms within specific sections of the document contribute). One or more of the plurality of rules may increase the weight given to certain associations (for example, based upon the location of the matched terms such as terms within a key section of the document).

In one embodiment, multiple matching terms associated with one topic within a single sentence are weighted the same as a single matching term associated with the topic in a sentence. One or more of the plurality of rules may be used to classify the topics on degree of importance (e.g. high, medium, low importance). For example, if the document has 40 topics, then the top 30% may be classified as high; of the remaining topics, if a topic has over 4 associations then it is classified as medium; and the remaining topics may be classified as low.
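By way of illustration only, a minimal Python sketch of this classification rule (the cut-off values are the example values given above; the function and argument names are assumptions):

```python
def classify_topics(association_counts, high_fraction=0.30, medium_over=4):
    """Classify topics by degree of importance: the top `high_fraction` of
    topics (by association count) are 'high'; of the rest, topics with over
    `medium_over` associations are 'medium'; the remainder are 'low'."""
    ranked = sorted(association_counts, key=association_counts.get, reverse=True)
    n_high = int(len(ranked) * high_fraction)
    labels = {}
    for i, topic in enumerate(ranked):
        if i < n_high:
            labels[topic] = "high"
        elif association_counts[topic] > medium_over:
            labels[topic] = "medium"
        else:
            labels[topic] = "low"
    return labels
```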

In one embodiment, the above method is applied to a plurality of data as shown at step 503. Each data may represent a document. In this embodiment, a further step 504 may be included to determine the relevancy of topics across the plurality of data. At least some of the data may fall within different categories. For example, one document may be a sustainability report, and another document may be a financial report. The further step may utilise rules based upon the category of the data to determine the weight or relevance of the classification of topics for that document. The further step may, for each topic, classify the topic across the plurality of data based upon any one or combination of the following (an illustrative sketch follows this list):

a) where multiple data exist within a category, the highest classification given to that topic across that category;

b) if a predefined category specifies a higher classification for the topic than other categories, the classification of that predefined category (for example, if a financial report classifies a topic as high and a sustainability report classifies a topic as medium, then the topic across the reports is classified as high); and/or

c) an average for the classification of the topic across the data.
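By way of illustration only, a minimal Python sketch of these three combination options (the numeric encoding high = 3, medium = 2, low = 1, the function names and the priority category are assumptions):

```python
import math
from statistics import mean

# Assumed numeric encoding of relevancy levels: high = 3, medium = 2, low = 1.

def highest_within_category(scores_by_category):
    """Rule (a): keep the highest classification given to the topic within each category."""
    return {category: max(scores) for category, scores in scores_by_category.items()}

def priority_category_score(per_category, priority="financial report"):
    """Rule (b): if the predefined category outranks the others, use its level."""
    if per_category.get(priority, 0) >= max(per_category.values()):
        return per_category[priority]
    return None   # rule (b) is not decisive here

def average_classification(per_category):
    """Rule (c): (rounded-up) average of the classification across the data."""
    return math.ceil(mean(per_category.values()))

# Hypothetical example matching the text: a financial report classifies a topic
# as high (3) and a sustainability report classifies it as medium (2).
per_category = highest_within_category({"financial report": [3],
                                         "sustainability report": [2]})
print(priority_category_score(per_category) or average_classification(per_category))  # 3
```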

In one embodiment, the method may include a yet further step of generating, for an entity (for example, a company), a proportional importance for a topic within a data or across a plurality of data. This proportional importance may be measured against other entities or against earlier or later generated data to provide a comparison within a cluster of entities or across time respectively.

For illustrative purposes only, Figure 3 shows a portion of an exemplary ontological framework for use with an embodiment of the invention.

The ontological framework may include a plurality of categories (300a and 300b), each category (300a and 300b) associated with a plurality of topics (301a to 301e), and each topic (301a to 301e) may be associated with a plurality of terms (302a to 302e). The terms (302a to 302e) may be words, phrases or other units of information which can be tokenised within text. In the example shown in Figure 3, category 300a is "Corporate governance & Risk management". This is associated with three topics (301a, 301b, and 301c). Topic 301a is "Long-term shareholder value" and is associated with a plurality of terms 302a (lasting shareholder value, long-term value for shareholders, long-term financial value, long term dividend, long term dividends and others not shown).
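By way of illustration only, the fragment of Figure 3 described above could be held in memory as a nested mapping such as the following (the Python structure is an assumption; the entries are limited to those mentioned above):

```python
# Fragment of the Figure 3 framework: category -> topic -> terms.
ontology = {
    "Corporate governance & Risk management": {           # category 300a
        "Long-term shareholder value": [                   # topic 301a
            "lasting shareholder value",
            "long-term value for shareholders",
            "long-term financial value",
            "long term dividend",
            "long term dividends",
        ],
        # further topics 301b, 301c, ... and their terms
    },
    # further categories (300b, ...)
}

# Flattened term -> topic lookup, convenient for matching tokenised text.
term_to_topic = {term: topic
                 for topics in ontology.values()
                 for topic, terms in topics.items()
                 for term in terms}
```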

With reference to Figures 4 to 13, embodiments of the invention will now be described where the data are documents relating to the field of Corporate Social Responsibility and Sustainability. It will be appreciated that, although embodiments of the invention may be particularly useful in this field, embodiments of the invention may be usefully applied in other fields.

Natural Language Processing (step 401)

a) Documents are processed from original format (pdf or html) into text. In one embodiment, five different document types may be processed: sustainability report (susty), SEC filing (10k, 20f or 40f), Financial annual report (FinAR), Integrated report (Integ) and Web sites.

b) The text is then parsed, sentences and pages are extracted, and the text is tokenized.

c) Tokens are matched against the ontology 401a for identification of terms. Matches are called annotations.

d) Generic grammar rules 401b are then applied. Examples of rules include:

i) Terms count as only one topic;

ii) For sustainability reports:

CEO letter: search for a CEO letter at the beginning of the Susty report and give the topics that occur in it an advantage in the score; and

iii) For SEC filings:

Tokens are only matched to terms in specific sections (for example, items 1, 1A and 7 for 10k; items 3, 4, 6, 11, 16G, 16H in 20f).

e) Data is then exported in the form of three files:

i) One file that contains all annotations, their location in the document and their correspondence with the ontology;

ii) One file that contains document-specific information (number of words, annotations, sentences, pages, ...); and

iii) A fully annotated document for visualization.

Details of the NLP process

Each document is analysed following a predefined flow. Throughout this flow, the initial document is enriched with annotations (e.g. metadata).

The NLP process consists of the following steps:

Step A: a document reset process that clears all previously existing annotations in the document;

Step B: text is tokenized (individual items such as words and punctuation are identified);

Step C: a Gazetteer is matched against the tokens found. The gazetteer is composed of a series of list files extracted from the Ontology 401a (in table format). It defines a multi-level hierarchical structure (from high to low: Categories, Buckets, Topics, Terms). Terms are indicators of a Topic's presence in the document and are annotated as such;

Step D: a Part-of-Speech tagger identifies the syntactic identity of each token; and

Step E: a series of Grammar rules is applied to the output of the previous steps (identifying metadata such as sentences and page numbers, and/or limiting an ambiguous term or series of terms to one topic).

The output of that process is a document fully annotated with reference to the ontology framework (topics) and other relevant metadata.
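By way of illustration only, a simplified Python sketch of Steps C and E, deriving a gazetteer from an ontology table and recording annotations with their metadata (the column names, file layout and sentence/annotation structures are assumptions; the part-of-speech tagging of Step D is not shown):

```python
import csv
from collections import defaultdict

def load_gazetteer(ontology_table):
    """Step C (sketch): build a term lookup from an ontology table exported as
    CSV. The column names (category, bucket, topic, term) are assumptions."""
    gazetteer = defaultdict(list)
    with open(ontology_table, newline="") as f:
        for row in csv.DictReader(f):
            gazetteer[row["term"].lower()].append(
                (row["category"], row["bucket"], row["topic"]))
    return gazetteer

def annotate_sentences(sentences, gazetteer):
    """Steps C/E (very simplified): record which sentences mention which terms,
    keeping page and sentence metadata alongside the ontology structure."""
    annotations = []
    for page, number, sentence in sentences:
        lowered = sentence.lower()
        for term, entries in gazetteer.items():
            if term in lowered:
                for category, bucket, topic in entries:
                    annotations.append({"page": page, "sentence": number,
                                        "term": term, "category": category,
                                        "bucket": bucket, "topic": topic})
    return annotations
```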

Topic Clustering (step 402)

The objective of topic clustering is to identify the importance of each topic in a given document.

At this stage a document is annotated with Topics.

A topic is only counted once per sentence (counts after this are counts of annotated sentences, not counts of words/terms).

For each document, a score is assigned per topic. A topic can be High, Medium or Low. The High/Medium/Low calculation differs for each document type as shown below:

a) For susty documents: Key section application

Objective: The previous step allocates a number of High topics per document. The size of this allocation remains unchanged, but the distribution of topics can be modified depending on what was defined as a Key section, as shown in the following examples (an illustrative sketch follows Example 2):

Example 1

• doc has 15 High topics

• 12 topics were found in Key section

• 8 of those were previously High: those are unchanged

• 2 of those were previously Medium: they are now boosted up to High

• 2 of those were previously Low: they are now boosted up to Medium

• There are now 10 High topics that were in Key section.

• This leaves 5 more possible High topics.

• Out of the remaining topics (those not in key section), the highest 5 topics are now High.

• The rest are unchanged from the previous stage

Example 2:

• doc has 10 High topics

• 15 topics were found in Key section

• 8 of the Key section topics were High: those are unchanged

• the other 7 Key section topics were Medium: only the highest 2 of those (i.e. those with the highest annotation count) are boosted up to High

• the remaining 5 Key section Medium topics stay Medium

• Other topics are unchanged
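By way of illustration only, a minimal Python sketch of this Key section reallocation, following Example 1 (the cap applied in Example 2 on how many topics may be boosted is omitted; the function and argument names are assumptions):

```python
def apply_key_section(scores, annotation_counts, key_section_topics):
    """Redistribute High topics toward the Key section (simplified sketch of
    Example 1). `scores` maps topic -> 'High'/'Medium'/'Low' from the previous
    step; the total number of High topics stays the same."""
    n_high = sum(1 for level in scores.values() if level == "High")
    adjusted = dict(scores)

    # Topics found in the Key section move up one level.
    for topic in key_section_topics:
        if adjusted.get(topic) == "Medium":
            adjusted[topic] = "High"
        elif adjusted.get(topic) == "Low":
            adjusted[topic] = "Medium"

    # Fill the remaining High slots with the highest-counting non-key topics,
    # demoting any surplus so the High allocation keeps its original size.
    key_high = sum(1 for t in key_section_topics if adjusted.get(t) == "High")
    slots_left = max(n_high - key_high, 0)
    non_key = sorted((t for t in adjusted if t not in key_section_topics),
                     key=lambda t: annotation_counts.get(t, 0), reverse=True)
    for i, topic in enumerate(non_key):
        if i < slots_left:
            adjusted[topic] = "High"
        elif adjusted[topic] == "High":
            adjusted[topic] = "Medium"
    return adjusted
```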

Topic Scoring - SUSTY

• HIGH:

o Rule 1: N1 = ApT * 1000, with ApT = #annotations / #tokens

o Rule 2: N2 = #annotations * C1 (C1 is a scaling parameter that may be defined through a calibration exercise based upon a corpus of documents)

o #Major topics = Min(N1, N2)

• MEDIUM: topics below Major but with more than a lower threshold (lower_threshold = 5) of annotated sentences.

• LOW: topics mentioned with fewer than lower_threshold annotated sentences.

b) For finAR documents:

o If the document has more than 40 topics: top 30%: High; above 4 annotations: Med; 4 annotations or less: Low.

o If the document has between 20 and 40 topics: top 30%: High; above 3 annotations: Med; 3 annotations or less: Low.

o If the document has less than 20 topics: above 2 annotations: High; 2 annotations or less: Low.

c) For SEC documents:

o If the document has 10 topics or more: top 30%: High; above 1 annotation: Med; 1 annotation: Low.

o If the document has less than 10 topics: above 1 annotation: High; 1 annotation: Low.

d) For integrated documents:

The same rules as for Susty are used here. It will be appreciated that the numeric values above (for example, the top 30% being assigned a high classification) are provided by way of example only and that alternative values may be utilised. After the scores have been assigned, key section identification may be used to modify some scores. An illustrative sketch of the susty scoring rules is given below.
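By way of illustration only, a minimal Python sketch of the susty topic scoring rules above (the value 0.03 for C1 is only the coefficient suggested by the Figure 8 example and is an assumption; the Key section adjustment is not shown):

```python
def susty_scores(sentence_counts, n_tokens, c1=0.03, lower_threshold=5):
    """Score topics for a susty document (sketch). sentence_counts maps each
    topic to its number of annotated sentences; n_tokens is the document's
    token count. The number of High topics is Min(N1, N2)."""
    n_annotations = sum(sentence_counts.values())
    n1 = (n_annotations / n_tokens) * 1000          # Rule 1: ApT * 1000
    n2 = n_annotations * c1                         # Rule 2: #annotations * C1
    n_high = int(min(n1, n2))

    ranked = sorted(sentence_counts, key=sentence_counts.get, reverse=True)
    scores = {}
    for i, topic in enumerate(ranked):
        if i < n_high:
            scores[topic] = "High"
        elif sentence_counts[topic] > lower_threshold:
            scores[topic] = "Medium"
        else:
            scores[topic] = "Low"
    return scores
```

With the values reported in Figure 8 (31685 tokens, 1281 annotations) this yields roughly Min(40, 38) High topics, in line with the figures given there.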

Cross-document summarization (step 403)

In one embodiment, a user can select any combination of document sources to look at. The following rules are applied to obtain summarized results from more than one document type:

Topic Ranking

2 pools of report types are defined:

o Pool A: Annual Financial report, SEC filing, Integrated report

o Pool B: Sustainability report

Summary within pool: For each topic, within a pool, the score is the highest of its components' scores.

o So, in the case of 2 documents for the same company, one an SEC filing and the other a Financial Report, the score for a given topic is the maximum value the topic has between those 2 docs.

Summary across Pools: for each topic (see the sketch after this list):

o if A > B (i.e. financial reports score a topic higher), Total score = score of A

o if B > A (i.e. the susty report ranks the topic higher), Total score = Ceiling(mean(A, B)).

o Note that if a topic is present in B but not in A, since A exists (e.g. at least one report exists in Pool A but does not mention the topic), Total score = Ceiling(mean(A, 0)).

o if B (or A) does not exist, then Total score = Score(A) (or Score(B)).
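By way of illustration only, a minimal Python sketch of this pool-based summarisation (the numeric encoding High = 3, Medium = 2, Low = 1 and the example inputs are assumptions):

```python
import math

def summarise_across_pools(pool_a, pool_b):
    """Summarise topic scores across Pool A (Annual financial report, SEC
    filing, Integrated report) and Pool B (Sustainability report). Scores are
    assumed numeric (High = 3, Medium = 2, Low = 1); an absent topic counts as 0."""
    summary = {}
    for topic in set(pool_a) | set(pool_b):
        a = max(pool_a.get(topic, [0]))     # within-pool summary: highest score
        b = max(pool_b.get(topic, [0]))
        if not pool_a:                      # Pool A has no documents at all
            summary[topic] = b
        elif not pool_b:                    # Pool B has no documents at all
            summary[topic] = a
        elif a >= b:                        # financial-type reports dominate
            summary[topic] = a
        else:                               # susty report ranks the topic higher
            summary[topic] = math.ceil((a + b) / 2)
    return summary

# Hypothetical example: an SEC filing and a financial report in Pool A,
# one sustainability report in Pool B.
pool_a = {"Emissions": [2, 3], "Water": [1]}
pool_b = {"Emissions": [2], "Diversity": [3]}
print(summarise_across_pools(pool_a, pool_b))
# e.g. {'Emissions': 3, 'Water': 1, 'Diversity': 2}
```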

HitPercent Metric:

Represents the proportion of all 'hits' accounted for by a topic (a hit is an instance of one annotated sentence for one topic; one sentence can have only one hit per topic but can have 3 hits if it covers 3 topics).

The sum of all hits may be used as a baseline to assess the relative importance/relevance of a topic in a particular document. The topic's 'Hit%' value can then be used to track the evolution of this proportion over time (by comparing the same type of document for an entity at different times) and across entities/companies (by comparing the same type of document for different entities/companies).

The number of annotated sentences for each topic is exported after the main NLP process into a CSV file.

HitPercent = [# Topic Hit Sentence] / [Total # Hit Sentence]

Example (see also the sketch below):

o All 'sum' values for document add up to 85;

o Topic x has a 'sum' of 24 hits;

o Topic x's 'Hit%' value is (24/85) * 100 ~ 28.24%
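By way of illustration only, the same calculation expressed as a short Python sketch:

```python
def hit_percent(topic_hits, total_hits):
    """HitPercent = [# Topic Hit Sentences] / [Total # Hit Sentences] * 100."""
    return topic_hits / total_hits * 100

print(round(hit_percent(24, 85), 2))   # 28.24, as in the example above
```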

An example will now be described with reference to Figures 5 to 13 and in relation to the following document types:

1 - Sustainability report (originally in pdf format). Note: Financial annual reports and Integrated reports are of the same type; the difference is only in the way the topics' scores are calculated. The steps described for Sustainability report processing are therefore identical if the report is an Integrated or Annual financial report.

2 - SEC filing: 10k (originally in html format)

3 - SEC filing: 20f (originally in html format)

Figure 5 illustrates tokenization and part-of-speech tagging for a sustainability report.

Tokens are shown in yellow (for example, "Mexico" 500a and "Joint" 500b).

Details on each token are shown in table 501, where 'category' = identified part-of-speech.

Figure 6 illustrates annotation of sentences with terms from the ontology.

Annotations are shown in pink (for example, "climate change" 600a and "people and operations safe" 600b).

Details of the annotations are shown in table 601, including the structure from the ontology (which topic, bucket and category they belong to) and specifications such as the precise location of the term and its ID.

Figure 7 illustrates the CSV document exported after the NLP process.

All annotation details are exported to a CSV file that is then processed through various scripts. Column 700 shows the terms that match Figure 6's annotated terms.

Figure 8 illustrates topic score calculation based upon the exported CSV document. A script calculates topics' scores based on the CSV file and a file containing document-level specifications (incl. number of tokens and annotations). As shown in Figure 8, the document contains 31685 tokens, the document contains 1281 annotations, the scaled AnnoPerToken value is 41, and the second '0.03 coefficient' is 39: this means this document will have 39 High topics.

Figure 9 shows a table listing the topics located in the document and showing the associated scores calculated for each topic in column 900 and the high/medium/low classification for each topic in column 901 (3 denotes the topic is classified as high, 2 denotes the topic is classified as medium, and 1 denotes the topic is classified as low).

Figure 10 illustrates tokenization and part-of-speech tagging for a 10K report.

Tokens are shown in yellow (for example, "United" 1000a and "States" 1000b).

Details on each token are shown in table 1001, where 'category' = identified part-of-speech.

Figure 11 illustrates section identification within the 10K report.

Only sections shown in red are matched to terms (for example, 1100). Details on each section are shown in table 1101.

Figure 12 illustrates annotation of sentences with terms from the ontology. Annotations are shown in pink (specifically, "equal employment" 1200a and "commodity" 1200b).

Details of the annotations are shown in table 1201, including the structure from the ontology (which topic, bucket and category they belong to) and specifications such as the precise location of the term and its ID.

A visualization of a Gazetteer is shown in Figure 13.

Language Processing Rules Library

As part of the NLP process, a set of processing rules may be activated and applied depending on the stage and type of document.

Footer Rule

• Identifies the footers of a document by determining which sentences of the document occur most frequently.

• Flags topic annotations appearing in the footer so as not to include them (an illustrative sketch follows).
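By way of illustration only, a minimal Python sketch of the Footer Rule (the repetition cut-off and the annotation structure are assumptions):

```python
from collections import Counter

def find_footer_sentences(sentences, min_repeats=3):
    """Footer Rule (sketch): sentences that recur most often across the
    document are treated as footers. The min_repeats cut-off is an assumption."""
    counts = Counter(sentence.strip() for sentence in sentences)
    return {sentence for sentence, n in counts.items() if n >= min_repeats}

def flag_footer_annotations(annotations, footer_sentences):
    """Mark annotations whose sentence text is a footer so that scoring can
    ignore them (the 'sentence_text' field is an assumed structure)."""
    for annotation in annotations:
        annotation["in_footer"] = annotation["sentence_text"].strip() in footer_sentences
    return annotations
```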

Negative topic

• Identifies topic annotations which have negative connotations by identifying negators surrounding them.

Strong topic

• Identifies topic annotations with positive connotations by identifying ones that have forms surrounding them that reinforce the topic.

Table of contents

• Identifies the table of contents in a document by searching for various string sequences and structures that would appear in a table of contents.

• An example of such a sequence is "1. Introduction 2. Stakeholders".

Merge Topics

• Merges annotations of a topic from different terms that appear next to each other into one annotation.

• Merges a topic annotation appearing next to its parent topic so that the more specific one (further down the tree) remains.

o '...reducing our emission factor...': both 'emission' and 'emission factor' are terms that appear under the same topic, Emissions (general). They will count as one annotation instead of 2.

o '...implement policies to connect corporate and societal values...': both 'societal values' and 'connect corporate and societal value' are terms under the same topic 'enhanced value'. They will count as one annotation.

o Merging only occurs amongst terms that belong to the same topic.

GRI Framework identification

• Identifies Global Reporting Initiative identifiers to link the report's topics with GRI's framework (locates a GRI topic indicator (e.g. 'EN18', 'LA5') and tags the paragraph corresponding to this GRI topic).

Key Sentences

• Identifies sentences with X or more annotated topics appearing in them.

Key Paragraph

• Identifies the paragraph(s) that have the highest density of key sentences.

Page Number Of Sentence

• For each sentence, adds metadata regarding the page it occurs on.

The same is also done for topic annotations.

Key Section SEC

• Identify the string 'ITEM' followed by a number between 0 and 99, optionally followed by an A or a B, and potentially a period.

• Add the number to each item tag.

• Use the tags to identify different key sections of the SEC document, based on a known structure of these documents and knowledge of which sections are of interest (see the sketch below).
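By way of illustration only, a minimal Python sketch of this key-section identification (the regular expression and the span logic are assumptions; the item lists are those given earlier for 10k and 20f filings):

```python
import re

# 'ITEM' followed by a number between 0 and 99, an optional letter and an
# optional period. (The rule above names A or B; a wider letter class also
# covers 20f items such as 16G and 16H.)
ITEM_PATTERN = re.compile(r"\bITEM\s+(\d{1,2})([A-Z])?\.?", re.IGNORECASE)

# Sections of interest per filing type (the item lists given earlier in the text).
KEY_ITEMS = {"10k": {"1", "1A", "7"},
             "20f": {"3", "4", "6", "11", "16G", "16H"}}

def tag_items(text):
    """Tag each ITEM heading with its number (e.g. '1A') and character offset."""
    return [(m.group(1) + (m.group(2) or "").upper(), m.start())
            for m in ITEM_PATTERN.finditer(text)]

def key_section_spans(text, filing_type):
    """Return (start, end) character spans of the key sections of a filing;
    only terms found inside these spans would be matched against the ontology."""
    tags = tag_items(text) + [("END", len(text))]
    return [(start, tags[i + 1][1])
            for i, (item, start) in enumerate(tags[:-1])
            if item in KEY_ITEMS.get(filing_type, set())]
```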

Collocation

Will be used for two purposes:

1. identifying topics that occur next to each other (in the same sentence and/or the same paragraph) - for ontology analysis, research and topic-tracking purposes;

2. identifying co-occurrences of different strategic levels for one given topic ('aware' vs 'action') - this can be used for assessing report quality and the maturity level of the company with respect to specific topics.

CEO Letter Identification

The following steps are used to identify letters from senior management (i.e. CEO, Chairman, etc.) in Sustainability and Integrated reports:

1. Apply an ontology of potential terms and phrases that appear at the start and end of letters, based on analysis of a large sample of these documents.

2. Apply a set of language processing rules that utilize the annotations from (1) and knowledge of the semantic structure of starts and ends of letters to identify and annotate potential starts and ends.

Example 'start': Letter from James Jameson, CEO ...

The above rule would detect an instance of one of the strings indicating a 'message', followed by the name of a person, potentially a comma, and then the title of a leader.

Next, a set of rules would be applied to filter out starts and finishes that occur past page 15 and in the middle of the page, depending on which of the rules from phase 2 were triggered.

The next set of rules identify the entire content of the letter based on the remaining starts and finishes. A staggered approach is taken where the following rules are applied in succession:

a) The first start is chosen and the furthest finish within the following 5 pages is found and all of the content in-between is annotated.

b) If the previous rule is not triggered, every start is found and the page containing it is annotated.

c) If neither a) nor b) is triggered, find all ends in the document and annotate the page containing them.

Flag all topic annotations appearing in a CEO letter.

Boost topics appearing in the CEO letter in post-NLP computation.

A potential advantage of some embodiments of the present invention is that relevant topics within data or documents comprising text can be more accurately identified, particularly in specific fields. This improved data processing technology can lead to superior classification of documents and, therefore, better searching or analysis of those documents.

While the present invention has been illustrated by the description of the embodiments thereof, and while the embodiments have been described in considerable detail, it is not the intention of the applicant to restrict or in any way limit the scope of the appended claims to such detail. Additional advantages and modifications will readily appear to those skilled in the art. Therefore, the invention in its broader aspects is not limited to the specific details, representative apparatus and method, and illustrative examples shown and described. Accordingly, departures may be made from such details without departure from the spirit or scope of applicant's general inventive concept.