Title:
SYSTEMS AND METHODS FOR PROCESSING DATA STORED IN A DATABASE
Document Type and Number:
WIPO Patent Application WO/2015/084757
Kind Code:
A1
Abstract:
Disclosed are systems and methods for extracting facts from unstructured text files. Embodiments receive text files as input and perform extraction and disambiguation of entities, topics, and facts. Facts are extracted by comparing features, such as keywords, against fact templates, and associating facts with events or topics. Extracted facts are stored in a datastore. Also disclosed are methods and systems for discovering "knowledge" in stored corpora, which include applying in-memory analytics to database records based on user-selected precision. Also disclosed are systems and methods for building a knowledgebase using co-occurring features, such as keywords, extracted from corpora. Embodiments include feature-extraction software that extracts features from document files in a stored corpus. Embodiments may include a knowledgebase aggregator software module that counts the number of co-occurrences of features in the various documents of a corpus and identifies which feature co-occurrences to store in a knowledgebase.

Inventors:
DAVE RAKESH (US)
BODDHU SANJAY (US)
LIGHTNER SCOTT (US)
FLAGG ROBERT (US)
Application Number:
PCT/US2014/067994
Publication Date:
June 11, 2015
Filing Date:
December 02, 2014
Assignee:
QBASE LLC (US)
International Classes:
G06F17/00
Foreign References:
US20110161333A12011-06-30
US20040243645A12004-12-02
US20080077570A12008-03-27
US20090222395A12009-09-03
US20070156748A12007-07-05
US20020031260A12002-03-14
Attorney, Agent or Firm:
SOPHIR, Eric (P.O. Box 061080, Wacker Drive Station, Willis Tower, Chicago, IL, US)
Claims:
CLAIMS

What is claimed is:

1. A method comprising:

receiving, by an entity extraction computer, an electronic document having unstructured text;

extracting, by the entity extraction computer, an entity identifier from the unstructured text in the electronic document;

extracting, by a topic extraction computer, a topic identifier from the unstructured text in the electronic document;

extracting, by a fact extraction computer, a fact identifier from the unstructured text in the electronic document by comparing text string structures in the unstructured text to a fact template database, the fact template database having stored therein a fact template model identifying keywords pertaining to specific fact identifiers and corresponding keyword weights; and

associating, by a fact relatedness estimator computer, the entity identifier with the topic identifier and the fact identifier to determine a confidence score indicative of a degree of accuracy of extraction of the fact identifier.

2. The method of claim 1 wherein the confidence score is based on a distance between a part of text in the electronic document from where the fact identifier was extracted and a part of text from where at least one of the topic identifier and the entity identifier was extracted.

3. The method of claim 2 wherein the distance in text is calculated using tokenization.

4. The method of claim 1 wherein the confidence score is based on comparing co-occurring entity identifiers in the electronic document.

5. The method of claim 1 wherein the fact template model includes metadata.

6. The method of claim 5 wherein the metadata includes a count of a number of times a sentence structure corresponding to the fact template model is repeated across a plurality of electronic documents.

7. The method of claim 5 wherein the confidence score is stored in the metadata.

8. A system comprising:

one or more server computers having one or more processors executing computer readable instructions for a plurality of computer modules including:

an entity extraction module configured to receive an electronic document having unstructured text and extract an entity identifier from the unstructured text in the electronic document;

a topic extraction module configured to extract a topic identifier from the unstructured text in the electronic document;

a fact extraction module configured to extract a fact identifier from the unstructured text in the electronic document by comparing text string structures in the unstructured text to a fact template database, the fact template database having stored therein a fact template model identifying keywords pertaining to specific fact identifiers and corresponding keyword weights; and

a fact relatedness estimator module configured to associate the entity identifier with the topic identifier and the fact identifier to determine a confidence score indicative of a degree of accuracy of extraction of the fact identifier.

9. The system of claim 8 wherein the confidence score is based on a distance between a part of text in the electronic document from where the fact identifier was extracted and a part of text from where at least one of the topic identifier and the entity identifier was extracted.

10. The system of claim 9 wherein the distance in text is calculated using tokenization.

11. The system of claim 8 wherein the confidence score is based on comparing co-occurring entity identifiers in the electronic document.

12. The system of claim 8 wherein the fact template model includes metadata.

13. The system of claim 12 wherein the metadata includes a count of a number of times a sentence structure corresponding to the fact template model is repeated across a plurality of electronic documents.

14. The system of claim 12 wherein the confidence score is stored in the metadata.

15. A non-transitory computer readable medium having stored thereon computer executable instructions comprising:

receiving, by an entity extraction computer, an electronic document having unstructured text;

extracting, by the entity extraction computer, an entity identifier from the unstructured text in the electronic document;

extracting, by a topic extraction computer, a topic identifier from the unstructured text in the electronic document;

extracting, by a fact extraction computer, a fact identifier from the unstructured text in the electronic document by comparing text string structures in the unstructured text to a fact template database, the fact template database having stored therein a fact template model identifying keywords pertaining to specific fact identifiers and corresponding keyword weights; and

associating, by a fact relatedness estimator computer, the entity identifier with the topic identifier and the fact identifier to determine a confidence score indicative of a degree of accuracy of extraction of the fact identifier.

16. The computer readable medium of claim 15 wherein the confidence score is based on a distance between a part of text in the electronic document from where the fact identifier was extracted and a part of text from where at least one of the topic identifier and the entity identifier was extracted.

17. The computer readable medium of claim 16 wherein the distance in text is calculated using tokenization.

18. The computer readable medium of claim 15 wherein the confidence score is based on comparing co-occurring entity identifiers in the electronic document.

19. The computer readable medium of claim 15 wherein the fact template model includes metadata.

20. The computer readable medium of claim 19 wherein the metadata includes a count of a number of times a sentence structure corresponding to the fact template model is repeated across a plurality of electronic documents.

21. A method comprising:

receiving, by a search manager computer, a search query from a user computing device configured to receive a selection, from a user, of an analytic computer that processes search query results for presentation to the user;

submitting, by the search manager computer, the search query to a search conductor computer for conducting a search;

receiving, by the search manager computer, the search query results from the search conductor computer, the search query results having one or more records matching the search query;

forwarding, by the search manager computer, the search query results to the analytic computer selected by the user to process the search query results for the presentation;

receiving, by the search manager computer, the search query results processed by the analytic computer selected by the user; and

returning, by the search manager computer, the search query results to the user device for the presentation to the user in accordance with the processing.

22. The method of claim 21 wherein the analytic computer disambiguates features in the search query results.

23. The method of claim 22 wherein the selection from the user includes a level of precision for at least one of the search query results and disambiguation of the features in the search query results.

24. The method of claim 21 wherein the analytic computer links the search query results, which includes scoring the one or more records in the search query results to indicate a degree of matching to the search query and arranging related features by clusters.

25. The method of claim 21 wherein the selection of the analytic computer is from a set of analytic computers, each analytic computer in the set executing a disambiguation computer algorithm having respective weights and confidence scores associated with feature attributes in the search query results.

26. The method of claim 21 wherein the selection from the user includes a threshold of acceptance of the search query results.

27. The method of claim 21 wherein the search query is constructed using a markup language.

28. The method of claim 21 wherein the search query is constructed in a binary format.

29. The method of claim 21 wherein the search manager computer processes the search query via one or more processes selected from the group consisting of: address standardization, proximity boundaries, and nickname interpretation.

30. The method of claim 21 wherein the search manager computer combines the search query results among multiple searches.

31. A system comprising:

one or more server computers having one or more processors executing computer readable instructions for a plurality of computer modules including:

a search manager computer module configured to receive a search query from a user computing device that is configured to receive a selection, from a user, of an analytic computer module that processes search query results for presentation to the user, the search manager computer module being further configured to:

submit the search query to a search conductor computer module configured to conduct a search,

receive the search query results from the search conductor computer module, the search query results having one or more records matching the search query,

forward the search query results to the analytic computer module selected by the user to process the search query results for the presentation,

receive the search query results processed by the analytic computer module selected by the user, and

return the search query results to the user device for the presentation to the user in accordance with the processing.

32. The system of claim 31 wherein the analytic computer module is configured to disambiguate features in the search query results.

33. The system of claim 32 wherein the selection from the user includes a level of precision for at least one of the search query results and disambiguation of the features in the search query results.

34. The system of claim 31 wherein the analytic computer module is configured to link the search query results by being further configured to score the one or more records in the search query results to indicate a degree of matching to the search query and arrange related features by clusters.

35. The system of claim 31 wherein the selection of the analytic computer module is from a set of analytic computer modules, each analytic computer module in the set being configured to execute a disambiguation computer algorithm having respective weights and confidence scores associated with feature attributes in the search query results.

36. The system of claim 31 wherein the selection from the user includes a threshold of acceptance of the search query results.

37. The system of claim 31 wherein the search query is constructed using a markup language.

38. The system of claim 31 wherein the search query is constructed in a binary format.

39. The system of claim 31 wherein the search manager computer module is configured to process the search query via one or more processes selected from the group consisting of: address standardization, proximity boundaries, and nickname interpretation.

40. The system of claim 31 wherein the search manager computer module is configured to combine the search query results among multiple searches.

41. A method comprising:

crawling, via an entity extraction computer, a corpus of electronic documents;

extracting, via the entity extraction computer, a plurality of features from each of the crawled documents in the corpus;

aggregating, via a knowledge base aggregator computer, instances of co-occurrence of two or more of the plurality of features across the crawled documents to determine a count of the instances of co-occurrence; and

adding, via the knowledge base aggregator computer, an instance of co-occurrence of the two or more features to a feature co-occurrence database when the count of the instances of co-occurrence exceeds a predetermined threshold.

42. The method of claim 41 wherein each of the plurality of features is selected from the group consisting of a person, a location, an organization name, a topic, an event, and a fact.

43. The method of claim 41 further comprising adding, via the knowledge base aggregator computer, the instance of the co-occurrence of the two or more features to the feature co-occurrence database along with metadata pertaining to the instance.

44. The method of claim 43 wherein the metadata is selected from the group consisting of feature type, document identifier, document corpus identifier, distance in text between co-occurring features, and a confidence score.

45. The method of claim 43 wherein the metadata includes a confidence score indicative of whether the two or more of the plurality of features associated with a respective instance of co-occurrence are unique.

46. The method of claim 45 wherein the confidence score is calculated using a plurality of parameters including a number of co-occurrences in a single document, a number of co-occurrences in the corpus of electronic documents, a size of the corpus of electronic documents, a number of co-occurrences in different corpora of electronic documents, distance in text from co-occurring features, and an indicator of human verification.

47. A system comprising:

an entity extraction computer configured to crawl a corpus of electronic documents and extract a plurality of features from each of the crawled documents in the corpus; and

a knowledge base aggregator computer configured to aggregate instances of co-occurrence of two or more of the plurality of features across the crawled documents to determine a count of the instances of co-occurrence, and add an instance of co-occurrence of the two or more features to a feature co-occurrence database when the count of the instances of co-occurrence exceeds a predetermined threshold.

48. The system of claim 47 wherein each of the plurality of features is selected from the group consisting of a person, a location, an organization name, a topic, an event, and a fact.

49. The system of claim 47 wherein the knowledge base aggregator computer is further configured to add the instance of the co-occurrence of the two or more features to the feature co-occurrence database along with metadata pertaining to the instance.

50. The system of claim 49 wherein the metadata is selected from the group consisting of feature type, document identifier, document corpus identifier, distance in text between co-occurring features, and a confidence score.

51. The system of claim 49 wherein the metadata includes a confidence score indicative of whether the two or more of the plurality of features associated with a respective instance of co-occurrence are unique.

52. The system of claim 51 wherein the knowledge base aggregator computer is further configured to calculate the confidence score using a plurality of parameters including a number of co-occurrences in a single document, a number of co-occurrences in the corpus of electronic documents, a size of the corpus of electronic documents, a number of co-occurrences in different corpora of electronic documents, distance in text from co-occurring features, and an indicator of human verification.

53. A non-transitory computer readable medium having stored thereon computer executable instructions comprising:

crawling, via an entity extraction computer, a corpus of electronic documents;

extracting, via the entity extraction computer, a plurality of features from each of the crawled documents in the corpus;

aggregating, via a knowledge base aggregator computer, instances of co-occurrence of two or more of the plurality of features across the crawled documents to determine a count of the instances of co-occurrence; and

adding, via the knowledge base aggregator computer, an instance of co-occurrence of the two or more features to a feature co-occurrence database when the count of the instances of co-occurrence exceeds a predetermined threshold.

54. The computer readable medium of claim 53 wherein each of the plurality of features is selected from the group consisting of a person, a location, an organization name, a topic, an event, and a fact.

55. The computer readable medium of claim 53 wherein the instructions further comprise adding, via the entity extraction computer, the instance of the co-occurrence of the two or more features to the feature co-occurrence database along with metadata pertaining to the instance.

56. The computer readable medium of claim 55 wherein the metadata is selected from the group consisting of feature type, document identifier, document corpus identifier, distance in text between co-occurring features, and a confidence score.

57. The computer readable medium of claim 55 wherein the metadata includes a confidence score indicative of whether the two or more of the plurality of features associated with a respective instance of co-occurrence are unique.

58. The computer readable medium of claim 57 wherein the instructions further comprise calculating, via the knowledge base aggregator computer, the confidence score using a plurality of parameters including a number of co-occurrences in a single document, a number of co-occurrences in the corpus of electronic documents, a size of the corpus of electronic documents, a number of co-occurrences in different corpora of electronic documents, distance in text from co-occurring features, and an indicator of human verification.

Description:
SYSTEMS AND METHODS FOR PROCESSING DATA STORED IN A DATABASE

TECHNICAL FIELD

[0001] The present disclosure relates in general to information data mining from document sources, and more specifically to extraction of facts from documents. The present disclosure relates in general to information data, and more specifically to a method for building a knowledge base storage of feature co-occurrences. The present disclosure relates in general to in-memory databases, and more specifically to search methods for discovering and exploring feature knowledge within in-memory databases.

BACKGROUND

[0002] Electronic document corpora may contain vast amounts of information. For a person searching for specific information in a document corpus, identifying key information may be troublesome. Manually crawling each document and highlighting or extracting important information may even be impossible, depending on the size of the document corpus. At times a reader may only be interested in facts or asserted information. Intelligent computer systems that extract features in an automated manner are commonly used to aid in fact extraction. However, current intelligent systems fail to properly extract facts and associate them with other extracted features such as entities, topics, events, and other feature types.

[0003] Thus a need exists for a method of extracting facts and accurately associating them with features to improve accuracy of information.

[0004] Traditional search engines allow users to find only pieces of information that are relevant to an entity, and while millions or billions of documents may describe that entity, the documents are generally not linked together. In most cases, it may not be viable to try to discover a complete set of documents about a particular feature. Additionally, methods that pre-link data are limited to a single method of linking and are fed by many entity extraction methods that are ambiguous and inaccurate. These systems may not be able to use live feeds of data; they may not perform these processes on the fly. As a consequence, the latest information is not used in the linking process.

[0005] There is therefore a need for a dynamic and accurate method of discovering and navigating through feature knowledge that allows the user to tune how things are linked, depending on specific requirements.

[0006] Searching information about entities (i.e., people, locations, organizations) in a large amount of documents, including sources such as the World Wide Web, may often be ambiguous, which may lead to imprecise text processing functions and thus imprecise data analysis. For example, a reference to "Paris" could refer to a city in the country of France, cities in the states of Texas, Tennessee, or Illinois, or even a person (e.g., "Paris Hilton"). Associating entities with co-occurring features may prove helpful in disambiguating different entities.

[0007] Large companies or organizations may contain vast amounts of information stored in large electronic document repositories. Generally, information stored in document format may be written in an unstructured manner. Searching or identifying specific information in these document repositories may be tedious and/or troublesome. Identifying co-occurrence of different features together with entities, topics, events, keywords and the like in a document corpus may help to better identify specific information in the same. The need for intelligent electronic assistants to aid in locating and/or discovering useful or desired information amongst the vast amounts of data may be significant.

[0008] Thus a need exists for an intelligent electronic system for detecting and recording co-occurring features in a corpus of documents.

SUMMARY

[0009] A system and method for extracting facts from unstructured text are disclosed.

The system includes an entity extraction computer module used to extract and disambiguate independent entities from an electronic document, such as a text file. The system may further include a topic extractor computer module configured to determine a topic related to the text file. The system may extract possible facts described in the text by comparing text string structures against a fact template store. The fact template store may be built by revising documents containing facts and recording a commonly used fact sentence structure. The extracted facts may then be associated with extracted entities and topics to determine a confidence score that may serve as an indication of the accuracy of the fact extraction.

[0010] In one embodiment, a method is disclosed. The method comprises receiving, by an entity extraction computer, an electronic document having unstructured text and extracting, by the entity extraction computer, an entity identifier from the unstructured text in the electronic document. The method further includes extracting, by a topic extraction computer, a topic identifier from the unstructured text in the electronic document, and extracting, by a fact extraction computer, a fact identifier from the unstructured text in the electronic document by comparing text string structures in the unstructured text to a fact template database, the fact template database having stored therein a fact template model identifying keywords pertaining to specific fact identifiers and corresponding keyword weights. The method further includes associating, by a fact relatedness estimator computer, the entity identifier with the topic identifier and the fact identifier to determine a confidence score indicative of a degree of accuracy of extraction of the fact identifier.

[0011] In another embodiment, a system is disclosed. The system comprises one or more server computers having one or more processors executing computer readable instructions for a plurality of computer modules. The computer modules include an entity extraction module configured to receive an electronic document having unstructured text and extract an entity identifier from the unstructured text in the electronic document, a topic extraction module configured to extract a topic identifier from the unstructured text in the electronic document, and a fact extraction module configured to extract a fact identifier from the unstructured text in the electronic document by comparing text string structures in the unstructured text to a fact template database, the fact template database having stored therein a fact template model identifying keywords pertaining to specific fact identifiers and corresponding keyword weights. The system further includes a fact relatedness estimator module configured to associate the entity identifier with the topic identifier and the fact identifier to determine a confidence score indicative of a degree of accuracy of extraction of the fact identifier.

[0012] In yet another embodiment, a non-transitory computer readable medium having stored thereon computer executable instructions is disclosed. The instructions comprise receiving, by an entity extraction computer, an electronic document having unstructured text, extracting, by the entity extraction computer, an entity identifier from the unstructured text in the electronic document, and extracting, by a topic extraction computer, a topic identifier from the unstructured text in the electronic document. The instructions further include extracting, by a fact extraction computer, a fact identifier from the unstructured text in the electronic document by comparing text string structures in the unstructured text to a fact template database, the fact template database having stored therein a fact template model identifying keywords pertaining to specific fact identifiers and corresponding keyword weights, and associating, by a fact relatedness estimator computer, the entity identifier with the topic identifier and the fact identifier to determine a confidence score indicative of a degree of accuracy of extraction of the fact identifier.

[0013] Methods for discovering and exploring feature knowledge are disclosed. The methods may include the application of in-memory analytics to records, where the analytic methods applied to the records and the level of precision of the methods may be dynamically selected by a user.

[0014] According to some embodiments, when a user starts a search, the system may score records against the one or more queries, where the system may score the match of one or more available fields of the records and may then determine a score for the overall match of the records. The system may determine whether the score is above a predefined acceptance threshold, where the threshold may be defined in the search query or may be a default value.

[0015] In further embodiments, fuzzy matching algorithms may compare records temporarily stored in collections with the one or more queries being generated by the system.

[0016] In some embodiments, numerous analytics computer modules may be plugged into the in-memory database and the user may be able to modify the relevant analytical parameters of each analytics computer module through a user interface.

[0017] In one embodiment, a method is disclosed. The method comprises receiving, by a search manager computer, a search query from a user computing device configured to receive a selection, from a user, of an analytic computer that processes search query results for presentation to the user. The method further includes submitting, by the search manager computer, the search query to a search conductor computer for conducting a search, receiving, by the search manager computer, the search query results from the search conductor computer, the search query results having one or more records matching the search query, and forwarding, by the search manager computer, the search query results to the analytic computer selected by the user to process the search query results for the presentation. The method also includes receiving, by the search manager computer, the search query results processed by the analytic computer selected by the user, and returning, by the search manager computer, the search query results to the user device for the presentation to the user in accordance with the processing.

[0018] In another embodiment, a system is disclosed. The system comprises one or more server computers having one or more processors executing computer readable instructions for a plurality of computer modules including a search manager computer module configured to receive a search query from a user computing device that is configured to receive a selection, from a user, of an analytic computer module that processes search query results for presentation to the user. In the disclosed system, the search manager computer module is further configured to: submit the search query to a search conductor computer module configured to conduct a search, receive the search query results from the search conductor computer module, the search query results having one or more records matching the search query, forward the search query results to the analytic computer module selected by the user to process the search query results for the presentation, receive the search query results processed by the analytic computer module selected by the user, and return the search query results to the user device for the presentation to the user in accordance with the processing.

[0019] A system and method for building a knowledge base of feature co-occurrences from a document corpus are disclosed. The system may include a plurality of software and hardware computer modules to extract different features such as entities, topics, events, facts, and/or any other features that may be derived from a document. The system may crawl each document in a document corpus and extract features from each individual document. After different features are extracted from a document, they may be submitted to a knowledge base aggregator where the co-occurrences of two or more features may be aggregated with co-occurrences of the same features in different documents. Once the aggregation for the co-occurring features reaches a determined threshold, the co-occurrences and additional metadata related to the co-occurring features may be stored in the knowledge base. The knowledge base of co-occurring features may serve to assist in subsequent disambiguation of features. The knowledge base may be created using a single document corpus or a plurality of document corpora.
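
By way of illustration only, the following minimal Python sketch shows one way such threshold-based aggregation could be performed; the function name, data layout, and threshold value are assumptions made for the example and are not part of the disclosure.

    from collections import Counter
    from itertools import combinations

    def aggregate_cooccurrences(corpus_features, threshold=3):
        """Count co-occurrences of features across documents and keep
        only those pairs whose count exceeds a predetermined threshold.

        corpus_features: an iterable of per-document feature lists, e.g.
            [["Paris", "Eiffel Tower", "tourism"], ["Paris", "Texas"], ...]
        """
        counts = Counter()
        for features in corpus_features:
            # Each unordered pair of distinct features appearing in the
            # same document is one instance of co-occurrence.
            for pair in combinations(sorted(set(features)), 2):
                counts[pair] += 1
        # Only sufficiently frequent co-occurrences reach the knowledge base.
        return {pair: count for pair, count in counts.items() if count > threshold}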

[0020] In one embodiment, a method is provided that includes crawling, via an entity extraction computer, a corpus of electronic documents, extracting, via the entity extraction computer, a plurality of features from each of the crawled documents in the corpus, and aggregating, via a knowledge base aggregator computer, instances of co-occurrence of two or more of the plurality of features across the crawled documents to determine a count of the instances of co-occurrence. The method further includes adding, via the knowledge base aggregator computer, an instance of co-occurrence of the two or more features to a feature co-occurrence database when the count of the instances of co-occurrence exceeds a predetermined threshold.

[0021] In another embodiment, a system is provided. The system includes an entity extraction computer configured to crawl a corpus of electronic documents and extract a plurality of features from each of the crawled documents in the corpus. The system further includes a knowledge base aggregator computer configured to aggregate instances of co-occurrence of two or more of the plurality of features across the crawled documents to determine a count of the instances of co-occurrence, and add an instance of co-occurrence of the two or more features to a feature co-occurrence database when the count of the instances of co-occurrence exceeds a predetermined threshold.

[0022] In yet another embodiment, a non-transitory computer readable medium having stored thereon computer executable instructions comprises crawling, via an entity extraction computer, a corpus of electronic documents; extracting, via the entity extraction computer, a plurality of features from each of the crawled documents in the corpus; aggregating, via a knowledge base aggregator computer, instances of co-occurrence of two or more of the plurality of features across the crawled documents to determine a count of the instances of co-occurrence; and adding, via the knowledge base aggregator computer, an instance of co-occurrence of the two or more features to a feature co-occurrence database when the count of the instances of co-occurrence exceeds a predetermined threshold.

[0023] Numerous other aspects, features and benefits of the present disclosure may be made apparent from the following detailed description taken together with the drawing figures.

BRIEF DESCRIPTION OF THE DRAWINGS

[0024] The present disclosure can be better understood by referring to the following figures. The components in the figures are not necessarily to scale, emphasis instead being placed upon illustrating the principles of the disclosure. In the figures, reference numerals designate corresponding parts throughout the different views.

[0025] FIG. 1 is a diagram of a fact extraction system, according to an embodiment.

[0026] FIG. 2 is a diagram of a system for training a fact concept store, according to an embodiment.

[0027] FIG. 3 is a flow chart of a method for building a fact template store of FIG. 2, according to an embodiment.

[0028] FIG. 4 is a flowchart of a search method for discovering and exploring feature knowledge, according to an embodiment.

[0029] FIG. 5 is a flowchart of a process executed by a link on-the-fly module, according to an embodiment.

[0030] FIG. 6 is a diagram of a system employed for disambiguating features, according to an exemplary embodiment.

[0031] FIG. 7 is a diagram of a central computer server system for building a knowledge base of co-occurrences, according to an embodiment.

[0032] FIG. 8 is a diagram of a co-occurring aggregation method, according to an embodiment.

DEFINITIONS

[0033] As used here, the following terms may have the following definitions:

[0034] "Entity Extraction" refers to information processing methods for extracting information such as names, places, and organizations.

[0035] "Corpus" refers to a collection of one or more documents

[0036] "Features" is any information which is at least partially derived from a document.

[0037] "Event Concept Store" refers to a database of Event template models.

[0038] "Event" refers to one or more features characterized by at least the features' occurrence in real-time. [0039] "Event Model" refers to a collection of data that may be used to compare against and identify a specific type of event.

[0040] "Module" refers to a computer or software components suitable for carrying out at least one or more tasks.

[0041] "Facts" refers to asserted information about features found in an electronic document.

[0042] "Document" refers to a discrete electronic representation of information having a start and end.

[0043] "Facet" refers to clearly defined, mutually exclusive, and collectively exhaustive aspects, properties or characteristics of a class, specific subject, topic or feature.

[0044] "Knowledge base" refers to a computer database containing disambiguated features or facets.

[0045] "Live corpus" refers to a corpus that is constantly fed as new electronic documents are uploaded into a network.

[0046] "Memory" refers to any hardware component suitable for storing information and retrieving said information at a sufficiently high speed.

[0047] "Analytics Parameters" refers to parameters that describe the operation that an analytic computer module may have to perform in order to get specific results.

[0048] "Link on-the-fly module" refers to a linking computer module that updates data as a live corpus is updated. [0049] "Node" refers to a computer hardware configuration suitable for running one or more modules.

[0050] "Node Cluster" refers to a set of one or more nodes.

[0051] "Query" refers to an electronic request to retrieve information from one or more suitable databases.

[0052] "Record" refers to one or more pieces of information that may be handled as a unit.

[0053] "Collection" refers to a discrete set of records.

[0054] "Partition" refers to an arbitrarily delimited portion of records of a collection.

[0055] "Prefix" refers to a string of a given length that includes the longest string of key characters shared by all subtrees of the node and a data record field for storing a reference to a data record.

[0056] "Database" refers to any computer system including any combination of node clusters and computer modules suitable for storing one or more collections and suitable to process one or more queries.

[0057] "Analytics Agent" or "Analytics Module" refers to a computer or computer module configured to at least receive one or more records, process said one or more records, and return the resulting one or more processed records.

[0058] "Search Manager" or " SM" refers to a computer or computer module configured to at least receive one or more queries and return one or more search results. [0059] "Search Conductor" or "SC" refers to a computer or computer module configured to at least run one or more search queries on a partition and return the search results to one or more search managers.

[0060] "Sentiment" refers to subjective assessments associated with a document, part of a document, or feature.

[0061] "Topic" refers to a set of thematic information which is at least partially derived from a corpus.

[0062] "Feature attribute" refers to metadata associated with a feature; for example, location of a feature in a document, confidence score, among others.

DETAILED DESCRIPTION

[0063] Reference will now be made in detail to the preferred embodiments, examples of which are illustrated in the accompanying drawings. The embodiments described herein are intended to be exemplary. One skilled in the art recognizes that numerous alternative components and embodiments may be substituted for the particular examples described herein and still fall within the scope of the invention. Other embodiments may be used and/or other changes may be made without departing from the spirit or scope of the present disclosure. The illustrative embodiments described in the detailed description are not meant to be limiting of the subject matter presented here.

[0064] It will nevertheless be understood that no limitation of the scope of the invention is thereby intended. Alterations and further modifications of the inventive features illustrated here, and additional applications of the principles of the inventions as illustrated here, which would occur to one skilled in the relevant art and having possession of this disclosure, are to be considered within the scope of the invention.

[0065] The present disclosure describes a system and method for detecting, extracting and validating events from a plurality of sources. Sources may include news sources, social media websites and/or any sources that may include data pertaining to events.

[0066] Various embodiments of the systems and methods disclosed here collect data from different sources in order to identify independent events.

[0067] FIG. 1 depicts an embodiment of a system 100 for extracting facts from an electronic document. Embodiments of the disclosed system may be implemented in various operating environments that include personal computers, server computers, handheld or laptop devices, multiprocessor systems, microprocessor-based systems, programmable consumer electronics, network PCs, minicomputers, mainframe computers, and distributed computing environments.

[0068] The document corpus computer module 102 may provide an input of an electronic document containing unstructured text such as, for example, a news feed article, a file from a digital library, a blog, a forum, a digital book and/or any file containing natural language text.

[0069] The process may involve crawling through a document file received from the corpus 102. An electronic document may include information in unstructured text format which may be crawled using natural language processing (NLP) techniques. Some NLP techniques include, for example, removing stop words, tokenization, stemming and part-of-speech tagging, among others known in the art.

[0070] An individual file may first go through an entity extraction computer module 104 where entities (e.g., a person, location, or organization name) are identified and extracted. Entity extraction module 104 may also include disambiguation methods which may differentiate ambiguous entities. Disambiguation of entities may be performed in order to attribute a fact to an appropriate entity. A method for entity disambiguation may include, for example, comparing extracted entities and co-occurrences with other entities or features against a knowledge base of co-occurring features in order to identify specific entities the document may be referring to. Other methods for entity disambiguation may also be used and are included within the scope of this disclosure. In an embodiment, entity extraction computer module 104 may be implemented as a hardware and/or software module in a single computer or in a distributed computer architecture.
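
As a purely illustrative sketch of the co-occurrence comparison described above (the data layout and overlap scoring are assumptions for the example, not the disclosed method), such a lookup might resemble the following Python:

    def disambiguate_entity(mention, context_features, knowledge_base):
        """Pick the knowledge-base entry whose known co-occurring features
        overlap most with the features observed in the current document.

        knowledge_base: dict mapping candidate entity names to the set of
            features known to co-occur with them, e.g.
            {"Paris, France": {"Eiffel Tower", "Seine"},
             "Paris, Texas":  {"Lamar County", "Texas"}}
        """
        candidates = {name: feats for name, feats in knowledge_base.items()
                      if mention.lower() in name.lower()}
        if not candidates:
            return None
        # Score each candidate by overlap with the document's features.
        scored = {name: len(feats & set(context_features))
                  for name, feats in candidates.items()}
        return max(scored, key=scored.get)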

[0071] The file may then go through a topic extractor computer module 108. Topic extractor module 108 may extract the theme or topic of a single document file. In most cases a file may include a single topic; however, a plurality of topics may also exist in a single document. Topic extraction techniques may include, for example, comparing keywords against models built with a multi-component extension of latent Dirichlet allocation (MC-LDA), among other techniques for topic identification. A topic may then be appended to a fact in order to provide more accurate information.

[0072] System 100 may include a fact extractor computer module 112. Fact extractor module 112 may be a hardware and/or software computer module executing programmatic logic that may extract facts by crawling through the document. Fact extractor module 112 may compare text structures against fact models stored in a fact template store 114 in order to extract and determine the probability of an extracted fact and the associated fact type.

[0073] In the illustrated embodiment, once all features are extracted, a fact relatedness estimator computer module 116 may correlate all features in order to determine a fact's relation to other features and assign a confidence score that may serve as an indication that an extracted fact is accurate. Fact relatedness estimator module 116 may calculate a confidence score based on a text distance between the part of text from where a fact was extracted and the part from where a topic or entity was extracted. For example, consider the fact "President said the bill will pass" extracted from a document where the identified topic was "immigration". Fact relatedness estimator module 116 may measure the distance between the fact sentence "President said the bill will pass" and the sentence from where the topic "immigration" was extracted. The shorter the distance in text, the greater the likelihood that the fact is indeed related to immigration. The fact relatedness estimator module 116 may also calculate the confidence score by comparing co-occurring entities in the same document file. For example, in the same example used before, the entity "president" may be mentioned at different parts in the document. A co-occurrence of an entity mentioned in a fact with the same entity in a different part of the document may increase a confidence score associated with the fact. The distances between co-occurring entities in relation to facts may also be used in determining confidence scores. Distances in text may be calculated using methods such as tokenization or any other NLP methods.
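
For illustration, a minimal sketch of a distance-based confidence calculation in Python follows; the tokenization, span representation, and scaling constant are assumptions made for the example rather than details taken from the disclosure.

    def token_distance_confidence(fact_span, topic_span, scale=50.0):
        """Convert the token distance between a fact span and a topic (or
        entity) span into a confidence score in (0, 1]; a shorter distance
        in text yields a higher score.

        fact_span, topic_span: (start, end) token indices in the document,
            e.g. obtained after tokenizing with text.split().
        """
        if fact_span[1] < topic_span[0]:
            gap = topic_span[0] - fact_span[1]
        elif topic_span[1] < fact_span[0]:
            gap = fact_span[0] - topic_span[1]
        else:
            gap = 0  # overlapping spans
        return 1.0 / (1.0 + gap / scale)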

[0074] Whenever the confidence score for an extracted fact exceeds a predetermined threshold, such fact may be stored in a verified fact store 118. Verified fact store 118 may be a computer database used by various applications in order to query for different facts associated with the purpose of a given application.

[0075] Those skilled in the art will realize that FIG. 1 illustrates an exemplary embodiment and is in no way limiting the scope of the invention. Additional modules for extracting different features not illustrated in FIG. 1 may also be included and are to be considered within the scope of the invention. As those of skill in the art will realize, all hardware and software modules described in the present disclosure may be implemented in a single special purpose computer or in a distributed computer architecture across a plurality of special purpose computers.

[0076] FIG. 2 is an embodiment of a training computer system 200 for building a fact template store 214. A plurality of documents 202 may be tagged, for example by a computer process, in order to identify keywords pertaining to specific facts and assign weights to those keywords. For example, an embodiment of a fact template model 206 may be "The President said the bill will pass." The tagging process of the system 200 can identify, tag and record the sentence structure of the fact. In the example, to build a model the tagging process may identify the keyword "said" preceded by an entity (e.g., the "President") and followed by some string (e.g., "the bill will pass") which may represent the value of the fact. The model may then be stored in fact template store 214 along with metadata such as, for example, a count of how many times that sentence structure is repeated across different documents, a fact type classification, and a confidence score that serves as an indication of how strongly the sentence structure may resemble a fact. Fact template models 206 may be used in subsequent text comparisons in order to extract facts from document files.
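
The "President said" example above might be captured by a template such as the following illustrative Python sketch; the regular expression, weight value, and field names are assumptions for the example, not the stored template format described in the disclosure.

    import re

    # Hypothetical template for the pattern <ENTITY> "said" <VALUE>.
    FACT_TEMPLATE = {
        "keyword": "said",
        "keyword_weight": 0.8,   # assumed weight for the keyword
        "fact_type": "statement",
        "pattern": re.compile(r"(?P<entity>[A-Z][\w ]+?)\s+said\s+(?P<value>.+)"),
    }

    def match_fact(sentence, template=FACT_TEMPLATE):
        """Return an extracted fact if the sentence fits the template."""
        m = template["pattern"].search(sentence)
        if m is None:
            return None
        return {"entity": m.group("entity").strip(),
                "value": m.group("value").rstrip("."),
                "fact_type": template["fact_type"],
                "score": template["keyword_weight"]}

    # match_fact("The President said the bill will pass.")
    # -> {'entity': 'The President', 'value': 'the bill will pass', ...}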

[0077] FIG. 3 is an embodiment of a method for building a fact template store of FIG. 2. In step 300, the computer system 200 (FIG. 2) tags electronic documents in a corpus of documents to identify keywords pertaining to facts. In step 302, the system 200 assigns weights to tagged keywords. In step 304, the system 200 selects a fact template model having the identified keywords (from other electronic documents in the corpus) and stores the fact template in the fact template store database along with the metadata, as discussed above in connection with FIG. 2. Finally, in step 306, the fact template model is used in text comparisons in the process of fact extraction, as discussed in FIG. 1 above.

[0078] FIG. 4 is a flow chart describing a search method 400 for discovering and exploring feature knowledge, according to an embodiment.

[0079] The process may start when a user generates a search query, step 402. One or more user workstations (e.g., a personal computer, smartphone, tablet computer, mobile device, or the like), which display a user interface to the user, may generate and transmit one or more search queries. The user interface may receive, from a user workstation, a selection of one or more of a set of analytic methods that may be applied to the results of the search query. The user workstations may also allow for the selection of thresholds of acceptance at different levels (e.g., of search query results). In an alternative embodiment, these queries and thresholds can be generated automatically, may be transmitted from a computing device, or may be predetermined.

[0080] Then, the query may be received, in step 404, by one or more search manager computer modules (SM) embodied on a computer readable medium and executed by a processor. In this step, the one or more queries generated by the interaction of one or more users with one or more user interfaces may be received by one or more search manager computer modules. In one or more embodiments, the queries may be represented in a markup language, including XML and HTML. In one or more other embodiments, the queries may be represented in a data structure, including embodiments where the queries are represented in JSON. In some embodiments, a query may be represented in compact or binary format.
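
A search query of the kind described here could, for instance, be serialized as JSON; the field names in the short Python sketch below are invented for illustration and are not defined by the disclosure.

    import json

    # Illustrative query carrying the user's analytic selections.
    query = {
        "terms": {"entity": "Paris", "topic": "tourism"},
        "collection": "news_articles",
        "acceptance_threshold": 0.75,           # user-selected threshold
        "analytics_module": "link_on_the_fly",  # user-selected analytic method
        "precision_level": "high",
    }
    payload = json.dumps(query)  # one possible wire format sent to a search manager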

[0081] Afterwards, the received queries may be parsed by the one or more SM computer modules, in step 406. This process may allow the system to determine if field processing is desired, in step 408. In one or more embodiments, the system may be capable of determining if the process is required using information included in the query. In one or more other embodiments, the one or more search manager computer modules may automatically or dynamically determine which one or more fields may undergo a desired processing.

[0082] If the system determines that field processing for the one or more fields is desired, the one or more SM computer modules may apply one or more suitable processing techniques to the one or more desired fields, during the search manager processes fields step 410. In one or more embodiments, suitable processing techniques may include address standardization, proximity boundaries, and nickname interpretation, among others. In some embodiments, suitable processing techniques may include the extraction of prefixes from strings and the generation of non-literal keys that may later be used to apply fuzzy matching techniques.
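
A toy example of prefix extraction and non-literal key generation is sketched below in Python; real systems might use established phonetic algorithms (e.g., Soundex), and this simplified key is only an assumption for illustration.

    def extract_prefix(value, length=4):
        """Fixed-length prefix, e.g. for prefix-tree lookups."""
        return value[:length].lower()

    def non_literal_key(name):
        """Tiny stand-in for a fuzzy/phonetic key generator: normalize case,
        drop non-letters, drop interior vowels, and collapse repeated letters."""
        letters = [ch for ch in name.lower() if ch.isalpha()]
        key, prev = [], ""
        for ch in letters:
            if ch in "aeiou" and key:
                prev = ch
                continue
            if ch != prev:
                key.append(ch)
            prev = ch
        return "".join(key)

    # non_literal_key("Jonathon") == non_literal_key("Jonathan")  -> True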

[0083] Then, when the one or more SM computer modules construct the search query, in step 412, they may construct additional search queries associated with the current search query. In one or more embodiments, the search queries may be constructed so as to be processed as a stack-based search.

[0084] Subsequently, one or more SM computer modules may send the search query to one or more search conductor computer modules (SC), in step 414, where said one or more SC computer modules may be associated with collections specified in the one or more search queries.

[0085] The one or more search conductors may score records against the one or more queries, where the search conductors may score the match of one or more fields of the records and may then determine a score for the overall match of the records with the one or more queries. The system may determine whether the score is above a predefined acceptance threshold, where the threshold may be defined in the search query or may be a default value. In one or more embodiments, the default score thresholds may vary according to the one or more record fields being scored. If the SC computer module determines that the scores are above the desired threshold, the records may be added to a results list. The SC computer module may continue to score records until it determines that a record is the last in the partition. If the SC computer module determines that the last record in a partition has been processed, the SC computer module may then sort the resulting results list. The SC computer module may then return the results list to an SM computer module.
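
As a simplified Python sketch of this scoring loop (the per-field match rule and the default threshold are assumptions, not the disclosed algorithm):

    def score_record(record, query_fields):
        """Fraction of query fields whose values match the record; a
        stand-in for per-field scoring plus an overall match score."""
        if not query_fields:
            return 0.0
        matches = sum(1 for field, value in query_fields.items()
                      if str(record.get(field, "")).lower() == str(value).lower())
        return matches / len(query_fields)

    def conduct_search(partition, query_fields, threshold=0.5):
        """Score every record in a partition, keep those above the
        acceptance threshold, and return a sorted results list."""
        results = [(score_record(r, query_fields), r) for r in partition]
        results = [(s, r) for s, r in results if s > threshold]
        results.sort(key=lambda pair: pair[0], reverse=True)
        return results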

[0086] When the SM computer module receives and collates search results from SC computer modules, step 416, the one or more search conductors return the one or more search results to the one or more search managers, where, in one or more embodiments, said one or more search results may be returned asynchronously. The one or more SM computer modules may then compile results from the one or more SC computer modules into one or more results lists.

[0087] The one or more SM computer modules may automatically determine which one or more fields may undergo one or more desired analytic processes. Then, the one or more SM computer modules may send the search results to analytic computer modules, in step 418. The one or more results lists compiled by one or more SM computer modules may be sent to one or more analytics agent computers, where each analytics agent computer may include one or more analytics computer modules configured to execute a corresponding one of the one or more suitable processing techniques.

[0088] In one or more embodiments, suitable techniques may include rolling up several records into a more complete record, performing one or more analytics on the results, and determining information about neighboring records, amongst others. In some embodiments, analytics agent computers may execute disambiguation computer modules, link computer modules, link on-the-fly computer modules, or other suitable computer modules and corresponding algorithms. The system may allow a user workstation to customize the analytics modules according to particular inputs.

[0089] After processing, according to some embodiments, the one or more analytics agents may return one or more processed results lists, step 420, to the one or more SM computer modules.

[0090] An SM computer module may return search results to the user device's user interface, step 422. In some embodiments, the one or more SM computer modules may decompress the one or more results lists and return them to the user interface that initiated the query. According to some embodiments, the search results may be temporarily stored in a knowledge base database and returned to the user interface of the user computing device (e.g., workstation). The knowledge base may be used to temporarily store clusters of relevant disambiguated features. When new documents are loaded into an in-memory database (MEMDB), the new disambiguated set of features may be compared with the existing knowledge base in order to determine the relationship between features and automatically determine if there is a match between the new features and previously extracted features. If the features comparison results in a match, the knowledge base may be automatically updated and the identification (ID) of the matching features may be returned. If the features compared do not match with any of the already extracted features, a unique ID is assigned to the disambiguated entity or feature, and the ID is associated with the cluster of defining features and stored within the knowledge base of the MEMDB.
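
For illustration only, a minimal Python sketch of that match-or-assign step follows; the overlap rule used to decide a match is an assumption for the example.

    import uuid

    def update_knowledge_base(knowledge_base, new_cluster, min_overlap=2):
        """Compare a newly disambiguated cluster of features with the
        knowledge base: return the ID of a matching cluster (updating it),
        or assign and store a new unique ID when nothing matches.

        knowledge_base: dict of feature ID -> set of defining features.
        new_cluster: set of features extracted from a new document.
        """
        for feature_id, cluster in knowledge_base.items():
            if len(cluster & new_cluster) >= min_overlap:  # match found
                knowledge_base[feature_id] = cluster | new_cluster
                return feature_id
        new_id = str(uuid.uuid4())                          # no match
        knowledge_base[new_id] = set(new_cluster)
        return new_id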

[0091] When a user receives search results through a user interface of a user computing device, the user computing device may determine if a query needs further modification, in step 424, to achieve the desired results. If the desired results are achieved, the process may end, in step 426. If the desired results are not achieved, the user computing device may generate a new query by changing the type of analytics desired (e.g., by selecting a different analytics computer module executing a different analytics algorithm) or the level of precision, and the user computing device may adjust how knowledge is linked to find stronger or looser relationships. In some embodiments, a new search may be generated and combined with a current search.

[0092] Link On-the-Fly (OTF) Processing

[0093] FIG. 5 is a flowchart of a process 500 executed by a link OTF computer sub-module, which may be employed for disambiguating features in the search method 400 (FIG. 4), according to an embodiment. The link OTF sub-module may be capable of constantly evaluating, scoring, linking, and clustering a feed of information. The link OTF sub-module may perform dynamic records linkage using multiple algorithms. In step 502, search results may be constantly fed into the link OTF computer sub-module. The input of data may be followed by a match scoring algorithm application, step 504, where one or more match scoring algorithms may be applied simultaneously in multiple search nodes of the MEMDB while performing fuzzy key searches for evaluating and scoring the relevant results, taking into account multiple feature attributes, such as string edit distances, phonetics, and sentiments, among others.
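
One of the feature attributes named above, string edit distance, can be turned into a match score as in the following illustrative Python sketch (the normalization into a 0-to-1 score is an assumption for the example, not the disclosed scoring algorithm).

    def edit_distance(a, b):
        """Levenshtein distance between two strings."""
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            cur = [i]
            for j, cb in enumerate(b, 1):
                cur.append(min(prev[j] + 1,                 # deletion
                               cur[j - 1] + 1,              # insertion
                               prev[j - 1] + (ca != cb)))   # substitution
            prev = cur
        return prev[-1]

    def match_score(a, b):
        """Normalize edit distance into a similarity score between 0 and 1."""
        longest = max(len(a), len(b)) or 1
        return 1.0 - edit_distance(a, b) / longest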

[0094] Afterwards, a linking algorithm application step 506 may compare all candidate records, identified during the match scoring algorithm application step 504, to each other. Linking algorithm application step 506 may include the use of one or more analytical linking algorithms capable of filtering and evaluating the scored results of the fuzzy key searches performed inside the multiple search nodes of the MEMDB. In some examples, co-occurrence of two or more features across the collection of identified candidate records in the MEMDB may be analyzed to improve the accuracy of the process. Different weighted models and confidence scores associated with different feature attributes may be taken into account for linking algorithm application 506.
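A minimal sketch of the linking of step 506, assuming a simple shared-feature rule in place of the weighted models and confidence scores named above; the threshold of two shared features is an arbitrary choice:

```python
# Candidate records that share enough co-occurring features are linked into
# one cluster; otherwise a new cluster is started. This is a toy stand-in for
# the analytical linking algorithms described in the text.
def link_candidates(candidates, min_shared=2):
    clusters = []
    for record in candidates:
        features = set(record["features"])
        for cluster in clusters:
            if len(features & cluster["features"]) >= min_shared:
                cluster["records"].append(record)
                cluster["features"] |= features
                break
        else:
            clusters.append({"records": [record], "features": set(features)})
    return clusters


candidates = [
    {"id": 1, "features": ["bill", "gates", "microsoft"]},
    {"id": 2, "features": ["bill", "gates", "seattle"]},
    {"id": 3, "features": ["melinda", "gates", "foundation"]},
]
clusters = link_candidates(candidates)   # records 1 and 2 link; record 3 stands alone
```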

[0095] After linking algorithm application of step 506, the linked results may be arranged in clusters of related features and returned to the user interface, as part of return of linked records clusters, step 508.

[0096] FIG. 6 is an illustrative diagram of an embodiment of a system 600 for disambiguating features in unstructured text and including the link OTF sub-module 612 discussed above in connection with FIG. 5. The system 600 hosts an in-memory database and comprises one or more nodes.

[0097] According to an embodiment, the system 600 includes one or more processors executing computer instructions for a plurality of special-purpose computer modules 601, 602, 608, 611, 612, and 614 to disambiguate features within one or more documents. As shown in FIG. 6, the document input modules 601, 602 receive documents from Internet-based sources and/or a live corpus of documents. A large number of new documents may be uploaded substantially simultaneously from a user workstation 606 or other computing device into the document input module 602 through a network connection (NC) 604. Therefore, the source may be constantly receiving an input of new knowledge, using updated information provided by user workstations 606, where such new knowledge is not pre-linked in a static way. Thus, the number of documents to be evaluated may increase without bound. The system 600 is therefore able to process large volumes of documents in a more efficient manner to discover and explore feature knowledge.

[0098] An in-memory database (MEMDB) computer 608 may facilitate a faster disambiguation process, such as by executing a disambiguation process on-the-fly, which may facilitate reception of the latest information that is going to contribute to the MEMDB 608. Various methods for linking the features may be employed, which may use a weighted model to determine which entity types are most important and carry more weight, and which may use confidence scores to determine how confidently the extraction and disambiguation of the correct features was performed and whether the correct feature should go into the resulting cluster of features. As shown in FIG. 6, as more system nodes work in parallel, the process may become more efficient.

[0099] According to the exemplary embodiment, when a new document arrives into the system 600 via the document input module 601, 602 through a network connection 604, an extraction module 611 performs feature extraction and, then, a feature disambiguation sub-module 614 may perform feature disambiguation on the new document. Extraction module 611 and feature disambiguation module 614 are components of system 600. In the exemplary embodiment, extraction module 611 and disambiguation module 614 are separate modules of the system 600, though extraction module 611 and disambiguation module 614 can be configured as a single module, hosted on a single computer, or each can be configured as a separate computer. In one configuration, extraction module 611 and disambiguation module 614 may each be executed by the MEMDB 608.

[0100] In one embodiment, after feature disambiguation of the new document is performed by the disambiguation module 614, the extracted new features 610 may be included in the MEMDB 608 to pass through the link OTF sub-module 612, where the features may be compared and linked, and a feature ID of disambiguated feature 610 may be returned to the user workstation 606 as a result from a query. In addition to the feature ID, the resulting feature cluster defining the disambiguated feature may optionally be returned to the user workstation 606.

[0101] MEMDB computer 608 can be a database storing data in records controlled by a database management system (DBMS) (not shown) configured to store data records in a device's main memory, as opposed to conventional databases and DBMS modules that store data in "disk" memory. Conventional disk storage requires processors (CPUs) to execute read and write commands to a device's hard disk, thus requiring CPUs to execute instructions to locate (i.e., seek) and retrieve the memory location for the data, before performing some type of operation with the data at that memory location. In-memory database systems access data that is placed into main memory, and then addressed accordingly, thereby mitigating the number of instructions performed by the CPUs and eliminating the seek time associated with CPUs seeking data on hard disk.
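For illustration only, the main-memory versus on-disk distinction of paragraph [0101] can be seen with SQLite's standard in-memory mode from the Python standard library; this is not the MEMDB 608 of the disclosure:

```python
# Not the MEMDB described above; only an illustration of the main-memory vs.
# on-disk distinction, using SQLite's in-memory mode.
import sqlite3

# ":memory:" keeps every record in the process's main memory, so reads and
# writes never incur disk seek time; a filename such as "records.db" would
# instead persist records to disk.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE features (id TEXT PRIMARY KEY, cluster TEXT)")
conn.execute("INSERT INTO features VALUES (?, ?)", ("f-001", "bill|gates|microsoft"))
row = conn.execute("SELECT cluster FROM features WHERE id = ?", ("f-001",)).fetchone()
conn.close()
```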

[0102] In-memory databases may be implemented in a distributed computing architecture, which may be a computing system comprising one or more nodes configured to aggregate the nodes' respective resources (e.g., memory, disks, processors). As disclosed herein, embodiments of a computing system hosting an in-memory database may distribute and store data records of the database among one or more nodes. In some embodiments, these nodes are formed into "clusters" of nodes. In some embodiments, these clusters of nodes store portions, or "collections," of database information.

[0103] Various embodiments of the system of FIG. 6 provide a computer system executing a feature disambiguation technique that employs an evolving and efficiently linkable feature knowledge base that is configured to store secondary features, such as co-occurring topics, key phrases, proximity terms, events, facts, and a trending popularity index. The disclosed embodiments may be performed via various linking algorithms that can vary from simple conceptual distance measures to sophisticated graph clustering approaches based on the dimensions of the involved secondary features that aid in resolving a given extracted feature to a stored feature in the knowledge base.

[0104] FIG. 7 is a central server computer system 700 for building a knowledge base 720 of co-occurring features from a document corpus 702 and including a plurality of special-purpose computer modules having one or more processors executing programmatic logic described below. Additional embodiments of the system may be implemented in various operating environments that include personal computers, server computers, handheld or laptop devices, multiprocessor systems, microprocessor-based systems, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments, and so on.

[0105] Document corpus 702 may be any collection of documents such as, for example, a database of digital documents from a company or the World Wide Web.

[0106] The process may involve crawling through each document in document corpus 702. A document may include information in unstructured text format, which may be crawled using natural language processing (NLP) techniques. Some NLP techniques include, for example, removing stop words, tokenization, stemming, and part-of-speech tagging, among others known in the art.
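A pure-Python sketch of the NLP preprocessing named in paragraph [0106] (stop-word removal, tokenization, and a crude stemmer); a production system would use a proper NLP library, and the stop-word list here is deliberately tiny:

```python
# Tokenize unstructured text, drop stop words, and apply a rough
# suffix-stripping stemmer. The stemmer is a toy stand-in for a real
# algorithm such as Porter's.
import re

STOP_WORDS = {"the", "a", "an", "of", "and", "or", "in", "to", "is", "for"}


def tokenize(text):
    return re.findall(r"[a-z0-9]+", text.lower())


def stem(token):
    for suffix in ("ing", "ed", "es", "s"):
        if token.endswith(suffix) and len(token) - len(suffix) >= 3:
            return token[: -len(suffix)]
    return token


def preprocess(text):
    return [stem(t) for t in tokenize(text) if t not in STOP_WORDS]


preprocess("The aggregators counted co-occurring features in the documents")
# -> ['aggregator', 'count', 'co', 'occurr', 'featur', 'document']
```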

[0107] An individual file may first go through an entity extraction module 704 where entities (e.g., a person, location, or organization name) are identified and extracted. Entity extraction module 704 may be a software module with programmatic logic that may extract entities by crawling through the document. The extracted entities may then be passed to a knowledge base aggregator 706.

[0108] The file may then go through a topic extractor module 708, which may be executed by an in-memory database computer. The in-memory database computer can be a database storing data in records controlled by a database management system (DBMS) (not shown) configured to store data records in a device's main memory, as opposed to conventional databases and DBMS modules that store data in "disk" memory. Conventional disk storage requires processors (CPUs) to execute read and write commands to a device's hard disk, thus requiring CPUs to execute instructions to locate (i.e., seek) and retrieve the memory location for the data, before performing some type of operation with the data at that memory location. In-memory database systems access data that is placed into main memory, and then addressed accordingly, thereby mitigating the number of instructions performed by the CPUs and eliminating the seek time associated with CPUs seeking data on hard disk.
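To make the crawl-and-collect behavior of entity extraction module 704 (paragraph [0107]) concrete, the following deliberately naive sketch treats runs of capitalized words as candidate entities; real extractors would use trained named-entity recognition models, and the filtering rule is an assumption:

```python
# A deliberately naive entity extraction pass: runs of capitalized words are
# treated as candidate person/organization/location names, and single-word
# candidates are dropped to reduce obvious noise.
import re

CAPITALIZED_RUN = re.compile(r"\b(?:[A-Z][a-z]+)(?:\s+[A-Z][a-z]+)*\b")


def extract_entities(text):
    candidates = CAPITALIZED_RUN.findall(text)
    return [c for c in candidates if " " in c]


extract_entities("Bill Gates founded Microsoft Corporation in Albuquerque.")
# -> ['Bill Gates', 'Microsoft Corporation']
```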

[0109] In-memory databases may be implemented in a distributed computing architecture, which may be a computing system comprising one or more nodes configured to aggregate the nodes' respective resources (e.g., memory, disks, processors). As disclosed herein, embodiments of a computing system hosting an in-memory database may distribute and store data records of the database among one or more nodes. In some embodiments, these nodes are formed into "clusters" of nodes. In some embodiments, these clusters of nodes store portions, or "collections," of database information.

[0110] Various embodiments provide a computer-executed feature disambiguation technique that employs an evolving and efficiently linkable feature knowledge base that is configured to store secondary features, such as co-occurring topics, key phrases, proximity terms, events, facts, and a trending popularity index. The disclosed embodiments may be performed via a wide variety of linking algorithms that can vary from simple conceptual distance measures to sophisticated graph clustering approaches based on the dimensions of the involved secondary features that aid in resolving a given extracted feature to a stored feature in the knowledge base. Additionally, embodiments can introduce an approach that evolves the existing feature knowledge base through a capability that not only updates the secondary features of an existing feature entry, but also expands the knowledge base by discovering new features that can be appended to it.

[0111] Topic extractor module 708 may extract the theme or topic of a single document file. In most cases a file may include a single topic; however, a plurality of topics may also exist in a single document. Topic extraction techniques may include, for example, comparing keywords against latent Dirichlet allocation (LDA) models or any other techniques for topic identification. The extracted topic may also be passed to knowledge base aggregator 706 for further processing.
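The LDA option named in paragraph [0111] can be illustrated with scikit-learn's standard implementation; the corpus, the choice of two topics, and the five top words per topic below are arbitrary assumptions, and this is not the patent's topic extractor module 708:

```python
# Fit a small LDA model and list the top keywords characterizing each topic.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

documents = [
    "Microsoft released a new operating system for personal computers.",
    "The earthquake damaged bridges and triggered a response from authorities.",
    "Windows software updates were shipped to millions of computers.",
]

vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(documents)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(counts)          # per-document topic mixture

terms = vectorizer.get_feature_names_out()
for topic_idx, weights in enumerate(lda.components_):
    top_terms = [terms[i] for i in weights.argsort()[::-1][:5]]
    print(f"topic {topic_idx}: {top_terms}")    # keywords characterizing each topic
```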

[0112] System 700 may also include an event detection module 710 for extracting events from a document file. Different types of events may include an accident (e.g., a car accident, a train accident, etc.), a natural disaster (e.g., an earthquake, a flood, a weather event, etc.), a man-made disaster (e.g., a bridge collapse, a discharge of a hazardous material, an explosion, etc.), a security event (e.g., a terrorist attack, an act of war, etc.), and/or any other event that may trigger a response from authorities and/or first responders and/or may trigger a notification to a large quantity (e.g., greater than some threshold) of user devices (e.g., acts associated with a major sporting event or concert, election day coordination, traffic management due to road construction, etc.). Event detection module 710 may be a software module with programmatic logic that may detect events by extracting keywords from a file and comparing them against event template models stored in an event concept store database. The extracted events may also be passed to knowledge base aggregator 706 for further processing.

[0113] A fact extractor module 712 may also be implemented. Fact extractor module 712 may be a software module with programmatic logic that may extract facts by crawling through the document. Fact extractor module 712 may extract facts by comparing factual text descriptions in a document against a fact-word table. Other methods for identifying facts in documents may also be implemented. Identified facts may be passed to knowledge base aggregator 706 for further processing.
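A toy sketch of the keyword-versus-template matching attributed to event detection module 710 in paragraph [0112]; the template structure, keyword sets, and two-hit rule are illustrative assumptions rather than the contents of the event concept store database:

```python
# Compare a document's extracted keywords against assumed event templates and
# report the event types that match.
EVENT_TEMPLATES = {
    "natural_disaster": {"earthquake", "flood", "storm", "magnitude", "evacuation"},
    "accident":         {"collision", "crash", "derailment", "injured", "highway"},
    "security_event":   {"attack", "explosion", "hostage", "police", "threat"},
}


def detect_events(document_keywords, min_hits=2):
    """Return event types whose template shares at least min_hits keywords."""
    keywords = {k.lower() for k in document_keywords}
    return [
        event_type
        for event_type, template in EVENT_TEMPLATES.items()
        if len(keywords & template) >= min_hits
    ]


detect_events(["Earthquake", "magnitude", "bridge", "evacuation"])
# -> ['natural_disaster']
```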

[0114] All extracted features from a single document may then be processed by the knowledge base aggregator 706. Knowledge base aggregator 706 may include a co-occurrence module 714 and a co-occurrence store aggregator 716. Co-occurrence module 714 may be a software module with programmatic logic that may aggregate co-occurrences of features across a plurality of documents and record the count of co-occurrences in co-occurrence store aggregator 716. Whenever the co-occurrence of features across documents in a document corpus 702 exceeds a determined threshold, the co-occurring features may be added to knowledge base 720 along with any metadata pertaining to the co-occurring features. Metadata that may be added to knowledge base 720 may include, for example, the type of the features, the document from which the co-occurrence was extracted, the document corpus, the distance in text between co-occurring features, and a confidence score that may serve as an indication that the co-occurring features resemble unique individual features. A confidence score may be calculated using parameters such as, for example, the number of co-occurrences in a single file, the number of co-occurrences in a document corpus, the size of the document corpus, the number of co-occurrences in different document corpora, the distance in text between co-occurring features, human verification, and/or any combination thereof.
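The co-occurrence counting and thresholding of paragraph [0114] can be sketched as follows; the class name, the unordered-pair representation, and the threshold of two are illustrative assumptions:

```python
# Count every unordered pair of features that co-occur within a document,
# aggregate the counts across the corpus, and promote pairs whose count meets
# a threshold, mirroring the described roles of modules 714 and 716.
from collections import Counter
from itertools import combinations


class CoOccurrenceAggregator:
    def __init__(self, threshold=2):
        self.counts = Counter()      # plays the role of the co-occurrence store
        self.threshold = threshold

    def add_document(self, features):
        for pair in combinations(sorted(set(features)), 2):
            self.counts[pair] += 1

    def promote_to_knowledge_base(self):
        # Only pairs whose corpus-wide count meets the threshold are promoted.
        return {pair: n for pair, n in self.counts.items() if n >= self.threshold}
```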

[0115] It may be understood that FIG. 7 illustrates an exemplary embodiment and in no way limits the scope of the invention. Additional modules for extracting different features not illustrated in FIG. 7 may also be included and are to be considered within the scope of the invention. All software modules may be implemented in a single computer or in a distributed computer architecture across a plurality of computers, whereby the one or more modules may be embodied on at least one computer readable medium and executed by at least one processor.

[0116] FIG. 8 is an example embodiment of a co-occurrence aggregation method 800 using the system described in FIG. 7. In the illustrated embodiment, document corpus 802 may include three different document files. Features extracted from first document 804 may include "Bill", "Gates", "Microsoft", and "Billionaire". Features extracted from second document 806 may include "Bill", "Gates", "President", and "Microsoft". Features extracted from third document 808 may include "Melinda" and "Gates". Co-occurrence module 814 may then crawl each document in document corpus 802, store all possible co-occurring feature combinations for a single document, and aggregate them with the same feature co-occurrences from the other documents of the same corpus. The aggregation may be performed and stored in co-occurrence store aggregator 816. For example, in FIG. 8, the entities "Bill" and "Gates" co-occur twice, once in first document 804 and once in second document 806, while the entities "Melinda" and "Gates" co-occur once in third document 808.
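The FIG. 8 walk-through can be reproduced as a self-contained count under the same assumptions as the sketch above (unordered pairs over per-document feature sets):

```python
# Recompute the co-occurrence counts for the three example documents of FIG. 8.
from collections import Counter
from itertools import combinations

corpus_802 = [
    ["Bill", "Gates", "Microsoft", "Billionaire"],   # first document 804
    ["Bill", "Gates", "President", "Microsoft"],     # second document 806
    ["Melinda", "Gates"],                            # third document 808
]

counts = Counter()
for features in corpus_802:
    counts.update(combinations(sorted(set(features)), 2))

counts[("Bill", "Gates")]      # 2 (documents 804 and 806)
counts[("Gates", "Melinda")]   # 1 (document 808)
```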

[0117] In some instances, the exemplary search method for discovering and exploring feature knowledge is applied. In this example, a user initiates a search with the name of a feature, and the results return six different disambiguated features with the same name. The user decides to narrow the search and indicates in the user interface that a higher threshold or different features for the disambiguation should be used; all the data is processed again in one or more analytics agents, and the new set of results returns only two different disambiguated features with the same name.

[0118] In some instances, a number of different interfaces serving different purposes for different groups of people are fed from the same MEMDB. Each interface may be developed to facilitate the manipulation of the analytical parameters relevant to each application.

[0119] In some instances, the exemplary search method for discovering and exploring feature knowledge is applied to images. In this example, image processing techniques are utilized to extract features from the documents, and suitable analytics modules are used to process the search results.

[0120] While various aspects and embodiments have been disclosed, other aspects and embodiments are contemplated. The various aspects and embodiments disclosed are for purposes of illustration and are not intended to be limiting, with the true scope and spirit being indicated by the following claims.

[0121] The foregoing method descriptions and the process flow diagrams are provided merely as illustrative examples and are not intended to require or imply that the steps of the various embodiments must be performed in the order presented. As will be appreciated by one of skill in the art, the steps in the foregoing embodiments may be performed in any order. Words such as "then," "next," etc. are not intended to limit the order of the steps; these words are simply used to guide the reader through the description of the methods. Although process flow diagrams may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc. When a process corresponds to a function, its termination may correspond to a return of the function to the calling function or the main function.

[0122] The various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.

[0123] Embodiments implemented in computer software may be implemented in software, firmware, middleware, microcode, hardware description languages, or any combination thereof. A code segment or machine-executable instructions may represent a procedure, a function, a subprogram, a program, a routine, a subroutine, a module, a software package, a class, or any combination of instructions, data structures, or program statements. A code segment may be coupled to another code segment or a hardware circuit by passing and/or receiving information, data, arguments, parameters, or memory contents. Information, arguments, parameters, data, etc. may be passed, forwarded, or transmitted via any suitable means including memory sharing, message passing, token passing, network transmission, etc.

[0124] The actual software code or specialized control hardware used to implement these systems and methods is not limiting of the invention. Thus, the operation and behavior of the systems and methods were described without reference to the specific software code, it being understood that software and control hardware can be designed to implement the systems and methods based on the description herein.

[0125] When implemented in software, the functions may be stored as one or more instructions or code on a non-transitory computer-readable or processor-readable storage medium. The steps of a method or algorithm disclosed herein may be embodied in a processor-executable software module which may reside on a computer-readable or processor-readable storage medium. A non-transitory computer-readable or processor-readable medium includes both computer storage media and tangible storage media that facilitate transfer of a computer program from one place to another. A non-transitory processor-readable storage medium may be any available medium that may be accessed by a computer. By way of example, and not limitation, such non-transitory processor-readable media may comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other tangible storage medium that may be used to store desired program code in the form of instructions or data structures and that may be accessed by a computer or processor. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media. Additionally, the operations of a method or algorithm may reside as one or any combination or set of codes and/or instructions on a non-transitory processor-readable medium and/or computer-readable medium, which may be incorporated into a computer program product.

[0126] It is to be appreciated that the various components of the technology can be located at distant portions of a distributed network and/or the Internet, or within a dedicated secure, unsecured and/or encrypted system. Thus, it should be appreciated that the components of the system can be combined into one or more devices or co-located on a particular node of a distributed network, such as a telecommunications network. As will be appreciated from the description, and for reasons of computational efficiency, the components of the system can be arranged at any location within a distributed network without affecting the operation of the system. Moreover, the components could be embedded in a dedicated machine.

[0127] Furthermore, it should be appreciated that the various links connecting the elements can be wired or wireless links, or any combination thereof, or any other known or later developed element(s) that is capable of supplying and/or communicating data to and from the connected elements. The term module as used herein can refer to any known or later developed hardware, software, firmware, or combination thereof that is capable of performing the functionality associated with that element. The terms determine, calculate and compute, and variations thereof, as used herein are used interchangeably and include any type of methodology, process, mathematical operation or technique.

[0128] The preceding description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the following claims and the principles and novel features disclosed herein.

[0129] The embodiments described above are intended to be exemplary. One skilled in the art recognizes that numerous alternative components and embodiments may be substituted for the particular examples described herein and still fall within the scope of the invention.