


Title:
HIGH DENSITY TIME-SERIES DATA INDEXING AND COMPRESSION
Document Type and Number:
WIPO Patent Application WO/2020/243022
Kind Code:
A1
Abstract:
Time-series columnar-based information is received and indexed in a compute infrastructure for cost-effective cloud-based object storage. The approach leverages a file format that enables highly-performant search and retrieval of the data stored in the cloud. In operation, an indexer receives the time-series information, indexes that information according to the file format, and forwards the indexed information for storage to the object store, where it is stored as a set of time-based partitions. A partition comprises a set of files, namely, a manifest file, a data file, and an index file. These files are structured as a compact instance of a set of raw unstructured data that comprises the given partition. Highly-performant information retrieval is enabled in response to a time-bounded query, because operations at a query peer (with respect to one or more partitions) are carried out in real-time during query processing and without requiring retrieval of the data file as a whole.

Inventors:
ALAYLI HASAN (US)
Application Number:
PCT/US2020/034426
Publication Date:
December 03, 2020
Filing Date:
May 23, 2020
Assignee:
HYDROLIX INC (US)
International Classes:
G06F16/22; G06F16/18; G06F16/182; G06F16/188
Foreign References:
US20040199530A12004-10-07
US20100088318A12010-04-08
US7263536B12007-08-28
US9298761B22016-03-29
KR20160128166A2016-11-07
Attorney, Agent or Firm:
JUDSON, David, H. (US)
Claims:
CLAIMS

1. A computing system, comprising:

one or more hardware processors;

computer memory holding computer program code executed on the one or more hardware processors, the program code configured as an indexer service and a merger service;

the indexer service configured (i) to receive from a data source time-series columnar-based information, the information comprising a set of time-based partitions, (ii) to index the information according to a database file format, and (iii) to transfer the indexed information to a cloud-based object store; and

the merger service configured (i) to merge one or more partitions into a larger time-based partition prior to transfer to the cloud-based object store.

2. The computing system as described in claim 1 wherein the database file format comprises a set of files, the set of files comprising a manifest file, a data file, and an index file, wherein the manifest file, the index file and the data file for a given partition together comprise a compact instance of a set of raw unstructured data that comprises the given partition.

3. The computing system as described in claim 2 wherein the manifest file includes a dictionary of data strings seen in a column during indexing of the information together with byte-range data configured to selectively retrieve data from the data and index files.

4. The computing system as described in claim 3 wherein the data file contains a posting-list for each data string in the manifest file.

5. The computing system as described in claim 4 wherein the columnar-based information is stored in the data file in contiguous byte-ranges.

6. The computing system as described in claim 1 wherein the merger service executes on-demand, periodically or continuously.

7. The computing system as described in claim 6 wherein the one or more partitions are merged based on one of: data volume, network location, and available processing resources.

8. The computing system as described in claim 1 wherein the database file format is schema-less.

9. The computing system as described in claim 1 wherein the indexer service is further configured to issue a notification to a catalog service to catalog the indexed information that has been transferred.

10. A method for data indexing, comprising:

receiving, from one or more data sources, time-series columnar-based information, the information comprising a set of time-based partitions;

indexing the time-series columnar-based information into a set of files, the set of files comprising a manifest file, a data file, and an index file, the manifest file including a dictionary of data strings seen in a column during indexing of the information together with byte-range data configured to selectively retrieve data from the data and index files, the data file storing column data seen during the indexing, and the index file containing a posting-list for each data string in the manifest file, wherein the column data is stored in the data file in contiguous byte-ranges.

11. The method as described in claim 10 further including:

transferring the set of files for storage in a cloud-based object store.

12. The method as described in claim 11 further including:

merging one or more partitions into a larger time-based partition prior to transfer to the cloud-based object store.

13. The method as described in claim 12 wherein the one or more partitions are generated on-demand, periodically or continuously.

14. The method as described in claim 10 further including issuing a notification upon transfer of the set of files, wherein the notification is an instruction to identify the set of files in a searchable catalog.

15. The method as described in claim 10 wherein indexing the time-series columnar-based information into a set of files includes:

identifying a coordinating stream peer;

assigning, by the coordinating stream peer, each of a set of stream peers to process a subset of time-based partitions;

at a particular stream peer:

for each given time-based partition, indexing the given time-based partition to generate data;

aggregating data for all of the given time-based partitions assigned; and

returning to the coordinating stream peer a partial result.

16. The method as described in claim 15 further including:

at the coordinating stream peer, aggregating the partial results to generate the set of files.

17. The method as described in claim 10 wherein indexing the time-series columnar-based information includes:

identifying one or more batch peers; and

at a particular batch peer, batch importing time-series columnar-based information from previously-stored data.

18. A computer program product in a non-transitory computer-readable medium, the computer program product comprising program code executed in one or more hardware processors and configured to provide data indexing, the program code comprising code configured to:

receive, from one or more data sources, time-series columnar-based information, the information comprising a set of time-based partitions; and

index the time-series columnar-based information into a set of files, the set of files comprising a manifest file, a data file, and an index file, the manifest file including a dictionary of data strings seen in a column during indexing of the information together with byte-range data configured to selectively retrieve data from the data and index files, the data file storing column data seen during the indexing, and the index file containing a posting-list for each data string in the manifest file, wherein the column data is stored in the data file in contiguous byte-ranges.

19. The computer program product as described in claim 18 wherein the program code is further configured to:

transfer the set of files for storage in a cloud-based object store.

20. The computer program product as described in claim 19 wherein the program code is further configured to:

merge one or more partitions into a larger time-based partition prior to transfer to the cloud-based object store.

Description:
High density time-series data indexing and compression

BACKGROUND OF THE INVENTION

Technical Field

This application relates generally to time series-based data storage and retrieval.

Background of the Related Art

Streaming data is data that is continuously generated by different sources. Data generated from certain data sources, such as devices in the Internet of Things (IoT) or IT services, includes (or can be modified to include) a timestamp. Streamed time-series data of this type is being generated continuously, driving a need for new and efficient information storage and retrieval services. Known techniques for storing and retrieving time-series data include cloud-based object storage services (e.g., Amazon® S3, Google® Cloud, and the like). These services are advantageous, as theoretically they are highly-scalable and reliable. That said, as the volume of time-series data being stored to the cloud increases, information retrieval (e.g., for data analysis, etc.) becomes very difficult. The problem is exacerbated for OLAP (online analytical processing) applications, where reading a high volume of data records (e.g., for aggregation) is a common use case. The problem arises because reading from remote storage is much slower than reading from local storage, thereby requiring a different strategy to store and read the data records. Practically, the slowness derives from the fact that every (theoretically local) disk seek is equivalent to an HTTP request over the network to the remote store, and local disk throughput is significantly higher than the throughput obtained from a remote object store when requesting a single file. As data volumes continue to increase exponentially, efficient and cost-effective information storage and retrieval for this type of data is an intractable problem.

There remains a need for new techniques for information storage, search and retrieval of time-series based data that address these and other problems of the known art.

BRIEF SUMMARY

According to this disclosure, time-series data and, in particular, time-series columnar-based information, is received and indexed in a compute infrastructure for cost-effective cloud-based object storage, yet in a unique database file format that enables highly-performant search and retrieval of the data stored in the cloud. The database file format (referred to herein as an "HDX file") advantageously enables the compute infrastructure (indexing and information retrieval) to be separated from the remote storage, thereby enabling both to scale. Using the HDX file format, the data is stored in a much more cost-effective manner (in the cloud object store), while still enabling that data to be efficiently searched, accessed and retrieved back to the compute infrastructure as if it were present locally.

In one embodiment, the compute infrastructure comprises several components (services) including an indexer (for data ingest and storage), and a search engine (for query and information retrieval). The infrastructure may also include additional components (services) to facilitate or support the information storage, search and retrieval operations.

The compute infrastructure interoperates with a network-accessible remote store, such as a cloud-based object store. Typically, the cloud-based object store is managed by another entity (e.g., a cloud service provider). In operation, the indexer receives the time-series columnar-based information from a data source (as an input), indexes that information according to the database file format, and forwards the indexed information for storage to the cloud-based object store, where it is stored as a set of time-based partitions. Preferably, the information is stored across the cloud-based object store in directories, each of which includes a set of files that comprise the HDX file format.

According to one aspect of this disclosure, the set of files preferably comprises a manifest file, a data file, and an index file. The manifest file includes a dictionary of data strings seen in a column during indexing of the information, together with byte-range data configured to selectively retrieve data from the data and index files. The data file stores column data seen during the indexing, and the index file contains a listing (e.g., a posting-list) for each data string in the manifest file. In this approach, the column data is stored in the data file in contiguous byte-ranges. As data is streamed into the compute infrastructure, it is continuously processed by the indexer and transferred to the cloud-based object store, where it is stored in the set of time-based partitions according to the HDX file format.

The techniques herein provide for efficient storage at the remote object store, in particular because the manifest file, the index file and the data file for a given partition together comprise a compact instance of the set of raw unstructured data that comprises the given partition.

The foregoing has outlined some of the more pertinent features of the disclosed subject matter. These features should be construed to be merely illustrative. Many other beneficial results can be attained by applying the disclosed subject matter in a different manner or by modifying the subject matter as will be described.

BRIEF DESCRIPTION OF THE DRAWINGS

For a more complete understanding of the subject matter herein and the advantages thereof, reference is now made to the following descriptions taken in conjunction with the accompanying drawings, in which:

FIG. 1 depicts a high level architecture of a set of services that comprise a solution for time-series data compression and retrieval according to this disclosure;

FIG. 2 depicts a representative structure of the HDX database file format of this disclosure;

FIG. 3 depicts a representative manifest.hdx file;

FIG. 4 identifies the fields of the manifest.hdx file and their accompanying definitions;

FIG. 5 depicts a representative tags portion of the data.hdx file;

FIG. 6 depicts a representative values portion of the data.hdx file;

FIG. 7 depicts a representative timestamps portion of the data.hdx file;

FIG. 8 depicts a representative index.hdx file;

FIG. 9 depicts a sample data set;

FIG. 10 depicts a manifest.hdx file derived from the sample data set;

FIG. 11 depicts a tags portion of the data.hdx file derived from the sample data set;

FIG. 12 depicts a timestamps portion of the data.hdx file derived from the sample data set;

FIG. 13 depicts a values portion of the data.hdx file derived from the sample data set; and

FIG. 14 depicts an index.hdx file derived from the sample data set.

DETAILED DESCRIPTION

FIG. 1 is an overall system 100 in which the techniques of this disclosure may be carried out. As noted, typically the data stored using the techniques herein is of a particular type, namely, time-series columnar-based information. In this usual case, streamed time-series data of this type is being generated continuously from one or more data sources 102, such as IoT devices, log sources, or the like. The nature and type of these data source(s) is not an aspect of this disclosure. Typically, the data is configured for storage in a network-accessible data store, such as a cloud-based object store 104. There may be multiple such object store(s), and the nature, number and type of these object store(s) is not an aspect of this disclosure either. Representative object stores include Amazon S3, Google Cloud, and many others. Stated another way, the techniques herein assume one or more data source(s) 102 of the time-series data, as well as the existence of one or more data store(s) 104 for that data, but these constructs typically are external to the compute infrastructure itself.

The compute infrastructure (or platform) 106 preferably comprises a set of services (or components), namely an indexer service 108, a search service 110, a merger service 112, and a catalog service 114. One or more of these services may be combined with one another. A service may be implemented using a set of computing resources that are co-located or themselves distributed. Typically, a service is implemented in one or more computing systems. FIG. 1 is a logical diagram, as typically only the indexer service 108 sits between the data sources and the cloud-based object store. The computing platform (or portions thereof) may be implemented in a dedicated environment, in an on-premises manner, as a cloud-based architecture, or some hybrid. A typical implementation of the compute infrastructure is in a cloud-computing environment. As is well-known, cloud computing is a model of service delivery for enabling on-demand network access to a shared pool of configurable computing resources (e.g., networks, network bandwidth, servers, processing, memory, storage, applications, virtual machines, and services) that can be rapidly provisioned and released with minimal management effort or interaction with a provider of the service. Available service models that may be leveraged in whole or in part include: Software as a Service (SaaS) (the provider's applications running on cloud infrastructure); Platform as a Service (PaaS) (the customer deploys applications that may be created using provider tools onto the cloud infrastructure); Infrastructure as a Service (IaaS) (customer provisions its own processing, storage, networks and other computing resources and can deploy and run operating systems and applications).

The platform of this disclosure may comprise co-located hardware and software resources, or resources that are physically, logically, virtually and/or geographically distinct. Communication networks used to communicate to and from the platform services may be packet-based, non-packet based, and secure or non-secure, or some combination thereof.

More generally, the techniques described herein are provided using a set of one or more computing-related entities (systems, machines, processes, programs, libraries, functions, or the like) that together facilitate or provide the functionality described above. In a typical implementation, a representative machine on which the software executes comprises commodity hardware, an operating system, an application runtime environment, and a set of applications or processes and associated data, that provide the functionality of a given system or subsystem. As described, the functionality may be implemented in a standalone machine, or across a distributed set of machines.

Referring back to FIG. 1, the basic operation of the indexer service 108 is to receive the time-series information from the one or more data sources 102, and to convert this data into a unique format. As referenced above, the format is sometimes referred to herein as the HDX file format (or database). This nomenclature is not intended to be limiting. As will be seen, the HDX DB is a time-series, columnar, and schema-less storage format comprised of a root directory, and subdirectories containing so-called HDX files (preferably of three (3) distinct types) that are optimized for remote access. As will be described, this optimized file format allows the indexer service 108 to store the information (as the HDX DB) in the one or more cloud-based object stores 104 for efficient access and retrieval via a set of individual requests (typically, HTTP or HTTPS GET requests) that, collectively, comprise a search query. To this end, the basic operation of the search service is to receive a search query, interrogate the catalog service 114 to find potentially-relevant partitions of the time-series (stored in the remote data store(s)) to fetch, assign the identified partitions (for retrieval) to one or more computing resources (e.g., query peers), and then actively retrieve the HDX DB files (and their associated data) from the remote data store(s) for assembly into a response to the query. In one embodiment, the search service typically exposes an interface, e.g., a web interface, by which a query is formulated and executed. In an alternative embodiment, a query is generated automatically or programmatically, and then received for action (search and retrieval). By virtue of the HDX DB structure, queries can be of various types (e.g., full-text index, sequential access, random access, etc.). Without intending to be limiting, typically a query is designed for online analytical processing (OLAP), where reading a high volume of records (from the remote store(s)) is the common use case. The particular purpose of the query, and/or what is done with the information retrieved, however, are not limitations of this disclosure.

In one embodiment, the indexer service 108 comprises one or more stateless "stream" peers. A stream peer typically is a physical computing machine, or a virtual machine executing in a virtualized environment. For example, a physical computing machine is a rack-mounted server appliance comprising hardware and software; the hardware typically includes one or more processors that execute software in the form of program instructions that are otherwise stored in computer memory to comprise a "special purpose" machine for carrying out the stream peer functionality described herein. Alternatively, the stream peer is implemented as a virtual machine or appliance (e.g., via VMware®, or the like), as software executing in a server, or as software executing on the native hardware resources of some other system supporting a virtualized infrastructure (such as a hypervisor, containers, and the like). Stream peers may be configured as co-located computing entities or, more typically, as a set of distributed computing entities.

As information to be indexed streams into the architecture, a stream peer (e.g., one acting as a leader or head) distributes the indexing workload to one or more stream peers, thereby enabling a set of stream peers to take part in the indexing process. When multiple stream peers are used, the time-based partitions being indexed are spread evenly across the set of stream peers, although this is not a requirement. As will be described, each stream peer then indexes the HDX partition it was assigned, does a partial aggregation of the results, and then returns the partial results to the stream peer head that is coordinating the overall indexing operation. Once the stream peer head receives the partial aggregate results from its peers, it performs a final aggregation and forwards the resulting set of HDX files (the manifest, data, and index) to the cloud store.
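By way of illustration only, the following minimal Python sketch models this scatter/gather pattern; index_partition and merge_partials are hypothetical stand-ins for the peers' actual indexing and aggregation logic, which this disclosure does not specify:

# Hypothetical sketch of the scatter/gather indexing flow described above.
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

def index_partition(records):
    # Stand-in "indexing": count string terms seen in one time-based partition.
    return Counter(term for rec in records for term in rec)

def merge_partials(partials):
    # Aggregation step (used both per-peer and at the stream peer head).
    total = Counter()
    for p in partials:
        total.update(p)
    return total  # in the real system: the manifest, data, and index files

def coordinate(partitions, n_peers=4):
    # Spread the time-based partitions evenly across the stream peers; each
    # peer indexes its subset and partially aggregates, then the head merges.
    subsets = [partitions[i::n_peers] for i in range(n_peers)]
    with ThreadPoolExecutor(max_workers=n_peers) as pool:
        partials = pool.map(
            lambda subset: merge_partials(index_partition(p) for p in subset),
            subsets)
        return merge_partials(partials)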

In another embodiment, the indexer service 108 uses one or more peers configured as batch peers for importing previously stored data.

During the data indexing process, preferably the indexer service builds small inverted index files (the HDX files described below) and stores them in the remote storage, as has been described. Having a large number of small files to evaluate during search, however, can degrade performance. To address this, the merger service 112 is provided. On-demand, periodically or continuously, the merger service 112 (e.g., configured as a cluster of merger computing peers) examines the catalog of files in the remote storage (as identified by the catalog service 114) and configures jobs identifying files to be merged (in the cloud).

Preferably, the merger service configures a merger job based on various factors, such as volume of data, network location, local processing resources, etc.
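For illustration, a merger-job planner along these lines might be sketched as follows; the catalog row shape and target size are assumptions, and only the data-volume factor is modeled here (the disclosure also mentions network location and processing resources):

# Hypothetical merger-job planner: pack small same-day partitions into merge
# jobs up to an assumed target size.
from itertools import groupby

TARGET_BYTES = 512 * 1024 * 1024  # assumed target merged-partition size

def plan_merge_jobs(catalog_rows):
    # catalog_rows: (namespace, day, partition_path, size_bytes) tuples.
    jobs = []
    keyfn = lambda row: (row[0], row[1])
    for _, rows in groupby(sorted(catalog_rows, key=keyfn), key=keyfn):
        batch, batch_size = [], 0
        for ns, day, path, size in rows:
            if batch and batch_size + size > TARGET_BYTES:
                if len(batch) > 1:      # merging a single partition is a no-op
                    jobs.append(batch)
                batch, batch_size = [], 0
            batch.append(path)
            batch_size += size
        if len(batch) > 1:
            jobs.append(batch)
    return jobs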

The HDX file (storage) format is a highly-compacted format that generally contains an index, together with compacted raw data. This construct is now described in detail. As previously mentioned, according to this disclosure HDX DB is a time-series, columnar, and schema-less storage format comprised of a root directory (or folder), and subdirectories (or subfolders) containing HDX files that are optimized for remote access. In a preferred embodiment, the directory structure is as follows:

<namespace>
    <day>
        part<0>
            manifest.hdx
            data.hdx
            index.hdx
        ...
        part<n>
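By way of example only, the object-store keys for a partition's files follow directly from this layout (the day format shown is assumed for illustration):

# Illustrative key construction for one partition's three HDX files.
def partition_keys(namespace, day, n):
    root = f"{namespace}/{day}/part{n}"
    return [f"{root}/{name}" for name in
            ("manifest.hdx", "data.hdx", "index.hdx")]

# e.g. partition_keys("metrics", "2020-05-23", 0)
# -> ['metrics/2020-05-23/part0/manifest.hdx', ...]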

The HDX file format overcomes the seek and throughput limitations in object stores through various optimizations, which are now described.

Indexing

Indexing is performed by the indexing service. Preferably, indexing occurs in batches. Once the indexer service receives enough messages or records, the service indexes them into a part<n> including the .hdx format files, pushes the resulting message segment to remote storage, and notifies the catalog service to catalog the new files. In one example, assume that the data arrives in messages that can be organized into a nested form, such as follows (a simplified sketch of this ingest loop appears after the example):

timestamp=<uint64>
tag_name1=<string>
tag_name2=<string>
metrics
    metric_name1=<double>
    metric_name2=<double>

Because the HDX database preferably is schema-less, in this example assume that tag_names and metric_names (and their combinations) vary from one record or message to another. When the data is indexed, the indexing service preferably groups the records that have the same tag_names and metric_names, and it gives them a group_id that is then used during the indexing process to determine a most-efficient sorting order for the group. In one implementation, a dictionary of tag_names with a prepended key type identifier and an embedded group number (such as !!t::<tag_name>::<group_id>) is generated. Similarly, preferably tag_values are listed in a dictionary in a format such as follows:

<tag_name>:::<group_id>:::<value> and metric_names as !M::<metric_name>.

Preferably, the dictionary entries are used to limit the impact of cardinality/entropy of the values to the group. This is useful when various data sources are sending records that have some common tag_names. The impact limitation arises because, when the tags are stored in the database dictionary, they are sorted before being stored. Once they are stored, the tags are assigned position ids; because <tag_name>:::<group_id> is prepended to the tag values, however, the ids belonging to the same group are next to each other. This operation improves storage and retrieval efficiency. Preferably, the sorting order of all records in the batch is determined on a per-group basis. Within a group, the order is controlled by the tag_names cardinality. The tag names are reordered from lowest-to-highest cardinality. This approach works particularly well when tag_names with high cardinality are less likely to be filtered on in queries, or when tag_names with low cardinality are more likely to be filtered on. In some applications, tag_names with high cardinality tend to be aggregated, which requires fetching most of the messages that include the high-cardinality tag_name. Ordering messages by groups, and then by increasing tag cardinality within groups, improves compressibility in the case of high-cardinality tags.
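For illustration, the following sketch builds dictionary entries in the key formats given above and assigns position ids after sorting; the record shape is assumed, and the point is that sorting clusters a group's entries so their position ids are contiguous:

# Illustrative dictionary construction using the key formats described above.
def build_dictionary(records):
    # records: list of (group_id, tags: dict, metrics: dict) tuples (assumed).
    entries = set()
    for group_id, tags, metrics in records:
        for tag_name, value in tags.items():
            entries.add(f"!!t::{tag_name}::{group_id}")
            entries.add(f"{tag_name}:::{group_id}:::{value}")
        for metric_name in metrics:
            entries.add(f"!M::{metric_name}")
    # Sorting places all values of the same tag_name and group_id next to
    # each other, so the position ids assigned below are contiguous per group.
    return {s: i for i, s in enumerate(sorted(entries))}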

Consider the following example:

host=<value> ip=<value> cluster=<value> cpu=1 net_io=2 timestamp=1234
host=<value> ip=<value> cluster=<value> cpu=1 net_io=2 timestamp=1234
host=<value> ip=<value> cluster=<value> cpu=1 net_io=2 timestamp=1234
host=<value> ip=<value> cluster=<value> cpu=1 net_io=2 timestamp=1234

To determine the best sorting order, preferably the indexer service considers the cardinality of each of the tags. Assume the result is cluster, host, ip from lowest-to-highest cardinality. Then, when sorting all the records in the batch, the records in this group are sorted in that order.
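A minimal sketch of this cardinality-driven ordering (the record shape is assumed for illustration):

# Illustrative cardinality-based sort for one group of records.
def sort_group(records, tag_names):
    # records: list of dicts mapping tag_name -> value (assumed shape).
    # Order tag_names from lowest to highest cardinality...
    by_cardinality = sorted(
        tag_names, key=lambda t: len({r[t] for r in records}))
    # ...then sort the group's records by that tag order, e.g.
    # (cluster, host, ip) in the example above.
    return sorted(records, key=lambda r: tuple(r[t] for t in by_cardinality))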

File format and layout

Preferably, the HDX DB uses dictionary encoding for all string values. Accordingly, typically there will be one global dictionary that contains all unique string values seen during the indexing process. For unsorted doubles and ints, preferably delta zigzag encoding is used; for sorted ints, preferably FOR (frame-of-reference) encoding is used.
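Delta zigzag encoding itself is a standard technique; for reference, a minimal sketch for 64-bit signed integers (doubles would first be reinterpreted as integer bit patterns, which is omitted here):

# Delta encoding followed by zigzag mapping (standard technique): deltas of
# slowly-changing values are small, and zigzag maps small signed deltas to
# small unsigned ints that compress well.
def zigzag(n):            # signed -> unsigned
    return (n << 1) ^ (n >> 63)

def unzigzag(z):          # unsigned -> signed
    return (z >> 1) ^ -(z & 1)

def delta_zigzag_encode(values):
    prev, out = 0, []
    for v in values:
        out.append(zigzag(v - prev))
        prev = v
    return out

def delta_zigzag_decode(encoded):
    prev, out = 0, []
    for z in encoded:
        prev += unzigzag(z)
        out.append(prev)
    return out

assert delta_zigzag_decode(delta_zigzag_encode([1234, 1236, 1235])) \
       == [1234, 1236, 1235]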

In one embodiment, the following terminology is adopted. A "tag_id" is an integer representing a string in an array of strings (dictionary). A tag_name, tag_value, metric_name, etc. are all represented as tag_ids. A "block" is a structure containing a list of values, and the values can be of type int, double or string. The bracket notation [][] refers to a list of lists.

Preferably, the HDX file format comprises a set of files, which are now described. As noted above, the nomenclature herein is not intended to be limiting. With reference to FIG. 2, the file "manifest.hdx" 200 preferably contains all of the information necessary to navigate the other HDX files, namely "data.hdx" 202 and "index.hdx" 204. The manifest.hdx file 200 contains the unique strings dictionary that the data.hdx file references. As will be described, this allows the search service (a query peer in particular) to download the necessary blocks directly during a search, without having to seek and navigate the file to reach a particular block.

FIG. 3 depicts a representative manifest.hdx file 200, and FIG. 4 identifies the fields of this file and their accompanying definitions.

The tags portion of the data.hdx file stores dict_ids contiguously in blocks for each tag_name. Preferably, no blocks from other tag_names interleave. This minimizes the number of requests that are needed to download the values for a particular tag. The format for this file is depicted in FIG. 5.
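Because a tag's blocks occupy a contiguous byte-range, a reader can fetch all of them with a single ranged request. A sketch, assuming the decoded manifest supplies (offset, size) pairs for the tag's blocks and that the file is reachable over HTTP (e.g., via a presigned URL); this uses the standard HTTP Range header:

# Illustrative ranged fetch of one tag_name's contiguous blocks from data.hdx.
import urllib.request

def fetch_tag_blocks(data_hdx_url, block_meta):
    # block_meta: list of (offset, size) pairs from the manifest (assumed),
    # contiguous in file order for the tag being read.
    start = min(off for off, _ in block_meta)
    end = max(off + size for off, size in block_meta) - 1
    req = urllib.request.Request(
        data_hdx_url, headers={"Range": f"bytes={start}-{end}"})
    with urllib.request.urlopen(req) as resp:   # expects HTTP 206
        return resp.read()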

The values portion of the data.hdx file follows a similar structure, and it is depicted in FIG. 6. This portion stores metric values contiguously in blocks for each metric_name.

The timestamps portion of the data.hdx file also follows a similar structure, and it is depicted in FIG. 7.

The index.hdx file contains posting-lists for each value in the dictionary. These posting-list blocks preferably are downloaded for each term in the query. The format is depicted in FIG. 8.

The following is an example of the HDX data layout. In particular, assume that the indexer service receives the data set shown in FIG. 9. There are two (2) schemas detected in this data set, namely: Schema 1 (pop node turbine_version namespace namespace_version push_messages push_errors queue_depth); and Schema 2 (pop node namespace namespace_version push_messages push_errors queue_depth).

The dictionary (dict) values have ::0:: and ::1:: appended to the tag's values. Further, there will be two arrays for each tag_name's values in the tags portion of the data.hdx file, in the values portion of the data.hdx file, and in the timestamps portion of the data.hdx file. Each array belongs to a group; however, the arrays are still stored contiguously in the respective file, as previously described. After this data set is ingested and indexed by the indexer service, the resulting HDX files are shown in FIG. 10 (manifest.hdx), FIG. 11 (data.hdx tags portion), FIG. 12 (data.hdx timestamps portion), FIG. 13 (data.hdx values portion), and FIG. 14 (index.hdx).

The high density time-series data indexing and compression described above facilitate efficient search and retrieval of the time-series columnar-based information. The following describes a representative process flow to search an HDX part for records matching the criteria given in a query. The process starts with the search service downloading manifest.hdx if it is not found in the local cache on disk (at the query peer). Then, the manifest.hdx file is decoded lazily by first decoding the block information arrays (offsets, types, sizes); at this step the dictionary also is lazily decoded, without decompressing the dictionary blocks. Using the dictionary, the terms to be searched are identified. The query peer then issues HTTP GET requests on index.hdx to obtain the posting lists for the terms found. The posting-lists obtained are then intersected to obtain the final block_ids that need to be fetched. The query peer then issues GET requests to download the block_ids for each of tags.hdx/values.hdx/timestamps.hdx. Once the block_id = X is received from each of the files, a ColumnsBlock is composed containing the sub-blocks. The result is then passed on (e.g., to a query execution engine) for further processing. This operation also includes materializing each tag block and converting its tag_id to the string value it references.
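By way of illustration only, the flow just described reduces to the following skeleton; the fetch helpers and manifest shape are assumptions standing in for the manifest decoding and ranged GETs described above:

# Skeleton of the query-peer search flow described above (all inputs assumed).
def search_part(manifest, fetch_posting_list, fetch_block, query_terms):
    # manifest["dictionary"]: decoded dictionary mapping string -> term_id.
    dictionary = manifest["dictionary"]
    term_ids = [dictionary[t] for t in query_terms if t in dictionary]
    if len(term_ids) < len(query_terms):
        return []                       # a term was never indexed: no matches
    # One GET per term on index.hdx to pull its posting list of block ids.
    postings = [set(fetch_posting_list(tid)) for tid in term_ids]
    block_ids = sorted(set.intersection(*postings)) if postings else []
    # GETs on the tags/values/timestamps portions of data.hdx, composed into
    # a ColumnsBlock-like dict of sub-blocks per matching block id.
    return [{portion: fetch_block(portion, b)
             for portion in ("tags", "values", "timestamps")}
            for b in block_ids]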

Generalizing the above, the HDX file format comprises a set of files, namely, at least a manifest file, a data file, and an index file. The manifest file includes a dictionary of data strings seen in a column during indexing of the information, together with byte-range data configured to selectively retrieve data from the data and index files. The data file stores column data seen during the indexing, and the index file contains a listing (e.g., a posting-list) for each data string in the manifest file. In this approach, the column data is stored in the data file in contiguous byte-ranges. As data is streamed into the compute infrastructure, it is continuously processed by the indexer and transferred to the cloud-based object store, where it is stored in the set of time-based partitions according to the HDX file format.

While the above describes a particular order of operations performed by certain embodiments of the disclosed subject matter, it should be understood that such order is exemplary, as alternative embodiments may perform the operations in a different order, combine certain operations, overlap certain operations, or the like. References in the specification to a given embodiment indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic.

While the disclosed subject matter has been described in the context of a method or process, the subject matter also relates to apparatus for performing the operations herein. This apparatus may be specially constructed for the required purposes, or it may comprise a computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a computer-readable storage medium, such as, but not limited to, any type of disk including an optical disk, a CD-ROM, a magneto-optical disk, a read-only memory (ROM), a random access memory (RAM), a magnetic or optical card, or any type of media suitable for storing electronic instructions, each coupled to a computer system bus. A computer-readable medium having instructions stored thereon to perform the ingest, index, search and retrieval functions is non-transitory.

A given implementation of the disclosed subject matter is software written in a given programming language that runs on a server on a commodity hardware platform running an operating system, such as Linux. As noted above, the above-described ingest, index, search and retrieval functions may also be implemented as a virtual machine or appliance, or in any other tangible manner.


The functionality may be implemented with other application layer protocols besides HTTP/HTTPS, or any other protocol having similar operating characteristics.

There is no limitation on the type of computing entity that may implement the client- side or server-side of any communication. Any computing entity (system, machine, device, program, process, utility, or the like) may act as the client or the server.

While given components of the system have been described separately, one of ordinary skill will appreciate that some of the functions may be combined or shared in given instructions, program sequences, code portions, and the like. Any application or functionality described herein may be implemented as native code, by providing hooks into another application, by facilitating use of the mechanism as a plug-in, by linking to the mechanism, and the like.

The platform functionality may be co-located, or various parts/components may be separated and run as distinct functions, perhaps in one or more locations (over a distributed network).

What is claimed is as follows.