Title:
SYSTEMS AND METHODS FOR FACILITATING DATA DISCOVERY
Document Type and Number:
WIPO Patent Application WO/2011/075205
Kind Code:
A1
Abstract:
A system for facilitating data discovery on a network, wherein the network has one or more data storage devices. The system may include a crawler program configured to select at least a first set of files and a second set of files, each of the first set of files and the second set of files being stored in at least one of the one or more data storage devices. The system may also include a data fetcher program configured to obtain a copy of the first set of files, the data fetcher program being further configured to resist against obtaining a copy of the second set of files. The system may also include circuit hardware implementing one or more functions of one or more of the crawler program and the data fetcher program.

Inventors:
MAUNDER ANURAG S (US)
TRYFONAS CHRISTOS (US)
SUDHAKAR MUDDU (US)
Application Number:
PCT/US2010/052349
Publication Date:
June 23, 2011
Filing Date:
October 12, 2010
Assignee:
EMC CORP (US)
MAUNDER ANURAG S (US)
TRYFONAS CHRISTOS (US)
SUDHAKAR MUDDU (US)
International Classes:
G06F7/00; G06F17/30
Foreign References:
US20030135487A12003-07-17
US20080263257A12008-10-23
US20080059734A12008-03-06
US20060053157A12006-03-09
US20060282494A12006-12-14
Other References:
See also references of EP 2510431A4
Attorney, Agent or Firm:
JOE, Ted, K. (Three Embarcadero Center Suite 41, San Francisco CA, US)
Claims:
CLAIMS

What is claimed is:

1. A system for facilitating data discovery on a network, the network having one or more data storage devices, the system comprising:

a crawler program configured to select at least a first set of files and a second set of files, each of the first set of files and the second set of files being stored in at least one of the one or more data storage devices;

a data fetcher program configured to obtain a copy of the first set of files, the data fetcher program being further configured to resist against obtaining a copy of the second set of files; and circuit hardware implementing one or more functions of one or more of the crawler program and the data fetcher program.

2. The system of claim 1 wherein the crawler program is further configured to create at least a checkpoint when the crawler program scans the first set of files, the checkpoint providing at least status notification of scanning performed by the crawler program, the crawler program resuming the scanning from the checkpoint after an interruption of the scanning.

3. The system of claim 1 wherein a quantity of files in the first set of files is different from a quantity of files in the second set of files.

4. The system of claim 1 wherein a quantity of files in the first set of files changes over time.

5. A system for facilitating data discovery on a network, the network having one or more data storage devices, the system comprising:

a crawler program configured to select at least a first set of files, a second set of files, a third set of files, and a fourth set of files, each of the first set of files, the second set of files, the third set of files, and the fourth set of files being stored in at least one of the one or more data storage devices; a data fetcher program configured to obtain a copy of the first set of files, a copy of the second set of files, and a copy of the third set of files, the data fetcher program being further configured to resist against obtaining a copy of the fourth set of files;

a processing program configured to perform one or more services on the copy of the first set of files and the copy of the second set of files, the processing program being further configured to resist against performing any services on the copy of the third set of files;

a search indexing program configured to generate at least a search index using the copy of the first set of files, the search indexing program being further configured to resist against generating any search index from the copy of the second set of files; and

circuit hardware implementing one or more functions of one or more of the crawler program, the data fetcher program, the processing program, and the search indexing program.

6. The system of claim 5 wherein the one or more services include extracting metadata from at least one of the copy of the first set of files and the copy of the second set of files.

7. The system of claim 5 wherein the one or more services include generating at least a hash code using data contained in at least one of the copy of the first set of files and the copy of the second set of files.

8. The system of claim 5 wherein the crawler program resists against scanning the fourth set of files when one or more file path lengths associated with one or more files of the third set of files exceed a file path length threshold.

9. The system of claim 5 wherein the crawler program resists against scanning the fourth set of files when one or more filer lengths associated with one or more files of the third set of files exceed a filer length threshold.

10. The system of claim 5 wherein the data fetcher program is further configured to notify the crawler program that the data fetcher program is ready to obtain the copy of the fourth set of files after the data fetcher program has resisted against obtaining the copy of the fourth set of files.

11. The system of claim 5 wherein the data fetcher program resists against obtaining the copy of the fourth set of files when one or more file sizes associated with one or more files of at least one of the first set of files, the second set of files, and the third set of files are smaller than a file size threshold.

12. The system of claim 5 wherein the data fetcher program resists against obtaining the copy of the fourth set of files when one or more file sizes associated with one or more files of the fourth set of files are smaller than a file size threshold.

13. The system of claim 5 wherein the data fetcher program resists against obtaining the copy of the fourth set of files when one or more amounts of files of at least one of the first set of files, the second set of files, and the third set of files exceed a file quantity threshold.

14. The system of claim 5 wherein the data fetcher program resists against obtaining the copy of the fourth set of files when an amount of files of the fourth set of files exceeds a file quantity threshold.

15. The system of claim 5 wherein the processing program is further configured to notify the data fetcher program that the processing program is ready to perform at least a service on the copy of the third set of files after the processing program has resisted against performing any services on the copy of the third set of files.

16. The system of claim 5 wherein the processing program resists against performing any services on the copy of the third set of files when one or more file formats associated with one or more files of at least one of the first set of files and the second set of files do not belong to a predetermined set of file formats.

17. The system of claim 5 wherein the processing program resists against performing any services on the copy of the third set of files when one or more file formats associated with one or more files of the third set of files do not belong to a predetermined set of file formats.

18. The system of claim 5 wherein the search indexing program is further configured to notify the processing program that the search indexing program is ready to generate a search index from the copy of the second set of files after the search index program has resisted against generating any search index from the copy of the second set of files.

19. The system of claim 5 wherein the search indexing program resists against generating any search index from the copy of the second set of files in at least one of a first condition and a second condition, the first condition being that an amount of text to index in the first set of files exceeds a first text amount threshold, the second condition being that an amount of text to index in the second set of files exceeds a second text amount threshold.

20. A system for facilitating data discovery on a network having a plurality of data storage devices, the system comprising:

a crawler program configured to select at least a first set of files, a second set of files, and a third set of files, each of the first set of files, the second set of files, and the third set of files being stored in one or more of the data storage devices;

a processing program configured to perform one or more services on at least one of the first set of files, a copy of the first set of files, the second set of files, and a copy of the second set of files, the processing program being further configured to resist against performing any services on at least one of the third set of files and a copy of the third set of files;

a search indexing program configured to generate at least a search index using at least one of the first set of files and the copy of the first set of files, the search indexing program being further configured to resist against generating any search index from at least one of the second set of files and the copy of the second set of files; and

circuit hardware implementing one or more functions of one or more of the crawler program, the processing program, and the search indexing program,

wherein the plurality of data storage devices includes a first data storage device and a second data storage device, the first data storage device complying with a first set of standards but not complying with a second set of standards that is different from the first set of standards, the second data storage device complying with the second set of standards but not complying with the first set of standards.

Description:
SYSTEMS AND METHODS FOR FACILITATING DATA DISCOVERY

CROSS-REFERENCE TO RELATED APPLICATIONS

[1] This international patent application claims priority under PCT Rule 4.10 and PCT Article 8 to U.S. Patent Application No. 12/638,067, entitled "SYSTEMS AND METHODS FOR FACILITATING DATA DISCOVERY," filed on December 15, 2009 at the United States Patent and Trademark Office, and which is hereby incorporated by reference.

BACKGROUND

[2] The present invention relates to data discovery, such as legal data discovery. Organizations today face various challenges related to data discovery. Increased digitized content, retention of data due to regulatory requirements, the prevalence of productivity tools, the availability of data on communication networks, and other factors have been driving rapid growth of data volumes in organizations. In response to the rapid data growth, many organizations have been expanding data storage with various data storage devices and have been implementing data discovery utilizing various tools provided by various suppliers to perform various data discovery tasks. Typically, time scale differences and speed mismatch between the tools and the tasks performed may result in issues such as missed data and latency in responding to data discovery requests.

[3] In general, data discovery may involve tasks such as identification, collection, culling, processing, analysis, review, production, and preservation. Typically, the tasks may be performed by different tools provided by different suppliers. For example, the tasks of identification and collection may be performed by an identification-collection tool, and the task of processing may be performed by a separate processing tool coupled to the identification-collection tool. Since identification and collection may be performed substantially faster than processing, the identification-collection tool may unnecessarily collect too much data such that the processing tool may be unable to timely process all the collected data. As a result, a substantial portion of the collected data may be dropped without being processed. Consequently, some critical data may not be appropriately analyzed and preserved. In addition, if the user of the tools expects the data discovery tools to respond to data discovery requests at a speed consistent with the data collection speed, the user may experience substantial latency caused by the delay at the processing tool.

[4] In some arrangements, data may need to be manually transferred between some of the data discovery tools. The manual process may cause a substantial amount of errors in the tools and in the data discovery process.

SUMMARY

[5] An embodiment of the present invention relates to a system for facilitating data discovery on a network, wherein the network has one or more data storage devices. The system may include a crawler program configured to select at least a first set of files and a second set of files, each of the first set of files and the second set of files being stored in at least one of the one or more data storage devices. The system may also include a data fetcher program configured to obtain a copy of the first set of files, the data fetcher program being further configured to resist against obtaining a copy of the second set of files. The system may also include circuit hardware implementing one or more functions of one or more of the crawler program and the data fetcher program.

[6] The above summary relates to only one of the many embodiments of the invention disclosed herein and is not intended to limit the scope of the invention, which is set forth in the claims herein. These and other features of the present invention will be described in more detail below in the detailed description of the invention and in conjunction with the following figures.

BRIEF DESCRIPTION OF THE FIGURES

[7] The present invention is illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings and in which like reference numerals refer to similar elements and in which:

[8] Fig. 1A shows a schematic representation illustrating a system for facilitating data discovery and an example operating environment of the system in accordance with one or more embodiments of the present invention.

[9] Fig. 1B shows a block diagram illustrating some components of a system for facilitating data discovery in accordance with one or more embodiments of the present invention.

[10] Fig. 2A shows a schematic representation illustrating an arrangement for facilitating data discovery in accordance with one or more embodiments of the present invention.

[11] Fig. 2B shows a table illustrating conditions for triggering additional coordination between data discovery tasks in facilitating data discovery in accordance with one or more embodiments of the present invention.

DETAILED DESCRIPTION

[12] The present invention will now be described in detail with reference to a few embodiments thereof as illustrated in the accompanying drawings. In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention. It will be apparent, however, to one skilled in the art, that the present invention may be practiced without some or all of these specific details. In other instances, well known process steps and/or structures have not been described in detail in order to not unnecessarily obscure the present invention.

[13] Various embodiments are described herein below, including methods and techniques. It should be kept in mind that the invention might also cover articles of manufacture that include a computer readable medium on which computer-readable instructions for carrying out embodiments of the inventive technique are stored. The computer readable medium may include, for example, semiconductor, magnetic, opto-magnetic, optical, or other forms of computer readable medium for storing computer readable code. Further, the invention may also cover apparatuses for practicing embodiments of the invention. Such apparatus may include circuits, dedicated and/or programmable, to carry out tasks pertaining to embodiments of the invention. Examples of such apparatus include a general-purpose computer and/or a dedicated computing device when appropriately programmed and may include a combination of a computer/computing device and dedicated/programmable circuits adapted for the various tasks pertaining to embodiments of the invention.

[14] One or more embodiments of the present invention relate to a system for facilitating data discovery on a network having one or more data storage devices. The system may include a crawler program for scanning batches (or sets) of files to identify relevant data and/or to identify where the data is stored. For example, the batches (or sets) of files may include a first set of files, a second set of files, a third set of files, and a fourth set of files. Each of the first set of files, the second set of files, the third set of files, and the fourth set of files may be stored in at least one of the one or more data storage devices on the network.

[15] The system may also include a data fetcher program. The data fetcher program may obtain a copy of the first set of files, a copy of the second set of files, and a copy of the third set of files for subsequent processing. For regulating the speeds associated with different data discovery tasks, the data fetcher program may provide a "backpressure" or resistance (e.g., against the crawler program) to resist against obtaining a copy of the fourth set of files, given that the scanning speed of the crawler program may be substantially faster than the fetching speed of the data fetcher program. The backpressure may be applied by the data fetcher program when one or more conditions are met. For example, a backpressure condition may be that the quantity of files in (the copy of) the first set of files, (the copy of) the second set of files, and/or (the copy of) the third set of files exceeds a file quantity threshold. Advantageously, the scanning speed of the crawler program may be appropriately tuned according to the fetching speed of the data fetcher program, such that dropping of files/data may be prevented.
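By way of a non-limiting sketch (the embodiments do not prescribe any particular language or API), the crawler/fetcher coordination described above may be modeled with a bounded queue whose capacity plays the role of a file quantity threshold; the names, the threshold value, and the use of Python threads below are assumptions made only for illustration.

```python
import queue
import threading

# Hypothetical threshold: at most this many scanned batches may be pending.
BATCH_QUEUE_MAX = 3
batch_queue = queue.Queue(maxsize=BATCH_QUEUE_MAX)

def crawl_batches(batches):
    """Crawler side: enqueue scanned batches; put() blocks when the queue is
    full, which is the backpressure that tunes the scanning speed."""
    for batch in batches:
        batch_queue.put(batch)
    batch_queue.put(None)  # sentinel: scanning finished

def fetch_batches(copy_fn):
    """Data fetcher side: obtain a copy of each batch; draining the queue
    implicitly signals readiness for the next batch."""
    while True:
        batch = batch_queue.get()
        if batch is None:
            break
        copy_fn(batch)

if __name__ == "__main__":
    sets_of_files = [["a.txt"], ["b.txt", "c.txt"], ["d.txt"], ["e.txt"]]
    fetcher = threading.Thread(target=fetch_batches,
                               args=(lambda b: print("fetched", b),))
    fetcher.start()
    crawl_batches(sets_of_files)
    fetcher.join()
```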

[16] The system may also include a processing program. The processing program may perform one or more services on the copy of the first set of files and the copy of the second set of files fetched by the data fetcher program. For example, the one or more services may include extracting data and/or generating hash codes using the data. For regulating the speeds associated with different data discovery tasks, the processing program may provide a "backpressure" or resistance (e.g., against the data fetcher program) to resist against performing any services on the copy of the third set of files, given that the fetching speed of the data fetcher program may be substantially faster than the processing speed of the processing program. The backpressure may be provided by the processing program when one or more conditions are met. For example, a backpressure condition may be that one or more file formats associated with one or more files of (the copy of) the first set of files and/or (the copy of) the second set of files do not belong to a predetermined set of file formats. Advantageously, the fetching speed of the data fetcher program may be appropriately tuned according to the processing speed of the processing program, such that dropping of files/data may be prevented.
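As one possible illustration of the services and the file-format condition described above, the following sketch extracts simple metadata and a hash code from a fetched copy and defers files whose format falls outside a recognized set; the format list, the hash function, and the helper name are assumptions rather than features of the embodiments.

```python
import hashlib
from pathlib import Path

# Hypothetical set of readily-recognizable formats; anything else triggers
# the "difficult file format" backpressure condition and is deferred.
RECOGNIZED_FORMATS = {".txt", ".pdf", ".doc", ".docx", ".eml"}

def process_copy(path):
    """Return metadata and a content hash for one fetched file, or None to
    signal that processing of this copy should be deferred."""
    p = Path(path)
    if p.suffix.lower() not in RECOGNIZED_FORMATS:
        return None
    data = p.read_bytes()
    return {
        "name": p.name,
        "size": len(data),
        "sha256": hashlib.sha256(data).hexdigest(),
    }
```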

[17] The system may also include a search indexing program. The search indexing program may generate at least a search index using the copy of the first set of files. For regulating the speeds associated with different data discovery tasks, the search indexing program may provide a "backpressure" or resistance (e.g., against the processing program) to resist against generating any search index from the copy of the second set of files, given that the processing speed of the processing program may be substantially faster than the search index generating speed of the search indexing program. The backpressure may be provided by the search indexing program when one or more conditions are met. For example, a backpressure condition may be that the amount of text to index in the first set of files exceeds a text amount threshold. Advantageously, the processing speed of the processing program may be appropriately tuned according to the search index generating speed of the search indexing program, such that dropping of files/data may be prevented.
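A minimal sketch of the text-amount condition described above might look as follows; the threshold value and the inverted-index structure are illustrative assumptions only.

```python
from collections import defaultdict

class SearchIndexer:
    def __init__(self, text_amount_threshold=10_000_000):
        self.index = defaultdict(set)  # term -> set of file identifiers
        self.text_amount_threshold = text_amount_threshold

    def accepts(self, texts):
        """Backpressure check: refuse a batch whose total amount of text to
        index exceeds the text amount threshold."""
        return sum(len(t) for t in texts.values()) <= self.text_amount_threshold

    def index_batch(self, texts):
        """texts maps a file identifier to the text extracted from its copy."""
        for file_id, text in texts.items():
            for term in text.lower().split():
                self.index[term].add(file_id)
```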

[18] The system may also include circuit hardware that may implement one or more functions of one or more of the crawler program, the data fetcher program, the processing program, and the search indexing program. The system may also include a computer readable medium storing one or more of the programs.

[19] By regulating the speeds associated with various data discovery tasks, the system may effectively prevent latency and dropped data in performing data discovery.

[20] The features and advantages of the invention may be better understood with reference to the figures and discussions that follow.

[21] Fig. 1A shows a schematic representation illustrating a system 100 for facilitating data discovery and an example operating environment of system 100 in accordance with one or more embodiments of the present invention. System 100 may perform and/or facilitate data discovery tasks such as one or more of identification, collection, culling, processing, analysis, and review. In contrast with prior art arrangements, system 100 may perform and/or facilitate multiple data discovery tasks in an integrated fashion with coordinated speeds for the tasks. As illustrated in the example of Fig. 1A, system 100 may be coupled with a network 102 for facilitating data discovery on network 102, which may include various data sources, as illustrated by file system(s) 104, email repository/repositories 106, laptop computer(s) 108, desktop computer(s) 110, enterprise content management repository/repositories 112, enterprise search portal(s) 114, and/or load import source(s) 116 (e.g., compact disks, USB drives, etc.).

[22] System 100 may also be coupled with various terminal devices through a network 120 (e.g., a wide area network), such that authorized users may have access to system 100 for operating and/or maintaining system 100. The users may include information technology (IT) users 192 (such as corporate system engineers) and legal users 194 (such as attorneys and paralegals involved in a particular legal case).

[23] System 100 may also be coupled with one or more file systems, such as file system 182, for preservation of data. IT users 196 may retrieve data from file system 182 for generating particular reports according to specific requirements.

[24] System 100 may also be coupled with one or more production partners, such as production partner 184. System 100 may export data and metadata (e.g., in an XML format) to production partner 184. Additionally or alternatively, production partner 184 may import data and metadata from file system 182 and/or other file systems. Using the data and metadata, production partner 184 may generate reports and/or documents for use by legal users 198. At the same time, production partner 184 also may be a data source, such that the reports and documents generated by production partner 184 may be provided to system 100 for performing relevant data discovery tasks.

[25] System 100 may include various software and hardware components for performing and/or facilitating data discovery tasks in an integrated and coordinated fashion. System 100 may include computer readable media, such as computer readable medium 124, for storing the software components. System 100 may also include circuits, such as circuit hardware 122, for implementing functions associated with the software components. Computer readable medium 124 and circuit hardware 122 may be implemented inside the same enclosure of system 100. Some components of system 100 are discussed with reference to the example of Fig. 1B.

[26] Fig. 1B shows a block diagram illustrating some components of system 100 for facilitating data discovery in accordance with one or more embodiments of the present invention. System 100 may include various functional modules/programs, such as job manager 132, one or more crawlers 134 (or crawler programs 134), a queue manager 136, one or more service profiles 138, a data fetcher program 140, a decision engine 158, one or more service providers 142 (or processing programs 142), and a memory management program 144. The functional modules/programs may be stored in computer readable medium 124 illustrated in the example of Fig. 1A.

[27] Job manager 132 may perform one or more of job scheduling, crawling management, and failover management. Job scheduling may involve allowing a user to start/stop/monitor data processing and/or data discovery jobs. Job manager 132 may accept user input through a command-line interface (CLI) and/or a graphical user interface (GUI). For starting jobs, job manager 132 may spawn a crawler in an appropriate node. For stopping/monitoring jobs, job manager 132 may interact with queue manager 136.

[28] Job manager 132 may schedule jobs on a periodic basis or based on a calendar. A main task of these jobs may be to walk through a file hierarchy (local or remote) by utilizing one or more of crawlers 134 to identify the location of files/objects, to select files/objects, and/or to perform various actions on selected files/objects.

[29] The distribution of files to be processed may be performed utilizing a set of centralized queues managed by queue manager 136. Queue manager 136 may be implemented in job manager 132, coupled to job manager 132, and/or implemented in a node. Queue manager 136 may distribute the files/load among separate service providers 142 that perform file processing.

[30] The one or more crawlers 134 may include one or more of file/email crawler(s) 168, metadata crawler(s), Centera™ crawler(s), search result logic, database result logic, etc.

[31] In accordance with one or more embodiments of the invention, a crawler may include logic for performing the tasks of enumerating a source data set and applying any filters/policies as required for determining the objects (or files) that are eligible candidates for processing. The crawler may scan files according to one or more of NFS (Network Filesystem) and CIFS (Common Internet Filesystem) protocols. The crawler may then feed the list of eligible objects (or files), along with a service profile (among service profiles 138, e.g., determined by logic implemented in the crawler or implemented in decision engine 158) that needs to be applied on the eligible objects, as service items to queue manager 136. A crawler in accordance with one or more embodiments of the invention may be configured to scan only metadata without accessing content data, and may advantageously operate with higher efficiency than a conventional "crawler" that is well-known in the art. Further, the crawler according to the invention may classify unstructured data (or files containing unstructured data) according to metadata.

[32] A crawler may perform, for example, one or more of the following actions on selected objects: checking data integrity of filesystems at the object (file) level, nearline, cataloguing (often referred to as shallow or basic classification), and deep parsing. Nearline may involve copying of the object (file) to another location (usually in some location inside one or more filesystems). Cataloguing may involve extracting the user/environmental parameters of selected documents/files present at the remote filesystems and creating a unique fingerprint of the document. Deep parsing may involve analyzing the objects (files) based on a set of keyword-based, regular-expression-based, or semantic-based rules.
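For illustration only, a metadata-only enumeration with simple eligibility filters of the kind described above might be sketched as follows; the particular filter fields and thresholds are assumptions.

```python
import os

def crawl(root, max_path_len=1024, min_mtime=0.0):
    """Yield (path, metadata) for eligible files under root, consulting only
    filesystem metadata and never reading file content."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if len(path) > max_path_len:   # filter: overly long file path
                continue
            st = os.stat(path)
            if st.st_mtime < min_mtime:    # filter: file too old to be eligible
                continue
            yield path, {"size": st.st_size, "mtime": st.st_mtime}
```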

[33] A crawler may be started by job manager 132 (or a scheduler implemented in or coupled to job manager 132); a crawler may be stopped by job manager 132 (or the scheduler) or may self-terminate based on scheduling specifications. In case of node failure, a crawler may obtain a restart point from queue manager 136. The crawler can be agnostic about the node in which queue manager 136 is running.

[34] In one or more embodiments, a crawler may create one or more checkpoints when the crawler scans a set of files. The checkpoint(s) may provide status information associated with scanning performed by the crawler, such that the crawler may resume the scanning from an appropriate checkpoint after an interruption of the scanning, e.g., caused by shut-down of a data storage device.
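One way such a checkpoint could be kept, shown here only as a sketch with an assumed checkpoint file name and JSON layout, is to record the last scanned path after each file so that scanning resumes from that point.

```python
import json
import os

CHECKPOINT_FILE = "crawl.checkpoint"  # hypothetical checkpoint location

def load_checkpoint():
    if os.path.exists(CHECKPOINT_FILE):
        with open(CHECKPOINT_FILE) as f:
            return json.load(f).get("last_scanned")
    return None

def scan_with_checkpoints(paths, scan_fn):
    """Scan paths in order, recording progress so that an interrupted scan
    resumes after the last checkpointed file instead of starting over."""
    last = load_checkpoint()
    resumed = last is None
    for path in paths:
        if not resumed:
            resumed = (path == last)  # skip files already scanned
            continue
        scan_fn(path)
        with open(CHECKPOINT_FILE, "w") as f:
            json.dump({"last_scanned": path}, f)
```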

[35] The number of crawlers 134 may be adjusted (e.g., increased or decreased) according to the number and/or volume of repositories.

[36] The one or more service profiles 138 may include one or more of basic classification, deep classification, data integrity, database recovery, search index recovery, action(s) (e.g., move, copy, and/or delete), etc. A service profile may define one or more services or orders and combinations of services provided by one or more of service providers 142 for data to be processed. Multiple services may be mixed and matched by a service profile. If the specified service profile requires deep classification, data fetcher 140 may obtain a copy of the selected file(s). If the specified service profile requires only basic classification without requiring deep classification, there may be no need for data fetcher 140 to obtain a copy of the selected file(s).
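As a sketch of this decision, assuming a simple profile object that is not itself specified by the embodiments, the data fetcher copies file content only when deep classification is requested.

```python
from dataclasses import dataclass

@dataclass
class ServiceProfile:
    name: str
    deep_classification: bool = False  # basic classification needs no copy

def needs_copy(profile):
    """The data fetcher obtains a copy of the selected file(s) only when the
    specified service profile requires deep classification."""
    return profile.deep_classification

assert not needs_copy(ServiceProfile("basic classification"))
assert needs_copy(ServiceProfile("deep classification", deep_classification=True))
```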

[37] The one or more service providers 142 may be configured to perform one or more of metadata population, creation of (basic) metadata, database population, rule-based content extraction, transparent migration, policy classification, action(s) (e.g., move, copy, and/or delete), etc. in processing data/file(s). For example, service providers 142 may include a hash and metadata extraction program 162, a basic metadata creation program 166, a search indexing program 164, etc.

[38] System 100 may also include control path modules/programs such as authentication module 146 and policy engine 152.

[39] Authentication module 146 may be configured to authenticate users (utilizing an NFS or CIFS interface) and application servers (utilizing an API). Authentication module 146 may authenticate a user during connection establishment time. Authentication module 146 may perform the mapping of user IDs and predefined security IDs into user names. Authentication module 146 may perform authentication by linking and invoking a library, such as in NIS server 150 (Network Information Services server 150, e.g., for UNIX systems) or in active directory server 148 (e.g., for WINDOWS® systems). The library may take the username and password credentials and attempt to authenticate the user against one or more authentication services.

[40] Policy engine 152 may include a management part that stores and manages the policies in an LDAP repository 154 (Lightweight Directory Access Protocol repository 154, or LDAP 154).

[41] Policy engine 152 may also include policy enforcement modules. For example, policy engine 152 may include one or more of the following enforcement modules: an access control enforcer (ACE) module, a parsing rules module, a search policy module, etc.

[42] The ACE module may be configured to enforce one or more of access control rights, file retention policies, WORM (write-once-read-many), etc. The ACE module may interface with CIFS, APIs (application programming interfaces), etc.

[43] The parsing rules module may employ document parsing rules (managed by policy engine 152) in LDAP 154 to extract relevant information from documents. These parsing rules may be based on at least one of keyword, regular expression, Boolean logic, and advanced content analytics. An option to have full-content extraction also may be provided.

[44] The search policy module may perform the lookup to identify whether a particular user should view the search results of a search query. The search policy module may interface with a search engine.

[45] The implementation of policy engine 152 may be based on one or more concepts, such as the categorization of information based on the content, the actions (or services) associated with different policy groups, etc.

[46] System 100 may employ rules to identify and categorize the content data in an enterprise/organization. The rules may be arbitrary regular expressions along with one or more actions (or services) specified. Each rule can be assigned a name. Different sets of rules may be applicable to different sets of objects. The actions (or services) that can be specified utilizing policy engine 152 (or a rule engine) may include key-value pairs.

[47] Policy engine 152 may be configured to categorize data into different buckets. The categorization may be useful for identifying contents that need regulatory compliance. For example, a rule may be that any document with content of "social security number" or "SSN" or "xxx-xxx-xxxx", where x is a digit [0, 9], should be categorized as HIPAA (Health Insurance Portability and Accountability Act). This rule may be formulated as a regular expression, and the action (or service) may be specified to map the group to the appropriate regulatory policy in metadata.
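Expressed as a regular expression, and using the digit pattern exactly as stated above ("xxx-xxx-xxxx"), such a rule might be sketched as follows; the rule name and the key-value action shown are assumptions.

```python
import re

HIPAA_RULE = {
    "name": "hipaa-ssn",
    # Matches "social security number", "SSN", or xxx-xxx-xxxx (x a digit).
    "pattern": re.compile(r"social security number|SSN|\b\d{3}-\d{3}-\d{4}\b",
                          re.IGNORECASE),
    # Action expressed as a key-value pair mapping to a policy group.
    "action": {"policy_group": "HIPAA"},
}

def categorize(text):
    """Return the key-value action if the document content matches the rule."""
    if HIPAA_RULE["pattern"].search(text):
        return HIPAA_RULE["action"]
    return None
```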

[48] The rules may be stored in LDAP 154. A parser engine may download one or more of the rules before parsing any file. The content of the file may then be matched with the specified rule, and appropriate memberships may be assigned.

[49] Policy engine 152 may also define a policy group (including one or more rules) in metadata. A policy group may represent an abstraction that stores the enforcement rules applicable for a given policy group. For example, HIPAA may correspond to 7-year enforcement with rigid ACLs (Access Control Lists) specific to the organization, and SEC (Securities and Exchange Commission) may have 5-year enforcement with a loose deletion requirement. Furthermore, these regulatory requirements may change over time. Therefore, the metadata of each object stores the policy group it belongs to, but the consequence of belonging to this group is maintained in the policy grouping information in LDAP 154.

[50] The enforcement modules (e.g., the ACE module, the parsing rules module, and the search policy module) consult the requirements and take appropriate action on the object at the appropriate time.

[51] System 100 may also include housekeeping modules such as a system services module, a system log module, an error propagation module 156 (for propagating error information across the nodes), etc.

[52] Fig. 2A shows a schematic representation illustrating an arrangement 200 for facilitating data discovery in accordance with one or more embodiments of the present invention. Arrangement 200 may include one or more components of system 100 illustrated in the example of Figs. 1A-1B and/or components similar to components of system 100. Arrangement 200 may also include functions and actions associated with the components. In one or more embodiments, arrangement 200 may include a file crawler 202, a data fetcher 204, a file processing program 206, and a search indexing program 208 to perform data discovery tasks. As an example, file crawler 202, data fetcher 204, file processing program 206, and search indexing program 208 may represent file/email crawler 168, data fetcher 140, one or more of service providers 142 (such as hash and metadata extraction program 162 and/or basic metadata creation program 166), and search indexing program 164, respectively, illustrated in the example of Fig. 1B. In one or more embodiments, the components may operate on the same batch (or set) of data/files sequentially. In one or more embodiments, the components may operate on different batches (or sets) of data/files simultaneously. Different batches of files may include the same quantity of files or different quantities of files. As an example, a first set of files may include a first quantity of files, and a second set of files may include a second quantity of files that is different from the first quantity of files. The sizes of the batches may be dynamic. For example, the first quantity of files may change over time.

[53] For regulating operating speeds to overcome potential problems caused by speed mismatch, one or more of the components may provide "backpressure" (or resistance) to one or more preceding components that perform one or more preceding tasks. For example, crawler 202 may select multiple sets (or batches) of files to be processed (each set or batch of files including one or more files), but data fetcher 204 may resist against and/or delay obtaining a copy of one or more of the selected files, as illustrated by backpressure 214 applied to crawler 202 in the example of Fig. 2A. Advantageously, the operating speeds of crawler 202 and data fetcher 204 may be coordinated, and potential dropping of files and/or potential latency caused by speed mismatch may be prevented.

[54] As illustrated in the example of Fig. 2A, crawler 202 may select at least batch 1 (or a first set of files), batch 2 (or a second set of files), batch 3 (or a third set of files), and batch 4 (or a fourth set of files) to be processed. Each of batch 1, batch 2, batch 3, and batch 4 may be stored in one or more data sources 250, which may include, for example, one or more data sources and/or data storage devices on network 102 illustrated in the example of Fig. 1A. Data fetcher 204 may obtain a copy of batch 1, a copy of batch 2, and a copy of batch 3 for subsequent processing. However, data fetcher 204 may resist against and/or delay obtaining a copy of batch 4, e.g., until data fetcher 204 and/or one or more following components that perform subsequent data discovery actions are ready and/or have sufficient capacity to perform responsible data discovery actions. In one or more embodiments, data fetcher 204 may notify file crawler 202 when data fetcher 204 is ready to obtain a copy of the next set of files, batch 4, thereby enabling file crawler 202 to adjust the scanning/crawling speed accordingly.

[55] As another example, file processing program 206 may resist against and/or delay processing one or more of the copies of files obtained by data fetcher 204, as illustrated by backpressure 216 applied to data fetcher 204 in the example of Fig. 2A, for coordinating speeds of data fetcher 204 and file processing program 206, thereby preventing potential file dropping and/or potential latency. As illustrated in the example of Fig. 2A, although data fetcher 204 may have obtained a copy of each of batch 1, batch 2, and batch 3, file processing program 206 may process only the copy of batch 1 and the copy of batch 2. File processing program 206 may resist against and/or delay processing the copy of batch 3 until file processing program 206 and/or one or more following components that perform subsequent data discovery actions are ready and/or have sufficient capacity to perform responsible tasks. In one or more embodiments, file processing program 206 may notify data fetcher 204 when file processing program 206 is ready to perform one or more services on the copy of batch 3, thereby enabling data fetcher 204 to adjust the data-fetching speed accordingly and/or enabling data fetcher 204 to timely provide the copy of batch 3 to file processing program 206 for processing.

[56] In one or more embodiments, file processing program 206 may extract metadata from the copy of batch 1 and the copy of batch 2 for facilitating subsequent search indexing. In one or more embodiments, file processing program 206 may generate hash codes utilizing the content of the files in the copy of batch 1 and the copy of batch 2. The hash codes may be utilized to identify files, such that files having the same content may be identified by the same hash code even if the files have different filenames and/or different metadata. As a result, duplication of data discovery actions on the same content data may be prevented. Advantageously, data discovery efficiency may be substantially improved, and/or cost associated with performing data discovery may be reduced.
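A minimal sketch of this hash-based duplicate detection, with assumed helper names and SHA-256 chosen only as an example hash function, follows.

```python
import hashlib

seen_hashes = set()

def is_duplicate(content):
    """Files with identical content produce the same hash code, regardless of
    filename or metadata, so repeated discovery actions can be skipped."""
    digest = hashlib.sha256(content).hexdigest()
    if digest in seen_hashes:
        return True
    seen_hashes.add(digest)
    return False

assert not is_duplicate(b"quarterly report")  # e.g., report_v1.doc
assert is_duplicate(b"quarterly report")      # same content under another name
```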

[57] As another example, search indexing program 208 may resist against and/or delay generating any search index using one or more of the files that have been processed by file processing program 206, as illustrated by backpressure 218 applied to file processing program 206, for coordinating speeds of file processing program 206 and search indexing program 208, to prevent potential file dropping and/or potential latency. As illustrated in the example of Fig. 2A, although file processing program 206 has processed the copy of batch 1 and the copy of batch 2, search indexing program 208 may resist against and/or delay generating any search index using the copy of batch 2 until search indexing program 208 (and/or one or more following components that perform subsequent data discovery actions) are ready and/or have sufficient capacity to perform responsible tasks. In one or more embodiments, search indexing program 208 may notify file processing program 206 when search indexing program 208 is ready to generate a search index using the copy of batch 2, thereby enabling file processing program 206 to adjust the file-processing speed accordingly and/or enabling file processing program 206 to timely provide the copy of batch 2 to search indexing program 208 for search indexing.

[58] Incorporating "backpressure" or resistance, arrangement 200 enables file crawler 202, data fetcher 204, file processing program 206, and search indexing program 208 to operate in a coordinated fashion. Advantageously, no files or very few files may be dropped between data discovery tasks, and users' data discovery needs may be satisfied without significant latency experienced by the users.

[59] Fig. 2B shows a table illustrating examples of conditions under which "backpressure" or resistance is provided in facilitating data discovery in accordance with one or more embodiments of the present invention.

[60] As illustrated in the example of Fig. 2B, conditions of backpressure associated with file crawler 202 may include condition 222, which may include long file paths. For example, with reference to the example of Fig. 2A, file crawler 202 may resist against scanning batch 4 when one or more file path lengths associated with one or more files of batch 3 exceed a file path length threshold. Additionally or alternatively, condition 222 may include one or more long filer lengths. For example, file crawler 202 may resist against scanning batch 4 when one or more filer lengths associated with one or more files of batch 3 exceed a filer length threshold. Each of the thresholds may be predetermined or may be dynamically updated according to the status of components involved in performing data discovery tasks.

[61] As also illustrated in the example of Fig. 2B, conditions of backpressure associated with data fetcher 204 may include condition 224, small files and/or many files. For example, with reference to the example of Fig. 2A, data fetcher 204 may resist against obtaining the copy of batch 4 when one or more file sizes associated with one or more files of batch 1, batch 2, and/or batch 3 are smaller than a file size threshold. As another example, data fetcher 204 may resist against obtaining the copy of batch 4 when one or more file sizes associated with one or more files of batch 4 are smaller than a file size threshold. As another example, data fetcher 204 may resist against obtaining the copy of batch 4 when one or more amounts of files of batch 1, batch 2, and/or batch 3 exceed a file quantity threshold. As another example, data fetcher 204 may resist against obtaining the copy of batch 4 when an amount of files of batch 4 exceeds a file quantity threshold.

[62] As also illustrated in the example of Fig. 2B, conditions of backpressure associated with file processing program 206 may include condition 226, difficult file formats. For example, with reference to the example of Fig. 2A, file processing program 206 may resist against performing any services on the copy of batch 3 when one or more file formats associated with one or more files of batch 1 and/or batch 2 do not belong to a predetermined set of readily-recognizable file formats. As another example, file processing program 206 may resist against performing any services on the copy of batch 3 when one or more file formats associated with one or more files of batch 3 do not belong to a predetermined set of readily-recognizable file formats.

[63] As also illustrated in the example of Fig. 2B, conditions of backpressure associated with search indexing program 208 may include condition 228, amount of text to index. For example, with reference to the example of Fig. 2A, search indexing program 208 may resist against generating any search index from the copy of batch 2 when the amount of text to index in batch 1 exceeds a text amount threshold. As another example, search indexing program 208 may resist against generating any search index from the copy of batch 2 when the amount of text to index in batch 2 exceeds a text amount threshold.
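Collecting the example conditions of Fig. 2B into one place, a sketch might express them as per-component predicates over a batch; every threshold and the batch representation below are assumptions made only for illustration.

```python
# Hypothetical thresholds; in practice each may be predetermined or updated
# dynamically according to the status of the components.
FILE_PATH_LENGTH_THRESHOLD = 1024
FILE_SIZE_THRESHOLD = 4 * 1024          # bytes; "small files" condition
FILE_QUANTITY_THRESHOLD = 10_000        # "many files" condition
TEXT_AMOUNT_THRESHOLD = 10_000_000      # characters of text to index
RECOGNIZED_FORMATS = {".txt", ".pdf", ".doc", ".docx", ".eml"}

# Each predicate returns True when backpressure should be applied for a batch
# represented as a list of {"path", "size", "ext", "text_len"} dictionaries.
backpressure_conditions = {
    "file_crawler": lambda batch: any(
        len(f["path"]) > FILE_PATH_LENGTH_THRESHOLD for f in batch),
    "data_fetcher": lambda batch: (
        len(batch) > FILE_QUANTITY_THRESHOLD
        or any(f["size"] < FILE_SIZE_THRESHOLD for f in batch)),
    "file_processing": lambda batch: any(
        f["ext"] not in RECOGNIZED_FORMATS for f in batch),
    "search_indexing": lambda batch: sum(
        f["text_len"] for f in batch) > TEXT_AMOUNT_THRESHOLD,
}
```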

[64] As can be appreciated from the foregoing, embodiments of the present invention may include an integrated system for facilitating/performing data discovery and may incorporate "backpressure" in the data discovery workflow to prevent potential speed mismatch issues. As a result, various data discovery tasks may be performed in a coordinated manner. Advantageously, no files or very few files/data may be dropped between data discovery tasks, and users' data discovery needs may be effectively satisfied without significant latency experienced by the users; less storage space may be required to create a copy of all the collected data (e.g., collected data for legal processing); much less time is needed for data discovery since the integrated system is adaptive in nature with very few manual steps; end-to-end auditing and chain of custody are much more accurate since the collection, processing, analysis, review, and production may all be performed on the single integrated system; and the users of the data need to be trained on only one tool such that learning is simplified for the users.

[65] Embodiments of the invention may generate hash codes utilizing content data of files and may utilize the hash codes to identify files, such that files having the same content may be identified by the same hash code even if the files have different filenames and/or different metadata. As a result, duplication of data discovery actions on the same content data may be prevented. Advantageously, data discovery efficiency may be substantially improved, and/or cost associated with performing data discovery may be reduced.

[66] Embodiments of the invention may incorporate checkpoints for providing at least status information associated with scanning performed by crawlers. The crawlers may resume the scanning from the checkpoint after an interruption of the scanning, for example, caused by shut-down of a data source (e.g., a data storage device), without repeatedly scanning data that has been previously scanned. Advantageously, data discovery efficiency and/or cost may be optimized.

[67] While this invention has been described in terms of several embodiments, there are alterations, permutations, and equivalents, which fall within the scope of this invention. It should also be noted that there are many alternative ways of implementing the methods and apparatuses of the present invention. Furthermore, embodiments of the present invention may find utility in other applications. The abstract section is provided herein for convenience and, due to word count limitation, is accordingly written for reading convenience and should not be employed to limit the scope of the claims. It is therefore intended that the following appended claims be interpreted as including all such alterations, permutations, and equivalents as fall within the true spirit and scope of the present invention.