

Title:
METHOD AND SYSTEM FOR QUERY PROCESSING OVER TENSOR RUNTIMES
Document Type and Number:
WIPO Patent Application WO/2023/146628
Kind Code:
A1
Abstract:
Example aspects include techniques for query processing over deep neural network runtimes. These techniques may include receiving a query including one or more query operators and determining a query representation based on the one or more query operators. In addition, the techniques may include determining a neural network program based on the query representation, the neural network program including one or more neural network operators for performing the query in a neural network runtime, generating a neural network data structure based on a dataset associated with the query, and executing the neural network program in the neural network runtime over the neural network data structure to generate a query result.

Inventors:
INTERLANDI MATTEO (US)
KARANASOS KONSTANTINOS (US)
HE DONG (US)
BANDA DALITSO HANSINI (US)
CAMACHO RODRIGUEZ JESUS (US)
SEN RATHIJIT (US)
NAKANDALA SUPUN CHATHURANG (US)
Application Number:
PCT/US2022/050997
Publication Date:
August 03, 2023
Filing Date:
November 25, 2022
Assignee:
MICROSOFT TECHNOLOGY LICENSING LLC (US)
International Classes:
G06F16/2453; G06F16/245; G06F16/2452
Other References:
HOLANDA PEDRO ET AL: "Relational Queries with a Tensor Processing Unit", PROCEEDINGS OF THE 15TH INTERNATIONAL WORKSHOP ON DATA MANAGEMENT ON NEW HARDWARE, 1 July 2019 (2019-07-01), New York, NY, USA, pages 1 - 3, XP093035014, Retrieved from the Internet [retrieved on 20230327], DOI: 10.1145/3329785.3329932
Attorney, Agent or Firm:
CHATTERJEE, Aaron C. et al. (US)
Claims:
CLAIMS

1. A method comprising: receiving a query including one or more query operators; determining a query representation based on the one or more query operators; determining a neural network program based on the query representation, the neural network program including one or more neural network operators for performing the query in a neural network runtime; generating a neural network data structure based on a dataset associated with the query; and executing the neural network program in the neural network runtime over the neural network data structure to generate a query result.

2. The method of claim 1, wherein the query representation is a query plan, and determining the query representation comprises generating the query plan via a query optimizer.

3. The method of claim 1, wherein determining the neural network program based on the query representation comprises: identifying a query operator of the one or more query operators; and determining a neural network operator of the one or more neural network operators, the neural network operator configured to perform at least a function of the query operator.

4. The method of claim 1, wherein the neural network program includes a tensor program, the one or more neural network operators include a tensor operator, and the neural network runtime includes a tensor runtime.

5. The method of claim 1, wherein the dataset includes columnar data, and generating the neural network data structure based on the dataset comprises generating an n-dimensional array based at least in part on a data type of the columnar data.

6. The method of claim 1, wherein the one or more query operators includes a structured query language operator and the one or more neural network operators includes a transformation operator, a reduction operator, an arithmetic operator, or a logical operator.

7. The method of claim 1, wherein the neural network runtime is configured to compile the neural network program over a plurality of processing hardware.

8. The method of claim 1, wherein the query includes a machine learning operator executable within the neural network runtime.

9. A system comprising: a memory storing instructions thereon; and at least one processor coupled with the memory and configured by the instructions to: receive a query including one or more query operators; determine a query representation based on the one or more query operators; determine a neural network program based on the query representation, the neural network program including one or more neural network operators for performing the query in a neural network runtime; generate a neural network data structure based on a dataset associated with the query; and execute the neural network program in the neural network runtime over the neural network data structure to generate a query result.

10. The system of claim 9, wherein the query representation is a query plan, and to determine the query representation, the at least one processor is further configured by the instructions to generate the query plan via a query optimizer.

11. The system of claim 9, wherein to determine the neural network program based on the query representation, the at least one processor is further configured by the instructions to: identify a query operator of the one or more query operators; and determine a neural network operator of the one or more neural network operators, the neural network operator configured to perform at least a function of the query operator.

12. The system of claim 9, wherein the neural network program includes a tensor program, the one or more neural network operators include a tensor operator, and the neural network runtime includes a tensor runtime.

13. The system of claim 9, wherein the dataset includes columnar data, and to generate the neural network data structure based on the dataset, the at least one processor is further configured by the instructions to generate an n-dimensional array based at least in part on a data type of the columnar data.

14. The system of claim 9, wherein the one or more query operators includes a structured query language operator and the one or more neural network operators includes a transformation operator, a reduction operator, an arithmetic operator, or a logical operator.

15. The system of claim 9, wherein the query includes a machine learning operator executable within the neural network runtime.

Description:
METHOD AND SYSTEM FOR QUERY PROCESSING OVER TENSOR RUNTIMES

BACKGROUND

Deep Learning (DL) has created a growing demand for simpler ways to develop complex models and efficient ways to execute them. Thus, significant effort has gone into the development of frameworks that support a variety of DL models and run seamlessly over heterogeneous and distributed hardware. Increasingly, specialized hardware and hardware acceleration are being used in DL applications to support DL models. Moreover, the specialized hardware and hardware acceleration techniques are tailored for the performance of DL operations. As a result, query processing systems (e.g., database management systems), which are typically configured to employ central processing units (CPUs), are unable to perform database operations on DL systems. Consequently, query processing systems are currently prevented from reaping the benefits of investment in DL, much less from combining with DL applications to leverage machine learning alongside data management.

SUMMARY

The following presents a simplified summary of one or more implementations of the present disclosure in order to provide a basic understanding of such implementations. This summary is not an extensive overview of all contemplated implementations, and is intended to neither identify key or critical elements of all implementations nor delineate the scope of any or all implementations. Its sole purpose is to present some concepts of one or more implementations of the present disclosure in a simplified form as a prelude to the more detailed description that is presented later.

In an aspect, a method may include receiving a query including one or more query operators, determining a query representation based on the one or more query operators, and determining a neural network program based on the query representation, the neural network program including one or more neural network operators for performing the query in a neural network runtime. Further, the method may include generating a neural network data structure based on a dataset associated with the query and executing the neural network program in the neural network runtime over the neural network data structure to generate a query result.

In another aspect, a device may include a memory storing instructions, and at least one processor coupled with the memory and configured to execute the instructions to: receive a query including one or more query operators, determine a query representation based on the one or more query operators, determine a neural network program based on the query representation, the neural network program including one or more neural network operators for performing the query in a neural network runtime, generate a neural network data structure based on a dataset associated with the query, and execute the neural network program in the neural network runtime over the neural network data structure to generate a query result.

In another aspect, an example computer-readable medium (e.g., non-transitory computer-readable medium) storing instructions for performing the methods described herein and an example apparatus including means for performing operations of the methods described herein are also disclosed.

Additional advantages and novel features relating to implementations of the present disclosure will be set forth in part in the description that follows, and in part will become more apparent to those skilled in the art upon examination of the following or upon learning by practice thereof.

BRIEF DESCRIPTION OF THE DRAWINGS

The Detailed Description is set forth with reference to the accompanying figures, in which the left-most digit of a reference number identifies the figure in which the reference number first appears. The use of the same reference numbers in the same or different figures indicates similar or identical items or features.

FIG. 1 illustrates an example architecture of a computing system implementing query processing over deep neural network (DNN) runtimes, in accordance with some aspects of the present disclosure.

FIG. 2A is a flow diagram illustrating an example method for generating a DNN program, in accordance with some aspects of the present disclosure.

FIG. 2B is a flow diagram illustrating an example method for generating a DNN data structure, in accordance with some aspects of the present disclosure.

FIG. 3 is a flow diagram illustrating an example method for query processing over DNN runtimes, in accordance with some aspects of the present disclosure.

FIG. 4 is a block diagram illustrating an example of a hardware implementation for a computing device(s), in accordance with some aspects of the present disclosure.

DETAILED DESCRIPTION

The detailed description set forth below in connection with the appended drawings is intended as a description of various configurations and is not intended to represent the only configurations in which the concepts described herein may be practiced. The detailed description includes specific details for the purpose of providing a thorough understanding of various concepts. However, it will be apparent to those skilled in the art that these concepts may be practiced without these specific details. In some instances, well-known components are shown in block diagram form in order to avoid obscuring such concepts.

This disclosure describes techniques for implementing query processing over deep neural network (DNN) runtimes. In particular, aspects of the present disclosure provide a query processing system configured to generate a DNN program from a database query, and execute the DNN program over a multi-platform DNN runtime. Accordingly, for example, a data processing system may employ the query processing system to perform queries via hardware specialized and/or optimized for DNNs, thereby improving performance of query processing while reducing development effort, leveraging the cross-platform compilation capabilities of DNN runtimes, and providing the ability to perform queries including machine learning prediction.

In accordance with some aspects of the present disclosure, a database may receive a query and determine a DNN program consisting of DNN operations corresponding to the database operations and conditions of the query.

Illustrative Environment

FIG. 1 is a diagram showing an example of a data processing system 100, in accordance with some aspects of the present disclosure.

As illustrated in FIG. 1, the data processing system 100 may include a query processing system 102 configured to process queries 104 over a data store 106. The data processing system 100 may further include a database component 108, a program generator 110, a data formatter 112, a DNN runtime 114, and one or more processing components 116.

In some aspects, the query processing system 102 may be a client device. Some examples of a client device include computing devices, smartphone devices, Internet of Things (IoT) devices, drones, robots, process automation equipment, sensors, control devices, vehicles, transportation equipment, tactile interaction equipment, virtual and augmented reality (VR and AR) devices, industrial machines, virtual machines, etc. In some aspects, the query processing system 102 may be a cloud computing platform that provides other computing devices with distributed storage and access to software, services, files, and/or data via one or more network(s), e.g., cellular networks, wireless networks, local area networks (LANs), wide area networks (WANs), personal area networks (PANs), the Internet, or any other type of network configured to communicate information between computing devices. As an example, the data processing system 100 may be a provider of software as a service (SaaS), search engine as a service (SEaaS), database as a service (DaaS), storage as a service (STaaS), or big data as a service (BDaaS) in a multi-tenancy environment via the Internet, and the query processing system 102 may be used to service queries 104(1)-(n) submitted to the data processing system 100.

The database component 108 may be configured to organize a collection of data on the data store 106. In some aspects, the data store 106 and database component 108 may reside on a single storage device or system, or on multiple storage devices or systems such as those available at one or more data centers. Further, the database component 108 may include various types of database services (e.g., relational, non-relational, structured query language (SQL), NoSQL) for storing, querying, and updating data. As illustrated in FIG. 1, in some aspects, the database component 108 may receive the queries 104(1)-(n) and transmit corresponding query responses 118(1)-(n). Further, the database component 108 may organize data of the data store 106 for any of various types of data processing services (e.g., query processing to perform functions such as anomaly detection, machine learning, data lookup, or any other type of data processing operation).

As illustrated in FIG. 1, the database component 108 may include a query optimizer 120 configured to generate query representations 122(1)-(n). For instance, the query optimizer 120 may receive the query 104(1) and generate the query representation 122(1) corresponding to the query 104(1). In some aspects, a query representation 122 may be a query plan. As used herein, a “query plan” may refer to one or more commands for executing a query over data. Further, in some aspects, the query 104(1) may include a first plurality of commands and the query representation 122(1) may include a second plurality of commands that are optimized to perform the query 104(1) over the data store 106. Additionally, a query representation 122 may be of a different format than the corresponding query 104. For example, the query representation 122(1) may encode the query as a graph, and the commands of the query representation may be nodes of the graph. Additionally, or alternatively, the query representations may be generated in a JSON or XML format.
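To make the graph form concrete, the sketch below serializes a plan for a simple selection query as JSON. The node and field names are illustrative assumptions for this sketch, not a format defined by the present disclosure:

```python
import json

# Hypothetical graph-structured query plan for "select a from xyz where b = c".
# Each command of the query representation becomes a node; graph edges are
# expressed through each node's "input" reference.
query_plan = {
    "nodes": [
        {"id": 0, "op": "TableScan", "table": "xyz", "columns": ["a", "b", "c"]},
        {"id": 1, "op": "Filter", "input": 0, "predicate": {"eq": ["b", "c"]}},
        {"id": 2, "op": "Project", "input": 1, "columns": ["a"]},
    ],
    "root": 2,
}

serialized = json.dumps(query_plan)  # JSON text a program generator could consume
roundtrip = json.loads(serialized)   # the graph survives a round trip
```

A downstream program generator would read such a serialized plan and visit its nodes in dependency order.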

The program generator 110 may be configured to generate DNN programs 124(1)-(n) based on the query representations 122(1)-(n). For example, the program generator 110 may be configured to generate a DNN program 124(1) that employs tensor operations to perform the query 104(1) as represented by the query representation 122(1). In some examples, a DNN program 124 may be a tensor program that employs tensor operations, or any other type of DNN program with DNN operations. Some examples of DNN operations include transposing, indexing, slicing, mathematical operations, linear algebra, random sampling, etc.

In some aspects, the program generator 110 may be configured to map a query command in a query language (e.g., SQL) to one or more DNN operations even though the feature and/or command sets of query languages and DNN APIs are vastly different and have different uses. For example, in some aspects, a query representation 122 may be a graph with each command of the query representation 122 represented as a node of the graph. Further, the program generator 110 may be configured to traverse the nodes of the graph, and determine the DNN operations of the DNN program 124 based on the one or more DNN operations corresponding to each node of the graph. Consequently, for example, the query processing system 102 may perform queries via hardware specialized and/or optimized for DNNs, thereby improving performance of query processing while reducing development effort, leveraging the cross-platform compilation capabilities of DNN runtimes, and providing the ability to perform queries including machine learning prediction.
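A minimal sketch of that traversal is shown below, with NumPy arrays standing in for tensors. The operator names and the dispatch logic are assumptions for illustration, not the actual mapping used by the program generator 110:

```python
import numpy as np

def compile_plan(nodes, tables):
    """Walk plan nodes in dependency order, translating each relational
    operator into tensor operations over per-column arrays."""
    results = {}
    for node in nodes:  # nodes assumed listed in dependency order
        if node["op"] == "TableScan":
            results[node["id"]] = tables[node["table"]]
        elif node["op"] == "Filter":
            cols = results[node["input"]]
            lhs, rhs = node["predicate"]["eq"]
            mask = cols[lhs] == cols[rhs]  # tensor comparison -> boolean mask
            results[node["id"]] = {k: v[mask] for k, v in cols.items()}
        elif node["op"] == "Project":
            cols = results[node["input"]]
            results[node["id"]] = {c: cols[c] for c in node["columns"]}
    return results[nodes[-1]["id"]]

# Toy table and plan for "select a from xyz where b = c"
tables = {"xyz": {"a": np.array([10, 20, 30]),
                  "b": np.array([1, 2, 3]),
                  "c": np.array([1, 9, 3])}}
plan = [{"id": 0, "op": "TableScan", "table": "xyz"},
        {"id": 1, "op": "Filter", "input": 0, "predicate": {"eq": ["b", "c"]}},
        {"id": 2, "op": "Project", "input": 1, "columns": ["a"]}]
out = compile_plan(plan, tables)
```

In this sketch the filter becomes a single element-wise comparison plus a masked gather, both of which are native operations in tensor runtimes.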

In some examples, the program generator 110 may provide DNN-based implementations (e.g., tensor-based implementations) for the following relational operators: selection, projection, sort, group-by aggregation, natural join (primary key-foreign key, hash-based, and sort-based implementations), left-outer join, left-semi join, and left-anti join. In addition, in some examples, the program generator 110 may provide DNN-based implementations (e.g., tensor-based implementations) for query expressions, e.g., comparison and arithmetic operations, functions on the date data type, in, case, and like statements, and aggregate expressions using the sum, average, min, max, and count aggregates (with or without distinct).

For example, in some aspects, a query 104 may include a primary key-foreign key join. Further, DNN runtimes do not provide an operation for SQL joins. As such, in some aspects, the DNN operations illustrated in TABLE 1 below may be selected by the program generator 110 to perform a primary key-foreign key join.
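The operator sequence itself is deferred to TABLE 1; as a hedged illustration (not necessarily the sequence in the table), a primary key-foreign key join can be phrased entirely in tensor operations, e.g., a sort, a vectorized binary search, and a gather, with NumPy standing in for the tensor runtime:

```python
import numpy as np

# Primary key-foreign key join as tensor operations: each foreign key
# matches exactly one (unique) primary key, so a binary search over the
# sorted primary keys yields the index of the joined row.
pk = np.array([101, 102, 103, 104])        # unique primary keys
pk_payload = np.array([5.0, 6.0, 7.0, 8.0])
fk = np.array([103, 101, 103, 102])        # foreign keys, each present in pk

order = np.argsort(pk)                     # sort the primary-key side
pos = np.searchsorted(pk[order], fk)       # binary-search every foreign key at once
idx = order[pos]                           # index of the matching pk row
joined = pk_payload[idx]                   # gather the joined payload column
```

All four steps (argsort, searchsorted, indexing, gather) are standard tensor-runtime primitives, which is what makes this join expressible without a dedicated join operator.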

As another example, in some aspects, a query 104 may include a generic hash join. Further, DNN runtimes do not provide an operation for SQL joins. As such, in some aspects, the DNN operations illustrated in TABLE 2 below may be selected by the program generator 110 to perform a generic hash join.
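TABLE 2's operator listing is likewise not reproduced here. As a simpler stand-in for a generic join with duplicate keys (not the bucketed hash variant the table describes, whose hashing bounds the comparison cost), one tensor-friendly formulation is a broadcast comparison followed by a nonzero scan:

```python
import numpy as np

# Generic equi-join with duplicate keys, phrased as tensor operations:
# compare every left key against every right key in one broadcast, then
# read off the coordinates of the matches. Quadratic in input sizes.
left = np.array([1, 2, 2, 3])
right = np.array([2, 3, 3, 5])

match = left[:, None] == right[None, :]  # |left| x |right| boolean matrix
li, ri = np.nonzero(match)               # index pairs of joined rows
pairs = list(zip(li.tolist(), ri.tolist()))
```

A hash-based variant would first partition both sides into buckets so that only keys in the same bucket are compared.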

As another example, in some aspects, a query 104 may include aggregation. Further, DNN runtimes do not provide an operation for aggregation. As such, in some aspects, the DNN operations illustrated in TABLE 3 below may be selected by the program generator 110 to perform aggregation.
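As a hedged illustration of the idea behind TABLE 3 (the exact operator sequence is in the table, not reproduced here), a group-by SUM can be expressed with tensor operations by mapping each group key to a dense group index and scatter-adding the values:

```python
import numpy as np

# Group-by SUM as tensor operations: densify the group keys, then
# scatter-add each value into the slot for its group.
keys = np.array([1, 2, 1, 3, 2, 1])
vals = np.array([10, 20, 30, 40, 50, 60])

groups, idx = np.unique(keys, return_inverse=True)  # dense group ids per row
sums = np.zeros(len(groups), dtype=vals.dtype)
np.add.at(sums, idx, vals)                          # unbuffered scatter-add
```

Other aggregates follow the same pattern with a different scatter reduction (e.g., maximum instead of add), and count is a scatter-add of ones.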

As another example, in some aspects, a query 104 may include a generic sort-merge join. Further, DNN runtimes do not provide an operation for a generic sort-merge join. As such, in some aspects, the DNN operations illustrated in TABLE 4 below may be selected by the program generator 110 to perform a generic sort-merge join.

The data formatter 112 may be configured to generate DNN data structures 126(1)-(n) based on query data 128 of the data store 106. Further, the DNN data structures 126(1)-(n) may be input into the DNN programs 124(1)-(n) to determine query responses 118(1)-(n) to the queries 104(1)-(n). As an example, the DNN program 124(1) may be a tensor program, and the data formatter may generate the DNN data structures 126(1)-(n) as tensors to be input into the DNN program 124(1). As used herein, a “tensor” may refer to a generalization of vectors and matrices to potentially higher dimensions. In some aspects, a tensor may be a data structure organized as an array of numbers. The tensor may be characterized by a degree or order of the tensor. A zeroth-order tensor is a scalar, a first-order tensor is a vector (i.e., a one-dimensional array), a second-order tensor is a two-dimensional array, and so forth. Each dimension of the tensor can have a different respective number of elements or values. In some examples, the data formatter may generate a tensor for each column of a database table. In addition, the dimensionality of the tensor may be based at least in part on the type of data stored in the column. As an example, a column of integers or Boolean values in the data store 106 may be represented as a one-dimensional tensor (e.g., a vector), while a column of string values may be represented as a two-dimensional tensor (e.g., a matrix).
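The string-column case can be sketched as follows, with NumPy standing in for the tensor runtime. Zero-padding to the longest string is an assumption made for this illustration; the disclosure does not fix a padding scheme:

```python
import numpy as np

def column_to_tensor(strings):
    """Encode a string column as a 2-D tensor: one row per database entry,
    one byte code per character, zero-padded to the longest string."""
    width = max(len(s) for s in strings)
    out = np.zeros((len(strings), width), dtype=np.uint8)
    for row, s in enumerate(strings):
        out[row, :len(s)] = np.frombuffer(s.encode("ascii"), dtype=np.uint8)
    return out

t = column_to_tensor(["cat", "mouse", "ox"])
```

An integer or Boolean column, by contrast, would map directly to a one-dimensional array with no padding step.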

The DNN runtime 114 may be an environment configured to execute the DNN programs 124(1)-(n) on the DNN data structures 126(1)-(n) over the one or more processing components 116(1)-(n) to generate the DNN program results 130(1)-(n) that may be used as the query responses 118(1)-(n). For example, the DNN runtime 114 may be a tensor runtime configured to execute tensor programs. In some aspects, the DNN runtime 114 may provide an executable environment or an interpreter that may be used to train DNN models during a training mode and to evaluate the DNN models in a non-training mode (e.g., inference or classification mode). During the inference mode, input data can be applied to the DNN model inputs and processed (e.g., classified) in accordance with the training of the DNN model.

In some aspects, the bulk of the processing operations performed in implementing a DNN consists of matrix-matrix or matrix-vector multiplications. Such operations are compute-intensive and memory-bandwidth intensive, where the size of a matrix may be, for example, 1000×1000 elements (e.g., 1000×1000 numbers, each including a sign, mantissa, and exponent) or larger. In some aspects, the DNN runtime 114 may apply techniques to the DNN operations of the DNN programs 124(1)-(n) to reduce the demands for computation as well as memory bandwidth in a given system, whether the system includes a field-programmable gate array (FPGA), a central processing unit (CPU), or another hardware platform. In some aspects, the DNN runtime may be provided by a DNN library or framework (e.g., PyTorch, TensorFlow, Apache TVM, etc.).

The one or more processing components 116(1)-(n) may be implemented as a CPU, a graphics processing unit (GPU), a custom or application-specific integrated circuit (ASIC) (e.g., including a system-on-chip (SoC) integrated circuit), an FPGA or other reconfigurable logic, or as a soft processor virtual machine hosted by a physical, general-purpose processor. In addition, in some aspects, the one or more processing components 116(1)-(n) may be configured to accelerate these basic machine learning computations to improve performance, reduce latency, and reduce the cost of deploying machine learning based applications. Further, the DNN runtime 114 may be configured to execute the DNN programs 124(1)-(n) using processor-specific details to further accelerate performance.

Example Processes

FIG. 2A is a flow diagram illustrating an example method 200 for generating a program representation, in accordance with some aspects of the present disclosure. For example, in some aspects, the DNN runtime may be a tensor runtime that performs tensor operations on tensors. Further, as illustrated in FIG. 2A, the database component 108 may receive a query 104 (e.g., “select a from xyz where b = c”) including one or more query operators and/or conditions (e.g., where b = c). As described in detail herein, the query optimizer 120 may generate a query representation 122(1) corresponding to the query 104, and the program generator 110 may employ the query representation 122(1) to determine tensor representations 204(1)-(n) for the operations/commands of the query representation 122(1). For example, the program generator 110 may generate a tensor representation 204(1) of the filter condition of the query 104 as represented within the query representation 122(1).

FIG. 2B is a flow diagram illustrating an example method 206 for generating a DNN data structure, in accordance with some aspects of the present disclosure. For example, in some aspects, the DNN runtime may be a tensor runtime that performs tensor operations on tensors. Further, as illustrated in FIG. 2B, the database component 108 may transmit query data 128 from the data store 106 to the data formatter 112. As described in detail herein, the data formatter 112 may generate DNN data structures 126(1)-(n) as tensors that represent query data 128 associated with the queries 104(1)-(n). For example, the data formatter 112 may generate a tensor representation 208(1) of the column x of the table xyz of the database component 108. In some aspects, column x may store string values, and the tensor representation 208(1) may be a matrix where each matrix row corresponds to a database entry within the column x and the i-th element of each matrix row corresponds to the i-th character of the database entry. Further, in some aspects, the i-th element may be a byte encoding of the i-th character of the database entry. Additionally, the DNN runtime may perform tensor operations using the tensor representation 204(1) and the tensor representation 208(1) to perform the query. As an example, the DNN runtime may perform a tensor operation using the tensor representation 204(1) and the tensor representation 208(1) to generate a mask that indicates which database entries of the column x meet the condition b = c.
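The mask step can be sketched as follows, with NumPy standing in for the tensor runtime and made-up column contents. Comparing against an encoded constant stands in for whatever condition the tensor representation of the filter encodes:

```python
import numpy as np

# Filtering a string column stored as a byte matrix: broadcast-compare
# every row against the encoded constant, then reduce along the character
# axis to get one boolean per database entry.
col_x = np.array([[102, 111, 111],   # "foo"
                  [ 98,  97, 114],   # "bar"
                  [102, 111, 111]],  # "foo"
                 dtype=np.uint8)
needle = np.frombuffer(b"foo", dtype=np.uint8)

mask = (col_x == needle).all(axis=1)  # True where every character matches
selected = col_x[mask]                # rows satisfying the condition
```

The comparison, the all-reduce, and the masked gather are each single tensor operations, so the whole filter compiles to a short tensor program.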

The processes described in FIG. 3 below are illustrated as a collection of blocks in a logical flow graph, which represent a sequence of operations that can be implemented in hardware, software, or a combination thereof. In the context of software, the blocks represent computer-executable instructions stored on one or more computer-readable storage media that, when executed by one or more processors, perform the recited operations. Generally, computer-executable instructions include routines, programs, objects, components, data structures, and the like that perform particular functions or implement particular abstract data types. The order in which the operations are described is not intended to be construed as a limitation, and any number of the described blocks can be combined in any order and/or in parallel to implement the processes. The operations described herein may, but need not, be implemented using the query processing system 102. By way of example and not limitation, the method 300 is described in the context of FIGS. 1, 2, and 4. For example, the operations may be performed by one or more of the query processing system 102, the database component 108, the program generator 110, the data formatter 112, the DNN runtime 114, and the one or more processing components 116.

FIG. 3 is a flow diagram illustrating an example method 300 for query processing over DNN runtimes, in accordance with some aspects of the present disclosure.

At block 302, the method 300 may include receiving a query including one or more query operators. For example, the database component 108 may receive a query 104(1) including one or more query operators. In some aspects, the query 104(1) may be a SQL query and/or include an ML operation.

Accordingly, the data processing system 100, the query processing system 102, the one or more processing components 116, the computing device 400, and/or the processor 402 executing the database component 108 may provide means for receiving a query including one or more query operators.

At block 304, the method 300 may include determining a query representation based on the one or more query operators. For example, the query optimizer 120 may generate a query representation 122(1) for the query 104(1). In some aspects, the query representation 122(1) may be a query plan for executing the query 104(1). In some aspects, the query representation 122(1) may be a graph representation of an optimized strategy for executing the query 104(1).

Accordingly, the data processing system 100, the query processing system 102, the one or more processing components 116, the computing device 400, and/or the processor 402 executing database component 108 or the query optimizer 120 may provide means for determining a query representation based on the one or more query operators.

At block 306, the method 300 may include determining a neural network program based on the query representation, the neural network program including one or more neural network operators for performing the query in a neural network runtime. For example, the program generator 110 may generate the DNN program 124(1) based on the query representation 122(1). In some aspects, the DNN program 124(1) may be a tensor program. Further, the DNN program 124(1) may include DNN operations for performing the query 104(1) in a DNN runtime 114. For example, the DNN program 124(1) may include tensor operations for performing the query 104(1) in a tensor runtime. In some aspects, the program generator 110 may generate the DNN program 124(1) by identifying one or more DNN operations corresponding to each of the query operators of the query representation 122(1).

Accordingly, the data processing system 100, the query processing system 102, the one or more processing components 116, the computing device 400, and/or the processor 402 executing the program generator 110 may provide means for determining a neural network program based on the query representation, the neural network program including one or more neural network operators for performing the query in a neural network runtime.

At block 308, the method 300 may include generating a neural network data structure based on a dataset associated with the query. For example, the data formatter 112 may identify query data within the data store 106 that is associated with the query 104(1), and generate one or more DNN data structures 126 corresponding to the query data. In some examples, the DNN data structures 126(1)-(n) may be tensors representing information stored in the data store 106.

Accordingly, the data processing system 100, the query processing system 102, the one or more processing components 116, the computing device 400, and/or the processor 402 executing the data formatter 112 may provide means for generating a neural network data structure based on a dataset associated with the query.

At block 310, the method 300 may include executing the neural network program in the neural network runtime over the neural network data structure to generate a query result. For example, the DNN runtime 114 may execute the DNN program 124(1) via one of the one or more processing components 116(l)-(n). For instance, the DNN runtime 114 may be a tensor runtime and the tensor runtime may execute the DNN program 124(1) on custom hardware configured to accelerate performance of the DNN program 124(1).

Accordingly, the data processing system 100, the query processing system 102, the one or more processing components 116, the computing device 400, and/or the processor 402 executing the DNN runtime 114 may provide means for executing the neural network program in the neural network runtime over the neural network data structure to generate a query result.

While the operations are described as being implemented by one or more computing devices, in other examples various systems of computing devices may be employed. For instance, a system of multiple devices may be used to perform any of the operations noted above in conjunction with each other.

Illustrative Computing Device

Referring now to FIG. 4, an example of a computing device(s) 400 (e.g., query processing system 102) is illustrated. In one example, the computing device(s) 400 includes the processor 402 (e.g., the one or more processing components 116) for carrying out processing functions associated with one or more of the components and functions described herein. The processor 402 can include a single set or multiple sets of processors or multi-core processors. Moreover, the processor 402 may be implemented as an integrated processing system and/or a distributed processing system. In an example, the processor 402 includes, but is not limited to, any processor specially programmed as described herein, including a controller, a microcontroller, a central processing unit (CPU), a graphics processing unit (GPU), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a system on chip (SoC), or other programmable logic or state machine. Further, the processor 402 may include other processing components such as one or more arithmetic logic units (ALUs), registers, or control units.

In an example, the computing device 400 also includes memory 404 for storing instructions executable by the processor 402 for carrying out the functions described herein. The memory 404 may be configured for storing data and/or computer-executable instructions defining and/or associated with the query processing system 102, the queries 104(1)-(n), the data store 106, the database component 108, the program generator 110, the data formatter 112, the DNN runtime 114, the query responses 118(1)-(n), the query optimizer 120, the query representations 122(1)-(n), the DNN programs 124(1)-(n), the DNN data structures 126(1)-(n), and the DNN program results 128(1)-(n), and the processor 402 may execute the query processing system 102, the database component 108, the program generator 110, the data formatter 112, the DNN runtime 114, the query optimizer 120, and the DNN programs 124(1)-(n). An example of memory 404 may include, but is not limited to, a type of memory usable by a computer, such as random access memory (RAM), read only memory (ROM), tapes, magnetic discs, optical discs, volatile memory, nonvolatile memory, and any combination thereof. In an example, the memory 404 may store local versions of applications being executed by the processor 402.

The example computing device 400 may include a communications component 410 that provides for establishing and maintaining communications with one or more parties utilizing hardware, software, and services as described herein. The communications component 410 may carry communications between components on the computing device 400, as well as between the computing device 400 and external devices, such as devices located across a communications network and/or devices serially or locally connected to the computing device 400. For example, the communications component 410 may include one or more buses, and may further include transmit chain components and receive chain components associated with a transmitter and receiver, respectively, operable for interfacing with external devices. In an implementation, for example, the communications component 410 may include a connection to communicatively couple the client devices 104(1)-(n) to the processor 402.

The example computing device 400 may include a data store 412, which may be any suitable combination of hardware and/or software, and which provides for mass storage of information, databases, and programs employed in connection with implementations described herein. For example, the data store 412 may be a data repository for the operating system 406 and/or the applications 408.

The example computing device 400 may include a user interface component 414 operable to receive inputs from a user of the computing device 400 and further operable to generate outputs for presentation to the user. The user interface component 414 may include one or more input devices, including but not limited to a keyboard, a number pad, a mouse, a touch-sensitive display (e.g., display 416), a digitizer, a navigation key, a function key, a microphone, a voice recognition component, any other mechanism capable of receiving an input from a user, or any combination thereof. Further, the user interface component 414 may include one or more output devices, including but not limited to a display (e.g., display 416), a speaker, a haptic feedback mechanism, a printer, any other mechanism capable of presenting an output to a user, or any combination thereof.

In an implementation, the user interface component 414 may transmit and/or receive messages corresponding to the operation of the operating system 406 and/or the applications 408. In addition, the processor 402 may execute the operating system 406 and/or the applications 408, which may be stored in the memory 404 or the data store 412.

Further, one or more of the subcomponents of the query processing system 102, the database component 108, the program generator 110, the data formatter 112, the DNN runtime 114, the query optimizer 120, and the DNN programs 124(1)-(n) may be implemented in one or more of the processor 402, the applications 408, the operating system 406, and/or the user interface component 414 such that the subcomponents of the query processing system 102, the database component 108, the program generator 110, the data formatter 112, the DNN runtime 114, the query optimizer 120, and the DNN programs 124(1)-(n) are spread out between the components/subcomponents of the computing device 400.

Conclusion

In closing, although the various embodiments have been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described. Rather, the specific features and acts are disclosed as example forms of implementing the claimed subject matter.