

Title:
SELECTING AN INPUT MODE FOR A VIRTUAL ASSISTANT
Document Type and Number:
WIPO Patent Application WO/2019/222076
Kind Code:
A1
Abstract:
Methods, systems, and apparatus for selecting an input mode are described. In one aspect, a method includes receiving request data specifying a request to launch a virtual assistant application from a lock screen of a mobile device. In response to receiving the request data, input signals are obtained. A selection of an input mode for the virtual assistant application is made, from candidate input modes, based on the input signals. Each candidate input mode is a different input type from each other input type of each other candidate input mode. The input types include an image type and an audio type. The input mode of the image type receives pixel data for input to the virtual assistant application. The input mode of the audio type receives audio input for the virtual assistant application. The virtual assistant application presents content selected based on input signals received using the selected input mode.

Inventors:
BADR IBRAHIM (CH)
Application Number:
PCT/US2019/031969
Publication Date:
November 21, 2019
Filing Date:
May 13, 2019
Assignee:
GOOGLE LLC (US)
International Classes:
G06F3/16
Foreign References:
US20150088923A1 (2015-03-26)
US20160019240A1 (2016-01-21)
US20160170710A1 (2016-06-16)
EP2940556A1 (2015-11-04)
Attorney, Agent or Firm:
KING, Clint et al. (US)
Claims:
CLAIMS

What is claimed is:

1. A method performed by one or more data processing apparatus, the method comprising:

receiving request data specifying a request to launch a virtual assistant application from a lock screen of a mobile device;

in response to receiving the request data, obtaining a plurality of input signals, each of the input signals being of an input type different from each other input type of each other input signal, and each input signal being an input for the virtual assistant, the input types for the input signals including an image type and an audio type, wherein the input signal of the image type is pixel data for an image captured by a camera of the mobile device, and the input signal of the audio type is audio data for audio input captured by a microphone of the mobile device;

selecting, based on the plurality of input signals and from multiple candidate input modes, an input mode for the virtual assistant application, wherein each candidate input mode is of an input type different from each other input type of each other candidate input mode, and the input types include an image type and an audio type, wherein the input mode of the image type receives pixel data for input to the virtual assistant application, and the input mode of the audio type receives audio input for the virtual assistant application; and

presenting, by the virtual assistant application, content to a user of the mobile device, the presented content being selected based on the input signal received using the selected input mode.

2. The method of claim 1, wherein selecting, based on the plurality of input signals and from multiple candidate input modes, an input mode for the virtual assistant application comprises:

determining, based on the pixel data, whether the image includes one or more recognized objects;

determining, based on the audio data, whether the audio includes one or more recognized spoken words; and

selecting, based on the respective determinations, at least one of the input modes.

3. The method of claim 1 or 2, further comprising transmitting, to a remote server, input of the plurality of input signals based upon the selected input mode;

wherein the content presented to the user is received from the remote server based upon the transmitted input of the plurality of input signals.

4. The method of claim 1 or 2, further comprising selecting a user interface for presentation by the virtual assistant application based on the selected input mode, wherein the content is presented using the selected user interface.

5. The method of any preceding claim, wherein:

obtaining a plurality of input signals comprises determining a location of the mobile device with respect to the user based on data received from one or more of an accelerometer of the mobile device or a gyroscope of the mobile device; and

selecting, based on the plurality of input signals and from multiple candidate input modes, an input mode for the virtual assistant application comprises selecting the input mode based on the location of the mobile device with respect to the user.

6. The method of any preceding claim, wherein:

obtaining a plurality of input signals comprises determining a direction in which the camera of the mobile device is pointing based on data received from one or more of an accelerometer of the mobile device or a gyroscope of the mobile device; and

selecting, based on the plurality of input signals and from multiple candidate input modes, an input mode for the virtual assistant application comprises selecting the input mode based on the direction in which the camera of the mobile device is pointing.

7. The method of any preceding claim, wherein:

obtaining one or more signals related to the inputs for the virtual assistant comprises obtaining geographic location data specifying a geographic location of the mobile device; and

selecting, based on the one or more signals and from multiple candidate input modes, an input mode for the virtual assistant application comprises selecting the input mode based on the geographic location of the mobile device.

8. The method of any preceding claim, wherein selecting, based on the one or more signals and from multiple candidate input modes, an input mode for the virtual assistant application comprises determining whether the voice input includes a query and selecting an audio input mode in response to determining that the voice input includes the query.

9. The method of any preceding claim, wherein presenting, by the virtual assistant application, content to a user of the mobile device comprises at least one of presenting image content in a user interface of the virtual assistant application, presenting video content in the user interface of the virtual assistant application, or playing audio content.

10. A computer-readable medium carrying a computer program comprising computer readable instructions configured to cause a computer to carry out a method according to any preceding claim.

11. A computer system comprising:

a memory storing processor readable instructions; and

one or more processors arranged to read and execute the processor readable instructions, wherein the processor readable instructions comprise instructions arranged to control the computer system to carry out a method according to any one of claims 1 to 9.

Description:
SELECTING AN INPUT MODE FOR A VIRTUAL ASSISTANT

BACKGROUND

[0001] Virtual assistants are software applications that can perform tasks or services for users, such as answer questions, play music, schedule events in a calendar application, etc. The virtual assistants typically receive requests via voice or text in natural language form. For example, a user can ask a virtual assistant a question using voice, and the virtual assistant can recognize the voice input and provide an answer to the question using a speaker or by presenting the answer in a user interface.

SUMMARY

[0002] This specification describes technologies relating to selecting an input mode for providing content to a user based on one or more signals. In general, one innovative aspect of the subject matter described in this specification can be embodied in methods that include receiving request data specifying a request to launch a virtual assistant application from a lock screen of a mobile device; in response to receiving the request data, obtaining input signals, each of the input signals being of an input type different from each other input type of each other input signal, and each input signal being an input for the virtual assistant, the input types for the input signals including an image type and an audio type, wherein the input signal of the image type is pixel data for an image captured by a camera of the mobile device, and the input signal of the audio type is audio data for audio input captured by a microphone of the mobile device; selecting, based on the input signals and from multiple candidate input modes, an input mode for the virtual assistant application, wherein each candidate input mode is of an input type different from each other input type of each other candidate input mode, and the input types include an image type and an audio type, wherein the input mode of the image type receives pixel data for input to the virtual assistant application, and the input mode of the audio type receives audio input for the virtual assistant application; and presenting, by the virtual assistant application, content to a user of the mobile device, the presented content being selected based on the input signal received using the selected input mode. Other implementations of this aspect include corresponding apparatus, devices, systems, and computer programs, configured to perform the actions of the methods, encoded on computer storage devices.

[0003] These and other implementations can each optionally include one or more of the following features. In some aspects, selecting, based on the input signals and from multiple candidate input modes, an input mode for the virtual assistant application includes determining, based on the pixel data, whether the image includes one or more recognized objects, determining, based on the audio data, whether the audio includes one or more recognized spoken words, and selecting, based on the respective determinations, at least one of the input modes.

[0004] Some aspects include transmitting, to a remote server, input of the plurality of input signals based upon the selected input mode. The content presented to the user can be received from the remote server based upon the transmitted input of the input signals.

[0005] Some aspects include selecting a user interface for presentation by the virtual assistant application based on the selected input mode. The content can be presented using the selected user interface.

[0006] In some aspects, obtaining input signals includes determining a location of the mobile device with respect to the user based on data received from one or more of an accelerometer of the mobile device or a gyroscope of the mobile device. Selecting, based on the input signals and from multiple candidate input modes, an input mode for the virtual assistant application can include selecting the input mode based on the location of the mobile device with respect to the user.

[0007] In some aspects, obtaining input signals includes determining a direction in which the camera of the mobile device is pointing based on data received from one or more of an accelerometer of the mobile device or a gyroscope of the mobile device. Selecting, based on the input signals and from multiple candidate input modes, an input mode for the virtual assistant application can include selecting the input mode based on the direction in which the camera of the mobile device is pointing.

[0008] In some aspects, obtaining one or more signals related to the inputs for the virtual assistant can include obtaining geographic location data specifying a geographic location of the mobile device. Selecting, based on the one or more signals and from multiple candidate input modes, an input mode for the virtual assistant application can include selecting the input mode based on the geographic location of the mobile device.

[0009] In some aspects, selecting, based on the one or more signals and from multiple candidate input modes, an input mode for the virtual assistant application can include determining whether the voice input includes a query and selecting an audio input mode in response to determining that the voice input includes the query. In some aspects, presenting, by the virtual assistant application, content to a user of the mobile device can include at least one of presenting image content in a user interface of the virtual assistant application, presenting video content in the user interface of the virtual assistant application, or playing audio content.

[0010] The subject matter described in this specification can be implemented in particular embodiments so as to realize one or more of the following advantages. By automatically determining an input mode based upon received input signals, the input signals can be processed more effectively and efficiently. For example, by first determining an input mode based upon the received signals, further processing of input signals that do not provide useful input can be avoided. For example, it may be determined that received image or audio data is not an input signal provided as part of a query, and computationally expensive image or audio processing techniques, such as techniques for object recognition and/or classification, may therefore not be applied to a received image-based input signal. Transmission of input signals to a remote device may additionally or alternatively be avoided, avoiding bandwidth usage for transmission of signals to the remote device that are not suitable for processing at the remote device. By selecting an input mode in response to a user launching a virtual assistant application from a lock screen (or other user interface element), the virtual assistant application can identify and present content more quickly based on input signals received from the selected input mode than by waiting for the user to select an input mode or prompting the user to select from multiple input modes. Using the appropriate input mode allows the virtual assistant application to more accurately respond to the user’s request, e.g., by providing the appropriate content in response to the user’s request.

[0011] This prevents the user from having to make multiple requests, which, in turn, reduces the number of times computing system(s) process the requests to identify and select content. Aggregated over many users, this can significantly improve the functioning of computers of a content distribution system that selects the content by reducing the processing demands placed on the computers. As the content distribution system may be accessed over a network, reducing the number of requests can also reduce the amount of bandwidth consumed, allowing bandwidth for other network traffic and/or increasing the speed of the network.

[0012] By selecting the appropriate input mode for the user, the user is not burdened with having to select an input mode for each request. This can improve the user experience and allow for quicker requests by the user and more flexibility for the user. For example, the user can hold the mobile device such that the camera is pointing at an object and obtain information about the object without having to also interact with the mobile device to select a camera input mode.

[0013] Various features and advantages of the foregoing subject matter are described below with respect to the figures. Additional features and advantages are apparent from the subject matter described herein and the claims.

BRIEF DESCRIPTION OF THE DRAWINGS

[0014] FIG. 1 is a block diagram of an environment in which a virtual assistant application selects an input mode and presents content selected based on an input signal received using the selected input mode.

[0015] FIG. 2 depicts a sequence of example screen shots of a mobile device that selects an input mode and presents content selected based on an input signal received using the selected input mode.

[0016] FIG. 3 depicts another sequence of example screen shots of a mobile device that selects an input mode and presents content selected based on an input signal received using the selected input mode.

[0017] FIG. 4 is a flow chart of an example process for selecting an input mode for a virtual assistant application and presenting content based on an input signal received using the selected input mode.

[0018] Like reference numbers and designations in the various drawings indicate like elements.

DETAILED DESCRIPTION

[0019] In general, systems and techniques described herein can select an input mode for use in selecting content to present to a user. The input mode can be selected from multiple different input modes where each input mode is of an input type different from each other input type of each other input mode. For example, the input modes can include an audio input mode of an audio type (e.g., voice input), an image input mode of an image type, a text input mode of a text type, and/or a combination input mode of a combination of two or more input types (e.g., voice and image). The input mode can be selected in response to a request to launch a virtual assistant application (or another application). The virtual assistant application can select the input mode based on input signal(s) received from the multiple different candidate input modes.

[0020] For example, a user can request to launch a virtual assistant application from a lock screen of a mobile device by making a gesture, such as swiping an icon presented on the lock screen in a particular direction. In response to detecting the request, the virtual assistant application can obtain one or more signals and determine which input mode to use in identifying content based on the one or more signals.

[0021] The signals can be related to the input modes of the virtual assistant application. For example, the signals can include words or phrases recognized in voice input, pixel data of a viewfinder of a camera of the mobile device (e.g., whether an object is detected in the viewfinder and/or the object or type of object recognized), and/or a position and/or orientation of the mobile device (e.g., whether the mobile device is near the user’s face or facing downwards). The signals can also include the geographic location of the mobile device (e.g., in what type of building the mobile device is located), environmental signals (e.g., the amount of lighting and/or ambient noise), the user’s history or preferences for input modes based on previous signal(s), and/or other appropriate signals.

[0022] For example, if the voice input includes a question and an object is not recognized in pixel data that represents what is shown in the viewfinder (e.g., what is being captured by the camera), a voice input mode may be selected as the user is likely asking the virtual assistant application the question and not pointing the camera at an object in which the user is interested. In another example, if the user is holding the mobile device near the user’s face and one or more objects are detected in the pixel data, the user is likely requesting information about one of the objects.

[0023] FIG. 1 is a block diagram of an environment 100 in which a virtual assistant application 116 selects an input mode and presents content selected based on an input signal received using the selected input mode. The virtual assistant application 116 can be installed on and/or executed by a mobile device 110. A mobile device 110 is an electronic device that is capable of sending and receiving data over a data communication network 140. Example mobile devices 110 include smart phones, tablet computing devices, wearable computing devices (e.g., smart watches), and other devices that can send and receive data over the network 140. The network 140 can include a local area network (LAN), a wide area network (WAN), the Internet, a mobile network, or a combination thereof.

[0024] The virtual assistant application 116 can be implemented as a native application developed for a particular platform or a particular device. The virtual assistant application 116 can perform tasks or services for a user of the mobile device 110. For example, the virtual assistant application 116 can answer user questions, identify and present data related to objects (e.g., objects depicted in an image or viewfinder of a camera 111), play music, configure other applications (e.g., set up a meeting in a calendar application), and/or perform other appropriate tasks or services.

[0025] The virtual assistant application 116 can receive requests (e.g., questions, commands, or requests for information) via multiple different input modes. For example, the virtual assistant application 116 can receive voice requests from a microphone 112 of the mobile device 110. The virtual assistant application 116 can also detect and/or recognize objects in a viewfinder of a camera 111 of the mobile device 110 and interpret the fact that the user is pointing the camera 111 at the objects as a request for more information about the objects. In some implementations, the virtual assistant application 116 can also receive text requests, e.g., typed in using a physical or touch keypad of the mobile device 110.

[0026] The mobile device 110 can allow access to the virtual assistant application 116 from a lock screen of the mobile device 110. A lock screen is a user interface element that regulates access to the mobile device 110 by requiring the user to perform an action to access the mobile device 110. The mobile device 110 can be configured to launch the virtual assistant application 116 in response to detecting a particular user action (e.g., gesture) while the mobile device 110 is locked. For example, the user action can be a swipe of an icon in a particular direction, moving the mobile device 110 in a particular motion, speaking a particular command, or another appropriate action.

[0027] In response to detecting the particular user action, the mobile device 110 can launch the virtual assistant application 116 and present a user interface of the virtual assistant application 116. In addition, the virtual assistant application 116 can obtain one or more input signals for selecting from multiple different candidate input modes. Each candidate input mode can be of an input type different from each other input type of each other candidate input mode. The candidate input modes can include an image input mode of an image type and an audio input mode of an audio type. The image input mode can receive pixel data for input to the virtual assistant application. Similarly, the audio input mode can receive audio input for the virtual assistant application.

[0028] The candidate input modes can also include a combination input mode that receives inputs of two or more types (e.g., pixel data and audio data, or audio data and text data). As described above, the input mode can be selected based on one or more of the input signals, e.g., using a set of rules or a machine learning model.

[0029] Each of the input signals can be of an input type different from each other input type of each other input signal. For example, the input types can include an image type and an audio type. The input signal of the image type can be pixel data for one or more images captured by a camera of the mobile device, e.g., captured after the mobile device detects the request to launch the virtual assistant application. The pixel data can include a respective pixel data set for each image. Each pixel data set can represent the pixels of the viewfinder of the camera for a particular point in time. The pixel data in each pixel data set can include data specifying visual characteristics (e.g., color, intensity, brightness, etc.) of each pixel of the viewfinder of the camera.

[0030] The input signal of the audio type can be audio data for audio input captured by a microphone of the mobile device. For example, the audio data can include text that represents spoken words in the audio input if the audio input includes spoken words. For example, the virtual assistant application (or another application) can convert the audio data to text using audio transcription techniques.

[0031] The virtual assistant application 116 includes an input mode selector 117 that selects the input mode based on the input signal(s). The virtual assistant application 116 can begin obtaining the signals when the virtual assistant application 116 is launched. For example, the virtual assistant application 116 can begin obtaining audio input from the microphone 112 and pixel data for the viewfinder from the camera 111 when the virtual assistant application 116 is launched. The input mode selector 117 can monitor these signals until the input mode selector 117 has enough information to select an input mode. For example, the input mode selector 117 can monitor a stream of audio, pixel data for a stream of images, and/or the other signals until the input mode selector 117 obtains enough data to make a decision as to which input mode to select.

[0032] The determination of whether enough data has been obtained can be based on whether a query is detected in audio input, whether an object or a class of objects is detected in pixel data for the viewfinder, whether an object is recognized (e.g., the actual object is identified), whether the position of the mobile device with respect to the user can be determined, or a combination of one or more of these signals. In some implementations, the input mode selector 117 determines a confidence score for a candidate input mode based on one or more of these signals. If the confidence score meets or exceeds a threshold, the input mode selector 117 can select the candidate input mode.
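
The following minimal sketch, in Python, illustrates one way such a confidence-based selection could work; the signal names, weights, and threshold are purely hypothetical and are not taken from this specification:

# Illustrative sketch: confidence-based input mode selection.
# All names and thresholds are hypothetical, not from the specification.

CONFIDENCE_THRESHOLD = 0.7

def select_input_mode(signals):
    """Return the candidate input mode whose confidence meets the threshold.

    `signals` is a dict with hypothetical keys such as
    'query_detected', 'object_detected', and 'device_near_face'.
    """
    candidates = {"image": 0.0, "audio": 0.0}
    if signals.get("object_detected"):
        candidates["image"] += 0.5
    if signals.get("device_near_face"):
        candidates["image"] += 0.3
    if signals.get("query_detected"):
        candidates["audio"] += 0.6
    if not signals.get("object_detected"):
        candidates["audio"] += 0.2

    mode, score = max(candidates.items(), key=lambda item: item[1])
    return mode if score >= CONFIDENCE_THRESHOLD else None  # None: keep monitoring

# Example: a spoken query with no recognized object selects the audio mode.
print(select_input_mode({"query_detected": True, "object_detected": False}))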

[0033] The virtual assistant application 116 includes a voice recognizer 119 that can receive audio input from the microphone 112, translate the audio into text (if the audio includes spoken words), and provide the text to the virtual assistant application 116. The virtual assistant application 116 can then use the text (alone or in combination with other signal(s)) to select an input mode.

[0034] In some implementations, the input mode selector 117 can select the input mode based on whether text output by the voice recognizer 119 includes a query. For example, the input mode selector 117 can compare the text to a set of queries (e.g., previous queries received by virtual assistant applications from one or more users or a set of defined queries). If the text matches a query, the input mode selector 117 can select the audio input mode and use the text to identify content to present to the user. In another example, the input mode selector 117 uses a language model to determine the likelihood that the text includes a query. The language model can be trained using the set of queries, e.g., using labeled training data that specifies which text is a query and which text is not a query. The language model can output a likelihood that input text includes a query based on the input text.

[0035] The input mode selector 117 can use the data specifying whether the text includes a query alone or in combination with other signals to select the input mode. If the text includes a query, it is more likely that the user is requesting content based on the user’s voice input, which has been translated into the text. Thus, the input mode selector 117 may select an audio input mode and use the text to select content to present to the user when the text includes a query.
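
As an illustration of the query detection described in paragraphs [0034] and [0035], the sketch below uses a simple heuristic stand-in (a hypothetical list of question prefixes and a question-mark check) in place of a query set or trained language model:

# Illustrative sketch of query detection over transcribed text; the prefixes
# and scoring are hypothetical stand-ins, not the trained model described above.

KNOWN_QUERY_PREFIXES = ("how", "what", "where", "when", "who", "why", "is", "can")

def looks_like_query(text: str) -> bool:
    """Return True if the transcribed text resembles a query."""
    words = text.strip().lower().rstrip("?").split()
    if not words:
        return False
    starts_like_question = words[0] in KNOWN_QUERY_PREFIXES
    ends_with_question_mark = text.strip().endswith("?")
    return starts_like_question or ends_with_question_mark

print(looks_like_query("How fast was a 1961 Super Roadster?"))  # True
print(looks_like_query("I wonder where she bought that"))       # False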

[0036] The input mode selector 117 can select the input mode based on pixel data for the viewfinder of the camera 111. The virtual assistant application 116 can obtain pixel data from the camera 111 after the virtual assistant application 116 is launched. For example, the virtual assistant application 116 can obtain a stream of pixel data sets. Each pixel data set can represent the pixels of the viewfinder for a particular point in time. The pixel data in each pixel data set can include data specifying visual characteristics (e.g., color, intensity, brightness, etc.) of each pixel of the viewfinder.

[0037] In some implementations, the input mode selector 117 selects the input mode based on the overall brightness of the pixels. For example, if the overall brightness is low (e.g., less than a threshold), the user may be outside at night or in a dark location where it is unlikely the user is requesting content based on what can be detected in the viewfinder of the camera 111. However, if the overall brightness of the pixels is high (e.g., greater than a threshold), the user may be outside during the day or in a well-lit room where the user may be attempting to obtain information about what is in the viewfinder of the camera 111. The overall brightness of a pixel data set can be determined based on a combination of the visual characteristics of each pixel in the pixel data set (e.g., an average of the brightness values across all of the pixels in the pixel data set).
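
A minimal sketch of the overall-brightness computation described above, assuming RGB pixel tuples on a 0-255 scale and a hypothetical threshold:

# Illustrative sketch: overall brightness of a pixel data set as the mean
# per-pixel brightness, compared to a hypothetical threshold.

BRIGHTNESS_THRESHOLD = 80  # hypothetical value on a 0-255 scale

def overall_brightness(pixels):
    """`pixels` is a list of (r, g, b) tuples; brightness is the channel mean."""
    if not pixels:
        return 0.0
    per_pixel = [(r + g + b) / 3.0 for r, g, b in pixels]
    return sum(per_pixel) / len(per_pixel)

def viewfinder_likely_in_use(pixels):
    return overall_brightness(pixels) >= BRIGHTNESS_THRESHOLD

print(viewfinder_likely_in_use([(10, 12, 8), (15, 14, 9)]))          # False: dark scene
print(viewfinder_likely_in_use([(200, 190, 180), (210, 205, 195)]))  # True: well lit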

[0038] The input mode selector 117 can select the input mode based on whether an object is detected in the viewfinder (e.g., in one or more of the pixel data sets) and, if so, the identity of the object or the class of the object. For example, the virtual assistant application 116 can include an object recognizer 120 that attempts to detect and recognize (e.g., identify) objects in pixel data (or images). The object recognizer 120 can detect various objects, such as people, barcodes, landmarks, paintings, vehicles, and/or other types of objects using edge detection and/or other object recognition techniques.

[0039] In some implementations, the object recognizer 120 includes a coarse classifier that determines whether a pixel data set includes an object in one or more particular classes (e.g., categories) of objects. For example, the coarse classifier may detect that a pixel data set includes an object of a particular class, with or without recognizing the actual object.

[0040] The coarse classifier can detect the presence of a class of objects based on whether or not the image includes one or more features that are indicative of the class of objects. The coarse classifier can include a light-weight model to perform a low computational analysis to detect the presence of objects within its class(es) of objects. For example, the coarse classifier can detect, for each class of objects, a limited set of visual features depicted in the image to determine whether the image depicts an object that falls within the class of objects. In a particular example, the coarse classifier can detect whether an image depicts an object that is classified in one or more of the following classes: text, barcode, landmark, media object (e.g., album cover, movie poster, etc.), or artwork object (e.g., painting, sculpture, etc.). For barcodes, the coarse classifier can determine whether the image includes parallel lines with different widths.

[0041] In some implementations, the coarse classifier uses a trained machine learning model (e.g., a convolutional neural network) to classify images based on visual features of the images. For example, the machine learning model can be trained using labeled images that are labeled with their respective class(es). The machine learning model can be trained to classify images into zero or more of a particular set of classes of objects. The machine learning model can receive, as inputs, data related to the visual features of an image and output a classification into zero or more of the classes of objects in the particular set of classes of objects.

[0042] The coarse classifier can output data specifying whether a class of object has been detected in the image. The coarse classifier can also output a confidence value that indicates the confidence that the presence of a class of object has been detected in the image and/or a confidence value that indicates the confidence that an actual object, e.g., the Eiffel Tower, is depicted in the image.

[0043] The object recognizer 120 can provide, to the input mode selector 117, data specifying whether an object was detected in a pixel data set and, if an object was recognized, data identifying the recognized object. When a coarse classifier is used, the object recognizer 120 can provide, to the input mode selector 117, data specifying whether the presence of an object within at least one of the classes of objects has been detected and, if so, the detected class. As described above, the virtual assistant application 116 can receive a stream of pixel data sets. In this example, the object recognizer 120 can evaluate each pixel data set and provide this data to the input mode selector 117 for each (or at least a portion) of the pixel data sets in the stream.

[0044] The input mode selector 117 can select the input mode based on whether one or more pixel data sets include an object, the actual object recognized, and/or the class(es) of object(s) detected in the pixel data set. For example, if an object is detected in the viewfinder (based on a pixel data set for the viewfinder), it’s more likely the user is requesting content based on what’s in the viewfinder than if there are no detectable objects in the viewfinder. Thus, the input mode selector 117 can select an image input mode if an object or class of objects is detected in the pixel data.

[0045] The input mode selector 117 can select an image input mode rather than other input modes if particular objects or classes of objects are detected. For example, if a landmark or famous painting is detected, it’s more likely that the user is requesting content about the landmark or famous painting than if wood flooring or sand is detected.

[0046] The input mode selector 117 can also select an input mode based on a sequence of pixel data sets, e.g., in a stream. For example, if the objects recognized in the pixel data sets change over a short period of time (e.g., 2-5 seconds), it may be likely that the user is moving the mobile device 110 around and not trying to obtain content for a particular object, and an audio input mode may be selected if the audio input includes a query. However, if the same object is recognized in a sequence of pixel data sets, it is more likely the user is requesting content for the object and an image input mode may be selected.
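
The stability check described above could be sketched as follows; the window size and labels are hypothetical:

# Illustrative sketch: checking whether the same object label is recognized
# across a recent window of frames; window size and labels are hypothetical.

STABLE_FRAME_COUNT = 5  # e.g., frames spanning roughly two seconds

def object_is_stable(recognized_labels):
    """`recognized_labels` is the per-frame recognition result, newest last;
    None means no object was recognized in that frame."""
    recent = recognized_labels[-STABLE_FRAME_COUNT:]
    if len(recent) < STABLE_FRAME_COUNT or None in recent:
        return False
    return len(set(recent)) == 1  # the same object in every recent frame

print(object_is_stable(["car", "car", "car", "car", "car"]))    # True
print(object_is_stable(["car", "tree", "sign", "car", None]))   # False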

[0047] The input mode selector 117 can also select an input mode based on the location of the mobile device 110, e.g., with respect to the user, and/or the orientation of the mobile device 110. For example, if the mobile device 110 is near the user or the user’s face (e.g., within a threshold distance), it is likely that the user is viewing the viewfinder and is requesting content for the object(s) in the viewfinder, and an image input mode may be selected. If the mobile device is far from the user or the user’s face (e.g., greater than a threshold distance away), it is less likely that the user is viewing the viewfinder, and an audio input mode may be selected.

[0048] In some implementations, the input mode selector 117 can determine whether the mobile device 110 is close to the user’s face based on recent movements (e.g., within a threshold amount of time before the request to launch the virtual assistant application is received). For example, the input mode selector 117 can obtain data from an accelerometer 113 and/or a gyroscope 114 describing the motion of the mobile device 110, e.g., in a continuous stream. The input mode selector 117 can determine whether the mobile device 110 was moved upwards (e.g., at least a threshold distance) within the threshold amount of time prior to the request being received. If so, the input mode selector 117 can determine that the mobile device 110 is likely to be close to the user’s face. Similarly, if the mobile device 110 moved downwards at least a threshold distance, it’s likely that the mobile device is not close to the user’s face.
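
One hedged sketch of the upward-motion check in paragraph [0048], assuming hypothetical motion samples of the form (timestamp, vertical displacement) and hypothetical thresholds:

# Illustrative sketch: estimating whether the device was raised toward the
# user's face from recent vertical motion samples. Sample format, units, and
# thresholds are hypothetical.

UPWARD_DISTANCE_THRESHOLD_M = 0.25
TIME_WINDOW_S = 1.5

def likely_near_face(motion_samples, request_time):
    """`motion_samples` is a list of (timestamp_s, vertical_displacement_m)
    entries from the accelerometer/gyroscope pipeline, newest last."""
    recent = [d for t, d in motion_samples if request_time - t <= TIME_WINDOW_S]
    net_vertical = sum(recent)
    return net_vertical >= UPWARD_DISTANCE_THRESHOLD_M

samples = [(9.2, 0.05), (9.5, 0.12), (9.8, 0.15)]
print(likely_near_face(samples, request_time=10.0))  # True: raised about 0.32 m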

[0049] In some implementations, the input mode selector 117 can determine whether the mobile device 110 is close to the user’s face based on the audio input. For example, the input mode selector 117 can determine whether the mobile device 110 is close to the user’s face based on the intensity of voice input detected in the audio input. If the voice input is at least a threshold intensity, the input mode selector 117 can determine that the mobile device 110 is close to the user’s face.

[0050] The input mode selector 117 can also select an input mode based on the orientation of the mobile device 110 or the angle of the mobile device 110 with respect to the ground. The input mode selector 117 can determine the orientation or angle of the mobile device 110 based on data received from the accelerometer 113 and/or a gyroscope 114. If the mobile device 110 is oriented such that the camera 111 is facing horizontally or upwards from the user, it may indicate that the user is pointing the camera at an object to obtain content related to the object, and an image input mode may be selected. If the mobile device 110 is oriented such that the camera 111 is facing downwards from the user, it may indicate that the user is speaking to the mobile device 110 rather than pointing the camera at an object, and an audio input mode may be selected.

[0051] The input mode selector 117 can also select an input mode based on the location of the mobile device 110, e.g., the geographic location of the mobile device 110. For example, the mobile device 110 can include a Global Positioning System (GPS) sensor that determines the geographic location of the mobile device 110. The input mode selector 117 can use the geographic location to determine what building, type of building, outside area (e.g., park, landmark, monument), or other area the mobile device 110 is located in. For example, the input mode selector 117 can compare GPS coordinates for the mobile device 110 to GPS coordinates of known locations to determine what type of building or area the mobile device 110 is located in. Users may be more likely to request content based on images of objects than voice in certain types of locations (e.g., in museums, at landmarks, etc.). When the mobile device 110 is at (or within a threshold distance from) one of these locations, the input mode selector 117 can select an image input mode. If the mobile device 110 is located in a type of location where users are more likely to request content using voice requests, the input mode selector 117 can select an audio input mode.
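
A sketch of the geographic-location check, assuming a hypothetical list of known places and a standard haversine distance; the coordinates, radius, and mode preferences are illustrative only:

# Illustrative sketch: mapping the device's coordinates to a nearby known
# location type and a preferred input mode. Places, radius, and preferences
# are hypothetical.

import math

KNOWN_PLACES = [
    # (latitude, longitude, place type)
    (48.8584, 2.2945, "landmark"),
    (40.7794, -73.9632, "museum"),
]
IMAGE_PREFERRED_TYPES = {"landmark", "museum"}
NEARBY_RADIUS_M = 200

def distance_m(lat1, lon1, lat2, lon2):
    """Haversine distance between two coordinates, in meters."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def preferred_mode_for_location(lat, lon):
    for place_lat, place_lon, place_type in KNOWN_PLACES:
        if distance_m(lat, lon, place_lat, place_lon) <= NEARBY_RADIUS_M:
            return "image" if place_type in IMAGE_PREFERRED_TYPES else "audio"
    return None  # location does not inform the choice

print(preferred_mode_for_location(48.8585, 2.2946))  # "image": near the landmark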

[0052] The input mode selector 117 can select an input mode based on the user's history or preferences for input modes based on previous signal(s). For example, a user can select which input mode the user prefers for particular circumstances related to the signals. In a particular example, the user may select an image input mode when the mobile device is pointing at an upward angle, a particular class of object is detected in the viewfinder, or when the user speaks a particular phrase. If the signals match one of these circumstances, the input mode selector 117 can select the image input mode.

[0053] In another example, the input mode selector 117 can maintain a history of the input mode selected by the user for various values of the signals. For example, the input mode selector 117 can maintain, for a user, the number of times the user selects each input mode for each value of each individual signal or each combination of signals. If the user selects one input mode more often than the other input modes when signal(s) have particular value(s) or are within particular range(s), the input mode selector 117 can select the one input mode when the signal(s) have the particular value(s) or range of value(s). For example, if the user selects an image input mode when the overall pixel brightness is greater than a threshold value, the input mode selector 117 can select the image input mode when the overall brightness value for a pixel data set for the viewfinder is greater than the threshold value.

[0054] As the input mode selector 117 can obtain multiple signals when the virtual assistant application is launched, the selection of the appropriate input mode may be difficult. In some implementations, the input mode selector 117 processes a set of rules or other logic to select the input mode based on the signals. In one example, the rules can specify a hierarchy or relative importance of the signals. For example, detection of an object may have a higher weight than the location or orientation of the mobile device 110. In this example, if an object is detected in pixel data, the input mode selector 117 can select an image input mode irrespective of the position or orientation of the mobile device 110.

[0055] The input mode selector 117 can process the rules in order to select the input mode. For example, the first rule may specify that if an object is detected, an image input mode should be selected; if not, the second rule should be processed. The second rule may specify that if the audio input includes a query, an audio input mode should be selected. If not, the third rule should be processed, and so on until an input mode is selected.
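
The ordered rule processing in paragraph [0055] could be sketched as a simple cascade; the specific rules and the fallback mode here are hypothetical:

# Illustrative sketch of ordered rule processing; the rule set and the
# fallback are hypothetical.

def select_mode_by_rules(signals):
    rules = [
        # (predicate over the obtained signals, input mode to select)
        (lambda s: s.get("object_detected"), "image"),
        (lambda s: s.get("query_detected"), "audio"),
        (lambda s: s.get("device_near_face"), "image"),
    ]
    for predicate, mode in rules:
        if predicate(signals):
            return mode
    return "audio"  # hypothetical default when no rule applies

print(select_mode_by_rules({"object_detected": False, "query_detected": True}))  # audio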

[0056] In some implementations, the input mode selector 117 uses a trained machine learning model to select the input mode. The machine learning model can be trained using labeled training data that specifies, for each of multiple different combinations of signals, which input mode should be selected. The various signals obtained by the input mode selector 117 can be provided as inputs to the machine learning model. The output of the machine learning model is a selection of one of the input modes based on the inputs.
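
As an illustration only, the sketch below trains a toy classifier (using scikit-learn, which is not mentioned in this specification) on a hypothetical feature encoding of the signals and the corresponding labeled input mode selections:

# Illustrative sketch of a learned input mode selector; the feature encoding
# and training data here are toy placeholders, not the training procedure
# described in the specification.

from sklearn.tree import DecisionTreeClassifier

# Features: [query_detected, object_detected, device_near_face, camera_facing_up]
X_train = [
    [1, 0, 0, 0],
    [0, 1, 1, 1],
    [1, 1, 1, 1],
    [0, 0, 0, 0],
]
y_train = ["audio", "image", "image", "audio"]  # labeled selections

model = DecisionTreeClassifier().fit(X_train, y_train)

signals = [0, 1, 1, 1]  # no query, object detected, near face, camera raised
print(model.predict([signals])[0])  # likely "image" for this toy model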

[0057] In some implementations, the input mode selector 117 can select an input mode that is based on both audio and image inputs. For example, the input mode selector 117 can select such an input mode when the text translated from the audio is related to an object recognized in the pixel data for the viewfinder. The input mode selector 117 can determine that the text is related to the object when the text refers to the same object, the same class of object, or a characteristic of the object. In a particular example, the user may speak “how much does this car cost” while pointing the camera 111 of the mobile device 110 at a car. The input mode selector 117 can determine that both inputs are related to a car and select the audio and image input mode.
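
A sketch of the combined-mode check described above, with a hypothetical keyword mapping standing in for a richer relatedness determination:

# Illustrative sketch: selecting a combined audio-and-image mode when the
# transcribed text refers to the recognized object or its class. The keyword
# mapping is a hypothetical stand-in.

CLASS_KEYWORDS = {
    "car": {"car", "vehicle", "roadster"},
    "painting": {"painting", "artwork", "artist"},
}

def text_relates_to_object(text, object_class):
    words = set(text.lower().rstrip("?").split())
    return bool(words & CLASS_KEYWORDS.get(object_class, set()))

def select_combined_mode(text, object_class):
    if text and object_class and text_relates_to_object(text, object_class):
        return "audio+image"
    return None

print(select_combined_mode("how much does this car cost", "car"))  # audio+image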

[0058] After the input mode selector 117 selects an input mode, the virtual assistant application 116 can send inputs for the input mode to a content distribution system 150 over the network 140. For example, if an image input mode is selected, the virtual assistant application 116 can send one or more pixel data sets to the content distribution system 150. The pixel data set(s) can include pixel data for the viewfinder used to select the image input mode and/or pixel data sets captured after the image input mode is selected. For example, the virtual assistant application 116 can first send the pixel data set(s) used to select the image input mode. If the viewfinder is later pointing at a different object, the virtual assistant application 116 can send pixel data for the different object to the content distribution system 150. The virtual assistant application 116 may therefore transmit inputs to the content distribution system 150 based upon the selected input mode. For example, only inputs associated with an input mode that is selected may be transmitted to the content distribution system. The virtual assistant application 116 may therefore preprocess input signals to determine input signals that are useful for providing content to a user and/or that are intended as input from a user, and only transmit input signals that are determined to be useful for providing content to the user. Transmittal of input signals over a communications network may therefore be reduced by only sending useful input signals.

[0059] Similarly, the virtual assistant application 116 can send, to the content distribution system 150, text translated from audio input captured before and/or after an audio input mode is selected. For example, if the text translated after the audio input mode was selected includes a different query than the text translated from audio captured before the audio input mode was selected, the virtual assistant application 116 can send the text including the different query to the content distribution system 150.

[0060] The content distribution system 150 can select content for presentation to the user at the mobile device 110 based on the data (e.g., pixel data and/or text translated from audio) received from the mobile device 110. The content distribution system 150 includes one or more front-end servers 152 and one or more back-end servers 154. The front-end servers 152 can receive the data from the mobile devices 110 and provide the data to the back-end servers 154. The front-end servers 152 can also send content to the mobile devices 110 in response to receiving the data.

[0061] The back-end servers 154 include a content selection engine 156 that selects content for providing to mobile devices 110 in response to the data received from the mobile devices 110. As used herein, the term engine refers to a data processing apparatus that performs a set of tasks. The content selection engine 156 can select content from a content data storage unit 158 (e.g., one or more hard drives, flash memory, etc.) that stores content related to objects, information that may be requested by users, music, videos, images, information related to barcodes, and/or other appropriate content. The content selection engine 156 can select, from the content stored in the content data storage unit 158, one or more content items that are most relevant to the data received from the mobile device 110.

[0062] For example, if pixel data received from a mobile device 110 represents an image of a barcode, the content selection engine 156 can identify the object represented by the barcode and select content (e.g., images, title, etc.) related to the object. If the pixel data represents an image of a landmark, the content selection engine 156 can select content related to the landmark, e.g., photos of the landmark, a map to the landmark, etc. If text translated from voice input includes a query for a particular fact or characteristic of an object (e.g., what is the average temperature in Florida in April?), the content selection engine 156 can identify this average temperature from the data stored in the content data storage unit 158.

[0063] The back-end servers 154 can provide content selected by the content selection engine 156 to the front-end servers 152. The front-end servers 152 can, in turn, provide the content to the mobile device 110 from which the data used to select the content was received.

[0064] The virtual assistant application 116 includes a user interface generator 118 that generates, updates, and presents user interfaces 122 at the mobile device 110. The user interfaces 122 can present content received from the content distribution system 150 in response to sending data for the selected input mode to the content distribution system 150. The user interfaces 122 can also present the data (or a portion of the data) that was sent to the content distribution system 150. For example, if the user asked the virtual assistant application 116 a question, the user interface 122 that provides an answer to the question can also include the question itself. In this way, the user has confirmation that the answer is an answer to the question that the user asked. Similarly, if content is being provided based on pixel data for the viewfinder of the camera 111, an image represented by the pixel data can be presented in the user interface 122. This provides similar confirmation.

[0065] The user interface generator 118 can select and/or configure the user interface 122 based on the selected input mode. For example, the user interface generated when an image input mode is selected may be different (e.g., include different types of content, have different layouts, include different interface controls, etc.) from the user interface generated when an audio input mode is selected. In a particular example, the user interface for an image input mode can include an image represented by the pixel data used to select the content, while the user interface for an audio input mode can include a chat interface that presents the user’s query and an answer to the query.

[0066] In some implementations, the input mode selector 117 and/or the user interface generator 118 can be implemented remote from the mobile device 110. For example, the input mode selector 117 and/or the user interface generator 118 can be implemented at the content distribution system 150. In such implementations, a user can select a feature that allows the virtual assistant application 116 to send pixel data and/or audio data to the content distribution system 150.

[0067] In some implementations, the content selection engine 156 is implemented on the mobile device 110. For example, the virtual assistant application 116 can include a content selection engine 156 and a content data storage unit that stores content, e.g., a more limited set of content than the content data storage unit 158. In this example, the virtual assistant application 116 can attempt to identify content in the content data storage unit of the virtual assistant application 116. If no suitable content is identified, the virtual assistant application 116 can request content from the content distribution system 150. In this way, content can be presented to the user more quickly in some cases than obtaining the content from the content distribution system 150.

[0068] FIG. 2 depicts a sequence of example screen shots 210, 220, and 230 of a mobile device that selects an input mode and presents content selected based on an input signal received using the selected input mode.

[0069] The first screen shot 210 depicts an example lock screen 212 of the mobile device. As described above, a lock screen 212 can regulate access to the mobile device. The lock screen 212 can also present information to the user. For example, the lock screen 212 presents the current date and time 214.

[0070] In this example, the user can access the mobile device by swiping the touch screen, e.g., in a particular direction. This would open the mobile device to a main user interface. The lock screen 212 also presents an icon 216 for a virtual assistant application. In this example, the icon 216 depicts a microphone symbol. However, the icon could present a camera symbol, both a microphone symbol and a camera symbol, or another appropriate symbol (or other visual element) for the virtual assistant application.

[0071] If the user swipes the icon 216 upwards (or another appropriate direction), the mobile device can allow access to the mobile device and present a user interface for the virtual assistant application. The second screen shot 220 presents an initial user interface 222 for the virtual assistant application. In this example, the initial user interface 222 presents an audio input mode selector control 223 and an image input mode selector control 224. If the user interacts with (e.g., selects) the audio input mode selector control 223, the virtual assistant application can use audio input to select content for presentation to the user. Similarly, if the user interacts with (e.g., selects) the image input mode selector control 224, the virtual assistant application 116 can use image input (e.g., pixel data for the viewfinder of the camera) to select content for presentation to the user. If other input modes are available, the initial user interface 222 can include selector controls that allow the user to select the other input modes.

[0072] In some implementations, the initial user interface 222 can present content or an animation that indicates that the virtual assistant application is attempting to identify content. For example, the initial user interface 222 can present a throbber animation that animates in a loop or another animation that signals that the virtual assistant application is identifying content for presentation to the user.

[0073] As described above, the virtual assistant application can select the input mode in response to the user launching the virtual assistant application, e.g., from the lock screen 212. To select the input mode, the virtual assistant application can obtain one or more signals. In this example, the virtual assistant application obtains audio input 225, image input 226 (e.g., in the form of pixel data), and proximity data 227 specifying the location of the mobile device with respect to the user’s face.

[0074] The audio input 225 can be translated into text. In this example, the audio input 225 has been translated into the question “How fast was a 1961 Super Roadster?” For example, a user may have asked the virtual assistant application the translated question.

[0075] The image input 226 represents an image with a solid color in which each pixel has the same color. For example, the camera may have been facing the ground, a wall, or some other surface that has a solid color. The proximity data 227 indicates that the mobile device is below the user’s face (e.g., based on motion data for the mobile device indicating that the mobile device was moving downward before and/or when the virtual assistant application was launched).

[0076] The virtual assistant application can use the audio input 225, the image input 226, and the proximity data 227 to select an input mode. In this example, the virtual assistant application can identify a query in the audio input (e.g., a question query), determine that there is not an object in the image input 226, and determine that the mobile device is not near the user’s face. Thus, the virtual assistant application can select an audio input mode, as these signals indicate that it is more likely that the user is requesting content based on the translated question rather than the solid color in the image input 226.

[0077] After the input mode is selected, the virtual assistant application can present a user interface 232 that presents content selected using input(s) from the selected input mode, as shown in the screen shot 230. In this example screen shot 230, the user interface 232 presents a text box 234 that includes the question and the answer to the question. In other examples, the question and answer can be presented in a chat box that allows the user to ask other questions either using voice or text.

[0078] As described above, the user interface can be selected based on the selected input mode. In this example, the selected input mode is the audio input mode and a text-based user interface was selected. The user interface can also be selected based on the content that is selected for presentation. For example, if the user’s question was “what does a 1961 Super Roadster look like,” a user interface that presents an image of a 1961 Super Roadster could be selected and presented to the user.

[0079] The example user interface 232 also presents a box 236 around the audio input mode selector control 223 to indicate that the audio input mode was the selected input mode. Other visual indicators can also be used to indicate that the audio input mode was the selected input mode. If the user selects the image input mode selector control 224, the virtual assistant application can begin obtaining image inputs from the camera and select content to present based on the image inputs. The virtual assistant application can also update the user interface or generate a user interface for the image input mode.

[0080] FIG. 3 depicts another sequence of example screen shots 310, 320, and 330 of a mobile device that selects an input mode and presents content selected based on an input signal received using the selected input mode. The first screen shot 310 depicts an example lock screen 312 of the mobile device. The lock screen 312 can be the same as, or similar to, the lock screen 212 of FIG. 2. For example, the lock screen 312 presents the current date and time 314 and an icon 316 for a virtual assistant application.

[0081] The second screen shot 320 presents an initial user interface 322 for the virtual assistant application, which can be the same as, or similar to, the initial user interface 222 of FIG. 2. For example, the initial user interface 322 presents an audio input mode selector control 323 and an image input mode selector control 324.

[0082] As described above, the virtual assistant application can select the input mode in response to the user launching the virtual assistant application, e.g., from the lock screen 312. To select the input mode, the virtual assistant application can obtain one or more signals, e.g., audio input 325, image input 326 (e.g., in the form of pixel data), and proximity data 327 specifying the location of the mobile device with respect to the user’s face.

[0083] In this example, the audio input 325 has been translated into the question "I wonder where she bought that" and the image input 326 represents an image of a car, e.g., based on pixel data from a viewfinder of a camera that was pointing at the car. For example, the user may have pointed the camera at the car while talking to a friend about the car. The proximity data 327 indicates that the mobile device is about level (or near) with the user's face (e.g., based on motion data for the mobile device indicating that the mobile device was moving upwards before and/or when the virtual assistant application was launched, and/or based on audio data including voice with at least a threshold intensity).

[0084] The virtual assistant application can use the audio input 325, the image input 326, and the proximity data 327 to select an input mode. In this example, the virtual assistant application can determine that the audio input 325 does not include a query, as there is not an identifiable object in the translated text. The virtual assistant application can also determine that the image input 326 represents an image of an object, such as a car (e.g., using object detection techniques), determine that the image input 326 represents an image of an object in a vehicle class (e.g., using a coarse classifier), and/or determine that the image input 326 represents an image of a particular car, e.g., a 1961 Super Roadster (e.g., using object recognition techniques).
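The determinations in paragraph [0084] suggest a tiered analysis: object detection first, then coarse classification, then fine-grained recognition. The sketch below illustrates one possible staging under that assumption; the detector, coarse classifier, and recognizer are hypothetical stand-ins rather than components named by the specification.

```python
# Illustrative sketch of a tiered image analysis: a detector decides whether any
# object is present, a coarse classifier assigns a broad class (e.g., "vehicle"),
# and a recognizer tries to name the specific object (e.g., "1961 Super Roadster").
# All three analyzers are hypothetical stand-ins passed in as callables.

from dataclasses import dataclass
from typing import Callable, Optional, Sequence, Tuple

Pixels = Sequence[Tuple[int, int, int]]

@dataclass
class ImageAnalysis:
    has_object: bool
    coarse_class: Optional[str] = None    # e.g., "vehicle"
    specific_label: Optional[str] = None  # e.g., "1961 Super Roadster"

def analyze_image(pixels: Pixels,
                  detector: Callable[[Pixels], bool],
                  coarse_classifier: Callable[[Pixels], Optional[str]],
                  recognizer: Callable[[Pixels], Optional[str]]) -> ImageAnalysis:
    if not detector(pixels):                           # object detection
        return ImageAnalysis(has_object=False)
    coarse = coarse_classifier(pixels)                 # coarse classification
    specific = recognizer(pixels) if coarse else None  # object recognition
    return ImageAnalysis(True, coarse, specific)

# Example with trivial stand-in analyzers:
print(analyze_image(
    pixels=[(30, 30, 30)] * 10,
    detector=lambda p: True,
    coarse_classifier=lambda p: "vehicle",
    recognizer=lambda p: "1961 Super Roadster",
))
```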

[0085] As the image input 326 represents an image of an object, the proximity data 327 indicates that the mobile device is level with or near the user's face, and the audio input does not include a query, the virtual assistant application can select an image input mode.

[0086] After the input mode is selected, the virtual assistant application can present a user interface 332 that presents content selected using input(s) from the selected input mode, as shown in the screen shot 330. In this example screen shot 330, the user interface 332 presents an image 334 represented by the image input 326 and an information box 335 that includes information about the car depicted in the image. For example, the virtual assistant application (or a content distribution system) can recognize the car in the image and select content related to the car for presentation in the user interface 332. In this example, information about the car (e.g., the identity of the car) has been selected and presented. In other examples, other images of the car, video of the car, or other information about the car may be selected and presented. If the audio input included a question about the car, or speech that could be classified as a question about the car (e.g., "what color is that?" or "what year is that car?"), the virtual assistant application can select content that answers the question.

[0087] The example user interface 332 also presents a box 336 around the image input mode selector control 324 to indicate that the image input mode was the selected input mode. If the user selects the audio input mode selector control 323, the virtual assistant application can begin obtaining audio inputs from the microphone and select content to present based on the audio inputs. The virtual assistant application can also update the user interface or generate a user interface for the audio input mode.

[0088] FIG. 4 is a flow chart of an example process 400 for selecting an input mode for a virtual assistant application and presenting content based on an input signal received using the selected input mode. Operations of the process 400 can be performed, for example, by one or more data processing apparatus, such as the mobile device 110 of FIG. 1. Operations of the process 400 can also be implemented as instructions stored on a non-transitory computer readable medium. Execution of the instructions causes one or more data processing apparatus to perform operations of the process 400.

[0089] Request data specifying a request to launch a virtual assistant application from a lock screen of a mobile device is received (402). The mobile device can detect a particular action, gesture, or spoken command at the lock screen (or while the mobile device is locked) that indicates that the user is requesting the mobile device to unlock and launch the virtual assistant application. For example, the mobile device can detect a user swipe of an icon presented on the lock screen.

[0090] Input signals for the virtual assistant are obtained (404). Each of the input signals can be of an input type different from each other input type of each other input signal. For example, the input types can include an image type and an audio type. The input signal of the image type can be pixel data for one or more images captured by a camera of the mobile device, e.g., captured after the mobile device detects the request to launch the virtual assistant application. The pixel data can include a respective pixel data set for each image. Each pixel data set can represent the pixels of the viewfinder of the camera for a particular point in time. The pixel data in each pixel data set can include data specifying visual characteristics (e.g., color, intensity, brightness, etc.) of each pixel of the viewfinder of the camera.
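One possible in-memory representation of the per-frame pixel data sets described in paragraph [0090] is sketched below. The class and field names are assumptions made for illustration only, not identifiers from this specification.

```python
# Illustrative in-memory layout for the pixel data sets described above: each
# viewfinder frame becomes one data set with a capture time and the visual
# characteristics of every pixel. The names are assumptions for this sketch.

from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Pixel:
    color: Tuple[int, int, int]  # RGB color, 0..255 per channel
    intensity: float             # relative brightness of the pixel, 0..1

@dataclass
class PixelDataSet:
    timestamp_ms: int            # when the viewfinder frame was captured
    width: int
    height: int
    pixels: List[Pixel]          # row-major, length == width * height

# A single 1x2 frame captured at t = 1000 ms:
frame = PixelDataSet(
    timestamp_ms=1000,
    width=2,
    height=1,
    pixels=[Pixel((255, 0, 0), 0.21), Pixel((0, 0, 255), 0.07)],
)
print(len(frame.pixels))  # 2
```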

[0091] The input signal of the audio type can be audio data for audio input captured by a microphone of the mobile device. For example, the audio data can include text that represents spoken words in the audio input if the audio input includes spoken words. For example, the virtual assistant application (or another application) can convert the audio data to text using audio transcription techniques.

[0092] The obtained signals can also include inputs of a proximity type that indicate a location of the mobile device with respect to the user or the user's face. The input signals of the proximity type can specify a distance between the mobile device and the user (or the user's face) and the relative location of the mobile device with respect to the user (or the user's face), e.g., above, even with, below, or to a particular side.

[0093] The obtained signals can also include inputs of a geographic location type. The input signals of the geographic location type can include a geographic location of the mobile device. For example, the virtual assistant application can obtain GPS coordinates from a GPS sensor of the mobile device. The virtual assistant application can use the coordinates and an index of locations to determine, for example, the type of building or area in which the mobile device is located (e.g., a museum, park, stadium, residence, retail store, etc.).

[0094] The obtained signals can include inputs of an environmental type. The inputs of the environmental type can include signals related to the lighting, noise, and/or other environmental signals. For example, the mobile device can include a light sensor that detects the amount of ambient light at the location of the mobile device and/or an ambient noise sensor that detects the amount of ambient noise at the location of the mobile device. In another example, the virtual assistant application (or another application) can determine the amount of ambient lighting using the pixel data and/or determine the amount of ambient noise using the audio data.
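Paragraph [0094] notes that ambient lighting can be derived from the pixel data and ambient noise from the audio data. A minimal sketch of such derivations is shown below, assuming RGB pixel tuples and audio samples normalized to the range -1..1; the luminance weights are the standard Rec. 709 coefficients, and everything else is illustrative.

```python
# Illustrative derivation of environmental signals from data the application
# already captures: ambient light as the mean relative luminance of a frame and
# ambient noise as the RMS level of the audio samples. The luminance weights
# are the standard Rec. 709 coefficients; the rest is an assumption.

import math
from typing import Iterable, Tuple

def ambient_light(pixels: Iterable[Tuple[int, int, int]]) -> float:
    """Mean relative luminance of an RGB frame, in 0..1."""
    pixels = list(pixels)
    total = sum(0.2126 * r + 0.7152 * g + 0.0722 * b for r, g, b in pixels)
    return total / (255 * len(pixels))

def ambient_noise(samples: Iterable[float]) -> float:
    """RMS level of audio samples normalized to the range -1..1."""
    samples = list(samples)
    return math.sqrt(sum(s * s for s in samples) / len(samples))

print(ambient_light([(200, 200, 200), (180, 190, 210)]))  # bright scene, about 0.76
print(ambient_noise([0.01, -0.02, 0.015, -0.01]))         # quiet room, about 0.014
```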

[0095] The obtained signals can include the user's history or preferences for input modes based on previous signal(s). For example, if the user has previously selected an image mode for the virtual assistant as the preferred input mode when an object is recognized in an image, this data can be obtained by the virtual assistant application when selecting the input mode.

[0096] An input mode for the virtual assistant application is selected from multiple candidate input modes based on the one or more signals (406). Each candidate input mode can be of an input type different from each other input type of each other candidate input mode. The candidate input modes can include an image input mode of an image type and an audio input mode of an audio type. The image input mode can receive pixel data for input to the virtual assistant application. Similarly, the audio input mode can receive audio input for the virtual assistant application.

[0097] The candidate input modes can also include a combination input mode that receives inputs of two or more types (e.g., pixel data and audio data, or audio data and text data). As described above, the input mode can be selected based on one or more of the input signals, e.g., using a set of rules or a machine learning model.

[0098] In one example, the input mode is selected by determining, based on the pixel data for an image received using the image input mode, whether the image includes one or more recognized objects and determining, based on the audio data received using the audio input mode, whether the audio includes one or more recognized spoken words. The input mode is determined based on whether the image includes one or more recognized objects and whether the audio includes one or more recognized spoken words. For example, if the image includes one or more recognized objects and the audio does not include a spoken word, it is more likely that the user is interested in the object(s) in the image and the image input mode can be selected. Similarly, if the audio includes one or more recognized spoken words and the image does not include a recognized object, it is more likely that the user is interested in receiving content related to the spoken words.

[0099] If the image includes one or more recognized objects and the audio includes one or more spoken words that are related to the one or more recognized objects, the user may be asking a question about the recognized object(s) or specifying the content related to the object(s) that the user is seeking. In this case, a combination input mode that uses both the audio data and the pixel data can be selected for use in identifying content for presentation by the virtual assistant application.
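Paragraphs [0098] and [0099] describe a rule-based selection over the recognition results. The sketch below captures those rules in a single function, assuming the recognition outcomes are computed elsewhere; the fallback for ambiguous signals is an assumption, and the specification equally permits a trained machine learning model in place of these rules.

```python
# Minimal rule-based sketch of the selection described above. The recognition
# outcomes (object recognized, spoken words recognized, and whether the words
# relate to the object) are assumed to be computed elsewhere. The fallback for
# ambiguous signals is an assumption; a machine learning model could be used
# in place of these rules.

def select_input_mode(has_recognized_object: bool,
                      has_recognized_speech: bool,
                      speech_relates_to_object: bool = False) -> str:
    if has_recognized_object and has_recognized_speech and speech_relates_to_object:
        return "combination"  # use both the pixel data and the audio data
    if has_recognized_object and not has_recognized_speech:
        return "image"        # user is likely interested in the object(s)
    if has_recognized_speech and not has_recognized_object:
        return "audio"        # user is likely interested in the spoken request
    return "audio"            # assumed fallback when the signals are ambiguous

# FIG. 2 example: a spoken question and a solid-color image -> audio input mode
print(select_input_mode(has_recognized_object=False, has_recognized_speech=True))
# FIG. 3 example: a recognized car and no query in the speech -> image input mode
print(select_input_mode(has_recognized_object=True, has_recognized_speech=False))
```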

[00100] Content is presented to a user of the mobile device (408). The presented content can be selected based on the input signal(s) received using the selected input mode. For example, if an audio input mode is selected, the content can be selected based on audio data for audio input (e.g., a transcription of the audio input). If an image input mode is selected, the content can be selected based on one or more objects recognized in pixel data for one or more images. If a combination image and audio input mode is selected, the content can be selected based on a combination of the image input and the audio input. The content can be selected by a remote system, e.g., a content distribution system, and sent to the mobile device for presentation.

[00101] Embodiments of the subject matter and the functional operations described in this specification can be implemented in digital electronic circuitry, in tangibly-embodied computer software or firmware, in computer hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them. Embodiments of the subject matter described in this specification can be implemented as one or more computer programs, i.e., one or more modules of computer program instructions encoded on a tangible non-transitory program carrier for execution by, or to control the operation of, data processing apparatus. Alternatively or in addition, the program instructions can be encoded on an artificially generated propagated signal, e.g., a machine-generated electrical, optical, or electromagnetic signal, that is generated to encode information for transmission to suitable receiver apparatus for execution by a data processing apparatus. The computer storage medium can be a machine-readable storage device, a machine-readable storage substrate, a random or serial access memory device, or a combination of one or more of them.

[00102] The term "data processing apparatus" encompasses all kinds of apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, or multiple processors or computers. The apparatus can include special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application specific integrated circuit). The apparatus can also include, in addition to hardware, code that creates an execution environment for the computer program in question, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of one or more of them.

[00103] A computer program (which may also be referred to or described as a program, software, a software application, a module, a software module, a script, or code) can be written in any form of programming language, including compiled or interpreted languages, or declarative or procedural languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A computer program may, but need not, correspond to a file in a file system. A program can be stored in a portion of a file that holds other programs or data, e.g., one or more scripts stored in a markup language document, in a single file dedicated to the program in question, or in multiple coordinated files, e.g., files that store one or more modules, subprograms, or portions of code. A computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network.

[00104] The processes and logic flows described in this specification can be performed by one or more programmable computers executing one or more computer programs to perform functions by operating on input data and generating output. The processes and logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array), an ASIC (application specific integrated circuit), or a GPGPU (general purpose graphics processing unit).

[00105] Computers suitable for the execution of a computer program can be based on, by way of example, general or special purpose microprocessors or both, or any other kind of central processing unit. Generally, a central processing unit will receive instructions and data from a read only memory or a random access memory or both. The essential elements of a computer are a central processing unit for performing or executing instructions and one or more memory devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto-optical disks, or optical disks. However, a computer need not have such devices. Moreover, a computer can be embedded in another device, e.g., a mobile telephone, a personal digital assistant (PDA), a mobile audio or video player, a game console, a Global Positioning System (GPS) receiver, or a portable storage device, e.g., a universal serial bus (USB) flash drive, to name just a few.

[00106] Computer readable media suitable for storing computer program instructions and data include all forms of non-volatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks, e.g., internal hard disks or removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.

[00107] To provide for interaction with a user, embodiments of the subject matter described in this specification can be implemented on a computer having a display device, e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor, for displaying information to the user and a keyboard and a pointing device, e.g., a mouse or a trackball, by which the user can provide input to the computer. Other kinds of devices can be used to provide for interaction with a user as well; for example, feedback provided to the user can be any form of sensory feedback, e.g., visual feedback, auditory feedback, or tactile feedback; and input from the user can be received in any form, including acoustic, speech, or tactile input. In addition, a computer can interact with a user by sending documents to and receiving documents from a device that is used by the user; for example, by sending web pages to a web browser on a user's client device in response to requests received from the web browser.

[00108] Embodiments of the subject matter described in this specification can be implemented in a computing system that includes a back end component, e.g., as a data server, or that includes a middleware component, e.g., an application server, or that includes a front end component, e.g., a client computer having a graphical user interface or a Web browser through which a user can interact with an implementation of the subject matter described in this specification, or any combination of one or more such back end, middleware, or front end components. The components of the system can be interconnected by any form or medium of digital data communication, e.g., a communication network. Examples of communication networks include a local area network (“LAN”) and a wide area network (“WAN”), e.g., the Internet.

[00109] The computing system can include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.

[00110] While this specification contains many specific implementation details, these should not be construed as limitations on the scope of any invention or of what may be claimed, but rather as descriptions of features that may be specific to particular embodiments of particular inventions. Certain features that are described in this specification in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination.

[00111] Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing may be advantageous. Moreover, the separation of various system modules and components in the embodiments described above should not be understood as requiring such separation in all embodiments, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products.

[00112] Particular embodiments of the subject matter have been described. Other embodiments are within the scope of the following claims. For example, the actions recited in the claims can be performed in a different order and still achieve desirable results. As one example, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In certain implementations, multitasking and parallel processing may be advantageous.