


Title:
METHODS AND APPARATUS FOR PROVIDING INTERACTIVE IMAGES
Document Type and Number:
WIPO Patent Application WO/2017/189985
Kind Code:
A1
Abstract:
In some embodiments, an apparatus includes a flexible portable housing having a flexible electronic ink image display, one or more sensors, and an image processor operably coupled to both the one or more sensors and the flexible electronic ink image display. The one or more sensors include an accelerometer and/or a gyroscope. The image processor can cause the flexible electronic ink image display to display a graphical representation of a first image frame from a video generated by a video capture device. The image processor can receive animation data generated (1) by the one or more sensors, and (2) based on a user-generated animation request. Based on the animation data, the image processor can cause the flexible electronic ink image display to replace the graphical representation of the first image frame with a graphical representation of a second image frame from the video.

Inventors:
COLE ALARIC (US)
STADLEN THOMAS (US)
BLACKFORD FREDERICK (US)
POELKER COLE (US)
Application Number:
PCT/US2017/030092
Publication Date:
November 02, 2017
Filing Date:
April 28, 2017
Assignee:
GRASSCROWN INC (US)
International Classes:
G06F3/0485; G06F3/0484; G06T3/60
Foreign References:
US20130305286A1 (2013-11-14)
CN203502713U (2014-03-26)
US20150309686A1 (2015-10-29)
US20090286479A1 (2009-11-19)
US20060061089A1 (2006-03-23)
US8218818B2 (2012-07-10)
Attorney, Agent or Firm:
HUTTER, Christopher R. et al. (US)
Claims:
1. An apparatus, comprising:

a flexible portable housing including a flexible electronic ink image display, at least one sensor, and an image processor operably coupled to both the sensor and the flexible electronic ink image display, the at least one sensor including at least one of an accelerometer or a gyroscope;

the image processor configured to:

cause the flexible electronic ink image display to display a graphical representation of a first image frame from a video generated by a video capture device separate from and wirelessly operably coupled to the flexible portable housing,

receive animation data generated (1) by the sensor, and (2) based on a user-generated animation request, and

based on the animation data, cause the flexible electronic ink image display to replace the graphical representation of the first image frame with a graphical representation of a second image frame from the video, the second image frame being generated by the video capture device after the first image frame is generated by the video capture device.

2. The apparatus of claim 1, wherein the user-generated animation request is a first user-generated animation request, the image processor further configured to:

receive a second user-generated animation request at a user interaction portion of the flexible electronic ink image display, the user interaction portion being coextensive with at least one of (1) an object in both the first image frame and a second image frame, or (2) both the graphical representation of the first image frame and the graphical representation of the second image frame, and

in response to the second user-generated animation request, cause the flexible electronic ink image display to replace the graphical representation of the second image frame with a graphical representation of a third image frame from the video.

3. The apparatus of claim 1, wherein the user-generated animation request is a first user-generated animation request, the image processor further configured to:

receive a second user-generated animation request at a user interaction portion of the flexible electronic ink image display, the user interaction portion being coextensive with at least one of (1) an object in both the first image frame and a second image frame, or (2) both the graphical representation of the first image frame and the graphical representation of the second image frame, and in response to the second user-generated animation request, cause the flexible electronic ink image display to replace the graphical representation of the second image frame with a graphical representation of a third image frame from the video,

the second user-generated animation request being a rightward finger motion applied to the user interaction portion of the flexible electronic ink image display.

4. The apparatus of claim 1, wherein:

the user-generated animation request includes tilting of the flexible portable housing.

5. The apparatus of claim 1, wherein the image processor is further configured to: conduct image analytics on the video to detect motion in a non-horizontal direction of a foreground object from the first image frame to the second image frame,

the animation request including tilting of the flexible portable housing in the non-horizontal direction, and

replace the graphical representation of the first image frame with the graphical representation of the second image frame such that the foreground object appears to animate in the non-horizontal direction at the flexible electronic ink image display.

6. The apparatus of claim 1, wherein the video includes a third image frame generated by the video capture device after the second image frame is generated by the video capture device, the image processor further configured to:

detect at least one of a velocity, an acceleration, a force, or a distance associated with the user-generated animation request to produce a display instruction,

(1) replace at a first time the graphical representation of the first image frame with the graphical representation of the second image frame, and (2) replace at a second time after the first time the graphical representation of the second image frame with a graphical representation of the third image frame, when the display instruction meets a criterion, and replace the graphical representation of the first image frame with the graphical representation of the second image frame, and do not display the graphical representation of the third image frame, when the display instruction fails to meet the criterion.

7. The apparatus of claim 1, wherein the image processor is configured to pair with the video capture device based on data generated by the at least one sensor, the data generated by the at least one sensor being generated in response to a user-generated pair request.

8. The apparatus of claim 1, wherein the flexible electronic ink image display includes a set of lenticular lenses such that when the image processor causes the flexible electronic ink image display to display a graphical representation of images, the graphical representation of images is magnified based on a viewing angle.

9. A non-transitory processor-readable medium storing code representing instructions to be executed by a processor, the code comprising code to cause the processor to:

display at a visual display device a graphical representation of a first image frame from a video generated by a video capture device;

receive at a user interaction portion of the visual display device a user-generated animation request, the user interaction portion coextensive with at least one of (1) an object in both the first image frame and a second image frame, or (2) both the graphical representation of the first image frame and a graphical representation of the second image frame, the second image frame being generated by the video capture device after the first image frame is generated by the video capture device; and

in response to the animation request, replace the graphical representation of the first image frame with the graphical representation of the second image frame.

10. The non-transitory processor-readable medium of claim 9, wherein:

the animation request is a rightward finger motion applied to the user interaction portion of the visual display device.

11. The non-transitory processor-readable medium of claim 9, wherein:

the animation request is a rightward finger motion applied to the user interaction portion of the visual display device,

the animation request is a first animation request, the code to cause the processor to receive the first animation request includes code to cause the processor to receive the first animation request at a first time, the code further comprising code to cause the processor to: receive at a second time at the user interaction portion a second user-generated animation request, the second animation request being a leftward finger motion applied to the user interaction portion of the visual display device; and

in response to the second animation request, replace the graphical representation of the second image frame with a graphical representation of the first image frame.

12. The non-transitory processor-readable medium of claim 9, the code further comprising code to cause the processor to:

conduct image analytics on the video to detect motion in a non-horizontal direction of a foreground object from the first image frame to the second image frame,

the animation request being a finger motion (1) in the non-horizontal direction, and (2) applied to the user interaction portion of the visual display device,

the code to cause the processor to replace the graphical representation of the first image frame includes code to cause the processor to replace the graphical representation of the first image frame with the graphical representation of the second image frame such that the foreground object appears to animate in the non-horizontal direction at the visual display device.

13. The non-transitory processor-readable medium of claim 9, the code further comprising code to cause the processor to:

detect the object as a foreground object in both the first image frame and the second image frame, the user interaction portion being designated in part by the foreground object.

14. The non-transitory processor-readable medium of claim 9, wherein the video includes a third image frame generated by the video capture device after the second image frame is generated by the video capture device, the code further comprising code to cause the processor to:

detect at least one of a velocity, an acceleration, a force, or a distance associated with the user-generated animation request to produce a display instruction,

the code to cause the processor to replace the graphical representation of the first image frame includes code to cause the processor to (1) replace at a first time the graphical representation of the first image frame with the graphical representation of the second image frame, and (2) replace at a second time after the first time the graphical representation of the second image frame with a graphical representation of the third image frame, when the display instruction meets a criterion;

the code to cause the processor to replace the graphical representation of the first image frame includes code to cause the processor to replace the graphical representation of the first image frame with the graphical representation of the second image frame, and not display the graphical representation of the third image frame when the display instruction fails to meet the criterion.

15. The non-transitory processor-readable medium of claim 9, the code further comprising code to cause the processor to:

receive the first image frame from the video capture device at a first time;

receive the second image frame from the video capture device at a second time after the first time; and

decode the first image frame before the second time to produce a decoded first image frame,

the video not including any image frame generated (1) after the first image frame is generated and (2) before the second image frame is generated, the graphical representation of the first image frame being a graphical representation of the decoded first image frame.

16. The non-transitory processor-readable medium of claim 9, wherein the visual display device is physically separate from the video capture device.

17. A non-transitory processor-readable medium storing code representing instructions to be executed by a processor, the code comprising code to cause the processor to:

display at a visual display device a graphical representation of a first image frame from a video generated by a video capture device;

receive animation data generated by at least one sensor, the animation data being generated based on a user-generated animation request, the at least one sensor including at least one of an accelerometer or a gyroscope; and

in response to the animation data, replace the graphical representation of the first image frame with a graphical representation of a second image frame, the second image frame being generated by the video capture device after the first image frame is generated by the video capture device.

18. The non-transitory processor-readable medium of claim 17, wherein:

the animation data is generated by the at least one sensor in response to tilting of the visual display device.

19. The non-transitory processor-readable medium of claim 17, wherein: the code to cause the processor to replace the graphical representation of the first image frame includes code to cause the processor to replace the graphical representation of the first image frame without physical contact to a display of the visual display device.

20. The non-transitory processor-readable medium of claim 17, the code further comprising code to cause the processor to:

conduct image analytics on the video to identify motion in a non-horizontal direction of a foreground object from the first image frame to the second image frame, the animation data generated by the at least one sensor being based on tilting the visual display device in the non-horizontal direction,

the code to cause the processor to replace the graphical representation of the first image frame includes code to cause the processor to replace the graphical representation of the first image frame with the graphical representation of the second image frame such that the foreground object appears to animate in the non-horizontal direction at the visual display device.

21. The non-transitory processor-readable medium of claim 17, wherein the video includes a third image frame generated by the video capture device after the second image frame is generated by the video capture device, the code further comprising code to cause the processor to:

detect at least one of a velocity, an acceleration, a force, or a distance associated with the user-generated animation request to produce a scrub instruction,

the code to cause the processor to replace the graphical representation of the first image frame includes code to cause the processor to (1) replace at a first time the graphical representation of the first image frame with the graphical representation of the second image frame, and (2) replace at a second time after the first time the graphical representation of the second image frame with a graphical representation of the third image frame, when the scrub instruction meets a criterion;

the code to cause the processor to replace the graphical representation of the first image frame includes code to cause the processor to (1) replace the graphical representation of the first image frame with the graphical representation of the second image frame, and (2) not display the graphical representation of the third image frame, when the scrub instruction fails to meet the criterion.

22. The non-transitory processor-readable medium of claim 17, the code further comprising code to cause the processor to:

receive the first image frame from the video capture device at a first time;

receive the second image frame from the video capture device at a second time after the first time; and

decode the first image frame before the second time to produce a decoded first image frame, the video not including any image frame generated (1) after the first image frame is generated and (2) before the second image frame is generated, the graphical representation of the first image frame being a graphical representation of the decoded first image frame.

Description:
METHODS AND APPARATUS FOR PROVIDING INTERACTIVE IMAGES

Cross-Reference to Related Applications

[1001] This application claims priority to and the benefit of U.S. Provisional Application No. 62/329,714, filed April 29, 2016, entitled "Methods and Apparatus for Providing Interactive Images," the disclosure of which is incorporated herein by reference in its entirety.

Background

[1002] Some embodiments described herein relate generally to methods and apparatus for developing and providing interactive images.

[1003] Known interactive image platforms implemented, for example, on mobile devices, allow users to interact with one or more images (e.g., from a recorded video) via progress bars or scrubbers. Such platforms, however, do not make it easy for a user to accurately select a frame or portion of the video and quickly scrub through, for example, an entirety, or a portion thereof, of the video.

[1004] Further, such progress bars or scrubbers offer a small interface touch point at which users can interact, for example, by swiping a finger across a touchscreen of a mobile device. Not only does the small interface touch point limit a user's ability to accurately select and scrub through multiple images and/or portions of recorded video(s), but such small interface touch points are located separate from or adjacent to the image of interest and/or the subject of interest within that image. Said another way, current interactive image platforms limit user interaction to a small and inaccurate progress bar or scrubber that is offset from the actual image or set of images of interest to the user.

[1005] Moreover, the above-identified user interaction shortcomings of current interactive image platforms are exacerbated by current media consumption trends and current technological solutions of managing the same. For example, in many instances, media consumption has shifted from storing the entire media locally (e.g., at the user device), to streaming and/or partial and temporary media storage locally. Current solutions, for example, implement video codecs (e.g., H.264) that rely on intra-frame compression. Current implementations of intra-frame compression with interactive image platforms, however, ultimately appear choppy or coarse to an end-user at a user device, due in part to difficulties in decoding specific frames at one or more arbitrary points in time. Accordingly, a need exists for developing and providing improved interactive images.

[1006] Furthermore, known systems fail to provide images in a manner similar to instant film, a format desirable to many users. For example, instant film is available to a user on request and is on a flexible and rugged substrate capable of portability, characteristics desirable to many users. Known systems for digital or film prints, however, involve expensive and bulky printers, expensive and/or corrosive chemicals, and/or the use of expensive instant film and specialized instant cameras. Further, some known products can print physical photos from digital images; such prints, however, are not instant. Other known products employ a relatively small printer to provide instant-print functionality; these printers, however, are bulky and produce prints of poor quality compared to digital images, non-instant prints, or instant prints. Additionally, digital picture frames exist, but are bulky, of low quality, not portable, fragile, and require energy-inefficient backlighting. Moreover, known systems include paper-like or electronic ink displays; such systems, however, display only static images (rather than videos or animated images).

[1007] Accordingly, a need exists for improved devices for emulating instant printed images in an electronic format.

Summary

[1008] In some embodiments, an apparatus includes a flexible portable housing having a flexible electronic ink image display, one or more sensors, and an image processor operably coupled to both the one or more sensors and the flexible electronic ink image display. The one or more sensors include an accelerometer and/or a gyroscope. The image processor can cause the flexible electronic ink image display to display a graphical representation of a first image frame from a video generated by a video capture device separate from and wirelessly operably coupled to the flexible portable housing. The image processor can receive animation data generated (1) by the one or more sensors, and (2) based on a user-generated animation request. Based on the animation data, the image processor can cause the flexible electronic ink image display to replace the graphical representation of the first image frame with a graphical representation of a second image frame from the video. The second image frame is generated by the video capture device after the first image frame is generated by the video capture device.

Brief Description of the Drawings

[1009] FIG. 1 is a block diagram depicting a compute device from an interactive image system, according to an embodiment.

[1010] FIG. 2 is a block diagram depicting a server device from the interactive image system of FIG. 1.

[1011] FIG. 3 is a block diagram showing the interactive image system of FIGS. 1 and 2, according to an embodiment.

[1012] FIG. 4 is a block diagram depicting a server device and a compute device configured to execute an interactive image application, according to an embodiment.

[1013] FIGS. 5A and 5B are example graphical representations of an image interaction environment defined by the interactive image application shown in FIG. 4.

[1014] FIGS. 6A and 6B are example graphical representations of an image interaction environment defined by the interactive image application shown in FIG. 4.

[1015] FIG. 7 is a flow chart illustrating a method of creating and manipulating an image interaction environment, according to an embodiment.

[1016] FIG. 8 illustrates a block diagram depicting an interactive image system including a flexible portable housing, according to an embodiment.

[1017] FIG. 9 is an exploded view of a flexible portable housing configured to provide an image interaction environment, according to an embodiment.

[1018] FIG. 10 is a flow chart illustrating a method of providing an interactive image using a flexible portable housing, according to an embodiment.

[1019] FIG. 11 is a flow chart illustrating a method of providing an interactive image, according to an embodiment.

[1020] FIG. 12 is a flow chart illustrating a method of providing an interactive image, according to an embodiment.

[1021] FIG. 13A is a block diagram in front view depicting a flexible portable housing and its dimensions, according to an embodiment; and FIG. 13B is a block diagram in side view depicting the flexible portable housing of FIG. 13A.

Detailed Description

[1022] In some embodiments, an apparatus includes a flexible portable housing having a flexible electronic ink image display, one or more sensors, and an image processor operably coupled to both the one or more sensors and the flexible electronic ink image display. The one or more sensors include an accelerometer and/or a gyroscope. The image processor can cause the flexible electronic ink image display to display a graphical representation of a first image frame from a video generated by a video capture device separate from and wirelessly operably coupled to the flexible portable housing. The image processor can receive animation data generated (1) by the one or more sensors, and (2) based on a user-generated animation request. Based on the animation data, the image processor can cause the flexible electronic ink image display to replace the graphical representation of the first image frame with a graphical representation of a second image frame from the video. The second image frame is generated by the video capture device after the first image frame is generated by the video capture device.
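The following is a minimal, hypothetical Python sketch of the sensor-driven behavior described above: a tilt angle reported by an accelerometer or gyroscope is mapped to a frame index so that tilting the housing appears to animate the image. The class and parameter names (`TiltScrubber`, `max_tilt_deg`) are illustrative assumptions, not terms from this application.

```python
# Hypothetical sketch: map a tilt angle (from an accelerometer/gyroscope) to a
# frame index, assuming frames are already decoded into a list.

class TiltScrubber:
    def __init__(self, frame_count: int, max_tilt_deg: float = 30.0):
        self.frame_count = frame_count
        self.max_tilt_deg = max_tilt_deg
        self.current_index = 0

    def on_tilt(self, tilt_deg: float) -> int:
        """Map a tilt angle in [-max_tilt_deg, +max_tilt_deg] to a frame index."""
        clamped = max(-self.max_tilt_deg, min(self.max_tilt_deg, tilt_deg))
        fraction = (clamped + self.max_tilt_deg) / (2 * self.max_tilt_deg)
        self.current_index = round(fraction * (self.frame_count - 1))
        return self.current_index


scrubber = TiltScrubber(frame_count=120)
print(scrubber.on_tilt(0.0))    # middle frame (60)
print(scrubber.on_tilt(30.0))   # last frame (119)
print(scrubber.on_tilt(-45.0))  # clamped to first frame (0)
```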

[1023] In some embodiments, a non-transitory processor-readable medium includes code to cause a processor to display at a visual display device a graphical representation of a first image frame from a video generated by a video capture device. The code further includes code to cause the processor to receive at a user interaction portion of the visual display device a user-generated animation request. The user interaction portion is coextensive with at least one of (1) an object in both the first image frame and a second image frame, or (2) both the graphical representation of the first image frame and a graphical representation of the second image frame. The second image frame is generated by the video capture device after the first image frame is generated by the video capture device. The code further includes code to cause the processor to, in response to the animation request, replace the graphical representation of the first image frame with the graphical representation of the second image frame.

[1024] In some embodiments, a non-transitory processor-readable medium includes code to cause a processor to display at a visual display device a graphical representation of a first image frame from a video generated by a video capture device. The code further includes code to cause the processor to receive animation data generated by at least one sensor. The animation data is generated based on a user-generated animation request. The at least one sensor includes at least one of an accelerometer or a gyroscope. The code further includes code to cause the processor to, in response to the animation data, replace the graphical representation of the first image frame with a graphical representation of a second image frame. The second image frame is generated by the video capture device after the first image frame is generated by the video capture device.

[1025] In some embodiments, an apparatus includes a processor included within a compute device, operatively coupled to a memory, and configured to execute an application including a display, a controller, and a recorder device. The recorder is configured to receive an indication of a user selection to record a video. The recorder is configured to cause image capture by a device in response to the user selection to record the video. The controller is configured to define an image interaction environment based on the video, and send a signal representing the image interaction environment to the display. The display is configured to display a graphical representation of the image interaction environment. The controller is configured to receive and analyze a user image interaction request; re-define, re-create, or modify the image interaction environment in accordance with the user image interaction request; and send a signal to the display representing the re-defined, re-created, or modified image interaction environment. The display is configured to display a graphical representation of the re-defined, re-created, or modified image interaction environment such that the representation appears to animate from the perspective of the user.

[1026] As used in this specification, any of the components, such as a display, recorder, etc., can be, for example, any assembly and/or set of operatively-coupled electrical components associated with performing a specific function(s), and can include, for example, a memory, a processor, electrical traces, optical connectors, software (that is stored in memory and/or executing in hardware) and/or the like.

[1027] As used in this specification, the singular forms "a," "an" and "the" include plural referents unless the context clearly dictates otherwise. Thus, for example, the term "a compute device" is intended to mean a single compute device or a combination of compute devices, and the term "a server device" is intended to mean a single server device or a combination of server devices.

[1028] As shown in FIG. 1, a compute device 102 includes a memory 112, a processor 114, and a communication interface 110. The compute device 102 can be any suitable compute device. For example, in some implementations, the compute device 102 is a mobile compute device (smartphone, tablet, laptop, smartwatch or other wearable compute device, electronic paper, etc.) that is wirelessly in communication with the network 116 (see e.g., FIG. 3) and/or a server device 120 (see e.g., FIG. 2). In other implementations, compute device 102 is a desktop computer, television, set-top box, etc. The compute device 102 includes an application 108 (e.g., stored in memory 112 and executed at the processor 114). Although the application 108 is shown and described as being located at the compute device 102, in other embodiments, the application 108 can be located separate from the compute device 102 (e.g., at the server device 120), and in communication with any device operably coupled to the network 116 (e.g., including the compute device 102).

[1029] In some implementations, such as, for example, as shown in FIG. 1, image capturer 104 and a visual display 106 can be integrated into and/or part of the compute device 102 (shown as dashed lines in FIG. 1, by way of example, a smartphone or tablet). In other implementations, the image capturer 104 and the visual display 106 can be separate from the compute device 102 (by way of example, a desktop computer, a projector screen, a holographic screen, etc.).

[1030] The memory 112 can be, for example, a random access memory (RAM), a memory buffer, a hard drive, a read-only memory (ROM), an erasable programmable read-only memory (EPROM), and/or the like. In some instances, the memory 112 can store, for example, one or more software applications, components and/or code, for example application 108, that can include instructions to cause the processor 114 to perform one or more processes, functions, and/or the like. For example, in some instances, the memory 112 can include software applications, components and/or code that can include instructions to cause the processor 114 to operate an interactive image application and/or a media player application. The memory 112 can further include instructions to cause the communication interface 110 to send and/or receive one or more signals associated with, for example, the input to or output from, respectively, the server device 120, as described in further detail herein.

[1031] The processor 114 can be any suitable processing device configured to run and/or execute a set of instructions or code such as, for example, a general purpose processor, a central processing unit (CPU), an accelerated processing unit (APU), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), and/or the like. As such, the memory 112 can store instructions, for example, for application 108, to cause the processor 114 to execute the application 108 and/or other applications, processes, and/or functions associated with, for example, an interactive image application, as described in further detail herein.

[1032] The image capturer 104 can be any suitable component, subsystem, device and/or combination of devices. For example, in some embodiments, the image capturer 104 can be an input port or the like that can be operably coupled to the memory 112 and the processor 114, as well as, for example, a camera device (not shown). The image capturer 104 can be configured to receive a signal (e.g., a video signal from a camera) associated with an interactive image application, and can forward the signal and/or otherwise send another signal representing that signal to the processor 114 for any suitable processing and/or analyzing process, as described in further detail herein. In some implementations, the image capturer 104 can be an integrated camera, for example, a camera that shares a housing with compute device 102 (shown as dashed lines in FIG. 1; by way of example, a smartphone, tablet, laptop, etc.). In other embodiments, the image capturer 104 can be a peripheral camera, for example, a camera having a housing distinct from compute device 102, but that is operably and/or physically coupled to and/or co-located with compute device 102 (e.g., an add-on webcam, a digital camera or camcorder, etc.). In some implementations, the image capturer 104 can be a combination of components or devices, for example, a camera coupled to a microphone, a gyroscope, and/or an accelerometer. In other embodiments, the compute device 102 can also include a haptic input component, an audio input component, an accelerometer, a gyroscope, and/or the like (not shown in FIG. 1).

[1033] The visual display 106 of the compute device 102 can be any suitable component, subsystem, device and/or combination of devices. For example, in some instances, the visual display can provide a visual user interface for the compute device 102. For example, the visual display 106 can be a cathode ray tube (CRT) display, a liquid crystal display (LCD) display, a light emitting diode (LED) display, an e-paper display, an e-ink display, and/or the like. As described in further detail herein, the visual display 106 can provide the user interface for a software application (e.g., mobile application, internet web browser, and/or the like). In some implementations, the visual display 106 can include a touchscreen input component configured to receive input data or coordinates in response to a user touching the touchscreen. For example, as described in further detail herein, the visual display 106 can both provide a graphical representation of one or more images and receive inputs, such as haptic inputs, from a user. In some embodiments, the visual display 106 can be a combination of components or devices, for example, a display coupled to a speaker and/or a haptic output component. In other implementations, the compute device can also include a speaker that can receive a signal to cause the speaker to output audible sounds such as, for example, instructions, verification questions, confirmations, etc. In other implementations, the compute device can also include a haptic output component that can receive a signal to cause the haptic output component to vibrate at any number of different frequencies and/or in any number of patterns. In some implementations, the visual display 106 can be integrated such that it shares a housing with the compute device 102 (shown as dashed lines in FIG. 1), while in other implementations, at least a portion of the visual display 106 can be a peripheral visual display, for example, a visual display having a housing distinct from compute device 102, but that is operably and/or physically coupled to compute device 102.

[1034] The communication interface 110 of the compute device 102 can be any suitable component, subsystem, and/or device that can communicate with the network 116. More specifically, the communication interface 110 can include one or more wired and/or wireless interfaces, such as, for example, Ethernet interfaces, optical carrier (OC) interfaces, asynchronous transfer mode (ATM) interfaces, and/or the like. In some embodiments, the communication interface 110 can be, for example, a network interface card and/or the like that can include at least a wireless radio (e.g., a WiFi® radio, a Bluetooth® radio, etc.). As such, the communication interface 110 can send signals to and/or receive signals from the server device 120.

[1035] As described in further detail herein, the compute device 102 can receive, capture, display and/or allow a user of the compute device 102 to manipulate images and/or videos. It should be understood that the compute device 102 can be configured to perform some or all of this functionality without communication with a separate device, e.g., server device 120. Some or all of this functionality can alternatively occur on a server device 120 separate from the compute device 102. As opposed to the compute device 102, which in many instances is in the physical possession of the user, the server device 120 in many instances is located remote from the compute device 102 and the user. The server device 120 can be any type of device that can send data to and/or receive data from one or more compute devices (e.g., the compute device 102) and/or databases (e.g., the database 126) via the network 116. In some instances, the server device 120 can function as, for example, a server device (e.g., a web server device), a network management device, an administrator device, and/or so forth. The server device 120 can be located within a central location, distributed in multiple locations, and/or a combination thereof. Moreover, some or all of a set of components of the server device 120 can be located within a user device (e.g., the compute device 102) and/or any other device or server in communication with the network 116.

[1036] As shown in FIG. 2, the server device 120 includes a communication interface 128, a memory 122, a processor 124, and a database 126. The server device 120 can include and/or can otherwise be operably coupled to the database 126. The database 126 can be, for example, a table, a repository, a relational database, an object-oriented database, an object-relational database, a structured query language (SQL) database, an extensible markup language (XML) database, and/or the like. In some embodiments, the database 126 can be stored in a memory of the server device 120 and/or the like. In other embodiments, the database 126 can be stored in, for example, a network-attached storage (NAS) device and/or the like that is operably coupled to the server device 120. In some instances, the database 126 can be in communication with the server device 120 via the network 116. In such instances, the database 126 can communicate with the network 116 via a wired and/or a wireless connection. The database 126 can be configured to at least temporarily store data such as, for example, data associated with multimedia presentations or content (e.g., videos, images, audio, etc.). In some implementations, at least a portion of the database 126 can be stored and/or implemented in, for example, the memory 112 of the compute device 102.

[1037] The communication interface 128 of the server device 120 can be any suitable device that can communicate with the network 116 via a wired and/or wireless communication. More specifically, the communication interface 128 can include one or more wired or wireless interfaces, such as, for example, Ethernet interfaces, optical carrier (OC) interfaces, asynchronous transfer mode (ATM) interfaces, and/or the like. In some embodiments, the communication interface 128 can be, for example, an Ethernet port, a network interface card, and/or the like. In some implementations, the communication interface 128 can include a wireless radio (e.g., a WiFi® radio, a Bluetooth® radio, etc.) that can communicate with the network 116.

[1038] The memory 122 can be, for example, a random access memory (RAM), a memory buffer, a hard drive, a read-only memory (ROM), an erasable programmable read-only memory (EPROM), and/or the like. In some instances, the memory 122 can be configured to store, for example, one or more software applications, components, and/or code that can include instructions to cause the processor 124 to perform one or more processes, functions, and/or the like. For example, in some instances, the memory 122 can include software, applications, components, and/or code that can include instructions to cause the processor 124 to instruct the communication interface 128 to receive and/or send, for example, one or more signals from or to, respectively, the compute device 102 (via the network 116). In some instances, the one or more signals can be associated with an interactive image application, and/or the like. The memory 122 can further include instructions to cause the processor 124 to analyze, classify, compare, verify, and/or otherwise process data received from the compute device 102. In addition, the memory 122 can include instructions to cause the processor 124 to query, update, and/or access data stored in the database 126, as described in further detail herein.

[1039] The processor 124 of the server device 120 can be any suitable processing device configured to run and/or execute a set of instructions or code such as, for example, a general purpose processor, a central processing unit (CPU), an accelerated processing unit (APU), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a front end processor, a network processor, and/or the like. As such, the memory 122 can store instructions to cause the processor 124 to execute applications, components, processes, and/or functions associated with, for example, sending and/or receiving signals via the network 116; analyzing, classifying, comparing, verifying, and/or processing data; and/or querying, updating, and/or otherwise accessing data stored in the database 126, and/or the like.

[1040] FIG. 3 is a block diagram showing an interactive image system ("system") 100 according to an embodiment. As shown in FIG. 3, the system 100 includes a compute device 102 and a server device 120 that are coupled via a network 116. Compute device 102 includes an application 108 and is operatively coupled to an image capturer 104 and an interactive visual display 106.

[1041] The compute device 102 (e.g., a mobile compute device) and the server device 120 are in communication via the network 116. The network 116 can be any suitable network or combination of networks. For example, in some embodiments, the network 116 can be a local area network (LAN), a wide area network (WAN), a metropolitan area network (MAN), a worldwide interoperability for microwave access network (WiMAX®), an intranet, the Internet, an optical fiber (or fiber optic)-based network, a virtual network, and/or any combination thereof. Moreover, at least a portion of the network 116 can be implemented as a wireless network. For example, in some embodiments, the compute device 102 can be in communication with the network 116 via a wireless access point or the like (not shown in FIG. 1) that is operably coupled to the network 116. The server device 120 can similarly be in communication with the network 116 via a wired and/or wireless connection.

[1042] FIG. 4 is a block diagram depicting a compute device 202 operatively coupled to a server device 220 via a network 216. Compute device 202, network 216 and server device 220 can be similar to and include similar components as compute device 102, network 116 and server 120, respectively. As shown in FIG. 4, compute device 202 includes a processor 212 configured to execute an interactive image application 208. FIGS. 5A and 5B are example graphical representations of a user interface for, and an output of, an image interaction environment 240 defined by the interactive image application 208. The interactive image application 208 can include software and/or code (stored in memory or implemented in hardware such as processor 212) that can include instructions to cause the processor 212 to define the image interaction environment 240. In some instances, an interactive image application 208 can be a native application on a desktop and/or mobile computing device. In some instances, an interactive image application 208 can be a web based application (e.g., accessed by a web browser). As shown in FIG. 4, the server device 220 includes a processor 224 configured to execute instructions in connection with a publisher 270, a database 272, and optionally a controller 274 (shown as dashed lines in FIG. 4).

[1043] The publisher 270 is configured to receive multimedia content from the image capturer 234 and/or the database 272, and send a signal via the network 216 representing the multimedia content to one or more components and/or devices in communication with the network 216. The controller 274 can be configured to function similarly to or the same as the controller 232. For ease of explanation, the functionality of both the controller 274 and the controller 232 will be described here with reference to the controller 232. It should be understood that any functionality described with respect to the controller 232 at the compute device 202 can be alternatively or additionally performed by the controller 274 at the server device 220.

[1044] As shown in FIG. 4, the interactive image application 208 includes an interactive visual display 230, a controller 232, and an image capturer 234. The controller 232 can be configured to send signals to the visual display 230 to cause the visual display 230 to render a graphical representation of the image interaction environment 240. The controller 232 can be configured to receive signals from the visual display 230 indicative of and/or in response to a user's interaction with the visual display 230. For example, in instances in which the visual display 230 is a touchscreen, the visual display 230 can receive signals indicative of haptic data based on the user's touching of the touchscreen. Such haptic data can be time varying and/or can include, for example, location, direction of motion, velocity, acceleration, and/or force of a user's touching of the touchscreen. The image capturer 234 can be configured to receive signals indicative of a request by a user of the compute device 202 to sense, capture, and/or record one or more images and/or videos. The image capturer 234 can be configured (e.g., in response to and/or based on the user's request) to sense, capture, and/or record audio, image(s), video, and/or other inputs from the user of compute device 202. The controller 232 can communicate with the image capturer 234 to receive multimedia content from the image capturer 234. In some instances, the controller 232 can retrieve the multimedia content from the image capturer 234, while in other instances, the image capturer 234 can send the multimedia content to the controller 232. Based on the multimedia content, the controller 232 can define, create, or manipulate the image interaction environment 240.
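As a rough illustration of the time-varying haptic data mentioned above (location, direction, velocity, force), the following Python sketch shows one way raw touch samples could be reduced to swipe metrics before being passed to a controller. The field and function names are assumptions for illustration only.

```python
# Minimal sketch of haptic touch samples and derived swipe metrics.

from dataclasses import dataclass

@dataclass
class TouchSample:
    t: float        # timestamp in seconds
    x: float        # touch location in pixels
    y: float
    force: float    # normalized pressure, 0.0-1.0

def swipe_metrics(samples: list[TouchSample]) -> dict:
    """Derive direction, distance, and average velocity from raw touch samples."""
    if len(samples) < 2:
        return {"dx": 0.0, "dy": 0.0, "distance": 0.0, "velocity": 0.0}
    first, last = samples[0], samples[-1]
    dx, dy = last.x - first.x, last.y - first.y
    distance = (dx ** 2 + dy ** 2) ** 0.5
    dt = max(last.t - first.t, 1e-6)
    return {"dx": dx, "dy": dy, "distance": distance, "velocity": distance / dt}

samples = [TouchSample(0.00, 100, 300, 0.4),
           TouchSample(0.10, 220, 305, 0.5),
           TouchSample(0.20, 340, 310, 0.3)]
print(swipe_metrics(samples))   # rightward swipe, roughly 1200 px/s
```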

[1045] As shown in FIG. 5A, the image interaction environment 240 includes a graphical representation of a first image 242 from a set of images (or video), and a user interaction portion 244. The user interaction portion 244 provides or defines the portion(s) of the display of the compute device 202 with which a user can, for example, gesture, touch, or otherwise interact (also referred to herein as a "user animation request") to cause scrubbing or animation through the video or of the representation of the image 242. For example, as illustrated in sequence by FIGS. 5A to 5B, in response to a user animation request by way of a finger touch and/or gesture from left to right at the compute device 202 (as illustrated by the finger touch representation 246, 246'), the visual display 230 records the user input and/or sends the user input (or a representation thereof) to the controller 232. In response to or based on the user input, the controller 232 sends a signal to the visual display 230 to cause the visual display 230 to display a representation of a second image 242' from the video or set of images. As apparent from FIGS. 5A and 5B, a subject 248, 248' (e.g., a horse) is common to both images, but is in motion when the video is captured. As such, a user is able to interact with and scrub through each frame of the video, and from the user's perspective, the representation of the image displayed at the compute device 202 will appear to animate (e.g., the horse moves from a left portion of the frame to a right portion of the frame) according to the user's particular gesturing (or user animation request), e.g., a finger swipe from left to right, as shown by arrow A in FIG. 5A.

[1046] The controller 232 can be configured to define the image interaction environment 240 in a variety of ways, and similarly, the controller 232 can send a signal to the visual display 230 to display a representation of the image(s) in a variety of ways. For example, in some instances, a user animation request in which a user finger swipes from left to right can cause the image frames of the video to scrub forward in time and a user animation request in which a user finger swipes from right to left can cause the image frames of the video to scrub backward in time, while in other instances, the user finger swipe from left to right can cause the image frames of the video to scrub backward in time and the user finger swipe from right to left can cause the image frames of the video to scrub forward in time. Such versatility can promote sensible interaction between a user and the image interaction environment for various subjects. For example, FIGS. 5A and 5B illustrate a horse race. In this embodiment, a user can easily touch (indicated by touch representation 246) the visual display 230 anywhere within the user interaction portion 244 (in this case near the subject 248), and in effect swipe and/or gesture to the right (as illustrated by touch representation 246') to cause the visual display 230 to display the representation of the second image 242' in which the representation of the subject 248' has shifted positions. More specifically, user input data based on the swipe and/or gesture received at and/or by the visual display 230 is recorded and/or sent to and/or retrieved by the controller 232. The controller 232, in turn, defines an image interaction environment 240 and sends a signal to the visual display 230 to display the graphical representation of the second image 242'. In this manner, the user can selectively manipulate (or cause animation of) the representation of an image displayed at the compute device 202.
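A minimal Python sketch of the left/right swipe-to-scrub mapping just described; the `rightward_scrubs_forward` flag stands in for the configurability noted above (left-to-right scrubbing either forward or backward in time). All names are illustrative, not from the application.

```python
# Sketch: advance or rewind one frame based on the horizontal swipe direction.

def next_frame_index(current: int, frame_count: int, dx: float,
                     rightward_scrubs_forward: bool = True) -> int:
    """Step forward/backward through the frames, clamped to the valid range."""
    if dx == 0:
        return current
    step = 1 if (dx > 0) == rightward_scrubs_forward else -1
    return max(0, min(frame_count - 1, current + step))

print(next_frame_index(10, 120, dx=+35.0))                                   # 11
print(next_frame_index(10, 120, dx=-35.0))                                   # 9
print(next_frame_index(10, 120, dx=+35.0, rightward_scrubs_forward=False))   # 9
```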

[1047] As described in further detail herein, the controller 232 can be configured to define image interaction environments in which user input such as a finger swipe or compute device rotation corresponds to one or more directions of motion associated with the video or a subject within the video. For example, if a video represents a person jumping in a vertical direction, the controller 232 can define an image interaction environment 240 in which a horizontal user input finger swipe would not cause the visual display 230 to animate the image, but a vertical user input finger swipe would cause the visual display 230 to animate the image such that the person appears to jump in the vertical direction. Further to this example, in other instances, the controller 232 can define an image interaction environment 240 in which either or both a horizontal swipe and a vertical swipe would cause the visual display 230 to animate the image having vertical motion (e.g., a person jumping). Although the above examples refer to horizontal and vertical user input swiping, it should be understood that user input in any direction (e.g., diagonal) can be received by the visual display 230 to animate an image. As described in further detail herein, motion and motion direction associated with one or more subjects within an image can be detected and analyzed by the controller 232 to define image interaction environments.
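The gating behavior described in this paragraph could be sketched as follows, assuming the dominant motion axis of the subject has already been determined by image analytics (not shown). This is an illustrative Python sketch under those assumptions, not an implementation from the application.

```python
# Sketch: animate only when the swipe direction matches the subject's motion axis,
# unless the environment is configured to accept any direction.

def should_animate(dx: float, dy: float, dominant_axis: str,
                   allow_any_direction: bool = False) -> bool:
    """Gate animation on the dominant motion axis detected in the video."""
    if allow_any_direction:
        return True
    if dominant_axis == "vertical":
        return abs(dy) > abs(dx)
    if dominant_axis == "horizontal":
        return abs(dx) > abs(dy)
    return True

print(should_animate(dx=5, dy=80, dominant_axis="vertical"))    # True
print(should_animate(dx=90, dy=4, dominant_axis="vertical"))    # False
print(should_animate(dx=90, dy=4, dominant_axis="vertical",
                     allow_any_direction=True))                 # True
```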

[1048] The user interaction portion 244 (e.g., the portion of the visual display 230 (e.g., a touchscreen) that records user input) can be sized and/or shaped (e.g., by the system or by the user) in any suitable manner. In this manner, the visual display 230, in some instances, can be configured to record user input data received within the user interaction portion 244, and not record user input data received outside the user interaction portion 244. In some instances, the user interaction portion 244 corresponds to and/or is coextensive with the size and shape of the representation of the first image 242 and the representation of the second image 242'. In this manner, a user can interact with any portion of the representations at the compute device 202, as opposed to, for example, a progress bar smaller than the representation of the images 242, 242' and/or offset from the representation of the images 242, 242'. Said another way, overlapping the user interaction portion 244 with at least a portion (or all) of a representation of an image 242, 242' promotes ease of use for a user to interact with or animate the representations of the image 242, 242'. Defining the user interaction portion 244 in this manner promotes a more selective and interactive environment for the user as opposed to, for example, a traditional video progress bar or scrubber in which a user selectively engages or selectively interacts with a relatively small graphical representation of one or more frames from a video or a graphical representation of a time bar or slider.

[1049] In some instances, the user interaction portion 244 can correspond to and/or can be coextensive with a subject within the representation of the image instead of the entire representation of the image. In this manner, in such instances, a user can interact with the representation of the subject (which is overlapped by or within the user interaction portion) to animate the image. Such interaction provides the user with apparent animation control of a given subject, or said another way, such interaction creates an environment in which the user can feel like the subject within the image has come to life. For example, with reference to FIGS. 5A and 5B, a user interaction portion can correspond to and/or be coextensive with the representation of the horse (referenced as the subject 248). In such instances, a user can press the visual display 230 at or near the representation of the horse, as shown in FIG. 5A, to provide user input and animate the representation of the horse at the visual display 230. With the user interaction portion corresponding to the subject 248, in some instances, user interaction with the visual display 230 outside of the user interaction portion 244 will not cause the visual display 230 to animate, but instead, at least in some instances, can cause the visual display 230 to perform another suitable function, e.g., a zoom-in, zoom-out, or panning function. For example, in such instances, a user can press the visual display 230 outside of the user interaction portion 244 to cause the visual display 230 to display a zoomed-in representation of at least a portion of the image. Further, in response to a subsequent press by the user at the visual display 230 within the user interaction portion 244, the visual display 230 can display animation of the zoomed-in representation of the image. In some instances, with the user interaction portion corresponding to the subject 248, a user can press the visual display 230 at or near the representation of the subject (i.e., within the user interaction portion) to provide user input and animate only the subject (e.g., the representation of the horse) at the visual display 230, while the remaining portion of the representation of the image remains static. In this manner, the user can animate a portion of the graphical representation of the image less than the entire graphical representation of the image. In some instances, the user interaction portion can be defined by a user.
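One way to picture a subject-coextensive user interaction portion is a simple hit test against the subject's bounding region, routing touches inside it to animation and touches outside it to another function such as zoom. The Python sketch below assumes a bounding box supplied by foreground-object detection; the coordinates and names are hypothetical.

```python
# Sketch: route a touch to "animate" when it lands on the subject's bounding box,
# otherwise to another function such as "zoom".

def handle_touch(x: float, y: float, subject_box: tuple) -> str:
    """Return the action for a touch given the subject's bounding box (l, t, r, b)."""
    left, top, right, bottom = subject_box
    inside = left <= x <= right and top <= y <= bottom
    return "animate" if inside else "zoom"

horse_box = (120.0, 200.0, 420.0, 480.0)   # illustrative coordinates only
print(handle_touch(300, 350, horse_box))   # "animate" -- touch on the subject
print(handle_touch(40, 60, horse_box))     # "zoom"    -- touch outside the subject
```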

[1050] The controller 232 can analyze the user animation request to determine one or more criteria associated therewith, e.g., a velocity, acceleration, force, or distance of a user's finger swipe on the compute device 202, and define, create, or otherwise manipulate the image interaction environment 240 based on the one or more criteria. In this manner, the visual display 230 can display or cycle through a number of frames based on, for example, the velocity, acceleration, force, or distance of a user's finger swipe. For example, the controller 232 can define an image interaction environment 240 in which, in response to a user finger swipe having a first velocity, the controller 232 sends a signal to the visual display 230 to scrub the video for a number of frames in a first time period. Further to this example, in response to a user finger swipe having a second velocity greater than the first velocity, the controller 232 sends a signal to the visual display 230 to scrub the video or cycle through the number of frames faster, or in a second time period shorter than the first time period. In this manner, the velocity of the user input or user swipe can determine, at least in part, the speed at which the frames are scrubbed. As another example, in some instances, the controller 232 can define an image interaction environment 240 in which, in response to a user finger swipe having a first distance, the controller 232 sends a signal to the visual display 230 to scrub the video a particular number of frames. Said another way, the controller 232 determines (e.g., based on a ratio of swipe distance to a width or length of the visual display 230 and/or the user interaction portion 244) the number of frames to be scrubbed based on the distance associated with the user input. In this manner, animation of an image can correspond to or be based on specific parameters associated with the user animation request. Said another way, in this manner, a user can selectively manipulate or animate an image (or series of images) by providing gestures having particular criteria.
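A hedged Python sketch of the criteria-based mapping described above: the ratio of swipe distance to the width of the interaction portion selects how many frames to scrub, and the swipe velocity selects how quickly to cycle through them. The constants and names are illustrative assumptions.

```python
# Sketch: derive a scrub instruction from swipe distance and velocity.

def scrub_instruction(distance_px: float, velocity_px_s: float,
                      portion_width_px: float, frame_count: int) -> dict:
    """Map swipe distance/velocity to (frames to scrub, time to spend scrubbing)."""
    ratio = min(abs(distance_px) / portion_width_px, 1.0)
    frames = round(ratio * (frame_count - 1))
    # Faster swipes cycle through the same frames in a shorter period.
    duration_s = frames / max(velocity_px_s / 10.0, 1.0)
    return {"frames": frames, "duration_s": round(duration_s, 3)}

print(scrub_instruction(300, 1200, portion_width_px=400, frame_count=120))
# {'frames': 89, 'duration_s': 0.742}
```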

[1051] To provide a user with such a high level of controllability while maintaining visually seamless animation of the image (i.e., to avoid a choppy animation in which it appears to a user that animation of the image is skipping frames), the controller 232 can be configured to decode substantially continuously or sequentially (or "on the fly" or "in real time" such that an end user can view the animation without perceiving any delay in the animation or scrubbing between frames) at least a portion of the multimedia content received from and/or recorded at the image capturer 234 as it is received by the controller 232. For example, the controller 232 can decode each frame from a set of frames received from the image capturer 234 as each frame is received, as opposed to, for example, delaying frame decoding until multiple frames are received. Further, in some instances, each and every frame is decoded and no frames are skipped. In this manner, the representation of the image displayed at the compute device 202 can appear to animate seamlessly in direct accordance with a criterion of a user's interaction request, e.g., a velocity of a user's finger swipe. Compared to traditional video scrubbers, in which a finger swipe across, for example, 75% of a representation of a scrubber bar would cause a corresponding 75% of the video to be scrubbed (i.e., the video is scrubbed merely based on, for example, where the user placed or swiped his/her finger), in this embodiment, animation of an image can emulate or correspond to various criteria of a user's interaction request. In this manner, the user is provided with greater control and selectability of how an image is animated. Said another way, the user is able to better fine-tune image animation such that the user can view a representation of a particular section or frame of a video.
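
For illustration, a minimal sketch of frame-by-frame decoding using OpenCV is shown below, where each frame is decoded as it is read and none are skipped; the file path and the in-memory list are assumptions for this example, and a production implementation might decode into a bounded cache instead.

```python
# Sketch of sequential, frame-by-frame decoding so every frame is available
# for scrubbing. The file path is an assumed example input.
import cv2

def decode_all_frames(path):
    cap = cv2.VideoCapture(path)
    frames = []
    while True:
        ok, frame = cap.read()   # decodes the next frame as it is read
        if not ok:
            break
        frames.append(frame)     # keep every frame; no frames are dropped
    cap.release()
    return frames

frames = decode_all_frames("clip.mp4")  # assumed example file
print(len(frames), "frames decoded")
```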

[1052] Managing multimedia content by the controller 232, in this manner, can provide such seamless animation in a variety of circumstances. For example, with the controller 232 decoding frames on the fly, in instances in which such decoding is undesirably slow, animation of the representation of the image at the compute device 202 would not appear or feel choppy to a user, but instead, the user would feel only as if he/she needs to adjust or increase, for example, a velocity of his/her gesture (e.g., finger swipe). Thus, as opposed to a user experiencing skipped frames (and in turn a choppy user interaction or image animation environment) due to undesirably slow decoding, a user would simply experience or view slower, but still seamless, animation of the image in response to a particular user gesture having particular criteria (e.g., a specific velocity, acceleration, push force, etc.).

[1053] Some known systems attempt image animation using "burst" mode concepts, in which, for example, a low power mobile camera captures images at 10-12 frames per second. Such a low frame rate contributes to staccato or choppy image animation for the user, and in turn, the user experience or feeling of "bringing an image to life" is not available. In contrast, as discussed above, in this embodiment, the controller 232 can define and/or create such a desired user experience by decoding or decomposing a video (captured at a higher frame rate, e.g., 60 frames per second) sequentially into individual frames.

[1054] In some instances, the controller 232 is configured to perform substantially real-time frame interpolation to compensate for lower frame rates. For example, using two images overlaid with frame n and frame n+1, the n+1 frame's opacity can correspond to and/or be based on user gestures or movement to simulate additional frames between n and n+1. Optical flow and other frame interpolation algorithms can be used to increase the effective frame rate and provide for improved interaction or an improved user touch and image animation experience. Further, in some instances, the controller 232 can define and/or execute algorithms for moving optical flow calculations into a shader (not shown).
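
The following is a minimal sketch of the opacity-based overlay described above, in which a gesture progress value in [0, 1] sets the opacity of frame n+1; the progress computation from raw touch events, and the requirement that both frames share size and type, are assumptions for this example.

```python
# Sketch of simulating an intermediate frame between frame n and frame n+1
# by varying the opacity of the later frame according to gesture progress.
import cv2

def interpolated_frame(frame_n, frame_n1, progress):
    # progress == 0.0 shows frame n; progress == 1.0 shows frame n+1.
    alpha = max(0.0, min(1.0, progress))      # opacity of frame n+1
    return cv2.addWeighted(frame_n1, alpha, frame_n, 1.0 - alpha, 0.0)
```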

[1055] In some circumstances, for example, when capturing video at high frame rates (e.g., 30-60 frames per second or higher), pulsation or flickering or other image noise may occur due to, for example, fluorescent frequency and/or LED lighting (e.g., 50 Hz or 60 Hz). To remedy such pulsation or flickering, in some instances, multiple images and/or videos can be overlaid, e.g., with frame n and frame n+1 at 50% opacity. In this manner, for example, light flickering can be limited or eliminated from the user's viewing experience. An added benefit of performing such overlay techniques and/or frame interpolation in substantially real-time at the compute device 202 is the reduction of image or video data, thereby allowing for or promoting faster upload and/or download speeds of the video data. Said another way, the image or video files can be uploaded and downloaded without the interpolation information, and the interpolation information can be separately restored or added locally at the compute device 202.
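
As one illustrative sketch of the 50% opacity overlay described above, adjacent frames can be blended at equal weight so per-frame brightness pulsation averages out; the assumption that all frames share the same size and type is noted in the comments.

```python
# Sketch of limiting fluorescent/LED flicker by overlaying adjacent frames
# at 50% opacity each.
import cv2

def deflickered(frames):
    # Blend each frame n with frame n+1 at equal weight; the last frame is
    # returned unchanged. Frames are assumed to share size and type.
    out = []
    for i in range(len(frames) - 1):
        out.append(cv2.addWeighted(frames[i], 0.5, frames[i + 1], 0.5, 0.0))
    out.append(frames[-1])
    return out
```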

[1056] In some instances, a user interaction environment can include a graphical representation of multiple images. For example, as shown in FIG. 6A, a user interaction environment 340 includes a representation of a first image 348 from a set of images associated with a video, and a representation of a second image 350, and as shown in FIG. 6B, the user interaction environment 340 includes a representation of a subsequent or second image 348' from the set of images associated with the video, and a representation of a third image 352. In this example, a user can interact with (e.g., gesture, finger swipe, etc.) the user interaction environment 340 to scroll through multiple representations of images (e.g., image 348, image 350, image 352). As shown by the progression from FIG. 6A to FIG. 6B, as the user scrolls through the user interaction environment 340, the representation of the first image 348 changes or appears to the user to animate. In this manner, a user can interact with or animate images from multiple videos while cycling through the multiple videos. Said another way, as opposed to viewing and animating an image from a single video, a user can quickly and efficiently view and animate images from multiple videos. For example, as illustrated by FIGS. 6A and 6B, a user can scroll or finger swipe vertically to cycle through each image representation from the set of videos, as opposed to targeting and selectively interacting with a user interaction environment including only a single image or video.

[1057] In some instances, the controller 232 can be configured to define the user interaction environment 340 such that only one graphical image representation animates during a time period. In such instances, the controller 232, for example, can select a particular graphical image representation to animate based on the relationship of the image representation to the user interaction environment. For example, as shown in FIG. 6A, the first image 348 covers over half of the user interaction environment 340, while the second image 350 covers less than half of the user interaction environment 340. As such, in some instances, the controller 232 can be configured to select the first image 348 and not the second image 350 to animate as a user interacts with or scrolls through the user interaction environment. As the amount or percentage of coverage of each image changes as a user scrolls, the controller 232 can alter its selection of which image to animate.
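
By way of a hedged example, the selection of which representation to animate can be expressed as picking the image whose bounding rectangle overlaps the most of the interaction environment; the (x, y, width, height) rectangle data model below is an assumption made only for this sketch.

```python
# Sketch of selecting which image representation to animate based on how
# much of the user interaction environment each one currently covers.
# Rectangles are (x, y, width, height) tuples; the data model is assumed.

def visible_area(image_rect, viewport_rect):
    ix = max(image_rect[0], viewport_rect[0])
    iy = max(image_rect[1], viewport_rect[1])
    ax = min(image_rect[0] + image_rect[2], viewport_rect[0] + viewport_rect[2])
    ay = min(image_rect[1] + image_rect[3], viewport_rect[1] + viewport_rect[3])
    return max(0, ax - ix) * max(0, ay - iy)

def image_to_animate(image_rects, viewport_rect):
    # Pick the representation with the largest visible (overlapping) area.
    return max(range(len(image_rects)),
               key=lambda i: visible_area(image_rects[i], viewport_rect))
```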

[1058] Alternatively, in other instances, the controller 232 can be configured to define the user interaction environment 340 such that multiple graphical image representations simultaneously animate during a time period. In such instances, as a user scrolls or interacts with the user interaction environment 340, multiple images, e.g., the first image 348 and the second image 350, can simultaneously animate.

[1059] In some instances, although not shown, a user interaction environment can include both a graphical representation of an image and other information. For example, in some instances, a graphical representation of an image can be displayed in conjunction with a website on a browser or within an application. For example, a user can view a news website including a graphical representation of a product advertisement image. As the user scrolls through or otherwise interacts with the website, the graphical representation of the product advertisement image can animate. In this manner, one or more portions or images of a website displayed for a user can appear to come to life as a user interacts with other portions of the website. Such an interactive environment, for example, can provide advertisers or other media content producers greater opportunities to interact with viewers, in part, because viewers may become more engaged with and interested in an environment in which images appear to come to life.

[1060] As discussed above, a user interaction request can include, for example, a finger swipe by a user. In addition to or instead of a finger swipe, in some instances, a user interaction request can include movement (e.g., rotation, tilting, etc.) of the compute device 202. In such instances, the compute device 202 includes a motion sensor (e.g., a gyroscope, an accelerometer, a pedometer, etc.) configured to sense or detect such movement of the compute device 202. In response to or based on such motion detection, the controller 232 can create, define, or otherwise manipulate an image interaction environment. In this manner, a user can move the compute device 202, e.g., rotate the compute device 202, to cause animation of a representation of an image displayed within the image interaction environment. For example, with reference to the illustrated animation of FIGS. 5A and 5B, disregarding the finger touch representation 246, the visual display 230 can transition from displaying the image 242 to displaying the image 242' based upon detection of orientation and/or movement by, for example, the gyroscope and/or accelerometer. In this manner, a user can animate an image or series of images or videos without touching a user interface (e.g., a touch screen) of the compute device 202. As such, the user can experience or view animation without interrupting his/her view with a finger swipe, thereby enhancing the overall user experience. Further, allowing a user to animate an image by way of moving or tilting a compute device 202 (e.g., a mobile phone) allows for accurate and precise control and selectability by the user (e.g., tilt right to cause video to scrub forward and/or tilt left to cause video to scrub backward, or vice versa).

[1061] In some instances, the controller 232 can analyze the user animation request to determine one or more criteria associated therewith, e.g., a velocity, acceleration, force, direction or distance associated with movement of the compute device 202, and define, create, or otherwise manipulate the image interaction environment based on the one or more criteria. In this manner, the visual display 230 can display or cycle through a number of frames based on, for example, the velocity, acceleration, force, or distance associated with rotation or tilt of the compute device 202. For example, the controller 232 can define an image interaction environment in which, in response to a rotation of the compute device 202 having a first velocity, the controller 232 sends a signal to the visual display 230 to scrub the video or cycle through a number of frames in a first time period. Further to this example, in response to a rotation of the compute device 202 having a second velocity greater than the first velocity, the controller 232 sends a signal to the visual display 230 to scrub the video or cycle through the number of frames faster, or in a second time period shorter than the first time period. In this manner, animation of an image can correspond to or be based on specific parameters associated with the user animation request. Said another way, in this manner, a user can selectively manipulate or animate an image (or series of images) by moving, rotating, or tilting the compute device 202 in a particular manner. As a further example, animation of the image can be based on multiple criteria, e.g., a velocity and direction, associated with a rotation of the compute device 202. For example, referring to FIGS. 5A and 5B, a user can rotate the compute device 202 in a clockwise fashion to animate the images as shown, i.e., rotating the compute device 202 clockwise can cause the image to appear to animate forward in time. Similarly, rotation of the compute device 202 in a counterclockwise fashion can cause the image to animate in the opposite direction, or backwards in time.
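
For illustration, one way to map gyroscope output to scrub direction and speed is sketched below; the sign convention (positive angular velocity treated as clockwise) and the gain constant are assumptions made only for this example.

```python
# Sketch of mapping device rotation to scrub direction and speed:
# clockwise rotation scrubs forward, counterclockwise backward, and faster
# rotation advances more frames per update.

def frame_delta_from_rotation(angular_velocity_deg_s, gain=0.1):
    # Positive angular velocity is treated here as clockwise rotation.
    return int(angular_velocity_deg_s * gain)

def next_frame_index(current_index, angular_velocity_deg_s, total_frames):
    delta = frame_delta_from_rotation(angular_velocity_deg_s)
    return max(0, min(total_frames - 1, current_index + delta))
```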

[1062] In some instances, the controller 232 can perform image or video analysis to determine motion within the video and/or media content. For example, the controller 232 can analyze one or more images to detect or determine primary movement and a direction associated therewith, e.g., of a subject of the image(s). As an example, in use, the controller 232 can conduct image analytics on a video to detect motion in a non-horizontal direction of a foreground object from one or more image frames of the video. Motion detection and motion direction detection can be performed using a variety of approaches, including, for example, optical flow techniques such as the Farneback algorithm. For example, blobs and blob sizes can be assessed to detect and rank degrees of various movements, for example, to determine primary and/or secondary movements.
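
As a hedged sketch of one such approach, the dominant displacement between two frames can be estimated with OpenCV's Farneback dense optical flow; the parameter values below are common defaults chosen as assumptions, and averaging the flow field is only one simple way to summarize "primary" motion.

```python
# Sketch of detecting primary motion and its direction between two frames
# using Farneback dense optical flow.
import cv2
import numpy as np

def primary_motion(frame_a, frame_b):
    gray_a = cv2.cvtColor(frame_a, cv2.COLOR_BGR2GRAY)
    gray_b = cv2.cvtColor(frame_b, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(gray_a, gray_b, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mean_dx = float(np.mean(flow[..., 0]))   # average horizontal displacement
    mean_dy = float(np.mean(flow[..., 1]))   # average vertical displacement
    magnitude = (mean_dx ** 2 + mean_dy ** 2) ** 0.5
    angle_deg = float(np.degrees(np.arctan2(mean_dy, mean_dx)))
    return magnitude, angle_deg  # dominant displacement and its direction
```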

[1063] Such primary motion identification and motion direction, as described above, can improve the user's interactive experience with the image(s). In this manner, the effectiveness of a user gesture input can be based on and/or change depending on the content of each video. For example, if a user seeks to animate an image in which a subject is jumping to dunk a basketball, a horizontal finger swipe or compute device 202 rotation about a vertical axis may detract from the user experience. If instead, the controller 232 defines a user interaction environment, based on motion and motion direction detection, in which the image animates (e.g., the subject jumps) in response to or based on an image interaction request in accordance with or corresponding to the detected motion and motion direction (e.g., a vertical finger swipe or compute device 202 rotation about a horizontal axis), the user animation experience is greatly improved. For ease of illustration, the above example describes only horizontal and vertical motion; however, it should be understood that motion can be detected in any direction, e.g., a diagonal direction, and similarly, user input such as a finger swipe or compute device 202 rotation can have a diagonal or off-axis direction.

[1064] In some instances, for example, in which a video is captured when the image capturer 234 and/or the compute device 202 is moving, the controller 232 can compensate its motion detection of one or more subjects of the video based on detection of such movement of the image capturer 234 and/or the compute device 202. The controller 232 can detect movement of the image capturer 234 and/or the compute device 202 based on data captured by the sensor (e.g., a gyroscope, accelerometer, and the like, and/or a combination thereof) at the compute device 202. In some instances, the controller 232 can be configured to use one or more thresholds to define or determine motion and motion direction of one or more subjects within the video to be animated, while compensating for any inaccuracies due to movement of the image capturer 234 and/or the compute device 202 when the video was captured. For example, in some instances, if the compute device 202 and/or the image capturer 234 is moved or rotated slightly (e.g., below a predefined threshold), the controller 232 will not incorporate such movement in its primary motion identification and motion direction detection. If, however, the compute device 202 is moved or rotated beyond the predefined threshold, the controller 232 will compensate for such movement by determining the primary motion and motion direction of one or more subjects of the video while accounting for the movement of the compute device 202 and/or the image capturer 234. Incorporating device movement in this manner promotes more accurate, consistent, and repeatable motion detection and motion direction detection with respect to subjects within images and/or videos to be animated at the visual display 230.
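
A minimal sketch of the thresholded compensation described above is given below; the 2-D motion vectors and the threshold value are illustrative assumptions, and a fuller implementation would derive the device-motion vector from gyroscope and/or accelerometer data.

```python
# Sketch of compensating subject-motion estimates for camera/device motion:
# device motion below a threshold is ignored, while larger device motion is
# subtracted from the measured subject motion.

def compensated_subject_motion(subject_motion, device_motion, threshold=2.0):
    dx, dy = device_motion
    if (dx ** 2 + dy ** 2) ** 0.5 < threshold:
        return subject_motion          # small device movement: ignore it
    sx, sy = subject_motion
    return (sx - dx, sy - dy)          # large device movement: compensate
```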

[1065] In some instances, in addition to or instead of automatic motion detection by the controller 232, the controller 232 can define the image interaction environment based on a user request for animation to be based on or responsive to a particular movement. For example, a user can select a temporal direction of the image scrubbing or animation to match a particular gesture, e.g., a swipe or gesture from left to right to animate the image forward in time, or alternatively, a swipe or gesture from left to right to animate the image backward in time. In this manner, the user can define animation of the image in a way that feels natural.

[1066] As discussed herein, in some instances, a graphical representation of an image can be displayed and animated based upon one or more user interaction request criteria. In some instances, the image representation can be displayed based on one or more criteria exceeding a predefined threshold. For example, if a user gestures or moves the compute device 202 quickly or forcefully such that a sensed velocity or force, for example, exceeds one or more thresholds, the visual display 230 may display the first or final image from the series of images, i.e., the beginning or end of the video or animation. Implementing such thresholds can allow a user to quickly and efficiently view the beginning or end (e.g., first frame or last frame) of an animation. In some instances, based on one or more criteria exceeding one or more predefined thresholds, the controller 232 is configured to send an extremity indication signal to the visual display 230 such that the visual display 230 displays at the compute device 202 an extremity indication representing or indicating the beginning and/or ending of an animation. An extremity indication can include any suitable visual or audio indication. For example, in some instances, an extremity indication can include a graphical representation of the animated image in a bouncing motion, e.g., appearing to the user as if the image has collided with a physical limit or stop member such that the image appears to bounce. Such bounce can indicate to the user that the current graphical representation of an image being displayed by the visual display 230 is either an image first in time or last in time relative to the series of images from which the image is taken.
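
The following hedged sketch shows one way such a threshold could be applied: when gesture velocity exceeds a limit, the frame index jumps to the first or last frame and a flag signals that an extremity (bounce) indication should be rendered; the threshold value and return convention are assumptions for this example.

```python
# Sketch of jumping to the first or last frame when a gesture criterion
# exceeds a threshold, and signaling an "extremity indication".

def resolve_frame(current_index, velocity, total_frames, velocity_limit=5000.0):
    if abs(velocity) >= velocity_limit:
        index = total_frames - 1 if velocity > 0 else 0
        return index, True            # True: show extremity (bounce) indication
    return current_index, False
```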

[1067] FIG. 7 is a flow chart illustrating a method of defining and manipulating an image interaction environment, according to an embodiment. At 410, the method optionally includes inputting video captured at a given frame rate, such as a high frames-per-second (FPS) rate. In some instances, the video can be captured and/or recorded at the compute device 202, while in other instances the video can be captured at a device separate from the compute device and sent to or retrieved by the compute device 202. For example, the video can be pre-stored at the compute device 202 or a device separate from the compute device 202. At 420, the method includes optionally clipping the video to, e.g., one second. In other instances, the video can be clipped to any suitable time period (e.g., 3 seconds, 5 seconds, 30 seconds, or more). In some instances, the method can include clipping the video, while in other instances, the method can include receiving a video that has already been clipped. Further, in some instances, the video can be clipped to remove a beginning portion and/or an end portion of the video. In yet further instances, at least a portion of the video can be time-compressed or time-expanded. For example, a 10-second video can be reduced to a 1-second video by removing frames to be displayed. At 430, the clipped video is decompressed and rendered as a sequence of static images with a single image representing the whole (e.g., similar to as shown in FIG. 5A in which image 242 is a single image representing a series of images including the second image 242'). At 440, the method includes determining the compute device's 202 physical orientation (e.g., a vertical or portrait orientation, a horizontal or landscape orientation, or some orientation therebetween). At 450, the method includes mapping the video frames to coordinates of a user's touch on the visual display 230 and/or orientation of the device. At 460, the method includes detecting the user's image interaction request, e.g., a user touch on the display of the device (or in other instances, movement of the compute device 202). Based upon such detection, at 470, the method includes displaying single mapped image(s) as the user interacts with the compute device 202 via any of the manners discussed herein, e.g., by physical orientation of the compute device 202, and/or location or nature of the user's physical touching (e.g., a finger swipe) of the compute device 202.
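
As a hedged illustration of the mapping and display steps (450-470), the horizontal touch coordinate can be mapped linearly onto the decoded frame sequence; the linear mapping and the in-memory frame list are assumptions made only for this sketch, and the orientation-based variant would substitute a tilt angle for the touch coordinate.

```python
# Sketch of mapping a touch x-coordinate on the display to a frame index,
# then returning that frame for display.

def frame_for_touch(touch_x, display_width, frames):
    ratio = max(0.0, min(1.0, touch_x / float(display_width)))
    index = int(ratio * (len(frames) - 1))
    return frames[index]
```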

[1068] As described in previous embodiments, an interactive image system can include a compute device such as a smartphone, tablet, or the like configured to provide interactive images to a user. In some embodiments, an interactive image system can include a flexible portable housing incorporating a flexible electronic ink display. In this manner, images can be stored, displayed, animated and/or otherwise interacted with on a device that aesthetically emulates a printed image (e.g., by way of being flexible and lightweight and having a form factor suitable for portability) but also includes an interactive user experience as described in previous embodiments with respect to, for example, the interactive image system 100. FIG. 8 is a block diagram depicting an interactive image system 500, according to such an embodiment.

[1069] The interactive image system 500 can be constructed similar to and/or function similar to or the same as any of the interactive image systems described herein (e.g., the interactive image system 100). Thus, some details regarding the interactive image system 500 are not described below. It should be understood that for features and functions not specifically discussed, those features and functions can be the same as or similar to any of the interactive image systems described herein.

[1070] As shown in FIG. 8, the interactive image system 500 includes a flexible portable housing 502 arranged to be in operable communication with and receive image and/or video data from a video capturer 504. Although in this embodiment the video capturer 504 is separate from the flexible portable housing 502, in other embodiments, the video capturer 504 can be integral with and/or part of the flexible portable housing 502.

[1071] The flexible portable housing 502 includes an image processor 514, a flexible electronic ink image display 506, and a sensor 540. The flexible portable housing 502 can be constructed similar to and/or function the same as or similar to any of the compute devices described herein (e.g., compute device 102, compute device 202, or the like). Thus, some details regarding the flexible portable housing 502 are not described below. It should be understood that for features and functions not specifically discussed, those operations and/or functions can be the same as or similar to any of the compute devices described herein.

[1072] The flexible portable housing 502 can be formed of any material or combination of materials suitable to flex (e.g., bend, roll, fold) to a certain extent without altering the functional and/or operational characteristics of the components contained therein. For example, in some instances, the flexible portable housing 502 can be a thin and flexible plastic or polymer enclosure. The flexible portable housing 502 can have a thickness similar to a common instant print. The flexible portable housing 502 can emulate instant and/or developed film image prints on the flexible electronic ink image display 506 using electronic paper technologies such as, but not limited to, Gyricon, electrophoretic, electrowetting, interferometric modulator, plasmonic electronic displays, and/or the like. The flexible portable housing 502 provides physicality (e.g., small form factor, photo-like feel), portability (e.g., lightweight and easy to carry), and an easy-to-use user interface.

[1073] The image processor 514 can be any suitable processing device configured to run and/or execute a set of instructions or code such as, for example, image and/or video related processing. The image processor 514 can be constructed similar to or the same as and/or function similar to or the same as any of the processors described herein (e.g., the processor 114, the processor 124, the processor 212). Thus, some details regarding the image processor 514 are not described below. It should be understood that for features and functions not specifically discussed, those features and functions can be the same as or similar to any of the processors described herein. Further, although not shown, the flexible portable housing 502 can further include a memory (e.g., similar to or the same as the memory 112) configured to store instructions executable by the image processor 514. The image processor 514 can be configured to send signals to the flexible electronic ink image display 506 (also referred to herein as "image display 506") to cause the image display 506 to render a graphical representation of an image interaction environment (similar to as discussed with respect to the image interaction environment 240).

[1074] The image display 506 is operatively coupled to the image processor 514 and configured to receive one or more images from the image processor 514 and display a graphical representation of the one or more images. The image display 506 can be constructed similar to and/or can function the same as or similar to any of the visual displays described herein, e.g., the visual display 106 (refer FIG. 1), the visual display 230 (refer FIG. 4). Thus, some details regarding the image display 506 are not described below. It should be understood that for features and functions not specifically discussed, those features and functions can be the same as or similar to any of the visual displays described herein. For example, similar to previous embodiments, the image display 506 can include a touchscreen input component or feature configured to receive input data or coordinates in response to a user touching the touchscreen.

[1075] The sensor 540 is operably coupled to the image processor 514 and is configured to detect movement, orientation, and/or motion of the flexible electronic ink image display 506 and/or the flexible portable housing 502 to produce sensed data, and then send the sensed data to the image processor 514. The sensor 540, for example, can be a gyroscope or an accelerometer or the like. Although only one sensor is shown in this embodiment, in other embodiments, a flexible portable housing can include multiple sensors (e.g., both a gyroscope and an accelerometer). The sensor 540 can be constructed similar to and/or function the same as or similar to any of the sensors (e.g., the sensors of compute device 202) described herein. Thus, some details regarding the sensor 540 are not described below. It should be understood that for features and functions not specifically discussed, those features and functions can be the same as or similar to the sensors described in other embodiments herein.

[1076] The video capturer 504 can be any suitable component, subsystem, device and/or combination of devices configured to capture an image or series of images. The video capturer 504 can be constructed the same as or similar to any of the image capturers described herein (e.g., the image capturer 104, the image capturer 234, and the like). Thus, some details regarding the video capturer 504 are not described below. It should be understood that for features and functions not specifically discussed, those features and functions can be the same as or similar to any of the image capturers described herein. The video capturer 504 can be configured to capture one or more images and send a signal representing the one or more images to the flexible portable housing 502 for suitable processing and/or analysis by the image processor 514, as described in further detail herein. Although the flexible portable housing 502 is shown and described as receiving image data from the video capturer 504, in other embodiments, the flexible portable housing 502 can receive image data from any suitable source, e.g., any compute device containing the image data.

[1077] In use, the flexible portable housing 502 can receive and/or retrieve image data generated by the video capturer 504. The received or retrieved image data can then be displayed by the image processor 514 on the image display 506 in any suitable manner. For example, in some instances, the image data can be displayed and/or animated on the image display 506 in accordance with or based on user-generated input. The user-generated input, as discussed in more detail in previous embodiments, can include, for example, a user touch or finger-swipe at a touchscreen input component or feature of the image display 506 or movement of the flexible portable housing 502 (as sensed by the sensor 540).

[1078] In some embodiments, power used by the flexible portable housing 502 can be toggled based on data generated by the sensor 540. For example, in such instances, a user can rotate, shake, or otherwise move the flexible portable housing 502 to toggle between power ON and power OFF. Once powered on, the flexible portable housing 502 can operatively pair to the video capturer 504 and/or any other suitable compute device.

[1079] In some implementations, a flexible portable housing can include a hardware control (e.g., a push button or switch) designed to be engaged by a user to power ON and/or OFF the flexible portable housing. In such implementations, the flexible portable housing can be powered ON and/or OFF by activation of the hardware control, by data generated by the sensor 540, or both. In other implementations, a flexible portable housing can be designed without any such hardware control such as a push button or switch. In such embodiments, powering ON and OFF the flexible portable housing is limited to data generated by the sensor 540, as described above. Providing a flexible portable housing in this manner contributes to the aesthetic look and feel of the flexible portable housing, i.e., the flexible portable housing can appear similar to an image print.

[1080] In a similar manner, in some implementations, the image processor 514 can be configured to pair with the video capturer 504 and/or other suitable compute devices in response to data generated by the sensor 540. For example, in such embodiments, a user can shake the flexible portable housing 502 in a manner detectable by the sensor 540. The sensor 540 in response can send a signal to the image processor 514 representing the shake, causing the image processor 514 to initiate a pairing. The pairing, in some instances, can configure the image processor 514 to receive image data (e.g., via Bluetooth® Low Energy, Wi-Fi®, or the like) from the device(s) to which it becomes paired.
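
For illustration only, a shake can be detected by counting accelerometer samples whose magnitude exceeds a threshold and then invoking a pairing callback; the sample format (m/s^2 triples), the threshold, the peak count, and the initiate_pairing callback are all assumptions for this sketch.

```python
# Sketch of detecting a shake from accelerometer samples and using it to
# trigger pairing with a video capture device.

GRAVITY = 9.81  # m/s^2

def is_shake(samples, threshold=2.5 * GRAVITY, min_peaks=3):
    # Count samples whose acceleration magnitude exceeds the threshold.
    peaks = sum(1 for (x, y, z) in samples
                if (x * x + y * y + z * z) ** 0.5 > threshold)
    return peaks >= min_peaks

def on_sensor_window(samples, initiate_pairing):
    if is_shake(samples):
        initiate_pairing()   # e.g., start a Bluetooth Low Energy pairing flow
```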

[1081] FIG. 9 is an exploded view of a flexible portable housing 660 of an interactive image system 600, according to another embodiment. The interactive image system 600 can be constructed similar to and/or function similar to or the same as any of the interactive image systems described herein (e.g., the interactive image system 500). Thus, some details regarding the interactive image system 600 are not described below. It should be understood that for features and functions not specifically discussed, those features and functions can be the same as or similar to any of the interactive image systems described herein.

[1082] As shown, the flexible portable housing 660 includes a flexible electronic ink display 610, a network chipset 622, a graphic controller 624, a flexible battery 630, a lenticular film 650, a layer of solar film 640, and one or more sensors (not shown; e.g., an accelerometer, a gyroscope, or the like). The flexible portable housing 660 can be constructed similar to and/or function similar to or the same as any of the flexible portable housings described herein (e.g., the flexible portable housing 502). Thus, some details regarding the flexible portable housing 660 are not described below. It should be understood that for features and functions not specifically discussed, those features and functions can be the same as or similar to any of the flexible portable housings described herein.

[1083] In this embodiment, the flexible portable housing 660 can be constructed from any suitable material or combination of materials (e.g., plastics, polymers, etc.). At least a portion of the flexible portable housing 660 includes a clear or transparent flexible protective film. The flexible portable housing 660 is lightweight, flexible, and durable enough to travel in a bag or pocket without breaking— similar to a printed photograph.

[1084] The flexible electronic ink display 610 can be constructed similar to and/or function similar to or the same as any of the displays described herein (e.g., the flexible electronic ink display 510). Thus, some details regarding the flexible electronic ink display 610 are not described below. It should be understood that for features and functions not specifically discussed, those features and functions can be the same as or similar to any of the displays described herein. In this embodiment, the flexible electronic ink display 610 is a thin, paper-like display screen.

[1085] The network chipset 622 is a computer hardware component that connects the flexible electronic ink display 610 and/or the graphic controller 624 to a computer network (not shown in FIG. 9). The network chipset 622 can include network interface controllers (not shown in FIG. 9) for managing data flow to and/or from the network. The network chipset 622 is configured to receive data from the network and can perform network-related processing on the received data. The network-related processing can involve stripping the network-related headers and/or trailers from the received data. In some instances, the received data is graphic-related data, for example, an image and/or video file captured by an image capturer. After the completion of the network-related processing, the received data is then sent to the graphic controller 624. The graphic controller 624 processes the received data from the network chipset 622 and performs graphic-related processing to produce the dots and lines on the electronic ink display 610 that represent an image and/or video. The graphic controller 624 processes the received graphic-related data to analyze different graphic parameters (such as monochrome/color, 2D/3D graphics, screen resolution, screen form factor, and/or the like). The graphic controller 624 is configured to display the processed graphic-related data on the flexible electronic ink display 610.

[1086] In some instances, the network chipset 622 can receive data in either burst mode or continuous mode. The network chipset 622 can include any suitable component, subsystem and/or device to communicate with the network, such as, for example, a network interface card and/or the like that can include at least a wireless radio (e.g., a WiFi® radio, a Bluetooth® radio, etc.). In other embodiments, the network chipset and/or the graphic controller can be realized as a software program executed on a single processor (not shown in FIG. 9).

[1087] The lenticular film 650 is coupled atop the flexible electronic ink display 610 and configured to illustrate movement, such as frames of a recorded video, when the device is tilted or the angle of view changes. The lenticular film 650 is an array of magnifying lenses or lenticules, designed so that when viewed from slightly different angles, different images are magnified enough for viewing. Known electronic ink technology has a limited refresh rate, resulting, for example, in fewer than 30 frames per second. By using lenticules, the effect of frame changes could be emulated. This would be accomplished by interlacing multiple frames of an animation into a single image with specified intervals for each frame, such that when tilting the device, the line of sight for a particular set of image strips becomes magnified and visible. Continuing to tilt the device would give the impression of an animation occurring.
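
A minimal sketch of the interlacing step described above is shown below, taking alternating column strips from each frame so that the lenticular film can reveal one frame per viewing angle; the strip width, the frame cycling scheme, and the H x W x 3 numpy frame format are assumptions made only for this example.

```python
# Sketch of interlacing multiple animation frames into a single image by
# taking alternating column strips from each frame.
import numpy as np

def interlace(frames, strip_width=1):
    n = len(frames)
    out = np.zeros_like(frames[0])
    width = frames[0].shape[1]
    for col in range(0, width, strip_width):
        source = (col // strip_width) % n   # cycle through frames strip by strip
        out[:, col:col + strip_width] = frames[source][:, col:col + strip_width]
    return out
```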

[1088] In other embodiments, a flexible portable housing similar to the flexible portable housing 660 does not include lenticular film, but can provide interactive and animated images similar to as discussed in previous embodiments (e.g., based on user-generated input at the display and/or sensor(s)). In some instances, in any of the embodiments described herein, touch interaction could be via a capacitive or non-capacitive component embedded in the flexible electronic ink display.

[1089] The flexible battery 630 provides power to any of the components (e.g., the flexible electronic ink display 610, the network chipset 622, the graphic controller 624, etc.). The flexible battery 630 can have characteristics such as, but not limited to, a small form factor, light weight, flexibility, and a capacity sufficient to support the components and functionality described herein. In some instances, for example, the flexible battery 630 can be an alkaline, zinc-carbon, or lithium-ion battery, and/or the like. The flexible battery 630 may include multiple cells connected together to fulfill the voltage (and/or current) and/or power requirements of the various components within the flexible portable housing 660.

[1090] The solar film 640 provides an additional source of energy to the electronic components within the flexible portable housing 660 and can provide power, for example, to the flexible battery 630, the flexible electronic ink display 610, and the like. In other embodiments in which a flexible portable housing does not include a battery or includes a battery with limited capabilities, solar film can be configured to provide power sufficient to power any and/or all of the components of the flexible portable housing.

[1091] In the present embodiment, the solar film 640 is disposed on the rear side of the flexible portable housing 660; however, in other embodiments, solar film can be disposed in any suitable location or locations (e.g., front, left, right, top, bottom, sandwiched between various layers of the flexible portable housing 660, and the like). In some embodiments, solar film can be positioned to promote optimum conversion of energy from the incident external light.

[1092] In use, the flexible portable housing 660 can conserve energy by managing the power usage of its radio. For example, the flexible portable housing 660 can listen for movement or touch via a sensor such as an embedded accelerometer or capacitive or non-capacitive component (not shown). In such instances, in response to detecting movement by the sensor(s), the radio can be powered on or modified to a more active state such that it is ready for data transmission to/from the flexible portable housing 660. For example, the flexible portable housing 660 could be in a very low power state or an off state, and in response to detecting that the flexible portable housing 660 has been shaken, touched, rotated, etc. by a user, it can turn on or otherwise increase power to particular components therein.
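
The following sketch illustrates one way such radio power gating could be structured, with the radio returning to a low-power state after a period with no sensed activity; the Radio interface (a set_state method), the state names, and the idle timeout are assumptions made only for this example.

```python
# Sketch of gating the radio's power state on sensed movement or touch:
# the radio stays in a low-power state until the sensor reports activity.

class RadioPowerManager:
    def __init__(self, radio, idle_timeout_s=30.0):
        self.radio = radio                  # assumed object with set_state()
        self.idle_timeout_s = idle_timeout_s
        self.idle_for_s = 0.0

    def on_tick(self, dt_s, movement_detected):
        if movement_detected:
            self.idle_for_s = 0.0
            self.radio.set_state("active")      # ready for data transfer
        else:
            self.idle_for_s += dt_s
            if self.idle_for_s >= self.idle_timeout_s:
                self.radio.set_state("low_power")
```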

[1093] FIG. 10 is a flow chart illustrating a method 700 of providing an interactive image using a flexible portable housing including a flexible electronic ink image display, according to an embodiment. At 702, the method includes causing the flexible electronic ink image display to display a graphical representation of a first image frame from a video generated by a video capture device separate from and wirelessly operably coupled to the flexible portable housing. At 704, the method further includes receiving animation data generated (1) by a sensor, and (2) based on a user-generated animation request (e.g., a finger touch or swipe at the image display, and/or rotation of the flexible portable housing). At 706, the method further includes, based on the animation data, causing the flexible electronic ink image display to replace the graphical representation of the first image frame with a graphical representation of a second image frame from the video. The second image frame is generated by the video capture device after the first image frame is generated by the video capture device (e.g., the first image frame and the second image frame can be sequentially captured by the video capture device).

[1094] FIG. 11 is a flow chart illustrating a method 800 of providing an interactive image, according to another embodiment. At 802, the method includes displaying at a visual display device a graphical representation of a first image frame from a video generated by a video capture device. At 804, the method further includes receiving at a user interaction portion of the visual display device a user-generated animation request. The user interaction portion is coextensive with at least one of (1) an object in both the first image frame and a second image frame, or (2) both the graphical representation of the first image frame and a graphical representation of the second image frame. The second image frame is generated by the video capture device after the first image frame is generated by the video capture device. At 806, the method further includes, in response to the animation request, replacing the graphical representation of the first image frame with the graphical representation of the second image frame.

[1095] FIG. 12 is a flow chart illustrating a method 900 of providing an interactive image, according to another embodiment. At 902, the method includes displaying at a visual display device a graphical representation of a first image frame from a video generated by a video capture device. At 904, the method further includes receiving animation data generated by at least one sensor (including at least one of an accelerometer or a gyroscope). The animation data is generated based on a user-generated animation request. At 906, the method further includes, in response to the animation data, replacing the graphical representation of the first image frame with a graphical representation of a second image frame. The second image frame is generated by the video capture device after the first image frame is generated by the video capture device.

[1096] FIG. 13A is a block diagram in front view depicting a flexible portable housing 1002 and its dimensions, according to an embodiment; and FIG. 13B is a block diagram in side view depicting the flexible portable housing 1002 of FIG. 13A. The flexible portable housing 1002 can be constructed similar to and/or function the same as or similar to any of the flexible portable housings (e.g., flexible portable housing 502, or the like) and/or compute devices described herein (e.g., compute device 102, compute device 202, or the like). Thus, some details regarding the flexible portable housing 1002 are not described below. It should be understood that for features and functions not specifically discussed, those operations and/or functions can be the same as or similar to any of the compute devices described herein. As shown, the flexible portable housing 1002 includes a flexible electronic ink image display 1006, a battery 1030, and an image processor 1014. Although not shown, the flexible portable housing 1002 can include any additional suitable components, similar to the flexible portable housings and compute devices described with respect to other embodiments herein. For example, in some implementations, the flexible portable housing 1002 can further include one or more sensors, one or more lenticular lenses, one or more solar films, and the like.

[1097] The image processor 1014 can be any suitable processing device configured to run and/or execute a set of instructions or code such as, for example, image and/or video related processing. The image processor 1014 can be constructed similar to or the same as and/or function similar to or the same as any of the processors described herein (e.g., the processor 114, the processor 124, the processor 212). Thus, some details regarding the image processor 1014 are not described below. It should be understood that for features and functions not specifically discussed, those features and functions can be the same as or similar to any of the processors described herein. Further, although not shown, the flexible portable housing 1002 can further include a memory (e.g., similar to or the same as the memory 112) configured to store instructions executable by the image processor 1014. The image processor 1014 can be configured to send signals to the flexible electronic ink image display 1006 (also referred to herein as "image display 1006") to cause the image display 1006 to render a graphical representation of an image interaction environment (similar to as discussed with respect to the image interaction environment 240).

[1098] The image display 1006 is operatively coupled to the image processor 1014 and configured to receive one or more images from the image processor 1014 and display a graphical representation of the one or more images. The image display 1006 can be constructed similar to and/or can function the same as or similar to any of the visual displays described herein, e.g., the visual display 106 (refer FIG. 1), the visual display 230 (refer FIG. 4). Thus, some details regarding the image display 1006 are not described below. It should be understood that for features and functions not specifically discussed, those features and functions can be the same as or similar to any of the visual displays described herein. For example, similar to previous embodiments, the image display 1006 can include a touchscreen input component or feature configured to receive input data or coordinates in response to a user touching the touchscreen.

[1099] As described in previous embodiments herein, a flexible portable housing can be sized and arranged in any suitable manner. In this embodiment, the flexible portable housing 1002 is shown having exemplary dimensions, in millimeters, selected to emulate an instant print in an electronic format and to provide a form factor suitable to be held in a user's hand and suitable for easy portability, as described in more detail above.

[1100] As shown in FIG. 13B, the flexible portable housing 1002 has varying thickness to accommodate the components disposed therein. Specifically, the flexible portable housing defines a first portion 1002A having a first thickness h1 and containing the battery 1030 and the image processor 1014, and defines a second portion 1002B having a second thickness h2 less than the first thickness and housing the flexible electronic ink image display 1006. It should be evident that the dimensions shown in FIGS. 13A and 13B are simply exemplary in accordance with an embodiment, and in other embodiments or implementations, the flexible portable housing and its components can have dimensions different from those shown in FIGS. 13A and 13B.

[1101] Although various embodiments have been described as having particular features and/or combinations of components, other embodiments are possible having a combination of any features and/or components from any of the embodiments discussed above. For example, charging the device could instead be done with, or in combination with, inductive coupling or the use of wireless energy. Additionally, some image processing could occur on the Frame device itself instead of requiring a smartphone or connected camera.

[1102] While generally described herein as image animation, which for ease of explanation is intended to describe an animation experience from the perspective of a user viewing and interacting with a graphical display at the compute device 202, it should be understood that image animation includes displaying of and/or cycling through (back and forth) multiple images over a period of time (e.g., similar to displaying a video), for example, frame-by-frame viewing of a video based on user instruction.

[1103] Some embodiments described herein relate to a computer storage product with a non-transitory computer-readable medium (also can be referred to as a non-transitory processor-readable medium) having instructions or computer code thereon for performing various computer-implemented operations. The computer-readable medium (or processor-readable medium) is non-transitory in the sense that it does not include transitory propagating signals per se (e.g., a propagating electromagnetic wave carrying information on a transmission medium such as space or a cable). The media and computer code (also can be referred to as code) may be those designed and constructed for the specific purpose or purposes. Examples of non-transitory computer-readable media include, but are not limited to: magnetic storage media such as hard disks, floppy disks, and magnetic tape; optical storage media such as Compact Disc/Digital Video Discs (CD/DVDs), Compact Disc-Read Only Memories (CD-ROMs), and holographic devices; magneto-optical storage media such as optical disks; carrier wave signal processing; and hardware devices that are specially configured to store and execute program code, such as Application-Specific Integrated Circuits (ASICs), Programmable Logic Devices (PLDs), Read-Only Memory (ROM) and Random-Access Memory (RAM) devices.

[1104] Examples of computer code include, but are not limited to, micro-code or microinstructions, machine instructions, such as produced by a compiler, code used to produce a web service, and files containing higher-level instructions that are executed by a computer using an interpreter. For example, embodiments may be implemented using imperative programming languages (e.g., C, Fortran, etc.), functional programming languages (Haskell, Erlang, etc.), logical programming languages (e.g., Prolog), object-oriented programming languages (e.g., Java, C++, etc.) or other suitable programming languages and/or development tools. Additional examples of computer code include, but are not limited to, control signals, encrypted code, and compressed code.

[1105] Where schematics and/or embodiments described above indicate certain components arranged in certain orientations or positions, the arrangement of components may be modified. While the embodiments have been particularly shown and described, it will be understood that various changes in form and details may be made. Any portion of the apparatus and/or methods described herein may be combined in any combination, except mutually exclusive combinations. The embodiments described herein can include various combinations and/or sub-combinations of the functions, components, and/or features of the different embodiments described.

[1106] While various embodiments have been described above, it should be understood that they have been presented by way of example only, and not limitation. Where methods described above indicate certain events occurring in certain order, the ordering of certain events may be modified. Additionally, certain of the events may be performed concurrently in a parallel process when possible, as well as performed sequentially as described above.