Title:
MULTIPLE USER VIDEO IMAGING ARRAY
Document Type and Number:
WIPO Patent Application WO/2016/081666
Kind Code:
A1
Abstract:
Computationally implemented methods and systems include acquiring a request for particular image data that is part of a scene, transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data, receiving only the particular image data from the image sensor array, and transmitting the received particular image data to at least one requestor. In addition to the foregoing, other aspects are described in the claims, drawings, and text.

Inventors:
BRAV EHREN (US)
HANNIGAN RUSSELL (US)
RUTSCHMAN PHILIP (US)
JOHANSON 3RIC (US)
Application Number:
PCT/US2015/061439
Publication Date:
May 26, 2016
Filing Date:
November 18, 2015
Assignee:
ELWHA LLC (US)
International Classes:
H04N7/00
Foreign References:
US20120169842A12012-07-05
US20130308197A12013-11-21
Attorney, Agent or Firm:
COOK, Dale, R. (PLLC, Suite 717, 918 South Horton Street, Seattle WA, US)
Claims:
CLAIMS:

1. A computationally-implemented method, comprising:

acquiring a request for particular image data that is part of a scene;

transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data;

receiving only the particular image data from the image sensor array; and

transmitting the received particular image data to at least one requestor.

2. The computationally-implemented method of claim 1, wherein said acquiring a request for particular image data that is part of a scene comprises:

acquiring the request for particular image data of the scene that includes one or more images.

3. The computationally-implemented method of claim 1, wherein said acquiring a request for particular image data that is part of a scene comprises:

receiving the request for particular image data of the scene.

217. A computationally-implemented system, comprising:

circuitry for acquiring a request for particular image data that is part of a scene;

circuitry for transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data;

circuitry for receiving only the particular image data from the image sensor array; and

circuitry for transmitting the received particular image data to at least one requestor.

218. The computationally-implemented system of claim 217, wherein said circuitry for acquiring a request for particular image data that is part of a scene comprises:

circuitry for acquiring the request for particular image data of the scene that includes one or more images.

219. The computationally-implemented system of claim 217, wherein said circuitry for acquiring a request for particular image data that is part of a scene comprises:

circuitry for receiving the request for particular image data of the scene.

325. A computer program product, comprising:

a signal-bearing medium bearing:

one or more instructions for acquiring a request for particular image data that is part of a scene;

one or more instructions for transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data;

one or more instructions for receiving only the particular image data from the image sensor array; and

one or more instructions for transmitting the received particular image data to at least one requestor.

The following claims are from 1114-003-006-000000 (USAN: 14/791,160), specifically claims 267 through 269, and 358.

267. (NEW) A device, comprising:

a multiple image sensor based scene capturing module configured to capture a scene that includes one or more images, through use of more than one image sensor;

a scene particular portion selecting module configured to select a particular portion of the scene that includes at least one image, wherein the selected particular portion is smaller than the scene;

a selected particular portion transmitting module configured to transmit the selected particular portion from the scene to a remote location; and

a scene pixel de-emphasizing module configured to de-emphasize pixels from the scene that are not part of the selected particular portion of the scene.

268. (NEW) The device of claim 267, wherein said multiple image sensor based scene capturing module configured to capture a scene that includes one or more images, through use of more than one image sensor comprises:

a multiple image sensor based scene capturing module configured to capture a scene that includes one or more images, through use of an array of image sensors.

269. (NEW) The device of claim 267, wherein said multiple image sensor based scene capturing module configured to capture a scene that includes one or more images, through use of more than one image sensor comprises:

a multiple image sensor based scene capturing module configured to capture a scene that includes one or more images through use of two image sensors arranged side by side and angled toward each other.

358. (NEW) A device comprising:

an integrated circuit configured to purpose itself as a multiple image sensor based scene capturing module configured to capture a scene that includes one or more images, through use of more than one image sensor at a first time;

the integrated circuit configured to purpose itself as a scene particular portion selecting module configured to select a particular portion of the scene that includes at least one image, wherein the selected particular portion is smaller than the scene at a second time;

the integrated circuit configured to purpose itself as a selected particular portion transmitting module configured to transmit the selected particular portion from the scene to a remote location at a third time; and

the integrated circuit configured to purpose itself as a scene pixel de-emphasizing module configured to de-emphasize pixels from the scene that are not part of the selected particular portion of the scene at a fourth time.


Description:
Multiple User Video Imaging Array

Inventor(s):

Ehren Brav

Russell Hannigan

3ric Johanson

Phil Rutschman

CROSS-REFERENCE TO RELATED APPLICATIONS

Incorporation by Reference, Priority Date, and/or Benefits Under USC § 119(e) are Hereby Claimed for/through Applications Listed Herein, Such as:

Unless specifically excepted, all subject matter of the herein listed application(s) and of any and all parent, grandparent, great-grandparent, etc. applications of the herein listed applications, including any priority claims, is incorporated herein by reference to the extent such subject matter is not inconsistent herewith.

Unless specifically excepted, the present application is related to and/or claims the benefit of the earliest available effective filing date(s) from/through the application(s) if any, listed herein (e.g., claims earliest available priority dates for other than provisional patent applications, or claims benefits under 35 USC § 119(e) for provisional patent applications, for any and all parent, grandparent, great-grandparent, etc. applications of the Priority Application(s)).

[0010] A. The present application claims benefit of priority of United States Provisional Patent Application No. 62/081,559, entitled DEVICES, METHODS, AND SYSTEMS FOR INTEGRATING MULTIPLE USER VIDEO IMAGING ARRAY, naming Ehren Brav, Russell Hannigan, Roderick A. Hyde, Muriel Y. Ishikawa, 3ric Johanson, Jordin T. Kare, Tony S. Pan, Phillip Rutschman, Clarence T. Tegreene, Charles Whitmer, Lowell L. Wood, Jr., and Victoria Y.H. Wood as inventors, filed 18 NOVEMBER 2014 with attorney docket no. MUVIA-PROV1, which was filed within the twelve months preceding the filing date of the present application or is an application of which a currently co-pending priority application is entitled to the benefit of the filing date.

[0011] B. The present application claims benefit of priority of United States Provisional Patent Application No. 62/081,560, entitled DEVICES, METHODS, AND SYSTEMS FOR IMPLEMENTATION OF MULTIPLE USER VIDEO IMAGING ARRAY (MUVIA), naming Ehren Brav, Russell Hannigan, Roderick A. Hyde, Muriel Y. Ishikawa, 3ric Johanson, Jordin T. Kare, Tony S. Pan, Phillip Rutschman, Clarence T. Tegreene, Charles Whitmer, Lowell L. Wood, Jr., and Victoria Y.H. Wood as inventors, filed 18 NOVEMBER 2014 with attorney docket no. MUVIA-PROV2, which was filed within the twelve months preceding the filing date of the present application or is an application of which a currently co-pending priority application is entitled to the benefit of the filing date.

[0012] C. The present application claims benefit of priority of United States Provisional Patent Application No. 62/082,001, entitled DEVICES, METHODS, AND SYSTEMS FOR IMPLEMENTATION OF MULTIPLE USER ACCESS CAMERA ARRAY, naming Russell Hannigan, Ehren Brav, and 3ric Johanson as inventors, filed 19 NOVEMBER 2014 with attorney docket no. MUVIA-PROV3, which was filed within the twelve months preceding the filing date of the present application or is an application of which a currently co-pending priority application is entitled to the benefit of the filing date.

[0013] D. The present application claims benefit of priority of United States Provisional Patent Application No. 62/082,002, entitled DEVICES, METHODS, AND SYSTEMS FOR INTEGRATING MULTIPLE USER VIDEO IMAGING ARRAY, naming Russell Hannigan, Ehren Brav, and 3ric Johanson as inventors, filed 19 NOVEMBER 2014 with attorney docket no. MUVIA-PROV4, which was filed within the twelve months preceding the filing date of the present application or is an application of which a currently co-pending priority application is entitled to the benefit of the filing date.

[0014] E. The present application claims benefit of priority of United States Provisional Patent Application No. 62/156,162, entitled DEVICES, METHODS, AND SYSTEMS FOR INTEGRATING MULTIPLE USER VIDEO IMAGING ARRAY, naming Russell Hannigan, Ehren Brav, 3ric Johanson, and Phil Rutschman as inventors, filed 01 MAY 2015 with attorney docket no. MUVIA-PROV5, which was filed within the twelve months preceding the filing date of the present application or is an application of which a currently co-pending priority application is entitled to the benefit of the filing date.

[0015] F. For purposes of the USPTO extra-statutory requirements, the present application constitutes a continuation-in-part of United States Utility Patent Application No. 14/147,239, entitled DEVICES, METHODS AND SYSTEMS FOR VISUAL IMAGING ARRAY, naming Ehren Brav, Russell Hannigan, Roderick A. Hyde, Muriel Y. Ishikawa, 3ric Johanson, Jordin T. Kare, Tony S. Pan, Phil Rutschman, Clarence T. Tegreene, Charles Whitmer, Lowell L. Wood, Jr., and Victoria Y.H. Wood as inventors, filed 15 MAY 2015 with attorney docket no. 1114-003-001-000000, which is currently co-pending or is an application of which a currently co-pending application is entitled to the benefit of the filing date.

[0016] G. The present application claims benefit of priority of United States Provisional Patent Application No. 62/180,040, entitled DEVICES, METHODS, AND SYSTEMS FOR INTEGRATING MULTIPLE USER ACCESS CAMERA ARRAY, naming Russell Hannigan, Ehren Brav, 3ric Johanson, and Phil Rutschman as inventors, filed 15 JUNE 2015 with attorney docket no. 1114-003-001-PR0006, which was filed within the twelve months preceding the filing date of the present application or is an application of which a currently co-pending priority application is entitled to the benefit of the filing date.

[0017] H. For purposes of the USPTO extra-statutory requirements, the present application constitutes a continuation-in-part of United States Utility Patent Application No. 14/791,127, entitled DEVICES, METHODS, AND SYSTEMS FOR VISUAL IMAGING ARRAYS, naming Ehren Brav, Russell Hannigan, Roderick A. Hyde, Muriel Y. Ishikawa, 3ric Johanson, Jordin T. Kare, Tony S. Pan, Phil Rutschman, Clarence T. Tegreene, Charles Whitmer, Lowell L. Wood, Jr., and Victoria Y.H. Wood as inventors, filed 02 JULY 2015 with attorney docket no. 1114-003-002-000000, which is currently co-pending or is an application of which a currently co-pending application is entitled to the benefit of the filing date.

[0018] I. For purposes of the USPTO extra-statutory requirements, the present application constitutes a continuation-in-part of United States Utility Patent Application No. 14/791,160, entitled DEVICES, METHODS, AND SYSTEMS FOR VISUAL IMAGING ARRAYS, naming Ehren Brav, Russell Hannigan, Roderick A. Hyde, Muriel Y. Ishikawa, 3ric Johanson, Jordin T. Kare, Tony S. Pan, Phil Rutschman, Clarence T. Tegreene, Charles Whitmer, Lowell L. Wood, Jr., and Victoria Y.H. Wood as inventors, filed 02 JULY 2015 with attorney docket no. 1114-003-006-000000, which is currently co-pending or is an application of which a currently co-pending application is entitled to the benefit of the filing date.

[0019] J. For purposes of the USPTO extra-statutory requirements, the present application constitutes a continuation-in-part of United States Utility Patent Application No. 14/838,114, entitled DEVICES, METHODS, AND SYSTEMS FOR VISUAL IMAGING ARRAYS, naming Ehren Brav, Russell Hannigan, Roderick A. Hyde, Muriel Y. Ishikawa, 3ric Johanson, Jordin T. Kare, Tony S. Pan, Phil Rutschman, Clarence T. Tegreene, Charles Whitmer, Lowell L. Wood, Jr., and Victoria Y.H. Wood as inventors, filed 27 AUGUST 2015 with attorney docket no. 1114-003-003-000000, which is currently co-pending or is an application of which a currently co-pending application is entitled to the benefit of the filing date.

[0020] K. For purposes of the USPTO extra-statutory requirements, the present application constitutes a continuation-in-part of United States Utility Patent Application No. 14/838,128, entitled DEVICES, METHODS AND SYSTEMS FOR VISUAL IMAGING ARRAYS, naming Ehren Brav, Russell Hannigan, Roderick A. Hyde, Muriel Y. Ishikawa, 3ric Johanson, Jordin T. Kare, Tony S. Pan, Phil Rutschman, Clarence T. Tegreene, Charles Whitmer, Lowell L. Wood, Jr., and Victoria Y.H. Wood as inventors, filed 27 AUGUST 2015 with attorney docket no. 1114-003-007-000000, which is currently co-pending or is an application of which a currently co-pending application is entitled to the benefit of the filing date.

[0021] L. For purposes of the USPTO extra-statutory requirements, the present application constitutes a continuation-in-part of United States Utility Patent Application No. 14/941,181, entitled MULTIPLE USER VIDEO IMAGING ARRAY, naming Ehren Brav, Russell Hannigan, Roderick A. Hyde, Muriel Y. Ishikawa, 3ric Johanson, Jordin T. Kare, Tony S. Pan, Phil Rutschman, Clarence T. Tegreene, Charles Whitmer, Lowell L. Wood, Jr., and Victoria Y.H. Wood as inventors, filed 13 NOVEMBER 2015 with attorney docket no. 1114-003-009-000000, which is currently co-pending or is an application of which a currently co-pending application is entitled to the benefit of the filing date.

[0022] M. For purposes of the USPTO extra-statutory requirements, the present application constitutes a continuation-in-part of United States Utility Patent Application No. 14/945,342, entitled Devices, Methods and Systems for Multi-User Capable Visual Imaging Arrays, naming Ehren Brav, Russell Hannigan, 3ric Johanson, and Phil Rutschman as inventors, filed 19 NOVEMBER 2015 with attorney docket no. 1114-003-004-000000, which is currently co-pending or is an application of which a currently co-pending application is entitled to the benefit of the filing date.

[0002] Rights Reservations/No Waiver/No Admissions/Saving Language:

The United States Patent Office (USPTO) has published a notice to the effect that the USPTO's computer programs require that patent applicants reference both a serial number and indicate whether an application is a continuation, continuation-in-part, or divisional of a parent application. Stephen G. Kunin, Benefit of Prior-Filed Application, USPTO Official Gazette March 18, 2003. The USPTO further has provided forms for the Application Data Sheet which allow automatic loading of bibliographic data but which require identification of each application as a continuation, continuation-in-part, or divisional of a parent application. The present Applicant Entity (hereinafter "Applicant") has provided above a specific reference to the application(s) from which priority is being claimed as recited by statute. Applicant understands that the statute is unambiguous in its specific reference language and does not require either a serial number or any characterization, such as "continuation" or "continuation-in-part," for claiming priority to U.S. patent applications. Notwithstanding the foregoing, Applicant understands that the USPTO's computer programs have certain data entry requirements, and hence Applicant has provided designation(s) of a relationship between the present application and its parent application(s) as set forth above and in any ADS filed in this application, but expressly points out that such designation(s) are not to be construed in any way as any type of commentary and/or admission as to whether or not the present application contains any new matter in addition to the matter of its parent application(s).

If an Application Data Sheet (ADS) has been filed on the filing date of this application, it is incorporated by reference herein. Any applications claimed on the ADS for priority under 35 U.S.C. §§ 119, 120, 121, or 365(c), and any and all parent, grandparent, great- grandparent, etc. applications of such applications, are also incorporated by reference, including any priority claims made in those applications and any material incorporated by reference, to the extent such subject matter is not inconsistent herewith.

If the listings of applications provided above are inconsistent with the listings provided via an ADS, it is the intent of the Applicant to claim priority to each application that appears in the Priority Applications section of the ADS and to each application that appears in the Priority Applications section of this application.

United States case law is replete with patent applicants losing rights through unintended clerical errors that judges have held to break priority chains, and it seems likely that such breaks are a consequence of the nonstatutory rules regarding priority claiming which have been imposed for the administrative convenience of the PTO. There should be a way for the drafting attorney to craft language to "fail safe" on this point, and that is what is intended herein.

Specifically, Applicant hereby gives public notice that priority is being claimed for the earliest priority that could be achieved under the Statutes through the herein listed applications, and further through any parents, grandparents, great-grandparents, etc. of the herein listed applications. Furthermore, Applicant hereby gives public notice that incorporation by reference is made for the most inclusive subject matter that could be achieved under the Statutes through the herein listed applications, and further through any parents, grandparents, great-grandparents, etc. of the herein listed applications.

BACKGROUND

[0015] This application is related to video imaging arrays that may be capable of handling multiple users and that may transmit less data than they collect.

SUMMARY

[0016] In one or more various aspects, a method includes but is not limited to that which is illustrated in the drawings. In addition to the foregoing, other method aspects are described in the claims, drawings, and text forming a part of the disclosure set forth herein.

[0017] In one or more various aspects, a method includes, but is not limited to, acquiring a request for particular image data that is part of a scene, transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data, and receiving only the particular image data from the image sensor array. In addition to the foregoing, other method aspects are described in the claims, drawings, and text forming a part of the disclosure set forth herein.
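Purely as an illustrative sketch of the flow described above (and not part of the claimed disclosure), the request-mediation idea can be expressed in a few lines of Python; the names Region, ImageSensorArray, and serve_request are hypothetical stand-ins introduced here for illustration:

```python
# Illustrative sketch only: acquire a request for part of a scene, transmit
# it to a multi-sensor array that captures a scene larger than the request,
# receive back only the requested pixels, and hand them to the requestor.
# All names here are hypothetical, not taken from the disclosure.
from dataclasses import dataclass

@dataclass
class Region:
    x: int       # left edge within the scene, in pixels
    y: int       # top edge within the scene, in pixels
    width: int
    height: int

class ImageSensorArray:
    """Stand-in for an array of more than one image sensor covering a scene."""
    def __init__(self, scene_width: int, scene_height: int):
        # Placeholder scene: one brightness value per pixel.
        self.scene = [[0] * scene_width for _ in range(scene_height)]

    def capture_region(self, region: Region) -> list:
        # The array captures the whole scene, but only the requested
        # sub-region is returned, so less data leaves the array.
        return [row[region.x:region.x + region.width]
                for row in self.scene[region.y:region.y + region.height]]

def serve_request(array: ImageSensorArray, region: Region) -> list:
    # Acquire the request, transmit it to the array, and receive only the
    # particular image data; the caller then forwards it to the requestor.
    return array.capture_region(region)

# Example: request a 640x480 window out of a much larger captured scene.
portion = serve_request(ImageSensorArray(10000, 5000), Region(100, 200, 640, 480))
```

The point of the sketch is the asymmetry it makes concrete: the array captures the full scene, yet only the requested region crosses the link to the requestor.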

[0018] In one or more various aspects, one or more related systems may be implemented in machines, compositions of matter, or manufactures of systems, limited to patentable subject matter under 35 U.S.C. 101. The one or more related systems may include, but are not limited to, circuitry and/or programming for carrying out the herein-referenced method aspects. The circuitry and/or programming may be virtually any combination of hardware, software, and/or firmware configured to effect the herein-referenced method aspects depending upon the design choices of the system designer, and limited to patentable subject matter under 35 USC 101.

[0019] In one or more various aspects, a system includes, but is not limited to, means for acquiring a request for particular image data that is part of a scene, means for transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data, and means for receiving only the particular image data from the image sensor array. In addition to the foregoing, other system aspects are described in the claims, drawings, and text forming a part of the disclosure set forth herein.

[0020] In one or more various aspects, a system includes, but is not limited to, circuitry for acquiring a request for particular image data that is part of a scene, circuitry for transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data, and circuitry for receiving only the particular image data from the image sensor array. In addition to the foregoing, other system aspects are described in the claims, drawings, and text forming a part of the disclosure set forth herein.

[0021] In one or more various aspects, a computer program product, comprising a signal bearing medium, bearing one or more instructions including, but not limited to, one or more instructions for acquiring a request for particular image data that is part of a scene, one or more instructions for transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data, and one or more instructions for receiving only the particular image data from the image sensor array. In addition to the foregoing, other computer program product aspects are described in the claims, drawings, and text forming a part of the disclosure set forth herein.

[0022] In one or more various aspects, a device is defined by a computational language, such that the device comprises one or more interchained physical machines ordered for acquiring a request for particular image data that is part of a scene, one or more interchained physical machines ordered for transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data, and one or more interchained physical machines ordered for receiving only the particular image data from the image sensor array.

[0023] In addition to the foregoing, various other method and/or system and/or program product aspects are set forth and described in the teachings such as text (e.g., claims and/or detailed description) and/or drawings of the present disclosure.

[0024] The foregoing is a summary and thus may contain simplifications, generalizations, inclusions, and/or omissions of detail; consequently, those skilled in the art will appreciate that the summary is illustrative only and is NOT intended to be in any way limiting. Other aspects, features, and advantages of the devices and/or processes and/or other subject matter described herein will become apparent by reference to the detailed description, the corresponding drawings, and/or in the teachings set forth herein.

BRIEF DESCRIPTION OF THE FIGURES

This Roman Numeral Section, And the Corresponding Figures, Were Copied From a Pending United States Application into this PCT Application in View of Meeting Bar Date, So Each Figure Under this Roman Numeral Section Should Be Read "As If" Having Roman Numeral Denotation Sufficient to Distinguish from Other Figures Copied From Other Applications in View of Bar date, Such Clerical Issues to Be Cured by Subsequent Amendment

[0009] For a more complete understanding of embodiments, reference now is made to the following descriptions taken in connection with the accompanying drawings. The use of the same symbols in different drawings typically indicates similar or identical items, unless context dictates otherwise. The illustrative embodiments described in the detailed description, drawings, and claims are not meant to be limiting. Other embodiments may be utilized, and other changes may be made, without departing from the spirit or scope of the subject matter presented here. An absence of symbols in the drawings should not lead to any inference.

[0010] Figs. 1-33 describe various, non-limiting, exemplary embodiments of the multiple user video imaging array.

This Roman Numeral Section, And the Corresponding Figures, Were Copied From a Pending United States Application into this PCT Application in View of Meeting Bar Date, So Each Figure Under this Roman Numeral Section Should Be Read "As If" Having Roman Numeral Denotation Sufficient to Distinguish from Other Figures Copied From Other Applications in View of Bar date, Such Clerical Issues to Be Cured by Subsequent Amendment

[0009] There are no figures present in this application.

This Roman Numeral Section, And the Corresponding Figures, Were Copied From a Pending United States Application into this PCT Application in View of Meeting Bar Date, So Each Figure Under this Roman Numeral Section Should Be Read "As If" Having Roman Numeral Denotation Sufficient to Distinguish from Other Figures Copied From Other Applications in View of Bar date, Such Clerical Issues to Be Cured by Subsequent Amendment

[0009] For a more complete understanding of embodiments, reference now is made to the following descriptions taken in connection with the accompanying drawings. The use of the same symbols in different drawings typically indicates similar or identical items, unless context dictates otherwise. The illustrative embodiments described in the detailed description, drawings, and claims are not meant to be limiting. Other embodiments may be utilized, and other changes may be made, without departing from the spirit or scope of the subject matter presented here. An absence of symbols in the drawings should not lead to any inference.

[0010] Figs. 1-8 describe various, non-limiting, exemplary embodiments of the multiple user video imaging array.

This Roman Numeral Section, And the Corresponding Figures, Were Copied From a Pending United States Application into this PCT Application in View of Meeting Bar Date, So Each Figure Under this Roman Numeral Section Should Be Read "As If" Having Roman Numeral Denotation Sufficient to Distinguish from Other Figures Copied From Other Applications in View of Bar date, Such Clerical Issues to Be Cured by Subsequent Amendment

[0009] For a more complete understanding of embodiments, reference now is made to the following descriptions taken in connection with the accompanying drawings. The use of the same symbols in different drawings typically indicates similar or identical items, unless context dictates otherwise. The illustrative embodiments described in the detailed description, drawings, and claims are not meant to be limiting. Other embodiments may be utilized, and other changes may be made, without departing from the spirit or scope of the subject matter presented here. An absence of symbols in the drawings should not lead to any inference.

[0010] Figs. 1-24 describe various, non-limiting, exemplary embodiments of the multiple user video imaging array.

This Roman Numeral Section, And the Corresponding Figures, Were Copied From a Pending United States Application into this PCT Application in View of Meeting Bar Date, So Each Figure Under this Roman Numeral Section Should Be Read "As If" Having Roman Numeral Denotation Sufficient to Distinguish from Other Figures Copied From Other Applications in View of Bar date, Such Clerical Issues to Be Cured by Subsequent Amendment

[0009] For a more complete understanding of embodiments, reference now is made to the following descriptions taken in connection with the accompanying drawings. The use of the same symbols in different drawings typically indicates similar or identical items, unless context dictates otherwise. The illustrative embodiments described in the detailed description, drawings, and claims are not meant to be limiting. Other embodiments may be utilized, and other changes may be made, without departing from the spirit or scope of the subject matter presented here. An absence of symbols in the drawings should not lead to any inference.

[0010] Fig. 1, including Figs. 1A-1C, describes various, non-limiting, exemplary embodiments of a particular implementation of the multiple user video imaging array that includes a latency hiding message sequence.

This Roman Numeral Section, And the Corresponding Figures, Were Copied From a Pending United States Application into this PCT Application in View of Meeting Bar Date, So Each Figure Under this Roman Numeral Section Should Be Read "As If" Having Roman Numeral Denotation Sufficient to Distinguish from Other Figures Copied From Other Applications in View of Bar date, Such Clerical Issues to Be Cured by Subsequent Amendment

[0017] For a more complete understanding of embodiments, reference now is made to the following descriptions taken in connection with the accompanying drawings. The use of the same symbols in different drawings typically indicates similar or identical items, unless context dictates otherwise. The illustrative embodiments described in the detailed description, drawings, and claims are not meant to be limiting. Other embodiments may be utilized, and other changes may be made, without departing from the spirit or scope of the subject matter presented here.

[0018] Fig. 1, including Figs. 1A through 1AL, shows a high-level system diagram of one or more exemplary environments in which transactions and potential transactions may be carried out, according to one or more embodiments. Fig. 1 forms a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein when Figs. 1A through 1AL are stitched together in the manner shown in Fig. 1Z, which is reproduced below in table format.

[0019] In accordance with 37 C.F.R. § 1.84(h)(2), Fig. 1 shows "a view of a large machine or device in its entirety ... broken into partial views ... extended over several sheets" labeled Fig. 1A through Fig. 1AD (Sheets 1-30). The "views on two or more sheets form, in effect, a single complete view, [and] the views on the several sheets ... [are] so arranged that the complete figure can be assembled" from "partial views drawn on separate sheets ... linked edge to edge." Thus, in Fig. 1, the partial view Figs. 1A through 1AD are ordered alphabetically, by increasing in columns from left to right, and increasing in rows top to bottom, as shown in the following table:

Y-Pos. 1: (1,1): Fig. 1-A; (1,2): Fig. 1-B; (1,3): Fig. 1-C; (1,4): Fig. 1-D; (1,5): Fig. 1-E; (1,6): Fig. 1-F; (1,7): Fig. 1-G; (1,8): Fig. 1-H; (1,9): Fig. 1-I; (1,10): Fig. 1-J

Y-Pos. 2: (2,1): Fig. 1-K; (2,2): Fig. 1-L; (2,3): Fig. 1-M; (2,4): Fig. 1-N; (2,5): Fig. 1-O; (2,6): Fig. 1-P; (2,7): Fig. 1-Q; (2,8): Fig. 1-R; (2,9): Fig. 1-S; (2,10): Fig. 1-T

Y-Pos. 3: (3,1): Fig. 1-U; (3,2): Fig. 1-V; (3,3): Fig. 1-W; (3,4): Fig. 1-X; (3,5): Fig. 1-Y; (3,6): Fig. 1-Z; (3,7): Fig. 1-AA; (3,8): Fig. 1-AB; (3,9): Fig. 1-AC; (3,10): Fig. 1-AD

Y-Pos. 4: (4,1): Fig. 1-AE; (4,2): Fig. 1-AF; (4,3): Fig. 1-AG; (4,4): Fig. 1-AH; (4,5): Fig. 1-AI; (4,6): Fig. 1-AJ; (4,7): Fig. 1-AK; (4,8): Fig. 1-AL; (4,9): Fig. 1-AM; (4,10): Fig. 1-AN

Table 1. Table showing alignment of enclosed drawings to form partial schematic of one or more environments.

[0020] Fig. 1-A, when placed at position (1,1), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0021] Fig. 1-B, when placed at position (1,2), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0022] Fig. 1-C, when placed at position (1,3), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0023] Fig. 1-D, when placed at position (1,4), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0024] Fig. 1-E, when placed at position (1,5), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0025] Fig. 1-F, when placed at position (1,6), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0026] Fig. 1-G, when placed at position (1,7), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0027] Fig. 1-H, when placed at position (1,8), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0028] Fig. 1-I, when placed at position (1,9), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0029] Fig. 1-J, when placed at position (1,10), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0030] Fig. 1-K, when placed at position (2,1), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0031] Fig. 1-L, when placed at position (2,2), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0032] Fig. 1-M, when placed at position (2,3), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0033] Fig. 1-N, when placed at position (2,4), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0034] Fig. 1-O (which format is changed to avoid confusion as Figure "10" or "ten"), when placed at position (2,5), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0035] Fig. 1-P, when placed at position (2,6), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0036] Fig. 1-Q, when placed at position (2,7), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0037] Fig. 1-R, when placed at position (2,8), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0038] Fig. 1-S, when placed at position (2,9), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0039] Fig. 1-T, when placed at position (2,10), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0040] Fig. 1-U, when placed at position (3,1), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0041] Fig. 1-V, when placed at position (3,2), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0042] Fig. 1-W, when placed at position (3,3), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0043] Fig. 1-X, when placed at position (3,4), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0044] Fig. 1-Y, when placed at position (3,5), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0045] Fig. 1-Z, when placed at position (3,6), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0046] Fig. 1-AA, when placed at position (3,7), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0047] Fig. 1-AB, when placed at position (3,8), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0048] Fig. 1-AC, when placed at position (3,9), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0049] Fig. 1-AD, when placed at position (3,10), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0050] Fig. 1-AE, when placed at position (4,1), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0051] Fig. 1-AF, when placed at position (4,2), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0052] Fig. 1-AG, when placed at position (4,3), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0053] Fig. 1-AH, when placed at position (4,4), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0054] Fig. 1-AI, when placed at position (4,5), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0055] Fig. 1-AJ, when placed at position (4,6), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0056] Fig. 1-AK, when placed at position (4,7), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0057] Fig. 1-AL, when placed at position (4,8), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0058] Fig. 1-AM, when placed at position (4,9), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0059] Fig. 1-AN, when placed at position (4,10), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0060] Fig. 2 shows an exemplary operation 200, according to embodiments of the invention.

[0061] Fig. 3 shows an exemplary operation 300, according to embodiments of the invention.

[0062] Fig. 4 shows an exemplary operation 400, according to embodiments of the invention.

[0063] Fig. 5 shows an exemplary operation 500, according to embodiments of the invention.

[0064] Fig. 6A shows a first part of a latency hiding message sequence diagram, according to an embodiment of the invention.

[0065] Fig. 6B shows a second part of a latency hiding message sequence diagram, according to an embodiment of the invention.

[0066] Fig. 6C shows a third part of a latency hiding message sequence diagram, according to an embodiment of the invention.

This Roman Numeral Section, And the Corresponding Figures, Were Copied From a Pending United States Application into this PCT Application in View of Meeting Bar Date, So Each Figure Under this Roman Numeral Section Should Be Read "As If" Having Roman Numeral Denotation Sufficient to Distinguish from Other Figures Copied From Other Applications in View of Bar date, Such Clerical Issues to Be Cured by Subsequent Amendment

[0009] For a more complete understanding of embodiments, reference now is made to the following descriptions taken in connection with the accompanying drawings. The use of the same symbols in different drawings typically indicates similar or identical items, unless context dictates otherwise. The illustrative embodiments described in the detailed description, drawings, and claims are not meant to be limiting. Other embodiments may be utilized, and other changes may be made, without departing from the spirit or scope of the subject matter presented here. An absence of symbols in the drawings should not lead to any inference.

[0010] Figs. 1-8 describe various, non-limiting, exemplary embodiments of the multiple user video imaging array.

This Roman Numeral Section, And the Corresponding Figures, Were Copied From a Pending United States Application into this PCT Application in View of Meeting Bar Date, So Each Figure Under this Roman Numeral Section Should Be Read "As If" Having Roman Numeral Denotation Sufficient to Distinguish from Other Figures Copied From Other Applications in View of Bar date, Such Clerical Issues to Be Cured by Subsequent Amendment

[0024] For a more complete understanding of embodiments, reference now is made to the following descriptions taken in connection with the accompanying drawings. The use of the same symbols in different drawings typically indicates similar or identical items, unless context dictates otherwise. The illustrative embodiments described in the detailed description, drawings, and claims are not meant to be limiting. Other embodiments may be utilized, and other changes may be made, without departing from the spirit or scope of the subject matter presented here.

[0025] Fig. 1, including Figs. 1-A through 1-AN, shows a high-level system diagram of one or more exemplary environments in which transactions and potential transactions may be carried out, according to one or more embodiments. Fig. 1 forms a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein when Figs. 1-A through 1-AN are stitched together in the manner shown in Fig. 1-D, which is reproduced below in table format.

[0026] In accordance with 37 C.F.R. § 1.84(h)(2), Fig. 1 shows "a view of a large machine or device in its entirety ... broken into partial views ... extended over several sheets" labeled Fig. 1-A through Fig. 1-AN (Sheets 1-40). The "views on two or more sheets form, in effect, a single complete view, [and] the views on the several sheets ... [are] so arranged that the complete figure can be assembled" from "partial views drawn on separate sheets ... linked edge to edge." Thus, in Fig. 1, the partial view Figs. 1-A through 1-AN are ordered alphabetically, by increasing in columns from left to right, and increasing in rows top to bottom, as shown in the following table:

Y-Pos. 1: (1,1): Fig. 1-A; (1,2): Fig. 1-B; (1,3): Fig. 1-C; (1,4): Fig. 1-D; (1,5): Fig. 1-E; (1,6): Fig. 1-F; (1,7): Fig. 1-G; (1,8): Fig. 1-H; (1,9): Fig. 1-I; (1,10): Fig. 1-J

Y-Pos. 2: (2,1): Fig. 1-K; (2,2): Fig. 1-L; (2,3): Fig. 1-M; (2,4): Fig. 1-N; (2,5): Fig. 1-O; (2,6): Fig. 1-P; (2,7): Fig. 1-Q; (2,8): Fig. 1-R; (2,9): Fig. 1-S; (2,10): Fig. 1-T

Y-Pos. 3: (3,1): Fig. 1-U; (3,2): Fig. 1-V; (3,3): Fig. 1-W; (3,4): Fig. 1-X; (3,5): Fig. 1-Y; (3,6): Fig. 1-Z; (3,7): Fig. 1-AA; (3,8): Fig. 1-AB; (3,9): Fig. 1-AC; (3,10): Fig. 1-AD

Y-Pos. 4: (4,1): Fig. 1-AE; (4,2): Fig. 1-AF; (4,3): Fig. 1-AG; (4,4): Fig. 1-AH; (4,5): Fig. 1-AI; (4,6): Fig. 1-AJ; (4,7): Fig. 1-AK; (4,8): Fig. 1-AL; (4,9): Fig. 1-AM; (4,10): Fig. 1-AN

Table 1. Table showing alignment of enclosed drawings to form partial schematic of one or more environments.

[0027] In accordance with 37 C.F.R. § 1.84(h)(2), Fig. 1 is "... a view of a large machine or device in its entirety ... broken into partial views ... extended over several sheets ... [with] no loss in facility of understanding the view." The partial views drawn on the several sheets indicated in the above table are capable of being linked edge to edge, so that no partial view contains parts of another partial view. As here, "where views on two or more sheets form, in effect, a single complete view, the views on the several sheets are so arranged that the complete figure can be assembled without concealing any part of any of the views appearing on the various sheets." 37 C.F.R. § 1.84(h)(2).

[0028] It is noted that one or more of the partial views of the drawings may be blank, or may be absent of substantive elements (e.g., may show only lines, connectors, arrows, and/or the like). These drawings are included in order to assist readers of the application in assembling the single complete view from the partial sheet format required for submission by the USPTO, and, while their inclusion is not required and may be omitted in this or other applications without subtracting from the disclosed matter as a whole, their inclusion is proper, and should be considered and treated as intentional.

[0029] Fig. 1-A, when placed at position (1,1), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0030] Fig. 1-B, when placed at position (1,2), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0031] Fig. 1-C, when placed at position (1,3), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0032] Fig. 1-D, when placed at position (1,4), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0033] Fig. 1-E, when placed at position (1,5), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0034] Fig. 1-F, when placed at position (1,6), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0035] Fig. 1-G, when placed at position (1,7), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0036] Fig. 1-H, when placed at position (1,8), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0037] Fig. 1-I, when placed at position (1,9), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0038] Fig. 1-J, when placed at position (1,10), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0039] Fig. 1-K, when placed at position (2,1), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0040] Fig. 1-L, when placed at position (2,2), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0041] Fig. 1-M, when placed at position (2,3), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0042] Fig. 1-N, when placed at position (2,4), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0043] Fig. 1-O (which format is changed to avoid confusion as Figure "10" or "ten"), when placed at position (2,5), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0044] Fig. 1-P, when placed at position (2,6), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0045] Fig. 1-Q, when placed at position (2,7), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0046] Fig. 1-R, when placed at position (2,8), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0047] Fig. 1-S, when placed at position (2,9), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0048] Fig. 1-T, when placed at position (2,10), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0049] Fig. 1-U, when placed at position (3,1), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0050] Fig. 1-V, when placed at position (3,2), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0051] Fig. 1-W, when placed at position (3,3), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0052] Fig. 1-X, when placed at position (3,4), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0053] Fig. 1-Y, when placed at position (3,5), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0054] Fig. 1-Z, when placed at position (3,6), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0055] Fig. 1-AA, when placed at position (3,7), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0056] Fig. 1-AB, when placed at position (3,8), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0057] Fig. 1-AC, when placed at position (3,9), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0058] Fig. 1-AD, when placed at position (3,10), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0059] Fig. 1-AE, when placed at position (4,1), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0060] Fig. 1-AF, when placed at position (4,2), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0061] Fig. 1-AG, when placed at position (4,3), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0062] Fig. 1-AH, when placed at position (4,4), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0063] Fig. 1-AI, when placed at position (4,5), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0064] Fig. 1-AJ, when placed at position (4,6), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0065] Fig. 1-AK, when placed at position (4,7), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0066] Fig. 1-AL, when placed at position (4,8), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0067] Fig. 1-AM, when placed at position (4,9), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0068] Fig. 1-AN, when placed at position (4,10), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0069] Fig. 2A shows a high-level block diagram of an exemplary environment 200, including an image device 220, according to one or more embodiments.

[0070] Fig. 2B shows a high-level block diagram of a computing device, e.g., a device 220 operating in an exemplary environment 200, according to one or more embodiments.

[0071] Fig. 3 shows a high-level block diagram of an exemplary operation of a device 220A in an exemplary environment 300, according to embodiments.

[0072] Fig. 4 shows a high-level block diagram of an exemplary operation of an image device 420 in an exemplary environment 400, according to embodiments.

[0073] Fig. 5 shows a high-level block diagram of an exemplary operation of an image device 520 in an exemplary environment 500, according to embodiments.

[0074] Fig. 6, including Figs. 6A-6F, shows a particular perspective of a multiple image sensor based scene capturing module 252 of processing module 250 of image device 220 of Fig. 2B, according to an embodiment.

[0075] Fig. 7, including Figs. 7A-7C, shows a particular perspective of a scene particular portion selecting module 254 of processing module 250 of image device 220 of Fig. 2B, according to an embodiment.

[0076] Fig. 8, including Figs. 8A-8D, shows a particular perspective of a selected particular portion transmitting module 256 of processing module 250 of image device 220 of Fig. 2B, according to an embodiment.

[0077] Fig. 9, including Figs. 9A-9D, shows a particular perspective of a scene pixel de-emphasizing module 258 of processing module 250 of image device 220 of Fig. 2B, according to an embodiment.

[0078] Fig. 10 is a high-level logic flowchart of a process, e.g., operational flow 1000, including one or more operations of a capturing a scene that includes one or more images operation, a selecting a particular portion of the scene that includes at least one image operation, a transmitting only the selected particular portion from the scene operation, and a de-emphasizing pixels from the scene operation, according to an embodiment.
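As a rough, non-authoritative sketch of the four operations named in flow 1000 (capture, select, transmit, de-emphasize), the following Python illustrates one way the steps could compose; the function names, the side-by-side sensor layout, and zeroing as the de-emphasis strategy are all assumptions made here for illustration, not the claimed implementation:

```python
# Hypothetical sketch of operational flow 1000: capture a scene from more
# than one sensor, select a particular portion smaller than the scene,
# transmit only that portion, and de-emphasize the remaining pixels.
Image = list  # rows of pixel brightness values (list of lists of int)

def capture_scene(sensor_tiles: list) -> Image:
    # Stitch same-height tiles from each sensor side by side into one scene.
    return [sum((tile[r] for tile in sensor_tiles), [])
            for r in range(len(sensor_tiles[0]))]

def select_portion(scene: Image, x: int, y: int, w: int, h: int) -> Image:
    # The selected particular portion is smaller than the scene.
    return [row[x:x + w] for row in scene[y:y + h]]

def de_emphasize_outside(scene: Image, x: int, y: int, w: int, h: int) -> Image:
    # De-emphasize (here, zero) pixels that are not part of the selected
    # portion; discarding or down-weighting them would also fit the text.
    return [[px if (y <= r < y + h and x <= c < x + w) else 0
             for c, px in enumerate(row)]
            for r, row in enumerate(scene)]

def transmit(portion: Image) -> None:
    # Stand-in for sending only the selected portion to a remote location.
    print(f"transmitting {len(portion)}x{len(portion[0])} pixels")

# Usage: two 8x4 sensor tiles form a 16x4 scene; only a 6x2 window is sent.
scene = capture_scene([[[1] * 8 for _ in range(4)], [[2] * 8 for _ in range(4)]])
portion = select_portion(scene, x=4, y=1, w=6, h=2)
transmit(portion)
remainder = de_emphasize_outside(scene, x=4, y=1, w=6, h=2)
```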

[0079] Fig. 11A is a high-level logic flow chart of a process depicting alternate implementations of a capturing a scene that includes one or more images operation 1002, according to one or more embodiments.

[0080] Fig. 11B is a high-level logic flow chart of a process depicting alternate implementations of a capturing a scene that includes one or more images operation 1002, according to one or more embodiments.

[0081] Fig. 11C is a high-level logic flow chart of a process depicting alternate implementations of a capturing a scene that includes one or more images operation 1002, according to one or more embodiments.

[0082] Fig. 11D is a high-level logic flow chart of a process depicting alternate implementations of a capturing a scene that includes one or more images operation 1002, according to one or more embodiments.

[0083] Fig. 11E is a high-level logic flow chart of a process depicting alternate implementations of a capturing a scene that includes one or more images operation 1002, according to one or more embodiments.

[0084] Fig. 11F is a high-level logic flow chart of a process depicting alternate implementations of a capturing a scene that includes one or more images operation 1002, according to one or more embodiments.

[0085] Fig. 12A is a high-level logic flow chart of a process depicting alternate implementations of a selecting a particular portion of the scene that includes at least one image operation 1004, according to one or more embodiments.

[0086] Fig. 12B is a high-level logic flow chart of a process depicting alternate implementations of a selecting a particular portion of the scene that includes at least one image operation 1004, according to one or more embodiments.

[0087] Fig. 12C is a high-level logic flow chart of a process depicting alternate implementations of a selecting a particular portion of the scene that includes at least one image operation 1004, according to one or more embodiments.

[0088] Fig. 13A is a high-level logic flow chart of a process depicting alternate implementations of a transmitting only the selected particular portion from the scene operation 1006, according to one or more embodiments.

[0089] Fig. 13B is a high-level logic flow chart of a process depicting alternate implementations of a transmitting only the selected particular portion from the scene operation 1006, according to one or more embodiments.

[0090] Fig. 13C is a high-level logic flow chart of a process depicting alternate implementations of a transmitting only the selected particular portion from the scene operation 1006, according to one or more embodiments.

[0091] Fig. 14A is a high-level logic flow chart of a process depicting alternate implementations of a de-emphasizing pixels from the scene operation 1008, according to one or more embodiments.

[0092] Fig. 14B is a high-level logic flow chart of a process depicting alternate implementations of a de-emphasizing pixels from the scene operation 1008, according to one or more embodiments.

[0093] Fig. 14C is a high-level logic flow chart of a process depicting alternate implementations of a de-emphasizing pixels from the scene operation 1008, according to one or more embodiments.

[0094] Fig. 14D is a high-level logic flow chart of a process depicting alternate implementations of a de-emphasizing pixels from the scene operation 1008, according to one or more embodiments.

This Roman Numeral Section, And the Corresponding Figures, Were Copied From a Pending United States Application into this PCT Application in View of Meeting the Bar Date, So Each Figure Under this Roman Numeral Section Should Be Read As If Having a Roman Numeral Denotation Sufficient to Distinguish It From Figures Copied From Other Applications in View of the Bar Date, Such Clerical Issues to Be Cured by Subsequent Amendment

[0024] For a more complete understanding of embodiments, reference now is made to the following descriptions taken in connection with the accompanying drawings. The use of the same symbols in different drawings typically indicates similar or identical items, unless context dictates otherwise. The illustrative embodiments described in the detailed description, drawings, and claims are not meant to be limiting. Other embodiments may be utilized, and other changes may be made, without departing from the spirit or scope of the subject matter presented here.

[0025] Fig. 1, including Figs. 1-A through 1-AN, shows a high-level system diagram of one or more exemplary environments in which transactions and potential transactions may be carried out, according to one or more embodiments. Fig. 1 forms a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein when Figs. 1-A through 1-AN are stitched together in the manner shown in Fig. 1-D, which is reproduced below in table format.

[0026] In accordance with 37 C.F.R. § 1.84(h)(2), Fig. 1 shows "a view of a large machine or device in its entirety ... broken into partial views ... extended over several sheets" labeled Fig. 1-A through Fig. 1-AN (Sheets 1-40). The "views on two or more sheets form, in effect, a single complete view, [and] the views on the several sheets ... [are] so arranged that the complete figure can be assembled" from "partial views drawn on separate sheets ... linked edge to edge." Thus, in Fig. 1, the partial view Figs. 1-A through 1-AN are ordered alphabetically, increasing in columns from left to right and in rows from top to bottom, as shown in the following table:

Pos.      X-Pos 1      X-Pos 2      X-Pos 3      X-Pos 4      X-Pos 5      X-Pos 6      X-Pos 7      X-Pos 8      X-Pos 9      X-Pos 10
Y-Pos. 1  (1,1): 1-A   (1,2): 1-B   (1,3): 1-C   (1,4): 1-D   (1,5): 1-E   (1,6): 1-F   (1,7): 1-G   (1,8): 1-H   (1,9): 1-I   (1,10): 1-J
Y-Pos. 2  (2,1): 1-K   (2,2): 1-L   (2,3): 1-M   (2,4): 1-N   (2,5): 1-O   (2,6): 1-P   (2,7): 1-Q   (2,8): 1-R   (2,9): 1-S   (2,10): 1-T
Y-Pos. 3  (3,1): 1-U   (3,2): 1-V   (3,3): 1-W   (3,4): 1-X   (3,5): 1-Y   (3,6): 1-Z   (3,7): 1-AA  (3,8): 1-AB  (3,9): 1-AC  (3,10): 1-AD
Y-Pos. 4  (4,1): 1-AE  (4,2): 1-AF  (4,3): 1-AG  (4,4): 1-AH  (4,5): 1-AI  (4,6): 1-AJ  (4,7): 1-AK  (4,8): 1-AL  (4,9): 1-AM  (4,10): 1-AN

Table 1. Alignment of the enclosed drawings to form the partial schematic of one or more environments; each cell gives position (Y-Pos, X-Pos): partial view of Fig. 1.
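By way of a non-limiting illustration only, the alphabetical, row-major ordering recorded in Table 1 can be reproduced programmatically; the snippet below, with the hypothetical helper spreadsheet_label, simply maps each grid position to its partial-view label and is an aid to readers reassembling the sheets, not part of the application.

    # Illustrative only: reproduce Table 1's alphabetical grid ordering of
    # the partial views, Figs. 1-A (position (1,1)) through 1-AN ((4,10)).
    from string import ascii_uppercase

    def spreadsheet_label(n: int) -> str:
        """Convert 1 -> 'A', 26 -> 'Z', 27 -> 'AA', ... (bijective base 26)."""
        label = ""
        while n > 0:
            n, rem = divmod(n - 1, 26)
            label = ascii_uppercase[rem] + label
        return label

    ROWS, COLS = 4, 10
    grid = {(y, x): "Fig. 1-" + spreadsheet_label((y - 1) * COLS + x)
            for y in range(1, ROWS + 1) for x in range(1, COLS + 1)}

    assert grid[(1, 1)] == "Fig. 1-A"
    assert grid[(2, 5)] == "Fig. 1-O"
    assert grid[(4, 10)] == "Fig. 1-AN"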

[0027] In accordance with 37 C.F.R. § 1.84(h)(2), Fig. 1 is "... a view of a large machine or device in its entirety ... broken into partial views ... extended over several sheets ... [with] no loss in facility of understanding the view." The partial views drawn on the several sheets indicated in the above table are capable of being linked edge to edge, so that no partial view contains parts of another partial view. As here, "where views on two or more sheets form, in effect, a single complete view, the views on the several sheets are so arranged that the complete figure can be assembled without concealing any part of any of the views appearing on the various sheets." 37 C.F.R. § 1.84(h)(2).

[0028] It is noted that one or more of the partial views of the drawings may be blank, or may be absent of substantive elements (e.g., may show only lines, connectors, arrows, and/or the like). These drawings are included in order to assist readers of the application in assembling the single complete view from the partial sheet format required for submission by the USPTO, and, while their inclusion is not required and may be omitted in this or other applications without subtracting from the disclosed matter as a whole, their inclusion is proper, and should be considered and treated as intentional.

[0029] Fig. 1-A, when placed at position (1,1), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0030] Fig. 1-B, when placed at position (1,2), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0031] Fig. 1-C, when placed at position (1,3), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0032] Fig. 1-D, when placed at position (1,4), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0033] Fig. 1-E, when placed at position (1,5), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0034] Fig. 1-F, when placed at position (1,6), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0035] Fig. 1-G, when placed at position (1,7), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0036] Fig. 1-H, when placed at position (1,8), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0037] Fig. 1-I, when placed at position (1,9), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0038] Fig. 1-J, when placed at position (1,10), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0039] Fig. 1-K, when placed at position (2,1), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0040] Fig. 1-L, when placed at position (2,2), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0041] Fig. 1-M, when placed at position (2,3), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0042] Fig. 1-N, when placed at position (2,4), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0043] Fig. 1-O (the format of which is changed to avoid confusion with Figure "10" or "ten"), when placed at position (2,5), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0044] Fig. 1-P, when placed at position (2,6), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0045] Fig. 1-Q, when placed at position (2,7), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0046] Fig. 1-R, when placed at position (2,8), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0047] Fig. 1-S, when placed at position (2,9), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0048] Fig. 1-T, when placed at position (2,10), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0049] Fig. 1-U, when placed at position (3,1), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0050] Fig. 1-V, when placed at position (3,2), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0051] Fig. 1-W, when placed at position (3,3), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0052] Fig. 1-X, when placed at position (3,4), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0053] Fig. 1-Y, when placed at position (3,5), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0054] Fig. 1-Z, when placed at position (3,6), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0055] Fig. 1-AA, when placed at position (3,7), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0056] Fig. 1-AB, when placed at position (3,8), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0057] Fig. 1-AC, when placed at position (3,9), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0058] Fig. 1-AD, when placed at position (3,10), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0059] Fig. 1-AE, when placed at position (4,1), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0060] Fig. 1-AF, when placed at position (4,2), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0061] Fig. 1-AG, when placed at position (4,3), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0062] Fig. 1-AH, when placed at position (4,4), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0063] Fig. 1-AI, when placed at position (4,5), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0064] Fig. 1-AJ, when placed at position (4,6), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0065] Fig. 1-AK, when placed at position (4,7), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0066] Fig. 1-AL, when placed at position (4,8), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0067] Fig. 1-AM, when placed at position (4,9), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0068] Fig. 1-AN, when placed at position (4,10), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0069] Fig. 2A shows a high-level block diagram of an exemplary environment 200, including an image device 220, according to one or more embodiments.

[0070] Fig. 2B shows a high-level block diagram of a computing device, e.g., a device 220 operating in an exemplary environment 200, according to one or more embodiments.

[0071] Fig. 3 shows a high-level block diagram of an exemplary operation of a device 220A in an exemplary environment 300, according to embodiments.

[0072] Fig. 4 shows a high-level block diagram of an exemplary operation of an image device 420 in an exemplary environment 400, according to embodiments.

[0073] Fig. 5 shows a high-level block diagram of an exemplary operation of an image device 520 in an exemplary environment 500, according to embodiments.

[0074] Fig. 6, including Figs. 6A-6F, shows a particular perspective of a multiple image sensor based scene capturing module 252 of processing module 250 of image device 220 of Fig. 2B, according to an embodiment.

[0075] Fig. 7, including Figs. 7A-7C, shows a particular perspective of a scene particular portion selecting module 254 of processing module 250 of image device 220 of Fig. 2B, according to an embodiment.

[0076] Fig. 8, including Figs. 8A-8D, shows a particular perspective of a selected particular portion transmitting module 256 of processing module 250 of image device 220 of Fig. 2B, according to an embodiment.

[0077] Fig. 9, including Figs. 9A-9D, shows a particular perspective of a scene pixel de-emphasizing module 258 of processing module 250 of image device 220 of Fig. 2B, according to an embodiment.

[0078] Fig. 10 is a high-level logic flowchart of a process, e.g., operational flow 1000, including one or more operations of a capturing a scene that includes one or more images operation, a selecting a particular portion of the scene that includes at least one image operation, a transmitting only the selected particular portion from the scene operation, and a de-emphasizing pixels from the scene operation, according to an embodiment.

[0079] Fig. 11A is a high-level logic flow chart of a process depicting alternate implementations of a capturing a scene that includes one or more images operation 1002, according to one or more embodiments.

[0080] Fig. 11B is a high-level logic flow chart of a process depicting alternate implementations of a capturing a scene that includes one or more images operation 1002, according to one or more embodiments.

[0081] Fig. 11C is a high-level logic flow chart of a process depicting alternate implementations of a capturing a scene that includes one or more images operation 1002, according to one or more embodiments.

[0082] Fig. 11D is a high-level logic flow chart of a process depicting alternate implementations of a capturing a scene that includes one or more images operation 1002, according to one or more embodiments.

[0083] Fig. 11E is a high-level logic flow chart of a process depicting alternate implementations of a capturing a scene that includes one or more images operation 1002, according to one or more embodiments.

[0084] Fig. 11F is a high-level logic flow chart of a process depicting alternate implementations of a capturing a scene that includes one or more images operation 1002, according to one or more embodiments.

[0085] Fig. 12A is a high-level logic flow chart of a process depicting alternate implementations of a selecting a particular portion of the scene that includes at least one image operation 1004, according to one or more embodiments.

[0086] Fig. 12B is a high-level logic flow chart of a process depicting alternate implementations of a selecting a particular portion of the scene that includes at least one image operation 1004, according to one or more embodiments.

[0087] Fig. 12C is a high-level logic flow chart of a process depicting alternate implementations of a selecting a particular portion of the scene that includes at least one image operation 1004, according to one or more embodiments.

[0088] Fig. 13A is a high-level logic flow chart of a process depicting alternate implementations of a transmitting only the selected particular portion from the scene operation 1006, according to one or more embodiments.

[0089] Fig. 13B is a high-level logic flow chart of a process depicting alternate implementations of a transmitting only the selected particular portion from the scene operation 1006, according to one or more embodiments.

[0090] Fig. 13C is a high-level logic flow chart of a process depicting alternate implementations of a transmitting only the selected particular portion from the scene operation 1006, according to one or more embodiments.

[0091] Fig. 14A is a high-level logic flow chart of a process depicting alternate implementations of a de-emphasizing pixels from the scene operation 1008, according to one or more embodiments.

[0092] Fig. 14B is a high-level logic flow chart of a process depicting alternate implementations of a de-emphasizing pixels from the scene operation 1008, according to one or more embodiments.

[0093] Fig. 14C is a high-level logic flow chart of a process depicting alternate implementations of a de-emphasizing pixels from the scene operation 1008, according to one or more embodiments.

[0094] Fig. 14D is a high-level logic flow chart of a process depicting alternate implementations of a de-emphasizing pixels from the scene operation 1008, according to one or more embodiments.

This Roman Numeral Section, And the Corresponding Figures, Were Copied From a Pending United States Application into this PCT Application in View of Meeting the Bar Date, So Each Figure Under this Roman Numeral Section Should Be Read As If Having a Roman Numeral Denotation Sufficient to Distinguish It From Figures Copied From Other Applications in View of the Bar Date, Such Clerical Issues to Be Cured by Subsequent Amendment

[0025] For a more complete understanding of embodiments, reference now is made to the following descriptions taken in connection with the accompanying drawings. The use of the same symbols in different drawings typically indicates similar or identical items, unless context dictates otherwise. The illustrative embodiments described in the detailed description, drawings, and claims are not meant to be limiting. Other embodiments may be utilized, and other changes may be made, without departing from the spirit or scope of the subject matter presented here.

[0026] Fig. 1, including Figs. 1-A through 1-AN, shows a high-level system diagram of one or more exemplary environments in which transactions and potential transactions may be carried out, according to one or more embodiments. Fig. 1 forms a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein when Figs. 1-A through 1-AN are stitched together in the manner shown in Fig. 1-D, which is reproduced below in table format.

[0027] In accordance with 37 C.F.R. § 1.84(h)(2), Fig. 1 shows "a view of a large machine or device in its entirety ... broken into partial views ... extended over several sheets" labeled Fig. 1-A through Fig. 1-AN (Sheets 1-40). The "views on two or more sheets form, in effect, a single complete view, [and] the views on the several sheets ... [are] so arranged that the complete figure can be assembled" from "partial views drawn on separate sheets ... linked edge to edge." Thus, in Fig. 1, the partial view Figs. 1-A through 1-AN are ordered alphabetically, increasing in columns from left to right and in rows from top to bottom, as shown in the following table:

Pos.      X-Pos 1      X-Pos 2      X-Pos 3      X-Pos 4      X-Pos 5      X-Pos 6      X-Pos 7      X-Pos 8      X-Pos 9      X-Pos 10
Y-Pos. 1  (1,1): 1-A   (1,2): 1-B   (1,3): 1-C   (1,4): 1-D   (1,5): 1-E   (1,6): 1-F   (1,7): 1-G   (1,8): 1-H   (1,9): 1-I   (1,10): 1-J
Y-Pos. 2  (2,1): 1-K   (2,2): 1-L   (2,3): 1-M   (2,4): 1-N   (2,5): 1-O   (2,6): 1-P   (2,7): 1-Q   (2,8): 1-R   (2,9): 1-S   (2,10): 1-T
Y-Pos. 3  (3,1): 1-U   (3,2): 1-V   (3,3): 1-W   (3,4): 1-X   (3,5): 1-Y   (3,6): 1-Z   (3,7): 1-AA  (3,8): 1-AB  (3,9): 1-AC  (3,10): 1-AD
Y-Pos. 4  (4,1): 1-AE  (4,2): 1-AF  (4,3): 1-AG  (4,4): 1-AH  (4,5): 1-AI  (4,6): 1-AJ  (4,7): 1-AK  (4,8): 1-AL  (4,9): 1-AM  (4,10): 1-AN

Table 1. Alignment of the enclosed drawings to form the partial schematic of one or more environments; each cell gives position (Y-Pos, X-Pos): partial view of Fig. 1.

[0028] In accordance with 37 C.F.R. § 1.84(h)(2), Fig. 1 is "... a view of a large machine or device in its entirety ... broken into partial views ... extended over several sheets ... [with] no loss in facility of understanding the view." The partial views drawn on the several sheets indicated in the above table are capable of being linked edge to edge, so that no partial view contains parts of another partial view. As here, "where views on two or more sheets form, in effect, a single complete view, the views on the several sheets are so arranged that the complete figure can be assembled without concealing any part of any of the views appearing on the various sheets." 37 C.F.R. § 1.84(h)(2).

[0029] It is noted that one or more of the partial views of the drawings may be blank, or may be absent of substantive elements (e.g., may show only lines, connectors, arrows, and/or the like). These drawings are included in order to assist readers of the application in assembling the single complete view from the partial sheet format required for submission by the USPTO, and, while their inclusion is not required and may be omitted in this or other applications without subtracting from the disclosed matter as a whole, their inclusion is proper, and should be considered and treated as intentional.

[0030] Fig. 1-A, when placed at position (1,1), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0031] Fig. 1-B, when placed at position (1,2), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0032] Fig. 1-C, when placed at position (1,3), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0033] Fig. 1-D, when placed at position (1,4), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0034] Fig. 1-E, when placed at position (1,5), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0035] Fig. 1-F, when placed at position (1,6), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0036] Fig. 1-G, when placed at position (1,7), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0037] Fig. 1-H, when placed at position (1,8), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0038] Fig. 1-I, when placed at position (1,9), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0039] Fig. 1-J, when placed at position (1,10), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0040] Fig. 1-K, when placed at position (2,1), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0041] Fig. 1-L, when placed at position (2,2), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0042] Fig. 1-M, when placed at position (2,3), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0043] Fig. 1-N, when placed at position (2,4), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0044] Fig. 1-O (the format of which is changed to avoid confusion with Figure "10" or "ten"), when placed at position (2,5), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0045] Fig. 1-P, when placed at position (2,6), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0046] Fig. 1-Q, when placed at position (2,7), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0047] Fig. 1-R, when placed at position (2,8), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0048] Fig. 1-S, when placed at position (2,9), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0049] Fig. 1-T, when placed at position (2,10), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0050] Fig. 1-U, when placed at position (3,1), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0051] Fig. 1-V, when placed at position (3,2), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0052] Fig. 1-W, when placed at position (3,3), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0053] Fig. 1-X, when placed at position (3,4), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0054] Fig. 1-Y, when placed at position (3,5), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0055] Fig. 1-Z, when placed at position (3,6), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0056] Fig. 1-AA, when placed at position (3,7), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0057] Fig. 1-AB, when placed at position (3,8), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0058] Fig. 1-AC, when placed at position (3,9), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0059] Fig. 1-AD, when placed at position (3,10), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0060] Fig. 1-AE, when placed at position (4,1), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0061] Fig. 1-AF, when placed at position (4,2), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0062] Fig. 1-AG, when placed at position (4,3), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0063] Fig. 1-AH, when placed at position (4,4), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0064] Fig. 1-AI, when placed at position (4,5), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0065] Fig. 1-AJ, when placed at position (4,6), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0066] Fig. 1-AK, when placed at position (4,7), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0067] Fig. 1-AL, when placed at position (4,8), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0068] Fig. 1-AM, when placed at position (4,9), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0069] Fig. 1-AN, when placed at position (4,10), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0070] Fig. 2A shows a high-level block diagram of an exemplary environment 200, including an image device 220, according to one or more embodiments.

[0071] Fig. 2B shows a high-level block diagram of a computing device, e.g., a server device 230 operating in an exemplary environment 200, according to one or more embodiments.

[0072] Fig. 3A shows a high-level block diagram of an exemplary operation of a device 220A in an exemplary environment 300A, according to embodiments.

[0073] Fig. 3B shows a high-level block diagram of an exemplary operation of a device 220B in an exemplary environment 300B, according to embodiments.

[0074] Fig. 3C shows a high-level block diagram of an exemplary operation of a device 220C in an exemplary environment 300C, according to embodiments.

[0075] Fig. 4A shows a high-level block diagram of an exemplary operation of an image device 420 in an exemplary environment 400A, according to embodiments.

[0076] Fig. 4B shows a high-level block diagram of an exemplary operation of an image device 420B in an exemplary environment 400B, according to embodiments.

[0077] Fig. 5A shows a high-level block diagram of an exemplary operation of a server device 530A in an exemplary environment 500A, according to embodiments.

[0078] Fig. 5B shows a high-level block diagram of an exemplary operation of a server device 530B in an exemplary environment 500B, according to embodiments.

[0079] Fig. 6, including Figs. 6A-6G, shows a particular perspective of a request for particular image data that is part of a scene acquiring module 252 of processing module 250 of server device 230 of Fig. 2B, according to an embodiment.

[0080] Fig. 7, including Figs. 7A-7E, shows a particular perspective of a request for particular image data transmitting to an image sensor array module 254 of processing module 250 of server device 230 of Fig. 2B, according to an embodiment.

[0081] Fig. 8, including Figs. 8A-8C, shows a particular perspective of a particular image data from the image sensor array exclusive receiving module 256 of processing module 250 of server device 230 of Fig. 2B, according to an embodiment.

[0082] Fig. 9, including Figs. 9A-9E, shows a particular perspective of a received particular image data transmitting to at least one requestor module 258 of processing module 250 of server device 230 of Fig. 2B, according to an embodiment.

[0083] Fig. 10 is a high-level logic flowchart of a process, e.g., operational flow 1000, including one or more operations of an acquiring a request for particular image data that is part of a scene operation, a transmitting the request for the particular image data to an image sensor array operation, a receiving only the particular image data from the image sensor array operation, and a transmitting the received particular image data to at least one requestor operation, according to an embodiment.
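By way of a non-limiting illustration only, the four operations of this server-side operational flow 1000 may be sketched as a single request handler; ImageRequest, handle_request, and the duck-typed link objects below are hypothetical stand-ins for exposition, not interfaces disclosed in the application.

    # Hypothetical sketch of the server-side operational flow 1000 (Fig. 10).
    # ImageRequest and the send/receive link objects are illustrative stand-ins.
    from dataclasses import dataclass
    from typing import Tuple

    @dataclass
    class ImageRequest:
        requestor_id: str
        region: Tuple[int, int, int, int]  # (top, left, height, width)

    def handle_request(request: ImageRequest, sensor_array_link, requestor_link):
        # Operation 1002: acquire a request for particular image data that
        # is part of a scene (e.g., received from a user device).
        # Operation 1004: transmit the request to the image sensor array,
        # whose multiple sensors capture a scene larger than the request.
        sensor_array_link.send(request.region)
        # Operation 1006: receive only the particular image data; the rest
        # of the captured scene is never sent over this link.
        particular_image_data = sensor_array_link.receive()
        # Operation 1008: transmit the received particular image data to at
        # least one requestor.
        requestor_link.send(request.requestor_id, particular_image_data)

The bandwidth saving implied by the flow rests on operation 1006: only the particular image data, never the full captured scene, crosses the link from the image sensor array.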

[0084] Fig. 11A is a high-level logic flow chart of a process depicting alternate implementations of an acquiring a request for particular image data that is part of a scene operation 1002, according to one or more embodiments.

[0085] Fig. 11B is a high-level logic flow chart of a process depicting alternate implementations of an acquiring a request for particular image data that is part of a scene operation 1002, according to one or more embodiments.

[0086] Fig. 11C is a high-level logic flow chart of a process depicting alternate implementations of an acquiring a request for particular image data that is part of a scene operation 1002, according to one or more embodiments.

[0087] Fig. 11D is a high-level logic flow chart of a process depicting alternate implementations of an acquiring a request for particular image data that is part of a scene operation 1002, according to one or more embodiments.

[0088] Fig. 11E is a high-level logic flow chart of a process depicting alternate implementations of an acquiring a request for particular image data that is part of a scene operation 1002, according to one or more embodiments.

[0089] Fig. 11F is a high-level logic flow chart of a process depicting alternate implementations of an acquiring a request for particular image data that is part of a scene operation 1002, according to one or more embodiments.

[0090] Fig. 11G is a high-level logic flow chart of a process depicting alternate implementations of an acquiring a request for particular image data that is part of a scene operation 1002, according to one or more embodiments.

[0091] Fig. 12A is a high-level logic flow chart of a process depicting alternate implementations of a transmitting the request for the particular image data to an image sensor array operation 1004, according to one or more embodiments.

[0092] Fig. 12B is a high-level logic flow chart of a process depicting alternate implementations of a transmitting the request for the particular image data to an image sensor array operation 1004, according to one or more embodiments.

[0093] Fig. 12C is a high-level logic flow chart of a process depicting alternate implementations of a transmitting the request for the particular image data to an image sensor array operation 1004, according to one or more embodiments.

[0094] Fig. 12D is a high-level logic flow chart of a process depicting alternate implementations of a transmitting the request for the particular image data to an image sensor array operation 1004, according to one or more embodiments.

[0095] Fig. 12E is a high-level logic flow chart of a process depicting alternate implementations of a transmitting the request for the particular image data to an image sensor array operation 1004, according to one or more embodiments.

[0096] Fig. 13A is a high-level logic flow chart of a process depicting alternate implementations of a receiving only the particular image data from the image sensor array operation 1006, according to one or more embodiments.

[0097] Fig. 13B is a high-level logic flow chart of a process depicting alternate implementations of a receiving only the particular image data from the image sensor array operation 1006, according to one or more embodiments.

[0098] Fig. 13C is a high-level logic flow chart of a process depicting alternate implementations of a receiving only the particular image data from the image sensor array operation 1006, according to one or more embodiments.

[0099] Fig. 14A is a high-level logic flow chart of a process depicting alternate implementations of a transmitting the received particular image data to at least one requestor operation 1008, according to one or more embodiments.

[00100] Fig. 14B is a high-level logic flow chart of a process depicting alternate implementations of a transmitting the received particular image data to at least one requestor operation 1008, according to one or more embodiments.

[00101] Fig. 14C is a high-level logic flow chart of a process depicting alternate implementations of a transmitting the received particular image data to at least one requestor operation 1008, according to one or more embodiments.

[00102] Fig. 14D is a high-level logic flow chart of a process depicting alternate implementations of a transmitting the received particular image data to at least one requestor operation 1008, according to one or more embodiments.

[00103] Fig. 14E is a high-level logic flow chart of a process depicting alternate implementations of a transmitting the received particular image data to at least one requestor operation 1008, according to one or more embodiments.

This Roman Numeral Section, And the Corresponding Figures, Were Copied From a Pending United States Application into this PCT Application in View of Meeting the Bar Date, So Each Figure Under this Roman Numeral Section Should Be Read As If Having a Roman Numeral Denotation Sufficient to Distinguish It From Figures Copied From Other Applications in View of the Bar Date, Such Clerical Issues to Be Cured by Subsequent Amendment

[0025] For a more complete understanding of embodiments, reference now is made to the following descriptions taken in connection with the accompanying drawings. The use of the same symbols in different drawings typically indicates similar or identical items, unless context dictates otherwise. The illustrative embodiments described in the detailed description, drawings, and claims are not meant to be limiting. Other embodiments may be utilized, and other changes may be made, without departing from the spirit or scope of the subject matter presented here.

[0026] Fig. 1, including Figs. 1-A through 1-AN, shows a high-level system diagram of one or more exemplary environments in which transactions and potential transactions may be carried out, according to one or more embodiments. Fig. 1 forms a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein when Figs. 1-A through 1-AN are stitched together in the manner shown in Fig. 1-D, which is reproduced below in table format.

[0027] In accordance with 37 C.F.R. § 1.84(h)(2), Fig. 1 shows "a view of a large machine or device in its entirety ... broken into partial views ... extended over several sheets" labeled Fig. 1-A through Fig. 1-AN (Sheets 1-40). The "views on two or more sheets form, in effect, a single complete view, [and] the views on the several sheets ... [are] so arranged that the complete figure can be assembled" from "partial views drawn on separate sheets ... linked edge to edge." Thus, in Fig. 1, the partial view Figs. 1-A through 1-AN are ordered alphabetically, increasing in columns from left to right and in rows from top to bottom, as shown in the following table:

Pos.      X-Pos 1      X-Pos 2      X-Pos 3      X-Pos 4      X-Pos 5      X-Pos 6      X-Pos 7      X-Pos 8      X-Pos 9      X-Pos 10
Y-Pos. 1  (1,1): 1-A   (1,2): 1-B   (1,3): 1-C   (1,4): 1-D   (1,5): 1-E   (1,6): 1-F   (1,7): 1-G   (1,8): 1-H   (1,9): 1-I   (1,10): 1-J
Y-Pos. 2  (2,1): 1-K   (2,2): 1-L   (2,3): 1-M   (2,4): 1-N   (2,5): 1-O   (2,6): 1-P   (2,7): 1-Q   (2,8): 1-R   (2,9): 1-S   (2,10): 1-T
Y-Pos. 3  (3,1): 1-U   (3,2): 1-V   (3,3): 1-W   (3,4): 1-X   (3,5): 1-Y   (3,6): 1-Z   (3,7): 1-AA  (3,8): 1-AB  (3,9): 1-AC  (3,10): 1-AD
Y-Pos. 4  (4,1): 1-AE  (4,2): 1-AF  (4,3): 1-AG  (4,4): 1-AH  (4,5): 1-AI  (4,6): 1-AJ  (4,7): 1-AK  (4,8): 1-AL  (4,9): 1-AM  (4,10): 1-AN

Table 1. Alignment of the enclosed drawings to form the partial schematic of one or more environments; each cell gives position (Y-Pos, X-Pos): partial view of Fig. 1.

[0028] In accordance with 37 C.F.R. § 1.84(h)(2), Fig. 1 is "... a view of a large machine or device in its entirety ... broken into partial views ... extended over several sheets ... [with] no loss in facility of understanding the view." The partial views drawn on the several sheets indicated in the above table are capable of being linked edge to edge, so that no partial view contains parts of another partial view. As here, "where views on two or more sheets form, in effect, a single complete view, the views on the several sheets are so arranged that the complete figure can be assembled without concealing any part of any of the views appearing on the various sheets." 37 C.F.R. § 1.84(h)(2).

[0029] It is noted that one or more of the partial views of the drawings may be blank, or may be absent of substantive elements (e.g., may show only lines, connectors, arrows, and/or the like). These drawings are included in order to assist readers of the application in assembling the single complete view from the partial sheet format required for submission by the USPTO, and, while their inclusion is not required and may be omitted in this or other applications without subtracting from the disclosed matter as a whole, their inclusion is proper, and should be considered and treated as intentional.

[0030] Fig. 1-A, when placed at position (1,1), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0031] Fig. 1-B, when placed at position (1,2), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0032] Fig. 1-C, when placed at position (1,3), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0033] Fig. 1-D, when placed at position (1,4), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0034] Fig. 1-E, when placed at position (1,5), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0035] Fig. 1-F, when placed at position (1,6), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0036] Fig. 1-G, when placed at position (1,7), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0037] Fig. 1-H, when placed at position (1,8), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0038] Fig. 1-I, when placed at position (1,9), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0039] Fig. 1-J, when placed at position (1,10), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0040] Fig. 1-K, when placed at position (2,1), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0041] Fig. 1-L, when placed at position (2,2), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0042] Fig. 1-M, when placed at position (2,3), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0043] Fig. 1-N, when placed at position (2,4), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0044] Fig. 1-O (the format of which is changed to avoid confusion with Figure "10" or "ten"), when placed at position (2,5), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0045] Fig. 1-P, when placed at position (2,6), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0046] Fig. 1-Q, when placed at position (2,7), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0047] Fig. 1-R, when placed at position (2,8), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0048] Fig. 1-S, when placed at position (2,9), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0049] Fig. 1-T, when placed at position (2,10), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0050] Fig. 1-U, when placed at position (3,1), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0051] Fig. 1-V, when placed at position (3,2), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0052] Fig. 1-W, when placed at position (3,3), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0053] Fig. 1-X, when placed at position (3,4), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0054] Fig. 1-Y, when placed at position (3,5), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0055] Fig. 1-Z, when placed at position (3,6), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0056] Fig. 1-AA, when placed at position (3,7), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0057] Fig. 1-AB, when placed at position (3,8), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0058] Fig. 1-AC, when placed at position (3,9), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0059] Fig. 1-AD, when placed at position (3,10), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0060] Fig. 1-AE, when placed at position (4,1), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0061] Fig. 1-AF, when placed at position (4,2), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0062] Fig. 1-AG, when placed at position (4,3), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0063] Fig. 1-AH, when placed at position (4,4), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0064] Fig. 1-AI, when placed at position (4,5), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0065] Fig. 1-AJ, when placed at position (4,6), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0066] Fig. 1-AK, when placed at position (4,7), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0067] Fig. 1-AL, when placed at position (4,8), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0068] Fig. 1-AM, when placed at position (4,9), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0069] Fig. 1-AN, when placed at position (4,10), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0070] Fig. 2A shows a high-level block diagram of an exemplary environment 200, including an image device 220, according to one or more embodiments.

[0071] Fig. 2B shows a high-level block diagram of a computing device, e.g., a server device 230 operating in an exemplary environment 200, according to one or more embodiments.

[0072] Fig. 3A shows a high-level block diagram of an exemplary operation of a device 220A in an exemplary environment 300A, according to embodiments.

[0073] Fig. 3B shows a high-level block diagram of an exemplary operation of a device 220B in an exemplary environment 300B, according to embodiments.

[0074] Fig. 3C shows a high-level block diagram of an exemplary operation of a device 220C in an exemplary environment 300C, according to embodiments.

[0075] Fig. 4A shows a high-level block diagram of an exemplary operation of an image device 420 in an exemplary environment 400A, according to embodiments.

[0076] Fig. 4B shows a high-level block diagram of an exemplary operation of an image device 420B in an exemplary environment 400B, according to embodiments.

[0077] Fig. 5A shows a high-level block diagram of an exemplary operation of a server device 530A in an exemplary environment 500A, according to embodiments.

[0078] Fig. 5B shows a high-level block diagram of an exemplary operation of a server device 530B in an exemplary environment 500B, according to embodiments.

[0079] Fig. 6, including Figs. 6A-6G, shows a particular perspective of a request for particular image data that is part of a scene acquiring module 252 of processing module 250 of server device 230 of Fig. 2B, according to an embodiment.

[0080] Fig. 7, including Figs. 7A-7E, shows a particular perspective of a request for particular image data transmitting to an image sensor array module 254 of processing module 250 of server device 230 of Fig. 2B, according to an embodiment.

[0081] Fig. 8, including Figs. 8A-8C, shows a particular perspective of a particular image data from the image sensor array exclusive receiving module 256 of processing module 250 of server device 230 of Fig. 2B, according to an embodiment.

[0082] Fig. 9, including Figs. 9A-9E, shows a particular perspective of a received particular image data transmitting to at least one requestor module 258 of processing module 250 of server device 230 of Fig. 2B, according to an embodiment.

[0083] Fig. 10 is a high-level logic flowchart of a process, e.g., operational flow 1000, including one or more operations of an acquiring a request for particular image data that is part of a scene operation, a transmitting the request for the particular image data to an image sensor array operation, a receiving only the particular image data from the image sensor array operation, and a transmitting the received particular image data to at least one requestor operation, according to an embodiment.

[0084] Fig. 11A is a high-level logic flow chart of a process depicting alternate implementations of an acquiring a request for particular image data that is part of a scene operation 1002, according to one or more embodiments.

[0085] Fig. 11B is a high-level logic flow chart of a process depicting alternate implementations of an acquiring a request for particular image data that is part of a scene operation 1002, according to one or more embodiments.

[0086] Fig. 11C is a high-level logic flow chart of a process depicting alternate implementations of an acquiring a request for particular image data that is part of a scene operation 1002, according to one or more embodiments.

[0087] Fig. 11D is a high-level logic flow chart of a process depicting alternate implementations of an acquiring a request for particular image data that is part of a scene operation 1002, according to one or more embodiments.

[0088] Fig. 11E is a high-level logic flow chart of a process depicting alternate implementations of an acquiring a request for particular image data that is part of a scene operation 1002, according to one or more embodiments.

[0089] Fig. 11F is a high-level logic flow chart of a process depicting alternate implementations of an acquiring a request for particular image data that is part of a scene operation 1002, according to one or more embodiments.

[0090] Fig. 11G is a high-level logic flow chart of a process depicting alternate implementations of an acquiring a request for particular image data that is part of a scene operation 1002, according to one or more embodiments.

[0091] Fig. 12A is a high-level logic flow chart of a process depicting alternate implementations of a transmitting the request for the particular image data to an image sensor array operation 1004, according to one or more embodiments.

[0092] Fig. 12B is a high-level logic flow chart of a process depicting alternate implementations of a transmitting the request for the particular image data to an image sensor array operation 1004, according to one or more embodiments.

[0093] Fig. 12C is a high-level logic flow chart of a process depicting alternate implementations of a transmitting the request for the particular image data to an image sensor array operation 1004, according to one or more embodiments.

[0094] Fig. 12D is a high-level logic flow chart of a process depicting alternate implementations of a transmitting the request for the particular image data to an image sensor array operation 1004, according to one or more embodiments.

[0095] Fig. 12E is a high-level logic flow chart of a process depicting alternate implementations of a transmitting the request for the particular image data to an image sensor array operation 1004, according to one or more embodiments.

[0096] Fig. 13A is a high-level logic flow chart of a process depicting alternate implementations of a receiving only the particular image data from the image sensor array operation 1006, according to one or more embodiments.

[0097] Fig. 13B is a high-level logic flow chart of a process depicting alternate implementations of a receiving only the particular image data from the image sensor array operation 1006, according to one or more embodiments.

[0098] Fig. 13C is a high-level logic flow chart of a process depicting alternate implementations of a receiving only the particular image data from the image sensor array operation 1006, according to one or more embodiments.

[0099] Fig. 14A is a high-level logic flow chart of a process depicting alternate implementations of a transmitting the received particular image data to at least one requestor operation 1008, according to one or more embodiments.

[00100] Fig. 14B is a high-level logic flow chart of a process depicting alternate implementations of a transmitting the received particular image data to at least one requestor operation 1008, according to one or more embodiments.

[00101] Fig. 14C is a high-level logic flow chart of a process depicting alternate implementations of a transmitting the received particular image data to at least one requestor operation 1008, according to one or more embodiments.

[00102] Fig. 14D is a high-level logic flow chart of a process depicting alternate implementations of a transmitting the received particular image data to at least one requestor operation 1008, according to one or more embodiments.

[00103] Fig. 14E is a high-level logic flow chart of a process depicting alternate implementations of a transmitting the received particular image data to at least one requestor operation 1008, according to one or more embodiments.

Figures Were Copied From a Pending United States Application into this PCT Application in View of Meeting Bar Date, So Each Figure Under this Roman Numeral Section Should Be Read As If Having Roman Numeral Denotation Sufficient to Distinguish from Other Figures Copied From Other Applications in View of Bar Date, Such Clerical Issues to Be Cured by Subsequent Amendment

[0001] For a more complete understanding of embodiments, reference now is made to the following descriptions taken in connection with the accompanying drawings. The use of the same symbols in different drawings typically indicates similar or identical items, unless context dictates otherwise. The illustrative embodiments described in the detailed description, drawings, and claims are not meant to be limiting. Other embodiments may be utilized, and other changes may be made, without departing from the spirit or scope of the subject matter presented here.

[0002] Fig. 1, including Figs. 1-A through 1-AN, shows a high-level system diagram of one or more exemplary environments in which transactions and potential transactions may be carried out, according to one or more embodiments. Fig. 1 forms a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein when Figs. 1-A through 1-AN are stitched together in the manner shown in Fig. 1-D, which is reproduced below in table format.

[0003] In accordance with 37 C.F.R. § 1.84(h)(2), Fig. 1 shows "a view of a large machine or device in its entirety ... broken into partial views ... extended over several sheets" labeled Fig. 1-A through Fig. 1-AN (Sheets 1-40). The "views on two or more sheets form, in effect, a single complete view, [and] the views on the several sheets ... [are] so arranged that the complete figure can be assembled" from "partial views drawn on separate sheets ... linked edge to edge." Thus, in Fig. 1, the partial views Figs. 1-A through 1-AN are ordered alphabetically, increasing in columns from left to right and in rows from top to bottom, as shown in the following table:

Table 1. Table showing alignment of enclosed drawings to form partial schematic of one or more environments.

Row 1: 1-A | 1-B | 1-C | 1-D | 1-E | 1-F | 1-G | 1-H | 1-I | 1-J
Row 2: 1-K | 1-L | 1-M | 1-N | 1-O | 1-P | 1-Q | 1-R | 1-S | 1-T
Row 3: 1-U | 1-V | 1-W | 1-X | 1-Y | 1-Z | 1-AA | 1-AB | 1-AC | 1-AD
Row 4: 1-AE | 1-AF | 1-AG | 1-AH | 1-AI | 1-AJ | 1-AK | 1-AL | 1-AM | 1-AN

[0004] In accordance with 37 C.F.R. § 1.84(h)(2), Fig. 1 is "... a view of a large machine or device in its entirety ... broken into partial views ... extended over several sheets ... [with] no loss in facility of understanding the view." The partial views drawn on the several sheets indicated in the above table are capable of being linked edge to edge, so that no partial view contains parts of another partial view. As here, "where views on two or more sheets form, in effect, a single complete view, the views on the several sheets are so arranged that the complete figure can be assembled without concealing any part of any of the views appearing on the various sheets." 37 C.F.R. § 1.84(h)(2).
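As a purely illustrative sketch (assuming only the 4-row by 10-column, row-major, alphabetical ordering described above), the following Python snippet reproduces the sheet positions recited in Table 1 and in paragraphs [0006] through [0045]:

    from string import ascii_uppercase

    # Sheet labels run 1-A through 1-Z, then 1-AA through 1-AN (40 sheets).
    labels = list(ascii_uppercase) + ["A" + c for c in ascii_uppercase[:14]]

    for i, label in enumerate(labels):
        row, col = i // 10 + 1, i % 10 + 1  # 4 rows x 10 columns, filled row by row
        print(f"Fig. 1-{label} -> position ({row},{col})")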

[0005] It is noted that one or more of the partial views of the drawings may be blank, or may be absent of substantive elements (e.g., may show only lines, connectors, arrows, and/or the like). These drawings are included in order to assist readers of the application in assembling the single complete view from the partial sheet format required for submission by the USPTO, and, while their inclusion is not required and may be omitted in this or other applications without subtracting from the disclosed matter as a whole, their inclusion is proper, and should be considered and treated as intentional.

[0006] Fig. 1-A, when placed at position (1,1), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0007] Fig. 1-B, when placed at position (1,2), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0008] Fig. 1-C, when placed at position (1,3), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0009] Fig. 1-D, when placed at position (1,4), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0010] Fig. 1-E, when placed at position (1,5), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0011] Fig. 1-F, when placed at position (1,6), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0012] Fig. 1-G, when placed at position (1,7), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0013] Fig. 1-H, when placed at position (1,8), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0014] Fig. 1-I, when placed at position (1,9), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0015] Fig. 1-J, when placed at position (1,10), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0016] Fig. 1-K, when placed at position (2,1), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0017] Fig. 1-L, when placed at position (2,2), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0018] Fig. 1-M, when placed at position (2,3), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0019] Fig. 1-N, when placed at position (2,4), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0020] Fig. 1-O (which format is changed to avoid confusion as Figure "10" or "ten"), when placed at position (2,5), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0021] Fig. 1-P, when placed at position (2,6), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0022] Fig. 1-Q, when placed at position (2,7), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0023] Fig. 1-R, when placed at position (2,8), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0024] Fig. 1-S, when placed at position (2,9), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0025] Fig. 1-T, when placed at position (2,10), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0026] Fig. 1-U, when placed at position (3,1), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0027] Fig. 1-V, when placed at position (3,2), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0028] Fig. 1-W, when placed at position (3,3), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0029] Fig. 1-X, when placed at position (3,4), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0030] Fig. 1-Y, when placed at position (3,5), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0031] Fig. 1-Z, when placed at position (3,6), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0032] Fig. 1-AA, when placed at position (3,7), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0033] Fig. 1-AB, when placed at position (3,8), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0034] Fig. 1-AC, when placed at position (3,9), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0035] Fig. 1-AD, when placed at position (3,10), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0036] Fig. 1-AE, when placed at position (4,1), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0037] Fig. 1-AF, when placed at position (4,2), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0038] Fig. 1-AG, when placed at position (4,3), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0039] Fig. 1-AH, when placed at position (4,4), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0040] Fig. 1-AI, when placed at position (4,5), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0041] Fig. 1-AJ, when placed at position (4,6), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0042] Fig. 1-AK, when placed at position (4,7), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0043] Fig. 1-AL, when placed at position (4,8), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0044] Fig. 1-AM, when placed at position (4,9), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0045] Fig. 1-AN, when placed at position (4,10), forms at least a portion of a partially schematic diagram of an environment(s) and/or an implementation(s) of technologies described herein.

[0046] Fig. 2A shows a high-level block diagram of an exemplary environment 200, including a requestor device 250, according to one or more embodiments.

[0047] Fig. 2B shows a high-level block diagram of a computing device, e.g., a requestor device 250 operating in an exemplary environment 200, according to one or more embodiments.

[0048] Fig. 3A shows a high-level block diagram of an exemplary operation of a device 220A in an exemplary environment 300A, according to embodiments.

[0049] Fig. 3B shows a high-level block diagram of an exemplary operation of a device 220B in an exemplary environment 300B, according to embodiments.

[0050] Fig. 3C shows a high-level block diagram of an exemplary operation of a device 220C in an exemplary environment 300C, according to embodiments.

[0051] Fig. 4A shows a high-level block diagram of an exemplary operation of an image device 420 in an exemplary environment 400A, according to embodiments.

[0052] Fig. 4B shows a high-level block diagram of an exemplary operation of an image device 420B in an exemplary environment 400B, according to embodiments.

[0053] Fig. 5A shows a high-level block diagram of an exemplary operation of a server device 530A in an exemplary environment 500A, according to embodiments.

[0054] Fig. 5B shows a high-level block diagram of an exemplary operation of a server device 530B in an exemplary environment 500B, according to embodiments.

[0055] Fig. 5C shows a high-level block diagram of an exemplary operation of a requestor device 530C in an exemplary environment 500C, according to embodiments.

[0056] Fig. 5D shows a high-level block diagram of an exemplary operation of a requestor device 530D in an exemplary environment 500D, according to embodiments.

[0057] Fig. 6, including Figs. 6A-6F, shows a particular perspective of an input of a request for particular image data accepting module 252 of processing module 250 of requestor device 250 of Fig. 2B, according to an embodiment.

[0058] Fig. 7, including Figs. 7A-7G, shows a particular perspective of an inputted request for the particular image data transmitting module 254 of processing module 250 of requestor device 250 of Fig. 2B, according to an embodiment.

[0059] Fig. 8, including Figs. 8A-8C, shows a particular perspective of a particular image data from the image sensor array exclusive receiving module 256 of processing module 250 of requestor device 250 of Fig. 2B, according to an embodiment.

[0060] Fig. 9, including Figs. 9A-9C, shows a particular perspective of a received particular image data presenting module 258 of processing module 250 of requestor device 250 of Fig. 2B, according to an embodiment.

[0061] Fig. 10 is a high-level logic flowchart of a process, e.g., operational flow 1000, including one or more operations of an accepting input of a request for a particular image operation, a transmitting the request for the particular image to an image sensor array operation, a receiving only the particular image from the image sensor array operation, and a presenting the received particular image operation, according to an embodiment.

[0062] Fig. 11A is a high-level logic flow chart of a process depicting alternate implementations of an accepting input of a request for a particular image operation 1002, according to one or more embodiments.

[0063] Fig. 11B is a high-level logic flow chart of a process depicting alternate implementations of an accepting input of a request for a particular image operation 1002, according to one or more embodiments.

[0064] Fig. 11C is a high-level logic flow chart of a process depicting alternate implementations of an accepting input of a request for a particular image operation 1002, according to one or more embodiments.

[0065] Fig. 11D is a high-level logic flow chart of a process depicting alternate implementations of an accepting input of a request for a particular image operation 1002, according to one or more embodiments.

[0066] Fig. 11E is a high-level logic flow chart of a process depicting alternate implementations of an accepting input of a request for a particular image operation 1002, according to one or more embodiments.

[0067] Fig. 11F is a high-level logic flow chart of a process depicting alternate implementations of an accepting input of a request for a particular image operation 1002, according to one or more embodiments.

[0068] Fig. 12A is a high-level logic flow chart of a process depicting alternate implementations of transmitting the request for the particular image to an image sensor array operation 1004, according to one or more embodiments.

[0069] Fig. 12B is a high-level logic flow chart of a process depicting alternate implementations of transmitting the request for the particular image to an image sensor array operation 1004, according to one or more embodiments.

[0070] Fig. 12C is a high-level logic flow chart of a process depicting alternate implementations of transmitting the request for the particular image to an image sensor array operation 1004, according to one or more embodiments.

[0071] Fig. 12D is a high-level logic flow chart of a process depicting alternate implementations of transmitting the request for the particular image to an image sensor array operation 1004, according to one or more embodiments.

[0072] Fig. 12E is a high-level logic flow chart of a process depicting alternate implementations of transmitting the request for the particular image to an image sensor array operation 1004, according to one or more embodiments.

[0073] Fig. 12F is a high-level logic flow chart of a process depicting alternate implementations of transmitting the request for the particular image to an image sensor array operation 1004, according to one or more embodiments.

[0074] Fig. 12G is a high-level logic flow chart of a process depicting alternate implementations of transmitting the request for the particular image to an image sensor array operation 1004, according to one or more embodiments.

[0075] Fig. 13A is a high-level logic flow chart of a process depicting alternate implementations of a receiving only the particular image from the image sensor array operation 1006, according to one or more embodiments.

[0076] Fig. 13B is a high-level logic flow chart of a process depicting alternate implementations of a receiving only the particular image from the image sensor array operation 1006, according to one or more embodiments.

[0077] Fig. 13C is a high-level logic flow chart of a process depicting alternate implementations of a receiving only the particular image from the image sensor array operation 1006, according to one or more embodiments.

[0078] Fig. 14A is a high-level logic flow chart of a process depicting alternate implementations of a presenting the received particular image operation 1008, according to one or more embodiments.

[0079] Fig. 14B is a high-level logic flow chart of a process depicting alternate implementations of a presenting the received particular image operation 1008, according to one or more embodiments.

[0080] Fig. 14C is a high-level logic flow chart of a process depicting alternate implementations of a presenting the received particular image operation 1008, according to one or more embodiments.

DETAILED DESCRIPTION

Overview

[00104] In the following detailed description, reference is made to the accompanying drawings, which form a part hereof. In the drawings, similar symbols typically identify similar or identical components or items, unless context dictates otherwise. The illustrative embodiments described in the detailed description, drawings, and claims are not meant to be limiting. Other embodiments may be utilized, and other changes may be made, without departing from the spirit or scope of the subject matter presented here.

[00105] Thus, in accordance with various embodiments, computationally implemented methods, systems, circuitry, articles of manufacture, ordered chains of matter, and computer program products are designed to, among other things, provide an interface for acquiring a request for particular image data that is part of a scene, transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data, and receiving only the particular image data from the image sensor array.
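The operational flow just summarized can be sketched in a few lines of Python. The sketch below is purely illustrative; the class and function names are hypothetical and do not correspond to the numbered modules in the figures. It assumes a simple region-of-interest request and shows the key property of the flow: the requestor receives only the particular image data, never the full scene.

    class ImageSensorArray:
        """Stands in for an array of more than one image sensor, configured to
        capture a scene larger than any requested portion of it."""

        def capture_scene(self):
            # A real array would merge the outputs of its many sensors here.
            return {"pixels": "full-field scene data"}

        def extract(self, region):
            # Return only the pixels covering the requested region of the scene.
            return f"particular image data for region {region}"

    def handle_request(region, array):
        request = {"region": region}             # 1. acquire a request for particular image data
        data = array.extract(request["region"])  # 2-3. transmit the request; receive only that data
        return data                              # 4. transmit the received data to the requestor

    print(handle_request((0, 0, 640, 480), ImageSensorArray()))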

[00106] The claims, description, and drawings of this application may describe one or more of the instant technologies in operational/functional language, for example as a set of operations to be performed by a computer. Such operational/functional description in most instances would be understood by one skilled the art as specifically-configured hardware (e.g., because a general purpose computer in effect becomes a special purpose computer once it is programmed to perform particular functions pursuant to instructions from program software (e.g., a high-level computer program serving as a hardware specification)).

Operational/Functional Language is a Concrete Specification for Physical

Implementation

[00108] Importantly, although the operational/functional descriptions described herein are understandable by the human mind, they are not abstract ideas of the operations/functions divorced from computational implementation of those operations/functions. Rather, the operations/functions represent a specification for the massively complex computational machines or other means. As discussed in detail below, the operational/functional language must be read in its proper technological context, i.e., as concrete specifications for physical implementations.

[00109] The logical operations/functions described herein are a distillation of machine specifications or other physical mechanisms specified by the operations/functions such that the otherwise inscrutable machine specifications may be comprehensible to the human mind. The distillation also allows one of skill in the art to adapt the operational/functional description of the technology across many different specific vendors' hardware configurations or platforms, without being limited to specific vendors' hardware configurations or platforms.

[00110] Some of the present technical description (e.g., detailed description, drawings, claims, etc.) may be set forth in terms of logical operations/functions. As described in more detail in the following paragraphs, these logical operations/functions are not representations of abstract ideas, but rather representative of static or sequenced specifications of various hardware elements. Differently stated, unless context dictates otherwise, the logical operations/functions will be understood by those of skill in the art to be representative of static or sequenced specifications of various hardware elements. This is true because tools available to one of skill in the art to implement technical disclosures set forth in operational/functional formats - tools in the form of a high-level programming language (e.g., C, Java, Visual Basic, etc.), or tools in the form of Very High Speed Hardware Description Language ("VHDL," which is a language that uses text to describe logic circuits) - are generators of static or sequenced specifications of various hardware configurations. This fact is sometimes obscured by the broad term "software," but, as shown by the following explanation, those skilled in the art understand that what is termed "software" is a shorthand for a massively complex interchaining/specification of ordered-matter elements. The term "ordered-matter elements" may refer to physical components of computation, such as assemblies of electronic logic gates, molecular computing logic constituents, quantum computing mechanisms, etc.

[00111] For example, a high-level programming language is a programming language with strong abstraction, e.g., multiple levels of abstraction, from the details of the sequential organizations, states, inputs, outputs, etc., of the machines that a high-level programming language actually specifies. In order to facilitate human comprehension, in many instances, high-level programming languages resemble or even share symbols with natural languages.

[00112] It has been argued that because high-level programming languages use strong abstraction (e.g., that they may resemble or share symbols with natural languages), they are therefore a "purely mental construct" (e.g., that "software" - a computer program or computer programming - is somehow an ineffable mental construct, because at a high level of abstraction, it can be conceived and understood in the human mind). This argument has been used to characterize technical description in the form of functions/operations as somehow "abstract ideas." In fact, in technological arts (e.g., the information and communication technologies) this is not true.

[00113] The fact that high-level programming languages use strong abstraction to facilitate human understanding should not be taken as an indication that what is expressed is an abstract idea. In fact, those skilled in the art understand that just the opposite is true. If a high-level programming language is the tool used to implement a technical disclosure in the form of functions/operations, those skilled in the art will recognize that, far from being abstract, imprecise, "fuzzy," or "mental" in any significant semantic sense, such a tool is instead a near incomprehensibly precise sequential specification of specific computational machines - the parts of which are built up by activating/selecting such parts from typically more general computational machines over time (e.g., clocked time). This fact is sometimes obscured by the superficial similarities between high-level programming languages and natural languages. These superficial similarities also may cause a glossing over of the fact that high-level programming language implementations ultimately perform valuable work by creating/controlling many different computational machines.

[00114] The many different computational machines that a high-level programming language specifies are almost unimaginably complex. At base, the hardware used in the computational machines typically consists of some type of ordered matter (e.g., traditional electronic devices (e.g., transistors), deoxyribonucleic acid (DNA), quantum devices, mechanical switches, optics, fluidics, pneumatics, optical devices (e.g., optical interference devices), molecules, etc.) that are arranged to form logic gates. Logic gates are typically physical devices that may be electrically, mechanically, chemically, or otherwise driven to change physical state in order to create a physical reality of Boolean logic.

[00115] Logic gates may be arranged to form logic circuits, which are typically physical devices that may be electrically, mechanically, chemically, or otherwise driven to create a physical reality of certain logical functions. Types of logic circuits include such devices as multiplexers, registers, arithmetic logic units (ALUs), computer memory, etc., each type of which may be combined to form yet other types of physical devices, such as a central processing unit (CPU) - the best known of which is the microprocessor. A modern microprocessor will often contain more than one hundred million logic gates in its many logic circuits (and often more than a billion transistors).
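As a purely illustrative aside, the progression from individual logic gates to a small logic circuit can be sketched in software. The half adder below is a standard textbook circuit built from two gates; it is offered only to make the gate-to-circuit composition concrete and is not an element of the described systems.

    def AND(a, b):
        return a & b

    def XOR(a, b):
        return a ^ b

    def half_adder(a, b):
        # Two gates suffice to add two one-bit values.
        return XOR(a, b), AND(a, b)  # (sum bit, carry bit)

    for a in (0, 1):
        for b in (0, 1):
            s, c = half_adder(a, b)
            print(f"{a} + {b} -> sum {s}, carry {c}")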

[00116] The logic circuits forming the microprocessor are arranged to provide a microarchitecture that will carry out the instructions defined by that microprocessor's defined Instruction Set Architecture. The Instruction Set Architecture is the part of the microprocessor architecture related to programming, including the native data types, instructions, registers, addressing modes, memory architecture, interrupt and exception handling, and external Input/Output.

[00117] The Instruction Set Architecture includes a specification of the machine language that can be used by programmers to use/control the microprocessor. Since the machine language instructions are such that they may be executed directly by the microprocessor, typically they consist of strings of binary digits, or bits. For example, a typical machine language instruction might be many bits long (e.g., 32, 64, or 128 bit strings are currently common). A typical machine language instruction might take the form "11110000101011110000111100111111" (a 32 bit instruction).

[00118] It is significant here that, although the machine language instructions are written as sequences of binary digits, in actuality those binary digits specify physical reality. For example, if certain semiconductors are used to make the operations of Boolean logic a physical reality, the apparently mathematical bits "1" and "0" in a machine language instruction actually constitute shorthand that specifies the application of specific voltages to specific wires. For example, in some semiconductor technologies, the binary number "1" (e.g., logical "1") in a machine language instruction specifies around +5 volts applied to a specific "wire" (e.g., metallic traces on a printed circuit board) and the binary number "0" (e.g., logical "0") in a machine language instruction specifies around -5 volts applied to a specific "wire." In addition to specifying voltages of the machines' configuration, such machine language instructions also select out and activate specific groupings of logic gates from the millions of logic gates of the more general machine. Thus, far from abstract mathematical expressions, machine language instruction programs, even though written as a string of zeros and ones, specify many, many constructed physical machines or physical machine states.

[00119] Machine language is typically incomprehensible by most humans (e.g., the above example was just ONE instruction, and some personal computers execute more than two billion instructions every second). Thus, programs written in machine language - which may be tens of millions of machine language instructions long - are incomprehensible. In view of this, early assembly languages were developed that used mnemonic codes to refer to machine language instructions, rather than using the machine language instructions' numeric values directly (e.g., for performing a multiplication operation, programmers coded the abbreviation "mult," which represents the binary number "011000" in MIPS machine code). While assembly languages were initially a great aid to humans controlling the microprocessors to perform work, in time the complexity of the work that needed to be done by the humans outstripped the ability of humans to control the microprocessors using merely assembly languages.
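The mnemonic-to-bits correspondence can be made concrete with a short sketch. The snippet below follows the standard MIPS R-type instruction layout (opcode, rs, rt, rd, shamt, funct), with "mult" carrying the funct code 011000 noted above; the helper function and the register choices ($s0 = 16, $s1 = 17) are illustrative assumptions, not part of the disclosure.

    def encode_mult(rs, rt):
        # MIPS R-type layout: opcode(6) rs(5) rt(5) rd(5) shamt(5) funct(6).
        # For mult, the opcode, rd, and shamt fields are all zero; funct is 0b011000.
        return (rs << 21) | (rt << 16) | 0b011000

    word = encode_mult(16, 17)  # mult $s0, $s1
    print(f"{word:032b}")       # the 32-bit string executed by the processor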

[00120] At this point, it was noted that the same tasks needed to be done over and over, and the machine language necessary to do those repetitive tasks was the same. In view of this, compilers were created. A compiler is a device that takes a statement that is more comprehensible to a human than either machine or assembly language, such as "add 2 + 2 and output the result," and translates that human understandable statement into a complicated, tedious, and immense machine language code (e.g., millions of 32, 64, or 128 bit length strings). Compilers thus translate high-level programming language into machine language.
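As a hedged illustration of this translation step, Python's standard-library dis module exposes the lower-level instruction sequence produced from a human-readable statement. Python compiles to bytecode for a virtual machine rather than to native machine language, but the step it demonstrates - a human-readable statement in, an instruction sequence out - is the same in kind as the compilation described above.

    import dis

    # "print(2 + 2)" stands in for the human-readable statement described above.
    dis.dis(compile("print(2 + 2)", "<example>", "exec"))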

[00121] This compiled machine language, as described above, is then used as the technical specification which sequentially constructs and causes the interoperation of many different computational machines such that humanly useful, tangible, and concrete work is done. For example, as indicated above, such machine language - the compiled version of the higher-level language - functions as a technical specification which selects out hardware logic gates, specifies voltage levels, voltage transition timings, etc., such that the humanly useful work is accomplished by the hardware.

[00122] Thus, a functional/operational technical description, when viewed by one of skill in the art, is far from an abstract idea. Rather, such a functional/operational technical description, when understood through the tools available in the art such as those just described, is instead understood to be a humanly understandable representation of a hardware specification, the complexity and specificity of which far exceeds the comprehension of most any one human. With this in mind, those skilled in the art will understand that any such operational/functional technical descriptions - in view of the disclosures herein and the knowledge of those skilled in the art - may be understood as operations made into physical reality by (a) one or more interchained physical machines, (b) interchained logic gates configured to create one or more physical machine(s) representative of sequential/combinatorial logic(s), (c) interchained ordered matter making up logic gates (e.g., interchained electronic devices (e.g., transistors), DNA, quantum devices, mechanical switches, optics, fluidics, pneumatics, molecules, etc.) that create physical reality representative of logic(s), or (d) virtually any combination of the foregoing. Indeed, any physical object which has a stable, measurable, and changeable state may be used to construct a machine based on the above technical description. Charles Babbage, for example, constructed the first computer out of wood, powered by cranking a handle.

[00123] Thus, far from being understood as an abstract idea, those skilled in the art will recognize a functional/operational technical description as a humanly-understandable representation of one or more almost unimaginably complex and time sequenced hardware instantiations. The fact that functional/operational technical descriptions might lend themselves readily to high-level computing languages (or high-level block diagrams for that matter) that share some words, structures, phrases, etc. with natural language simply cannot be taken as an indication that such functional/operational technical descriptions are abstract ideas, or mere expressions of abstract ideas. In fact, as outlined herein, in the technological arts this is simply not true. When viewed through the tools available to those of skill in the art, such functional/operational technical descriptions are seen as specifying hardware configurations of almost unimaginable complexity.

[00124] As outlined above, the reason for the use of functional/operational technical descriptions is at least twofold. First, the use of functional/operational technical descriptions allows near-infinitely complex machines and machine operations arising from interchained hardware elements to be described in a manner that the human mind can process (e.g., by mimicking natural language and logical narrative flow). Second, the use of functional/operational technical descriptions assists the person of skill in the art in understanding the described subject matter by providing a description that is more or less independent of any specific vendor's piece(s) of hardware.

[00125] The use of functional/operational technical descriptions assists the person of skill in the art in understanding the described subject matter since, as is evident from the above discussion, one could easily, although not quickly, transcribe the technical descriptions set forth in this document as trillions of ones and zeroes, billions of single lines of assembly-level machine code, millions of logic gates, thousands of gate arrays, or any number of intermediate levels of abstractions. However, if any such low-level technical descriptions were to replace the present technical description, a person of skill in the art could encounter undue difficulty in implementing the disclosure, because such a low-level technical description would likely add complexity without a corresponding benefit (e.g., by describing the subject matter utilizing the conventions of one or more vendor-specific pieces of hardware). Thus, the use of functional/operational technical descriptions assists those of skill in the art by separating the technical descriptions from the conventions of any vendor-specific piece of hardware.

[00126] In view of the foregoing, the logical operations/functions set forth in the present technical description are representative of static or sequenced specifications of various ordered-matter elements, in order that such specifications may be comprehensible to the human mind and adaptable to create many various hardware configurations. The logical operations/functions disclosed herein should be treated as such, and should not be disparagingly characterized as abstract ideas merely because the specifications they represent are presented in a manner that one of skill in the art can readily understand and apply in a manner independent of a specific vendor's hardware implementation.

[00127] Those having skill in the art will recognize that the state of the art has progressed to the point where there is little distinction left between hardware, software (e.g., a high-level computer program serving as a hardware specification), and/or firmware implementations of aspects of systems; the use of hardware, software, and/or firmware is generally (but not always, in that in certain contexts the choice between hardware and software can become significant) a design choice representing cost vs. efficiency tradeoffs. Those having skill in the art will appreciate that there are various vehicles by which processes and/or systems and/or other technologies described herein can be effected (e.g., hardware, software (e.g., a high-level computer program serving as a hardware specification), and/or firmware), and that the preferred vehicle will vary with the context in which the processes and/or systems and/or other technologies are deployed. For example, if an implementer determines that speed and accuracy are paramount, the implementer may opt for a mainly hardware and/or firmware vehicle; alternatively, if flexibility is paramount, the implementer may opt for a mainly software (e.g., a high-level computer program serving as a hardware specification) implementation; or, yet again alternatively, the implementer may opt for some combination of hardware, software (e.g., a high-level computer program serving as a hardware specification), and/or firmware in one or more machines, compositions of matter, and articles of manufacture, limited to patentable subject matter under 35 U.S.C. 101. Hence, there are several possible vehicles by which the processes and/or devices and/or other technologies described herein may be effected, none of which is inherently superior to the other in that any vehicle to be utilized is a choice dependent upon the context in which the vehicle will be deployed and the specific concerns (e.g., speed, flexibility, or predictability) of the implementer, any of which may vary. Those skilled in the art will recognize that optical aspects of implementations will typically employ optically-oriented hardware, software (e.g., a high-level computer program serving as a hardware specification), and/or firmware.

[00128] In some implementations described herein, logic and similar implementations may include computer programs or other control structures. Electronic circuitry, for example, may have one or more paths of electrical current constructed and arranged to implement various functions as described herein. In some implementations, one or more media may be configured to bear a device-detectable implementation when such media hold or transmit device-detectable instructions operable to perform as described herein. In some variants, for example, implementations may include an update or modification of existing software (e.g., a high-level computer program serving as a hardware specification) or firmware, or of gate arrays or programmable hardware, such as by performing a reception of or a transmission of one or more instructions in relation to one or more operations described herein. Alternatively or additionally, in some variants, an implementation may include special-purpose hardware, software (e.g., a high-level computer program serving as a hardware specification), firmware components, and/or general-purpose components executing or otherwise invoking special-purpose components. Specifications or other implementations may be transmitted by one or more instances of tangible transmission media as described herein, optionally by packet transmission or otherwise by passing through distributed media at various times.

[00129] Alternatively or additionally, implementations may include executing a special-purpose instruction sequence or invoking circuitry for enabling, triggering, coordinating, requesting, or otherwise causing one or more occurrences of virtually any functional operation described herein. In some variants, operational or other logical descriptions herein may be expressed as source code and compiled or otherwise invoked as an executable instruction sequence. In some contexts, for example, implementations may be provided, in whole or in part, by source code, such as C++, or other code sequences. In other implementations, source or other code implementation, using commercially available and/or techniques in the art, may be compiled/implemented/translated/converted into a high-level descriptor language (e.g., initially implementing described technologies in C or C++ programming language and thereafter converting the programming language implementation into a logic-synthesizable language implementation, a hardware description language implementation, a hardware design simulation implementation, and/or other such similar mode(s) of expression). For example, some or all of a logical expression (e.g., computer programming language implementation) may be manifested as a Verilog-type hardware description (e.g., via Hardware Description Language (HDL) and/or Very High Speed Integrated Circuit Hardware Descriptor Language (VHDL)) or other circuitry model which may then be used to create a physical implementation having hardware (e.g., an Application Specific Integrated Circuit). Those skilled in the art will recognize how to obtain, configure, and optimize suitable transmission or computational elements, material supplies, actuators, or other structures in light of these teachings.

[00130] The term module, as used in the foregoing/following disclosure, may refer to a collection of one or more components that are arranged in a particular manner, or a collection of one or more general-purpose components that may be configured to operate in a particular manner at one or more particular points in time, and/or also configured to operate in one or more further manners at one or more further times. For example, the same hardware, or same portions of hardware, may be configured/reconfigured in sequential/parallel time(s) as a first type of module (e.g., at a first time), as a second type of module (e.g., at a second time, which may in some instances coincide with, overlap, or follow a first time), and/or as a third type of module (e.g., at a third time which may, in some instances, coincide with, overlap, or follow a first time and/or a second time), etc. Reconfigurable and/or controllable components (e.g., general purpose processors, digital signal processors, field programmable gate arrays, etc.) are capable of being configured as a first module that has a first purpose, then as a second module that has a second purpose, and then as a third module that has a third purpose, and so on. The transition of a reconfigurable and/or controllable component may occur in as little as a few nanoseconds, or may occur over a period of minutes, hours, or days.

[00131] In some such examples, at the time the component is configured to carry out the second purpose, the component may no longer be capable of carrying out that first purpose until it is reconfigured. A component may switch between configurations as different modules in as little as a few nanoseconds. A component may reconfigure on-the-fly, e.g., the reconfiguration of a component from a first module into a second module may occur just as the second module is needed. A component may reconfigure in stages, e.g., portions of a first module that are no longer needed may reconfigure into the second module even before the first module has finished its operation. Such reconfigurations may occur automatically, or may occur through prompting by an external source, whether that source is another component, an instruction, a signal, a condition, an external stimulus, or similar.

[00132] For example, a central processing unit of a personal computer may, at various times, operate as a module for displaying graphics on a screen, a module for writing data to a storage medium, a module for receiving user input, and a module for multiplying two large prime numbers, by configuring its logical gates in accordance with its instructions. Such reconfiguration may be invisible to the naked eye, and in some embodiments may include activation, deactivation, and/or re-routing of various portions of the component, e.g., switches, logic gates, inputs, and/or outputs. Thus, in the examples found in the foregoing/following disclosure, if an example includes or recites multiple modules, the example includes the possibility that the same hardware may implement more than one of the recited modules, either contemporaneously or at discrete times or timings. The implementation of multiple modules, whether using more components, fewer components, or the same number of components as the number of modules, is merely an implementation choice and does not generally affect the operation of the modules themselves. Accordingly, it should be understood that any recitation of multiple discrete modules in this disclosure includes implementations of those modules as any number of underlying components, including, but not limited to, a single component that reconfigures itself over time to carry out the functions of multiple modules, and/or multiple components that similarly reconfigure, and/or special purpose reconfigurable components.
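A minimal sketch, assuming a single general-purpose component reconfigured over time, may help fix the idea. The class below is hypothetical and purely illustrative: one component serves as a multiplying module at one time and as a data-reversing module at a later time, and, once reconfigured, it can no longer carry out its first purpose until configured again.

    class ReconfigurableComponent:
        """A stand-in for reconfigurable hardware (e.g., an FPGA or a
        general-purpose processor) that takes on one purpose at a time."""

        def __init__(self):
            self.behavior = None

        def configure(self, behavior):
            # Reconfiguring replaces the prior purpose entirely.
            self.behavior = behavior

        def run(self, *args):
            return self.behavior(*args)

    component = ReconfigurableComponent()
    component.configure(lambda x, y: x * y)       # first: a multiplying module
    print(component.run(6, 7))
    component.configure(lambda data: data[::-1])  # later: a data-reversing module
    print(component.run("stream"))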

[00133] Those skilled in the art will recognize that it is common within the art to implement devices and/or processes and/or systems, and thereafter use engineering and/or other practices to integrate such implemented devices and/or processes and/or systems into more comprehensive devices and/or processes and/or systems. That is, at least a portion of the devices and/or processes and/or systems described herein can be integrated into other devices and/or processes and/or systems via a reasonable amount of experimentation. Those having skill in the art will recognize that examples of such other devices and/or processes and/or systems might include - as appropriate to context and application - all or part of devices and/or processes and/or systems of (a) an air conveyance (e.g., an airplane, rocket, helicopter, etc.), (b) a ground conveyance (e.g., a car, truck, locomotive, tank, armored personnel carrier, etc.), (c) a building (e.g., a home, warehouse, office, etc.), (d) an appliance (e.g., a refrigerator, a washing machine, a dryer, etc.), (e) a communications system (e.g., a networked system, a telephone system, a Voice over IP system, etc.), (f) a business entity (e.g., an Internet Service Provider (ISP) entity such as Comcast Cable, Qwest, Southwestern Bell, etc.), or (g) a wired/wireless services entity (e.g., Sprint, Cingular, Nextel, etc.), etc.

[00134] In certain cases, use of a system or method may occur in a territory even if components are located outside the territory. For example, in a distributed computing context, use of a distributed computing system may occur in a territory even though parts of the system may be located outside of the territory (e.g., relay, server, processor, signal-bearing medium, transmitting computer, receiving computer, etc. located outside the territory).

[00135] A sale of a system or method may likewise occur in a territory even if components of the system or method are located and/or used outside the territory. Further, implementation of at least part of a system for performing a method in one territory does not preclude use of the system in another territory.

[00136] In a general sense, those skilled in the art will recognize that the various embodiments described herein can be implemented, individually and/or collectively, by various types of electro-mechanical systems having a wide range of electrical components such as hardware, software, firmware, and/or virtually any combination thereof, limited to patentable subject matter under 35 U.S.C. 101; and a wide range of components that may impart mechanical force or motion such as rigid bodies, spring or torsional bodies, hydraulics, electro-magnetically actuated devices, and/or virtually any combination thereof. Consequently, as used herein "electro-mechanical system" includes, but is not limited to, electrical circuitry operably coupled with a transducer (e.g., an actuator, a motor, a piezoelectric crystal, a Micro Electro Mechanical System (MEMS), etc.), electrical circuitry having at least one discrete electrical circuit, electrical circuitry having at least one integrated circuit, electrical circuitry having at least one application specific integrated circuit, electrical circuitry forming a general purpose computing device configured by a computer program (e.g., a general purpose computer configured by a computer program which at least partially carries out processes and/or devices described herein, or a microprocessor configured by a computer program which at least partially carries out processes and/or devices described herein), electrical circuitry forming a memory device (e.g., forms of memory (e.g., random access, flash, read only, etc.)), electrical circuitry forming a communications device (e.g., a modem, communications switch, optical-electrical equipment, etc.), and/or any non-electrical analog thereto, such as optical or other analogs (e.g., graphene based circuitry). Those skilled in the art will also appreciate that examples of electro-mechanical systems include, but are not limited to, a variety of consumer electronics systems, medical devices, as well as other systems such as motorized transport systems, factory automation systems, security systems, and/or communication/computing systems. Those skilled in the art will recognize that electro-mechanical as used herein is not necessarily limited to a system that has both electrical and mechanical actuation except as context may dictate otherwise.

[00137] In a general sense, those skilled in the art will recognize that the various aspects described herein which can be implemented, individually and/or collectively, by a wide range of hardware, software, firmware, and/or any combination thereof can be viewed as being composed of various types of "electrical circuitry." Consequently, as used herein "electrical circuitry" includes, but is not limited to, electrical circuitry having at least one discrete electrical circuit, electrical circuitry having at least one integrated circuit, electrical circuitry having at least one application specific integrated circuit, electrical circuitry forming a general purpose computing device configured by a computer program (e.g., a general purpose computer configured by a computer program which at least partially carries out processes and/or devices described herein, or a microprocessor configured by a computer program which at least partially carries out processes and/or devices described herein), electrical circuitry forming a memory device (e.g., forms of memory (e.g., random access, flash, read only, etc.)), and/or electrical circuitry forming a communications device (e.g., a modem, communications switch, optical-electrical equipment, etc.). Those having skill in the art will recognize that the subject matter described herein may be implemented in an analog or digital fashion or some combination thereof.

[00138] Those skilled in the art will recognize that at least a portion of the devices and/or processes described herein can be integrated into an image processing system. Those having skill in the art will recognize that a typical image processing system generally includes one or more of a system unit housing, a video display device, memory such as volatile or non-volatile memory, processors such as microprocessors or digital signal processors, computational entities such as operating systems, drivers, applications programs, one or more interaction devices (e.g., a touch pad, a touch screen, an antenna, etc.), control systems including feedback loops and control motors (e.g., feedback for sensing lens position and/or velocity; control motors for moving/distorting lenses to give desired focuses). An image processing system may be implemented utilizing suitable commercially available components, such as those typically found in digital still systems and/or digital motion systems.

[00139] Those skilled in the art will recognize that at least a portion of the devices and/or processes described herein can be integrated into a data processing system. Those having skill in the art will recognize that a data processing system generally includes one or more of a system unit housing, a video display device, memory such as volatile or nonvolatile memory, processors such as microprocessors or digital signal processors, computational entities such as operating systems, drivers, graphical user interfaces, and applications programs, one or more interaction devices (e.g., a touch pad, a touch screen, an antenna, etc.), and/or control systems including feedback loops and control motors (e.g., feedback for sensing position and/or velocity; control motors for moving and/or adjusting components and/or quantities). A data processing system may be implemented utilizing suitable commercially available components, such as those typically found in data computing/communication and/or network computing/communication systems.

[00140] Those skilled in the art will recognize that at least a portion of the devices and/or processes described herein can be integrated into a mote system. Those having skill in the art will recognize that a typical mote system generally includes one or more memories such as volatile or non-volatile memories, processors such as microprocessors or digital signal processors, computational entities such as operating systems, user interfaces, drivers, sensors, actuators, applications programs, one or more interaction devices (e.g., an antenna, USB ports, acoustic ports, etc.), and control systems including feedback loops and control motors (e.g., feedback for sensing or estimating position and/or velocity; control motors for moving and/or adjusting components and/or quantities). A mote system may be implemented utilizing suitable components, such as those found in mote computing/communication systems. Specific examples of such components include Intel Corporation's and/or Crossbow Corporation's mote components and supporting hardware, software, and/or firmware.

[00141] For the purposes of this application, "cloud" computing may be understood as described in the cloud computing literature. For example, cloud computing may be methods and/or systems for the delivery of computational capacity and/or storage capacity as a service. The "cloud" may refer to one or more hardware and/or software components that deliver or assist in the delivery of computational and/or storage capacity, including, but not limited to, one or more of a client, an application, a platform, an infrastructure, and/or a server. The cloud may refer to any of the hardware and/or software associated with a client, an application, a platform, an infrastructure, and/or a server. For example, cloud and cloud computing may refer to one or more of a computer, a processor, a storage medium, a router, a switch, a modem, a virtual machine (e.g., a virtual server), a data center, an operating system, a middleware, a firmware, a hardware back-end, a software back-end, and/or a software application. A cloud may refer to a private cloud, a public cloud, a hybrid cloud, and/or a community cloud. A cloud may be a shared pool of configurable computing resources, which may be public, private, semi-private, distributable, scalable, flexible, temporary, virtual, and/or physical. A cloud or cloud service may be delivered over one or more types of network, e.g., a mobile communication network, and the Internet.

[00142] As used in this application, a cloud or a cloud service may include one or more of infrastructure-as-a-service ("IaaS"), platform-as-a-service ("PaaS"), software-as-a-service ("SaaS"), and/or desktop-as-a-service ("DaaS"). As a non-exclusive example, IaaS may include, e.g., one or more virtual server instantiations that may start, stop, access, and/or configure virtual servers and/or storage centers (e.g., providing one or more processors, storage space, and/or network resources on-demand, e.g., EMC and Rackspace). PaaS may include, e.g., one or more software and/or development tools hosted on an infrastructure (e.g., a computing platform and/or a solution stack from which the client can create software interfaces and applications, e.g., Microsoft Azure). SaaS may include, e.g., software hosted by a service provider and accessible over a network (e.g., the software for the application and/or the data associated with that software application may be kept on the network, e.g., Google Apps, SalesForce). DaaS may include, e.g., providing desktop, applications, data, and/or services for the user over a network (e.g., providing a multi-application framework, the applications in the framework, the data associated with the applications, and/or services related to the applications and/or the data over the network, e.g., Citrix). The foregoing is intended to be exemplary of the types of systems and/or methods referred to in this application as "cloud" or "cloud computing" and should not be considered complete or exhaustive.

[00143] One skilled in the art will recognize that the herein described components (e.g., operations), devices, objects, and the discussion accompanying them are used as examples for the sake of conceptual clarity and that various configuration modifications are contemplated. Consequently, as used herein, the specific exemplars set forth and the accompanying discussion are intended to be representative of their more general classes. In general, use of any specific exemplar is intended to be representative of its class, and the non-inclusion of specific components (e.g., operations), devices, and objects should not be taken as limiting.

[00144] The herein described subject matter sometimes illustrates different components contained within, or connected with, different other components. It is to be understood that such depicted architectures are merely exemplary, and that in fact many other architectures may be implemented which achieve the same functionality. In a conceptual sense, any arrangement of components to achieve the same functionality is effectively "associated" such that the desired functionality is achieved. Hence, any two components herein combined to achieve a particular functionality can be seen as "associated with" each other such that the desired functionality is achieved, irrespective of architectures or intermedial components. Likewise, any two components so associated can also be viewed as being "operably connected", or "operably coupled," to each other to achieve the desired functionality, and any two components capable of being so associated can also be viewed as being "operably couplable," to each other to achieve the desired functionality. Specific examples of operably couplable include but are not limited to physically mateable and/or physically interacting components, and/or wirelessly interactable, and/or wirelessly interacting components, and/or logically interacting, and/or logically interactable components.

[00145] To the extent that formal outline headings are present in this application, it is to be understood that the outline headings are for presentation purposes, and that different types of subject matter may be discussed throughout the application (e.g., device(s)/structure(s) may be described under process(es)/operations heading(s) and/or process(es)/operations may be discussed under structure(s)/process(es) headings; and/or descriptions of single topics may span two or more topic headings). Hence, any use of formal outline headings in this application is for presentation purposes, and is not intended to be in any way limiting.

[00146] Throughout this application, examples and lists are given, with parentheses, the abbreviation "e.g.," or both. Unless explicitly otherwise stated, these examples and lists are merely exemplary and are non-exhaustive. In most cases, it would be prohibitive to list every example and every combination. Thus, smaller, illustrative lists and examples are used, with focus on imparting understanding of the claim terms rather than limiting the scope of such terms.

[00147] With respect to the use of substantially any plural and/or singular terms herein, those having skill in the art can translate from the plural to the singular and/or from the singular to the plural as is appropriate to the context and/or application. The various singular/plural permutations are not expressly set forth herein for sake of clarity.

[00148] One skilled in the art will recognize that the herein described components (e.g., operations), devices, objects, and the discussion accompanying them are used as examples for the sake of conceptual clarity and that various configuration modifications are contemplated. Consequently, as used herein, the specific exemplars set forth and the accompanying discussion are intended to be representative of their more general classes. In general, use of any specific exemplar is intended to be representative of its class, and the non-inclusion of specific components (e.g., operations), devices, and objects should not be taken as limiting.

[00149] Although one or more users may be shown and/or described herein, e.g., in Fig. 1, and other places, as a single illustrated figure, those skilled in the art will appreciate that one or more users may be representative of one or more human users, robotic users (e.g., computational entity), and/or substantially any combination thereof (e.g., a user may be assisted by one or more robotic agents) unless context dictates otherwise. Those skilled in the art will appreciate that, in general, the same may be said of "sender" and/or other entity-oriented terms as such terms are used herein unless context dictates otherwise.

[00150] In some instances, one or more components may be referred to herein as "configured to," "configured by," "configurable to," "operable/operative to," "adapted/adaptable," "able to," "conformable/conformed to," etc. Those skilled in the art will recognize that such terms (e.g., "configured to") generally encompass active-state components and/or inactive-state components and/or standby-state components, unless context requires otherwise.

This Roman Numeral Section, And the Corresponding Figures, Were Copied From a Pending United States Application into this PCT Application in View of Meeting Bar Date, So Each Figure Under this Roman Numeral Section Should Be Read "As If Having Roman Numeral Denotation Sufficient to Distinguish from Other Figures Copied From Other Applications in View of Bar Date," Such Clerical Issues to Be Cured by Subsequent Amendment

Devices, Methods, and Systems For Integrating Multiple User Video Imaging Array

DETAILED DESCRIPTION

[0011] In addition to the following, various other method and/or system and/or program product aspects are set forth and described in the teachings such as text (e.g., claims and/or detailed description) and/or drawings of the present disclosure.

[0012] The foregoing is a summary and thus may contain simplifications, generalizations, inclusions, and/or omissions of detail; consequently, those skilled in the art will appreciate that the summary is illustrative only and is NOT intended to be in any way limiting. Other aspects, features, and advantages of the devices and/or processes and/or other subject matter described herein will become apparent by reference to the detailed description, the corresponding drawings, and/or in the teachings set forth herein.

[0013] The logical operations/functions set forth in the present technical description are representative of static or sequenced specifications of various ordered-matter elements, in order that such specifications may be comprehensible to the human mind and adaptable to create many various hardware configurations. The logical operations/functions disclosed herein should be treated as such, and should not be disparagingly characterized as abstract ideas merely because the specifications they represent are presented in a manner that one of skill in the art can readily understand and apply in a manner independent of a specific vendor's hardware implementation.

[0014] Those having skill in the art will recognize that the state of the art has progressed to the point where there is little distinction left between hardware, software (e.g., a high-level computer program serving as a hardware specification), and/or firmware implementations of aspects of systems; the use of hardware, software, and/or firmware is generally (but not always, in that in certain contexts the choice between hardware and software can become significant) a design choice representing cost vs. efficiency tradeoffs. Those having skill in the art will appreciate that there are various vehicles by which processes and/or systems and/or other technologies described herein can be effected (e.g., hardware, software (e.g., a high-level computer program serving as a hardware specification), and/or firmware), and that the preferred vehicle will vary with the context in which the processes and/or systems and/or other technologies are deployed. For example, if an implementer determines that speed and accuracy are paramount, the implementer may opt for a mainly hardware and/or firmware vehicle; alternatively, if flexibility is paramount, the implementer may opt for a mainly software (e.g., a high-level computer program serving as a hardware specification) implementation; or, yet again alternatively, the implementer may opt for some combination of hardware, software (e.g., a high-level computer program serving as a hardware specification), and/or firmware in one or more machines, compositions of matter, and articles of manufacture, limited to patentable subject matter under 35 USC 101. Hence, there are several possible vehicles by which the processes and/or devices and/or other technologies described herein may be effected, none of which is inherently superior to the other in that any vehicle to be utilized is a choice dependent upon the context in which the vehicle will be deployed and the specific concerns (e.g., speed, flexibility, or predictability) of the implementer, any of which may vary. Those skilled in the art will recognize that optical aspects of implementations will typically employ optically-oriented hardware, software (e.g., a high-level computer program serving as a hardware specification), and/or firmware.

[0015] In some implementations described herein, logic and similar implementations may include computer programs or other control structures. Electronic circuitry, for example, may have one or more paths of electrical current constructed and arranged to implement various functions as described herein. In some implementations, one or more media may be configured to bear a device-detectable implementation when such media hold or transmit device detectable instructions operable to perform as described herein. In some variants, for example, implementations may include an update or modification of existing software (e.g., a high-level computer program serving as a hardware specification) or firmware, or of gate arrays or programmable hardware, such as by performing a reception of or a transmission of one or more instructions in relation to one or more operations described herein. Alternatively or additionally, in some variants, an implementation may include special-purpose hardware, software (e.g., a high-level computer program serving as a hardware specification), firmware components, and/or general-purpose components executing or otherwise invoking special-purpose components. Specifications or other implementations may be transmitted by one or more instances of tangible transmission media as described herein, optionally by packet transmission or otherwise by passing through distributed media at various times.

[0016] Alternatively or additionally, implementations may include executing a special-purpose instruction sequence or invoking circuitry for enabling, triggering, coordinating, requesting, or otherwise causing one or more occurrences of virtually any functional operation described herein. In some variants, operational or other logical descriptions herein may be expressed as source code and compiled or otherwise invoked as an executable instruction sequence. In some contexts, for example, implementations may be provided, in whole or in part, by source code, such as C++, or other code sequences. In other implementations, source or other code implementation, using commercially available and/or techniques in the art, may be compiled/implemented/translated/converted into a high-level descriptor language (e.g., initially implementing described technologies in C or C++ programming language and thereafter converting the programming language implementation into a logic-synthesizable language implementation, a hardware description language implementation, a hardware design simulation implementation, and/or other such similar mode(s) of expression). For example, some or all of a logical expression (e.g., computer programming language implementation) may be manifested as a Verilog-type hardware description (e.g., via Hardware Description Language (HDL) and/or Very High Speed Integrated Circuit Hardware Descriptor Language (VHDL)) or other circuitry model which may then be used to create a physical implementation having hardware (e.g., an Application Specific Integrated Circuit). Those skilled in the art will recognize how to obtain, configure, and optimize suitable transmission or computational elements, material supplies, actuators, or other structures in light of these teachings.

[0017] For example, a central processing unit of a personal computer may, at various times, operate as a module for displaying graphics on a screen, a module for writing data to a storage medium, a module for receiving user input, and a module for multiplying two large prime numbers, by configuring its logical gates in accordance with its instructions. Such reconfiguration may be invisible to the naked eye, and in some embodiments may include activation, deactivation, and/or re-routing of various portions of the component, e.g., switches, logic gates, inputs, and/or outputs. Thus, in the examples found in the foregoing/following disclosure, if an example includes or recites multiple modules, the example includes the possibility that the same hardware may implement more than one of the recited modules, either contemporaneously or at discrete times or timings. The implementation of multiple modules, whether using more components, fewer components, or the same number of components as the number of modules, is merely an implementation choice and does not generally affect the operation of the modules themselves. Accordingly, it should be understood that any recitation of multiple discrete modules in this disclosure includes implementations of those modules as any number of underlying components, including, but not limited to, a single component that reconfigures itself over time to carry out the functions of multiple modules, and/or multiple components that similarly reconfigure, and/or special purpose reconfigurable components.

[0018] Those skilled in the art will recognize that it is common within the art to implement devices and/or processes and/or systems, and thereafter use engineering and/or other practices to integrate such implemented devices and/or processes and/or systems into more comprehensive devices and/or processes and/or systems. That is, at least a portion of the devices and/or processes and/or systems described herein can be integrated into other devices and/or processes and/or systems via a reasonable amount of experimentation. Those having skill in the art will recognize that examples of such other devices and/or processes and/or systems might include, as appropriate to context and application, all or part of devices and/or processes and/or systems of (a) an air conveyance (e.g., an airplane, rocket, helicopter, etc.), (b) a ground conveyance (e.g., a car, truck, locomotive, tank, armored personnel carrier, etc.), (c) a building (e.g., a home, warehouse, office, etc.), (d) an appliance (e.g., a refrigerator, a washing machine, a dryer, etc.), (e) a communications system (e.g., a networked system, a telephone system, a Voice over IP system, etc.), (f) a business entity (e.g., an Internet Service Provider (ISP) entity such as Comcast Cable, Qwest, Southwestern Bell, etc.), or (g) a wired/wireless services entity (e.g., Sprint, Cingular, Nextel, etc.), etc.

[0019] In a general sense, those skilled in the art will recognize that the various embodiments described herein can be implemented, individually and/or collectively, by various types of electro-mechanical systems having a wide range of electrical components such as hardware, software, firmware, and/or virtually any combination thereof, limited to patentable subject matter under 35 U.S.C. 101; and a wide range of components that may impart mechanical force or motion such as rigid bodies, spring or torsional bodies, hydraulics, electro-magnetically actuated devices, and/or virtually any combination thereof. Consequently, as used herein "electro-mechanical system" includes, but is not limited to, electrical circuitry operably coupled with a transducer (e.g., an actuator, a motor, a piezoelectric crystal, a Micro Electro Mechanical System (MEMS), etc.), electrical circuitry having at least one discrete electrical circuit, electrical circuitry having at least one integrated circuit, electrical circuitry having at least one application specific integrated circuit, electrical circuitry forming a general purpose computing device configured by a computer program (e.g., a general purpose computer configured by a computer program which at least partially carries out processes and/or devices described herein, or a microprocessor configured by a computer program which at least partially carries out processes and/or devices described herein), electrical circuitry forming a memory device (e.g., forms of memory (e.g., random access, flash, read only, etc.)), electrical circuitry forming a communications device (e.g., a modem, communications switch, optical-electrical equipment, etc.), and/or any non-electrical analog thereto, such as optical or other analogs (e.g., graphene-based circuitry). Those skilled in the art will also appreciate that examples of electro-mechanical systems include but are not limited to a variety of consumer electronics systems, medical devices, as well as other systems such as motorized transport systems, factory automation systems, security systems, and/or communication/computing systems. Those skilled in the art will recognize that electro-mechanical as used herein is not necessarily limited to a system that has both electrical and mechanical actuation except as context may dictate otherwise.

[0020] In a general sense, those skilled in the art will recognize that the various aspects described herein, which can be implemented, individually and/or collectively, by a wide range of hardware, software, firmware, and/or any combination thereof, can be viewed as being composed of various types of "electrical circuitry." Consequently, as used herein "electrical circuitry" includes, but is not limited to, electrical circuitry having at least one discrete electrical circuit, electrical circuitry having at least one integrated circuit, electrical circuitry having at least one application specific integrated circuit, electrical circuitry forming a general purpose computing device configured by a computer program (e.g., a general purpose computer configured by a computer program which at least partially carries out processes and/or devices described herein, or a microprocessor configured by a computer program which at least partially carries out processes and/or devices described herein), electrical circuitry forming a memory device (e.g., forms of memory (e.g., random access, flash, read only, etc.)), and/or electrical circuitry forming a communications device (e.g., a modem, communications switch, optical-electrical equipment, etc.). Those having skill in the art will recognize that the subject matter described herein may be implemented in an analog or digital fashion or some combination thereof.

[0021] Referring now to Figs. 1-33, a large number of cameras already exist. In an embodiment, users can control the direction in which a remote camera is pointed. In an embodiment, control is not limited to one person at a time; implementations are contemplated in which many people try to move or otherwise control the camera (e.g., zoom, focus, etc.) at the same time.

[0022] In an embodiment, high pixel count image sensors may be used to capture high-resolution images of various scenes. Then, in an embodiment, at the camera level, the requested data can be extracted from the high-resolution image and sent to the user relatively inexpensively. Thus, unneeded or unviewed pixels may be stripped away.
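
As a minimal sketch of this camera-side pixel stripping, assuming a frame held as a NumPy array and a hypothetical Region type for the requested window (neither name comes from this application):

```python
# Hypothetical sketch: return only the pixels a requestor asked for, so the
# rest of the scene never leaves the camera. Region and strip_pixels are
# illustrative names, not terms from this application.
import numpy as np
from dataclasses import dataclass

@dataclass
class Region:
    x: int       # left edge of the requested window, in pixels
    y: int       # top edge of the requested window, in pixels
    width: int
    height: int

def strip_pixels(frame: np.ndarray, region: Region) -> np.ndarray:
    """Crop the requested window out of the full high-resolution frame."""
    return frame[region.y:region.y + region.height,
                 region.x:region.x + region.width].copy()

# Example: a large sensor frame, of which the user views only a small window.
frame = np.zeros((5360, 7152, 3), dtype=np.uint8)   # full captured scene
window = strip_pixels(frame, Region(x=1000, y=800, width=1280, height=720))
```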

[0023] Thus, in an embodiment, there is an evolution of detail, from a "map" view to a "satellite" view to a "street" view. In an embodiment, there will be a "real time" view of a place available by utilizing one or more cameras in the manner described herein.

[0024] In an embodiment, the system knows where the user is, knows what the user is pointing to, and knows what the user wants. It is constantly looking everywhere, and it extracts from that firehose of pixels by peeling off the pixels that haven't changed (that is step one). There may be a lot of overlap, so the same data is not sent multiple times; it is sent once, and onboard processing recognizes the redundancy and does everything it can to remove it. The camera may actually record everything on board into a memory, but in terms of what it sends to the user, it sends only the pixels that have changed. Other compression goes on there as well.
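
A minimal sketch of that first step, assuming frames arrive as NumPy arrays and using an illustrative tile size and change threshold (both are assumptions, not parameters from this application):

```python
# Sketch of "peel off the pixels that haven't changed": compare the current
# frame to the last transmitted frame tile by tile and yield only the tiles
# that differ. The 64-pixel tile and threshold of 8 are illustrative.
import numpy as np

def changed_tiles(prev: np.ndarray, curr: np.ndarray,
                  tile: int = 64, threshold: int = 8):
    """Yield (y, x, tile_data) for each tile whose content changed."""
    h, w = curr.shape[:2]
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            a = prev[y:y + tile, x:x + tile].astype(np.int16)
            b = curr[y:y + tile, x:x + tile].astype(np.int16)
            if np.abs(b - a).max() > threshold:   # this tile changed: send it
                yield y, x, curr[y:y + tile, x:x + tile]
```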

[0025] In an embodiment, one or more devices, e.g., in the cloud or network, perform the image stitching, image orientation, and handling of geographic and geolocation information. This can be used as a kind of live video overlay of things that are happening in the world, e.g., people tracking cars of a certain color or persons of a certain height, or trigger events could make it work. For example, with a bear cam, or on the space station, you might want to superimpose information on top of the feed. It could work the same way Google Maps works, where "advertising" is shown superimposed (note: this would be a separate application), but fundamentally the idea is to allow multiple users to use the big camera at the same time.

[0026] Referring to Fig. 16, in an embodiment, the system may send stored video at off-peak times, so if it is capturing a live image, it will compare that live image to whatever it already has on board, with the goal of minimizing the back-end requirement.

[0027] In an embodiment, the camera is taking images all the time; it is taking data all the time and pumping down full images (since it is off-peak and no one is using the link), so some of those images are on board. It will compare a new capture to what was previously captured and then send the difference. So the cloud gets the difference and recombines (or updates), and the cloud can also add data.
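
One way to read this, sketched below under illustrative assumptions (NumPy frames, a pixel-exact comparison, and hypothetical function names), is a sparse delta sent by the camera and applied by the cloud:

```python
# Illustrative sketch: the camera sends a sparse delta against a baseline the
# cloud already holds; the cloud recombines. camera_send and cloud_recombine
# are hypothetical names; a real system would also compress the delta.
import numpy as np

def camera_send(baseline: np.ndarray, live: np.ndarray):
    """Return coordinates and values of pixels that differ from the baseline."""
    mask = np.any(baseline != live, axis=-1)     # assumes H x W x channels
    ys, xs = np.nonzero(mask)
    return ys, xs, live[ys, xs]

def cloud_recombine(baseline: np.ndarray, delta) -> np.ndarray:
    """Apply the received delta to the cloud's copy of the baseline."""
    ys, xs, values = delta
    updated = baseline.copy()
    updated[ys, xs] = values                     # recombined (updated) image
    return updated
```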

[0028] In an embodiment, in a sports or other setting, multiple cameras could be used to allow users to watch a sporting event from any angle they wanted, or to focus on any player or performer they wanted.

[0029] In an embodiment, the aim is to collapse the huge fire hose of pixels and manage it at the front end, so as to take advantage of inexpensive memory in the face of expensive data transmission.

[0030] Other Notes

[0031] One point: in an embodiment, cameras capture images at bit densities at the absolute limits of current technologies and with <= 350 degrees field of view.

[0032] Another point, in an embodiment: if the device doesn't accept a certain video format, there is no reason to try to send that format; what we are talking about here is a live video feed.

[0033] Another point, in an embodiment: imagine at least one capturing camera such as described, but accepting an input signal, e.g., from a user device (e.g., a tablet) half a world away, where the signal from the user device selects only a part of the available 360-degree field of view.

[0034] Another point, in an embodiment: In addition to or in the alternative to the above, imagine the at least one capturing camera accepting at least one further downselecting input of at least one of a data rate limit, a display resolution, a processor speed, or a memory of, e.g., the user device (e.g., a mobile phone) half a world away, and the capturing camera further reducing a data set in response to the at least one further downselecting input. For instance, this yields a user-device-specific baseline data set which would result in no or very little wastage of bits or power if somehow magically transported to the user device half a world away and displayed there.
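
A hedged sketch of computing such a user-device-specific baseline; the DeviceHints fields mirror the downselecting inputs named above, while the frame rate and the bit-budget arithmetic are illustrative assumptions:

```python
# Hedged sketch of a device-specific baseline: never send more pixels than
# the display can show, and fit frames inside the device's data-rate limit.
# The 30 fps default and the bit-budget arithmetic are assumptions.
from dataclasses import dataclass

@dataclass
class DeviceHints:
    display_w: int   # display resolution, pixels
    display_h: int
    max_bps: int     # data rate limit, bits per second

def baseline_params(hints: DeviceHints, fps: float = 30.0):
    """Pick an output size and a per-pixel bit budget the device can use."""
    out_w, out_h = hints.display_w, hints.display_h
    bits_per_frame = hints.max_bps / fps
    bits_per_pixel = bits_per_frame / (out_w * out_h)
    return out_w, out_h, bits_per_pixel

# e.g., a phone half a world away: 1280x720 display on a 5 Mbit/s link.
print(baseline_params(DeviceHints(display_w=1280, display_h=720, max_bps=5_000_000)))
```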

[0035] Another point, in an embodiment: Also note that in some instances the user device might also have specified some zoom available at the capturing camera.

[0036] Another point, in an embodiment: In some instances, smart cameras, or retrofitted aftermarket add-ons to pre-installed cameras, do the heavy lifting so that very little wastage in bits or power is experienced over the backhaul.

[0037] Another point, in an embodiment: capturing pixels is getting cheaper all the time, and some of the problems this application addresses are in the backhaul bottleneck; in some implementations, if you want to be inexpensive, you may be limited to a few tens of megabits per second.

[0038] Another point, in an embodiment: Although it may sound counterintuitive, we may intentionally remove bits from the output of potentially very expensive (relatively speaking) cameras that others have purchased precisely for the purpose of producing such bit-rich images. We are instead talking about adjusting the output to only what is appropriate to the end user's device/display.

[0039] Another point, in an embodiment: as often as we can, we want to work from a baseline set of data as described and transmit only what has changed, insofar as the changes might be practicably useful to a human or another piece of automation.

[0040] Another point, in an embodiment: So in some sense, in some of our technologies we can refer to a "proximate-device field of view" of the at least one capturing imager, which becomes sized to the backhaul limit, device limit, or data plan limit of the user device, as viewed from the standpoint of the transmission device (e.g., the capturing camera), irrespective of the device.

[0041] Another point, in an embodiment: a Lumia might extract from images and then peel off unusable pixels that haven't changed. The goal may be to send only the difference.

[0042] Another point, in an embodiment: Overlap between user 1 and user 2 may provide efficiencies in selecting useable/used pixels. Only sensed pixels that have changed between images are sent and, for example, stitched together.

[0043] Another point, in an embodiment: If you zoom in on where the user is looking, one useful idea is to allow multiple users to use the camera at the same time. Images are kept onboard and sent to the cloud, and only the difference is sent, with one goal being to minimize the backhaul environment/load.
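
To illustrate the overlap efficiency of [0042]-[0043], here is a sketch, under the assumption of rectangular per-user windows and an illustrative 64-pixel tile grid, in which each tile is captured and encoded once no matter how many users' windows cover it:

```python
# Sketch: map each tile of the scene to the set of users whose requested
# windows cover it, so every tile is captured and encoded only once.
# Rectangles are (x0, y0, x1, y1) with exclusive right/bottom edges.
from collections import defaultdict

def tiles_for(rect, tile=64):
    x0, y0, x1, y1 = rect
    for ty in range(y0 // tile, (y1 - 1) // tile + 1):
        for tx in range(x0 // tile, (x1 - 1) // tile + 1):
            yield (ty, tx)

def plan_capture(requests: dict, tile=64):
    """Return {tile: set_of_users}; overlapping tiles appear exactly once."""
    fanout = defaultdict(set)
    for user, rect in requests.items():
        for t in tiles_for(rect, tile):
            fanout[t].add(user)
    return fanout

plan = plan_capture({"user1": (0, 0, 1280, 720),
                     "user2": (640, 360, 1920, 1080)})
```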

[0044] Another point, in an embodiment: This could be deployed as a service, so that an entity can implement the infrastructure and sell imaging services.

[0045] Another point, in an embodiment: "Recombine": the camera is taking images all the time, and during off-peak time and overnight it is pumping down full images as best it can, so it has some of those images on board and knows which images it has already sent to the cloud. When the cloud gets the difference, it can, in an embodiment, add geolocation or other image data (including advertisements and/or other targeted data, based on a variety of different factors), and that is your recombined image.

[0046] While particular aspects of the present subject matter described herein have been shown and described, it will be apparent to those skilled in the art that, based upon the teachings herein, changes and modifications may be made without departing from the subject matter described herein and its broader aspects and, therefore, the appended claims are to encompass within their scope all such changes and modifications as are within the true spirit and scope of the subject matter described herein. It will be understood by those within the art that, in general, terms used herein, and especially in the appended claims (e.g., bodies of the appended claims) are generally intended as "open" terms (e.g., the term "including" should be interpreted as "including but not limited to," the term "having" should be interpreted as "having at least," the term "includes" should be interpreted as "includes but is not limited to," etc.).

[0047] It will be further understood by those within the art that if a specific number of an introduced claim recitation is intended, such an intent will be explicitly recited in the claim, and in the absence of such recitation no such intent is present. For example, as an aid to understanding, the following appended claims may contain usage of the introductory phrases "at least one" and "one or more" to introduce claim recitations. However, the use of such phrases should not be construed to imply that the introduction of a claim recitation by the indefinite articles "a" or "an" limits any particular claim containing such introduced claim recitation to claims containing only one such recitation, even when the same claim includes the introductory phrases "one or more" or "at least one" and indefinite articles such as "a" or "an" (e.g., "a" and/or "an" should typically be interpreted to mean "at least one" or "one or more"); the same holds true for the use of definite articles used to introduce claim recitations. In addition, even if a specific number of an introduced claim recitation is explicitly recited, those skilled in the art will recognize that such recitation should typically be interpreted to mean at least the recited number (e.g., the bare recitation of "two recitations," without other modifiers, typically means at least two recitations, or two or more recitations).

[0048] Furthermore, in those instances where a convention analogous to "at least one of A, B, and C, etc." is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B, and C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.). In those instances where a convention analogous to "at least one of A, B, or C, etc." is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B, or C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.). It will be further understood by those within the art that typically a disjunctive word and/or phrase presenting two or more alternative terms, whether in the description, claims, or drawings, should be understood to contemplate the possibilities of including one of the terms, either of the terms, or both terms unless context dictates otherwise. For example, the phrase "A or B" will be typically understood to include the possibilities of "A" or "B" or "A and B."

[0049] With respect to the appended claims, those skilled in the art will appreciate that recited operations therein may generally be performed in any order. Also, although various operational flows are presented in a sequence(s), it should be understood that the various operations may be performed in other orders than those which are illustrated, or may be performed concurrently. Examples of such alternate orderings may include overlapping, interleaved, interrupted, reordered, incremental, preparatory, supplemental, simultaneous, reverse, or other variant orderings, unless context dictates otherwise. Furthermore, terms like "responsive to," "related to," or other past-tense adjectives are generally not intended to exclude such variants, unless context dictates otherwise.

[0050] This application may make reference to one or more trademarks, e.g., a word, letter, symbol, or device adopted by one manufacturer or merchant and used to identify and/or distinguish his or her product from those of others. Trademark names used herein are set forth in such language that makes clear their identity, that distinguishes them from common descriptive nouns, that have fixed and definite meanings, or, in many if not all cases, are accompanied by other specific identification using terms not covered by trademark. In addition, trademark names used herein have meanings that are well-known and defined in the literature, or do not refer to products or compounds for which knowledge of one or more trade secrets is required in order to divine their meaning. All trademarks referenced in this application are the property of their respective owners, and the appearance of one or more trademarks in this application does not diminish or otherwise adversely affect the validity of the one or more trademarks. All trademarks, registered or unregistered, that appear in this application are assumed to include a proper trademark symbol, e.g., the circle R or bracketed capitalization (e.g., [trademark name]), even when such trademark symbol does not explicitly appear next to the trademark. To the extent a trademark is used in a descriptive manner to refer to a product or process, that trademark should be interpreted to represent the corresponding product or process as of the date of the filing of this patent application.

[0051] Throughout this application, the terms "in an embodiment," "in one embodiment," "in several embodiments," "in at least one embodiment," "in various embodiments," and the like, may be used. Each of these terms, and all such similar terms, should be construed as "in at least one embodiment, and possibly but not necessarily all embodiments," unless explicitly stated otherwise. Specifically, unless explicitly stated otherwise, the intent of phrases like these is to provide non-exclusive and non-limiting examples of implementations of the invention. The mere statement that one, some, or many embodiments include one or more things or have one or more features, does not imply that all embodiments include one or more things or have one or more features, but also does not imply that such embodiments must exist. It is a mere indicator of an example and should not be interpreted otherwise, unless explicitly stated as such.

[0052] Those skilled in the art will appreciate that the foregoing specific exemplary processes and/or devices and/or technologies are representative of more general processes and/or devices and/or technologies taught elsewhere herein, such as in the claims filed herewith and/or elsewhere in the present application.

1. A thing/operation disclosure comprising:

a method substantially as shown and described in the detailed description and/or drawings and/or elsewhere herein.

2. A thing/operation disclosure comprising:

a device substantially as shown and described in the detailed description and/or drawings and/or elsewhere herein.

3. A thing/operation disclosure comprising:

a method including one or more of the modules and/or functions described in the detailed description and/or drawings and/or elsewhere herein.

4. A thing/operation disclosure comprising:

a device implementing one or more of the devices, stores, and/or interfaces described in the detailed description and/or drawings and/or elsewhere herein.

5. A computationally-implemented thing/operation disclosure, comprising:

first means for carrying out one or more first steps as shown and described in the detailed description and/or drawings and/or elsewhere herein; and

second means for carrying out one or more second steps as shown and described in the detailed description and/or drawings and/or elsewhere herein.

6. A computationally-implemented thing/operation disclosure, comprising:

circuitry for carrying out one or more first steps as shown and described in the detailed description and/or drawings and/or elsewhere herein.

7. A thing/operation disclosure, comprising:

a signal-bearing medium bearing:

one or more instructions for configuration as one or more of the modules and/or functions described in the detailed description and/or drawings and/or elsewhere herein.

8. A thing/operation disclosure defined by a computational language, comprising:

one or more interchained physical machines ordered as one or more of the modules and/or functions described in the detailed description and/or drawings and/or elsewhere herein.

9. A thing/operation disclosure, comprising:

one or more general purpose integrated circuits configured to receive instructions to configure as one or more of the modules and/or functions described in the detailed description and/or drawings and/or elsewhere herein.

10. A thing/operation disclosure comprising:

an integrated circuit configured to purpose itself as one or more of the modules and/or functions described in the detailed description and/or drawings and/or elsewhere herein.

11. A thing/operation disclosure, comprising:

one or more elements of programmable hardware programmed to function as one or more of the modules and/or functions described in the detailed description and/or drawings and/or elsewhere herein.

This Roman Numeral Section, And the Corresponding Figures, Were Copied From a Pending United States Application into this PCT Application in View of Meeting Bar Date, So Each Figure Under this Roman Numeral Section Should Be Read "As If Having Roman Numeral Denotation Sufficient to Distinguish from Other Figures Copied From Other Applications in View of Bar Date," Such Clerical Issues to Be Cured by Subsequent Amendment

Devices, Methods, and Systems for Implementation of Multiple User Video Imaging Array (MUVIA)

DETAILED DESCRIPTION

[0010] Session - November 4, 2014

[0011] Federated cameras:

[0012] - Multiple User Video Imaging Array (MUVIA)

[0013] - Applications with GoPro, Mobileye, and other possible $10B-valuation companies.

[0014] - Skybox, estimated/possible $0.5B sale to Google.

[0015] Inexpensive satellites to deliver 2-3 meter resolution.

[0016] High leap - augmented vision with a head mounted display.

[0017] Deliver real time video the same way a cellular network delivers to multiple users at the same time. Could have stations carrying out a similar function as cell towers if useful.

[0018] Pixels are cheap - give multiple users the ability to use parts of a field of view. Backhaul will restrict you. Local storage may be inexpensive relative to transmission costs. Is there value in delivering live, happening-now images?

[0019] If you are going to use a bunch of high resolution cameras, the problem is going to be dumping insane amounts of data. How do you manage the backhaul without overwhelming the system?

[0020] Oculus Rift: if you had a camera view, could you wander around Pike Street market and do virtual tourism? Like Street View, but instead of static images, get a more seamless video experience.

[0021] The difficulty is going to be how to handle massive amounts of data, or not need the data.

[0022] What to do about dark places? Synthesize digital model, static images that are pregenerated.

[0023] If you know the user's resolution, and that can be determined based on the device or the window, then you can set up a system where you may avoid sending them more data than they can use/want.

[0024] Want to figure out what the user really wants to see - so you don't send a bunch of image data that doesn't matter to the user.

[0025] Examples could be seen in, e.g., Fishmonger theater. Dropcam.

[0026] Other examples include getting demographics of drivers of different vehicles. Shoppers. Who are the people who buy X product at X time of day? What will be the cool hat or cool shoes this year, based on what you see people wearing on the street? Inventory management. Behavioral model. New York, New Jersey, Florida, California for shoes. Malibu and outward for swimwear.

[0027] Further examples include watching waves of first adopters to see what will be popular. It's a similar system to the data mining that Google uses to get data from searches, or that Amazon gets from page views/purchases.

[0028] More examples include seeing who is looked at most - a woman looked at most as an indication of popularity. Resolution, revisit rate. But is it because they look odd or strange, or is it an indicator of popularity? A system could determine and filter this type of thing.

[0029] How is this implemented? Emotion engine.

[0030] You can generate metrics based on your "scarce" resource, with scarce being relative or absolute depending on the application. For example, if bandwidth is your scarce resource, then that is the metric that you use; for example, evaluate how much bandwidth is used to look at X product.
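
A minimal sketch of that kind of metering, assuming bandwidth is the scarce resource and using hypothetical subject labels:

```python
# Sketch of metering a scarce resource: accumulate the bits transmitted per
# subject of interest so that, e.g., bandwidth spent looking at product X can
# be compared across subjects. Subject labels here are hypothetical.
from collections import Counter

bandwidth_by_subject = Counter()

def record_transmission(subject: str, payload_bits: int) -> None:
    bandwidth_by_subject[subject] += payload_bits

record_transmission("product-X", 2_400_000)
record_transmission("product-Y", 300_000)
print(bandwidth_by_subject.most_common(1))   # subject consuming the most bandwidth
```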

[0031] Historical - In an embodiment, you may want to go back in time to identify the first 100 people to adopt what became the next trend. There may be a data storage question: how to organize the data for storage and sorting, etc. Anonymous vs. a category of individual based on appearance. Maybe get ID from a loyalty card or other opt-in.

[0032] Hardware: Nokia 1020, 43 megapixel, $30-$40, 6:1 digital zoom, lossless; could have an optical zoom in addition. In an example, use a PureView image. Put preprocessing in memory that is tailored to what you want to do. In an implementation, you can filter out stuff that doesn't change from one frame to the next, which gets you factors-of-ten efficiency savings. Frame-to-frame comparison: disregard areas that you don't care about. Store locally for ten seconds, and dump if no one asks for it.
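
A minimal sketch of the "store locally for ten seconds, and dump if no one asks for it" idea, assuming timestamps in seconds and a simple time-based eviction (the class and method names are illustrative):

```python
# Sketch of a ten-second local store: frames older than the horizon are
# dumped unless a request arrives for them first. Names are illustrative.
from collections import deque

class ShortTermStore:
    def __init__(self, horizon_s: float = 10.0):
        self.horizon_s = horizon_s
        self.frames = deque()                    # (timestamp, frame) pairs

    def add(self, timestamp: float, frame) -> None:
        self.frames.append((timestamp, frame))
        # Evict anything older than the horizon that no one asked for.
        while self.frames and timestamp - self.frames[0][0] > self.horizon_s:
            self.frames.popleft()

    def fetch(self, since: float) -> list:
        """Serve a late request for frames still inside the horizon."""
        return [f for t, f in self.frames if t >= since]
```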

[0033] Speaker: Don't often do lossless compression, often do slightly lossy compression. When do you interpolate? If looking for general view, not detailed view, may only get hi res image every 50 frames. What I store to go backwards is generally low res.

[0034] One: depending on history or prediction, the system may adjust the processing algorithm. It may do more processing as you watch people flow out of the clink, or, at night when nothing usually happens, it may want to use high res on any event that does happen. Dynamically modifiable processing algorithm.

[0035] Two: group cameras - share processing, stitch together images, construct a 3D image, acoustically locate areas where you want to store images longer, allocate more bandwidth, and devote more resources to them in general.

[0036] Invention areas:

[0037] Speaker: want to solve the latency problem - want 1-second latency.

[0038] Augmented vision

[0039] Latency, especially for gaming

[0040] Local high-speed pixel stripping

[0041] Digital zooming and panning

[0042] Camera to camera resource management (e.g., processing and power)

[0043] 3D depth perception using camera overlap

[0044] User management (e.g., image throttling)

[0045] Image networking

[0046] Balance cloud/local resources

[0047] Local spotlight zooming

[0048] Glare and keeping optical surfaces clean

[0049] 10-20 megabits per second on wifi is the best you can do in the developed world. Speaker says that should be enough.

[0050] Speaker: Resolution is independent of the field of view.

[0051] Speaker: going to want to have a large fraction of the data and then do some storage/processing.

[0052] Speaker: only want to transmit the minimum amount from the camera module.

[0053] Speaker: if you have something that is generating 100 gigabits per second, don't want to choke it down to 10 megabits per second.

[0054] Speaker: proposing a lot of data stripping at the camera.

[0055] Speaker: always going to be a trade - no matter how much smarts you have in the camera box - still value in sending more data to storage/higher processing.

[0056] How much compressing, choosing what you want to see at the camera.

[0057] Speaker: if pixels are free what is the next limiting thing: bandwidth, latency.

[0058] Speaker: suppose some customers don't care about latency. Want a whole day's worth of images. That can be sent at slow rates at off-peak hours.

[0059] Live street view.

[0060] Parents could watch their kids, or a bot could watch them and send alarms if they go somewhere out of bounds.

[0061] Speaker: Invention: Tell other cameras in the neighborhood that I, the camera, am looking at x, and can you help me get images of x. High res, other improved images over what I can get.

[0062] Speaker: the question is whether there is something we have that is uniquely marketable, like preprocessing modules or a business model; is there anything about the modules that could be unique? Is there something that's different? 360 view from any location? Solar power?

[0063] Speaker: Invention: smart compression in the camera which takes info from downstream users as to what's interesting and alters its compression and storage based on that.

[0064] Speaker: multi-user version of that.

[0065] Autocropping, especially in the video world. Maybe some of that is done around faces. Use a stylus to circle an area of interest.

[0066] Strange that there is no suggested Photoshop-and-crop at acquisition of the image, especially in video.

[0067] Speaker: new GoPro camera:

[0068] Setting fiducials on an object moving across different cameras. When the camera picks up an object of interest, put fiducial on digital image to track it.

[0069] ***Focus on a specific field of view that many users want, hi-res at the camera, degrading other views.

[0070] Smarter local triggering from Dropcams based on face recognition. Now triggering happens only on a change in pixels. Dropcam doesn't scale; too much data for Comcast to handle.

[0071] Solve bandwidth on the camera itself.

[0072] Speaker: could spend more time on local architectures, memory, etc.

[0073] How do you tell a system like this what you are interested in? An API that says I'm interested in people or shoes or collisions, a menu of different things. Maybe ways that are smarter than a menu, like a scripting language. You open up a little of the programming and the pixels, and we will provide access to raw data. Maybe you have to pay a premium for the bandwidth, power, and memory that you are consuming.
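
As a sketch of what such an interest-registration API might look like, with entirely hypothetical names (Subscription, subscribe) and a bandwidth budget standing in for the premium mentioned above:

```python
# Hypothetical interest-registration API: a client subscribes to subjects
# from a menu, with a bandwidth budget standing in for the premium tied to
# the bandwidth/power/memory it consumes. All names here are invented.
from dataclasses import dataclass

@dataclass
class Subscription:
    user: str
    subjects: set           # e.g., {"people", "shoes", "collisions"}
    budget_bits_per_s: int  # how much backhaul this subscriber may consume

registry = []

def subscribe(user: str, subjects: set, budget_bits_per_s: int) -> Subscription:
    sub = Subscription(user, subjects, budget_bits_per_s)
    registry.append(sub)
    return sub

subscribe("analyst-1", {"shoes", "collisions"}, budget_bits_per_s=2_000_000)
```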

[0074] Local training - e.g., squirrel events in a certain time window.

[0075] Circle what you are interested in learning. I typically look for x at a certain time every day.

1. A thing/operation disclosure comprising:

a method substantially as shown and described in the detailed description and/or drawings and/or elsewhere herein.

2. A thing/operation disclosure comprising:

a device substantially as shown and described in the detailed description and/or drawings and/or elsewhere herein.

3. A thing/operation disclosure comprising:

a method including one or more of the modules and/or functions described in the detailed description and/or drawings and/or elsewhere herein.

4. A thing/operation disclosure comprising:

a device implementing one or more of the devices, stores, and/or interfaces described in the detailed description and/or drawings and/or elsewhere herein.

5. A computationally-implemented thing/operation disclosure, comprising:

first means for carrying out one or more first steps as shown and described in the detailed description and/or drawings and/or elsewhere herein; and

second means for carrying out one or more second steps as shown and described in the detailed description and/or drawings and/or elsewhere herein.

6. A computationally-implemented thing/operation disclosure, comprising:

circuitry for carrying out one or more first steps as shown and described in the detailed description and/or drawings and/or elsewhere herein.

7. A thing/operation disclosure, comprising:

a signal-bearing medium bearing:

one or more instructions for configuration as one or more of the modules and/or functions described in the detailed description and/or drawings and/or elsewhere herein.

8. A thing/operation disclosure defined by a computational language, comprising:

one or more interchained physical machines ordered as one or more of the modules and/or functions described in the detailed description and/or drawings and/or elsewhere herein.

9. A thing/operation disclosure, comprising:

one or more general purpose integrated circuits configured to receive instructions to configure as one or more of the modules and/or functions described in the detailed description and/or drawings and/or elsewhere herein.

10. A thing/operation disclosure comprising:

an integrated circuit configured to purpose itself as one or more of the modules and/or functions described in the detailed description and/or drawings and/or elsewhere herein.

11. A thing/operation disclosure, comprising:

one or more elements of programmable hardware programmed to function as one or more of the modules and/or functions described in the detailed description and/or drawings and/or elsewhere herein.

This Roman Numeral Section, And the Corresponding Figures, Were Copied From a Pending United States Application into this PCT Application in View of Meeting Bar Date, So Each Figure Under this Roman Numeral Section Should Be Read "As If Having Roman Numeral Denotation Sufficient to Distinguish from Other Figures Copied From Other Applications in View of Bar Date," Such Clerical Issues to Be Cured by Subsequent Amendment

Devices, Methods, and Systems For Integrating Multiple User Access Camera Array

DETAILED DESCRIPTION

[0011] In addition to the following, various other method and/or system and/or program product aspects are set forth and described in the teachings such as text (e.g., claims and/or detailed description) and/or drawings of the present disclosure.

[0012] The foregoing is a summary and thus may contain simplifications, generalizations, inclusions, and/or omissions of detail; consequently, those skilled in the art will appreciate that the summary is illustrative only and is NOT intended to be in any way limiting. Other aspects, features, and advantages of the devices and/or processes and/or other subject matter described herein will become apparent by reference to the detailed description, the corresponding drawings, and/or in the teachings set forth herein.

[0013] The logical operations/functions set forth in the present technical description are representative of static or sequenced specifications of various ordered-matter elements, in order that such specifications may be comprehensible to the human mind and adaptable to create many various hardware configurations. The logical operations/functions disclosed herein should be treated as such, and should not be disparagingly characterized as abstract ideas merely because the specifications they represent are presented in a manner that one of skill in the art can readily understand and apply in a manner independent of a specific vendor's hardware implementation.

[0014] Those having skill in the art will recognize that the state of the art has progressed to the point where there is little distinction left between hardware, software (e.g., a high-level computer program serving as a hardware specification), and/or firmware implementations of aspects of systems; the use of hardware, software, and/or firmware is generally (but not always, in that in certain contexts the choice between hardware and software can become significant) a design choice representing cost vs. efficiency tradeoffs. Those having skill in the art will appreciate that there are various vehicles by which processes and/or systems and/or other technologies described herein can be effected (e.g., hardware, software (e.g., a high-level computer program serving as a hardware specification), and/or firmware), and that the preferred vehicle will vary with the context in which the processes and/or systems and/or other technologies are deployed. For example, if an implementer determines that speed and accuracy are paramount, the implementer may opt for a mainly hardware and/or firmware vehicle; alternatively, if flexibility is paramount, the implementer may opt for a mainly software (e.g., a high-level computer program serving as a hardware specification) implementation; or, yet again alternatively, the implementer may opt for some combination of hardware, software (e.g., a high-level computer program serving as a hardware specification), and/or firmware in one or more machines, compositions of matter, and articles of manufacture, limited to patentable subject matter under 35 USC 101. Hence, there are several possible vehicles by which the processes and/or devices and/or other technologies described herein may be effected, none of which is inherently superior to the other in that any vehicle to be utilized is a choice dependent upon the context in which the vehicle will be deployed and the specific concerns (e.g., speed, flexibility, or predictability) of the implementer, any of which may vary. Those skilled in the art will recognize that optical aspects of implementations will typically employ optically-oriented hardware, software (e.g., a high-level computer program serving as a hardware specification), and/or firmware.

[0015] In some implementations described herein, logic and similar implementations may include computer programs or other control structures. Electronic circuitry, for example, may have one or more paths of electrical current constructed and arranged to implement various functions as described herein. In some implementations, one or more media may be configured to bear a device-detectable implementation when such media hold or transmit device detectable instructions operable to perform as described herein. In some variants, for example, implementations may include an update or modification of existing software (e.g., a high-level computer program serving as a hardware specification) or firmware, or of gate arrays or programmable hardware, such as by performing a reception of or a transmission of one or more instructions in relation to one or more operations described herein. Alternatively or additionally, in some variants, an implementation may include special-purpose hardware, software (e.g., a high-level computer program serving as a hardware specification), firmware components, and/or general-purpose components executing or otherwise invoking special-purpose components. Specifications or other implementations may be transmitted by one or more instances of tangible transmission media as described herein, optionally by packet transmission or otherwise by passing through distributed media at various times.

[0016] Alternatively or additionally, implementations may include executing a special-purpose instruction sequence or invoking circuitry for enabling, triggering, coordinating, requesting, or otherwise causing one or more occurrences of virtually any functional operation described herein. In some variants, operational or other logical descriptions herein may be expressed as source code and compiled or otherwise invoked as an executable instruction sequence. In some contexts, for example, implementations may be provided, in whole or in part, by source code, such as C++, or other code sequences. In other implementations, source or other code implementation, using commercially available and/or techniques in the art, may be compiled/implemented/translated/converted into a high-level descriptor language (e.g., initially implementing described technologies in C or C++ programming language and thereafter converting the programming language implementation into a logic-synthesizable language implementation, a hardware description language implementation, a hardware design simulation implementation, and/or other such similar mode(s) of expression). For example, some or all of a logical expression (e.g., computer programming language implementation) may be manifested as a Verilog-type hardware description (e.g., via Hardware Description Language (HDL) and/or Very High Speed Integrated Circuit Hardware Descriptor Language (VHDL)) or other circuitry model which may then be used to create a physical implementation having hardware (e.g., an Application Specific Integrated Circuit). Those skilled in the art will recognize how to obtain, configure, and optimize suitable transmission or computational elements, material supplies, actuators, or other structures in light of these teachings.

[0017] For example, a central processing unit of a personal computer may, at various times, operate as a module for displaying graphics on a screen, a module for writing data to a storage medium, a module for receiving user input, and a module for multiplying two large prime numbers, by configuring its logical gates in accordance with its instructions. Such reconfiguration may be invisible to the naked eye, and in some embodiments may include activation, deactivation, and/or re-routing of various portions of the component, e.g., switches, logic gates, inputs, and/or outputs. Thus, in the examples found in the foregoing/following disclosure, if an example includes or recites multiple modules, the example includes the possibility that the same hardware may implement more than one of the recited modules, either contemporaneously or at discrete times or timings. The implementation of multiple modules, whether using more components, fewer components, or the same number of components as the number of modules, is merely an implementation choice and does not generally affect the operation of the modules themselves. Accordingly, it should be understood that any recitation of multiple discrete modules in this disclosure includes implementations of those modules as any number of underlying components, including, but not limited to, a single component that reconfigures itself over time to carry out the functions of multiple modules, and/or multiple components that similarly reconfigure, and/or special purpose reconfigurable components.

[0018] Those skilled in the art will recognize that it is common within the art to implement devices and/or processes and/or systems, and thereafter use engineering and/or other practices to integrate such implemented devices and/or processes and/or systems into more comprehensive devices and/or processes and/or systems. That is, at least a portion of the devices and/or processes and/or systems described herein can be integrated into other devices and/or processes and/or systems via a reasonable amount of experimentation. Those having skill in the art will recognize that examples of such other devices and/or processes and/or systems might include, as appropriate to context and application, all or part of devices and/or processes and/or systems of (a) an air conveyance (e.g., an airplane, rocket, helicopter, etc.), (b) a ground conveyance (e.g., a car, truck, locomotive, tank, armored personnel carrier, etc.), (c) a building (e.g., a home, warehouse, office, etc.), (d) an appliance (e.g., a refrigerator, a washing machine, a dryer, etc.), (e) a communications system (e.g., a networked system, a telephone system, a Voice over IP system, etc.), (f) a business entity (e.g., an Internet Service Provider (ISP) entity such as Comcast Cable, Qwest, Southwestern Bell, etc.), or (g) a wired/wireless services entity (e.g., Sprint, Cingular, Nextel, etc.), etc.

[0019] In a general sense, those skilled in the art will recognize that the various embodiments described herein can be implemented, individually and/or collectively, by various types of electro-mechanical systems having a wide range of electrical components such as hardware, software, firmware, and/or virtually any combination thereof, limited to patentable subject matter under 35 U.S.C. 101; and a wide range of components that may impart mechanical force or motion such as rigid bodies, spring or torsional bodies, hydraulics, electro-magnetically actuated devices, and/or virtually any combination thereof. Consequently, as used herein "electro-mechanical system" includes, but is not limited to, electrical circuitry operably coupled with a transducer (e.g., an actuator, a motor, a piezoelectric crystal, a Micro Electro Mechanical System (MEMS), etc.), electrical circuitry having at least one discrete electrical circuit, electrical circuitry having at least one integrated circuit, electrical circuitry having at least one application specific integrated circuit, electrical circuitry forming a general purpose computing device configured by a computer program (e.g., a general purpose computer configured by a computer program which at least partially carries out processes and/or devices described herein, or a microprocessor configured by a computer program which at least partially carries out processes and/or devices described herein), electrical circuitry forming a memory device (e.g., forms of memory (e.g., random access, flash, read only, etc.)), electrical circuitry forming a communications device (e.g., a modem, communications switch, optical-electrical equipment, etc.), and/or any non-electrical analog thereto, such as optical or other analogs (e.g., graphene-based circuitry). Those skilled in the art will also appreciate that examples of electro-mechanical systems include but are not limited to a variety of consumer electronics systems, medical devices, as well as other systems such as motorized transport systems, factory automation systems, security systems, and/or communication/computing systems. Those skilled in the art will recognize that electro-mechanical as used herein is not necessarily limited to a system that has both electrical and mechanical actuation except as context may dictate otherwise.

[0020] In a general sense, those skilled in the art will recognize that the various aspects described herein which can be implemented, individually and/or collectively, by a wide range of hardware, software, firmware, and/or any combination thereof can be viewed as being composed of various types of "electrical circuitry." Consequently, as used herein "electrical circuitry" includes, but is not limited to, electrical circuitry having at least one discrete electrical circuit, electrical circuitry having at least one integrated circuit, electrical circuitry having at least one application specific integrated circuit, electrical circuitry forming a general purpose computing device configured by a computer program (e.g., a general purpose computer configured by a computer program which at least partially carries out processes and/or devices described herein, or a microprocessor configured by a computer program which at least partially carries out processes and/or devices described herein), electrical circuitry forming a memory device (e.g., forms of memory (e.g., random access, flash, read only, etc.)), and/or electrical circuitry forming a communications device (e.g., a modem, communications switch, optical-electrical equipment, etc.). Those having skill in the art will recognize that the subject matter described herein may be implemented in an analog or digital fashion or some combination thereof.

[0021] Referring now to Figs. 1-8, there exist a large number of existing cameras. In an embodiment, users can control the direction in which a remote camera is pointed. In an embodiment, control is not limited to one person at a time; implementations are also contemplated in which many people try to move or otherwise control the camera (e.g., zoom, focus, etc.) at the same time.

[0022] In an embodiment, high-pixel-count image sensors may be used to capture high-resolution images of various scenes. Then, in an embodiment, at the camera level, the requested data can be extracted from the high-resolution image and sent to the user relatively inexpensively. Thus, unneeded or unviewed pixels may be stripped away.
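By way of a non-limiting illustrative sketch only (the function name, the row-major 8-bit grayscale buffer layout, and the parameters below are assumptions chosen for illustration and are not recited elsewhere in this disclosure), extraction of a requested sub-region from a larger captured image may be expressed in C++ along the following lines:

    #include <algorithm>
    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // Copy a requested rectangular region out of a large sensor image so
    // that only the requested pixels need be transmitted; all other pixels
    // of the scene are stripped away at the camera level.
    std::vector<std::uint8_t> extract_region(
        const std::vector<std::uint8_t>& scene, std::size_t scene_width,
        std::size_t x, std::size_t y, std::size_t w, std::size_t h) {
      std::vector<std::uint8_t> region(w * h);
      for (std::size_t row = 0; row < h; ++row) {
        // Copy one scan line of the requested region per iteration.
        const std::uint8_t* src = &scene[(y + row) * scene_width + x];
        std::copy(src, src + w, &region[row * w]);
      }
      return region;
    }

Under these illustrative assumptions, a request for a 640x480 window within a much larger captured scene returns only 640*480 bytes, regardless of the size of the full sensor image.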

[0023] Thus, in an embodiment, there is an evolution of detail, from a "map" view to a "satellite" view to a "street" view. In an embodiment, there will be a "real time" view of a place available by utilizing one or more cameras in the manner described herein.

[0024] In an embodiment, the system knows where the user is, what the user is pointing to, and what the user wants. The array is constantly capturing the entire scene, and from that firehose of pixels it first peels off the pixels that have not changed. Where requests overlap, the same pixels are not sent multiple times; onboard processing recognizes the overlap and sends the shared data only once. The camera may record everything on board into a memory, but, in terms of what it sends to the user, it sends only the pixels that have changed, and further compression may be applied on top of that.
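As a non-limiting sketch of the changed-pixel step described above (the threshold parameter and the sparse (index, value) pair encoding are assumptions made for illustration, not a required wire format), the extraction may be expressed as:

    #include <cstddef>
    #include <cstdint>
    #include <cstdlib>
    #include <utility>
    #include <vector>

    // Compare the current frame against the frame most recently sent and
    // emit only the pixels that changed, so unchanged pixels are peeled
    // off before transmission.
    std::vector<std::pair<std::size_t, std::uint8_t>> changed_pixels(
        const std::vector<std::uint8_t>& previous,
        const std::vector<std::uint8_t>& current, int threshold) {
      std::vector<std::pair<std::size_t, std::uint8_t>> delta;
      for (std::size_t i = 0; i < current.size(); ++i) {
        if (std::abs(int(current[i]) - int(previous[i])) > threshold) {
          delta.emplace_back(i, current[i]);  // only changed pixels are kept
        }
      }
      return delta;
    }

Further general-purpose compression could then be applied to the resulting delta before it leaves the camera.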

[0025] In an embodiment, one or more devices, e.g., in the cloud or network, perform the image stitching and image orientation and add geographic and geolocation information. This can serve as a live video overlay of things that are happening in the world, e.g., tracking cars of a certain color or persons of a certain height, or responding to trigger events. For example, for a bear cam, or for a camera on the space station, information might be superimposed on top of the image, in much the same way that Google Maps shows "advertising" superimposed on its imagery (which would be a separate application); fundamentally, however, the idea is to allow multiple users to use the big camera at the same time.
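A non-limiting sketch of the cloud-side compositing step follows (tile registration is assumed to have been computed already; the Tile structure and grayscale layout are illustrative assumptions):

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    struct Tile {
      std::vector<std::uint8_t> pixels;  // row-major grayscale, illustrative
      std::size_t width, height;
      std::size_t x_offset, y_offset;    // placement within the stitched scene
    };

    // Paste pre-registered tiles from the individual image sensors into one
    // scene buffer. A production stitcher would also perform registration,
    // blending, and lens correction; those steps are omitted here, and each
    // tile is assumed to fit within the scene buffer.
    void stitch(const std::vector<Tile>& tiles,
                std::vector<std::uint8_t>& scene, std::size_t scene_width) {
      for (const Tile& t : tiles)
        for (std::size_t r = 0; r < t.height; ++r)
          for (std::size_t c = 0; c < t.width; ++c)
            scene[(t.y_offset + r) * scene_width + (t.x_offset + c)] =
                t.pixels[r * t.width + c];
    }

Overlays (labels, tracking boxes, and the like) could then be drawn into the stitched scene buffer before portions of it are served to individual users.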

[0026] In an embodiment, the aim is to collapse the huge fire hose of pixels and manage it at the front end, taking advantage of memory that is effectively free while economizing on data transmission, which is expensive.

[0027] While particular aspects of the present subject matter described herein have been shown and described, it will be apparent to those skilled in the art that, based upon the teachings herein, changes and modifications may be made without departing from the subject matter described herein and its broader aspects and, therefore, the appended claims are to encompass within their scope all such changes and modifications as are within the true spirit and scope of the subject matter described herein. It will be understood by those within the art that, in general, terms used herein, and especially in the appended claims (e.g., bodies of the appended claims) are generally intended as "open" terms (e.g., the term "including" should be interpreted as "including but not limited to," the term "having" should be interpreted as "having at least," the term "includes" should be interpreted as "includes but is not limited to," etc.).

[0028] It will be further understood by those within the art that if a specific number of an introduced claim recitation is intended, such an intent will be explicitly recited in the claim, and in the absence of such recitation no such intent is present. For example, as an aid to understanding, the following appended claims may contain usage of the introductory phrases "at least one" and "one or more" to introduce claim recitations. However, the use of such phrases should not be construed to imply that the introduction of a claim recitation by the indefinite articles "a" or "an" limits any particular claim containing such introduced claim recitation to claims containing only one such recitation, even when the same claim includes the introductory phrases "one or more" or "at least one" and indefinite articles such as "a" or "an" (e.g., "a" and/or "an" should typically be interpreted to mean "at least one" or "one or more"); the same holds true for the use of definite articles used to introduce claim recitations. In addition, even if a specific number of an introduced claim recitation is explicitly recited, those skilled in the art will recognize that such recitation should typically be interpreted to mean at least the recited number (e.g., the bare recitation of "two recitations," without other modifiers, typically means at least two recitations, or two or more recitations).

[0029] Furthermore, in those instances where a convention analogous to "at least one of A, B, and C, etc." is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B, and C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.). In those instances where a convention analogous to "at least one of A, B, or C, etc." is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B, or C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.). It will be further understood by those within the art that typically a disjunctive word and/or phrase presenting two or more alternative terms, whether in the description, claims, or drawings, should be understood to contemplate the possibilities of including one of the terms, either of the terms, or both terms unless context dictates otherwise. For example, the phrase "A or B" will be typically understood to include the possibilities of "A" or "B" or "A and B."

[0030] With respect to the appended claims, those skilled in the art will appreciate that recited operations therein may generally be performed in any order. Also, although various operational flows are presented in a sequence(s), it should be understood that the various operations may be performed in other orders than those which are illustrated, or may be performed concurrently. Examples of such alternate orderings may include overlapping, interleaved, interrupted, reordered, incremental, preparatory, supplemental, simultaneous, reverse, or other variant orderings, unless context dictates otherwise.

Furthermore, terms like "responsive to," "related to," or other past-tense adjectives are generally not intended to exclude such variants, unless context dictates otherwise.

[0031] This application may make reference to one or more trademarks, e.g., a word, letter, symbol, or device adopted by one manufacturer or merchant and used to identify and/or distinguish his or her product from those of others. Trademark names used herein are set forth in such language that makes clear their identity, that distinguishes them from common descriptive nouns, that have fixed and definite meanings, or, in many if not all cases, are accompanied by other specific identification using terms not covered by trademark. In addition, trademark names used herein have meanings that are well-known and defined in the literature, or do not refer to products or compounds for which knowledge of one or more trade secrets is required in order to divine their meaning. All trademarks referenced in this application are the property of their respective owners, and the appearance of one or more trademarks in this application does not diminish or otherwise adversely affect the validity of the one or more trademarks. All trademarks, registered or unregistered, that appear in this application are assumed to include a proper trademark symbol, e.g., the circle R or bracketed capitalization (e.g., [trademark name]), even when such trademark symbol does not explicitly appear next to the trademark. To the extent a trademark is used in a descriptive manner to refer to a product or process, that trademark should be interpreted to represent the corresponding product or process as of the date of the filing of this patent application.

[0032] Throughout this application, the terms "in an embodiment," "in one embodiment," "in several embodiments," "in at least one embodiment," "in various embodiments," and the like may be used. Each of these terms, and all such similar terms, should be construed as "in at least one embodiment, and possibly but not necessarily all embodiments," unless explicitly stated otherwise. Specifically, unless explicitly stated otherwise, the intent of phrases like these is to provide non-exclusive and non-limiting examples of implementations of the invention. The mere statement that one, some, or many embodiments include one or more things or have one or more features does not imply that all embodiments include one or more things or have one or more features, but also does not imply that such embodiments must exist. It is a mere indicator of an example and should not be interpreted otherwise, unless explicitly stated as such.

[0033] Those skilled in the art will appreciate that the foregoing specific exemplary processes and/or devices and/or technologies are representative of more general processes and/or devices and/or technologies taught elsewhere herein, such as in the claims filed herewith and/or elsewhere in the present application.

1. A thing/operation disclosure comprising:

a method substantially as shown and described in the detailed description and/or drawings and/or elsewhere herein.

2. A thing/operation disclosure comprising:

a device substantially as shown and described in the detailed description and/or drawings and/or elsewhere herein.

3. A thing/operation disclosure comprising:

a method including one or more of the modules and/or functions described in the detailed description and/or drawings and/or elsewhere herein.

4. A thing/operation disclosure comprising:

a device implementing one or more of the devices, stores, and/or interfaces described in the detailed description and/or drawings and/or elsewhere herein.

5. A computationally-implemented thing/operation disclosure, comprising:

first means for carrying out one or more first steps as shown and described in the detailed description and/or drawings and/or elsewhere herein; and

second means for carrying out one or more second steps as shown and described in the detailed description and/or drawings and/or elsewhere herein.

6. A computationally-implemented thing/operation disclosure, comprising:

circuitry for carrying out one or more first steps as shown and described in the detailed description and/or drawings and/or elsewhere herein.

7. A thing/operation disclosure, comprising:

a signal-bearing medium bearing:

one or more instructions for configuration as one or more of the modules and/or functions described in the detailed description and/or drawings and/or elsewhere herein.

8. A thing/operation disclosure defined by a computational language, comprising:

one or more interchained physical machines ordered as one or more of the modules and/or functions described in the detailed description and/or drawings and/or elsewhere herein.

9. A thing/operation disclosure, comprising:

one or more general purpose integrated circuits configured to receive instructions to configure as one or more of the modules and/or functions described in the detailed description and/or drawings and/or elsewhere herein.

10. A thing/operation disclosure comprising:

an integrated circuit configured to purpose itself as one or more of the modules and/or functions described in the detailed description and/or drawings and/or elsewhere herein.

11. A thing/operation disclosure, comprising:

one or more elements of programmable hardware programmed to function as one or more of the modules and/or functions described in the detailed description and/or drawings and/or elsewhere herein.

This Roman Numeral Section, And the Corresponding Figures, Were Copied From a Pending United States Application into this PCT Application in View of Meeting Bar Date, So Each Figure Under this Roman Numeral Section Should Be Read "As If Having Roman Numeral Denotation Sufficient to Distinguish from Other Figures Copied From Other Applications in View of Bar Date," Such Clerical Issues to Be Cured by Subsequent Amendment

Devices, Methods, and Systems For Integrating Multiple User Video Imaging Array

DETAILED DESCRIPTION

[0011] In addition to the following, various other method and/or system and/or program product aspects are set forth and described in the teachings such as text (e.g., claims and/or detailed description) and/or drawings of the present disclosure.

[0012] The foregoing is a summary and thus may contain simplifications, generalizations, inclusions, and/or omissions of detail; consequently, those skilled in the art will appreciate that the summary is illustrative only and is NOT intended to be in any way limiting. Other aspects, features, and advantages of the devices and/or processes and/or other subject matter described herein will become apparent by reference to the detailed description, the corresponding drawings, and/or in the teachings set forth herein.

[0013] The logical operations/functions set forth in the present technical description are representative of static or sequenced specifications of various ordered-matter elements, in order that such specifications may be comprehensible to the human mind and adaptable to create many various hardware configurations. The logical operations/functions disclosed herein should be treated as such, and should not be disparagingly characterized as abstract ideas merely because the specifications they represent are presented in a manner that one of skill in the art can readily understand and apply in a manner independent of a specific vendor's hardware implementation.

[0014] Those having skill in the art will recognize that the state of the art has progressed to the point where there is little distinction left between hardware, software (e.g., a high-level computer program serving as a hardware specification), and/or firmware implementations of aspects of systems; the use of hardware, software, and/or firmware is generally (but not always, in that in certain contexts the choice between hardware and software can become significant) a design choice representing cost vs. efficiency tradeoffs. Those having skill in the art will appreciate that there are various vehicles by which processes and/or systems and/or other technologies described herein can be effected (e.g., hardware, software (e.g., a high-level computer program serving as a hardware specification), and/or firmware), and that the preferred vehicle will vary with the context in which the processes and/or systems and/or other technologies are deployed. For example, if an implementer determines that speed and accuracy are paramount, the implementer may opt for a mainly hardware and/or firmware vehicle; alternatively, if flexibility is paramount, the implementer may opt for a mainly software (e.g., a high-level computer program serving as a hardware specification) implementation; or, yet again alternatively, the implementer may opt for some combination of hardware, software (e.g., a high-level computer program serving as a hardware specification), and/or firmware in one or more machines, compositions of matter, and articles of manufacture, limited to patentable subject matter under 35 USC 101. Hence, there are several possible vehicles by which the processes and/or devices and/or other technologies described herein may be effected, none of which is inherently superior to the others in that any vehicle to be utilized is a choice dependent upon the context in which the vehicle will be deployed and the specific concerns (e.g., speed, flexibility, or predictability) of the implementer, any of which may vary. Those skilled in the art will recognize that optical aspects of implementations will typically employ optically-oriented hardware, software (e.g., a high-level computer program serving as a hardware specification), and/or firmware.

[0015] In some implementations described herein, logic and similar implementations may include computer programs or other control structures. Electronic circuitry, for example, may have one or more paths of electrical current constructed and arranged to implement various functions as described herein. In some implementations, one or more media may be configured to bear a device-detectable implementation when such media hold or transmit device-detectable instructions operable to perform as described herein. In some variants, for example, implementations may include an update or modification of existing software (e.g., a high-level computer program serving as a hardware specification) or firmware, or of gate arrays or programmable hardware, such as by performing a reception of or a transmission of one or more instructions in relation to one or more operations described herein. Alternatively or additionally, in some variants, an implementation may include special-purpose hardware, software (e.g., a high-level computer program serving as a hardware specification), firmware components, and/or general-purpose components executing or otherwise invoking special-purpose components. Specifications or other implementations may be transmitted by one or more instances of tangible transmission media as described herein, optionally by packet transmission or otherwise by passing through distributed media at various times.

[0016] Alternatively or additionally, implementations may include executing a special-purpose instruction sequence or invoking circuitry for enabling, triggering, coordinating, requesting, or otherwise causing one or more occurrences of virtually any functional operation described herein. In some variants, operational or other logical descriptions herein may be expressed as source code and compiled or otherwise invoked as an executable instruction sequence. In some contexts, for example, implementations may be provided, in whole or in part, by source code, such as C++, or other code sequences. In other implementations, a source or other code implementation, using commercially available tools and/or techniques known in the art, may be compiled/implemented/translated/converted into a high-level descriptor language (e.g., initially implementing described technologies in the C or C++ programming language and thereafter converting the programming language implementation into a logic-synthesizable language implementation, a hardware description language implementation, a hardware design simulation implementation, and/or other such similar mode(s) of expression). For example, some or all of a logical expression (e.g., a computer programming language implementation) may be manifested as a Verilog-type hardware description (e.g., via Hardware Description Language (HDL) and/or Very High Speed Integrated Circuit Hardware Descriptor Language (VHDL)) or other circuitry model, which may then be used to create a physical implementation having hardware (e.g., an Application Specific Integrated Circuit). Those skilled in the art will recognize how to obtain, configure, and optimize suitable transmission or computational elements, material supplies, actuators, or other structures in light of these teachings.

[0017] For example, a central processing unit of a personal computer may, at various times, operate as a module for displaying graphics on a screen, a module for writing data to a storage medium, a module for receiving user input, and a module for multiplying two large prime numbers, by configuring its logical gates in accordance with its instructions. Such reconfiguration may be invisible to the naked eye, and in some embodiments may include activation, deactivation, and/or re-routing of various portions of the component, e.g., switches, logic gates, inputs, and/or outputs. Thus, in the examples found in the foregoing/following disclosure, if an example includes or recites multiple modules, the example includes the possibility that the same hardware may implement more than one of the recited modules, either contemporaneously or at discrete times or timings. The implementation of multiple modules, whether using more components, fewer components, or the same number of components as the number of modules, is merely an implementation choice and does not generally affect the operation of the modules themselves. Accordingly, it should be understood that any recitation of multiple discrete modules in this disclosure includes implementations of those modules as any number of underlying components, including, but not limited to, a single component that reconfigures itself over time to carry out the functions of multiple modules, and/or multiple components that similarly reconfigure, and/or special purpose reconfigurable components.

[0018] Those skilled in the art will recognize that it is common within the art to implement devices and/or processes and/or systems, and thereafter use engineering and/or other practices to integrate such implemented devices and/or processes and/or systems into more comprehensive devices and/or processes and/or systems. That is, at least a portion of the devices and/or processes and/or systems described herein can be integrated into other devices and/or processes and/or systems via a reasonable amount of experimentation. Those having skill in the art will recognize that examples of such other devices and/or processes and/or systems might include, as appropriate to context and application, all or part of devices and/or processes and/or systems of (a) an air conveyance (e.g., an airplane, rocket, helicopter, etc.), (b) a ground conveyance (e.g., a car, truck, locomotive, tank, armored personnel carrier, etc.), (c) a building (e.g., a home, warehouse, office, etc.), (d) an appliance (e.g., a refrigerator, a washing machine, a dryer, etc.), (e) a communications system (e.g., a networked system, a telephone system, a Voice over IP system, etc.), (f) a business entity (e.g., an Internet Service Provider (ISP) entity such as Comcast Cable, Qwest, Southwestern Bell, etc.), or (g) a wired/wireless services entity (e.g., Sprint, Cingular, Nextel, etc.), etc.

[0019] In a general sense, those skilled in the art will recognize that the various embodiments described herein can be implemented, individually and/or collectively, by various types of electro-mechanical systems having a wide range of electrical components such as hardware, software, firmware, and/or virtually any combination thereof, limited to patentable subject matter under 35 U.S.C. 101; and a wide range of components that may impart mechanical force or motion such as rigid bodies, spring or torsional bodies, hydraulics, electro-magnetically actuated devices, and/or virtually any combination thereof. Consequently, as used herein "electro-mechanical system" includes, but is not limited to, electrical circuitry operably coupled with a transducer (e.g., an actuator, a motor, a piezoelectric crystal, a Micro Electro Mechanical System (MEMS), etc.), electrical circuitry having at least one discrete electrical circuit, electrical circuitry having at least one integrated circuit, electrical circuitry having at least one application specific integrated circuit, electrical circuitry forming a general purpose computing device configured by a computer program (e.g., a general purpose computer configured by a computer program which at least partially carries out processes and/or devices described herein, or a microprocessor configured by a computer program which at least partially carries out processes and/or devices described herein), electrical circuitry forming a memory device (e.g., forms of memory (e.g., random access, flash, read only, etc.)), electrical circuitry forming a communications device (e.g., a modem, communications switch, optical-electrical equipment, etc.), and/or any non-electrical analog thereto, such as optical or other analogs (e.g., graphene-based circuitry). Those skilled in the art will also appreciate that examples of electro-mechanical systems include but are not limited to a variety of consumer electronics systems, medical devices, as well as other systems such as motorized transport systems, factory automation systems, security systems, and/or communication/computing systems. Those skilled in the art will recognize that electro-mechanical as used herein is not necessarily limited to a system that has both electrical and mechanical actuation except as context may dictate otherwise.

[0020] In a general sense, those skilled in the art will recognize that the various aspects described herein which can be implemented, individually and/or collectively, by a wide range of hardware, software, firmware, and/or any combination thereof can be viewed as being composed of various types of "electrical circuitry." Consequently, as used herein "electrical circuitry" includes, but is not limited to, electrical circuitry having at least one discrete electrical circuit, electrical circuitry having at least one integrated circuit, electrical circuitry having at least one application specific integrated circuit, electrical circuitry forming a general purpose computing device configured by a computer program (e.g., a general purpose computer configured by a computer program which at least partially carries out processes and/or devices described herein, or a microprocessor configured by a computer program which at least partially carries out processes and/or devices described herein), electrical circuitry forming a memory device (e.g., forms of memory (e.g., random access, flash, read only, etc.)), and/or electrical circuitry forming a communications device (e.g., a modem, communications switch, optical-electrical equipment, etc.). Those having skill in the art will recognize that the subject matter described herein may be implemented in an analog or digital fashion or some combination thereof.

[0021] Referring now to Figs. 1-24, there exist cameras. In an embodiment, there may be many cameras. In an embodiment, there may be many existing cameras. In an embodiment, there may be a single camera. In an embodiment, users can control the direction in which a remote camera is pointed. In an embodiment, control is not limited to one person at a time; implementations are also contemplated in which many people try to move or otherwise control the camera (e.g., zoom, focus, etc.) at the same time.

[0022] In an embodiment, high-pixel-count image sensors may be used to capture high-resolution images of various scenes. Then, in an embodiment, at the camera level, the requested data can be extracted from the high-resolution image and sent to the user relatively inexpensively. Thus, unneeded or unviewed pixels may be stripped away.

[0023] Thus, in an embodiment, there is an evolution of detail, from a "map" view to a "satellite" view to a "street" view. In an embodiment, there will be a "real time" view of a place available by utilizing one or more cameras in the manner described herein.

[0024] In an embodiment, the system knows where the user is, what the user is pointing to, and what the user wants. The array is constantly capturing the entire scene, and from that firehose of pixels it first peels off the pixels that have not changed. Where requests overlap, the same pixels are not sent multiple times; onboard processing recognizes the overlap and sends the shared data only once. The camera may record everything on board into a memory, but, in terms of what it sends to the user, it sends only the pixels that have changed, and further compression may be applied on top of that.

[0025] In an embodiment, one or more devices, e.g., in the cloud or network, perform the image stitching and image orientation and add geographic and geolocation information. This can serve as a live video overlay of things that are happening in the world, e.g., tracking cars of a certain color or persons of a certain height, or responding to trigger events. For example, for a bear cam, or for a camera on the space station, information might be superimposed on top of the image, in much the same way that Google Maps shows "advertising" superimposed on its imagery (which would be a separate application); fundamentally, however, the idea is to allow multiple users to use the big camera at the same time.

[0026] Referring to Fig. 8, in an embodiment, the system may send stored video at off-peak times; when capturing a live image, it compares that live image to whatever it already has on board, with the goal of minimizing the back-end transmission requirement.

[0027] In an embodiment, the camera captures images continuously and transmits full images during off-peak periods when no one is using the system. Because previously captured images remain on board, the camera can compare a new capture to what was previously captured and then send only the difference. The cloud receives the difference and recombines (or updates) its stored copy, and the cloud can also add data.
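A non-limiting sketch of the cloud-side recombination described above (assuming the sparse (index, value) difference encoding used for illustration earlier in this disclosure) is:

    #include <cstddef>
    #include <cstdint>
    #include <utility>
    #include <vector>

    // The cloud holds the last full frame it received and patches it with
    // the sparse difference sent by the camera, recovering the current
    // frame without a full retransmission.
    void apply_difference(
        std::vector<std::uint8_t>& stored_frame,
        const std::vector<std::pair<std::size_t, std::uint8_t>>& delta) {
      for (const auto& change : delta) {
        stored_frame[change.first] = change.second;  // overwrite changed pixel
      }
    }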

[0028] In an embodiment, in a sports or other setting, multiple cameras could be used to allow users to watch a sporting event at any angle they wanted to, or focusing on any player or performer they wanted to.
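By way of a non-limiting illustration of one possible selection policy (the bearing model, in degrees around the venue, is an assumption made solely for this sketch), the camera whose viewpoint best matches a user's requested angle might be chosen as follows:

    #include <cmath>
    #include <cstddef>
    #include <vector>

    // Pick the camera whose bearing (in degrees) is closest to the viewing
    // angle requested by the user, wrapping around the circle.
    std::size_t pick_camera(const std::vector<double>& camera_bearings,
                            double requested_angle) {
      std::size_t best = 0;
      double best_err = 360.0;
      for (std::size_t i = 0; i < camera_bearings.size(); ++i) {
        double err = std::fabs(camera_bearings[i] - requested_angle);
        if (err > 180.0) err = 360.0 - err;  // angular wrap-around
        if (err < best_err) { best_err = err; best = i; }
      }
      return best;
    }

Each user's request could be routed independently, so different users may watch the same event from different cameras at the same time.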

[0029] In an embodiment, the aim is to collapse the huge fire hose of pixels and manage it at the front end, taking advantage of memory that is effectively free while economizing on data transmission, which is expensive.

[0030] While particular aspects of the present subject matter described herein have been shown and described, it will be apparent to those skilled in the art that, based upon the teachings herein, changes and modifications may be made without departing from the subject matter described herein and its broader aspects and, therefore, the appended claims are to encompass within their scope all such changes and modifications as are within the true spirit and scope of the subject matter described herein. It will be understood by those within the art that, in general, terms used herein, and especially in the appended claims (e.g., bodies of the appended claims) are generally intended as "open" terms (e.g., the term "including" should be interpreted as "including but not limited to," the term "having" should be interpreted as "having at least," the term "includes" should be interpreted as "includes but is not limited to," etc.).

[0031] It will be further understood by those within the art that if a specific number of an introduced claim recitation is intended, such an intent will be explicitly recited in the claim, and in the absence of such recitation no such intent is present. For example, as an aid to understanding, the following appended claims may contain usage of the introductory phrases "at least one" and "one or more" to introduce claim recitations. However, the use of such phrases should not be construed to imply that the introduction of a claim recitation by the indefinite articles "a" or "an" limits any particular claim containing such introduced claim recitation to claims containing only one such recitation, even when the same claim includes the introductory phrases "one or more" or "at least one" and indefinite articles such as "a" or "an" (e.g., "a" and/or "an" should typically be interpreted to mean "at least one" or "one or more"); the same holds true for the use of definite articles used to introduce claim recitations. In addition, even if a specific number of an introduced claim recitation is explicitly recited, those skilled in the art will recognize that such recitation should typically be interpreted to mean at least the recited number (e.g., the bare recitation of "two recitations," without other modifiers, typically means at least two recitations, or two or more recitations).

[0032] Furthermore, in those instances where a convention analogous to "at least one of A, B, and C, etc." is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B, and C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.). In those instances where a convention analogous to "at least one of A, B, or C, etc." is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B, or C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.). It will be further understood by those within the art that typically a disjunctive word and/or phrase presenting two or more alternative terms, whether in the description, claims, or drawings, should be understood to contemplate the possibilities of including one of the terms, either of the terms, or both terms unless context dictates otherwise. For example, the phrase "A or B" will be typically understood to include the possibilities of "A" or "B" or "A and B."

[0033] With respect to the appended claims, those skilled in the art will appreciate that recited operations therein may generally be performed in any order. Also, although various operational flows are presented in a sequence(s), it should be understood that the various operations may be performed in other orders than those which are illustrated, or may be performed concurrently. Examples of such alternate orderings may include overlapping, interleaved, interrupted, reordered, incremental, preparatory, supplemental, simultaneous, reverse, or other variant orderings, unless context dictates otherwise.

Furthermore, terms like "responsive to," "related to," or other past-tense adjectives are generally not intended to exclude such variants, unless context dictates otherwise.

[0034] This application may make reference to one or more trademarks, e.g., a word, letter, symbol, or device adopted by one manufacturer or merchant and used to identify and/or distinguish his or her product from those of others. Trademark names used herein are set forth in such language that makes clear their identity, that distinguishes them from common descriptive nouns, that have fixed and definite meanings, or, in many if not all cases, are accompanied by other specific identification using terms not covered by trademark. In addition, trademark names used herein have meanings that are well-known and defined in the literature, or do not refer to products or compounds for which knowledge of one or more trade secrets is required in order to divine their meaning. All trademarks referenced in this application are the property of their respective owners, and the appearance of one or more trademarks in this application does not diminish or otherwise adversely affect the validity of the one or more trademarks. All trademarks, registered or unregistered, that appear in this application are assumed to include a proper trademark symbol, e.g., the circle R or bracketed capitalization (e.g., [trademark name]), even when such trademark symbol does not explicitly appear next to the trademark. To the extent a trademark is used in a descriptive manner to refer to a product or process, that trademark should be interpreted to represent the corresponding product or process as of the date of the filing of this patent application.

[0035] Throughout this application, the terms "in an embodiment," "in one embodiment," "in several embodiments," "in at least one embodiment," "in various embodiments," and the like may be used. Each of these terms, and all such similar terms, should be construed as "in at least one embodiment, and possibly but not necessarily all embodiments," unless explicitly stated otherwise. Specifically, unless explicitly stated otherwise, the intent of phrases like these is to provide non-exclusive and non-limiting examples of implementations of the invention. The mere statement that one, some, or many embodiments include one or more things or have one or more features does not imply that all embodiments include one or more things or have one or more features, but also does not imply that such embodiments must exist. It is a mere indicator of an example and should not be interpreted otherwise, unless explicitly stated as such.

[0036] Those skilled in the art will appreciate that the foregoing specific exemplary processes and/or devices and/or technologies are representative of more general processes and/or devices and/or technologies taught elsewhere herein, such as in the claims filed herewith and/or elsewhere in the present application.

1. A thing/operation disclosure comprising:

a method substantially as shown and described in the detailed description and/or drawings and/or elsewhere herein.

2. A thing/operation disclosure comprising:

a device substantially as shown and described in the detailed description and/or drawings and/or elsewhere herein.

3. A method comprising:

a method including one or more of the modules and/or functions described in the detailed description and/or drawings and/or elsewhere herein.

4. A thing/operation disclosure comprising:

a device implementing one or more of the devices, stores, and/or interfaces described in the detailed description and/or drawings and/or elsewhere herein.

5. A computationally-implemented thing/operation disclosure, comprising:

first means for carrying out one or more first steps as shown and described in the detailed description and/or drawings and/or elsewhere herein; and

second means for carrying out one or more second steps as shown and described in the detailed description and/or drawings and/or elsewhere herein.

6. A computationally-implemented thing/operation disclosure, comprising:

circuitry for carrying out one or more first steps as shown and described in the detailed description and/or drawings and/or elsewhere herein.

7. A thing/operation disclosure, comprising:

a signal-bearing medium bearing:

one or more instructions for configuration as one or more of the modules and/or functions described in the detailed description and/or drawings and/or elsewhere herein.

8. A thing/operation disclosure defined by a computational language, comprising:

one or more interchained physical machines ordered as one or more of the modules and/or functions described in the detailed description and/or drawings and/or elsewhere herein.

9. A thing/operation disclosure, comprising:

one or more general purpose integrated circuits configured to receive instructions to configure as one or more of the modules and/or functions described in the detailed description and/or drawings and/or elsewhere herein.

10. A thing/operation disclosure comprising:

an integrated circuit configured to purpose itself as one or more of the modules and/or functions described in the detailed description and/or drawings and/or elsewhere herein.

11. A thing/operation disclosure, comprising:

one or more elements of programmable hardware programmed to function as one or more of the modules and/or functions described in the detailed description and/or drawings and/or elsewhere herein.

This Roman Numeral Section, And the Corresponding Figures, Were Copied From a Pending United States Application into this PCT Application in View of Meeting Bar Date, So Each Figure Under this Roman Numeral Section Should Be Read "As If Having Roman Numeral Denotation Sufficient to Distinguish from Other Figures Copied From Other Applications in View of Bar Date," Such Clerical Issues to Be Cured by Subsequent Amendment

Devices, Methods, and Systems For Integrating Multiple User Video Imaging Array

DETAILED DESCRIPTION

[0011] In addition to the following, various other method and/or system and/or program product aspects are set forth and described in the teachings such as text (e.g., claims and/or detailed description) and/or drawings of the present disclosure.

[0012] The foregoing is a summary and thus may contain simplifications, generalizations, inclusions, and/or omissions of detail; consequently, those skilled in the art will appreciate that the summary is illustrative only and is NOT intended to be in any way limiting. Other aspects, features, and advantages of the devices and/or processes and/or other subject matter described herein will become apparent by reference to the detailed description, the corresponding drawings, and/or in the teachings set forth herein.

[0013] Specifically, Figs. 1A-1C show a message sequence for performing latency hiding, that is, reducing the appearance of latency to a user of the system in presenting images from a camera.

[0014] As an overview, in an embodiment, a system includes a camera, e.g., a multiple user video imaging array, and a client device that may receive portions of the image data captured by the camera. The system may also include a server, e.g., any device that is remote to one or more of the camera and/or the client, as well as a "cloud" which, in this application, is used as shorthand for any device or system configured to communicate with the camera, the client, or an intermediary of the camera or the client.

[0015] In an embodiment, processing of the message starts at the camera in the top-left of Fig. 1A. Message transmission continues as indicated in Fig. 1A, with some or all arrows indicating transmission of messages and/or other data to perform latency hiding. Processing further continues from Fig. 1A to Fig. 1B, and likewise from Fig. 1B to Fig. 1C, as indicated by the capital letters in circles in Figs. 1A to 1C.

[0016] It is noted that not all steps illustrated in Figs. 1A to 1C are required in order to carry out the latency hiding described herein. Rather, the foregoing is merely one implementation of the latency hiding in accordance with the system.
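By way of one non-limiting latency-hiding tactic consistent with the foregoing (the exact message sequence of Figs. 1A-1C is not reproduced here, and the viewport model below is an assumption made solely for illustration): the client may request a region somewhat larger than its visible viewport, so that small pans are served immediately from the local cache while a refreshed, re-centered region is fetched in the background, e.g.:

    #include <cstddef>

    // A viewport within the scene: top-left corner plus width and height.
    struct Viewport { long x, y; std::size_t w, h; };

    // True when the desired viewport still lies inside the cached region,
    // i.e., the pan can be shown with no visible round-trip delay.
    bool pan_served_locally(const Viewport& cached, const Viewport& wanted) {
      return wanted.x >= cached.x && wanted.y >= cached.y &&
             wanted.x + long(wanted.w) <= cached.x + long(cached.w) &&
             wanted.y + long(wanted.h) <= cached.y + long(cached.h);
    }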

[0017] While particular aspects of the present subject matter described herein have been shown and described, it will be apparent to those skilled in the art that, based upon the teachings herein, changes and modifications may be made without departing from the subject matter described herein and its broader aspects and, therefore, the appended claims are to encompass within their scope all such changes and modifications as are within the true spirit and scope of the subject matter described herein. It will be understood by those within the art that, in general, terms used herein, and especially in the appended claims (e.g., bodies of the appended claims) are generally intended as "open" terms (e.g., the term "including" should be interpreted as "including but not limited to," the term "having" should be interpreted as "having at least," the term "includes" should be interpreted as "includes but is not limited to," etc.).

[0018] It will be further understood by those within the art that if a specific number of an introduced claim recitation is intended, such an intent will be explicitly recited in the claim, and in the absence of such recitation no such intent is present. For example, as an aid to understanding, the following appended claims may contain usage of the introductory phrases "at least one" and "one or more" to introduce claim recitations. However, the use of such phrases should not be construed to imply that the introduction of a claim recitation by the indefinite articles "a" or "an" limits any particular claim containing such introduced claim recitation to claims containing only one such recitation, even when the same claim includes the introductory phrases "one or more" or "at least one" and indefinite articles such as "a" or "an" (e.g., "a" and/or "an" should typically be interpreted to mean "at least one" or "one or more"); the same holds true for the use of definite articles used to introduce claim recitations. In addition, even if a specific number of an introduced claim recitation is explicitly recited, those skilled in the art will recognize that such recitation should typically be interpreted to mean at least the recited number (e.g., the bare recitation of "two recitations," without other modifiers, typically means at least two recitations, or two or more recitations).

[0019] Furthermore, in those instances where a convention analogous to "at least one of A, B, and C, etc." is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B, and C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.). In those instances where a convention analogous to "at least one of A, B, or C, etc." is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B, or C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.). It will be further understood by those within the art that typically a disjunctive word and/or phrase presenting two or more alternative terms, whether in the description, claims, or drawings, should be understood to contemplate the possibilities of including one of the terms, either of the terms, or both terms unless context dictates otherwise. For example, the phrase "A or B" will be typically understood to include the possibilities of "A" or "B" or "A and B."

[0020] With respect to the appended claims, those skilled in the art will appreciate that recited operations therein may generally be performed in any order. Also, although various operational flows are presented in a sequence(s), it should be understood that the various operations may be performed in other orders than those which are illustrated, or may be performed concurrently. Examples of such alternate orderings may include overlapping, interleaved, interrupted, reordered, incremental, preparatory, supplemental, simultaneous, reverse, or other variant orderings, unless context dictates otherwise.

Furthermore, terms like "responsive to," "related to," or other past-tense adjectives are generally not intended to exclude such variants, unless context dictates otherwise.

[0021] This application may make reference to one or more trademarks, e.g., a word, letter, symbol, or device adopted by one manufacturer or merchant and used to identify and/or distinguish his or her product from those of others. Trademark names used herein are set forth in such language that makes clear their identity, that distinguishes them from common descriptive nouns, that have fixed and definite meanings, or, in many if not all cases, are accompanied by other specific identification using terms not covered by trademark. In addition, trademark names used herein have meanings that are well-known and defined in the literature, or do not refer to products or compounds for which knowledge of one or more trade secrets is required in order to divine their meaning. All trademarks referenced in this application are the property of their respective owners, and the appearance of one or more trademarks in this application does not diminish or otherwise adversely affect the validity of the one or more trademarks. All trademarks, registered or unregistered, that appear in this application are assumed to include a proper trademark symbol, e.g., the circle R or bracketed capitalization (e.g., [trademark name]), even when such trademark symbol does not explicitly appear next to the trademark. To the extent a trademark is used in a descriptive manner to refer to a product or process, that trademark should be interpreted to represent the corresponding product or process as of the date of the filing of this patent application.

[0022] Throughout this application, the terms "in an embodiment," "in one embodiment," "in several embodiments," "in at least one embodiment," "in various embodiments," and the like may be used. Each of these terms, and all such similar terms, should be construed as "in at least one embodiment, and possibly but not necessarily all embodiments," unless explicitly stated otherwise. Specifically, unless explicitly stated otherwise, the intent of phrases like these is to provide non-exclusive and non-limiting examples of implementations of the invention. The mere statement that one, some, or many embodiments include one or more things or have one or more features does not imply that all embodiments include one or more things or have one or more features, but also does not imply that such embodiments must exist. It is a mere indicator of an example and should not be interpreted otherwise, unless explicitly stated as such.

[0023] Those skilled in the art will appreciate that the foregoing specific exemplary processes and/or devices and/or technologies are representative of more general processes and/or devices and/or technologies taught elsewhere herein, such as in the claims filed herewith and/or elsewhere in the present application.

1. A thing/operation disclosure comprising:

a method substantially as shown and described in the detailed description and/or drawings and/or elsewhere herein.

2. A thing/operation disclosure comprising:

a device substantially as shown and described in the detailed description and/or drawings and/or elsewhere herein.

3. A thing/operation disclosure comprising:

a method including one or more of the modules and/or functions described in the detailed description and/or drawings and/or elsewhere herein.

4. A thing/operation disclosure comprising:

a device implementing one or more of the devices, stores, and/or interfaces described in the detailed description and/or drawings and/or elsewhere herein.

5. A computationally-implemented thing/operation disclosure, comprising:

first means for carrying out one or more first steps as shown and described in the detailed description and/or drawings and/or elsewhere herein; and

second means for carrying out one or more second steps as shown and described in the detailed description and/or drawings and/or elsewhere herein.

6. A computationally-implemented thing/operation disclosure, comprising:

circuitry for carrying out one or more first steps as shown and described in the detailed description and/or drawings and/or elsewhere herein.

7. A thing/operation disclosure, comprising:

a signal-bearing medium bearing:

one or more instructions for configuration as one or more of the modules and/or functions described in the detailed description and/or drawings and/or elsewhere herein.

8. A thing/operation disclosure defined by a computational language, comprising: one or more interchained physical machines ordered as one or more of the modules and/or functions described in the detailed description and/or drawings and/or elsewhere herein.

9. A thing/operation disclosure, comprising:

one or more general purpose integrated circuits configured to receive instructions to configure as one or more of the modules and/or functions described in the detailed description and/or drawings and/or elsewhere herein.

10. A thing/operation disclosure comprising:

an integrated circuit configured to purpose itself as one or more of the modules and/or functions described in the detailed description and/or drawings and/or elsewhere herein.

11. A thing/operation disclosure, comprising:

one or more elements of programmable hardware programmed to function as one or more of the modules and/or functions described in the detailed description and/or drawings and/or elsewhere herein.

This Roman Numeral Section, and the Corresponding Figures, Were Copied from a Pending United States Application into this PCT Application in View of Meeting the Bar Date, So Each Figure Under this Roman Numeral Section Should Be Read "As If Having Roman Numeral Denotation Sufficient to Distinguish from Other Figures Copied from Other Applications in View of the Bar Date," Such Clerical Issues to Be Cured by Subsequent Amendment

Devices, Methods and Systems for Visual Imaging Arrays

DETAILED DESCRIPTION

[00110] Referring now to Fig. 1, Fig. 1 shows various implementations of the overall system. At a high level, Fig. 1 shows various implementations of a multiple user video imaging array (hereinafter interchangeably referred to as a "MUVIA"). It is noted that the designation "MUVIA" is merely shorthand and descriptive of an exemplary embodiment, and not a limiting term. Although "multiple user" appears in the name MUVIA, neither multiple users nor even a single user is required. Further, "video" is used in the designation "MUVIA," but MUVIA systems also may capture still images, multiple images, audio data, electromagnetic waves outside the visible spectrum, and other data as will be described herein. Further, "imaging array" may be used in the MUVIA designation, but the image sensor in MUVIA is not necessarily an array or even multiple sensors (although commonly implemented as larger groups of image sensors, single-sensor implementations are also contemplated), and "array" here does not necessarily imply any specific structure, but rather any grouping of one or more sensors.

[00111] Generally, although not necessarily required, a MUVIA system may include one or more of a user device (e.g., hereinafter interchangeably referred to as a "client device," in recognition that a user may not necessarily be a human, living, or organic), a server, and an image sensor array. A "server" in the context of this application may refer to any device, program, or module that is not directly connected to the image sensor array or to the client device, including any and all "cloud" storage, applications, and/or processing.

[00112] For example, in an embodiment, e.g., as shown in Fig. 1-A, Fig. 1-K, Fig. 1-U, Fig. 1-AE, and Fig. 1-AF, the system may include one or more of image sensor array 3200, array local storage and processing module 3300, server 4000, and user device 5200. Each of these portions will be discussed in more detail herein.

[00113] Referring now to Fig. 1-A, Fig. 1-A depicts user device 5200, which is a device that may be operated or controlled by a user of a MUVIA system. It is noted here that "user" is merely provided as a designation for ease of understanding, and does not imply control by a human or other organism, sentient or otherwise. In an embodiment, for example, in a security-type embodiment, the user device 5200 may be mostly or completely unmonitored, or may be monitored by an artificial intelligence, or by a combination of artificial intelligence, pseudo-artificial intelligence (e.g., intelligence amplification), and human intelligence.

[00114] User device 5200 may be, but is not limited to, a wearable device (e.g., glasses, goggles, headgear, a watch, clothing), an implant (e.g., a retinal-implant display), a computer of any kind (e.g., a laptop computer, desktop computer, mainframe, server, etc.), a tablet or other portable device, a phone or other similar device (e.g., smartphone, personal digital assistant), a personal electronic device (e.g., music player, CD player), a home appliance (e.g., a television, a refrigerator, or any other so-called "smart" device), a piece of office equipment (e.g., a copier, scanner, fax device, etc.), a camera or other camera-like device, a video game system, an entertainment/media center, or any other electrical equipment that has a functionality of presenting an image (whether visually, e.g., via a screen, or by other sensory-stimulating means).

[00115] User device 5200 may be capable of presenting an image, which, for purposes of clarity and conciseness, will be referred to as displaying an image, although the presentation may occur through forms other than generating light waves in the visible spectrum, and the image is not required to be presented at all times or even at all.

For example, in an embodiment, user device 5200 may receive images from server 4000 (or directly from the image sensor array 3200, as will be discussed herein), and may store the images for later viewing, or for processing internally, or for any other reason.

[00116] Referring again to Fig. 1-A, in an embodiment, user device 5200 may include a user selection accepting module 5210. User selection accepting module 5210 may be configured to receive user input about what the user wants to see. For example, as shown in Fig. 1-A in the exemplary interface 5212, the user selection accepting module 5210 may show an image from image sensor array 3200, and the user may "pan" and "zoom" the image using any known interface, including, but not limited to, keyboard, mouse, touch, haptic, augmented reality interface, voice command, or nonverbal motion commands (e.g., as part of a video game system interface, e.g., the Microsoft Kinect). It is noted, and as will be discussed in more detail herein, that the camera itself is not "zooming" or "panning," because the camera does not move. What is happening is that different pixels that are captured by the image sensor array 3200 are kept by the image sensor array 3200 and transmitted to the server 4000.
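
By way of a brief illustrative sketch (not part of the claimed subject matter; the function and variable names in this and the following sketches are hypothetical), such "panning" and "zooming" may be modeled in Python as pure pixel selection over a larger captured frame:

import numpy as np

def select_view(frame, x, y, width, height):
    # Panning changes (x, y); zooming changes (width, height).
    # The sensor array itself never moves.
    return frame[y:y + height, x:x + width]

# Stand-in for one exposure of the full array (a real exposure, e.g.,
# 120 megapixels, would be far larger; a small array keeps this runnable).
full_scene = np.arange(100 * 100).reshape(100, 100)
view = select_view(full_scene, x=10, y=20, width=32, height=18)
print(view.shape)  # (18, 32) -- only the requested pixels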

[00117] In an embodiment, the user selection accepting module may accept a selection of a particular thing, e.g., a building, an animal, or any other object whose representation is present on the screen. Moreover, a user may use a text box to "search" the image for a particular thing, and processing, done at the user device 5200 or at the server 4000, may determine the image and the zoom level for viewing that thing. The search for a particular thing may include a generic search, e.g., "search for people," or "search for penguins," or a more specific search, e.g., "search for the Space Needle," or "search for the White House." The search for a particular thing may take on any known contextual search, e.g., an address, a text string, or any other input.

[00118] In an embodiment, the "user selection" facilitated by the user selection accepting module 5210 may not involve a user at all. For example, in an embodiment, e.g., in a security embodiment, the user selection may be handled completely by machine, and may include "select any portion of the image with movement," or "select any portion of the image in which a person is recognized," or "select any portion of the image in which a particular person, e.g., a person on the FBI most wanted list" is recognized.
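
One hedged sketch of such machine-driven selection, assuming a simple block-wise frame-difference test (a real implementation might instead use motion vectors or face recognition; all names here are hypothetical), follows:

import numpy as np

def regions_with_movement(prev_frame, curr_frame, block=16, threshold=10.0):
    # Flag fixed-size blocks whose mean absolute change exceeds a threshold,
    # returning (x, y, w, h) rectangles that could be "selected" for capture.
    h, w = curr_frame.shape
    selected = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            diff = np.abs(curr_frame[y:y+block, x:x+block].astype(float)
                          - prev_frame[y:y+block, x:x+block].astype(float))
            if diff.mean() > threshold:
                selected.append((x, y, block, block))
    return selected

prev = np.zeros((64, 64), dtype=np.uint8)
curr = prev.copy()
curr[16:32, 16:32] = 200  # simulated movement in one block
print(regions_with_movement(prev, curr))  # [(16, 16, 16, 16)]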

[00119] Referring again to Fig. 1-A, in an embodiment, user device 5200 may include a user selection transmitting module 5220. The user selection transmitting module 5220 may take the user selection from user selection accepting module 5210, and transmit the selection to the server 4000. The transmission may include some pre-processing, for example, the user device 5200 may determine the size and parameters of the image prior to sending the request to the server 4000, or that processing may be handled by the server 4000. Following the thick-line arrow leftward from user selection transmitting module 5220 through to Fig. 1-K, the transmission goes to server 4000, as will be discussed herein. It is noted that the transmission to the server 4000 may also include data about the user device, for example, the screen resolution, the window size, the type of device, an identity of the user, a level of service the user has paid for (in embodiments in which such services are prioritized by the camera/server), other capabilities of the device, e.g., framerate, and the like.
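
A hypothetical request payload illustrating such a transmission (the field names are illustrative only and do not appear elsewhere in this application) might look like:

# One possible shape for a user selection request, bundling the selected
# region with device metadata the server can use for validation.
user_selection_request = {
    "region": {"x": 2048, "y": 1024, "width": 1920, "height": 1080},
    "device": {
        "screen_resolution": (1920, 1080),
        "window_size": (1280, 720),
        "device_type": "tablet",
        "max_framerate": 30,
    },
    "user": {"id": "user-123", "service_tier": "priority"},
}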

[00120] Referring again to Fig. 1-A, Fig. 1-A also includes a selected image receiving module 5230 and a user selection presenting module 5240, which will be discussed in more detail herein, with respect to the dataflow of this embodiment.

[00121] Referring now to Fig. 1-K, Figs. 1-K and 1-U show an embodiment of a server 4000 that communicates with one or both of user device 5200 and array local storage and processing module 3300. Server 4000 may be a single computing device, or may be many computing devices, which may or may not be in proximity with each other.

[00122] Referring again to Fig. 1-K, server 4000 may include a user request reception module 4010. The user request reception module 4010 may receive the transmitted request from user selection transmitting module 5220. The user request reception module 4010 may then turn over processing to user request validation module 4020, which may perform, among other things, a check to make sure the user is not requesting more resolution than what their device can handle. For example, if the server has learned (e.g., through gathered information, or through information that was transmitted with the user request or in a same session as the user request) that the user is requesting a 1920x1080 resolution image, and the maximum resolution for the device is 1334x750, then the request will be modified so that no more than the maximum resolution that can be handled by the device is requested. In an embodiment, this may conserve the bandwidth required to transmit from the MUVIA to the server 4000 and/or the user device 5200.
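
A minimal sketch of such a validation step, assuming the goal is simply to clamp the request to the device's maximum resolution while preserving its aspect ratio (names hypothetical):

def clamp_request_to_device(req_w, req_h, max_w, max_h):
    # Never request more pixels than the device can display; fitting within
    # the device's maximum resolution conserves MUVIA-to-server bandwidth.
    scale = min(max_w / req_w, max_h / req_h, 1.0)
    return int(req_w * scale), int(req_h * scale)

# A 1920x1080 request from a device that can only show 1334x750:
print(clamp_request_to_device(1920, 1080, 1334, 750))  # (1333, 750)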

[00123] Referring again to Fig. 1-K, in an embodiment, server 4000 may include a user request latency management module 4030. User request latency management module 4030 may, in conjunction with user device 5200, attempt to reduce the latency from the time a specific image is requested by user device 5200 to the time the request is acted upon and data is transmitted to the user. The details for this latency management will be described in more detail herein, with varying techniques that may be carried out by any or all of the devices in the chain (e.g., user device, camera array, and server). As an example, in an embodiment, a lower resolution version of the image, e.g., that is stored locally or on the server, may be sent to the user immediately upon the request, and then that image is updated with the actual image taken by the camera. In an embodiment, user request latency management module 4030 also may handle static gap-filling, that is, if the image captured by the camera is unchanging, e.g., has not changed for a particular period of time, then a new image need not be captured, and an older image, that may be stored on server 4000, may be transmitted to the user device 5200. This process also will be discussed in more detail herein.
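
The following sketch illustrates both techniques under stated assumptions (the five-second staleness window and the in-memory cache are arbitrary choices; all names are hypothetical):

import time

class LatencyManager:
    def __init__(self, static_window_s=5.0):
        self.cache = {}                      # region -> (timestamp, image)
        self.static_window_s = static_window_s

    def respond(self, region, fetch_fresh):
        # Yield images for a request, best-available first: a cached
        # (possibly lower-resolution) image goes out immediately; if the
        # cache is fresh enough, the scene is presumed static and no new
        # capture is requested (static gap-filling). Otherwise the
        # up-to-date image follows once the camera array answers.
        cached = self.cache.get(region)
        if cached is not None:
            timestamp, image = cached
            yield image                      # instant; hides round-trip latency
            if time.time() - timestamp < self.static_window_s:
                return                       # scene unchanged recently; done
        fresh = fetch_fresh(region)          # slower round trip to the array
        self.cache[region] = (time.time(), fresh)
        yield fresh

# Usage (hypothetical): for img in mgr.respond(region, fetch): display(img)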

[00124] Referring now to Fig. 1-U, which shows more of server 4000, in an embodiment, server 4000 may include a consolidated user request transmission module 4040, which may be configured to consolidate all the user requests, perform any necessary pre-processing on those requests, and send the request for particular sets of pixels to the array local storage and processing module 3300. The process for consolidating the user requests and performing pre-processing will be described in more detail herein with respect to some of the other exemplary embodiments. In this embodiment, however, consolidated user request transmission module 4040 transmits the request (exiting leftward from Fig. 1-U and traveling downward to Fig. 1-AE, through a pathway identified in Fig. 1-AE as lower-bandwidth communication from remote server 3515). It is noted here that "lower bandwidth communication" does not necessarily mean "low bandwidth" or imply any specific number about the bandwidth; it is simply lower than the relatively higher bandwidth communication 3505 from the actual image sensor array to the array local storage and processing module 3300, which will be discussed in more detail herein.

[00125] Referring again to Fig. 1-U, server 4000 also may include requested pixel reception module 4050, user request preparation module 4060, and user request transmission module 4070, which will be discussed in more detail herein, with respect to the dataflow of this embodiment.

[00126] Referring now to Figs. 1-AE and 1-AF, Figs. 1-AE and 1-AF show an image sensor array ("ISA") 3200 and an array local storage and processing module 3300, each of which will now be described in more detail.

[00127] Image sensor array 3200 may include one or more image sensors that may, in an embodiment, be statically pointed at a particular object or scene. Image sensor array 3200 may be a single image sensor, or more commonly, may be a group of individual image sensors 3201 that are combined to create a larger field of view. For example, in an embodiment, ten megapixel sensors may be used for each individual image sensor 3201. With twelve of these sensors, the effective field of view, loss-less zoom, and so forth may be increased substantially. These numbers are for example only, and any number of sensors and/or megapixel image sensor capacities may be used.

[00128] The use of many individual sensors may create a very large number of pixels captured for each exposure of the image sensor array 3200. Thus, these pixels are transmitted via a higher bandwidth communication 3505 to the array local storage and processing module 3300. In an embodiment, array local storage and processing module 3300 is integrated into the image sensor array. In another embodiment, the array local storage and processing module 3300 is separate from, but directly connected to, the image sensor array 3200 (e.g., via a USB 3.0 cable). It is noted that "higher bandwidth communication 3505" does not require a specific amount of bandwidth, but only that the bandwidth for this communication is relatively higher than the bandwidth communication from the array local processing module 3300 to the remote server, which may be, but is not required to be, located further away temporally.
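
As a rough worked example of why the two links differ (the bytes-per-pixel and frame-rate figures below are assumptions, not taken from this application), the raw output of such an array is substantial:

# Rough arithmetic for the example above: twelve 10-megapixel sensors.
sensors = 12
megapixels_each = 10
bytes_per_pixel = 3            # e.g., 8-bit RGB; an assumption
frames_per_second = 30         # an assumption

pixels_per_exposure = sensors * megapixels_each * 1_000_000   # 120 MP
raw_bytes_per_second = pixels_per_exposure * bytes_per_pixel * frames_per_second
print(f"{raw_bytes_per_second / 1e9:.1f} GB/s raw")           # 10.8 GB/s raw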

[00129] It is noted that, because of the large number of pixels captured by image sensor array 3200, mechanical changes to the image sensor array are not generally required, although such mechanical changes are not excluded from these embodiments. For example, because the array has a very large field of view, with very high resolution, "pan" and "zoom" functions may be handled optically, rather than by mechanically changing the focal point of the lenses or by physically pointing the array at a different location. This may reduce the complexity required of the device, and also may improve the speed at which different views may be generated by the image sensor array 3200.

[00130] Referring again to Fig. 1-AE, the image sensor array 3200 may capture an image that is received by image capturing module 3305. Image capturing module 3305 may take the captured image and compare it to a consolidated user request, e.g., which is provided by a consolidated user request reception module 3310. Consolidated user request reception module 3310 may receive the communication from server 4000 regarding which pixels of the image have been requested. Through use of the consolidated user request and the captured image, pixel selection module 3320 may select the pixels that have been specifically requested by the user, and mark those pixels for transmission back to the server.

[00131] After the pixels to be kept are identified, the other pixels that are not to be kept are removed, e.g., decimated at unused pixel decimation module 3330. In an embodiment, these pixels are simply discarded, e.g., not stored in a long-term memory, but rather routed to a digital trash 3317. In another embodiment, some or all of these pixels are stored in a local memory, e.g., local memory 3315. From here, these pixels may be transmitted to various locations at off-peak times, may be kept for image processing by the array local processing module 3300, or may be subject to other manipulations or processing separate from the user requests.
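
A minimal sketch of such decimation, assuming a single consolidated rectangle and NumPy arrays as stand-ins for exposures (names hypothetical):

import numpy as np

def decimate_unused_pixels(frame, requested, keep_locally=False):
    # Split one exposure into (pixels to transmit, pixels to keep or drop).
    # `requested` is an (x, y, w, h) rectangle from the consolidated request.
    # Unrequested pixels are either discarded outright (digital trash) or
    # retained in local memory for off-peak transmission or later processing.
    x, y, w, h = requested
    selected = frame[y:y + h, x:x + w].copy()
    if keep_locally:
        leftover = frame.copy()
        leftover[y:y + h, x:x + w] = 0    # zero out the transmitted region
    else:
        leftover = None                   # simply not stored anywhere
    return selected, leftover

frame = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
to_server, local = decimate_unused_pixels(frame, (8, 8, 16, 16))
print(to_server.shape, local)             # (16, 16) None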

[00132] Referring again to Fig. 1-AE, the selected pixels then may be transmitted to the server 4000 using selected pixel transmission module 3340. Selected pixel transmission module 3340 may include any transmission equipment necessary, e.g., cellular radio, wireless adapter, and the like, depending on the format of communication. In an embodiment, only those pixels which have been requested are transmitted to the server via lower-bandwidth communication 3510. Similarly to lower-bandwidth communication 3515, the lower-bandwidth communication 3510 does not refer to a specific amount of bandwidth, just that the amount of bandwidth is relatively lower than higher-bandwidth communication 3505.

[00133] It is noted that more pixels than what are specifically requested by the user may be transmitted, in certain embodiments. For example, the array local processing module 3300 may send pixels that border the user's requested area, but are outside the user's requested area. In an embodiment, as will be discussed herein, those pixels may be sent at a different resolution or using a different kind of compression. In another embodiment, the additional pixels may simply be sent in the same manner as the requested pixels. In still another embodiment, server 4000 may expand the user requested areas, so that array local processing module 3300 may send only the requested pixels, but the requested pixels cover more area than what the user originally requested. These additional pixels may be transmitted and "cached" by the server or local device, which may be used to decrease latency times, in a process that will be discussed more herein.
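
One way to picture the server-side expansion described above is as growing the requested rectangle by a fixed margin, clamped to the scene bounds (the margin value is an assumption; names hypothetical):

def expand_request(x, y, w, h, margin, scene_w, scene_h):
    # Grow a requested rectangle by a margin, clamped to the scene bounds.
    # The extra border pixels can be cached by the server or user device so
    # that small pans are served instantly, reducing perceived latency.
    nx, ny = max(0, x - margin), max(0, y - margin)
    nw = min(scene_w, x + w + margin) - nx
    nh = min(scene_h, y + h + margin) - ny
    return nx, ny, nw, nh

print(expand_request(100, 50, 640, 360, margin=64, scene_w=4000, scene_h=3000))
# (36, 0, 768, 474) -- clamped at the top edge of the scene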

[00134] Referring now to Fig. 2, one process that may be carried out by array local processing module 3300 includes process 200. Process 200 may include step 202 of capturing an image through use of an array of more than one image sensor, step 204 of selecting one or more views from the image, wherein the one or more views from the image represent selected views, and step 206 of transmitting only the determined one or more views from the image to a remote location. In some embodiments, one or more of these steps may instead be carried out by server 4000.
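
A hedged sketch of process 200, with all three collaborators stubbed out (nothing here reflects an actual MUVIA interface; every name is hypothetical):

class FakeSensor:
    def capture(self):
        return [[0] * 4 for _ in range(4)]     # stand-in tile of pixels

def process_200(sensors, select_views, transmit):
    image = [s.capture() for s in sensors]     # step 202: capture via array
    views = select_views(image)                # step 204: select views
    for view in views:                         # step 206: transmit only those
        transmit(view)

process_200(
    sensors=[FakeSensor(), FakeSensor()],
    select_views=lambda image: [image[0]],     # keep one tile as the "view"
    transmit=print,
)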

[00135] Referring back to Fig. 1-U, the transmitted pixels transmitted from selected pixel transmission module 3340 of array local processing module 3300 may be received by server 4000, e.g., at requested pixel reception module 4050. Requested pixel reception module 4050 may receive the requested pixels and turn them over to user request preparation module 4060, which may "unpack" the requested pixels, e.g., determining which pixels go to which user, and at what resolutions, along with any postprocessing, including image adjustment, adding in missing cached data, or adding additional data to the images (e.g., advertisements or other data). In an embodiment, server 4000 also may include a user request transmission module 4070, which may be configured to transmit the requested pixels back to the user device 5200.

[00136] Referring again to Fig. 1-A, user device 5200 may include a selected image receiving module 5230, which may receive the pixels that were sent by the server 4000, and user selection presenting module 5240, which may display the requested pixels to the user, e.g., by showing them on a screen of the device. In an embodiment, the display of the image may be carried out through the exemplary interface, which allows a cycle of user requests and new images to be shown as the user navigates through what is seen on the MUVIA, e.g., as shown in Fig. 1-A.

[00137] Figs. 1-B, 1-C, 1-M, 1-W, 1-AG, and 1-AH show another embodiment of the MUVIA system, in which multiple user devices 5510, 5520, and 5530 may request images captured by the same image sensor array 3200.

[00138] Referring now to Figs. 1-B and 1-C, user device 5510, user device 5520, and user device 5530 are shown. In an embodiment, user devices 5510, 5520, and 5530 may have some or all of the same components as user device 5200, but these are not shown here for clarity and ease of understanding the drawing. For each of user devices 5510, 5520, and 5530, exemplary screen resolutions have been chosen. Nothing about these particular numbers is significant; they are merely illustrative, and any other numbers could have been chosen in their place.

[00139] For example, in an embodiment, referring to Fig. 1-B, user device 5510 may have a screen resolution of 1920x1080 (e.g., colloquially referred to as "HD quality"). User device 5510 may send an image request to the server 4000, and may also send data regarding the screen resolution of the device.

[00140] Referring now to Fig. 1-C, user device 5520 may have a screen resolution of 1334x750. User device 5520 may send another image request to the server 4000, and, in an embodiment, instead of sending data regarding the screen resolution of the device, may send data that identifies what kind of device it is (e.g., an Apple-branded smartphone). Server 4000 may use this data to determine the screen resolution for user device 5520 through an internal database, or through contacting an external source, e.g., a manufacturer of the device or a third party supplier of data about devices.

[00141] Referring again to Fig. 1-C, user device 5530 may have a screen resolution of 640x480, and may send the request by itself to the server 4000, without any additional data. In addition, server 4000 may receive independent requests from various users to change their current viewing area on the device.

[00142] Referring now to Fig. 1-M, server 4000 may include user request reception module 4110. User request reception module 4110 may receive requests from multiple user devices, e.g., user devices 5510, 5520, and 5530. Server 4000 also may include an independent user view change request reception module 4115, which, in an embodiment, may be a part of user request reception module 4110, and may be configured to receive requests from users that are already connected to the system, to change the view of what they are currently seeing.

[00143] Referring again to Fig. 1-M, server 4000 may include relevant pixel selection module 4120 configured to combine the user selections into a single area, as shown in Fig. 1-M. It is noted that, in an embodiment, the different user devices may request areas that overlap each other. In this case, there may be one or more overlapping areas, e.g., overlapping areas 4122. In an embodiment, the overlapping areas are only transmitted once, in order to save data/transmission costs and increase efficiency.
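
A minimal sketch of such consolidation, modeling the combined selection as a boolean mask so that overlapping pixels are counted, and would be transmitted, exactly once (names hypothetical):

import numpy as np

def consolidate_requests(requests, scene_w, scene_h):
    # Union of all users' rectangles as a boolean mask over the scene.
    # Overlapping areas (e.g., 4122) set the same mask cells, so each
    # pixel is requested only once.
    mask = np.zeros((scene_h, scene_w), dtype=bool)
    for x, y, w, h in requests:
        mask[y:y + h, x:x + w] = True
    return mask

mask = consolidate_requests(
    [(0, 0, 100, 100), (50, 50, 100, 100)],    # two overlapping users
    scene_w=400, scene_h=300,
)
print(mask.sum())   # 17500 pixels, not 20000: the overlap counts once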

[00144] Referring now to Fig. 1-W, server 4000 may include selected pixel transmission to ISA module 4130. Module 4130 may take the relevant selected pixels, and transmit them to the array local processing module 3400 of image sensor array 3200. Selected pixel transmission to ISA module 4130 may include communication components, which may be shared with other transmission and/or reception modules.

[00145] Referring now to Fig. 1-AG, array local processing module 3400 may communicate with image sensor array 3200. Similarly to Figs. 1-AE and 1-AF, Figs. 1- AG and 1-AH show array local processing module 3400 and image sensor array 3200, respectively.

[00146] Image sensor array 3200 may include one or more image sensors that may, in an embodiment, be statically pointed at a particular object or scene. Image sensor array 3200 may be a single image sensor, or more commonly, may be a group of individual image sensors 3201 that are combined to create a larger field of view. For example, in an embodiment, ten megapixel sensors may be used for each individual image sensor 3201. With twelve of these sensors, the effective field of view, loss-less zoom, and so forth may be increased substantially. These numbers are for example only, and any number of sensors and/or megapixel image sensor capacities may be used.

[00147] The use of many individual sensors may create a very large number of pixels captured for each exposure of the image sensor array 3200. Thus, these pixels are transmitted via a higher bandwidth communication 3505 to the array local storage and processing module 3400. In an embodiment, array local storage and processing module 3400 is integrated into the image sensor array. In another embodiment, the array local storage and processing module 3400 is separate from, but directly connected to, the image sensor array 3200 (e.g., via a USB 3.0 cable). It is noted that "higher bandwidth communication 3505" does not require a specific amount of bandwidth, but only that the bandwidth for this communication is relatively higher than the bandwidth communication from the array local processing module 3400 to the remote server, which may be, but is not required to be, located further away temporally.

[00148] It is noted that, because of the large number of pixels captured by image sensor array 3200, mechanical changes to the image sensor array are not generally required, although such mechanical changes are not excluded from these embodiments. For example, because the array has a very large field of view, with very high resolution, "pan" and "zoom" functions may be handled optically, rather than by mechanically changing the focal point of the lenses or by physically pointing the array at a different location. This may reduce the complexity required of the device, and also may improve the speed at which different views may be generated by the image sensor array 3200.

[00149] Referring again to Fig. 1-AG, the image sensor array 3200 may capture an image that is received by image capturing module 3405. Image capturing module 3405 may take the captured image and compare it to a consolidated user request, e.g., which is provided by a consolidated user request reception module 3410. Consolidated user request reception module 3410 may receive the communication from server 4000 regarding which pixels of the image have been requested. Through use of the consolidated user request and the captured image, pixel selection module 3420 may select the pixels that have been specifically requested by the user, and mark those pixels for transmission back to the server.

[00150] After the pixels to be kept are identified, the other pixels that are not to be kept are removed, e.g., decimated at unused pixel decimation module 3430. In an embodiment, these pixels are simply discarded, e.g., not stored in a long-term memory, but rather routed to a digital trash 3417. In another embodiment, some or all of these pixels are stored in a local memory, e.g., local memory 3415. From here, these pixels may be transmitted to various locations at off-peak times, may be kept for image processing by the array local processing module 3400, or may be subject to other manipulations or processing separate from the user requests.

[00151] Referring again to Fig. 1-AG, array local processing module 3400 may include flagged selected pixel transmission module 3440, which takes the pixels identified as requested (e.g., "flagged") and transmits them back to the server 4000 for further processing. Similarly to as previously described, this transmission may utilize a lower-bandwidth channel, and module 3440 may include all necessary hardware to effect that lower-bandwidth transmission to server 4000.

[00152] Referring again to Fig. 1-W, the flagged selected pixel transmission module 3440 of array local processing module 3400 may transmit the flagged pixels to server 4000. Specifically, flagged selected pixel transmission module 3440 may transmit the pixels to flagged selected pixel reception from ISA module 4140 of server 4000, as shown in Fig. 1-W.

[00153] Referring again to Fig. 1-W, server 4000 also may include flagged selected pixel separation and duplication module 4150, which may, effectively, reverse the process of combining the pixels from the various selections, duplicating overlapping areas where necessary, and creating the requested images for each of the user devices that requested images. Flagged selected pixel separation and duplication module 4150 also may include the post-processing done to the image, including filling in cached versions of images, image adjustments based on the device preferences and/or the user preferences, and any other image post-processing.
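
The separation step can be pictured as the inverse of the earlier consolidation sketch: each user's rectangle is cut back out of the returned pixels, duplicating any overlap (a hedged sketch; names hypothetical):

import numpy as np

def separate_and_duplicate(scene_pixels, user_requests):
    # Reverse the consolidation: cut each user's rectangle back out.
    # `scene_pixels` holds the flagged pixels placed at their scene
    # coordinates; overlapping regions are duplicated into every image
    # that needs them. Per-user post-processing would follow here.
    return {
        user: scene_pixels[y:y + h, x:x + w].copy()
        for user, (x, y, w, h) in user_requests.items()
    }

scene = np.random.randint(0, 256, (300, 400), dtype=np.uint8)
images = separate_and_duplicate(
    scene, {"5510": (0, 0, 100, 100), "5520": (50, 50, 100, 100)}
)
print({u: im.shape for u, im in images.items()})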

[00154] Referring now to Fig. 1-M (as data flows "northward" from Fig. 1-W from module 4150), server 4000 may include pixel transmission to user device module 4160, which may be configured to transmit the pixels that have been separated out and processed to the specific users that requested the image. Pixel transmission to user device module 4160 may handle the transmission of images to the user devices 5510, 5520, and 5530. In an embodiment, pixel transmission to user device module 4160 may have some or all components in common with user request reception module 4110.

[00155] Following the arrow of data flow to the right and upward from module 4160 of server 4000, the requested user images arrive at user device 5510, user device 5520, and user device 5530, as shown in Figs. 1-B and 1-C. The user devices 5510, 5520, and 5530 may present the received images as previously discussed and/or as further discussed herein.

[00156] Referring now to Fig. 3, server 4000 may execute one or more operations 300. In an embodiment, operations 300 may include one or more of a step 302 depicting receiving a request for a particular image of a scene (e.g., in an embodiment, step 302 may include step 302A for receiving a first request for a first particular image of a scene and a second request for a second particular image of the scene), a step 304 depicting transmitting the request for the particular image of the scene to a multi-image sensor array that is configured to capture an image that is larger than the image of the scene, a step 306 depicting receiving only the particular image from the multi-image sensor array, and a step 308 depicting transmitting the received particular image of the scene.

[00157] Referring again to Fig. 1, Figs. 1-E, 1-O, 1-Y, 1-AH, and 1-AI depict a MUVIA implementation according to an embodiment. In an embodiment, referring now to Fig. 1-E, a user device 5600 may include a target selection reception module 5610. Target selection reception module 5610 may be a component that allows the user to select a "target" from the image, that is, a point of interest from the image. For example, in the shown example, the MUVIA array is pointed at a football stadium, e.g., CenturyLink Field. As an example, a user may select one of the football players visible on the field as a "target." This may be facilitated by a target presentation module, e.g., target presentation module 5612, which may present one or more images (e.g., which may be various versions of images from MUVIA, at different resolutions or not up-to-date) from which the user may select the target, e.g., the football player.

[00158] In an embodiment, target selection reception module 5610 may include an audible target selection module 5614 which may be configured to allow the user to select a target using audible commands, without requiring physical interaction with a device.

[00159] Referring again to Fig. 1, e.g., Fig. 1-E, in an embodiment, user device 5600 may include selected target transmission module 5620. Selected target transmission module 5620 may be configured to take the target selected by the user, and transmit the selected target to the server 4000.

[00160] Referring now to Fig. 1-O, Fig. 1-O (and Fig. 1-Y to the direct "south" of Fig. 1-O) shows an embodiment of server 4000. For example, in an embodiment, server 4000 may include a selected target reception module 4210. In an embodiment, selected target reception module 4210 of server 4000 may receive the selected target from the user device 5600. The selected target data may take various formats, e.g., it may be image data, it may be metadata that identifies the selected target, it may be some other designation, e.g., an ID number, a tracking number, or a piece of information, like a license plate or a social security number. The selected target data may be an address or a physical description, or any other instantiation of data that can be used to identify something.

[00161] Referring again to Fig. 1-O, in an embodiment, server 4000 may include selected target identification module 4220, which may be configured to take the target data received by selected target reception module 4210 and determine an image that needs to be captured in order to obtain an image that contains the selected target (e.g., in the shown example, the football player). In an embodiment, selected target identification module 4220 may use images previously received (or, in an embodiment, current images) from the image sensor array 3200 to determine the parameters of an image that contains the selected target. For example, in an embodiment, lower-resolution images from the image sensor array 3200 may be transmitted to server 4000 for determining where the target is located within the image, and then specific requests for portions of the image may be transmitted to the image sensor array 3200, as will be discussed herein.

[00162] In an embodiment, server 4000 may perform processing on the selected target data, and/or on image data that is received, in order to create a request that is to be transmitted to the image sensor array. For example, in the given example, the selected target data is a football player. The server 4000, that is, selected target identification module 4220, may perform image recognition on one or more images captured from the image sensor array to determine a "sector" of the entire scene that contains the selected target. In another embodiment, the selected target identification module 4220 may use other, external sources of data to determine where the target is. In yet another embodiment, the selected target data was selected by the user from the scene displayed by the image sensor array, so such processing may not be necessary.

[00163] Referring again to Fig. 1-O, in an embodiment, server 4000 may include pixel information selection module 4230, which may select the pixels needed to capture the target, and which may determine the size of the image that should be transmitted from the image sensor array. The size of the image may be determined based on a type of target that is selected, one or more parameters (set by the user, by the device, or by the server, which may or may not be based on the selected target), by the screen resolution of the device, or by any other algorithm. Pixel information selection module 4230 may determine the pixels to be captured in order to express the target, and may update based on changes in the target's status (e.g., if the target is moving, e.g., in the football example, once a play has started and the football player is moving in a certain direction).

[00164] Referring now to Fig. 1-Y, Fig. 1-Y includes more of server 4000 according to an embodiment. In an embodiment, server 4000 may include pixel information transmission to ISA module 4240. Pixel information transmission to ISA module 4240 may transmit the selected pixels to the array local processing module 3500 associated with image sensor array 3200.

[00165] Referring now to Figs. 1-AH and 1-AI, Fig. 1-AH depicts an image sensor array 3200, which in this example is pointed at a football stadium, e.g., CenturyLink Field. Image sensor array 3200 may include one or more image sensors that may, in an embodiment, be statically pointed at a particular object or scene. Image sensor array 3200 may be a single image sensor, or more commonly, may be a group of individual image sensors 3201 that are combined to create a larger field of view. For example, in an embodiment, ten megapixel sensors may be used for each individual image sensor 3201. With twelve of these sensors, the effective field of view, loss-less zoom, and so forth may be increased substantially. These numbers are for example only, and any number of sensors and/or megapixel image sensor capacities may be used.

[00166] The use of many individual sensors may create a very large number of pixels captured for each exposure of the image sensor array 3200. Thus, these pixels are transmitted via a higher bandwidth communication 3505 to the array local storage and processing module 3500. In an embodiment, array local storage and processing module 3500 is integrated into the image sensor array. In another embodiment, the array local storage and processing module 3500 is separate from, but directly connected to, the image sensor array 3200 (e.g., via a USB 3.0 cable). It is noted that "higher bandwidth communication 3505" does not require a specific amount of bandwidth, but only that the bandwidth for this communication is relatively higher than the bandwidth communication from the array local processing module 3500 to the remote server, which may be, but is not required to be, located further away temporally.

[00167] It is noted that, because of the large number of pixels captured by image sensor array 3200, mechanical changes to the image sensor array are not generally required, although such mechanical changes are not excluded from these embodiments. For example, because the array has a very large field of view, with very high resolution, "pan" and "zoom" functions may be handled optically, rather than by mechanically changing the focal point of the lenses or by physically pointing the array at a different location. This may reduce the complexity required of the device, and also may improve the speed at which different views may be generated by the image sensor array 3200.

[00168] Referring again to Fig. 1-AI, the image sensor array 3200 may capture an image that is received by image capturing module 3505. Image capturing module 3505 may take the captured image and compare it to a consolidated user request, e.g., which is provided by a consolidated user request reception module 3510. Consolidated user request reception module 3510 may receive the communication from server 4000 regarding which pixels of the image have been requested. Through use of the consolidated user request and the captured image, pixel selection module 3520 may select the pixels that have been specifically requested by the user, and mark those pixels for transmission back to the server.

[00169] After the pixels to be kept are identified, the other pixels that are not to be kept are removed, e.g., decimated at unused pixel decimation module 3530. In an embodiment, these pixels are simply discarded, e.g., not stored in a long-term memory, but rather routed to a digital trash 3517. In another embodiment, some or all of these pixels are stored in a local memory, e.g., local memory 3515. From here, these pixels may be transmitted to various locations at off-peak times, may be kept for image processing by the array local processing module 3500, or may be subject to other manipulations or processing separate from the user requests. In an embodiment, unused pixel decimation module 3530 may include or communicate with a lower resolution module 3514, which may, in some embodiments, be used to transmit a lower-resolution version of more of the image (e.g., an entire scene, or more of the field of view surrounding the target) to the server 4000, so that the server 4000 may accurately determine which images are required to capture the target selected by the user.

[00170] Referring again to Fig. 1-AI, the selected pixels then may be transmitted to the server 4000 using selected pixel transmission module 3540. Selected pixel transmission module 3540 may include any transmission equipment necessary, e.g., cellular radio, wireless adapter, and the like, depending on the format of communication. In an embodiment, only those pixels which have been requested are transmitted to the server via lower-bandwidth communication 3510. Similarly to lower-bandwidth communication 3515, the lower-bandwidth communication 3510 does not refer to a specific amount of bandwidth, just that the amount of bandwidth is relatively lower than higher-bandwidth communication 3505.

[00171] Referring now again to Fig. 1-Y, server 4000 may include a requested image reception from ISA module 4250. Requested image reception from ISA module 4250 may receive the image data from the array local processing module 3500 (e.g., in the arrow coming "north" from Fig. 1-AI). That image, as depicted in Fig. 1-Y, may include the target (e.g., the football player), as well as some surrounding area (e.g., the area of the field around the football player). The "surrounding area" and the specifics of what is included/transmitted from the array local processing module may be specified by the user (directly or indirectly, e.g., through a set of preferences), or may be determined by the server, e.g., in the pixel information selection module 4230.

[00172] Referring again to Fig. 1-Y, server 4000 may also include a requested image transmission to user device module 4260. Requested image transmission to user device module 4260 may transmit the requested image to the user device 5600. Requested image transmission to user device module 4260 may include components necessary to communicate with user device 5600 and may, in some embodiments, share components with one or more other modules of server 4000, e.g., a network interface, or a wireless antenna.

[00173] Referring again to Fig. 1-Y, server 4000 may include a server cached image updating module 4270. Server cached image updating module 4270 may take the images received from the array local processing module 3500 (e.g., which may include the image to be sent to the user), and compare the received images with stored or "cached" images on the server, in order to determine if the cached images should be updated. This process may happen frequently or infrequently, depending on embodiments, and may be continuously ongoing as long as there is a data connection, in some embodiments. In some embodiments, the frequency of the process may depend on the available bandwidth to the array local processing module 3500, e.g., at off-peak times, the frequency may be increased. In an embodiment, server cached image updating module 4270 compares an image received from the array local processing module 3500, and, if the image has changed, replaces the cached version of the image with the newer image.
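
A minimal sketch of such cache updating, using a mean-absolute-difference test as a stand-in for whatever change detection module 4270 might employ (the threshold value is an assumption; names hypothetical):

import numpy as np

def update_cache(cache, region, new_image, change_threshold=2.0):
    # Replace a cached image only when the new capture actually differs.
    old = cache.get(region)
    if old is None or np.abs(
        old.astype(float) - new_image.astype(float)
    ).mean() > change_threshold:
        cache[region] = new_image
        return True          # cache refreshed
    return False             # scene effectively unchanged; keep old copy

cache = {}
img = np.zeros((10, 10), dtype=np.uint8)
print(update_cache(cache, "stadium-ne", img))        # True: first capture
print(update_cache(cache, "stadium-ne", img + 1))    # False: change too small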

[00174] Referring now again to Fig. 1-E, Fig. 1-E shows user device 5600. In an embodiment, user device 5600 includes image containing selected target receiving module 5630 that may be configured to receive the image from server 4000, e.g., requested image transmission to user device module 4260 of server 4000 (e.g., depicted in Fig. 1-Y, with the data transmission indicated by a rightward-upward arrow passing through Fig. 1-Y and Fig. 1-O (to the north) before arriving at Fig. 1-E).

[00175] Referring again to Fig. 1-E, Fig. 1-E shows received image presentation module 5640, which may display the requested pixels that include the selected target to the user, e.g., by showing them on a screen of the device. In an embodiment, the display of the image may be carried out through an exemplary interface that allows the user to monitor the target, and which also may display information about the target (e.g., in an embodiment, as shown in the figures, the game statistics for the football player also may be shown), which allows a cycle of user requests and new images to be shown as the user navigates through what is seen on the MUVIA, e.g., as shown in Fig. 1-A.

[00176] Referring again to Fig. 1, Figs. 1-F, 1-P, 1-Z, and 1-AJ depict a MUVIA implementation according to an embodiment. This embodiment may be colloquially known as "live street view," in which one or more MUVIA systems allow for a user to move through an area similarly to the well-known Google-branded Maps (or Google-Street), except with the cameras working in real time. For example, in an embodiment, referring now to Fig. 1-F, a user device 5700 may include a target selection reception module 5710. Target selection reception module 5710 may be a component that allows the user to select a "target" from the image, that is, a point of interest from the image. For example, in the shown example, the MUVIA may be focused on a city, and the target may be an address, a building, a car, or a person in the city. As an example, a user may select a street address as a "target." This may be facilitated by a target presentation module, e.g., image selection presentation module 5712, which may present one or more images (e.g., which may be various versions of images from MUVIA, at different resolutions or not up-to-date) from which the user may select the target, e.g., the street address. In an embodiment, image selection presentation module 5712 may use static images that may or may not be sourced by the MUVIA system, and, in another embodiment, image selection presentation module 5712 may use current or cached views from the MUVIA system.

[00177] In an embodiment, target selection reception module 5710 may include an audible target selection module 5714 which may be configured to allow the user to select a target using audible commands, without requiring physical interaction with a device.

[00178] Referring again to Fig. 1, e.g., Fig. 1-F, in an embodiment, user device 5700 may include selected target transmission module 5720. Selected target transmission module 5720 may be configured to take the target selected by the user, and transmit the selected target to the server 4000.

[00179] Referring now to Fig. 1-P, Fig. 1-P depicts a server 4000 of the MUVIA system according to embodiments. In an embodiment, server 4000 may include a selected target reception module 4310. Selected target reception module 4310 may receive the selected target from the user device 5700. In an embodiment, server 4000 may provide all or most of the data that facilitates the selection of the target, that is, the images and the interface, which may be provided, e.g., through a web portal.

[00180] Referring again to Fig. 1-P, in an embodiment, server 4000 may include a selected image pre-processing module 4320. Selected image pre-processing module 4320 may perform one or more tasks of pre-processing the image, some of which are described herein for exemplary purposes. For example, in an embodiment, selected image pre-processing module 4320 may include a resolution determination module 4322 which may be configured to determine the resolution for the image in order to show the target (and here, resolution is merely a stand-in for any facet of the image, e.g., color depth, size, shadow, pixelation, filter, etc.). In an embodiment, selected image pre-processing module 4320 may include a cached pixel fill-in module 4324. Cached pixel fill-in module 4324 may be configured to manage which portions of the requested image are recovered from a cache, and which are updated, in order to improve performance. For example, if a view of a street is requested, certain features of the street (e.g., buildings, trees, etc.) may not need to be retrieved each time, but can be filled in with a cached version, or, in another embodiment, can be filled in by an earlier version. A check can be done to see if a red parked car is still in the same spot as it was an hour ago; if so, that part of the image may not need to be updated. Using lower-resolution/prior images stored in a memory 4215, as well as other image processing techniques, cached pixel fill-in module 4324 determines which portions of the image do not need to be retrieved, thus reducing bandwidth load on the connection between the array local processing module 3600 and the server 4000.
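
A hedged sketch of that decision, assuming a block-wise comparison between the cache and a cheap low-resolution or prior capture (block size and threshold are assumptions; names hypothetical):

import numpy as np

def blocks_needing_refresh(cached, live_preview, block=16, threshold=5.0):
    # Blocks whose content still matches the cache (e.g., the parked red
    # car that has not moved) are filled in from the cache; only changed
    # blocks are re-fetched at full quality over the 3600-to-4000 link.
    h, w = cached.shape
    stale = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            a = cached[y:y+block, x:x+block].astype(float)
            b = live_preview[y:y+block, x:x+block].astype(float)
            if np.abs(a - b).mean() > threshold:
                stale.append((x, y, block, block))
    return stale

cached = np.zeros((64, 64), dtype=np.uint8)
live = cached.copy()
live[0:16, 0:16] = 120                       # one corner of the street changed
print(blocks_needing_refresh(cached, live))  # [(0, 0, 16, 16)]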

[00181] Referring again to Fig. 1-P, in an embodiment, selected image pre-processing module 4320 of server 4000 may include a static object obtaining module 4326, which may operate similarly to cached pixel fill-in module 4324. For example, as in the example shown in Fig. 1-B, static object obtaining module 4326 may obtain prior versions of static objects, e.g., buildings, trees, fixtures, landmarks, etc., which may save bandwidth load on the connection between the array local processing module 3600 and the server 4000.

[00182] Referring again to Fig. 1-P, in an embodiment, pixel information transmission to ISA module 4330 may transmit the request for pixels (e.g., an image, after the pre-processing) to the array local processing module 3600 (e.g., as shown in Figs. 1-Z and 1-AJ, with the downward extending dataflow arrow).

[00183] Referring now to Figs. 1-Z and 1-AI, in an embodiment, an array local processing module 3600, that may be connected by a higher bandwidth connection to an image sensor array 3200, may be present.

[00184] Image sensor array 3200 may include one or more image sensors that may, in an embodiment, be statically pointed at a particular object or scene. Image sensor array 3200 may be a single image sensor, or more commonly, may be a group of individual image sensors 3201 that are combined to create a larger field of view. For example, in an embodiment, ten megapixel sensors may be used for each individual image sensor 3201. With twelve of these sensors, the effective field of view, loss-less zoom, and so forth may be increased substantially. These numbers are for example only, and any number of sensors and/or megapixel image sensor capacities may be used.

[00185] The use of many individual sensors may create a very large number of pixels captured for each exposure of the image sensor array 3200. Thus, these pixels are transmitted via a higher bandwidth communication 3605 to the array local storage and processing module 3600. In an embodiment, array local storage and processing module 3600 is integrated into the image sensor array. In another embodiment, the array local storage and processing module 3600 is separate from, but directly connected to, the image sensor array 3200 (e.g., via a USB 3.0 cable). It is noted that "higher bandwidth communication 3605" does not require a specific amount of bandwidth, but only that the bandwidth for this communication is relatively higher than the bandwidth communication from the array local processing module 3600 to the remote server, which may be, but is not required to be, located further away temporally.

[00186] It is noted that, because of the large number of pixels captured by image sensor array 3200, mechanical changes to the image sensor array are not generally required, although such mechanical changes are not excluded from these embodiments. For example, because the array has a very large field of view, with very high resolution, "pan" and "zoom" functions may be handled optically, rather than by mechanically changing the focal point of the lenses or by physically pointing the array at a different location. This may reduce the complexity required of the device, and also may improve the speed at which different views may be generated by the image sensor array 3200.

[00187] Referring again to Fig. 1-AJ and Fig. 1-Z, the image sensor array 3200 may capture an image that is received by image capturing module 3605. Image capturing module 3605 may take the captured image and compare it to a consolidated user request, e.g., which is provided by a consolidated user request reception module 3610. Consolidated user request reception module 3610 may receive the communication from server 4000 regarding which pixels of the image have been requested. Through use of the consolidated user request and the captured image, pixel selection module 3620 may select the pixels that have been specifically requested by the user, and mark those pixels for transmission back to the server.

[00188] After the pixels to be kept are identified, the other pixels that are not to be kept are removed, e.g., decimated at unused pixel decimation module 3630. In an embodiment, these pixels are simply discarded, e.g., not stored in a long-term memory, but rather routed to a digital trash 3617. In another embodiment, some or all of these pixels are stored in a local memory, e.g., local memory 3615. From here, these pixels may be transmitted to various locations at off-peak times, may be kept for image processing by the array local processing module 3600, or may be subject to other manipulations or processing separate from the user requests. In an embodiment, unused pixel decimation module 3630 may include or communicate with a lower resolution module 3614, which may, in some embodiments, be used to transmit a lower-resolution version of more of the image (e.g., an entire scene, or more of the field of view surrounding the target) to the server 4000, so that the server 4000 may accurately determine which images are required to capture the target selected by the user.

[00189] Referring again to Fig. 1-AJ, the selected pixels then may be transmitted to the server 4000 using selected pixel transmission module 3640. Selected pixel transmission module 3640 may include any transmission equipment necessary, e.g., cellular radio, wireless adapter, and the like, depending on the format of communication. In an embodiment, only those pixels which have been requested are transmitted to the server via lower-bandwidth communication 3610. Similarly to lower-bandwidth communication 3615, the lower-bandwidth communication 3610 does not refer to a specific amount of bandwidth, just that the amount of bandwidth is relatively lower than higher-bandwidth communication 3605.

[00190] It is noted that more pixels than what are specifically requested by the user may be transmitted, in certain embodiments. For example, the array local processing module 3600 may send pixels that border the user's requested area, but are outside the user's requested area. In an embodiment, as will be discussed herein, those pixels may be sent at a different resolution or using a different kind of compression. In another embodiment, the additional pixels may simply be sent in the same manner as the requested pixels. In still another embodiment, server 4000 may expand the user requested areas, so that array local processing module 3600 may send only the requested pixels, but the requested pixels cover more area than what the user originally requested. These additional pixels may be transmitted and "cached" by the server or local device, which may be used to decrease latency times, in a process that will be discussed more herein.

[00191] Referring now again to Fig. 1-P, in an embodiment, server 4000 may include image receiving from ISA module 4340. Image receiving from ISA module 4340 may receive the image data from the array local processing module 3600 (e.g., in the arrow coming "north" from Fig. 1-AJ via Fig. 1-Z). The image may include the pixels that were requested from the image sensor array 3200. In an embodiment, server 4000 also may include received image post-processing module 4350, which may, among other post-processing tasks, fill in objects and pixels into the image that were determined not to be needed by selected image pre-processing module 4320, as previously described. In an embodiment, server 4000 may include received image transmission to user device module 4360, which may be configured to transmit the requested image to the user device 5700. Received image transmission to user device module 4360 may include components necessary to communicate with user device 5700 and may, in some embodiments, share components with one or more other modules of server 4000, e.g., a network interface, or a wireless antenna.

[00192] Referring now again to Fig. 1-F, user device 5700 may include a server image reception module 5730, which may receive an image sent by the server 4000, and a user selection presenting module 5240, which may display the requested pixels to the user, e.g., by showing them on a screen of the device. In an embodiment, the display of the image may be carried out through the exemplary interface, which allows a cycle of user requests and new images to be shown as the user navigates through what is seen on the MUVIA, e.g., as shown in Fig. 1-F.

[00193] In an embodiment, as shown in Figs. 1-F and 1-G, server image reception module 5730 may include an audio stream reception module 5732 and a video stream reception module 5734. In an embodiment, as discussed throughout this application, the MUVIA system may capture still images, video, and also sound, as well as other electromagnetic waves and other signals and data. In an embodiment, the audio signals and the video signals may be handled together, or they may be handled separately, as separate streams. Although not every module in the instant diagram separately shows audio streams and video streams, it is noted here that all implementations of MUVIA contemplate both audio and video coverage, as well as still image and other data collection.

[00194] Referring now to Fig. 1-G, which shows another portion of user device 5700, the user device may include a display 5755 and a memory 5765, which may be used to facilitate presentation and/or storage of the received images.

[00195] Referring now to Fig. 4, in an embodiment, user device 5700 may carry out an operation 400, as shown in Fig. 4. Operation 400 may include one or more steps, including, in some embodiments, step 402 depicting accepting input of a request for a particular image of the scene, step 404 depicting transmitting the request for the particular image of the scene to a multi-image-sensor array that is configured to capture a captured image that is larger than the particular image, step 406 depicting receiving only the particular image from the multi-image-sensor array, and step 408 depicting presenting the received particular image.

[00196] Figs. 1-H, 1-R, 1-AA, and 1-AB show an embodiment of a MUVIA implementation. For example, referring now to Fig. 1-H, Fig. 1-H shows an embodiment of a user device 5800. For exemplary purposes, the user device 5800 may be an augmented reality device that shows a user looking down a "street" at which the user is not actually present, e.g., a "virtual tourism" application in which the user may use their augmented reality device (e.g., goggles, e.g., an Oculus Rift-type headgear device), which may be a wearable computer. It is noted that this embodiment is not limited to wearable computers or augmented reality, but, as in all of the embodiments described in this disclosure, may be any device. The use of a wearable augmented/virtual reality device is merely for illustrative and exemplary purposes.

[00197] In an embodiment, user device 5800 may have a field of view 5810, as shown in Fig. 1-H. The field of view for the user 5810 may be illustrated in Fig. 1-H as follows. The innermost rectangle, shown by the dot hatching, represents the user's "field of view" as they look at their "virtual world." The second rectangle, with the straight-line hatching, represents the "nearest" objects to the user, that is, a range where the user is likely to "look" next, by turning their head or moving their eyes. In an embodiment, this area of the image may already be loaded on the device, e.g., through use of a particular codec, which will be discussed in more detail herein. The outermost rectangle, which is the image without hatching, represents the area further outside the user's viewpoint. This area, too, may already be loaded on the device. By loading areas where the user may eventually look, the system can reduce latency and make a user's motions, e.g., movement of head, eyes, and body, appear "natural" to the system.

[00198] Referring now to Figs. 1-AA and 1-AB, these figures show an array local processing module 3700 that is connected to an image sensor array 3200 (e.g., as shown in Fig. 1-AK, and "viewing" a city as shown in Fig. 1-AJ). The image sensor array 3200 may operate as previously described in this document. In an embodiment, array local processing module 3700 may include a captured image receiving module 3710, which may receive the entire scene captured by the image sensor array 3200, through the higher-bandwidth communication channel 3505. As described previously in this application, these pixels may be "cropped" or "decimated" into the relevant portion of the captured image, as described by one or more of the user device 5800, the server 4000, and the processing done at the array local processing module 3700. This process may occur as previously described. The relevant pixels may be handled by relevant portion of captured image receiving module 3720.

[00199] Referring now to Fig. 1-AB, in an embodiment, the relevant pixels for the image that are processed by relevant portion of captured image receiving module 3720 may be encoded using a particular codec at relevant portion encoding module 3730. In an embodiment, the codec may be configured to encode the innermost rectangle, e.g., the portion that represents the current user's field of view, e.g., portion 3716, at a higher resolution, or a different compression, or a combination of both. The codec may be further configured to encode the second rectangle, e.g., the portion with the vertical-line hatching, e.g., portion 3714, at a different resolution and/or a different (e.g., a higher) compression. Similarly, the outermost portion of the image, e.g., the clear portion 3712, may again be coded at still another resolution and/or a different compression. In an embodiment, the codec itself handles the algorithm for encoding the image, and as such, in an embodiment, the codec may include information about user device 5800. A sketch of this tiered encoding appears below.
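The tiered behavior ascribed to the codec might be approximated by subsampling nested regions at different rates, as in the following sketch. The nesting of portions 3716/3714/3712 follows the description above, but the subsampling factors, function names, and the use of plain subsampling in place of an actual compression codec are illustrative assumptions.

```python
import numpy as np

def encode_portions(image, fov_rect, mid_rect, steps=(1, 2, 4)):
    """Encode three nested regions at decreasing resolution.

    fov_rect, mid_rect -- (top, left, height, width); fov_rect lies inside
    mid_rect, which lies inside the full image (the "outermost" portion).
    steps -- subsampling factor per region (1 = full resolution).
    """
    def crop(rect):
        t, l, h, w = rect
        return image[t:t + h, l:l + w]

    inner = crop(fov_rect)[::steps[0], ::steps[0]]   # e.g., portion 3716
    middle = crop(mid_rect)[::steps[1], ::steps[1]]  # e.g., portion 3714
    outer = image[::steps[2], ::steps[2]]            # e.g., portion 3712
    return inner, middle, outer

image = np.zeros((2160, 3840, 3), dtype=np.uint8)
inner, middle, outer = encode_portions(
    image, fov_rect=(780, 1620, 600, 600), mid_rect=(480, 1320, 1200, 1200))
print(inner.shape, middle.shape, outer.shape)
```

Subsampling stands in here for "a different sampling algorithm," one of the codec variations the next paragraph mentions; a real codec could instead vary the compression ratio per region.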

[00200] As shown in Fig. 1-AB, the encoded portion of the image, including portions 3716, 3714, and 3712, may be transmitted using encoded relevant portion transmitting module 3740. It is noted that "lower compression," "more compression," and "higher compression" are merely used as one example of the kind of processing done by the codec. For example, instead of lower compression, a different sampling algorithm or compacting algorithm may be used, or a lossier algorithm may be implemented for various parts of the encoded relevant portion.

[00201] Referring now to Fig. 5, Fig. 5 shows an exemplary process 500 that may be carried out by array local processing module 3700. It is noted that exemplary process 500 may be carried out by any or all of array local processing module 3700, server 4000, and user device 5800, but for illustrative purposes is shown in Fig. 1 as occurring at array local processing module 3700. Exemplary process 500 may include one or more of step 502 depicting capturing an image that is larger than a field of view of a device configured to receive the captured image, step 504 depicting encoding the captured image such that a first portion of the image is encoded at a first resolution and a second portion of the image is encoded at a second resolution that is lower than the first resolution, wherein the first portion of the image represents the field of view of the device and the second portion represents at least one adjacent region to the field of view of the device, and step 506 depicting transmitting the encoded image to a receiving device configured to decode the captured image and present the first portion of the image to a client of the device.

[00202] Referring now to Fig. 1-R, Fig. 1-R depicts a server 4000 in a MUVIA system according to an embodiment. For example, as shown in Fig. 1-R, server 4000 may include, in addition to portions previously described, an encoded image receiving module 4410. Encoded image receiving module 4410 may receive the encoded image, encoded as previously described, from encoded relevant portion transmitting module 3740 of array local processing module 3700.

[00203] Referring again to Fig. 1-R, server 4000 may include an encoded image transmission controlling module 4420. Encoded image transmission controlling module 4420 may transmit portions of the image to the user device 5800. Depending on the bandwidth and the particulars of the user device, the server may send all of the encoded image to the user device and let the user device decode the portions as needed, or may decode the image and send portions piecemeal, or with a different encoding, depending on the needs of the user device and the complexity that can be handled by the user device.

[00204] Referring now to Fig. 6, in another embodiment, the user device, e.g., user device 5800, the server 4000, and the array local processing module 3700 may use a messaging system to perform latency hiding, either in combination with the codec described above or as a separate implementation. Figs. 6A-6C show a "message flow" view of how messages and cached images are passed between the user device 5800 (indicated as "client" and/or "display" and/or "viewer" in Fig. 6A), the server 4000 (indicated as "server" and/or "cloud" in Fig. 6A) and the array local processing module 3700 (indicated as "camera" in Fig. 6A).

11 [00205] Referring again to Fig. 1-H, user device 5800 may include an encoded image transmission receiving module 5720, which may be configured to receive the image that is coded in a particular way, e.g., as will be disclosed in more detail herein. Fig. 1-H also may include an encoded image processing module 5830 that may handle the processing of the image, that is, encoding and decoding portions of the image, or other processing necessary to provide the image to the user.

[00206] Referring now to Fig. 1-AL, Fig. 1-AL shows an implementation of an Application Programming Interface (API) for the various MUVIA components. Specifically, image sensor array API 7800 may include, among other elements, a programming specification 7810, which may include, for example, libraries, classes, specifications, templates, or other coding elements that generally make up an API, and an access authentication module 7820 that governs API access to the various image sensor arrays. The API allows third party developers to access the workings of the image sensor array and the array local processing module 3700, so that the third party developers can write applications for the array local processing module 3700, as well as determine which data captured by the image sensor array 3200 (which often may be multiple gigabytes or more of data per second) should be kept or stored or transmitted. In an embodiment, API access to certain functions may be limited. For example, a tiered system may allow a certain number of API calls to the MUVIA data per second, per minute, per hour, or per day. In an embodiment, a third party might pay fees or perform a registration that would allow more or less access to the MUVIA data. In an embodiment, the third party could host their application on a separate web site, and let that web site access the image sensor array 3200 and/or the array local processing module 3700 directly. One way such a tiered call limit could be enforced is sketched below.
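A tiered call limit of the kind described could be enforced with a simple fixed-window counter keyed by API key. The tier names, per-hour limits, and class design are invented for illustration and are not part of the disclosed access authentication module 7820.

```python
import time

TIER_LIMITS = {"free": 60, "registered": 600, "paid": 6000}  # calls/hour (hypothetical)

class ApiQuota:
    def __init__(self):
        self.windows = {}  # api_key -> (window_start, call_count)

    def allow(self, api_key, tier):
        now = time.time()
        start, count = self.windows.get(api_key, (now, 0))
        if now - start >= 3600:          # start a new one-hour window
            start, count = now, 0
        if count >= TIER_LIMITS[tier]:   # quota exhausted for this window
            return False
        self.windows[api_key] = (start, count + 1)
        return True

quota = ApiQuota()
print(quota.allow("dev-123", "free"))  # True until 60 calls this hour
```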

[00207] Referring again to Fig. 1, Figs. 1-I, 1-J, 1-S, 1-T, 1-AC, 1-AD, 1-AM, and 1-AN show, in an embodiment, a MUVIA implementation that allows insertion of advertising (or other context-sensitive material) into the images displayed to the user.

[00208] Referring again to Fig. 1-I, in an embodiment, user device 5900 may include a user selection accepting module 5910. User selection accepting module 5910 may be configured to receive user input about what the user wants to see. For example, as shown in Fig. 1-I in the exemplary interface 5912, the user selection accepting module 5910 may show an image from image sensor array 3200, and the user may "pan" and "zoom" the image using any known interface, including, but not limited to, keyboard, mouse, touch, haptic, augmented reality interface, voice command, or nonverbal motion commands (e.g., as part of a video game system interface, e.g., the Microsoft Kinect). It is noted, as will be discussed in more detail herein, that the camera itself is not "zooming" or "panning," because the camera does not move. What is happening is that different pixels that are captured by the image sensor array 3200 are kept by the image sensor array 3200 and transmitted to the server 4000, as sketched below.
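Since "panning" and "zooming" here select different captured pixels rather than moving optics, the interface can be modeled as a movable, scalable window over the fixed scene. A sketch of that bookkeeping, with invented class and method names (coordinates only, no image data):

```python
class VirtualViewport:
    """A pan/zoom window over a fixed scene; no camera motion involved."""
    def __init__(self, scene_h, scene_w, view_h, view_w):
        self.scene_h, self.scene_w = scene_h, scene_w
        self.top, self.left = 0, 0
        self.view_h, self.view_w = view_h, view_w

    def pan(self, dy, dx):
        # Slide the window, clamped to the scene boundaries.
        self.top = min(max(0, self.top + dy), self.scene_h - self.view_h)
        self.left = min(max(0, self.left + dx), self.scene_w - self.view_w)

    def zoom(self, factor):
        # Zooming in shrinks the selected window (fewer scene pixels shown).
        self.view_h = max(1, min(self.scene_h, int(self.view_h / factor)))
        self.view_w = max(1, min(self.scene_w, int(self.view_w / factor)))

    def request(self):
        return (self.top, self.left, self.view_h, self.view_w)

vp = VirtualViewport(4000, 6000, 1080, 1920)
vp.pan(200, 500)
vp.zoom(2.0)
print(vp.request())  # (200, 500, 540, 960): the pixel rectangle to request
```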

[00209] In an embodiment, the "user selection" facilitated by the user selection accepting module 5910 may not involve a user at all. For example, in an embodiment, e.g., in a security embodiment, the user selection may be handled completely by machine, and may include "select any portion of the image with movement," or "select any portion of the image in which a person is recognized," or "select any portion of the image in which a particular person, e.g., a person on the FBI most wanted list, is recognized." A motion-based version of such a machine selection is sketched below.
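A machine-driven "select any portion of the image with movement" policy might be approximated with simple block-wise frame differencing, as below. The block size and threshold are arbitrary illustrative values; a real deployment would presumably use a more robust motion detector, and a recognition model for the person-matching variants.

```python
import numpy as np

def regions_with_motion(prev_frame, frame, block=64, threshold=12.0):
    """Return (top, left, block, block) rectangles whose mean absolute
    difference from the previous frame exceeds a threshold."""
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    selected = []
    h, w = diff.shape[:2]
    for top in range(0, h - block + 1, block):
        for left in range(0, w - block + 1, block):
            if diff[top:top + block, left:left + block].mean() > threshold:
                selected.append((top, left, block, block))
    return selected  # rectangles to request, with no human in the loop
```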

[00210] Referring again to Fig. 1-I, in an embodiment, user device 5900 may include a user selection transmitting module 5920. The user selection transmitting module 5920 may take the user selection from user selection accepting module 5910, and transmit the selection to the server 4000. The transmission may include some pre-processing, for example, the user device 5900 may determine the size and parameters of the image prior to sending the request to the server 4000, or that processing may be handled by the server 4000. Following the thick-line arrow leftward from user selection transmitting module 5920 through to Fig. 1-K, the transmission goes to server 4000, as will be discussed herein. It is noted that the transmission to the server 4000 may also include data about the user device, for example, the screen resolution, the window size, the type of device, an identity of the user, a level of service the user has paid for (in embodiments in which such services are prioritized by the camera/server), and other capabilities of the device, e.g., framerate, and the like.

[00211] Referring again to Fig. 1-I, Fig. 1-I also includes a selected image receiving module 5930 and a user selection presenting module 5940, which will be discussed in more detail herein, with respect to the dataflow of this embodiment.

[00212] Referring now to Fig. 1-T (graphically represented as "down" and "to the right" of Fig. 1-I), in an embodiment, a server 4000 may include a selected target reception module 4510. In an embodiment, selected target reception module 4510 of server 4000 may receive the selected target from the user device 5900. The selected target data may take various formats, e.g., it may be image data, it may be metadata that identifies the selected target, or it may be some other designation, e.g., an ID number, a tracking number, or a piece of information, like a license plate or a social security number. The selected target data may be an address or a physical description, or any other instantiation of data that can be used to identify something.

[00213] Referring again to Fig. 1-T, in an embodiment, server 4000 may include selected image pre-processing module 4320. Selected image pre-processing module 4320 may perform one or more tasks of pre-processing the image, some of which have been previously described with respect to other embodiments. In an embodiment, server 4000 also may include pixel information transmission to ISA module 4330 configured to transmit the image request data to the image sensor array 3200, as has been previously described.

[00214] Referring now to Figs. 1-AD and 1-AN, array local processing module 3700 may be connected to an image sensor array 3200 through a higher-bandwidth communication link 3505, e.g., a USB or PCI port. In an embodiment, array local processing module 3700 may include a request reception module 3710. Request reception module 3710 may receive the request for an image from the server 4000, as previously described. Request reception module 3710 may transmit the data to a pixel selection module 3720, which may receive the pixels captured from image sensor array 3200, and select the ones that are to be kept. That is, in an embodiment, through use of the (sometimes consolidated) user requests and the captured image, pixel selection module 3720 may select the pixels that have been specifically requested by the user, and mark those pixels for transmission back to the server.

[00215] After the pixels to be kept are identified, the other pixels that are not to be kept are removed, e.g., decimated at unused pixel decimation module 3730. In an embodiment, these pixels are simply discarded, e.g., not stored in a long-term memory, that is, moved to a digital trash 3717. In another embodiment, some or all of these pixels are stored in a local memory, e.g., local memory 3715. From here, these pixels may be transmitted to various locations at off-peak times, may be kept for image processing by the array local processing module 3700, or may be subject to other manipulations or processing separate from the user requests, as described in previous embodiments. In an embodiment, unused pixel decimation module 3730 may be used to transmit a lower-resolution version of more of the image (e.g., an entire scene, or more of the field of view surrounding the target) to the server 4000, so that the server 4000 may accurately determine which images are required to fulfill the request of the user.

[00216] Referring again to Fig. 1-AN, the selected pixels then may be transmitted to the server 4000 using selected pixel transmission module 3740. Selected pixel transmission module 3740 may include any transmission equipment necessary, e.g., cellular radio, wireless adapter, and the like, depending on the format of communication. In an embodiment, only those pixels which have been requested are transmitted to the server via lower-bandwidth communication 3710. Similarly to lower-bandwidth communication 3715, the lower-bandwidth communication 3710 does not refer to a specific amount of bandwidth, just that the amount of bandwidth is relatively lower than higher-bandwidth communication 3505.

[00217] It is noted that more pixels than what are specifically requested by the user may be transmitted, in certain embodiments. For example, the array local processing module 3700 may send pixels that border the user's requested area, but are outside the user's requested area. In an embodiment, as will be discussed herein, those pixels may be sent at a different resolution or using a different kind of compression. In another embodiment, the additional pixels may simply be sent in the same manner as the requested pixels. In still another embodiment, server 4000 may expand the user-requested areas, so that array local processing module 3700 may send only the requested pixels, but the requested pixels cover more area than what the user originally requested. These additional pixels may be transmitted and "cached" by the server or local device, which may be used to decrease latency times, in a process that will be discussed more herein.

[00218] Referring now again to Fig. 1-T, in an embodiment, server 4000 may include received image post-processing module 4550. Received image post-processing module 4550 may receive the image data from the array local processing module 3700 (e.g., in the arrow coming "north" from Fig. 1-AN via Fig. 1-AD). The image may include the pixels that were requested from the image sensor array 3200.

[00219] In an embodiment, server 4000 also may include advertisement insertion module 4560. Advertisement insertion module 4560 may insert an advertisement into the received image. The advertisement may be based on one or more of the contents of the image, a characteristic of a user or the user device, or a setting of the advertisement server component 7700 (see, e.g., Fig. 1-AC, as will be discussed in more detail herein). The advertisement insertion module 4560 may place the advertisement into the image using any known image combination technique, or, in another embodiment, the advertisement image may be in a separate layer, overlay, or any other data structure. In an embodiment, advertisement insertion module 4560 may include context-based advertisement insertion module 4562, which may be configured to add advertisements that are based on the context of the image. For example, if the image is a live street view of a department store, the context of the image may suggest advertisements related to products sold by that department store, e.g., clothing, cosmetics, or power tools. A simple compositing sketch follows.
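As one hedged illustration of "any known image combination technique," an advertisement bitmap could be alpha-blended over a chosen region of the received image. The blend location and opacity below are arbitrary, and this is only one of the placement strategies the paragraph above contemplates.

```python
import numpy as np

def insert_advertisement(image, ad, top, left, alpha=0.8):
    """Alpha-blend an ad bitmap onto the received image at (top, left)."""
    h, w = ad.shape[:2]
    region = image[top:top + h, left:left + w].astype(np.float32)
    blended = alpha * ad.astype(np.float32) + (1.0 - alpha) * region
    out = image.copy()  # leave the original pixels untouched
    out[top:top + h, left:left + w] = blended.astype(np.uint8)
    return out
```

An overlay-layer variant would instead ship the ad and its placement rectangle alongside the image and let the user device composite them at display time.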

[00220] Referring again to Fig. 1-T, server 4000 may include a received image with advertisement transmission to user device module 4570 configured to transmit the image to user device 5900. Received image with advertisement transmission to user device module 4570 may include components necessary to communicate with user device 5900 and may, in some embodiments, share components with one or more other modules of server 4000, e.g., a network interface, or a wireless antenna.

[00221] Referring again to Fig. 1-I, user device 5900 may include a selected image receiving module 5930, which may receive the pixels that were sent by the server 4000, and user selection presenting module 5940, which may display the requested pixels to the user, including the advertisement, e.g., by showing them on a screen of the device. In an embodiment, the display of the image may be carried out through the exemplary interface, which allows a cycle of user requests and new images to be shown as the user navigates through what is seen on the MUVIA, e.g., as shown in Fig. 1-I.

[00222] Referring now to Fig. 1-AC, Fig. 1-AC shows an advertisement server component 7700 configured to deliver advertisements to the server 4000 for insertion into the images prior to delivery to the user. In an embodiment, advertisement server component 7700 may be integrated with server 4000. In another embodiment, advertisement server component 7700 may be separate from server 4000 and may communicate with server 4000. In yet another embodiment, rather than interacting with server 4000, advertisement server component 7700 may interact directly with the user device 5900, and insert the advertisement into the image after the image has been received, or, in another embodiment, cause the user device to display the advertisement concurrently with the image (e.g., overlapping or adjacent to it). In such embodiments, some of the described modules of server 4000 may be incorporated into user device 5900, but the functionality of those modules would operate similarly to what was previously described.

[00223] Referring again to Fig. 1-AC, in an embodiment, advertisement server component 7700 may include a user data collection module 7705. User data collection module 7705 may collect data from user device 5900, and use that data to drive placement of advertisements (e.g., based on a user's browser history, e.g., to sports sites, and the like).

[00224] Referring again to Fig. 1-AC, in an embodiment, advertisement server component 7700 may include advertisement database 7715, which includes advertisements that are ready to be inserted into images. In an embodiment, these advertisements may be created on the fly.

[00225] Referring again to Fig. 1-AC, in an embodiment, advertisement server component 7700 may include an advertisement request reception module 7710 which receives a request to add an advertisement into the image (the receipt of the request is not shown, to ease understanding of the drawings). In an embodiment, advertisement server component 7700 may include advertisement selection module 7720, which may include an image analysis module 7722 configured to analyze the image to determine the best context-based advertisement to place into the image. In an embodiment, that decision may be made by the server 4000, or partly at the server 4000 and partly at the advertisement server component 7700 (e.g., the advertisement server component may have a set of advertisements from which a particular one may be chosen). In an embodiment, various third parties may compensate the operators of advertisement server component 7700, server 4000, or any other component of the system, in order to receive preferential treatment. A toy selection sketch follows.
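In toy form, the context-based choice made by image analysis module 7722 and advertisement selection module 7720 might map labels detected in the image to entries in advertisement database 7715. The label names, database contents, and first-match rule below are assumptions for illustration only.

```python
AD_DATABASE = {  # hypothetical stand-in for advertisement database 7715
    "department_store": ["clothing_ad", "cosmetics_ad", "power_tools_ad"],
    "stadium": ["sports_drink_ad"],
}

def select_advertisement(detected_labels, user_profile=None):
    """Pick an ad whose context matches labels detected in the image."""
    candidates = []
    for label in detected_labels:
        candidates.extend(AD_DATABASE.get(label, []))
    # A user-data signal (e.g., browsing history from user data collection
    # module 7705) could re-rank or compensate-weight these candidates.
    return candidates[0] if candidates else None

print(select_advertisement(["department_store"]))  # 'clothing_ad'
```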

[00226] Referring again to Fig. 1-AC, in an embodiment, advertisement server component 7700 may include a selected advertisement transmission module 7730, which may transmit the selected advertisement (or a set of selected advertisements) to the server 4000. In an embodiment, selected advertisement transmission module 7730 may send the complete image with the advertisement overlaid, e.g., in an implementation in which the advertisement server component 7700 also handles the placement of the advertisement. In an embodiment in which advertisement server component 7700 is integrated with server 4000, this module may be an internal transmission module, as may all such transmission/reception modules.

[00227] While particular aspects of the present subject matter described herein have been shown and described, it will be apparent to those skilled in the art that, based upon the teachings herein, changes and modifications may be made without departing from the subject matter described herein and its broader aspects and, therefore, the appended claims are to encompass within their scope all such changes and modifications as are within the true spirit and scope of the subject matter described herein. It will be understood by those within the art that, in general, terms used herein, and especially in the appended claims (e.g., bodies of the appended claims) are generally intended as "open" terms (e.g., the term "including" should be interpreted as "including but not limited to," the term "having" should be interpreted as "having at least," the term "includes" should be interpreted as "includes but is not limited to," etc.).

[00228] It will be further understood by those within the art that if a specific number of an introduced claim recitation is intended, such an intent will be explicitly recited in the claim, and in the absence of such recitation no such intent is present. For example, as an aid to understanding, the following appended claims may contain usage of the introductory phrases "at least one" and "one or more" to introduce claim recitations. However, the use of such phrases should not be construed to imply that the introduction of a claim recitation by the indefinite articles "a" or "an" limits any particular claim containing such introduced claim recitation to claims containing only one such recitation, even when the same claim includes the introductory phrases "one or more" or "at least one" and indefinite articles such as "a" or "an" (e.g., "a" and/or "an" should typically be interpreted to mean "at least one" or "one or more"); the same holds true for the use of definite articles used to introduce claim recitations. In addition, even if a specific number of an introduced claim recitation is explicitly recited, those skilled in the art will recognize that such recitation should typically be interpreted to mean at least the recited number (e.g., the bare recitation of "two recitations," without other modifiers, typically means at least two recitations, or two or more recitations).

[00229] Furthermore, in those instances where a convention analogous to "at least one of A, B, and C, etc." is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B, and C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.). In those instances where a convention analogous to "at least one of A, B, or C, etc." is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B, or C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.). It will be further understood by those within the art that typically a disjunctive word and/or phrase presenting two or more alternative terms, whether in the description, claims, or drawings, should be understood to contemplate the possibilities of including one of the terms, either of the terms, or both terms unless context dictates otherwise. For example, the phrase "A or B" will be typically understood to include the possibilities of "A" or "B" or "A and B."

[00230] With respect to the appended claims, those skilled in the art will appreciate that recited operations therein may generally be performed in any order. Also, although various operational flows are presented in a sequence(s), it should be understood that the various operations may be performed in other orders than those which are illustrated, or may be performed concurrently. Examples of such alternate orderings may include overlapping, interleaved, interrupted, reordered, incremental, preparatory, supplemental, simultaneous, reverse, or other variant orderings, unless context dictates otherwise. Furthermore, terms like "responsive to," "related to," or other past-tense adjectives are generally not intended to exclude such variants, unless context dictates otherwise.

[00231] This application may make reference to one or more trademarks, e.g., a word, letter, symbol, or device adopted by one manufacturer or merchant and used to identify and/or distinguish his or her product from those of others. Trademark names used herein are set forth in such language that makes clear their identity, that distinguishes them from common descriptive nouns, that have fixed and definite meanings, or, in many if not all cases, are accompanied by other specific identification using terms not covered by trademark. In addition, trademark names used herein have meanings that are well-known and defined in the literature, or do not refer to products or compounds for which knowledge of one or more trade secrets is required in order to divine their meaning. All trademarks referenced in this application are the property of their respective owners, and the appearance of one or more trademarks in this application does not diminish or otherwise adversely affect the validity of the one or more trademarks. All trademarks, registered or unregistered, that appear in this application are assumed to include a proper trademark symbol, e.g., the circle R or bracketed capitalization (e.g., [trademark name]), even when such trademark symbol does not explicitly appear next to the trademark. To the extent a trademark is used in a descriptive manner to refer to a product or process, that trademark should be interpreted to represent the corresponding product or process as of the date of the filing of this patent application.

[00232] Throughout this application, the terms "in an embodiment," "in one embodiment," "in some embodiments," "in several embodiments," "in at least one embodiment," "in various embodiments," and the like, may be used. Each of these terms, and all such similar terms, should be construed as "in at least one embodiment, and possibly but not necessarily all embodiments," unless explicitly stated otherwise. Specifically, unless explicitly stated otherwise, the intent of phrases like these is to provide non-exclusive and non-limiting examples of implementations of the invention. The mere statement that one, some, or many embodiments include one or more things or have one or more features, does not imply that all embodiments include one or more things or have one or more features, but also does not imply that such embodiments must exist. It is a mere indicator of an example and should not be interpreted otherwise, unless explicitly stated as such.

[00233] Those skilled in the art will appreciate that the foregoing specific exemplary processes and/or devices and/or technologies are representative of more general processes and/or devices and/or technologies taught elsewhere herein, such as in the claims filed herewith and/or elsewhere in the present application.

1. A thing/operation disclosure comprising:

a method substantially as shown and described in the detailed description and/or drawings and/or elsewhere herein.

2. A thing/operation disclosure comprising:

a device substantially as shown and described in the detailed description and/or drawings and/or elsewhere herein.

3. A thing/operation disclosure comprising:

a method including one or more of the modules and/or functions described in the detailed description and/or drawings and/or elsewhere herein.

4. A thing/operation disclosure comprising:

a device implementing one or more of the devices, stores, and/or interfaces described in the detailed description and/or drawings and/or elsewhere herein.

Figures Were Copied From a Pending United States Application into this PCT Application in View of Meeting Bar Date, So Each Figure Under this Roman Numeral Section Should Be Read "As If Having Roman Numeral Denotation Sufficient to Distinguish from Other Figures Copied From Other Applications in View of Bar Date," Such Clerical Issues to Be Cured by Subsequent Amendment

Devices, Methods, and Systems For Integrating Multiple User Access Camera Array

···

DETAILED DESCRIPTION

[0011] In addition to the following, various other method and/or system and/or program product aspects are set forth and described in the teachings such as text (e.g., claims and/or detailed description) and/or drawings of the present disclosure.

[0012] The foregoing is a summary and thus may contain simplifications, generalizations, inclusions, and/or omissions of detail; consequently, those skilled in the art will appreciate that the summary is illustrative only and is NOT intended to be in any way limiting. Other aspects, features, and advantages of the devices and/or processes and/or other subject matter described herein will become apparent by reference to the detailed description, the corresponding drawings, and/or in the teachings set forth herein.

[0013] The logical operations/functions set forth in the present technical description are representative of static or sequenced specifications of various ordered-matter elements, in order that such specifications may be comprehensible to the human mind and adaptable to create many various hardware configurations. The logical operations/functions disclosed herein should be treated as such, and should not be disparagingly characterized as abstract ideas merely because the specifications they represent are presented in a manner that one of skill in the art can readily understand and apply in a manner independent of a specific vendor's hardware implementation.

[0014] Those having skill in the art will recognize that the state of the art has progressed to the point where there is little distinction left between hardware, software (e.g., a high-level computer program serving as a hardware specification), and/or firmware implementations of aspects of systems; the use of hardware, software, and/or firmware is generally (but not always, in that in certain contexts the choice between hardware and software can become significant) a design choice representing cost vs. efficiency tradeoffs. Those having skill in the art will appreciate that there are various vehicles by which processes and/or systems and/or other technologies described herein can be effected (e.g., hardware, software (e.g., a high-level computer program serving as a hardware specification), and/or firmware), and that the preferred vehicle will vary with the context in which the processes and/or systems and/or other technologies are deployed. For example, if an implementer determines that speed and accuracy are paramount, the implementer may opt for a mainly hardware and/or firmware vehicle; alternatively, if flexibility is paramount, the implementer may opt for a mainly software (e.g., a high-level computer program serving as a hardware specification) implementation; or, yet again alternatively, the implementer may opt for some combination of hardware, software (e.g., a high-level computer program serving as a hardware specification), and/or firmware in one or more machines, compositions of matter, and articles of manufacture, limited to patentable subject matter under 35 USC 101. Hence, there are several possible vehicles by which the processes and/or devices and/or other technologies described herein may be effected, none of which is inherently superior to the other in that any vehicle to be utilized is a choice dependent upon the context in which the vehicle will be deployed and the specific concerns (e.g., speed, flexibility, or predictability) of the implementer, any of which may vary. Those skilled in the art will recognize that optical aspects of implementations will typically employ optically-oriented hardware, software (e.g., a high-level computer program serving as a hardware specification), and/or firmware.

[0015] In some implementations described herein, logic and similar implementations may include computer programs or other control structures. Electronic circuitry, for example, may have one or more paths of electrical current constructed and arranged to implement various functions as described herein. In some implementations, one or more media may be configured to bear a device-detectable implementation when such media hold or transmit device-detectable instructions operable to perform as described herein. In some variants, for example, implementations may include an update or modification of existing software (e.g., a high-level computer program serving as a hardware specification) or firmware, or of gate arrays or programmable hardware, such as by performing a reception of or a transmission of one or more instructions in relation to one or more operations described herein. Alternatively or additionally, in some variants, an implementation may include special-purpose hardware, software (e.g., a high-level computer program serving as a hardware specification), firmware components, and/or general-purpose components executing or otherwise invoking special-purpose components. Specifications or other implementations may be transmitted by one or more instances of tangible transmission media as described herein, optionally by packet transmission or otherwise by passing through distributed media at various times.

[0016] Alternatively or additionally, implementations may include executing a special-purpose instruction sequence or invoking circuitry for enabling, triggering, coordinating, requesting, or otherwise causing one or more occurrences of virtually any functional operation described herein. In some variants, operational or other logical descriptions herein may be expressed as source code and compiled or otherwise invoked as an executable instruction sequence. In some contexts, for example, implementations may be provided, in whole or in part, by source code, such as C++, or other code sequences. In other implementations, source or other code implementation, using commercially available tools and/or techniques known in the art, may be compiled/implemented/translated/converted into a high-level descriptor language (e.g., initially implementing described technologies in C or C++ programming language and thereafter converting the programming language implementation into a logic-synthesizable language implementation, a hardware description language implementation, a hardware design simulation implementation, and/or other such similar mode(s) of expression). For example, some or all of a logical expression (e.g., computer programming language implementation) may be manifested as a Verilog-type hardware description (e.g., via Hardware Description Language (HDL) and/or Very High Speed Integrated Circuit Hardware Descriptor Language (VHDL)) or other circuitry model which may then be used to create a physical implementation having hardware (e.g., an Application Specific Integrated Circuit). Those skilled in the art will recognize how to obtain, configure, and optimize suitable transmission or computational elements, material supplies, actuators, or other structures in light of these teachings.

[0017] For example, a central processing unit of a personal computer may, at various times, operate as a module for displaying graphics on a screen, a module for writing data to a storage medium, a module for receiving user input, and a module for multiplying two large prime numbers, by configuring its logical gates in accordance with its instructions. Such reconfiguration may be invisible to the naked eye, and in some embodiments may include activation, deactivation, and/or re-routing of various portions of the component, e.g., switches, logic gates, inputs, and/or outputs. Thus, in the examples found in the foregoing/following disclosure, if an example includes or recites multiple modules, the example includes the possibility that the same hardware may implement more than one of the recited modules, either contemporaneously or at discrete times or timings. The implementation of multiple modules, whether using more components, fewer components, or the same number of components as the number of modules, is merely an implementation choice and does not generally affect the operation of the modules themselves. Accordingly, it should be understood that any recitation of multiple discrete modules in this disclosure includes implementations of those modules as any number of underlying components, including, but not limited to, a single component that reconfigures itself over time to carry out the functions of multiple modules, and/or multiple components that similarly reconfigure, and/or special purpose reconfigurable components.

[0018] Those skilled in the art will recognize that it is common within the art to implement devices and/or processes and/or systems, and thereafter use engineering and/or other practices to integrate such implemented devices and/or processes and/or systems into more comprehensive devices and/or processes and/or systems. That is, at least a portion of the devices and/or processes and/or systems described herein can be integrated into other devices and/or processes and/or systems via a reasonable amount of experimentation. Those having skill in the art will recognize that examples of such other devices and/or processes and/or systems might include, as appropriate to context and application, all or part of devices and/or processes and/or systems of (a) an air conveyance (e.g., an airplane, rocket, helicopter, etc.), (b) a ground conveyance (e.g., a car, truck, locomotive, tank, armored personnel carrier, etc.), (c) a building (e.g., a home, warehouse, office, etc.), (d) an appliance (e.g., a refrigerator, a washing machine, a dryer, etc.), (e) a communications system (e.g., a networked system, a telephone system, a Voice over IP system, etc.), (f) a business entity (e.g., an Internet Service Provider (ISP) entity such as Comcast Cable, Qwest, Southwestern Bell, etc.), or (g) a wired/wireless services entity (e.g., Sprint, Cingular, Nextel, etc.), etc.

[0019] In a general sense, those skilled in the art will recognize that the various embodiments described herein can be implemented, individually and/or collectively, by various types of electro-mechanical systems having a wide range of electrical components such as hardware, software, firmware, and/or virtually any combination thereof, limited to patentable subject matter under 35 U.S.C. 101; and a wide range of components that may impart mechanical force or motion such as rigid bodies, spring or torsional bodies, hydraulics, electro-magnetically actuated devices, and/or virtually any combination thereof. Consequently, as used herein "electro-mechanical system" includes, but is not limited to, electrical circuitry operably coupled with a transducer (e.g., an actuator, a motor, a piezoelectric crystal, a Micro Electro Mechanical System (MEMS), etc.), electrical circuitry having at least one discrete electrical circuit, electrical circuitry having at least one integrated circuit, electrical circuitry having at least one application specific integrated circuit, electrical circuitry forming a general purpose computing device configured by a computer program (e.g., a general purpose computer configured by a computer program which at least partially carries out processes and/or devices described herein, or a microprocessor configured by a computer program which at least partially carries out processes and/or devices described herein), electrical circuitry forming a memory device (e.g., forms of memory (e.g., random access, flash, read only, etc.)), electrical circuitry forming a communications device (e.g., a modem, communications switch, optical-electrical equipment, etc.), and/or any non-electrical analog thereto, such as optical or other analogs (e.g., graphene based circuitry). Those skilled in the art will also appreciate that examples of electro-mechanical systems include but are not limited to a variety of consumer electronics systems, medical devices, as well as other systems such as motorized transport systems, factory automation systems, security systems, and/or communication/computing systems. Those skilled in the art will recognize that electro-mechanical as used herein is not necessarily limited to a system that has both electrical and mechanical actuation except as context may dictate otherwise.

[0020] In a general sense, those skilled in the art will recognize that the various aspects described herein which can be implemented, individually and/or collectively, by a wide range of hardware, software, firmware, and/or any combination thereof can be viewed as being composed of various types of "electrical circuitry." Consequently, as used herein "electrical circuitry" includes, but is not limited to, electrical circuitry having at least one discrete electrical circuit, electrical circuitry having at least one integrated circuit, electrical circuitry having at least one application specific integrated circuit, electrical circuitry forming a general purpose computing device configured by a computer program (e.g., a general purpose computer configured by a computer program which at least partially carries out processes and/or devices described herein, or a microprocessor configured by a computer program which at least partially carries out processes and/or devices described herein), electrical circuitry forming a memory device (e.g., forms of memory (e.g., random access, flash, read only, etc.)), and/or electrical circuitry forming a communications device (e.g., a modem, communications switch, optical-electrical equipment, etc.). Those having skill in the art will recognize that the subject matter described herein may be implemented in an analog or digital fashion or some combination thereof.

[0021] Referring now to Figs. 1-8, there exist a large number of existing cameras. In an embodiment, users can control the direction in which a remote camera is pointed. In an embodiment, the system is not limited to one person controlling the camera at a time, but includes implementations in which many people try to move or otherwise control the camera (e.g., zoom, focus, etc.) at the same time.

[0022] In an embodiment, high-pixel-count image sensors may be used to capture high-resolution images of various scenes. Then, in an embodiment, at the camera level, data can be extracted from the high-resolution image and sent to the user relatively inexpensively. Thus, unneeded or unviewed pixels may be stripped away.

[0023] Thus, in an embodiment, there is an evolution of detail, from a "map" view to a "satellite" view to a "street" view. In an embodiment, there will be a "real time" view of a place available by utilizing one or more cameras in the manner described herein.

[0024] In an embodiment, the system knows where the user is, knows what the user is pointing at, and knows what the user wants; it is constantly looking everywhere, and it extracts from that firehose and peels off the pixels that have not changed (that may be a first step). There may be a lot of overlap between requests, so the same data need not be sent multiple times; onboard processing recognizes that overlap and sends the data only once. The system may actually record everything on board into a memory, but, in terms of what it sends to the user, it sends only the pixels that have changed. Other compression may occur there as well, as sketched below.
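The "send only the pixels that have changed" idea resembles conditional replenishment in video coding; a toy version follows, with an invented change threshold and no claim that this is the actual onboard processing.

```python
import numpy as np

def changed_pixels(prev_frame, frame, threshold=8):
    """Return coordinates and values of pixels that changed noticeably;
    everything else can be reconstructed from the previous frame."""
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16)).max(axis=-1)
    ys, xs = np.nonzero(diff > threshold)
    return ys, xs, frame[ys, xs]  # only this payload needs transmitting
```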

[0025] In an embodiment, one or more devices, e.g., in the cloud or network, perform the image stitching and apply image orientation, geographic information, and geolocation information; this can be used as a kind of live video overlay of things that are happening in the world, e.g., people tracking cars of a certain color or persons of a certain height, or trigger events could make it work. For example, a bear cam, or a camera on the space station, might superimpose information on top of the image; it could work the same way Google Maps works, where "advertising" is shown superimposed (which may be a separate application). Fundamentally, the idea is to allow multiple users to use the big camera at the same time.

[0026] In an embodiment, the approach is about how to collapse the huge fire hose of pixels and manage it at the front end, taking advantage of the fact that memory is inexpensive while data transmission is expensive.

[0027] In an embodiment, access to the data from the system (e.g., to the images captured by the image sensor array, whether cached, real-time, or a combination thereof) may be time-limited. For example, a user of the system may have their access time-limited, or a portion of their access time-limited. For example, a user may be limited to thirty minutes a day, or ten consecutive minutes, or three hundred minutes per month, or some combination thereof. In another embodiment, a user may be limited to certain aspects of the system, e.g., viewing images at a particular resolution, or "panning" and "zooming" (understanding that in some embodiments, "panning" and "zooming" do not correspond to a physical change to the lens configuration of the image array).

[0028] In an embodiment, the time limit may be based on a type of subscription, e.g., whether the user has paid for the service, is a free user, or how much money or other consideration the user has paid for the service. In an embodiment, the user may be "timed out" of the system once their allotted time has expired. In another embodiment, the user may be given an option to purchase additional time for consideration (e.g., money), either prior to the expiration of their access or at some predetermined time prior to the expiration of their access. In a free implementation, the user may simply be required to click a button or otherwise indicate that they are actively using the system.

[0029] In an embodiment, the rate at which the time allotted to the user is consumed may be dynamic and/or variable. For example, in an embodiment, times at which a particular image sensor array is popular may drain a user's allotment faster than other times. In another embodiment, the number of users that are logged into the system at a time that a particular user logs on may determine, at least in part, the rate at which that particular user's access expires. In another embodiment, the user's allotment of time may be consumed at a rate that is at least partially defined by an available bandwidth of the system at the time that the user is using the system, as sketched below.
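The variable consumption rate could be modeled as a multiplier on wall-clock usage driven by load and bandwidth, as in this sketch; the weighting scheme and all constants are guesses for illustration only.

```python
def drain_rate(logged_in_users, available_bandwidth_mbps,
               base_users=100, base_bandwidth=100.0):
    """Minutes of allotment consumed per minute of use: higher when the
    array is busy or bandwidth is scarce (weights are illustrative)."""
    load_factor = max(1.0, logged_in_users / base_users)
    bandwidth_factor = max(1.0, base_bandwidth / max(available_bandwidth_mbps, 1.0))
    return load_factor * bandwidth_factor

remaining = 300.0  # user's monthly allotment, in minutes
remaining -= 10 * drain_rate(logged_in_users=250, available_bandwidth_mbps=40)
print(remaining)  # 237.5: ten minutes of busy-hour use cost 62.5 minutes
```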

[0030] In an embodiment, there may be one or more entities (e.g., users, developers, other systems, and the like) connected to one or more components of the system (e.g., the image sensor array, the server, or any of the associated hardware and/or software). These entities may receive data from and/or transmit data to one or more components of the system in what may be referred to as a "session." For example, a user may view content from an image sensor array of the system, and may issue commands to change the pixels from the image sensor array that are transferred as part of that "session." In an embodiment, these sessions may have a "timeout" period, e.g., a period of no detected activity (e.g., input from the user), after which the system may stop transmitting data to the connected entity, or change the rate and/or type of data transmission from the system, e.g., the image sensor array and/or the server.

[0031] In an embodiment, the timeout period may be set by the image sensor array or the server, or by some other component of the system. In another embodiment, the timeout period may be set by the user. In an embodiment, the timeout period may be regulated by a bandwidth available to the user and/or to the image sensor array. For example, if there are many users logged into the system, the bandwidth available to the individual users logged into the system may be reduced, and the timeout period may also be reduced. In an embodiment, the timeout period may be optionally extended by the user, for example, after a warning.
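A session-timeout policy with bandwidth-sensitive shortening and user-initiated extension, as just described, might look like the following sketch; all policy constants and the class design are invented for illustration.

```python
import time

class Session:
    BASE_TIMEOUT = 300.0  # seconds of inactivity before throttling (assumed)

    def __init__(self):
        self.last_activity = time.time()
        self.timeout = self.BASE_TIMEOUT

    def record_activity(self):
        self.last_activity = time.time()

    def adjust_for_load(self, users_online, capacity=100):
        # Heavier load -> shorter timeout, so idle sessions free bandwidth sooner.
        self.timeout = self.BASE_TIMEOUT / max(1.0, users_online / capacity)

    def extend(self, seconds=120.0):
        # e.g., the user clicked "keep watching" after a warning.
        self.timeout += seconds

    def timed_out(self):
        return time.time() - self.last_activity > self.timeout
```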

[0032] While particular aspects of the present subject matter described herein have been shown and described, it will be apparent to those skilled in the art that, based upon the teachings herein, changes and modifications may be made without departing from the subject matter described herein and its broader aspects and, therefore, the appended claims are to encompass within their scope all such changes and modifications as are within the true spirit and scope of the subject matter described herein. It will be understood by those within the art that, in general, terms used herein, and especially in the appended claims (e.g., bodies of the appended claims) are generally intended as "open" terms (e.g., the term "including" should be interpreted as "including but not limited to," the term "having" should be interpreted as "having at least," the term "includes" should be interpreted as "includes but is not limited to," etc.).

[0033] It will be further understood by those within the art that if a specific number of an introduced claim recitation is intended, such an intent will be explicitly recited in the claim, and in the absence of such recitation no such intent is present. For example, as an aid to understanding, the following appended claims may contain usage of the introductory phrases "at least one" and "one or more" to introduce claim recitations. However, the use of such phrases should not be construed to imply that the introduction of a claim recitation by the indefinite articles "a" or "an" limits any particular claim containing such introduced claim recitation to claims containing only one such recitation, even when the same claim includes the introductory phrases "one or more" or "at least one" and indefinite articles such as "a" or "an" (e.g., "a" and/or "an" should typically be interpreted to mean "at least one" or "one or more"); the same holds true for the use of definite articles used to introduce claim recitations. In addition, even if a specific number of an introduced claim recitation is explicitly recited, those skilled in the art will recognize that such recitation should typically be interpreted to mean at least the recited number (e.g., the bare recitation of "two recitations," without other modifiers, typically means at least two recitations, or two or more recitations).

[0034] Furthermore, in those instances where a convention analogous to "at least one of A, B, and C, etc." is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B, and C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.). In those instances where a convention analogous to "at least one of A, B, or C, etc." is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B, or C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.). It will be further understood by those within the art that typically a disjunctive word and/or phrase presenting two or more alternative terms, whether in the description, claims, or drawings, should be understood to contemplate the possibilities of including one of the terms, either of the terms, or both terms unless context dictates otherwise. For example, the phrase "A or B" will be typically understood to include the possibilities of "A" or "B" or "A and B."

[0035] With respect to the appended claims, those skilled in the art will appreciate that recited operations therein may generally be performed in any order. Also, although various operational flows are presented in a sequence(s), it should be understood that the various operations may be performed in other orders than those which are illustrated, or may be performed concurrently. Examples of such alternate orderings may include overlapping, interleaved, interrupted, reordered, incremental, preparatory, supplemental, simultaneous, reverse, or other variant orderings, unless context dictates otherwise. Furthermore, terms like "responsive to," "related to," or other past-tense adjectives are generally not intended to exclude such variants, unless context dictates otherwise.

[0036] This application may make reference to one or more trademarks, e.g., a word, letter, symbol, or device adopted by one manufacturer or merchant and used to identify and/or distinguish his or her product from those of others. Trademark names used herein are set forth in such language that makes clear their identity, that distinguishes them from common descriptive nouns, that have fixed and definite meanings, or, in many if not all cases, are accompanied by other specific identification using terms not covered by trademark. In addition, trademark names used herein have meanings that are well-known and defined in the literature, or do not refer to products or compounds for which knowledge of one or more trade secrets is required in order to divine their meaning. All trademarks referenced in this application are the property of their respective owners, and the appearance of one or more trademarks in this application does not diminish or otherwise adversely affect the validity of the one or more trademarks. All trademarks, registered or unregistered, that appear in this application are assumed to include a proper trademark symbol, e.g., the circle R or bracketed capitalization (e.g., [trademark name]), even when such trademark symbol does not explicitly appear next to the trademark. To the extent a trademark is used in a descriptive manner to refer to a product or process, that trademark should be interpreted to represent the corresponding product or process as of the date of the filing of this patent application.

[0037] Throughout this application, the terms "in an embodiment," "in one embodiment," "in several embodiments," "in at least one embodiment," "in various embodiments," and the like, may be used. Each of these terms, and all such similar terms should be construed as "in at least one embodiment, and possibly but not necessarily all embodiments," unless explicitly stated otherwise. Specifically, unless explicitly stated otherwise, the intent of phrases like these is to provide non-exclusive and non-limiting examples of implementations of the invention. The mere statement that one, some, or many embodiments include one or more things or have one or more features, does not imply that all embodiments include one or more things or have one or more features, but also does not imply that such embodiments must exist. It is a mere indicator of an example and should not be interpreted otherwise, unless explicitly stated as such.

[0038] Those skilled in the art will appreciate that the foregoing specific exemplary processes and/or devices and/or technologies are representative of more general processes and/or devices and/or technologies taught elsewhere herein, such as in the claims filed herewith and/or elsewhere in the present application.

1. A thing/operation disclosure comprising:

a method substantially as shown and described in the detailed description and/or drawings and/or elsewhere herein.

2. A thing/operation disclosure comprising:

a device substantially as shown and described in the detailed description and/or drawings and/or elsewhere herein.

3. A thing/operation disclosure comprising:

a method including one or more of the modules and/or functions described in the detailed description and/or drawings and/or elsewhere herein.

4. A thing/operation disclosure comprising:

a device implementing one or more of the devices, stores, and/or interfaces described in the detailed description and/or drawings and/or elsewhere herein.

5. A computationally-implemented thing/operation disclosure, comprising:

first means for carrying out one or more first steps as shown and described in the detailed description and/or drawings and/or elsewhere herein; and

second means for carrying out one or more second steps as shown and described in the detailed description and/or drawings and/or elsewhere herein.

6. A computationally-implemented thing/operation disclosure, comprising:

circuitry for carrying out one or more first steps as shown and described in the detailed description and/or drawings and/or elsewhere herein.

7. A thing/operation disclosure, comprising:

a signal-bearing medium bearing:

one or more instructions for configuration as one or more of the modules and/or functions described in the detailed description and/or drawings and/or elsewhere herein.

8. A thing/operation disclosure defined by a computational language, comprising: one or more interchained physical machines ordered as one or more of the modules and/or functions described in the detailed description and/or drawings and/or elsewhere herein.

9. A thing/operation disclosure, comprising:

one or more general purpose integrated circuits configured to receive instructions to configure as one or more of the modules and/or functions described in the detailed description and/or drawings and/or elsewhere herein.

10. A thing/operation disclosure comprising:

an integrated circuit configured to purpose itself as one or more of the modules and/or functions described in the detailed description and/or drawings and/or elsewhere herein.

11. A thing/operation disclosure, comprising:

one or more elements of programmable hardware programmed to function as one or more of the modules and/or functions described in the detailed description and/or drawings and/or elsewhere herein.

This Roman Numeral Section, And the Corresponding Figures, Were Copied From a Pending United States Application into this PCT Application in View of Meeting Bar Date, So Each Figure Under this Roman Numeral Section Should Be Read "As If Having Roman Numeral Denotation Sufficient to Distinguish from Other Figures Copied From Other Applications in View of Bar Date," Such Clerical Issues to Be Cured by Subsequent Amendment

Devices, Methods and Systems for Visual Imaging Arrays

DETAILED DESCRIPTION

High-Level System Architecture

[00142] Fig. 1, including Figs. 1-A through 1-AN, shows partial views that, when assembled, form a complete view of an entire system, of which at least a portion will be described in more detail. An overview of the entire system of Fig. 1 is now described herein, with a more specific reference to at least one subsystem of Fig. 1 to be described later with respect to Figs. 2-14D.

[00143] Fig. 1 shows various implementations of the overall system. At a high level, Fig. 1 shows various implementations of a multiple user video imaging array (hereinafter interchangeably referred to as a "MUVIA"). It is noted that the designation "MUVIA" is merely shorthand and descriptive of an exemplary embodiment, and not a limiting term. Although "multiple user" appears in the name MUVIA, multiple users or even a single user are not required. Further, "video" is used in the designation "MUVIA," but MUVIA systems also may capture still images, multiple images, audio data, electromagnetic waves outside the visible spectrum, and other data as will be described herein. Further, "imaging array" may be used in the MUVIA designation, but the image sensor in MUVIA is not necessarily an array or even multiple sensors (although commonly implemented as larger groups of image sensors, single-sensor implementations are also contemplated), and "array" here does not necessarily imply any specific structure, but rather any grouping of one or more sensors.

[00144] Generally, although not necessarily required, a MUVIA system may include one or more of a user device (e.g., hereinafter interchangeably referred to as a "client device," in recognition that a user may not necessarily be a human, living, or organic), a server, and an image sensor array. A "server" in the context of this application may refer to any device, program, or module that is not directly connected to the image sensor array or to the client device, including any and all "cloud" storage, applications, and/or processing.

[00145] For example, in an embodiment, e.g., as shown in Fig. 1-A, Fig. 1-K, Fig. 1-U, Fig. 1-AE, and Fig. 1-AF, the system may include one or more of image sensor array 3200, array local storage and processing module 3300, server 4000, and user device 5200. Each of these portions will be discussed in more detail herein.

[00146] Referring now to Fig. 1-A, Fig. 1-A depicts user device 5200, which is a device that may be operated or controlled by a user of a MUVIA system. It is noted here that "user" is merely provided as a designation for ease of understanding, and does not imply control by a human or other organism, sentient or otherwise. In an embodiment, for example, in a security-type embodiment, the user device 5200 may be mostly or completely unmonitored, or may be monitored by an artificial intelligence, or by a combination of artificial intelligence, pseudo-artificial intelligence (e.g., that is intelligence amplification) and human intelligence.

[00147] User device 5200 may be, but is not limited to, a wearable device (e.g., glasses, goggles, headgear, a watch, clothing), an implant (e.g., a retinal-implant display), a computer of any kind (e.g., a laptop computer, desktop computer, mainframe, server, etc.), a tablet or other portable device, a phone or other similar device (e.g., smartphone, personal digital assistant), a personal electronic device (e.g., music player, CD player), a home appliance (e.g., a television, a refrigerator, or any other so-called "smart" device), a piece of office equipment (e.g., a copier, scanner, fax device, etc.), a camera or other camera-like device, a video game system, an entertainment/media center, or any other electrical equipment that has a functionality of presenting an image (whether visually, e.g., on a screen, or by other sensory-stimulating means).

[00148] User device 5200 may be capable of presenting an image, which, for purposes of clarity and conciseness, will be referred to as displaying an image, although the presentation may occur through forms other than generating light waves in the visible spectrum, and the image is not required to be presented at all times or even at all. For example, in an embodiment, user device 5200 may receive images from server 4000 (or directly from the image sensor array 3200, as will be discussed herein), and may store the images for later viewing, or for processing internally, or for any other reason.

[00149] Referring again to Fig. 1-A, in an embodiment, user device 5200 may include a user selection accepting module 5210. User selection accepting module 5210 may be configured to receive user input about what the user wants to see. For example, as shown in Fig. 1-A in the exemplary interface 5212, the user selection accepting module 5210 may show an image from image sensor array 3200, and the user may "pan" and "zoom" the image using any known interface, including, but not limited to, keyboard, mouse, touch, haptic, augmented reality interface, voice command, or nonverbal motion commands (e.g., as part of a video game system interface, e.g., the Microsoft Kinect). It is noted, and will be discussed in more detail herein, that the camera itself is not "zooming" or "panning," because the camera does not move. Rather, different pixels that are captured by the image sensor array 3200 are kept by the image sensor array 3200 and transmitted to the server 4000.
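By way of a non-limiting illustration, the following Python sketch shows one way such a "pan"/"zoom" gesture could be mapped onto a rectangle of pixels of the stationary array; the array dimensions, coordinate convention, and all names are hypothetical assumptions rather than anything specified in this disclosure.

```python
# Hypothetical mapping from a user's viewport to array pixels. The sensors
# never move; "panning" moves the center, "zooming" changes the scale.
ARRAY_W, ARRAY_H = 120_000, 60_000   # assumed total pixels across the array

def viewport_to_region(cx: float, cy: float, zoom: float,
                       out_w: int, out_h: int) -> tuple:
    """Return (x0, y0, x1, y1): the array pixels backing the user's view.
    cx, cy give the view center in array coordinates; zoom >= 1 means fewer
    array pixels are needed per screen pixel."""
    half_w = out_w / (2 * zoom)
    half_h = out_h / (2 * zoom)
    x0 = max(int(cx - half_w), 0)
    y0 = max(int(cy - half_h), 0)
    x1 = min(int(cx + half_w), ARRAY_W)
    y1 = min(int(cy + half_h), ARRAY_H)
    return x0, y0, x1, y1

print(viewport_to_region(60_000, 30_000, zoom=4.0, out_w=1920, out_h=1080))
```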

[00150] In an embodiment, the user selection accepting module may accept a selection of a particular thing, e.g., a building, an animal, or any other object whose representation is present on the screen. Moreover, a user may use a text box to "search" the image for a particular thing, and processing, done at the user device 5200 or at the server 4000, may determine the image and the zoom level for viewing that thing. The search for a particular thing may include a generic search, e.g., "search for people," or "search for penguins," or a more specific search, e.g., "search for the Space Needle," or "search for the White House." The search for a particular thing may take on any known contextual search, e.g., an address, a text string, or any other input.

[00151] In an embodiment, the "user selection" facilitated by the user selection accepting module 5210 may not involve a user at all. For example, in an embodiment, e.g., in a security embodiment, the user selection may be handled completely by machine, and may include "select any portion of the image with movement," or "select any portion of the image in which a person is recognized," or "select any portion of the image in which a particular person, e.g., a person on the FBI most wanted list" is recognized.

[00152] Referring again to Fig. 1-A, in an embodiment, user device 5200 may include a user selection transmitting module 5220. The user selection transmitting module 5220 may take the user selection from user selection accepting module 5210, and transmit the selection to the server 4000. The transmission may include some pre-processing, for example, the user device 5200 may determine the size and parameters of the image prior to sending the request to the server 4000, or that processing may be handled by the server 4000. Following the thick-line arrow leftward from user selection transmitting module 5220 through to Fig. 1-K, the transmission goes to server 4000, as will be discussed herein. It is noted that the transmission to the server 4000 may also include data about the user device, for example, the screen resolution, the window size, the type of device, an identity of the user, a level of service the user has paid for (in embodiments in which such services are prioritized by the camera/server), other capabilities of the device, e.g., framerate, and the like.
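For illustration only, a request of this kind might carry a payload along the following lines; the field names and structure are hypothetical, since the disclosure does not specify a wire format, and the values echo the viewport example sketched above.

```python
# A purely hypothetical request payload combining the pixel selection with
# the device metadata enumerated above; not a format from the disclosure.
request = {
    "region": {"x0": 59760, "y0": 29865, "x1": 60240, "y1": 30135},
    "device": {
        "screen_resolution": (1920, 1080),
        "window_size": (1280, 720),
        "device_type": "tablet",
        "max_framerate": 30,
    },
    "user": {"id": "user-123", "service_tier": "premium"},
}
print(request["region"])
```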

[00153] Referring again to Fig. 1-A, Fig. 1-A also includes a selected image receiving module 5230 and a user selection presenting module 5240, which will be discussed in more detail herein, with respect to the dataflow of this embodiment.

[00154] Referring now to Fig. 1-K, Figs. 1-K and 1-U show an embodiment of a server 4000 that communicates with one or both of user device 5200 and array local storage and processing module 3300. Server 4000 may be a single computing device, or may be many computing devices, which may or may not be in proximity with each other.

[00155] Referring again to Fig. 1-K, server 4000 may include a user request reception module 4010. The user request reception module 4010 may receive the transmitted request from user selection transmitting module 5220. The user request reception module 4010 may then turn over processing to user request validation module 4020, which may perform, among other things, a check to make sure the user is not requesting more resolution than what their device can handle. For example, if the server has learned (e.g., through gathered information, or through information that was transmitted with the user request or in a same session as the user request) that the user is requesting a 1920x1080 resolution image, and the maximum resolution for the device is 1334x750, then the request will be modified so that no more than the maximum resolution that can be handled by the device is requested. In an embodiment, this may conserve the bandwidth required to transmit from the MUVIA to the server 4000 and/or the user device 5200.
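A minimal sketch of this validation step, assuming hypothetical names and width-by-height tuples:

```python
# Illustrative clamp: never ask the array for more resolution than the
# requesting device can display. Names are ours, not the disclosure's.
def clamp_request(requested: tuple, device_max: tuple) -> tuple:
    rw, rh = requested
    mw, mh = device_max
    return (min(rw, mw), min(rh, mh))

print(clamp_request((1920, 1080), (1334, 750)))  # -> (1334, 750)
```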

[00156] Referring again to Fig. 1-K, in an embodiment, server 4000 may include a user request latency management module 4030. User request latency management module 4030 may, in conjunction with user device 5200, attempt to reduce the latency from the time a specific image is requested by user device 5200 to the time the request is acted upon and data is transmitted to the user. The details for this latency management will be described in more detail herein, with varying techniques that may be carried out by any or all of the devices in the chain (e.g., user device, camera array, and server). As an example, in an embodiment, a lower resolution version of the image, e.g., that is stored locally or on the server, may be sent to the user immediately upon the request, and then that image is updated with the actual image taken by the camera. In an embodiment, user request latency management module 4030 also may handle static gap-filling, that is, if the image captured by the camera is unchanging, e.g., has not changed for a particular period of time, then a new image does not need to be captured, and an older image, which may be stored on server 4000, may be transmitted to the user device 5200. This process also will be discussed in more detail herein.
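The following hedged sketch illustrates the two techniques just described, serving a cached low-resolution placeholder immediately and skipping a fresh capture when the scene appears static; `fetch_from_array`, the cache layout, and the five-second window are all assumptions of ours.

```python
import time

STATIC_WINDOW_S = 5.0   # assumed window within which a scene counts as static

def serve_request(region, cache, fetch_from_array):
    """Yield a cached placeholder at once (if any), then the fresh pixels,
    unless the cached copy is recent enough to stand in for a new capture."""
    entry = cache.get(region)
    if entry is not None:
        placeholder, captured_at = entry
        yield placeholder                  # immediate, possibly stale, image
        if time.time() - captured_at < STATIC_WINDOW_S:
            return                         # static gap-filling: reuse cache
    fresh = fetch_from_array(region)       # otherwise request the real pixels
    cache[region] = (fresh, time.time())
    yield fresh

cache = {}
frames = list(serve_request((0, 0, 100, 100), cache, lambda r: "fresh-pixels"))
print(frames)   # first call: nothing cached yet, so only the fresh frame
```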

[00157] Referring now to Fig. 1-U, which shows more of server 4000, in an embodiment, server 4000 may include a consolidated user request transmission module 4040, which may be configured to consolidate all the user requests, perform any necessary pre-processing on those requests, and send the request for particular sets of pixels to the array local storage and processing module 3300. The process for consolidating the user requests and performing pre-processing will be described in more detail herein with respect to some of the other exemplary embodiments. In this embodiment, however, server consolidated user request transmission module 4040 transmits the request (exiting leftward from Fig. 1-U and traveling downward to Fig. 1-AE, through a pathway identified in Fig. 1-AE as lower-bandwidth communication from remote server 3515). It is noted here that "lower bandwidth communication" does not necessarily mean "low bandwidth" or imply any specific number about the bandwidth; it is simply lower than the relatively higher-bandwidth communication 3505 from the actual image sensor array to the array local storage and processing module 3300, which will be discussed in more detail herein.
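One plausible consolidation step is sketched below: rectangles that overlap are merged into a single bounding rectangle so that shared pixels are requested from the array only once. The single-pass merge policy is a simplification of ours, not a requirement of the disclosure (chained overlaps could require repeated passes).

```python
# Hypothetical consolidation of per-user pixel rectangles (x0, y0, x1, y1).
def overlaps(a, b):
    ax0, ay0, ax1, ay1 = a
    bx0, by0, bx1, by1 = b
    return ax0 < bx1 and bx0 < ax1 and ay0 < by1 and by0 < ay1

def consolidate(requests):
    merged = []
    for rect in requests:
        for i, m in enumerate(merged):
            if overlaps(rect, m):
                # Replace the stored rectangle with the union bounding box.
                merged[i] = (min(rect[0], m[0]), min(rect[1], m[1]),
                             max(rect[2], m[2]), max(rect[3], m[3]))
                break
        else:
            merged.append(rect)
    return merged

print(consolidate([(0, 0, 100, 100), (50, 50, 150, 150), (300, 300, 400, 400)]))
```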

[00158] Referring again to Fig. 1-U, server 4000 also may include requested pixel reception module 4050, user request preparation module 4060, and user request transmission module 4070 (shown in Fig. 1-T), which will be discussed in more detail herein, with respect to the dataflow of this embodiment.

[00159] Referring now to Figs. 1-AE and 1-AF, Figs. 1-AE and 1-AF show an image sensor array ("ISA") 3200 and an array local storage and processing module 3300, each of which will now be described in more detail.

[00160] Image sensor array 3200 may include one or more image sensors that may, in an embodiment, be statically pointed at a particular object or scene. Image sensor array 3200 may be a single image sensor, or more commonly, may be a group of individual image sensors 3201 that are combined to create a larger field of view. For example, in an embodiment, ten megapixel sensors may be used for each individual image sensor 3201. With twelve of these sensors, the effective field of view, loss-less zoom, and so forth may be increased substantially. These numbers are for example only, and any number of sensors and/or megapixel image sensor capacities may be used.

[00161] The use of many individual sensors may create a very large number of pixels captured for each exposure of the image sensor array 3200. Thus, these pixels are transmitted via a higher bandwidth communication 3505 to the array local storage and processing module 3300. In an embodiment, array local storage and processing module 3300 is integrated into the image sensor array 3200. In another embodiment, the array local storage and processing module 3300 is separate from, but directly connected (e.g., via a USB 3.0 cable), to the image sensor array 3200. It is noted that "higher bandwidth communication 3505" does not require a specific amount of bandwidth, but only that the bandwidth for this communication is relatively higher than the bandwidth communication from the array local processing module 3300 to the remote server, which may be, but is not required to be, located further away temporally.

[00162] It is noted that, because of the large number of pixels captured by image sensor array 3200, mechanical changes to the image sensor array are not generally required, although such mechanical changes are not excluded from these embodiments. For example, because the array has a very large field of view, with very high resolution, "pan" and "zoom" functions may be handled optically, rather than by mechanically changing the focal point of the lenses or by physically pointing the array at a different location. This may reduce the complexity required of the device, and also may improve the speed at which different views may be generated by the image sensor array 3200.

[00163] Referring again to Fig. 1-AE, the image sensor array 3200 may capture an image that is received by image capturing module 3305. Image capturing module 3305 may take the captured image and compare it to a consolidated user request, e.g., which is provided by a consolidated user request reception module 3310. Consolidated user request reception module 3310 may receive the communication from server 4000 regarding which pixels of the image have been requested. Through use of the consolidated user request and the captured image, pixel selection module 3320 may select the pixels that have been specifically requested by the user, and mark those pixels for transmission back to the server.
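As a non-limiting illustration, pixel selection against a consolidated request could amount to simple array cropping; the use of NumPy and the (height, width, channels) frame layout are assumptions of ours, not part of the disclosure.

```python
import numpy as np

def select_pixels(frame: np.ndarray, regions):
    """frame is (H, W, 3); return {region: cropped_pixels} for each
    consolidated rectangle (x0, y0, x1, y1) to be sent to the server."""
    out = {}
    for (x0, y0, x1, y1) in regions:
        out[(x0, y0, x1, y1)] = frame[y0:y1, x0:x1].copy()
    return out

frame = np.zeros((1080, 1920, 3), dtype=np.uint8)   # stand-in capture
kept = select_pixels(frame, [(100, 100, 400, 300)])
print(next(iter(kept.values())).shape)               # (200, 300, 3)
```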

[00164] After the pixels to be kept are identified, the other pixels that are not to be kept are removed, e.g., decimated at unused pixel decimation module 3330. In an embodiment, these pixels are simply discarded, e.g., not stored in a long-term memory but instead removed to a digital trash 3317. In another embodiment, some or all of these pixels are stored in a local memory, e.g., local memory 3315. From here, these pixels may be transmitted to various locations at off-peak times, may be kept for image processing by the array local processing module 3300, or may be subject to other manipulations or processing separate from the user requests.

[00165] Referring again to Fig. 1-AE, the selected pixels then may be transmitted to the server 4000 using selected pixel transmission module 3340. Selected pixel transmission module 3340 may include any transmission equipment necessary, e.g., cellular radio, wireless adapter, and the like, depending on the format of communication. In an embodiment, only those pixels which have been requested are transmitted to the server via lower-bandwidth communication 3510. Similarly to lower-bandwidth communication 3515, the lower-bandwidth communication 3510 does not refer to a specific amount of bandwidth, just that the amount of bandwidth is relatively lower than higher-bandwidth communication 3505.

[00166] It is noted that more pixels than what are specifically requested by the user may be transmitted, in certain embodiments. For example, the array local processing module 3300 may send pixels that border the user's requested area, but are outside the user's requested area. In an embodiment, as will be discussed herein, those pixels may be sent at a different resolution or using a different kind of compression. In another embodiment, the additional pixels may merely be sent the same as the requested pixels. In still another embodiment, server 4000 may expand the user requested areas, so that array local processing module 3300 may send only the requested pixels, but the requested pixels cover more area than what the user originally requested. These additional pixels may be transmitted and "cached" by the server or local device, which may be used to decrease latency times, in a process that will be discussed more herein.
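A short sketch of such region expansion, assuming a hypothetical 10% border margin (the disclosure does not specify one):

```python
# Illustrative expansion of a requested rectangle before it reaches the array,
# clamped to the frame; the margin value is an assumption for illustration.
def expand_region(region, frame_w, frame_h, margin=0.10):
    x0, y0, x1, y1 = region
    dx = int((x1 - x0) * margin)
    dy = int((y1 - y0) * margin)
    return (max(x0 - dx, 0), max(y0 - dy, 0),
            min(x1 + dx, frame_w), min(y1 + dy, frame_h))

print(expand_region((100, 100, 400, 300), 1920, 1080))  # (70, 80, 430, 320)
```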

[00167] Referring back to Fig. 1-U, the transmitted pixels transmitted from selected pixel transmission module 3340 of array local processing module 3300 may be received by server 4000, e.g., at requested pixel reception module 4050. Requested pixel reception module 4050 may receive the requested pixels and turn them over to user request preparation module 4060, which may "unpack" the requested pixels, e.g., determining which pixels go to which user, and at what resolutions, along with any post-processing, including image adjustment, adding in missing cached data, or adding additional data to the images (e.g., advertisements or other data). In an embodiment, server 4000 also may include a user request transmission module 4070, which may be configured to transmit the requested pixels back to the user device 5200.

[00168] Referring again to Fig. 1-A, user device 5200 may include a selected image receiving module 5230, which may receive the pixels that were sent by the server 4000, and user selection presenting module 5240, which may display the requested pixels to the user, e.g., by showing them on a screen of the device. In an embodiment, the display of the image may be carried out through the exemplary interface, which allows a cycle of user requests and new images to be shown as the user navigates through what is seen on the MUVIA, e.g., as shown in Fig. 1-A.

[00169] Figs. 1-B, 1-C, 1-M, 1-W, 1-AG, and 1-AH show another embodiment of the MUVIA system, in which multiple user devices 5510, 5520, and 5530 may request images captured by the same image sensor array 3200.

[00170] Referring now to Figs. 1-B and 1-C, user device 5510, user device 5520, and user device 5530 are shown. In an embodiment, user devices 5510, 5520, and 5530 may have some or all of the same components as user device 5200, which are not shown here for clarity and ease of understanding the drawing. For each of user devices 5510, 5520, and 5530, exemplary screen resolutions have been chosen. There is nothing specific about the numbers that have been chosen; they are merely illustrative, and any other numbers could have been chosen in their place.

[00171] For example, in an embodiment, referring to Fig. 1-B, user device 5510 may have a screen resolution of 1920x1080 (e.g., colloquially referred to as "HD quality"). User device 5510 may send an image request to the server 4000, and may also send data regarding the screen resolution of the device.

[00172] Referring now to Fig. 1-C, user device 5520 may have a screen resolution of 1334x750. User device 5520 may send another image request to the server 4000, and, in an embodiment, instead of sending data regarding the screen resolution of the device, may send data that identifies what kind of device it is (e.g., an Apple-branded smartphone). Server 4000 may use this data to determine the screen resolution for user device 5520 through an internal database, or through contacting an external source, e.g., a manufacturer of the device or a third party supplier of data about devices.
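A hypothetical sketch of that lookup, standing in for the "internal database or external source" mentioned above; the device names, table contents, and fallback are invented for illustration.

```python
# Invented lookup table standing in for the server's device database.
KNOWN_DEVICES = {
    "phone-model-a": (1334, 750),
    "tablet-model-b": (2048, 1536),
}
DEFAULT_RESOLUTION = (640, 480)   # assumed fallback for unknown devices

def resolve_resolution(request: dict) -> tuple:
    if "screen_resolution" in request:          # device sent it directly
        return tuple(request["screen_resolution"])
    device_type = request.get("device_type")    # else infer from device type
    return KNOWN_DEVICES.get(device_type, DEFAULT_RESOLUTION)

print(resolve_resolution({"device_type": "phone-model-a"}))  # (1334, 750)
print(resolve_resolution({}))                                # (640, 480)
```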

[00173] Referring again to Fig. 1-C, user device 5530 may have a screen resolution of 640x480, and may send the request by itself to the server 4000, without any additional data. In addition, server 4000 may receive independent requests from various users to change their current viewing area on the device.

[00174] Referring now to Fig. 1-M, server 4000 may include user request reception module 4110. User request reception module 4110 may receive requests from multiple user devices, e.g., user devices 5510, 5520, and 5530. Server 4000 also may include an independent user view change request reception module 4115, which, in an embodiment, may be a part of user request reception module 4110, and may be configured to receive requests from users that are already connected to the system, to change the view of what they are currently seeing.

[00175] Referring again to Fig. 1-M, server 4000 may include relevant pixel selection module 4120 configured to combine the user selections into a single area, as shown in Fig. 1-M. It is noted that, in an embodiment, the different user devices may request areas that overlap each other. In this case, there may be one or more overlapping areas, e.g., overlapping areas 4122. In an embodiment, the overlapping areas are only transmitted once, in order to save data/transmission costs and increase efficiency.
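As a back-of-the-envelope illustration of the saving from transmitting overlapping areas only once, the summed area of the raw requests can be compared against the area of their union; the coarse-grid computation below is purely illustrative and not a method from the disclosure.

```python
# Approximate union area of requested rectangles on a coarse grid.
def union_area(rects, step=10):
    cells = set()
    for x0, y0, x1, y1 in rects:
        for x in range(x0, x1, step):
            for y in range(y0, y1, step):
                cells.add((x // step, y // step))
    return len(cells) * step * step

rects = [(0, 0, 200, 200), (100, 100, 300, 300)]
raw = sum((x1 - x0) * (y1 - y0) for x0, y0, x1, y1 in rects)
print(raw, union_area(rects))   # union is smaller: the overlap ships once
```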

[00176] Referring now to Fig. 1-W, server 4000 may include selected pixel transmission to ISA module 4130. Module 4130 may take the relevant selected pixels, and transmit them to the array local processing module 3400 of image sensor array 3200. Selected pixel transmission to ISA module 4130 may include communication components, which may be shared with other transmission and/or reception modules.

[00177] Referring now to Fig. 1-AG, array local processing module 3400 may communicate with image sensor array 3200. Similarly to Figs. 1-AE and 1-AF, Figs. 1-AG and 1-AH show array local processing module 3400 and image sensor array 3200, respectively.

[00178] Image sensor array 3200 may include one or more image sensors that may, in an embodiment, be statically pointed at a particular object or scene. Image sensor array 3200 may be a single image sensor, or more commonly, may be a group of individual image sensors 3201 that are combined to create a larger field of view. For example, in an embodiment, ten megapixel sensors may be used for each individual image sensor 3201. With twelve of these sensors, the effective field of view, loss-less zoom, and so forth may be increased substantially. These numbers are for example only, and any number of sensors and/or megapixel image sensor capacities may be used.

[00179] The use of many individual sensors may create a very large number of pixels captured for each exposure of the image sensor array 3200. Thus, these pixels are transmitted via a higher bandwidth communication 3505 to the array local storage and processing module 3400. In an embodiment, array local storage and processing module 3400 is integrated into the image sensor array. In another embodiment, the array local storage and processing module 3400 is separate from, but directly connected (e.g., via a USB 3.0 cable), to the image sensor array 3200. It is noted that "higher bandwidth communication 3505" does not require a specific amount of bandwidth, but only that the bandwidth for this communication is relatively higher than the bandwidth communication from the array local processing module 3400 to the remote server, which may be, but is not required to be, located further away temporally.

[00180] It is noted that, because of the large number of pixels captured by image sensor array 3200, mechanical changes to the image sensor array are not generally required, although such mechanical changes are not excluded from these embodiments. For example, because the array has a very large field of view, with very high resolution, "pan" and "zoom" functions may be handled optically, rather than by mechanically changing the focal point of the lenses or by physically pointing the array at a different location. This may reduce the complexity required of the device, and also may improve the speed at which different views may be generated by the image sensor array 3200.

[00181] Referring again to Fig. 1-AG, the image sensor array 3200 may capture an image and compare it to a consolidated user request, e.g., which is provided by a consolidated user request reception module 3410. Consolidated user request reception module 3410 may receive the communication from server 4000 regarding which pixels of the image have been requested. Through use of the consolidated user request and the captured image, pixel selection module 3420 may select the pixels that have been specifically requested by the user, and mark those pixels for transmission back to the server.

[00182] After the pixels to be kept are identified, the other pixels that are not to be kept are removed, e.g., decimated at unused pixel decimation module 3430. In an embodiment, these pixels are simply discarded, e.g., not stored in a long-term memory but instead removed to a digital trash 3417. In another embodiment, some or all of these pixels are stored in a local memory, e.g., local memory 3415. From here, these pixels may be transmitted to various locations at off-peak times, may be kept for image processing by the array local processing module 3400, or may be subject to other manipulations or processing separate from the user requests.

[00183] Referring again to Fig. 1-AG, array local processing module 3400 may include flagged selected pixel transmission module 3440, which takes the pixels identified as requested (e.g., "flagged") and transmits them back to the server 4000 for further processing. Similarly to as previously described, this transmission may utilize a lower-bandwidth channel, and module 3440 may include all necessary hardware to effect that lower-bandwidth transmission to server 4000.

[00184] Referring again to Fig. 1-W, the flagged selected pixel transmission module 3440 of array local processing module 3400 may transmit the flagged pixels to server 4000. Specifically, flagged selected pixel transmission module 3440 may transmit the pixels to flagged selected pixel reception from ISA module 4140 of server 4000, as shown in Fig. 1-W.

[00185] Referring again to Fig. 1-W, server 4000 also may include flagged selected pixel separation and duplication module 4150, which may, effectively, reverse the process of combining the pixels from the various selections, duplicating overlapping areas where necessary, and creating the requested images for each of the user devices that requested images. Flagged selected pixel separation and duplication module 4150 also may include the post-processing done to the image, including filling in cached versions of images, image adjustments based on the device preferences and/or the user preferences, and any other image post-processing.
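A minimal sketch of this separation step, assuming the consolidated pixels arrive as a single block with a known origin in array coordinates; duplicated overlap falls out naturally from the per-user crops. NumPy, the block layout, and all names are assumptions of ours.

```python
import numpy as np

def separate(consolidated: np.ndarray, consolidated_origin, user_requests):
    """consolidated_origin is the (x, y) of the block's top-left corner in
    array coordinates; user_requests maps user_id -> (x0, y0, x1, y1).
    Overlapping requests are simply copied twice, once per user."""
    ox, oy = consolidated_origin
    views = {}
    for user_id, (x0, y0, x1, y1) in user_requests.items():
        views[user_id] = consolidated[y0 - oy:y1 - oy, x0 - ox:x1 - ox].copy()
    return views

block = np.zeros((400, 400, 3), dtype=np.uint8)   # stand-in consolidated pixels
views = separate(block, (100, 100), {"u1": (100, 100, 300, 300),
                                     "u2": (200, 200, 500, 500)})
print({k: v.shape for k, v in views.items()})
```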

[00186] Referring now to Fig. 1-M (as data flows "northward" from Fig. 1-W from module 4150), server 4000 may include pixel transmission to user device module 4160, which may be configured to transmit the pixels that have been separated out and processed to the specific users that requested the image. Pixel transmission to user device module 4160 may handle the transmission of images to the user devices 5510, 5520, and 5530. In an embodiment, pixel transmission to user device module 4160 may have some or all components in common with user request reception module 4110.

[00187] Following the arrow of data flow to the right and upward from module 4160 of server 4000, the requested user images arrive at user device 5510, user device 5520, and user device 5530, as shown in Figs. 1-B and 1-C. The user devices 5510, 5520, and 5530 may present the received images as previously discussed and/or as further discussed herein.

[00188] Referring again to Fig. 1, Figs. 1-E, 1-O, 1-Y, 1-AH, and 1-AI depict a MUVIA implementation according to an embodiment. In an embodiment, referring now to Fig. 1-E, a user device 5600 may include a target selection reception module 5610. Target selection reception module 5610 may be a component that allows the user to select a "target" from the image, that is, a point of interest from the image. For example, in the shown example, the MUVIA array is pointed at a football stadium, e.g., CenturyLink Field. As an example, a user may select one of the football players visible on the field as a "target." This may be facilitated by a target presentation module, e.g., target presentation module 5612, which may present one or more images (e.g., which may be various versions of images from MUVIA, at different resolutions or not up-to-date) from which the user may select the target, e.g., the football player.

[00189] In an embodiment, target selection reception module 5610 may include an audible target selection module 5614 which may be configured to allow the user to select a target using audible commands, without requiring physical interaction with a device.

[00190] Referring again to Fig. 1, e.g., Fig. 1-E, in an embodiment, user device 5600 may include selected target transmission module 5620. Selected target transmission module 5620 may be configured to take the target selected by the user, and transmit the selected target to the server 4000.

[00191] Referring now to Fig. 1-O, Fig. 1-O (and Fig. 1-Y to the direct "south" of Fig. 1-O) shows an embodiment of server 4000. For example, in an embodiment, server 4000 may include a selected target reception module 4210. In an embodiment, selected target reception module 4210 of server 4000 may receive the selected target from the user device 5600. The selected target data may take various formats, e.g., it may be image data, it may be metadata that identifies the selected target, it may be some other designation, e.g., an ID number, a tracking number, or a piece of information, like a license plate or a social security number. The selected target data may be an address or a physical description, or any other instantiation of data that can be used to identify something.

[00192] Referring again to Fig. 1-O, in an embodiment, server 4000 may include selected target identification module 4220, which may be configured to take the target data received by selected target reception module 4210 and determine an image that needs to be captured in order to obtain an image that contains the selected target (e.g., in the shown example, the football player). In an embodiment, selected target identification module 4220 may use images previously received (or, in an embodiment, current images) from the image sensor array 3200 to determine the parameters of an image that contains the selected target. For example, in an embodiment, lower-resolution images from the image sensor array 3200 may be transmitted to server 4000 for determining where the target is located within the image, and then specific requests for portions of the image may be transmitted to the image sensor array 3200, as will be discussed herein.

[00193] In an embodiment, server 4000 may perform processing on the selected target data, and/or on image data that is received, in order to create a request that is to be transmitted to the image sensor array 3200. For example, in the given example, the selected target data is a football player. The server 4000, that is, selected target identification module 4220 may perform image recognition on one or more images captured from the image sensor array to determine a "sector" of the entire scene that contains the selected target. In another embodiment, the selected target identification module 4220 may use other, external sources of data to determine where the target is. In yet another embodiment, the selected target data was selected by the user from the scene displayed by the image sensor array, so such processing may not be necessary.

[00194] Referring again to Fig. 1-O, in an embodiment, server 4000 may include pixel information selection module 4230, which may select the pixels needed to capture the target, and which may determine the size of the image that should be transmitted from the image sensor array. The size of the image may be determined based on a type of target that is selected, one or more parameters (set by the user, by the device, or by the server, which may or may not be based on the selected target), by the screen resolution of the device, or by any other algorithm. Pixel information selection module 4230 may determine the pixels to be captured in order to express the target, and may update based on changes in the target's status (e.g., if the target is moving, e.g., in the football example, once a play has started and the football player is moving in a certain direction).
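The following sketch illustrates one way such a capture box could be sized around a target and re-centered as the target moves; the padding factor, frame dimensions, and all names are assumptions for illustration only.

```python
# Hypothetical capture box around a tracked target in array coordinates.
def capture_box(target_center, target_size, pad=2.0,
                frame_w=120_000, frame_h=60_000):
    """Return (x0, y0, x1, y1) covering the target plus padding, clamped."""
    cx, cy = target_center
    tw, th = target_size
    half_w, half_h = tw * pad / 2, th * pad / 2
    return (max(int(cx - half_w), 0), max(int(cy - half_h), 0),
            min(int(cx + half_w), frame_w), min(int(cy + half_h), frame_h))

print(capture_box((40_000, 20_000), (400, 800)))
# On each new frame, re-run with the tracker's updated center, e.g., once a
# play starts and the player moves downfield.
```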

[00195] Referring now to Fig. 1-Y, Fig. 1-Y includes more of server 4000 according to an embodiment. In an embodiment, server 4000 may include pixel information transmission to ISA module 4240. Pixel information transmission to ISA module 4240 may transmit the selected pixels to the array local processing module 3500 associated with image sensor array 3200.

[00196] Referring now to Figs. 1-AH and 1-AI, Fig. 1-AH depicts an image sensor array 3200, which in this example is pointed at a football stadium, e.g., CenturyLink Field. Image sensor array 3200 may include one or more image sensors that may, in an embodiment, be statically pointed at a particular object or scene. Image sensor array 3200 may be a single image sensor, or more commonly, may be a group of individual image sensors 3201 that are combined to create a larger field of view. For example, in an embodiment, ten megapixel sensors may be used for each individual image sensor 3201. With twelve of these sensors, the effective field of view, loss-less zoom, and so forth may be increased substantially. These numbers are for example only, and any number of sensors and/or megapixel image sensor capacities may be used.

[00197] The use of many individual sensors may create a very large number of pixels captured for each exposure of the image sensor array 3200. Thus, these pixels are transmitted via a higher bandwidth communication 3505 to the array local storage and processing module 3500. In an embodiment, array local storage and processing module 3500 is integrated into the image sensor array. In another embodiment, the array local storage and processing module 3500 is separate from, but directly connected (e.g., via a USB 3.0 cable), to the image sensor array 3200. It is noted that "higher bandwidth communication 3505" does not require a specific amount of bandwidth, but only that the bandwidth for this communication is relatively higher than the bandwidth communication from the array local processing module 3500 to the remote server, which may be, but is not required to be, located further away temporally.

[00198] It is noted that, because of the large number of pixels captured by image sensor array 3200, mechanical changes to the image sensor array are not generally required, although such mechanical changes are not excluded from these embodiments. For example, because the array has a very large field of view, with very high resolution, "pan" and "zoom" functions may be handled optically, rather than by mechanically changing the focal point of the lenses or by physically pointing the array at a different location. This may reduce the complexity required of the device, and also may improve the speed at which different views may be generated by the image sensor array 3200.

[00199] Referring again to Fig. 1-AE, the image sensor array 3200 may capture an image that is received by image capturing module 3305. Image capturing module 3305 may take the captured image and compare it to a consolidated user request, e.g., which is provided by a consolidated user request reception module 3510. Consolidated user request reception module 3510 may receive the communication from server 4000 regarding which pixels of the image have been requested. Through use of the consolidated user request and the captured image, pixel selection module 3320 may select the pixels that have been specifically requested by the user, and mark those pixels for transmission back to the server.

[00200] After the pixels to be kept are identified, the other pixels that are not to be kept are removed, e.g., decimated at unused pixel decimation module 3330. In an embodiment, these pixels are simply discarded, e.g., not stored in a long-term memory but instead removed to a digital trash 3317. In another embodiment, some or all of these pixels are stored in a local memory, e.g., local memory 3315. From here, these pixels may be transmitted to various locations at off-peak times, may be kept for image processing by the array local processing module 3300, or may be subject to other manipulations or processing separate from the user requests. In an embodiment, unused pixel decimation module 3330 may include or communicate with a lower resolution module 3314, which may, in some embodiments, be used to transmit a lower-resolution version of more of the image (e.g., an entire scene, or more of the field of view surrounding the target) to the server 4000, so that the server 4000 may accurately determine which images are required to capture the target selected by the user.

[00201] Referring again to Fig. 1-AE, the selected pixels then may be transmitted to the server 4000 using selected pixel transmission module 3340. Selected pixel transmission module 3340 may include any transmission equipment necessary, e.g., cellular radio, wireless adapter, and the like, depending on the format of communication. In an embodiment, only those pixels which have been requested are transmitted to the server via lower-bandwidth communication 3510. Similarly to lower-bandwidth communication 3515, the lower-bandwidth communication 3510 does not refer to a specific amount of bandwidth, just that the amount of bandwidth is relatively lower than higher-bandwidth communication 3505.

[00202] Referring now again to Fig. 1-Y, server 4000 may include a requested image reception from ISA module 4250. Requested image reception from ISA module 4250 may receive the image data from the array local processing module 3500 (e.g., in the arrow coming "north" from Fig. 1-AI). That image, as depicted in Fig. 1-Y, may include the target (e.g., the football player), as well as some surrounding area (e.g., the area of the field around the football player). The "surrounding area" and the specifics of what is included/transmitted from the array local processing module may be specified by the user (directly or indirectly, e.g., through a set of preferences), or may be determined by the server, e.g., in the pixel information selection module 4230 (shown in Fig. 1-O).

[00203] Referring again to Fig. 1-Y, server 4000 may also include a requested image transmission to user device module 4260. Requested image transmission to user device module 4260 may transmit the requested image to the user device 5600. Requested image transmission to user device module 4260 may include components necessary to communicate with user device 5600 and may, in some embodiments, share components with one or more other modules of server 4000, e.g., a network interface, or a wireless antenna.

[00204] Referring again to Fig. 1-Y, server 4000 may include a server cached image updating module 4270. Server cached image updating module 4270 may take the images received from the array local processing module 3500 (e.g., which may include the image to be sent to the user), and compare the received images with stored or "cached" images on the server, in order to determine if the cached images should be updated. This process may happen frequently or infrequently, depending on embodiments, and may be continuously ongoing as long as there is a data connection, in some embodiments. In some embodiments, the frequency of the process may depend on the available bandwidth to the array local processing module 3500, e.g., at off-peak times, the frequency may be increased. In an embodiment, server cached image updating module 4270 compares an image received from the array local processing module 3500, and, if the image has changed, replaces the cached version of the image with the newer image.
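A hedged sketch of that comparison, using a mean-absolute-difference test whose threshold is our assumption rather than anything specified above:

```python
import numpy as np

CHANGE_THRESHOLD = 2.0   # assumed mean absolute pixel difference per channel

def maybe_update_cache(cache: dict, key, new_frame: np.ndarray) -> bool:
    """Replace the cached frame only when the new one differs enough."""
    old = cache.get(key)
    if old is None or old.shape != new_frame.shape:
        cache[key] = new_frame
        return True
    diff = np.abs(old.astype(np.int16) - new_frame.astype(np.int16)).mean()
    if diff > CHANGE_THRESHOLD:
        cache[key] = new_frame
        return True
    return False          # scene effectively unchanged; keep the cached copy

cache = {}
frame = np.zeros((10, 10, 3), dtype=np.uint8)
print(maybe_update_cache(cache, "view-1", frame))          # True (first store)
print(maybe_update_cache(cache, "view-1", frame.copy()))   # False (unchanged)
```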

[00205] Referring now again to Fig. 1-E, Fig. 1-E shows user device 5600. In an embodiment, user device 5600 includes image containing selected target receiving module 5630 that may be configured to receive the image from server 4000, e.g., requested image transmission to user device module 4260 of server 4000 (e.g., depicted in Fig. 1-Y, with the data transmission indicated by a rightward-upward arrow passing through Fig. 1-Y and Fig. 1-O (to the north) before arriving at Fig. 1-E).

[00206] Referring again to Fig. 1-E, Fig. 1-E shows received image presentation module 5640, which may display the requested pixels that include the selected target to the user, e.g., by showing them on a screen of the device. In an embodiment, the display of the image may be carried out through an exemplary interface that allows the user to monitor the target, and which also may display information about the target (e.g., in an embodiment, as shown in the figures, the game statistics for the football player also may be shown), which allows a cycle of user requests and new images to be shown as the user navigates through what is seen on the MUVIA, e.g., as shown in Fig. 1-A.

[00207] Referring again to Fig. 1, Figs. 1-F, 1-P, 1-Z, and 1-AJ depict a MUVIA implementation according to an embodiment. This embodiment may be colloquially known as "live street view," in which one or more MUVIA systems allow a user to move through an area similarly to the well-known Google-branded Maps (or Google-Street), except with the cameras working in real time. For example, in an embodiment, referring now to Fig. 1-F, a user device 5700 may include a target selection reception module 5710. Target selection reception module 5710 may be a component that allows the user to select a "target" from the image, that is, a point of interest from the image. For example, in the shown example, the MUVIA may be focused on a city, and the target may be an address, a building, a car, or a person in the city. As an example, a user may select a street address as a "target." This may be facilitated by a target presentation module, e.g., image selection presentation module 5712, which may present one or more images (e.g., which may be various versions of images from MUVIA, at different resolutions or not up-to-date) from which the user may select the target, e.g., the street address. In an embodiment, image selection presentation module 5712 may use static images that may or may not be sourced by the MUVIA system, and, in another embodiment, image selection presentation module 5712 may use current or cached views from the MUVIA system.

[00208] In an embodiment, image selection presentation module 5712 may include an audible target selection module 5714 which may be configured to allow the user to select a target using audible commands, without requiring physical interaction with a device.

[00209] Referring again to Fig. 1, e.g., Fig. 1-F, in an embodiment, user device 5700 may include selected target transmission module 5720. Selected target transmission module 5720 may be configured to take the target selected by the user, and transmit the selected target to the server 4000.

[00210] Referring now to Fig. 1-P, Fig. 1-P depicts a server 4000 of the MUVIA system according to embodiments. In an embodiment, server 4000 may include a selected target reception module 4310. Selected target reception module 4310 may receive the selected target from the user device 5700. In an embodiment, server 4000 may provide all or most of the data that facilitates the selection of the target, that is, the images and the interface, which may be provided, e.g., through a web portal.

[00211] Referring again to Fig. 1-P, in an embodiment, server 4000 may include a selected image pre-processing module 4320. Selected image pre-processing module 4320 may perform one or more tasks of pre-processing the image, some of which are described herein for exemplary purposes. For example, in an embodiment, selected image pre-processing module 4320 may include a resolution determination module 4322 which may be configured to determine the resolution for the image in order to show the target (and here, resolution is merely a stand-in for any facet of the image, e.g., color depth, size, shadow, pixelation, filter, etc.). In an embodiment, selected image pre-processing module 4320 may include a cached pixel fill-in module 4324. Cached pixel fill-in module 4324 may be configured to manage which portions of the requested image are recovered from a cache, and which are updated, in order to improve performance. For example, if a view of a street is requested, certain features of the street (e.g., buildings, trees, etc.) may not need to be retrieved each time, but can be filled in with a cached version, or, in another embodiment, can be filled in by an earlier version. A check can be done to see if a red parked car is still in the same spot as it was an hour ago; if so, that part of the image may not need to be updated. Using lower-resolution/prior images stored in a memory 4215, as well as other image processing techniques, cached pixel fill-in module 4324 determines which portions of the image do not need to be retrieved, thus reducing bandwidth load on the connection between the array local processing module 3600 and the server 4000.
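One way to realize such fill-in at the level of image tiles is sketched below; the tile size, the age-based staleness test, and all names are illustrative assumptions, not details from the disclosure.

```python
import time

TILE = 256           # assumed tile edge in pixels
MAX_AGE_S = 3600.0   # e.g., re-check a parked car about once an hour

def plan_fetch(region, tile_cache):
    """Return (tiles_to_fetch, tiles_from_cache) for a requested region.
    Only stale or missing tiles are fetched from the array; the rest are
    filled in from the cache."""
    x0, y0, x1, y1 = region
    fetch, cached = [], []
    now = time.time()
    for tx in range(x0 // TILE, (x1 + TILE - 1) // TILE):
        for ty in range(y0 // TILE, (y1 + TILE - 1) // TILE):
            entry = tile_cache.get((tx, ty))
            if entry is None or now - entry["captured_at"] > MAX_AGE_S:
                fetch.append((tx, ty))
            else:
                cached.append((tx, ty))
    return fetch, cached

fetch, cached = plan_fetch((0, 0, 512, 512), {})
print(len(fetch), len(cached))   # empty cache: everything must be fetched
```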

[00212] Referring again to Fig. 1-P, in an embodiment, selected image pre-processing module 4320 of server 4000 may include a static object obtaining module 4326, which may operate similarly to cached pixel fill-in module 4324. For example, as in the example shown in Fig. 1-B, static object obtaining module 4326 may obtain prior versions of static objects, e.g., buildings, trees, fixtures, landmarks, etc., which may save bandwidth load on the connection between the array local processing module 3600 and the server 4000.

[00213] Referring again to Fig. 1-P, in an embodiment, pixel information transmission to ISA module 4330 may transmit the request for pixels (e.g., an image, after the pre-processing) to the array local processing module 3600 (e.g., as shown in Figs. 1-Z and 1-AI, with the downward extending dataflow arrow).

[00214] Referring now to Figs. 1-Z and 1-AI, in an embodiment, an array local processing module 3600, which may be connected by a higher bandwidth connection to an image sensor array 3200, may be present.

[00215] Image sensor array 3200 may include one or more image sensors that may, in an embodiment, be statically pointed at a particular object or scene. Image sensor array 3200 may be a single image sensor, or more commonly, may be a group of individual image sensors 3201 that are combined to create a larger field of view. For example, in an embodiment, ten megapixel sensors may be used for each individual image sensor 3201. With twelve of these sensors, the effective field of view, loss-less zoom, and so forth may be increased substantially. These numbers are for example only, and any number of sensors and/or megapixel image sensor capacities may be used.

[00216] The use of many individual sensors may create a very large number of pixels captured for each exposure of the image sensor array 3200. Thus, these pixels are transmitted via a higher bandwidth communication 3605 to the array local storage and processing module 3600. In an embodiment, array local storage and processing module 3600 is integrated into the image sensor array. In another embodiment, the array local storage and processing module 3600 is separate from, but directly connected (e.g., via a USB 3.0 cable), to the image sensor array 3200. It is noted that "higher bandwidth communication 3605" does not require a specific amount of bandwidth, but only that the bandwidth for this communication is relatively higher than the bandwidth communication from the array local processing module 3600 to the remote server, which may be, but is not required to be, located further away temporally.

[00217] It is noted that, because of the large number of pixels captured by image sensor array 3200, mechanical changes to the image sensor array are not generally required, although such mechanical changes are not excluded from these embodiments. For example, because the array has a very large field of view, with very high resolution, "pan" and "zoom" functions may be handled optically, rather than by mechanically changing the focal point of the lenses or by physically pointing the array at a different location. This may reduce the complexity required of the device, and also may improve the speed at which different views may be generated by the image sensor array 3200.

[00218] Referring again to Fig. 1-AJ and Fig. 1-Z, the image sensor array 3200 may capture an image that is received by image capturing module 3605. Image capturing module 3605 may take the captured image and compare it to a consolidated user request, e.g., which is provided by a consolidated user request reception module 3610. Consolidated user request reception module 3610 may receive the communication from server 4000 regarding which pixels of the image have been requested. Through use of the consolidated user request and the captured image, pixel selection module 3620 may select the pixels that have been specifically requested by the user, and mark those pixels for transmission back to the server.
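By way of non-limiting illustration, a minimal sketch of this pixel selection step follows, assuming the consolidated request arrives as rectangles in scene coordinates and the captured scene is a NumPy array; the function name and rectangle format are hypothetical:

    import numpy as np

    def select_requested_pixels(captured_scene, consolidated_requests):
        # captured_scene: full frame assembled from the sensor array,
        # shape (height, width, channels).
        # consolidated_requests: (top, left, bottom, right) rectangles
        # merged by the server from all pending user requests.
        keep_mask = np.zeros(captured_scene.shape[:2], dtype=bool)
        for top, left, bottom, right in consolidated_requests:
            keep_mask[top:bottom, left:right] = True
        # Only the marked pixels are queued for transmission back to
        # the server; the remainder fall through to decimation.
        return captured_scene[keep_mask], keep_mask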

[00219] After the pixels to be kept are identified, the other pixels that are not to be kept are removed, e.g., decimated at unused pixel decimation module 3630. In an embodiment, these pixels are simply discarded, e.g., not stored in a long-term memory; that is, they are removed to a digital trash 3617. In another embodiment, some or all of these pixels are stored in a local memory, e.g., local memory 3615. From here, these pixels may be transmitted to various locations at off-peak times, may be kept for image processing by the array local processing module 3600, or may be subject to other manipulations or processing separate from the user requests. In an embodiment, unused pixel decimation module 3630 may include or communicate with a lower resolution module 3614, which may, in some embodiments, be used to transmit a lower-resolution version of more of the image (e.g., an entire scene, or more of the field of view surrounding the target) to the server 4000, so that the server 4000 may accurately determine which images are required to capture the target selected by the user.
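A hedged sketch of the decimation path and the lower-resolution overview follows; the discard-versus-store policy shown is one assumed arrangement, and the downsampling factor is arbitrary:

    import numpy as np

    def decimate_unused_pixels(scene, keep_mask, local_store=None):
        # Pixels outside the requested regions are either discarded
        # outright (the "digital trash" path) or parked in local memory
        # for off-peak transmission or later processing.
        unused = scene[~keep_mask]
        if local_store is not None:
            local_store.append(unused)  # keep for later use
        # else: drop the reference; the pixels are simply discarded

    def lower_resolution_overview(scene, factor=8):
        # A coarse version of the whole field of view, cheap to send,
        # letting the server confirm which regions cover the target.
        return scene[::factor, ::factor]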

[00220] Referring again to Fig. 1-AJ, the selected pixels then may be transmitted to the server 4000 using selected pixel transmission module 3640. Selected pixel transmission module 3640 may include any transmission equipment necessary, e.g., cellular radio, wireless adapter, and the like, depending on the format of communication. In an embodiment, only those pixels which have been requested are transmitted to the server via lower-bandwidth communication 3510. Similarly to lower-bandwidth communication 3515, the lower-bandwidth communication 3510 does not refer to a specific amount of bandwidth, just that the amount of bandwidth is relatively lower than higher-bandwidth communication 3505.

[00221] It is noted that more pixels than what are specifically requested by the user may be transmitted, in certain embodiments. For example, the array local processing module 3600 may send pixels that border the user's requested area, but are outside the user's requested area. In an embodiment, as will be discussed herein, those pixels may be sent at a different resolution or using a different kind of compression. In another embodiment, the additional pixels may simply be sent in the same manner as the requested pixels. In still another embodiment, server 4000 may expand the user-requested areas, so that array local processing module 3600 may send only the requested pixels, but the requested pixels cover more area than what the user originally requested. These additional pixels may be transmitted and "cached" by the server or local device, which may be used to decrease latency times, in a process that will be discussed more herein.
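One possible way to expand a requested rectangle by a border margin, as described above, is sketched below; the margin value and function name are illustrative assumptions:

    def expand_request(rect, margin, scene_height, scene_width):
        # Grow the user's requested rectangle so that pixels just outside
        # the current view are also transmitted and cached, cutting
        # latency when the user pans slightly.
        top, left, bottom, right = rect
        return (max(0, top - margin),
                max(0, left - margin),
                min(scene_height, bottom + margin),
                min(scene_width, right + margin))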

[00222] Referring now again to Fig. 1-P, in an embodiment, server 4000 may include image receiving from ISA module 4340. Image receiving from ISA module 4340 may receive the image data from the array local processing module 3600 (e.g., in the arrow coming "north" from Fig. 1-AJ via Fig. 1-Z). The image may include the pixels that were requested from the image sensor array 3200. In an embodiment, server 4000 also may include received image post-processing module 4350, which may, among other post-processing tasks, fill in objects and pixels into the image that were determined not to be needed by selected image pre-processing module 4320, as previously described. In an embodiment, server 4000 may include received image transmission to user device module 4360, which may be configured to transmit the requested image to the user device 5700. Received image transmission to user device module 4360 may include components necessary to communicate with user device 5700 and may, in some embodiments, share components with one or more other modules of server 4000, e.g., a network interface, or a wireless antenna.

[00223] Referring now again to Fig. 1-F, user device 5700 may include a server image reception module 5730, which may receive an image sent by the server 4000, and a user selection presenting module 5240, which may display the requested pixels to the user, e.g., by showing them on a screen of the device. In an embodiment, the display of the image may be carried out through the exemplary interface, which allows a cycle of user requests and new images to be shown as the user navigates through what is seen on the MUVIA, e.g., as shown in Fig. 1-F.

[00224] In an embodiment, as shown in Figs. 1-F and 1-G, server image reception module 5730 may include an audio stream reception module 5732 and a video stream reception module 5734. In an embodiment, as discussed throughout this application, the MUVIA system may capture still images, video, and also sound, as well as other electromagnetic waves and other signals and data. In an embodiment, the audio signals and the video signals may be handled together, or they may be handled separately, as separate streams. Although not every module in the instant diagram separately shows audio streams and video streams, it is noted here that all implementations of MUVIA contemplate both audio and video coverage, as well as still image and other data collection.

[00225] Referring now to Fig. 1-G, which shows another portion of user device 5700, Fig. 1-G may include a display 5755 and a memory 5765, which may be used to facilitate presentation and/or storage of the received images.

[00226] Figs. 1-H, 1-R, 1-AA, and 1-AB show an embodiment of a MUVIA implementation. For example, referring now to Fig. 1-H, Fig. 1-H shows an embodiment of a user device 5800. For exemplary purposes, the user device 5800 may be an augmented reality device that shows a user looking down a "street" at which the user is not actually present, e.g., a "virtual tourism" application in which the user may use their augmented reality device (e.g., goggles, e.g., an Oculus Rift-type headgear device), which may be a wearable computer. It is noted that this embodiment is not limited to wearable computers or augmented reality, but, as in all of the embodiments described in this disclosure, may be any device. The use of a wearable augmented/virtual reality device is merely for illustrative and exemplary purposes.

[00227] In an embodiment, user device 5800 may have a field of view 5810, as shown in Fig. 1-H. The field of view for the user 5810 may be illustrated in Fig. 1-H as follows. The most internal rectangle, shown by the dot hatching, represents the user's "field of view" as they look at their "virtual world." The second most internal rectangle, with the straight line hatching, represents the "nearest" objects to the user, that is, a range where the user is likely to "look" next, by turning their head or moving their eyes. In an embodiment, this area of the image may already be loaded on the device, e.g., through use of a particular codec, which will be discussed in more detail herein. The outermost rectangle, which is the image without hatching, represents further outside the user's viewpoint. This area, too, may already be loaded on the device. By loading areas where the user may eventually look, the system can reduce latency and make a user's motions, e.g., movement of head, eyes, and body, appear "natural" to the system.
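A minimal sketch of how a tile might be classified into these nested regions, to prioritize preloading, follows; the ring model, radii, and function name are assumptions for illustration (the figure itself uses rectangles, but the same idea applies):

    def region_priority(tile_center, gaze_center, inner_radius, outer_radius):
        # Classify a tile by how close it sits to where the user is looking:
        # ring 0 = current field of view (load first, full quality),
        # ring 1 = likely next glance (preload),
        # ring 2 = periphery (preload opportunistically).
        dx = tile_center[0] - gaze_center[0]
        dy = tile_center[1] - gaze_center[1]
        distance = (dx * dx + dy * dy) ** 0.5
        if distance <= inner_radius:
            return 0
        if distance <= outer_radius:
            return 1
        return 2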

[00228] Referring now to Figs. 1-AA and 1-AB, these figures show an array local processing module 3700 that is connected to an image sensor array 3200 (e.g., as shown in Fig. 1-AK, and "viewing" a city as shown in Fig. 1-AJ). The image sensor array 3200 may operate as previously described in this document. In an embodiment, array local processing module 3700 may include a captured image receiving module 3710, which may receive the entire scene captured by the image sensor array 3200, through the higher-bandwidth communication channel 3505. As described previously in this application, these pixels may be "cropped" or "decimated" into the relevant portion of the captured image, as described by one or more of the user device 5800, the server 4000, and the processing done at the array local processing module 3700. This process may occur as previously described. The relevant pixels may be handled by relevant portion of captured image receiving module 3720.

[00229] Referring now to Fig. 1-AB, in an embodiment, the relevant pixels for the image that are processed by relevant portion of captured image receiving module 3720 may be encoded using a particular codec at relevant portion encoding module 3730. In an embodiment, the codec may be configured to encode the innermost rectangle, e.g., the portion that represents the current user's field of view, e.g., portion 3716, at a higher resolution, or a different compression, or a combination of both. The codec may be further configured to encode the second rectangle, e.g., the portion with the vertical line hatching, e.g., portion 3714, at a different resolution and/or a different (e.g., a higher) compression. Similarly, the outermost portion of the image, e.g., the clear portion 3712, may again be coded at still another resolution and/or a different compression. In an embodiment, the codec itself handles the algorithm for encoding the image, and as such, in an embodiment, the codec may include information about user device 5800.

[00230] As shown in Fig. 1-AB, the encoded portion of the image, including portions 3716, 3714, and 3712, may be transmitted using encoded relevant portion transmitting module 3740. It is noted that "lower compression," "more compression," and "higher compression" are merely used as one example of the kind of processing done by the codec. For example, instead of lower compression, a different sampling algorithm or compacting algorithm may be used, or a lossier algorithm may be implemented for various parts of the encoded relevant portion.
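By way of illustration only, one way to encode the three portions at different compression levels is sketched below using the Pillow imaging library and per-region JPEG quality; the use of JPEG, the quality values, and the function name are assumptions, not the disclosed codec:

    import io
    from PIL import Image

    def encode_tiered(portions):
        # portions: mapping of reference numeral -> (uint8 RGB array, JPEG quality).
        # The innermost portion (current field of view) gets the least
        # compression; the outer rings tolerate progressively more.
        encoded = {}
        for name, (pixels, quality) in portions.items():
            buf = io.BytesIO()
            Image.fromarray(pixels).save(buf, format="JPEG", quality=quality)
            encoded[name] = buf.getvalue()
        return encoded

    # e.g., encode_tiered({"3716": (inner, 90), "3714": (mid, 60), "3712": (outer, 35)})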

[00231] Referring now to Fig. 1-R, Fig. 1-R depicts a server 4000 in a MUVIA system according to an embodiment. For example, as shown in Fig. 1-R, server 4000 may include, in addition to portions previously described, an encoded image receiving module 4410. Encoded image receiving module 4410 may receive the encoded image, encoded as previously described, from encoded relevant portion transmitting module 3740 of array local processing module 3700.

[00232] Referring again to Fig. 1-R, server 4000 may include an encoded image transmission controlling module 4420. Encoded image transmission controlling module 4420 may transmit portions of the image to the user device 5800. In an embodiment, at least partially depending on the bandwidth and the particulars of the user device, the server may send all of the encoded image to the user device, and let the user device decode the portions as needed, or may decode the image and send portions piecemeal, or with a different encoding, depending on the needs of the user device, and the complexity that can be handled by the user device.

[00233] Referring again to Fig. 1-H, user device 5800 may include an encoded image transmission receiving module 5720, which may be configured to receive the image that is coded in a particular way, e.g., as will be disclosed in more detail herein. Fig. 1-H also may include an encoded image processing module 5830 that may handle the processing of the image, that is, encoding and decoding portions of the image, or other processing necessary to provide the image to the user.

[00234] Referring now to Fig. 1-AL, Fig. 1-AL shows an implementation of an Application Programming Interface (API) for the various MUVIA components. Specifically, image sensor array API 7800 may include, among other elements, a programming specification 7810, which may include, for example, libraries, classes, specifications, templates, or other coding elements that generally make up an API, and an access authentication module 7820 that governs API access to the various image sensor arrays. The API allows third-party developers to access the workings of the image sensor array and the array local processing module 3700, so that the third-party developers can write applications for the array local processing module 3700, as well as determine which data captured by the image sensor array 3200 (which often may be multiple gigabytes or more of data per second) should be kept or stored or transmitted. In an embodiment, API access to certain functions may be limited. For example, a tiered system may allow a certain number of API calls to the MUVIA data per second, per minute, per hour, or per day. In an embodiment, a third party might pay fees or perform a registration that would allow more or less access to the MUVIA data. In an embodiment, the third party could host their application on a separate web site, and let that web site access the image sensor array 3200 and/or the array local processing module 3700 directly.
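A minimal sketch of such a tiered call quota follows, using a fixed one-minute window; the tier names, limits, and class design are hypothetical and serve only to illustrate the rate-limiting idea described above:

    import time

    TIER_LIMITS = {"free": 10, "registered": 100, "paid": 1000}  # calls per minute (illustrative)

    class ApiQuota:
        # Fixed-window rate limiter sketch for tiered MUVIA API access.
        def __init__(self):
            self.windows = {}  # api_key -> (window_start, call_count)

        def allow_call(self, api_key, tier):
            now = time.time()
            start, count = self.windows.get(api_key, (now, 0))
            if now - start >= 60:        # a new one-minute window begins
                start, count = now, 0
            if count >= TIER_LIMITS[tier]:
                return False             # over quota for this window
            self.windows[api_key] = (start, count + 1)
            return True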

[00235] Referring again to Fig. 1, in an embodiment, Figs. 1-I, 1-J, 1-S, 1-T, 1-AC, 1-AD, 1-AM, and 1-AN show a MUVIA implementation that allows insertion of advertising (or other context-sensitive material) into the images displayed to the user.

[00236] Referring again to Fig. 1-I, in an embodiment, user device 5900 may include a user selection accepting module 5910. User selection accepting module 5910 may be configured to receive user input about what the user wants to see. For example, as shown in Fig. 1-I, the user selection accepting module 5910 may show an image from image sensor array 3200, and the user may "pan" and "zoom" the image using any known interface, including, but not limited to, keyboard, mouse, touch, haptic, augmented reality interface, voice command, and nonverbal motion commands (e.g., as part of a video game system interface, e.g., the Microsoft Kinect). It is noted, as will be discussed in more detail herein, that the camera itself is not "zooming" or "panning," because the camera does not move. What is happening is that different pixels that are captured by the image sensor array 3200 are kept by the image sensor array 3200 and transmitted to the server 4000.

[00237] In an embodiment, the "user selection" facilitated by the user selection accepting module 5910 may not involve a user at all. For example, in an embodiment, e.g., in a security embodiment, the user selection may be handled completely by machine, and may include "select any portion of the image with movement," or "select any portion of the image in which a person is recognized," or "select any portion of the image in which a particular person, e.g., a person on the FBI most wanted list, is recognized."
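As a non-limiting illustration of the machine-driven "select any portion of the image with movement" case, the sketch below selects tiles by simple frame differencing; the tile size, threshold, and function name are assumptions:

    import numpy as np

    def regions_with_motion(prev_frame, curr_frame, tile=64, threshold=12.0):
        # Machine-driven "user selection": pick every tile of the scene
        # whose pixels changed appreciably since the previous exposure.
        h, w = curr_frame.shape[:2]
        diff = np.abs(curr_frame.astype(np.float32) - prev_frame.astype(np.float32))
        if diff.ndim == 3:
            diff = diff.mean(axis=2)  # collapse color channels
        selected = []
        for top in range(0, h, tile):
            for left in range(0, w, tile):
                block = diff[top:top + tile, left:left + tile]
                if block.mean() > threshold:
                    selected.append((top, left, min(top + tile, h), min(left + tile, w)))
        return selected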

[00238] Referring again to Fig. 1-I, in an embodiment, user device 5900 may include a user selection transmitting module 5920. The user selection transmitting module 5920 may take the user selection from user selection accepting module 5910, and transmit the selection to the server 4000. The transmission may include some pre-processing, for example, the user device 5900 may determine the size and parameters of the image prior to sending the request to the server 4000, or that processing may be handled by the server 4000. Following the thick-line arrow leftward from user selection transmitting module 5920 through to Fig. 1-K, the transmission goes to server 4000, as will be discussed herein. It is noted that the transmission to the server 4000 may also include data about the user device, for example, the screen resolution, the window size, the type of device, an identity of the user, a level of service the user has paid for (in embodiments in which such services are prioritized by the camera/server), other capabilities of the device, e.g., framerate, and the like.
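A hedged sketch of what such a request, carrying the selection together with the device data described above, might look like as a JSON payload follows; every field name and value here is an illustrative assumption:

    import json

    request = {
        "selection": {"top": 120, "left": 340, "width": 1280, "height": 720},
        "device": {
            "screen_resolution": [1920, 1080],
            "window_size": [1280, 720],
            "type": "smartphone",
            "max_framerate": 60,
        },
        "user": {"id": "user-123", "service_level": "premium"},
    }
    payload = json.dumps(request)  # sent from user device 5900 to server 4000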

[00239] Referring again to Fig. 1-I, Fig. 1-I also includes a selected image receiving module 5930 and a user selection presenting module 5940, which will be discussed in more detail herein, with respect to the dataflow of this embodiment.

[00240] Referring now to Fig. 1-T (graphically represented as "down" and "to the right" of Fig. 1-I), in an embodiment, a server 4000 may include a selected image reception module 4510. In an embodiment, selected image reception module 4510 of server 4000 may receive the selected target from the user device 5900. The selected target data may take various formats, e.g., it may be image data, it may be metadata that identifies the selected target, or it may be some other designation, e.g., an ID number, a tracking number, or a piece of information, like a license plate or a social security number. The selected target data may be an address or a physical description, or any other instantiation of data that can be used to identify something.

[00241] Referring again to Fig. 1-T, in an embodiment, server 4000 may include selected image pre-processing module 4520. Selected image pre-processing module 4520 may perform one or more tasks of pre-processing the image, some of which have been previously described with respect to other embodiments. In an embodiment, server 4000 also may include pixel information transmission to ISA module 4330 configured to transmit the image request data to the image sensor array 3200, as has been previously described.

[00242] Referring now to Figs. 1-AD and 1-AN, array local processing module 3700 may be connected to an image sensor array 3200 through a higher-bandwidth communication link 3505, e.g., a USB or PCI port. In an embodiment, array local processing module 3700 may include a request reception module 3710. Request reception module 3710 may receive the request for an image from the server 4000, as previously described. Request reception module 3710 may transmit the data to a pixel selection module 3720, which may receive the pixels captured from image sensor array 3200, and select the ones that are to be kept. That is, in an embodiment, through use of the (sometimes consolidated) user requests and the captured image, pixel selection module 3720 may select the pixels that have been specifically requested by the user, and mark those pixels for transmission back to the server.

[00243] After the pixels to be kept are identified, the other pixels that are not to be kept are removed, e.g., decimated at unused pixel decimation module 3730. In an embodiment, these pixels are simply discarded, e.g., not stored in a long-term memory; that is, they are removed to a digital trash 3717. In another embodiment, some or all of these pixels are stored in a local memory, e.g., local memory 3715. From here, these pixels may be transmitted to various locations at off-peak times, may be kept for image processing by the array local processing module 3700, or may be subject to other manipulations or processing separate from the user requests, as described in previous embodiments. In an embodiment, unused pixel decimation module 3730 may be used to transmit a lower-resolution version of more of the image (e.g., an entire scene, or more of the field of view surrounding the target) to the server 4000, so that the server 4000 may accurately determine which images are required to fulfill the request of the user.

[00244] Referring again to Fig. 1-AN, the selected pixels then may be transmitted to the server 4000 using selected pixel transmission module 3740. Selected pixel transmission module 3740 may include any transmission equipment necessary, e.g., cellular radio, wireless adapter, and the like, depending on the format of communication. In an embodiment, only those pixels which have been requested are transmitted to the server via lower-bandwidth communication 3710. Similarly to lower-bandwidth communication 3715, the lower-bandwidth communication 3710 does not refer to a specific amount of bandwidth, just that the amount of bandwidth is relatively lower than higher-bandwidth communication 3505.

[00245] It is noted that more pixels than what are specifically requested by the user may be transmitted, in certain embodiments. For example, the array local processing module 3700 may send pixels that border the user's requested area, but are outside the user's requested area. In an embodiment, as will be discussed herein, those pixels may be sent at a different resolution or using a different kind of compression. In another embodiment, the additional pixels may simply be sent in the same manner as the requested pixels. In still another embodiment, server 4000 may expand the user-requested areas, so that array local processing module 3700 may send only the requested pixels, but the requested pixels cover more area than what the user originally requested. These additional pixels may be transmitted and "cached" by the server or local device, which may be used to decrease latency times, in a process that will be discussed more herein.

[00246] Referring now again to Fig. 1-T, in an embodiment, server 4000 may include received image post-processing module 4550. Received image post-processing module 4550 may receive the image data from the array local processing module 3700 (e.g., in the arrow coming "north" from Fig. 1-AN via Fig. 1-AD). The image may include the pixels that were requested from the image sensor array 3200.

[00247] In an embodiment, server 4000 also may include advertisement insertion module 4560. Advertisement insertion module 4560 may insert an advertisement into the received image. The advertisement may be based on one or more of the contents of the image, a characteristic of a user or the user device, or a setting of the advertisement server component 7700 (see, e.g., Fig. 1-AC, as will be discussed in more detail herein). The advertisement insertion module 4560 may place the advertisement into the image using any known image combination techniques, or, in another embodiment, the advertisement image may be in a separate layer, overlay, or any other data structure. In an embodiment, advertisement insertion module 4560 may include context-based advertisement insertion module 4562, which may be configured to add advertisements that are based on the context of the image. For example, if the image is a live street view of a department store, the context of the image may show advertisements related to products sold by that department store, e.g., clothing, cosmetics, or power tools.
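As one non-limiting illustration of placing an advertisement into an image as a separate layer, the sketch below uses the Pillow imaging library's alpha compositing; the function name, file-path interface, and blending choice are assumptions, not the disclosed insertion technique:

    from PIL import Image

    def insert_advertisement(scene_path, ad_path, position, out_path):
        # Composite the selected advertisement onto the received image
        # as a separate layer; the alpha channel of the advertisement
        # controls how it blends with the underlying pixels.
        scene = Image.open(scene_path).convert("RGBA")
        ad = Image.open(ad_path).convert("RGBA")
        scene.alpha_composite(ad, dest=position)  # position: (x, y) tuple
        scene.convert("RGB").save(out_path)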

[00248] Referring again to Fig. 1-T, server 4000 may include a received image with advertisement transmission to user device module 4570 configured to transmit the image to the user device 5900. Received image with advertisement transmission to user device module 4570 may include components necessary to communicate with user device 5900 and may, in some embodiments, share components with one or more other modules of server 4000, e.g., a network interface, or a wireless antenna.

[00249] Referring again to Fig. 1-I, user device 5900 may include a selected image receiving module 5930, which may receive the pixels that were sent by the server 4000, and user selection presenting module 5940, which may display the requested pixels to the user, including the advertisement, e.g., by showing them on a screen of the device. In an embodiment, the display of the image may be carried out through the exemplary interface, which allows a cycle of user requests and new images to be shown as the user navigates through what is seen on the MUVIA, e.g., as shown in Fig. 1-I.

[00250] Referring now to Fig. 1-AC, Fig. 1-AC shows an advertisement server component 7700 configured to deliver advertisements to the server 4000 for insertion into the images prior to delivery to the user. In an embodiment, advertisement server component 7700 may be integrated with server 4000. In another embodiment, advertisement server component 7700 may be separate from server 4000 and may communicate with server 4000. In yet another embodiment, rather than interacting with server 4000, advertisement server component 7700 may interact directly with the user device 5900, and insert the advertisement into the image after the image has been received, or, in another embodiment, cause the user device to display the advertisement concurrently with the image (e.g., overlapping or adjacent to it). In such embodiments, some of the described modules of server 4000 may be incorporated into user device 5900, but the functionality of those modules would operate similarly to what was previously described.

[00251] Referring again to Fig. 1-AC, in an embodiment, advertisement server component 7700 may include a user data collection module 7705. User data collection module 7705 may collect data from user device 5900, and use that data to drive placement of advertisements (e.g., based on a user's browser history, e.g., visits to sports sites, and the like).

[00252] Referring again to Fig. 1-AC, in an embodiment, advertisement server component 7700 may include advertisement database 7715, which includes advertisements that are ready to be inserted into images. In an embodiment, these advertisements may be created on the fly.

[00253] Referring again to Fig. 1-AC, in an embodiment, advertisement server component 7700 may include an advertisement request reception module 7710, which receives a request to add an advertisement into the image (the receipt of the request is not shown, to ease understanding of the drawings). In an embodiment, advertisement server component 7700 may include advertisement selection module 7720, which may include an image analysis module 7722 configured to analyze the image to determine the best context-based advertisement to place into the image. In an embodiment, that decision may be made by the server 4000, or partly at the server 4000 and partly at the advertisement server component 7700 (e.g., the advertisement server component may have a set of advertisements from which a particular one may be chosen). In an embodiment, various third parties may compensate the operators of server component 7700, server 4000, or any other component of the system, in order to receive preferential treatment.

[00254] Referring again to Fig. 1-AC, in an embodiment, advertisement server component 7700 may include a selected advertisement transmission module 7730, which may transmit the selected advertisement (or a set of selected advertisements) to the server 4000. In an embodiment, selected advertisement transmission module 7730 may send the complete image with the advertisement overlaid, e.g., in an implementation in which the advertisement server component 7700 also handles the placement of the advertisement. In an embodiment in which advertisement server component 7700 is integrated with server 4000, this module may be an internal transmission module, as may all such transmission/reception modules.

Exemplary Environment 200

[00255] Referring now to Fig. 2A, Fig. 2A illustrates an example environment 200 in which methods, systems, circuitry, articles of manufacture, and computer program products and architecture, in accordance with various embodiments, may be implemented by at least one image device 220. Image device 220 may include a number of individual sensors that capture data. Although commonly referred to throughout this application as "image data," this is merely shorthand for data that can be collected by the sensors. Other data, including video data, audio data, electromagnetic spectrum data (e.g., infrared, ultraviolet, radio, microwave data), thermal data, and the like, may be collected by the sensors.

[00256] Referring again to Fig. 2A, in an embodiment, image device 220 may operate in an environment 200. Specifically, in an embodiment, image device 220 may capture a scene 215. The scene 215 may be captured by a number of sensors 243. Sensors 243 may be grouped in an array, which in this context means they may be grouped in any pattern, on any plane, but have a fixed position relative to one another. Sensors 243 may capture the image in parts, which may be stitched back together by processor 222. There may be overlap in the images captured by sensors 243 of scene 215, which may be removed.
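A minimal sketch of stitching a fixed row of sensor tiles back together, with the shared overlap removed, follows; it assumes equally sized tiles and a known, constant overlap in columns, which is a simplification of the general stitching described above:

    import numpy as np

    def stitch_row(tiles, overlap):
        # tiles: list of equally sized (height, width, channels) arrays
        # from sensors mounted in a fixed row; adjacent tiles share
        # `overlap` columns, which are dropped from each right-hand tile
        # before concatenation so the overlap appears only once.
        trimmed = [tiles[0]] + [t[:, overlap:] for t in tiles[1:]]
        return np.concatenate(trimmed, axis=1)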

[00257] Upon capture of the scene in image device 220, in processes and systems that will be described in more detail herein, the requested pixels are selected. Specifically, pixels that have been identified by a remote user, by a server, by the local device, by another device, by a program written by an outside user with an API, by a component or other hardware or software in communication with the image device, and the like, are transmitted to a remote location via a communications network 240. The pixels that are to be transmitted may be illustrated in Fig. 2A as selected portion 255; however, this is a simplified expression meant for illustrative purposes only.

[00258] Referring again to Fig. 2A, in an embodiment, server device 230 may be any device or group of devices that is connected to a communication network. Although in some examples, server device 230 is distant from image device 220, that is not required. Server device 230 may be "remote" from image device 220, which may mean that they are separate components, but does not necessarily imply a specific distance. The communications network may be a local transmission component, e.g., a PCI bus. Server device 230 may include a request handling module 232 that handles requests for images from user devices, e.g., user devices 250A and 250B. Request handling module 232 also may handle other remote computers and/or users that want to take active control of the image device, e.g., through an API, or through more direct control.

[00259] Server device 230 also may include an image device management module 234, which may perform some of the processing to determine which of the captured pixels of image device 220 are kept. For example, image device management module 234 may do some pattern recognition, e.g., to recognize objects of interest in the scene, e.g., a particular football player, as shown in the example of Fig. 2A. In other embodiments, this processing may be handled at the image device 220 or at the user device 250. In an embodiment, server device 230 limits a size of the selected portion by a screen resolution of the requesting user device.

[00260] Server device 230 then may transmit the requested portions to the user devices, e.g., user device 250A and user device 250B. In another embodiment, the user device or devices may directly communicate with image device 220, cutting out server device 230 from the system.

[00261] In an embodiment, user devices 250A and 250B are shown; however, user devices may be any electronic device or combination of devices, which may be located together or spread across multiple devices and/or locations. Image device 220 may be a server device, or may be a user-level device, e.g., including, but not limited to, a cellular phone, a network phone, a smartphone, a tablet, a music player, a walkie-talkie, a radio, an augmented reality device (e.g., augmented reality glasses and/or headphones), wearable electronics, e.g., watches, belts, earphones, or "smart" clothing, earphones, headphones, audio/visual equipment, media player, television, projection screen, flat screen, monitor, clock, appliance (e.g., microwave, convection oven, stove, refrigerator, freezer), a navigation system (e.g., a Global Positioning System ("GPS") system), a medical alert device, a remote control, a peripheral, an electronic safe, an electronic lock, an electronic security system, a video camera, a personal video recorder, a personal audio recorder, and the like. Device 220 may include a device interface 243 which may allow the device 220 to output data to the client in sensory (e.g., visual or any other sense) form, and/or allow the device 220 to receive data from the client, e.g., through touch, typing, or moving a pointing device (e.g., a mouse). User device 250 may include a viewfinder or a viewport that allows a user to "look" through the lens of image device 220, regardless of whether the user device 250 is spatially close to the image device 220.

[00262] Referring again to Fig. 2A, in various embodiments, the communication network 240 may include one or more of a local area network (LAN), a wide area network (WAN), a metropolitan area network (MAN), a wireless local area network (WLAN), a personal area network (PAN), a Worldwide Interoperability for Microwave Access (WiMAX) network, a public switched telephone network (PSTN), a general packet radio service (GPRS) network, a cellular network, and so forth. The communication networks 240 may be wired, wireless, or a combination of wired and wireless networks. It is noted that "communication network" as it is used in this application refers to one or more communication networks, which may or may not interact with each other.

[00263] Referring now to Fig. 2B, Fig. 2B shows a more detailed version of image device 220, according to an embodiment. The image device 220 may include a device memory 245. In an embodiment, device memory 245 may include memory, random access memory ("RAM"), read only memory ("ROM"), flash memory, hard drives, disk-based media, disc-based media, magnetic storage, optical storage, volatile memory, nonvolatile memory, and any combination thereof. In an embodiment, device memory 245 may be separated from the device, e.g., available on a different device on a network, or over the air. For example, in a networked system, there may be more than one image device 220 whose device memories 245 may be located at a central server that may be a few feet away or located across an ocean. In an embodiment, device memory 245 may include one or more mass storage devices, read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), cache memory such as random access memory (RAM), flash memory, static random access memory (SRAM), dynamic random access memory (DRAM), and/or other types of memory devices. In an embodiment, memory 245 may be located at a single network site. In an embodiment, memory 245 may be located at multiple network sites, including sites that are distant from each other.

[00264] Referring again to Fig. 2B, in an embodiment, image device 220 may include one or more image sensors 243 and may communicate with a communication network 240. Although sensors 243 are referred to as "image" sensors, this is merely shorthand for sensors that collect data, including image data, video data, sound data, electromagnetic spectrum data, and other data. The image sensors 243 may be in an array, which merely means that the image sensors may have a specific location relative to each other.

[00265] Referring again to Fig. 2B, Fig. 2B shows a more detailed description of image device 220. In an embodiment, device 220 may include a processor 222. Processor 222 may include one or more microprocessors, Central Processing Units ("CPUs"), Graphics Processing Units ("GPUs"), Physics Processing Units, Digital Signal Processors, Network Processors, Floating Point Processors, and the like. In an embodiment, processor 222 may be a server. In an embodiment, processor 222 may be a distributed-core processor. Although processor 222 is illustrated as a single processor that is part of a single device 220, processor 222 may be multiple processors distributed over one or many devices 220, which may or may not be configured to operate together.

[00266] Processor 222 is illustrated as being configured to execute computer readable instructions in order to execute one or more operations described above, and as illustrated in Fig. 10, Figs. 11A-11F, Figs. 12A-12C, Figs. 13A-13D, and Figs. 14A-14D. In an embodiment, processor 222 is designed to be configured to operate as processing module 250, which may include one or more of a multiple image sensor based scene capturing module 252 configured to capture a scene that includes one or more images, through use of more than one image sensor, a scene particular portion selecting module 254 configured to select a particular portion of the scene that includes at least one image, wherein the selected particular portion is smaller than the scene, a selected particular portion transmitting module 256 configured to transmit the selected particular portion from the scene to a remote location, and a scene pixel de-emphasizing module 258 configured to de-emphasize pixels from the scene that are not part of the selected particular portion of the scene.
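For orientation only, the relationship among the four modules of processing module 250 can be summarized in the skeleton below; the class name, method names, and the sensor.read() interface are hypothetical, and the stub bodies stand in for the behaviors detailed in the preceding paragraphs:

    class ProcessingModule250:
        # Structural sketch; each method corresponds to one of the
        # modules 252, 254, 256, and 258 described above.
        def capture_scene(self, sensors):
            # module 252: capture a scene through more than one image sensor
            return [sensor.read() for sensor in sensors]  # sensor.read() is hypothetical

        def select_particular_portion(self, scene, request):
            # module 254: select a portion of the scene smaller than the scene
            ...

        def transmit_selected_portion(self, portion, remote_location):
            # module 256: transmit only the selected portion to the remote location
            ...

        def de_emphasize_pixels(self, scene, portion):
            # module 258: discard, decimate, or locally store the remainder
            ...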

Exemplary Environment 300

[00267] Referring now to Fig. 3, Fig. 3 shows an exemplary embodiment of an image device, e.g., image device 220A operating in an environment 300. In an embodiment, image device 220A may include an array 310 of image sensors 312 as shown in Fig. 3. The array of image sensors in this image is shown in a rectangular grid; however, this is merely exemplary to show that image sensors 312 may be arranged in any format. In an embodiment, each image sensor 312 may capture a portion of scene 315, which portions are then processed by processor 350. Although processor 350 is shown as local to image device 220A, it may be remote to image device 220A, with a sufficiently high-bandwidth connection to receive all of the data from the array of image sensors (e.g., multiple USB 3.0 lines). In an embodiment, the selected portions from the scene (e.g., the portions shown in the shaded box, e.g., selected portion 315) may be transmitted to a remote device 330, which may be a user device or a server device, as previously described. In an embodiment, the pixels that are not transmitted to remote device 330 may be stored in a local memory 340 or discarded.

Exemplary Environment 400

[00268] Referring now to Fig. 4, Fig. 4 shows an exemplary embodiment of an image device, e.g., image device 420 operating in an environment 400. In an embodiment, image device 420 may include an image sensor array 420, e.g., an array of image sensors, which, in this example, are arranged around a polygon to increase the field of view that can be captured, that is, they can capture scene 415, illustrated in Fig. 4 as a natural landmark that can be viewed in a virtual tourism setting. Processor 422 receives the scene 415 and selects the pixels from the scene 415 that have been requested by a user, e.g., requested portions 417. Requested portions 417 may include an overlapping area 424 that is only transmitted once. In an embodiment, the requested portions 417 may be transmitted to a remote location via communications network 240.

Exemplary Environment 500

[00269] Referring now to Fig. 5, Fig. 5 shows an exemplary embodiment of an image device, e.g., image device 520 operating in an environment 500. In an embodiment, image device 520 may capture a scene, a part of which, e.g., scene portion 515, is shown, as previously described in other embodiments (e.g., some parts of image device 520 are omitted for simplicity of drawing). In an embodiment, e.g., scene portion 515 may show a street-level view of a busy road, e.g., for a virtual tourism or virtual reality simulator. In an embodiment, different portions of the scene portion 515 may be transmitted at different resolutions or at different times. For example, in an embodiment, a central part of the scene portion 515, e.g., portion 516, which may correspond to what a user's eyes would see, is transmitted at a first resolution, or "full" resolution relative to what the user's device can handle. In an embodiment, an outer border outside portion 516, e.g., portion 514, may be transmitted at a second resolution, which may be lower, e.g., lower than the first resolution. In another embodiment, a further outside portion, e.g., portion 512, may be discarded, transmitted at a still lower rate, or transmitted asynchronously.
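A minimal sketch of this foveated transmission scheme follows; the rectangle format, the 2x downsampling factor, and the function name are assumptions, and for simplicity the border crop still contains the central pixels, which a real implementation would mask out:

    import numpy as np

    def transmit_portions(scene_portion, rect_516, rect_514):
        # rect_516: central region corresponding to the user's eyes,
        # sent at full resolution; rect_514: enclosing border region,
        # sent downsampled; everything outside (portion 512) is dropped
        # or deferred to asynchronous transmission.
        t, l, b, r = rect_516
        T, L, B, R = rect_514  # rect_514 encloses rect_516
        full_resolution = scene_portion[t:b, l:r]
        lower_resolution = scene_portion[T:B, L:R][::2, ::2]
        return full_resolution, lower_resolution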

Exemplary Embodiments of the Various Modules of Portions of Processor 250

[00270] Figs. 6-9 illustrate exemplary embodiments of the various modules that form portions of processor 250. In an embodiment, the modules represent hardware, either that is hard-coded, e.g., as in an application-specific integrated circuit ("ASIC"), or that is physically reconfigured through gate activation described by computer instructions, e.g., as in a central processing unit.

[00271] Referring now to Fig. 6, Fig. 6 illustrates an exemplary implementation of the multiple image sensor based scene capturing module 252. As illustrated in Fig. 6, the multiple image sensor based scene capturing module may include one or more sub-logic modules in various alternative implementations and embodiments. For example, as shown in Fig. 6, e.g., Fig. 6A, in an embodiment, module 252 may include one or more of multiple image sensor based scene that includes the one or more images capturing through use of an array of image sensors module 602, multiple image sensor based scene that includes the one or more images capturing through use of two image sensors arranged side by side and angled toward each other module 604, multiple image sensor based scene that includes the one or more images capturing through use of image sensors aligned in a grid pattern module 606, and multiple image sensor based scene that includes the one or more images capturing through use of image sensors aligned in a line pattern module 608. In an embodiment, module 608 may include one or more of multiple image sensor based scene that includes the one or more images capturing through use of image sensors aligned in a line such that a field of view is greater than 120 degrees module 610 and multiple image sensor based scene that includes the one or more images capturing through use of image sensors aligned in a line such that a field of view is 180 degrees module 612.

[00272] Referring again to Fig. 6, e.g., Fig. 6B, as described above, in an embodiment, module 252 may include one or more of multiple image sensor based scene that includes the one or more images capturing through use of more than one stationary image sensor module 614, multiple image sensor based scene that includes the one or more images capturing through use of more than one static image sensor module 616, multiple image sensor based scene that includes the one or more images capturing through use of more than one image sensor mounted in a fixed location module 620, and multiple image sensor based scene that includes the one or more images capturing through use of more than one image sensor mounted on a movable platform module 622. In an embodiment, module 616 may include multiple image sensor based scene that includes the one or more images capturing through use of more than one static image sensor that has a fixed focal length and a fixed field of view module 618. In an embodiment, module 622 may include multiple image sensor based scene that includes the one or more images capturing through use of more than one image sensor mounted on an unmanned aerial vehicle module 624.

[00273] Referring again to Fig. 6, e.g., Fig. 6C, in an embodiment, module 252 may include one or more of multiple image sensor based scene capturing module configured to capture a scene that includes one or more images, through use of more than one image sensor 626, multiple image sensor based scene capturing module configured to capture a scene that includes one or more images that represent portions of the scene, through use of more than one image sensor 628, particular image from each image sensor acquiring module 632, and acquired particular image from each image sensor combining into the scene module 634. In an embodiment, module 628 may include multiple image sensor based scene capturing module configured to capture a scene that includes one or more images that represent portions of the scene that are configured to be stitched together, through use of more than one image sensor module 630. In an embodiment, module 632 may include particular image with at least partial other image overlap from each image sensor acquiring module 636. In an embodiment, module 636 may include particular image with at least partial adjacent image overlap from each image sensor acquiring module 638.

[00274] Referring again to Fig. 6, e.g., Fig. 6D, in an embodiment, module 252 may include one or more of multiple image sensor based scene that is larger than a bandwidth for remote transmission of the scene capturing through use of more than one image sensor module 640, multiple image sensor based scene of a tourist destination capturing through use of more than one image sensor module 646, multiple image sensor based scene of a highway bridge capturing through use of more than one image sensor module 648, and multiple image sensor based scene of a home interior capturing through use of more than one image sensor module 650. In an embodiment, module 640 may include multiple image sensor based scene that contains more image data than a bandwidth for remote transmission of the scene capturing through use of more than one image sensor module 642. In an embodiment, module 642 may include multiple image sensor based scene that contains more image data than a bandwidth for remote transmission of the scene by a factor of ten capturing through use of more than one image sensor module 644.

[00275] Referring again to Fig. 6, e.g., Fig. 6E, in an embodiment, module 252 may include one or more of multiple image sensor based scene that includes the one or more images capturing through use of a grouping of image sensors module 652, multiple image sensor based scene that includes the one or more images capturing through use of a grouping of image sensors that direct image data to a common collector module 654, and multiple image sensor based scene that includes the one or more images capturing through use of a grouping of image sensors that include charge-coupled devices and complementary metal-oxide-semiconductor devices module 656.

[00276] Referring again to Fig. 6, e.g., Fig. 6F, in an embodiment, module 252 may include one or more of multiple image and sound data based scene that includes the one or more images capturing through use of image and sound sensors module 658, multiple image sensor based scene that includes sound wave image data capturing through use of an array of sound wave image sensors module 662, and multiple video capture sensor based scene that includes the one or more images capturing through use of one or more video capture sensors module 664. In an embodiment, module 658 may include multiple image and sound data based scene that includes the one or more images capturing through use of image and sound microphones module 660.

[00277] Referring now to Fig. 7, Fig. 7 illustrates an exemplary implementation of scene particular portion selecting module 254. As illustrated in Fig. 7, the scene particular portion selecting module 254 may include one or more sub-logic modules in various alternative implementations and embodiments. For example, as shown in Fig. 7, e.g., Fig. 7A, in an embodiment, module 254 may include one or more of scene particular portion that is smaller than the scene and that includes at least one image selecting module 702, scene particular portion that is smaller than the scene and that includes at least one requested image selecting module 706, particular image request receiving module 710, and received request for particular image selecting from scene module 712. In an embodiment, module 702 may include scene particular portion that is smaller than the scene and that includes at least one remote user-requested image selecting module 704. In an embodiment, module 706 may include scene particular portion that is smaller than the scene and that includes at least one remote-operator requested image selecting module 708.

[00278] Referring again to Fig. 7, e.g., Fig. 7B, in an embodiment, module 254 may include one or more of first request for a first particular image and second request for a second particular image receiving module 714 and scene particular portion that is first particular image and second particular image selecting module 716. In an embodiment, module 714 may include one or more of first request for a first particular image and second request for a second particular image that does not overlap the first particular image receiving module 718 and first request for a first particular image and second request for a second particular image that overlaps the first particular image receiving module 720. In an embodiment, module 720 may include first request for a first particular image and second request for a second particular image that overlaps the first particular image and an overlapping portion is configured to be transmitted once only receiving module 722.

[00279] Referring again to Fig. 7, e.g., Fig. 7C, in an embodiment, module 254 may include scene particular portion that is smaller than the scene and that contains a particular image object selecting module 724. In an embodiment, module 724 may include one or more of scene particular portion that is smaller than the scene and that contains a particular image object that is a person selecting module 726 and scene particular portion that is smaller than the scene and that contains a particular image object that is a vehicle selecting module 728.

[00280] Referring now to Fig. 8, Fig. 8 illustrates an exemplary implementation of selected particular portion transmitting module 256. As illustrated in Fig. 8A, the selected particular portion transmitting module 256 may include one or more sub-logic modules in various alternative implementations and embodiments. For example, as shown in Fig. 8, e.g., Fig. 8A, in an embodiment, module 256 may include selected particular portion transmitting to a remote server module 802. In an embodiment, module 802 may include one or more of selected particular portion transmitting to a remote server configured to receive particular image requests module 804 and selected particular portion transmitting to a remote server that requested a particular image module 808. In an embodiment, module 804 may include selected particular portion transmitting to a remote server configured to receive particular discrete image requests module 806.

[00281] Referring again to Fig. 8, e.g., Fig. 8B, in an embodiment, module 256 may include selected particular portion transmitting at a particular resolution module 810. In an embodiment, module 810 may include one or more of available bandwidth to remote location determining module 812, selected particular portion transmitting at a particular resolution based on determined available bandwidth module 814, selected particular portion transmitting at a particular resolution less than a scene resolution module 816, and selected particular portion transmitting at a particular resolution less than a captured particular portion resolution module 818.

[00282] Referring again to Fig. 8, e.g., Fig. 8C, in an embodiment, module 256 may include one or more of first segment of selected particular portion transmitting at a first resolution module 820 and second segment of selected particular portion transmitting at a second resolution module 822. In an embodiment, module 820 may include one or more of first segment of selected particular portion that surrounds the second segment transmitting at a first resolution module 823, first segment of selected particular portion that borders the second segment transmitting at a first resolution module 824, and first segment of selected particular portion that is determined by selected particular portion content transmitting at a first resolution module 826. In an embodiment, module 826 may include first segment of selected particular portion that is determined as not containing an item of interest transmitting at a first resolution module 828. In an embodiment, module 828 may include one or more of first segment of selected particular portion that is determined as not containing a person of interest transmitting at a first resolution module 830 and first segment of selected particular portion that is determined as not containing an object designated for tracking transmitting at a first resolution module 832. In an embodiment, module 822 may include one or more of second segment that is a user-selected area of selected particular portion transmitting at a second resolution that is higher than the first resolution module 834 and second segment that is surrounded by the first segment transmitting at a second resolution that is higher than the first resolution module 836.

[00283] Referring again to Fig. 8, e.g., Fig. 8D, in an embodiment, module 256 may include modules 820 and 822, as previously described. In an embodiment, module 822 may include second segment of selected particular portion that contains an object of interest transmitting at the second resolution module 838. In an embodiment, module 838 may include one or more of second segment of selected particular portion that contains an object of interest that is a football player transmitting at the second resolution module 840, second segment of selected particular portion that contains an object of interest that is a landmark transmitting at the second resolution module 842, second segment of selected particular portion that contains an object of interest that is an animal transmitting at the second resolution module 844, and second segment of selected particular portion that contains an object of interest that is a selected object in a dwelling transmitting at the second resolution module 846.

[00284] Referring now to Fig. 9, Fig. 9 illustrates an exemplary implementation of scene pixel de-emphasizing module 258. As illustrated in Fig. 9A, the scene pixel de-emphasizing module 258 may include one or more sub-logic modules in various alternative implementations and embodiments. For example, as shown in Fig. 9, e.g., Fig. 9A, in an embodiment, module 258 may include one or more of scene pixel nonselected nontransmitting module 902, scene pixel exclusive pixels selected from the particular portion transmitting to a particular device module 904, and scene nonselected pixel deleting module 910. In an embodiment, module 904 may include one or more of scene pixel exclusive pixels selected from the particular portion transmitting to a particular device that requested the particular portion module 906 and scene pixel exclusive pixels selected from the particular portion transmitting to a particular device user that requested the particular portion module 908.

[00285] Referring again to Fig. 9, e.g., Fig. 9B, in an embodiment, module 258 may include one or more of scene nonselected pixels indicating as nontransmitted module 912, scene nonselected pixels discarding module 916, and scene nonselected retention preventing module 918. In an embodiment, module 912 may include scene nonselected pixels appending data that indicates nontransmission module 914.

[00286] Referring again to Fig. 9, e.g., Fig. 9C, in an embodiment, module 258 may include scene pixel subset retaining module 920. In an embodiment, module 920 may include one or more of scene pixel ten percent subset retaining module 922, scene pixel targeted object subset retaining module 924, and scene pixel targeted automation identified object subset retaining module 928. In an embodiment, module 924 may include scene pixel targeted scenic landmark object subset retaining module 926.

[00287] Referring again to Fig. 9, e.g., Fig. 9D, in an embodiment, module 258 may include scene pixel subset storing in separate storage module 930. In an embodiment, module 930 may include scene pixel subset storing in separate local storage module 932 and scene pixel subset storing in separate storage for separate transmission module 934. In an embodiment, module 934 may include one or more of scene pixel subset storing in separate storage for separate transmission at off-peak time module 936 and scene pixel subset storing in separate storage for separate transmission as lower-priority data module 938.

[00288] In some implementations described herein, logic and similar implementations may include software or other control structures. Electronic circuitry, for example, may have one or more paths of electrical current constructed and arranged to implement various functions as described herein. In some implementations, one or more media may be configured to bear a device-detectable implementation when such media hold or transmit device-detectable instructions operable to perform as described herein. In some variants, for example, implementations may include an update or modification of existing software or firmware, or of gate arrays or programmable hardware, such as by performing a reception of or a transmission of one or more instructions in relation to one or more operations described herein. Alternatively or additionally, in some variants, an implementation may include special-purpose hardware, software, firmware components, and/or general-purpose components executing or otherwise invoking special-purpose components. Specifications or other implementations may be transmitted by one or more instances of tangible transmission media as described herein, optionally by packet transmission or otherwise by passing through distributed media at various times.

[00289] Following are a series of flowcharts depicting implementations. For ease of understanding, the flowcharts are organized such that the initial flowcharts present implementations via an example implementation and thereafter the following flowcharts present alternate implementations and/or expansions of the initial flowchart(s) as either sub-component operations or additional component operations building on one or more earlier-presented flowcharts. Those having skill in the art will appreciate that the style of presentation utilized herein (e.g., beginning with a presentation of a flowchart(s) presenting an example implementation and thereafter providing additions to and/or further details in subsequent flowcharts) generally allows for a rapid and easy understanding of the various process implementations. In addition, those skilled in the art will further appreciate that the style of presentation used herein also lends itself well to modular and/or object-oriented program design paradigms.

Exemplary Operational Implementation of Processor 250 and Exemplary Variants

[00290] Further, in Fig. 10 and in the figures to follow thereafter, various operations may be depicted in a box-within-a-box manner. Such depictions may indicate that an operation in an internal box may comprise an optional example embodiment of the operational step illustrated in one or more external boxes. However, it should be understood that internal box operations may be viewed as independent operations separate from any associated external boxes and may be performed in any sequence with respect to all other illustrated operations, or may be performed concurrently. Still further, these operations illustrated in Fig. 10 as well as the other operations to be described herein may be performed by at least one of a machine, an article of manufacture, or a composition of matter.

[00291] Those having skill in the art will recognize that the state of the art has progressed to the point where there is little distinction left between hardware, software, and/or firmware implementations of aspects of systems; the use of hardware, software, and/or firmware is generally (but not always, in that in certain contexts the choice between hardware and software can become significant) a design choice representing cost vs. efficiency tradeoffs. Those having skill in the art will appreciate that there are various vehicles by which processes and/or systems and/or other technologies described herein can be effected (e.g., hardware, software, and/or firmware), and that the preferred vehicle will vary with the context in which the processes and/or systems and/or other technologies are deployed. For example, if an implementer determines that speed and accuracy are paramount, the implementer may opt for a mainly hardware and/or firmware vehicle; alternatively, if flexibility is paramount, the implementer may opt for a mainly software implementation; or, yet again alternatively, the implementer may opt for some combination of hardware, software, and/or firmware. Hence, there are several possible vehicles by which the processes and/or devices and/or other technologies described herein may be effected, none of which is inherently superior to the other in that any vehicle to be utilized is a choice dependent upon the context in which the vehicle will be deployed and the specific concerns (e.g., speed, flexibility, or predictability) of the implementer, any of which may vary. Those skilled in the art will recognize that optical aspects of implementations will typically employ optically-oriented hardware, software, and/or firmware.

[00292] Throughout this application, examples and lists are given, with parentheses, the abbreviation "e.g.," or both. Unless explicitly otherwise stated, these examples and lists are merely exemplary and are non-exhaustive. In most cases, it would be prohibitive to list every example and every combination. Thus, smaller, illustrative lists and examples are used, with focus on imparting understanding of the claim terms rather than limiting the scope of such terms.

[00293] Referring now to Fig. 10, Fig. 10 shows operation 1000, e.g., an example operation of message processing device 230 operating in an environment 200. In an embodiment, operation 1000 may include operation 1002 depicting capturing a scene that includes one or more images, through use of an array of more than one image sensor. For example, Fig. 6, e.g., Fig. 6B, shows multiple image sensor based scene capturing module 252 capturing a scene (e.g., collecting data that includes visual data, e.g., pixel data, sound data, electromagnetic data, nonvisible spectrum data, and the like) that includes one or more images (e.g., graphical representations of things, e.g., that are composed of pixels or other electronic data, or that are captured by an image sensor, e.g., a CCD or a CMOS sensor), through use of an array (e.g., any grouping configured to work together in unison, regardless of arrangement, symmetry, or appearance) of more than one image sensor (e.g., a device, component, or collection of components configured to collect light, sound, or other electromagnetic spectrum data, and/or to convert the collected data into digital data, or perform at least a portion of the foregoing actions).
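
A minimal sketch, under the assumption of equal-sized, pre-aligned tiles in a row-major grid, of combining the images acquired from an array of image sensors into a single scene (overlap handling is sketched separately below); the helper name assemble_scene and the tile dimensions are illustrative assumptions.

```python
import numpy as np

def assemble_scene(tiles, rows, cols):
    """Combine the images acquired from a rows x cols grid of image sensors
    into a single scene array. Tiles are assumed equal-sized and already
    aligned, with no overlap between neighbors.

    tiles -- list of rows*cols arrays, each tile_h x tile_w x 3,
             ordered row-major across the sensor grid
    """
    grid = [np.hstack(tiles[r * cols:(r + 1) * cols]) for r in range(rows)]
    return np.vstack(grid)

# Example: a 10x10 array of sensors (tile sizes shrunk for the demo).
tiles = [np.random.randint(0, 256, (120, 160, 3), dtype=np.uint8)
         for _ in range(100)]
scene = assemble_scene(tiles, rows=10, cols=10)
print(scene.shape)   # (1200, 1600, 3)
```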

[00294] Referring again to Fig. 10, operation 1000 may include operation 1004 depicting selecting a particular portion of the scene that includes at least one image, wherein the selected particular portion is smaller than the scene. For example, Fig. 2, e.g., Fig. 2B, shows scene particular portion selecting module 254 selecting (e.g., whether actively or passively, choosing, flagging, designating, denoting, signifying, marking for, taking some action with regard to, changing a setting in a database, creating a pointer to, storing in a particular memory or part/address of a memory, etc.) a particular portion (e.g., some subset of the entire scene that includes some pixel data, whether pre- or post-processing, which may or may not include data from multiple of the array of more than one image sensor) of the scene (e.g., the data, e.g., image data or otherwise (e.g., sound, electromagnetic), captured by the array of more than one image sensor, combined or stitched together, at any stage of processing, pre or post), that includes at least one image (e.g., a portion of pixel or other data that is related temporally or spatially (e.g., contiguous or partly contiguous)), wherein the selected particular portion is smaller (e.g., some objectively measurable feature has a lower value, e.g., size, resolution, color, color depth, pixel data granularity, number of colors, hue, saturation, alpha value, shading) than the scene (e.g., the data, e.g., image data or otherwise (e.g., sound, electromagnetic), captured by the array of more than one image sensor, combined or stitched together, at any stage of processing, pre or post).

[00295] Referring again to Fig. 10, operation 1000 may include operation 1006 depicting transmitting only the selected particular portion from the scene to a remote location. For example, Fig. 2, e.g., Fig. 2B, shows selected particular portion transmitting module 256 transmitting only (e.g., not transmitting the parts of the scene that are not part of the selected particular portion) the selected particular portion (e.g., the designated pixel data) from the scene (e.g., the data, e.g., image data or otherwise (e.g., sound, electromagnetic), captured by the array of more than one image sensor, combined or stitched together, at any stage of processing, pre, or post) to a remote location (e.g., a device or other component that is separate from the image device, "remote" here not necessarily implying or excluding any particular distance, e.g., the remote device may be a server device, some combination of cloud devices, an individual user device, or some combination of devices).
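
By way of a non-limiting sketch of operation 1006, the capture side might crop and transmit only the requested rectangle as follows; the transport callable send and the request-tuple layout are assumptions made for the example.

```python
import numpy as np

def transmit_selected_portion(scene, request, send):
    """Transmit only the selected particular portion of the scene to a
    remote location; pixels outside the requested rectangle never leave
    the capture side.

    scene   -- H x W x 3 pixel array for the full captured scene
    request -- (top, left, height, width) rectangle chosen from the scene
    send    -- callable standing in for the network transport
    """
    top, left, h, w = request
    portion = scene[top:top + h, left:left + w]
    send(portion.tobytes())            # only these bytes are transmitted
    return portion.nbytes, scene.nbytes

scene = np.random.randint(0, 256, (4000, 6000, 3), dtype=np.uint8)
sent, captured = transmit_selected_portion(scene, (1000, 2000, 480, 640),
                                           send=lambda payload: None)
print(sent, "of", captured, "bytes transmitted")  # 921600 of 72000000
```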

[00296] Referring again to Fig. 10, operation 1000 may include operation 1008 depicting de-emphasizing pixels from the scene that are not part of the selected particular portion of the scene. For example, Fig. 2, e.g., Fig. 2B, shows scene pixel de-emphasizing module 258 de-emphasizing (e.g., whether actively or passively, taking some action to separate pixels from the scene that are not part of the selected particular portion, including deleting, marking for deletion, storing in a separate location or memory address, flagging, moving, or, in an embodiment, simply not saving the pixels in an area in which they can be readily retained).

[00297] Figs. 11A-11F depict various implementations of operation 1002, depicting capturing a scene that includes one or more images, through use of an array of more than one image sensor, according to embodiments. Referring now to Fig. 11A, operation 1002 may include operation 1102 depicting capturing the scene that includes the one or more images, through use of an array of image sensors. For example, Fig. 6, e.g., Fig. 6A, shows multiple image sensor based scene that includes the one or more images capturing through use of an array of image sensors module 402 capturing the scene that includes the one or more images, through use of an array of image sensors (e.g., one hundred three-megapixel image sensors attached to a metal plate and arranged in a consistent pattern).

[00298] As a further example of operation 1102, module 402 may capture the scene (e.g., a live street view of a busy intersection in Alexandria, VA) that includes the one or more images (e.g., images of cars passing by in the live street view, images of shops on the intersection, images of people in the crosswalk, images of trees growing on the side, images of a particular person that has been designated for watching).

[00299] Referring again to Fig. 11A, operation 1002 may include operation 1104 depicting capturing the scene that includes the one or more images, through use of two image sensors arranged side by side and angled toward each other. For example, Fig. 6, e.g., Fig. 6A, shows multiple image sensor based scene that includes the one or more images capturing through use of two image sensors arranged side by side and angled toward each other module 604 capturing the scene (e.g., a virtual tourism scene, e.g., a camera array pointed at the Great Pyramids) that includes the one or more images (e.g., images that have been selected by users that are using the virtual tourism scene, e.g., to view the pyramid entrance or the top vents in the pyramid) through use of two image sensors (e.g., two digital cameras with a 100 megapixel rating) arranged side by side (e.g., in a line, when viewed from a particular perspective) and angled toward each other (e.g., the digital cameras are pointed toward each other).

[00300] Referring again to Fig. 11A, operation 1002 may include operation 1106 depicting capturing the scene that includes the one or more images, through use of the array of image sensors arranged in a grid. For example, Fig. 6, e.g., Fig. 6A, shows multiple image sensor based scene that includes the one or more images capturing through use of image sensors aligned in a grid pattern module 606 capturing the scene (e.g., a football stadium area) that includes the one or more images (e.g., images of the field, images of a person in the crowd, images of a specific football player), through use of the array of image sensors (e.g., one hundred image sensors) arranged in a grid (e.g., the image sensors are attached to a rigid object and formed in a 10x10 grid pattern).

[00301] Referring again to Fig. 11A, operation 1002 may include operation 1108 depicting capturing the scene that includes the one or more images, through use of the array of image sensors arranged in a line. For example, Fig. 6, e.g., Fig. 6A, shows multiple image sensor based scene that includes the one or more images capturing through use of image sensors aligned in a line pattern module 608 capturing the scene (e.g., a checkout line at a grocery) that includes the one or more images (e.g., images of the shoppers and the items in the shoppers' carts), through use of the array of image sensors arranged in a line.

[00302] Referring again to Fig. 11A, operation 1108 may include operation 1110 depicting capturing the scene that includes the one or more images, through use of the array of image sensors arranged in a line such that a field of view is greater than 120 degrees. For example, Fig. 6, e.g., Fig. 6A, shows multiple image sensor based scene that includes the one or more images capturing through use of image sensors aligned in a line such that a field of view is greater than 120 degrees module 610 capturing the scene (e.g., a highway bridge) that includes the one or more images (e.g., automatically tracked images of the license plates of every car that crosses the highway bridge), through use of the array of image sensors arranged in a line such that a field of view (e.g., the viewable area of the camera array) is greater than 120 degrees.

[00303] Referring again to Fig. 11A, operation 1108 may include operation 1112 depicting capturing the scene that includes the one or more images, through use of the array of image sensors arranged in a line such that a field of view is 180 degrees. For example, Fig. 6, e.g., Fig. 6A, shows multiple image sensor based scene that includes the one or more images capturing through use of image sensors aligned in a line such that a field of view is 180 degrees module 612 capturing the scene (e.g., a room of a home) that includes the one or more images (e.g., images of the appliances in the home and recordation of when those appliances are used, e.g., when a refrigerator door is opened, when a microwave is used, when a load of laundry is placed into a washing machine), through use of the array of image sensors arranged in a line such that a field of view is 180 degrees.

[00304] Referring now to Fig. 11B, operation 1002 may include operation 1114 depicting capturing the scene that includes one or more images, through use of an array of more than one stationary image sensor. For example, Fig. 6, e.g., Fig. 6B, shows multiple image sensor based scene that includes the one or more images capturing through use of more than one stationary image sensor module 614 capturing the scene (e.g., a warehouse that is a target for break-ins) that includes one or more images (e.g., images of each person that walks past the warehouse), through use of an array of more than one stationary image sensor (e.g., the image sensor does not move independently of the other image sensors).

[00305] Referring again to Fig. 11B, operation 1002 may include operation 1116 depicting capturing the scene that includes one or more images, through use of an array of more than one static image sensor. For example, Fig. 6, e.g., Fig. 6B, shows multiple image sensor based scene that includes the one or more images capturing through use of more than one static image sensor module 616 capturing the scene (e.g., a virtual tourism scene of a waterfall and lake) that includes one or more images (e.g., images of the waterfall and of the animals that collect there), through use of an array of more than one static image sensor (e.g., an image sensor that does not change its position, zoom, or pan).

[00306] Referring again to Fig. 11B, operation 1116 may include operation 1118 depicting capturing the scene that includes one or more images, through use of the array of more than one image sensor that has a fixed focal length and a fixed field of view. For example, Fig. 6, e.g., Fig. 6B, shows multiple image sensor based scene that includes the one or more images capturing through use of more than one static image sensor that has a fixed focal length and a fixed field of view module 618 capturing the scene (e.g., a virtual tourism scene of a mountain trail) that includes one or more images, through use of the array of more than one image sensor that has a fixed focal length and a fixed field of view.

[00307] Referring again to Fig. 11B, operation 1002 may include operation 1120 depicting capturing the scene that includes the one or more images, through use of an array of more than one image sensor mounted in a stationary location. For example, Fig. 6, e.g., Fig. 6B, shows multiple image sensor based scene that includes the one or more images capturing through use of more than one image sensor mounted in a fixed location module 620 capturing the scene (e.g., a juvenile soccer field) that includes the one or more images (e.g., images of each player on the youth soccer team), through use of an array of more than one image sensor mounted in a stationary location (e.g., on a pole, or on the side of a building or structure).

[00308] Referring again to Fig. 11B, operation 1002 may include operation 1122 depicting capturing the scene that includes one or more images, through use of an array of image sensors mounted on a movable platform. For example, Fig. 6, e.g., Fig. 6B, shows multiple image sensor based scene that includes the one or more images capturing through use of more than one image sensor mounted on a movable platform module 622 capturing the scene (e.g., a line of people leaving a warehouse-type store with a set of items that has to be compared against a receipt) that includes one or more images (e.g., images of the person's receipt that is visible in the cart, and images of the items in the cart), through use of an array of image sensors mounted on a movable platform (e.g., a rotating camera, or on a drone, or on a remote controlled vehicle, or simply on a portable stand that can be picked up and set down).

[00309] Referring again to Fig. 11B, operation 1122 may include operation 1124 depicting capturing the scene that includes one or more images, through use of an array of image sensors mounted on an unmanned aerial vehicle. For example, Fig. 6, e.g., Fig. 6B, shows multiple image sensor based scene that includes the one or more images capturing through use of more than one image sensor mounted on an unmanned aerial vehicle module 624 capturing the scene (e.g., a view of a campsite where soldiers are gathered) that includes one or more images (e.g., images of the soldiers and their equipment caches), through use of an array of image sensors mounted on an unmanned aerial vehicle (e.g., a drone).

[00310] Referring now to Fig. 11C, operation 1002 may include operation 1126 depicting capturing the scene that includes one or more images, through use of an array of image sensors mounted on a satellite. For example, Fig. 6, e.g., Fig. 6C, shows multiple image sensor based scene that includes the one or more images capturing through use of more than one image sensor mounted on a satellite module 626 capturing the scene (e.g., a live street view inside the city of Seattle, WA) that includes one or more images, through use of an array of image sensors mounted on a satellite.

[00311] Referring again to Fig. 11C, operation 1002 may include operation 1128 depicting capturing the scene that includes one or more images, through use of an array of more than one image sensor, wherein the more than one image sensors each capture an image that represents a portion of the scene. For example, Fig. 6, e.g., Fig. 6C, shows multiple image sensor based scene capturing module configured to capture a scene that includes one or more images that represent portions of the scene, through use of more than one image sensor 628 capturing the scene that includes one or more images, through use of an array of more than one image sensor, wherein the more than one image sensors (e.g., two-megapixel CCD sensors) each capture an image that represents a portion of the scene (e.g., a live street view of a parade downtown).

[00312] Referring again to Fig. 11C, operation 1128 may include operation 1130 depicting capturing the scene that includes one or more images, through use of the array of more than one image sensor, wherein the more than one image sensors each capture an image that represents a portion of the scene, and wherein the images are stitched together. For example, Fig. 6, e.g., Fig. 6C, shows multiple image sensor based scene capturing module configured to capture a scene that includes one or more images that represent portions of the scene that are configured to be stitched together, through use of more than one image sensor 630 capturing the scene (e.g., a protest in a city square) that includes one or more images (e.g., images of people in the protest), through use of the array of more than one image sensor (e.g., twenty-five ten-megapixel sensors), wherein the more than one image sensors each capture an image that represents a portion of the scene, and wherein the images are stitched together (e.g., after capturing, an automated process lines up the individually captured images, removes overlap, and merges them into a single image).
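
A minimal sketch of the overlap-removal-and-merge step described above, assuming a single row of pre-aligned tiles with a fixed five percent overlap between neighbors; a real stitcher would register the images rather than trim a fixed margin, so the function name stitch_row and the fixed overlap fraction are assumptions made for illustration.

```python
import numpy as np

def stitch_row(tiles, overlap_frac=0.05):
    """Stitch a row of already-aligned tiles whose adjacent edges overlap
    by a fixed fraction: the overlapping columns are dropped from each
    subsequent tile before the tiles are merged into one image.
    """
    overlap = int(tiles[0].shape[1] * overlap_frac)
    trimmed = [tiles[0]] + [t[:, overlap:] for t in tiles[1:]]
    return np.hstack(trimmed)

# Example: five tiles, each overlapping its left neighbor by five percent.
tiles = [np.random.randint(0, 256, (600, 800, 3), dtype=np.uint8)
         for _ in range(5)]
row = stitch_row(tiles)
print(row.shape)   # (600, 3840, 3): 800 + 4 * (800 - 40)
```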

[00313] Referring again to Fig. 11C, operation 1002 may include operation 1132 depicting acquiring an image from each image sensor of the more than one image sensors. For example, Fig. 6, e.g., Fig. 6C, shows particular image from each image sensor acquiring module 632 acquiring an image from each image sensor of the more than one image sensors.

[00314] Referring again to Fig. 11C, operation 1002 may include operation 1134, which may appear in conjunction with operation 1132, operation 1134 depicting combining the acquired images from the more than one image sensors into the scene. For example, Fig. 6, e.g., Fig. 6C, shows acquired particular image from each image sensor combining into the scene module 634 combining the acquired images (e.g., images from a scene of a plains oasis) from the more than one image sensors (e.g., three thousand one-megapixel sensors) into the scene (e.g., the scene of a plains oasis).

[00315] Referring again to Fig. 11C, operation 1132 may include operation 1136 depicting acquiring images from each image sensor of the more than one image sensors, wherein each acquired image at least partially overlaps at least one other image. For example, Fig. 6, e.g., Fig. 6C, shows particular image with at least partial other image overlap from each image sensor acquiring module 636 acquiring images (e.g., a portion of the overall image, e.g., which is a view of a football stadium during a game) from each image sensor (e.g., a CMOS sensor), wherein each acquired image at least partially overlaps at least one other image (e.g., each image overlaps its neighboring image by five percent).

[00316] Referring again to Fig. 11C, operation 1136 may include operation 1138 depicting acquiring the image from each image sensor of the more than one image sensors, wherein each acquired image at least partially overlaps at least one other image captured by an adjacent image sensor. For example, Fig. 6, e.g., Fig. 6C, shows particular image with at least partial adjacent image overlap from each image sensor acquiring module 638 acquiring the image from each image sensor of the more than one image sensors, wherein each acquired image (e.g., each image is a portion of a street corner that will form a live street view) at least partially overlaps at least one other image captured by an adjacent (e.g., there are no intervening image sensors in between) image sensor.

[00317] Referring now to Fig. 11D, operation 1002 may include operation 1140 depicting capturing the scene that includes one or more images, through use of the array of more than one image sensor, wherein a size of the scene is greater than a capacity to transmit the scene. For example, Fig. 6, e.g., Fig. 6D, shows multiple image sensor based scene that is larger than a bandwidth for remote transmission of the scene capturing through use of more than one image sensor module 640 capturing the scene that includes one or more images, through use of the array of more than one image sensor, wherein a size of the scene (e.g., 1 gigapixel, captured 60 times per second) is greater than a capacity to transmit the scene (e.g., the capacity to transmit may be 30 megapixels/second).
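
As a quick worked check of the figures in this example (an assumed one-gigapixel scene captured sixty times per second against an assumed thirty megapixel-per-second link):

```python
# Captured pixel rate: a one-gigapixel scene, sixty captures per second.
captured_rate = 1_000_000_000 * 60   # pixels per second captured
link_capacity = 30_000_000           # pixels per second transmittable

# Only this fraction of the captured pixels can be selected for
# transmission if the link is not to be exceeded.
max_fraction = link_capacity / captured_rate
print(f"at most {max_fraction:.4%} of captured pixels can be sent")
# -> at most 0.0500% of captured pixels can be sent
```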

[00318] Referring again to Fig. 11D, operation 1140 may include operation 1142 depicting capturing the scene that includes one or more images, through use of the array of more than one image sensor, wherein an amount of image data captured exceeds a bandwidth for transmitting the image data to a remote location. For example, Fig. 6, e.g., Fig. 6D, shows multiple image sensor based scene that contains more image data than a bandwidth for remote transmission of the scene capturing through use of more than one image sensor module 642 capturing the scene that includes one or more images, through use of the array of more than one image sensor, wherein an amount of image data captured (e.g., 750 megapixels/second) exceeds a bandwidth for transmitting the image data to a remote location (e.g., the capacity to transmit may be 25 megapixels/second).

[00319] Referring again to Fig. 11D, operation 1142 may include operation 1144 depicting capturing the scene that includes one or more images, through use of the array of more than one image sensor, wherein the amount of image data captured exceeds the bandwidth for transmitting the image data to the remote location by a factor of ten. For example, Fig. 6, e.g., Fig. 6D, shows multiple image sensor based scene that contains more image data than a bandwidth for remote transmission of the scene by a factor of ten capturing through use of more than one image sensor module 644 capturing the scene that includes one or more images, through use of the array of more than one image sensor, wherein an amount of image data captured (e.g., 750 megapixels/second) exceeds a bandwidth for transmitting the image data to a remote location by a factor of ten (e.g., the capacity to transmit may be 75 megapixels/second).

[00320] Referring again to Fig. 11D, operation 1002 may include operation 1146 depicting capturing a scene of a tourist destination that includes one or more images, through use of an array of more than one image sensor. For example, Fig. 6, e.g., Fig. 6D, shows multiple image sensor based scene of a tourist destination capturing through use of more than one image sensor module 646 capturing a scene of a tourist destination that includes one or more images (e.g., images of Mount Rushmore) through use of an array of more than one image sensor (e.g., fifteen CMOS 10 megapixel sensors aligned in two staggered arc-shaped rows).

[00321] Referring again to Fig. 11D, operation 1002 may include operation 1148 depicting capturing a scene of a highway across a bridge that includes one or more images, through use of an array of more than one image sensor, wherein the one or more images include one or more images of cars crossing the highway across the bridge. For example, Fig. 6, e.g., Fig. 6D, shows multiple image sensor based scene of a highway bridge capturing through use of more than one image sensor module 648 capturing a scene of a highway across a bridge that includes one or more images, through use of an array of more than one image sensor (e.g., twenty-two inexpensive, lightweight 2 megapixel sensors, arranged in a circular grid), wherein the one or more images include one or more images of cars crossing the highway across the bridge.

[00322] Referring again to Fig. 11D, operation 1002 may include operation 1150 depicting capturing a scene of a home that includes one or more images, through use of an array of more than one image sensor, wherein the one or more images include an image of an appliance in the home. For example, Fig. 6, e.g., Fig. 6D, shows multiple image sensor based scene of a home interior capturing through use of more than one image sensor module 650 capturing a scene of a home that includes one or more images, through use of an array of more than one image sensor (e.g., multiple sensors working in connection but placed at different angles and placements across the house, and which may feed into a same source, and which may provide larger views of the house), wherein the one or more images include an image of an appliance (e.g., a refrigerator) in the home.

[00323] Referring now to Fig. 11E, operation 1002 may include operation 1152 depicting capturing the scene that includes one or more images, through use of a grouping of more than one image sensor. For example, Fig. 6, e.g., Fig. 6E, shows multiple image sensor based scene that includes the one or more images capturing through use of a grouping of image sensors module 652 capturing the scene (e.g., a historic landmark) that includes one or more images (e.g., images of the entrance to the historic landmark and points of interest at the historic landmark), through use of a grouping of more than one image sensor (e.g., a grouping of one thousand two-megapixel camera sensors arranged on a concave surface).

[00324] Referring again to Fig. 11E, operation 1002 may include operation 1154 depicting capturing the scene that includes one or more images, through use of multiple image sensors whose data is directed to a common source. For example, Fig. 6, e.g., Fig. 6E, shows multiple image sensor based scene that includes the one or more images capturing through use of a grouping of image sensors that direct image data to a common collector module 654 capturing the scene that includes one or more images, through use of multiple image sensors (e.g., 15 CMOS sensors) whose data is directed to a common source (e.g., a common processor, e.g., the fifteen CMOS sensors all transmit their digitized data to be processed by a common processor, or a common architecture, e.g., multiprocessor or multi-core processors).

[00325] Referring again to Fig. 11E, operation 1002 may include operation 1156 depicting capturing the scene that includes one or more images, through use of an array of more than one image sensor, the more than one image sensor including one or more of a charge-coupled device and a complementary metal-oxide-semiconductor device. For example, Fig. 6, e.g., Fig. 6E, shows multiple image sensor based scene that includes the one or more images capturing through use of a grouping of image sensors that include charge-coupled devices and complementary metal-oxide-semiconductor devices module 656 capturing the scene (e.g., a museum interior for a virtual tourism site) that includes one or more images (e.g., images of one or more exhibits and/or artifacts in the museum), through use of an array of more than one image sensor, the more than one image sensor including one or more of a charge-coupled device and a complementary metal-oxide-semiconductor device.

[00326] Referring now to Fig. 11F, operation 1002 may include operation 1158 depicting capturing the scene that includes image data and sound data, through use of an array of more than one image sensor, wherein the array of more than one image sensor is configured to capture image data and sound data. For example, Fig. 6, e.g., Fig. 6F, shows multiple image and sound data based scene that includes the one or more images capturing through use of image and sound sensors module 658 capturing the scene that includes image data (e.g., images of a waterfall and oasis for animals) and sound data (e.g., sound of the waterfall and the cries and calls of the various animals at the oasis), through use of an array of more than one image sensor (e.g., a grouping of twelve sensors each rated at 25 megapixels, and twelve further sensors, each rated at 2 megapixels), wherein the array of more than one image sensor is configured to capture image data (e.g., the images at the waterfall) and sound data (e.g., the sounds from the waterfall and the animals that are there).

[00327] Referring again to Fig. 11F, operation 1158 may include operation 1160 depicting capturing the scene that includes image data and sound data, through use of the array of more than one image sensor, wherein the array of more than one image sensor includes image sensors configured to capture image data and microphones configured to capture sound data. For example, Fig. 6, e.g., Fig. 6F, shows multiple image and sound data based scene that includes the one or more images capturing through use of image and sound microphones module 660 capturing the scene (e.g., images of an old battlefield for virtual tourism) that includes image data and sound data, through use of the array of more than one image sensor (e.g., alternately placed CMOS sensors and microphones), wherein the array of more than one image sensor includes image sensors configured to capture image data and microphones configured to capture sound data.

[00328] Referring again to Fig. 11F, operation 1002 may include operation 1162 depicting capturing the scene that includes soundwave image data, through use of the array of more than one soundwave image sensor that captures soundwave data. For example, Fig. 6, e.g., Fig. 6F, shows multiple image sensor based scene that includes sound wave image data capturing through use of an array of sound wave image sensors module 662 capturing the scene that includes soundwave image data, through use of the array of more than one soundwave image sensor (e.g., audio sensors) that captures soundwave data (e.g., sound data, whether in visualization form or convertible to visualization form).

[00329] Referring again to Fig. 11F, operation 1002 may include operation 1164 depicting capturing a scene that includes video data, through use of an array of more than one video capture device. For example, Fig. 6, e.g., Fig. 6F, shows multiple video capture sensor based scene that includes the one or more images capturing through use of one or more video capture sensors module 664 capturing a scene (e.g., a scene of a watering hole) that includes video data (e.g., a video of a lion sunning itself), through use of an array of more than one video capture device (e.g., a video camera, or a group of CMOS sensors capturing video at 15 frames per second).

[00330] Figs. 12A-12C depict various implementations of operation 1004, depicting selecting a particular portion of the scene that includes at least one image, wherein the selected particular portion is smaller than the scene, according to embodiments. Referring now to Fig. 12A, operation 1004 may include operation 1202 depicting selecting the particular image from the scene, wherein the selected particular image represents the request for the image that is smaller than the entire scene. For example, Fig. 7, e.g., Fig. 7A, shows scene particular portion that is smaller than the scene and that includes at least one image selecting module 702 selecting the particular image (e.g., a picture of a bear) from the scene (e.g., a tropical oasis), wherein the selected particular image (e.g., the image of the bear) represents the request for the image that is smaller than the entire scene (e.g., the entire scene, when combined, may be close to 1,000,000 x 1,000,000, but the image of the bear may be matched to the user device's resolution, e.g., 640x480).

[00331] Referring again to Fig. 12A, operation 1202 may include operation 1204 depicting selecting the particular image from the scene, wherein the selected particular image represents a particular remote user-requested image that is smaller than the scene. For example, Fig. 7, e.g., Fig. 7A, shows scene particular portion that is smaller than the scene and that includes at least one remote user-requested image selecting module 704 selecting the particular image (e.g., an image of the quarterback) from the scene (e.g., a football game at a stadium), wherein the selected particular image (e.g., the image of the quarterback) represents a particular remote user-requested image (e.g., a user, watching the game back home, has selected that the images focus on the quarterback) that is smaller than the scene (e.g., the image of the quarterback is transmitted at 1920 x 1080 resolution at 30 fps, whereas the scene is captured at approximately 1,000,000 x 1,000,000 at 60 fps).

[00332] Referring again to Fig. 12A, operation 1004 may include operation 1206 depicting selecting the particular portion of the scene that includes a requested image, wherein the selected particular portion is smaller than the scene. For example, Fig. 7, e.g., Fig. 7A, shows scene particular portion that is smaller than the scene and that includes at least one requested image selecting module 706 selecting the particular portion (e.g., a particular restaurant, e.g., Ray's Steakhouse, and a particular person, e.g., President Obama) of the scene (e.g., a street corner during live street view) that includes a requested image, wherein the selected particular portion (e.g., the steakhouse and the President) is smaller than the scene (e.g., a city block where the steakhouse is located).

[00333] Referring again to Fig. 12A, operation 1206 may include operation 1208 depicting selecting the particular portion of the scene that includes an image requested by a remote operator of the array of more than one image sensors, wherein the selected particular portion is smaller than the scene. For example, Fig. 7, e.g., Fig. 7A, shows scene particular portion that is smaller than the scene and that includes at least one remote-operator requested image selecting module 708 selecting the particular portion of the scene that includes an image requested by a remote operator of the array (e.g., by "remote operator" it is meant here that the remote operator selects from the images available from the scene, giving the illusion of "zooming" and "panning"; in other embodiments, the array of image sensors or the individual image sensors may be moved by remote command, if so equipped (e.g., on movable platforms or mounted on UAVs or satellites, or if each image sensor is wired to hydraulics or servos)) of more than one image sensors, wherein the selected particular portion is smaller than the scene.

[00334] Referring again to Fig. 12A, operation 1004 may include operation 1210 depicting receiving a request for a particular image. For example, Fig. 7, e.g., Fig. 7A, shows particular image request receiving module 710 receiving a request for a particular image (e.g., an image of a soccer player on the field as a game is going on).

[00335] Referring again to Fig. 12A, operation 1004 may include operation 1212, which may appear in conjunction with operation 1210, operation 1212 depicting selecting the particular image from the scene, wherein the particular image is smaller than the scene. For example, Fig. 7, e.g., Fig. 7A, shows received request for particular image selecting from scene module 712 selecting the particular image (e.g., the player on the soccer field) from the scene (e.g., the soccer field and the stadium, and, e.g., the surrounding parking lots), wherein the particular image (e.g., the image of the player) is smaller than (e.g., is expressed in fewer pixels than) the scene (e.g., the soccer field and the stadium, and, e.g., the surrounding parking lots).

[00336] Referring now to Fig. 12B, operation 1004 may include operation 1214 depicting receiving a first request for a first particular image and a second request for a second particular image. For example, Fig. 7, e.g., Fig. 7B, shows first request for a first particular image and second request for a second particular image receiving module 714 receiving a first request for a first particular image (e.g., a quarterback player on the football field) and a second request for a second particular image (e.g., a defensive lineman on the football field).

[00337] Referring again to Fig. 12B, operation 1004 may include operation 1216, which may appear in conjunction with operation 1214, operation 1216 depicting selecting the particular portion of the scene that includes the first particular image and the second particular image, wherein the particular portion of the scene is smaller than the scene. For example, Fig. 7, e.g., Fig. 7B, shows scene particular portion that is first particular image and second particular image selecting module 716 selecting the particular portion of the scene that includes the first particular image (e.g., the area at which the quarterback is standing) and the second particular image (e.g., the area at which the defensive lineman is standing), wherein the particular portion of the scene is smaller than the scene. In an embodiment, the detection of the image that contains the quarterback is done by automation. In another embodiment, the user selects the person they wish to follow (e.g., by voice), and the system tracks that person as they move through the scene, capturing that person as the "particular image" regardless of their location in the scene.

[00338] Referring again to Fig. 12B, operation 1214 may include operation 1218 depicting receiving the first request for the first particular image and the second request for the second particular image, wherein the first particular image and the second particular image do not overlap. For example, Fig. 7, e.g., Fig. 7B, shows first request for a first particular image and second request for a second particular image that does not overlap the first particular image receiving module 718 receiving the first request for the first particular image (e.g., a request to see the first exhibit in a museum) and the second request for the second particular image (e.g., a request to see the last exhibit in the museum), wherein the first particular image and the second particular image do not overlap (e.g., the first particular image and the second particular image are both part of the scene, but do not share any common pixels).

[00339] Referring again to Fig. 12B, operation 1214 may include operation 1220 depicting receiving a first request for a first particular image and a second request for a second particular image, wherein the first particular image and the second particular image overlap at an overlapping portion. For example, Fig. 7, e.g., Fig. 7B, shows first request for a first particular image and second request for a second particular image that overlaps the first particular image receiving module 720 receiving a first request for a first particular image (e.g., a request to watch a particular animal at a watering hole) and a second request for a second particular image (e.g., a request to watch a different animal, e.g., an alligator), wherein the first particular image and the second particular image overlap at an overlapping portion (e.g., a portion of the pixels in the first particular image are the same ones as used in the second particular image).

[00340] Referring again to Fig. 12B, operation 1220 may include operation 1222 depicting receiving the first request for the first particular image and the second request for the second particular image, wherein the first particular image and the second particular image overlap at the overlapping portion, and wherein the overlapping portion is configured to be transmitted once only. For example, Fig. 7, e.g., Fig. 7B, shows first request for a first particular image and second request for a second particular image that overlaps the first particular image and an overlapping portion is configured to be transmitted once only receiving module 722 receiving the first request for the first particular image (e.g., a request to watch a particular animal at a watering hole) and the second request for the second particular image (e.g., a request to watch a different animal, e.g., an alligator), wherein the first particular image and the second particular image overlap at an overlapping portion (e.g., a portion of the pixels in the first particular image are the same ones as used in the second particular image), and wherein the overlapping portion is configured to be transmitted once only (e.g., the shared pixels are transmitted once only to a remote server, where they are used to transmit both the first particular image and the second particular image to their ultimate destinations).
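
A non-limiting sketch of one way the shared pixels could be carried once only: the capture side sends the bounding region enclosing both requests a single time, together with each request's rectangle relative to it, so the remote server can reassemble both particular images. For widely separated requests this bounding region would over-send, so a tighter scheme would transmit the two rectangles with only their intersection deduplicated; the function name transmit_shared_once and the tuple layout are assumptions made for the example.

```python
import numpy as np

def transmit_shared_once(scene, req_a, req_b, send):
    """Serve two overlapping rectangular requests while transmitting the
    shared pixels only once: the union bounding region is cropped and sent
    a single time, plus each request's rectangle relative to it.
    """
    (ta, la, ha, wa), (tb, lb, hb, wb) = req_a, req_b
    top, left = min(ta, tb), min(la, lb)
    bottom, right = max(ta + ha, tb + hb), max(la + wa, lb + wb)
    union = scene[top:bottom, left:right]
    send(union.tobytes())                   # overlap carried once only
    rel_a = (ta - top, la - left, ha, wa)   # where request A sits
    rel_b = (tb - top, lb - left, hb, wb)   # where request B sits
    return union.shape, rel_a, rel_b

scene = np.random.randint(0, 256, (2000, 3000, 3), dtype=np.uint8)
shape, a, b = transmit_shared_once(scene, (100, 100, 400, 600),
                                   (300, 500, 400, 600),
                                   send=lambda payload: None)
print(shape, a, b)   # (600, 1000, 3) (0, 0, 400, 600) (200, 400, 400, 600)
```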

[00341] Referring now to Fig. 12C, operation 1004 may include operation 1224 depicting selecting the particular portion of the scene that includes at least one image, wherein the selected particular portion is smaller than the scene and contains a particular image object. For example, Fig. 7, e.g., Fig. 7C, shows scene particular portion that is smaller than the scene and that contains a particular image object selecting module 724 selecting the particular portion (e.g., a person walking down the street) of the scene (e.g., a street view of a busy intersection) that includes at least one image (e.g., the image of the person), wherein the selected particular portion is smaller than the scene and contains a particular image object (e.g., the person walking down the street).

[00342] Referring again to Fig. 12C, operation 1224 may include operation 1226 depicting selecting the particular portion of the scene that includes at least one image, wherein the selected particular portion is smaller than the scene and contains an image object of a person. For example, Fig. 7, e.g., Fig. 7C, shows scene particular portion that is smaller than the scene and that contains a particular image object that is a person selecting module 726 selecting the particular portion (e.g., a person walking down the street) of the scene (e.g., a street view of a busy intersection) that includes at least one image (e.g., the image of the person), wherein the selected particular portion is smaller than the scene and contains an image object of a person (e.g., the person walking down the street).

[00343] Referring again to Fig. 12C, operation 1224 may include operation 1228 depicting selecting the particular portion of the scene that includes at least one image, wherein the selected particular portion is smaller than the scene and contains an image object of a car. For example, Fig. 7, e.g., Fig. 7C, shows scene particular portion that is smaller than the scene and that contains a particular image object that is a vehicle selecting module 728 selecting the particular portion (e.g., an area of a bridge) of the scene (e.g., a highway bridge) that includes at least one image (e.g., an image of a car, with license plates and silhouettes of occupants of the car), wherein the selected particular portion is smaller (e.g., occupies less space in memory) than the scene and contains an image object of a car.

[00344] Referring again to Fig. 12C, operation 1004 may include operation 1230 depicting selecting the particular portion of the scene that includes at least one image, wherein a size of the particular portion of the scene is at least partially based on a characteristic of a requesting device. For example, Fig. 7, e.g., Fig. 7C, shows scene particular portion that is smaller than the scene and is size-defined by a characteristic of a requesting device selecting module 730 selecting the particular portion of the scene (e.g., a picture of a lion at a scene of a watering hole) that includes at least one image (e.g., a picture of a lion), wherein a size of the particular portion (e.g., a number of pixels) of the scene (e.g., the watering hole) is at least partially based on a characteristic (e.g., an available bandwidth) of a requesting device (e.g., a smartphone device).

[00345] Referring again to Fig. 12C, operation 1004 may include operation 1232 depicting selecting the particular portion of the scene that includes at least one image, wherein the size of the particular portion of the scene is at least partially based on a screen resolution of the requesting device. For example, Fig. 7, e.g., Fig. 7C, shows scene particular portion that is smaller than the scene and is size-defined by a screen resolution of a requesting device selecting module 732 selecting the particular portion of the scene (e.g., a rock concert) that includes at least one image, wherein the size of the particular portion of the scene is at least partially based on a screen resolution (e.g., 1920 pixels by 1080 pixels, e.g., "HD" quality) of the requesting device (e.g., a smart TV).

[00346] Referring again to Fig. 12C, operation 1004 may include operation 1234 depicting selecting the particular portion of the scene that includes at least one image, wherein the size of the particular portion of the scene is at least partially based on a combined size of screens of at least one requesting device. For example, Fig. 7, e.g., Fig. 7C, shows scene particular portion that is smaller than the scene and is size-defined by a combined screen size of at least one requesting device selecting module 734 selecting the particular portion of the scene (e.g., a historic battlefield) that includes at least one image (e.g., particular areas of the battlefield), wherein the size (e.g., the number of pixels) of the particular portion of the scene is at least partially based on a combined size of screens of at least one requesting device (e.g., if there are five devices that are requesting 2000x1000 size images, then the particular portion may be 10000x1000 pixels (that is, 5x as large), less any overlap, for example).
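
The combined-screen sizing arithmetic in this example might be sketched as follows (side-by-side widths, less any overlap, with height taken from the tallest screen); the helper name combined_portion_size is an assumption made for illustration.

```python
def combined_portion_size(screens, overlap_pixels=0):
    """Size the selected particular portion from the combined screens of
    the requesting devices: widths add side by side, less any overlap,
    while the height follows the tallest screen.

    screens -- list of (width, height) tuples, one per requesting device
    """
    width = sum(w for w, _ in screens) - overlap_pixels * (len(screens) - 1)
    height = max(h for _, h in screens)
    return width, height

# Five devices each requesting 2000x1000 images, with no overlap:
print(combined_portion_size([(2000, 1000)] * 5))   # (10000, 1000)
```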

[00347] Figs. 13A-13D depict various implementations of operation 1006, depicting transmitting only the selected particular portion from the scene to a remote location, according to embodiments. Referring now to Fig. 13A, operation 1006 may include operation 1302 depicting transmitting only the selected particular portion to a remote server. For example, Fig. 8, e.g., Fig. 8A, shows selected particular portion transmitting to a remote server module 802 transmitting only the selected particular portion (e.g., an image of a drummer at a live show) to a remote server (e.g., a remote location that receives requests for various parts of the scene and transmits the requests to the camera array).

[00348] Referring again to Fig. 13A, operation 1302 may include operation 1304 depicting transmitting only the selected particular portion to the remote server that is configured to receive one or more requests for one or more particular images from the scene. For example, Fig. 8, e.g., Fig. 8A, shows selected particular portion transmitting to a remote server configured to receive particular image requests module 804 transmitting only the selected particular portion to the remote server that is configured to receive one or more requests for one or more particular images (e.g., points of interest in a virtual tourism setting) from the scene.

[00349] Referring again to Fig. 13A, operation 1304 may include operation 1306 depicting transmitting only the selected particular portion to the remote server that is configured to receive multiple requests from discrete users for multiple particular images from the scene. For example, Fig. 8, e.g., Fig. 8A, shows selected particular portion transmitting to a remote server configured to receive particular discrete image requests module 806 transmitting only the selected particular portion (the portion that includes the areas designated by discrete users as ones to watch) to the remote server that is configured to receive multiple requests from discrete users for multiple images (e.g., each discrete user may want to view a different player in a game or a different area of a field for a football game) from the scene (e.g., a football game played inside a stadium).

[00350] Referring again to Fig. 13A, operation 1302 may include operation 1308 depicting transmitting only the selected particular portion to the remote server that requested the selected particular portion from the image. For example, Fig. 8, e.g., Fig. 8A, shows selected particular portion transmitting to a remote server that requested a particular image module 808 transmitting only the selected particular portion to the remote server that requested the selected particular portion from the image.

[00351] Referring now to Fig. 13B, operation 1006 may include operation 1310 depicting transmitting only the selected particular portion from the scene at a particular resolution. For example, Fig. 8, e.g., Fig. 8B, shows selected particular portion transmitting at a particular resolution module 810 transmitting only the selected particular portion (e.g., an image that the user selected) from the scene (e.g., a virtual tourism scene of the Eiffel Tower) at a particular resolution (e.g., 1920 x 1080 pixels, e.g., "HD" resolution).

[00352] Referring again to Fig. 13B, operation 1310 may include operation 1312 depicting determining an available bandwidth for transmission to the remote location. For example, Fig. 8, e.g., Fig. 8B, shows available bandwidth to remote location determining module 812 determining an available bandwidth (e.g., how much data can be transmitted over a particular network at a particular time, e.g., whether compensating for conditions or component-based) for transmission to the remote location (e.g., a remote server that handles requests from users).

[00353] Referring again to Fig. 13B, operation 1310 may include operation 1314, which may appear in conjunction with operation 1312, operation 1314 depicting transmitting only the selected particular portion from the scene at the particular resolution that is at least partially based on the determined available bandwidth. For example, Fig. 8, e.g., Fig. 8B, shows selected particular portion transmitting at a particular resolution based on determined available bandwidth module 814 transmitting only the selected particular portion (e.g., an image of a lion) from the scene (e.g., a watering hole) at the particular resolution (e.g., resolution the size of a web browser that the user is using to watch the lion) that is at least partially based on the determined available bandwidth (e.g., as the bandwidth decreases, the resolution also decreases).
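
A minimal sketch of resolution tracking bandwidth as described in this example: pick the largest resolution from a fixed ladder whose coded pixel rate fits the measured bandwidth. The ladder, the frame rate, and the assumed average of 0.2 coded bits per pixel are illustrative assumptions, not part of the disclosed operation.

```python
def pick_resolution(bandwidth_mbps, frame_rate=30, bits_per_pixel=0.2):
    """Choose the largest 16:9 resolution, from a fixed ladder, whose coded
    pixel rate fits the measured bandwidth; as the available bandwidth
    decreases, the transmitted resolution decreases with it.
    """
    ladder = [(3840, 2160), (1920, 1080), (1280, 720), (854, 480), (640, 360)]
    budget = bandwidth_mbps * 1_000_000 / (frame_rate * bits_per_pixel)
    for w, h in ladder:
        if w * h <= budget:          # first rung that fits the budget
            return (w, h)
    return ladder[-1]                # floor: transmit at minimum quality

print(pick_resolution(25))   # -> (1920, 1080)
print(pick_resolution(5))    # -> (854, 480)
```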

[00354] Referring again to Fig. 13B, operation 1310 may include operation 1316 depicting transmitting only the selected particular portion from the scene at the particular resolution, wherein the particular resolution is less than a resolution at which the scene was captured. For example, Fig. 8, e.g., Fig. 8B, shows selected particular portion transmitting at a particular resolution less than a scene resolution module 816 transmitting only the selected particular portion from the scene (e.g., a picture of a busy street from a street view), wherein the particular resolution is less than a resolution at which the scene was captured.

[00355] Referring again to Fig. 13B, operation 1310 may include operation 1318 depicting transmitting only the selected particular portion from the scene at the particular resolution, wherein the particular resolution is less than a resolution at which the selected particular portion was captured. For example, Fig. 8, e.g., Fig. 8B, shows selected particular portion transmitting at a particular resolution less than a captured particular portion resolution module 818 transmitting only the selected particular portion from the scene (e.g., a picture of a busy street from a live street view), wherein the particular resolution is less than a resolution at which the particular portion (e.g., a specific person walking across the street, or a specific car (e.g., a Lamborghini) parked on the corner) was captured.

[00356] Referring now to Fig. 13C, operation 1006 may include operation 1320 depicting transmitting a first segment of the selected particular portion at a first resolution, to the remote location. For example, Fig. 8, e.g., Fig. 8C, shows first segment of selected particular portion transmitting at a first resolution module 820 transmitting a first segment (e.g., an exterior portion) of the selected particular portion (e.g., a view of a street) at a first resolution (e.g., at full high-definition resolution), to the remote location (e.g., to a server that is handling user requests for a virtual reality environment).

[00357] Referring again to Fig. 13C, operation 1006 may include operation 1322, which may appear in conjunction with operation 1320, operation 1322 depicting transmitting a second segment of the selected particular portion at a second resolution that is higher than the first resolution, to the remote location. For example, Fig. 8, e.g., Fig. 8C, shows second segment of selected particular portion transmitting at a second resolution module 822 transmitting a second segment of the selected particular portion (e.g., an interior portion, e.g., the portion that the user selected) at a second resolution that is higher than the first resolution, to the remote location (e.g., a remote server that is handling user requests and specifying to the image device which pixels are to be captured).
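Operations 1320 and 1322, taken together, describe a foveated transmission: a coarse first segment plus a finer user-selected second segment. The following is a minimal sketch under assumed conditions (NumPy arrays, a rectangular region of interest, and an arbitrary 4x decimation step for the first segment); none of these choices are taken from the disclosure.

```python
import numpy as np

# Sketch of two-segment transmission (operations 1320/1322): the user-selected
# region is kept at full capture resolution, while the surrounding segment is
# decimated before transmission. The 4x step is an assumed, illustrative value.

def split_segments(frame: np.ndarray, roi: tuple[int, int, int, int], step: int = 4):
    """Return (coarse_surrounding, fine_roi) for a (H, W, 3) frame.

    roi is (top, left, bottom, right) in pixel coordinates.
    """
    top, left, bottom, right = roi
    fine_roi = frame[top:bottom, left:right].copy()  # second segment, full resolution
    coarse = frame[::step, ::step].copy()            # first segment, decimated
    return coarse, fine_roi


if __name__ == "__main__":
    frame = np.zeros((1080, 1920, 3), dtype=np.uint8)  # stand-in for a captured scene
    coarse, fine = split_segments(frame, roi=(400, 800, 700, 1200))
    print(coarse.shape, fine.shape)  # (270, 480, 3) (300, 400, 3)
```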

[00358] Referring again to Fig. 13C, operation 1320 may include operation 1323 depicting transmitting the first segment of the selected particular portion at the first resolution, wherein the first segment of the selected particular portion surrounds the second segment of the selected particular portion. For example, Fig. 8, e.g., Fig. 8C, shows first segment of selected particular portion that surrounds the second segment transmitting at a first resolution module 823 transmitting the first segment of the selected particular portion (e.g., a portion of the image that surrounds the portion requested by the user, e.g., a portion that an automated calculation has determined is likely to contain the lion from the scene of the watering hole at some point) at the first resolution, wherein the first segment of the selected particular portion surrounds (e.g., is around the second particular portion in at least two opposite directions) the second segment of the selected particular portion (e.g., the lion from the watering hole, e.g., the portion requested by the user).

[00359] Referring again to Fig. 13C, operation 1320 may include operation 1324 depicting transmitting the first segment of the selected particular portion at the first resolution, wherein the first segment of the selected particular portion borders the second segment of the selected particular portion. For example, Fig. 8, e.g., Fig. 8C, shows first segment of selected particular portion that borders the second segment transmitting at a first resolution module 824 transmitting the first segment of the selected particular portion (e.g., an area of a live street view that a person is walking towards) at the first resolution, wherein the first segment of the selected particular portion borders the second segment of the selected particular portion (e.g., the second segment includes the person the user wants to watch, and the first segment, which borders the second segment, is where the device automation is calculating that the person will be, based on a direction the person is moving).

[00360] Referring again to Fig. 13C, operation 1320 may include operation 1326 depicting transmitting the first segment of the selected particular portion at the first resolution, wherein the first segment of the selected particular portion is determined at least in part by a content of the selected particular portion. For example, Fig. 8, e.g., Fig. 8C, shows first segment of selected particular portion that is determined by selected particular portion content transmitting at a first resolution module 826 transmitting the first segment of the selected particular portion (e.g., the segment surrounding a soccer player at a game) at the first resolution, wherein the first segment of the selected particular portion is determined at least in part by a content (e.g., the first segment does not contain the person of interest, e.g., the soccer player) of the selected particular portion.

[00361] Referring again to Fig. 13C, operation 1326 may include operation 1328 depicting transmitting the first segment of the selected particular portion at the first resolution, wherein the first segment of the selected particular portion is an area that does not contain an item of interest. For example, Fig. 8, e.g., Fig. 8C, shows first segment of selected particular portion that is determined as not containing an item of interest transmitting at a first resolution module 828 transmitting the first segment of the selected particular portion at the first resolution, wherein the first segment of the selected particular portion is an area that does not contain an item of interest (e.g., no animal activity at the watering hole in the first segment of the selected particular portion, where the scene is a watering hole).

[00362] Referring again to Fig. 13C, operation 1328 may include operation 1330 depicting transmitting the first segment of the selected particular portion at the first resolution, wherein the first segment of the selected particular portion is an area that does not contain a selected person of interest. For example, Fig. 8, e.g., Fig. 8C, shows first segment of selected particular portion that is determined as not containing a person of interest transmitting at a first resolution module 830 transmitting the first segment of the selected particular portion at the first resolution, wherein the first segment of the selected particular portion is an area that does not contain a selected person of interest (e.g., a user is watching the football game and wants to see the quarterback, and the first segment does not contain the quarterback, but may contain a lineman, a receiver, or a running back).

[00363] Referring again to Fig. 13C, operation 1328 may include operation 1332 depicting transmitting the first segment of the selected particular portion at the first resolution, wherein the first segment of the selected particular portion is an area that does not contain a selected object designated for tracking. For example, Fig. 8, e.g., Fig. 8C, shows first segment of selected particular portion that is determined as not containing an object designated for tracking transmitting at a first resolution module 832 transmitting the first segment of the selected particular portion at the first resolution (e.g., 1920x1080, e.g., HD resolution), wherein the first segment of the selected particular portion is an area that does not contain a selected object designated for tracking (e.g., in a virtual safari, tracking an elephant moving across the plains, and the first segment does not contain the elephant, but may be a prediction about where the elephant will be).
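Operations 1324 through 1332 contemplate placing the lower-resolution first segment where automation predicts a tracked object will move next. A simple constant-velocity extrapolation is sketched below; the disclosure does not specify a prediction method, so the dataclass, the margin value, and the extrapolation itself are illustrative assumptions.

```python
# Sketch of the prediction described around operations 1324-1332: the first
# (lower-resolution) segment is placed where device automation expects a tracked
# object to move next. A constant-velocity extrapolation is assumed here only
# for illustration.

from dataclasses import dataclass


@dataclass
class Box:
    top: int
    left: int
    bottom: int
    right: int


def predict_border_box(prev: Box, curr: Box, margin: int = 32) -> Box:
    """Extrapolate the object's next position and pad it by a margin.

    The returned box is where the bordering first segment would be captured.
    """
    dy = curr.top - prev.top    # vertical velocity, pixels/frame
    dx = curr.left - prev.left  # horizontal velocity, pixels/frame
    return Box(
        top=curr.top + dy - margin,
        left=curr.left + dx - margin,
        bottom=curr.bottom + dy + margin,
        right=curr.right + dx + margin,
    )


if __name__ == "__main__":
    # An elephant's bounding box moving right across the plains.
    earlier = Box(100, 200, 220, 340)
    now = Box(100, 230, 220, 370)
    print(predict_border_box(earlier, now))  # shifted right and padded
```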

[00364] Referring now to Fig. 13D, operation 1322 may include operation 1334 depicting transmitting the second segment of the selected particular portion at the second resolution that is higher than the first resolution, wherein the second segment of the selected particular portion represents an area selected by a user, and the first segment of the selected particular portion represents a border area that borders the area selected by the user. For example, Fig. 8, e.g., Fig. 8C, shows second segment that is a user-selected area of selected particular portion transmitting at a second resolution that is higher than the first resolution module 834 transmitting the second segment of the selected particular portion at the second resolution that is higher than the first resolution, wherein the second segment of the selected particular portion represents an area selected by a user, and the first segment of the selected particular portion represents a border area that borders the area selected by the user.

[00365] Referring again to Fig. 13D, operation 1322 may include operation 1336 depicting transmitting the second segment of the selected particular portion at the second resolution that is higher than the first resolution, wherein the second segment of the selected particular portion is surrounded by the first segment of the selected particular portion. For example, Fig. 8, e.g., Fig. 8C, shows second segment that is surrounded by the first segment transmitting at a second resolution that is higher than the first resolution module 836 transmitting the second segment of the selected particular portion (e.g., a person walking down the street in a live street view) at the second resolution (e.g., 1920x1080 pixel count at 64-bit depth) that is higher than the first resolution (e.g., 1920x1080 pixel count at 8-bit depth), wherein the second segment of the selected particular portion is surrounded by the first segment of the selected particular portion.

[00366] Referring again to Fig. 13D, operation 1322 may include operation 1338 depicting transmitting the second segment of the selected particular portion at the second resolution that is higher than the first resolution, wherein the second segment of the selected particular portion is determined to contain a selected item of interest. For example, Fig. 8, e.g., Fig. 8C, shows second segment of selected particular portion that contains an object of interest transmitting at the second resolution module 838 transmitting the second segment of the selected particular portion (e.g., on a security camera array, the second segment is the portion that contains the person that is walking around the building at night) at the second resolution (e.g., 640x480 pixel resolution) that is higher than the first resolution (e.g., 320x200 pixel resolution), wherein the second segment of the selected particular portion is determined to contain a selected item of interest (e.g., a person or other moving thing (e.g., animal, robot, car) moving around a perimeter of a building).

[00367] Referring again to Fig. 13D, operation 1338 may include operation 1340 depicting transmitting the second segment of the selected particular portion at the second resolution that is higher than the first resolution, wherein the second segment of the selected particular portion is determined to contain a selected football player. For example, Fig. 8, e.g., Fig. 8D, shows second segment of selected particular portion that contains an object of interest that is a football player transmitting at the second resolution module 840 transmitting the second segment of the selected particular portion at the second resolution (e.g., 3840x2160, e.g., "4K resolution") that is higher than the first resolution (e.g., 640x480 resolution), wherein the second segment of the selected particular portion is determined to contain a selected football player.

[00368] Referring again to Fig. 13D, operation 1338 may include operation 1342 depicting transmitting the second segment of the selected particular portion at the second resolution that is higher than the first resolution, wherein the second segment of the selected particular portion is determined to contain a selected landmark. For example, Fig. 8, e.g., Fig. 8D, shows second segment of selected particular portion that contains an object of interest that is a landmark transmitting at the second resolution module 842 transmitting the second segment of the selected particular portion (e.g., a nose of the Sphinx, where the selected particular portion also includes the first segment which is the area around the Sphinx's nose) at the second resolution (e.g., 2560x1440) that is higher than the first resolution (e.g., 1920x1080), wherein the second segment of the selected particular portion is determined to contain a selected landmark (e.g., the Sphinx's nose).

[00369] Referring again to Fig. 13D, operation 1338 may include operation 1344 depicting transmitting the second segment of the selected particular portion at the second resolution that is higher than the first resolution, wherein the second segment of the selected particular portion is determined to contain a selected animal for observation. For example, Fig. 8, e.g., Fig. 8D, shows second segment of selected particular portion that contains an object of interest that is an animal transmitting at the second resolution module 844 transmitting the second segment of the selected particular portion at the second resolution (e.g., 640x480) that is higher than the first resolution (e.g., 320x240), wherein the second segment of the selected particular portion is determined to contain a selected animal (e.g., a panda bear at an oasis) for observation (e.g., a user has requested to see the panda bear).

[00370] Referring again to Fig. 13D, operation 1338 may include operation 1346 depicting transmitting the second segment of the selected particular portion at the second resolution that is higher than the first resolution, wherein the second segment of the selected particular portion is determined to contain a selected object in a dwelling. For example, Fig. 8, e.g., Fig. 8D, shows second segment of selected particular portion that contains an object of interest that is a selected object in a dwelling transmitting at the second resolution module 846 transmitting the second segment of the selected particular portion (e.g., the second segment contains the refrigerator) at the second resolution (e.g., full HD resolution) that is higher than the first resolution (e.g., the first resolution is 60% of the second resolution), wherein the second segment of the selected particular portion is determined to contain a selected object (e.g., a refrigerator) in a dwelling.

[00371] Figs. 14A-14D depict various implementations of operation 1008, depicting de-emphasizing pixels from the scene that are not part of the selected particular portion of the scene, according to embodiments. Referring now to Fig. 14A, operation 1008 may include operation 1402 depicting transmitting only pixels associated with the selected particular portion to the remote location. For example, Fig. 9, e.g., Fig. 9A, shows scene pixel nonselected nontransmitting module 902 transmitting only pixels associated with the selected particular portion to the remote location (e.g., a remote server that handles access requests and determines which portions of the captured scene will be transmitted).
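A minimal sketch of operation 1402 follows, assuming NumPy arrays and a rectangular selection: only the pixels inside the selected portion are serialized for transmission, and everything else never leaves the device. The function name and the byte-oriented encoding are assumptions for illustration.

```python
import numpy as np

# Sketch of operation 1402: only the pixels associated with the selected
# particular portion leave the device. Everything outside the selection is
# simply never serialized. Names and the raw-byte encoding are assumptions.

def payload_for_selection(frame: np.ndarray, sel: tuple[int, int, int, int]) -> bytes:
    """Serialize just the selected rectangle of a (H, W, 3) uint8 frame."""
    top, left, bottom, right = sel
    selected = np.ascontiguousarray(frame[top:bottom, left:right])
    return selected.tobytes()  # pixels outside `sel` are excluded entirely


if __name__ == "__main__":
    frame = np.random.randint(0, 256, size=(1080, 1920, 3), dtype=np.uint8)
    payload = payload_for_selection(frame, sel=(0, 0, 360, 640))
    print(len(payload), "bytes vs", frame.nbytes, "bytes for the whole scene")
```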

[00372] Referring again to Fig. 14A, operation 1008 may include operation 1404 depicting transmitting only the selected particular portion from the scene to a particular device that requested the selected particular portion. For example, Fig. 9, e.g., Fig. 9A, shows scene pixel exclusive pixels selected from the particular portion transmitting to a particular device module 904 transmitting only the selected particular portion (e.g., a person wheeling a shopping cart out of a discount store) from the scene (e.g., an exit area of a discount store at which people's receipts are compared against the items they have purchased) to a particular device (e.g., a television screen of a remote manager who is not on site at the discount store) that requested the selected particular portion.

[00373] Referring again to Fig. 14A, operation 1404 may include operation 1406 depicting transmitting only the selected particular portion from the scene to a particular device operated by a user that requested the selected particular portion. For example, Fig. 9, e.g., Fig. 9A, shows scene pixel exclusive pixels selected from the particular portion transmitting to a particular device that requested the particular portion module 906 transmitting only the selected particular portion (e.g., the picture of the panda from the zoo) from the scene (e.g., the panda area at a zoo) to a particular device (e.g., a smartphone device) operated by a user that requested the selected particular portion (e.g., the panda area).

[00374] Referring again to Fig. 14A, operation 1404 may include operation 1408 depicting transmitting only the selected particular portion from the scene to a particular device that requested the selected particular portion through selection of the particular portion from the scene. For example, Fig. 9, e.g., Fig. 9A, shows scene pixel exclusive pixels selected from the particular portion transmitting to a particular device user that requested the particular portion module 908 transmitting only the selected particular portion (e.g., a drummer from a rock concert) from the scene (e.g., a concert venue where a rock concert is taking place) to a particular device (e.g., a television set in a person's house over a thousand miles away) that requested the selected particular portion (e.g., the drummer from the rock concert) through selection of the particular portion from the scene (e.g., the person watching the television used the remote to navigate through the scene and draw a box around the drummer, then gave the television a verbal command to focus on the drummer, where the verbal command was picked up by the television remote and translated into a command to the television, which transmitted the command to a remote server, which caused the drummer of the rock band to be selected as the selected particular portion).

[00375] Referring again to Fig. 14A, operation 1008 may include operation 1410 depicting deleting pixels from the scene that are not part of the selected particular portion of the scene. For example, Fig. 9, e.g., Fig. 9A, shows scene nonselected pixel deleting module 910 deleting (e.g., not storing in permanent memory) pixels from the scene that are not part of the selected particular portion of the scene. It is noted here that "deleting" does not necessarily imply an active "deletion" of the pixels out of memory, but rather may include allowing the pixels to be overwritten or otherwise not retained, without an active step.
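The passive, overwrite-based "deleting" described in operation 1410 can be illustrated with a fixed-size ring buffer, sketched below. The eight-frame depth and all names are assumed values chosen only to show the idea that old non-selected data is displaced by new data without any active erase step.

```python
from collections import deque

# Sketch of the passive "deleting" in operation 1410: non-selected pixel data
# is written into a small fixed-size ring buffer and is simply overwritten by
# newer frames rather than being actively erased. The eight-frame depth is an
# assumed, illustrative value.

ring: deque = deque(maxlen=8)  # older entries fall off automatically


def capture_frame(frame_id: int, nonselected_pixels: bytes) -> None:
    """Append the latest non-selected data; anything older than 8 frames is lost."""
    ring.append((frame_id, nonselected_pixels))


if __name__ == "__main__":
    for i in range(20):
        capture_frame(i, b"...pixel payload...")
    print([frame_id for frame_id, _ in ring])  # only frames 12-19 remain
```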

[00376] Referring now to Fig. 14B, operation 1008 may include operation 1412 depicting indicating that pixels from the scene that are not part of the selected particular portion of the scene are not transmitted with the selected particular portion. For example, Fig. 9, e.g., Fig. 9B, shows scene nonselected pixels indicating as nontransmitted module 912 indicating that pixels from the scene that are not part of the selected particular portion of the scene (e.g., a concert venue during a show) are not transmitted with the selected particular portion (e.g., a selection of the drummer in a band).

[00377] Referring again to Fig. 14B, operation 1412 may include operation 1414 depicting appending data to pixels from the scene that are not part of the selected portion of the scene, said appended data configured to indicate non-transmission of the pixels to which data was appended. For example, Fig. 9, e.g., Fig. 9B, shows scene nonselected pixels appending data that indicates nontransmission module 914 appending data to pixels (e.g., a "flag," e.g., setting a bit to zero rather than one) from the scene that are not part of the selected portion of the scene, said appended data (e.g., the bit is set to zero) configured to indicate non-transmission of the pixels to which data was appended (e.g., only those pixels to which the appended bit is "1" will be transmitted).
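One possible realization of the per-pixel flag in operation 1414 is a one-bit-per-pixel transmission mask, sketched below under assumed conventions (NumPy boolean arrays standing in for the appended bits); the mask layout is an illustrative choice, not the disclosed format.

```python
import numpy as np

# Sketch of the flagging scheme in operation 1414: rather than moving pixels,
# the device keeps a one-bit-per-pixel mask, and only pixels whose bit is set
# to 1 are transmitted. The mask representation here is an assumption.

def build_transmit_mask(shape: tuple[int, int], sel: tuple[int, int, int, int]) -> np.ndarray:
    """Bit set to 1 inside the selected portion, 0 elsewhere (non-transmission)."""
    mask = np.zeros(shape, dtype=bool)
    top, left, bottom, right = sel
    mask[top:bottom, left:right] = True
    return mask


def transmit(frame: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Gather only the flagged pixels, as a flat (N, 3) array of RGB values."""
    return frame[mask]


if __name__ == "__main__":
    frame = np.random.randint(0, 256, size=(480, 640, 3), dtype=np.uint8)
    mask = build_transmit_mask((480, 640), sel=(100, 100, 200, 300))
    sent = transmit(frame, mask)
    print(sent.shape)  # (20000, 3): 100 x 200 flagged pixels
```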

[00378] Referring again to Fig. 14B, operation 1008 may include operation 1416 depicting discarding pixels from the scene that are not part of the selected particular portion of the scene. For example, Fig. 9, e.g., Fig. 9B, shows scene nonselected pixels discarding module 916 discarding (e.g., not making any particular attempt to save in a manner which would allow ease of retrieval) pixels from the scene (e.g., a scene of a football stadium) that are not part of the selected particular portion of the scene (e.g., the quarterback has been selected).

[00379] Referring again to Fig. 14B, operation 1008 may include operation 1418 depicting denying retention of pixels from the scene that are not part of the selected particular portion of the scene. For example, Fig. 9, e.g., Fig. 9B, shows scene nonselected retention preventing module 918 denying retention (e.g., preventing transmission to a remote server or long term storage) of pixels from the scene (e.g., a scene of a desert oasis) that are not part of the selected particular portion of the scene (e.g., are not being "watched" by any users or that have not been instructed to be retained).

[00380] Referring now to Fig. 14C, operation 1008 may include operation 1420 depicting retaining a subset of pixels from the scene that are not part of the selected particular portion of the scene. For example, Fig. 9, e.g., Fig. 9C, shows scene pixel subset retaining module 920 retaining a subset of pixels (e.g., edge pixels and 25% sampling inside color blocks) from the scene (e.g., a virtual tourism of the Sphinx in Egypt) that are not part of the selected particular portion of the scene (e.g., the nose of the Sphinx).

[00381] Referring again to Fig. 14C, operation 1420 may include operation 1422 depicting retaining ten percent of the pixels from the scene that are not part of the selected particular portion of the scene. For example, Fig. 9, e.g., Fig. 9C, shows scene pixel ten percent subset retaining module 922 retaining ten percent (e.g., one in ten) of the pixels from the scene (e.g., a scene of an interior of a grocery store) that are not part of the selected particular portion of the scene (e.g., a specific shopper whom the owner has identified as a potential shoplifter).
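Operations 1420 and 1422 might be sketched as follows, assuming a uniform random ten-percent sample of the pixels outside the selection; the disclosure also contemplates structured subsets (e.g., edge pixels), which would simply use a different mask. All names and the sampling strategy here are assumptions.

```python
import numpy as np

# Sketch of operations 1420/1422: retain roughly one in ten of the pixels that
# fall outside the selected particular portion. A uniform random 10% sample is
# assumed; a structured subset (e.g., edge pixels) would use a different mask.

def retain_ten_percent(frame: np.ndarray, sel: tuple[int, int, int, int],
                       rng: np.random.Generator) -> np.ndarray:
    """Zero out ~90% of the pixels outside `sel`, keeping the selection intact."""
    top, left, bottom, right = sel
    keep = rng.random(frame.shape[:2]) < 0.10  # ~10% of pixels survive
    keep[top:bottom, left:right] = True        # the selection is always kept
    return frame * keep[..., np.newaxis]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frame = np.full((480, 640, 3), 255, dtype=np.uint8)
    thinned = retain_ten_percent(frame, sel=(0, 0, 100, 100), rng=rng)
    outside = thinned[100:, 100:]
    print(f"retained outside selection: {np.count_nonzero(outside) / outside.size:.2%}")
```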

[00382] Referring again to Fig. 14C, operation 1420 may include operation 1424 depicting retaining pixels from the scene that have been identified as part of a targeted object that are not part of the selected particular portion of the scene. For example, Fig. 9, e.g., Fig. 9C, shows scene pixel targeted object subset retaining module 924 retaining pixels from the scene that have been identified as part of a targeted object (e.g., a particular person of interest, e.g., a popular football player on a football field, or a person flagged by the government as a spy outside of a warehouse) that are not part of the selected particular portion of the scene (e.g., a person using the camera array is not watching that particular person).

[00383] Referring again to Fig. 14C, operation 1424 may include operation 1426 depicting retaining pixels from the scene that have been identified as a scenic landmark of interest that is not part of the selected particular portion of the scene. For example, Fig. 9, e.g., Fig. 9C, shows scene pixel targeted scenic landmark object subset retaining module 926 retaining pixels from the scene that have been identified as a scenic landmark of interest that is not part of the selected particular portion of the scene (e.g., a live street view of Yellowstone National Park).

[00384] Referring again to Fig. 14C, operation 1424 may include operation 1428 depicting retaining pixels from the scene that have been identified as part of a targeted object through automated pattern recognition that are not part of the selected particular portion of the scene. For example, Fig. 9, e.g., Fig. 9C, shows scene pixel targeted automation identified object subset retaining module 928 retaining pixels from the scene (e.g., a virtual tourism scene through a national monument) that have been identified as part of a targeted object (e.g., part of the monument) through automated pattern recognition (e.g., automated analysis performed on the image to recognize known shapes and/or patterns, including faces, bodies, persons, objects, cars, structures, tools, etc.) that are not part of the selected particular portion (e.g., the selected particular portion was a different part of the monument) of the scene.

[00385] Referring now to Fig. 14D, operation 1008 may include operation 1430 depicting storing pixels from the scene that are not part of the selected particular portion of the scene in a separate storage. For example, Fig. 9, e.g., Fig. 9D, shows scene pixel subset storing in separate storage module 930 storing pixels from the scene that are not part of the selected particular portion of the scene (e.g., pixels that are not part of the areas selected by the user for transmission to the user screen) in a separate storage (e.g., a local storage attached to a device that houses the array of image sensors).

[00386] Referring again to Fig. 14D, operation 1430 may include operation 1432 depicting storing pixels from the scene that are not part of the selected particular portion of the scene in a separate storage that is local to the array of more than one image sensor. For example, Fig. 9, e.g., Fig. 9D, shows scene pixel subset storing in separate local storage module 932 storing pixels from the scene (e.g., a live street view of a busy intersection in Washington, DC) that are not part of the selected particular portion of the scene (e.g., that are not part of the selected person or area of interest) in a separate storage (e.g., a local memory, e.g., a hard drive that is connected to a processor that receives data from the array of image sensors) that is local to the array of more than one image sensor (e.g., is in the same general vicinity, without specifying a specific type of connection).

[00387] Referring again to Fig. 14D, operation 1430 may include operation 1434 depicting storing pixels from the scene that are not part of the selected particular portion of the scene in a separate storage, wherein pixels stored in the separate storage are configured to be transmitted to the remote location separately from the selected particular portion of the scene. For example, Fig. 9, e.g., Fig. 9D, shows scene pixel subset storing in separate storage for separate transmission module 934 storing pixels from the scene that are not part of the selected particular portion of the scene in a separate storage, wherein pixels stored in the separate storage are configured to be transmitted to the remote location (e.g., a remote server) separately from the selected particular portion of the scene (e.g., at a different time, or through a different communications channel or medium, or at a different bandwidth, transmission speed, error checking rate, etc.).

[00388] Referring again to Fig. 14D, operation 1434 may include operation 1436 depicting storing pixels from the scene that are not part of the selected particular portion of the scene in a separate storage, wherein pixels stored in the separate storage are configured to be transmitted to the remote location at a time when there are no selected particular portions to be transmitted. For example, Fig. 9, e.g., Fig. 9D, shows scene pixel subset storing in separate storage for separate transmission at off-peak time module 936 storing pixels from the scene (e.g., a waterfall scene) that are not part of the selected particular portion of the scene (e.g., parts that are not the selected animals drinking at the bottom of the waterfall) in a separate storage (e.g., a local memory, e.g., a solid state memory that is local to the array of image sensors), wherein pixels stored in the separate storage (e.g., the solid state memory) are configured to be transmitted to the remote location (e.g., a remote server, which will determine if the pixels have use for training pattern detection algorithms or using as caching copies, or determining hue and saturation values for various parts of the scene) at a time when there are no selected particular portions to be transmitted.

[00389] Referring again to Fig. 14D, operation 1434 may include operation 1438 depicting storing pixels from the scene that are not part of the selected particular portion of the scene in a separate storage, wherein pixels stored in the separate storage are configured to be separately transmitted to the remote location based on assigning a lower priority to the stored pixels. For example, Fig. 9, e.g., Fig. 9D, shows scene pixel subset storing in separate storage for separate transmission as lower-priority data module 938 storing (e.g., in a separate storage, e.g., in a local disk drive or memory that is local to the array of image sensors) pixels from the scene that are not part of the selected particular portion (e.g., not requested to be transmitted to an external location) of the scene in a separate storage (e.g., a local disk drive or other memory that is local to the array of image sensors), wherein pixels stored in the separate storage are configured to be separately transmitted to the remote location (e.g., to a server, which will analyze the content and determine if any of the content is useful for caching or image analysis, e.g., pattern recognition, training automated recognition algorithms, etc.) based on assigning a lower priority to the stored pixels (e.g., the stored pixels will be sent when there is available bandwidth to the remote location that is not being used to transmit requested portions, e.g., the particular portion).
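The deferred, lower-priority transmission of operations 1436 and 1438 resembles a two-level priority queue, sketched below with Python's heapq; the priority values, the queue discipline, and all names are assumptions introduced for illustration.

```python
import heapq
import itertools

# Sketch of lower-priority transmission (operations 1436/1438): requested
# portions are enqueued at high priority, stored non-selected pixels at low
# priority, and the sender drains the queue in priority order so stored pixels
# only move when nothing requested is waiting. Priority values are assumptions.

HIGH, LOW = 0, 1
_counter = itertools.count()  # tie-breaker keeps FIFO order within a priority


def enqueue(queue: list, priority: int, payload: bytes) -> None:
    heapq.heappush(queue, (priority, next(_counter), payload))


def drain(queue: list) -> list:
    """Transmit everything, highest priority first (simulated here as a list)."""
    sent = []
    while queue:
        _, _, payload = heapq.heappop(queue)
        sent.append(payload)
    return sent


if __name__ == "__main__":
    outbound: list = []
    enqueue(outbound, LOW, b"stored non-selected pixels, batch 1")
    enqueue(outbound, HIGH, b"selected particular portion, frame 42")
    enqueue(outbound, LOW, b"stored non-selected pixels, batch 2")
    for item in drain(outbound):
        print(item.decode())  # the requested portion is sent first
```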

[00390] It is noted that, in the foregoing examples, various concrete, real-world examples of terms that appear in the following claims are described. These examples are meant to be exemplary only and non-limiting. Moreover, any example of any term may be combined or added to any example of the same term in a different place, or a different term in a different place, unless context dictates otherwise.

[00391] All of the above U.S. patents, U.S. patent application publications, U.S. patent applications, foreign patents, foreign patent applications and non-patent publications referred to in this specification and/or listed in any Application Data Sheet, are incorporated herein by reference, to the extent not inconsistent herewith.

[00392] The foregoing detailed description has set forth various embodiments of the devices and/or processes via the use of block diagrams, flowcharts, and/or examples. Insofar as such block diagrams, flowcharts, and/or examples contain one or more functions and/or operations, it will be understood by those within the art that each function and/or operation within such block diagrams, flowcharts, or examples can be implemented, individually and/or collectively, by a wide range of hardware, software (e.g., a high-level computer program serving as a hardware specification), firmware, or virtually any combination thereof, limited to patentable subject matter under 35 U.S.C. 101. In an embodiment, several portions of the subject matter described herein may be implemented via Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs), digital signal processors (DSPs), or other integrated formats. However, those skilled in the art will recognize that some aspects of the embodiments disclosed herein, in whole or in part, can be equivalently implemented in integrated circuits, as one or more computer programs running on one or more computers (e.g., as one or more programs running on one or more computer systems), as one or more programs running on one or more processors (e.g., as one or more programs running on one or more microprocessors), as firmware, or as virtually any combination thereof, limited to patentable subject matter under 35 U.S.C. 101, and that designing the circuitry and/or writing the code for the software (e.g., a high-level computer program serving as a hardware specification) and/or firmware would be well within the skill of one of skill in the art in light of this disclosure. In addition, those skilled in the art will appreciate that the mechanisms of the subject matter described herein are capable of being distributed as a program product in a variety of forms, and that an illustrative embodiment of the subject matter described herein applies regardless of the particular type of signal bearing medium used to actually carry out the distribution. Examples of a signal bearing medium include, but are not limited to, the following: a recordable type medium such as a floppy disk, a hard disk drive, a Compact Disc (CD), a Digital Video Disk (DVD), a digital tape, a computer memory, etc.; and a transmission type medium such as a digital and/or an analog communication medium (e.g., a fiber optic cable, a waveguide, a wired communications link, a wireless communication link (e.g., transmitter, receiver, transmission logic, reception logic, etc.), etc.).

[00393] While particular aspects of the present subject matter described herein have been shown and described, it will be apparent to those skilled in the art that, based upon the teachings herein, changes and modifications may be made without departing from the subject matter described herein and its broader aspects and, therefore, the appended claims are to encompass within their scope all such changes and modifications as are within the true spirit and scope of the subject matter described herein. It will be understood by those within the art that, in general, terms used herein, and especially in the appended claims (e.g., bodies of the appended claims) are generally intended as "open" terms (e.g., the term "including" should be interpreted as "including but not limited to," the term "having" should be interpreted as "having at least," the term "includes" should be interpreted as "includes but is not limited to," etc.).

[00394] It will be further understood by those within the art that if a specific number of an introduced claim recitation is intended, such an intent will be explicitly recited in the claim, and in the absence of such recitation no such intent is present. For example, as an aid to understanding, the following appended claims may contain usage of the introductory phrases "at least one" and "one or more" to introduce claim recitations. However, the use of such phrases should not be construed to imply that the introduction of a claim recitation by the indefinite articles "a" or "an" limits any particular claim containing such introduced claim recitation to claims containing only one such recitation, even when the same claim includes the introductory phrases "one or more" or "at least one" and indefinite articles such as "a" or "an" (e.g., "a" and/or "an" should typically be interpreted to mean "at least one" or "one or more"); the same holds true for the use of definite articles used to introduce claim recitations. In addition, even if a specific number of an introduced claim recitation is explicitly recited, those skilled in the art will recognize that such recitation should typically be interpreted to mean at least the recited number (e.g., the bare recitation of "two recitations," without other modifiers, typically means at least two recitations, or two or more recitations).

[00395] Furthermore, in those instances where a convention analogous to "at least one of A, B, and C, etc." is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B, and C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.). In those instances where a convention analogous to "at least one of A, B, or C, etc." is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B, or C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.). It will be further understood by those within the art that typically a disjunctive word and/or phrase presenting two or more alternative terms, whether in the description, claims, or drawings, should be understood to contemplate the possibilities of including one of the terms, either of the terms, or both terms unless context dictates otherwise. For example, the phrase "A or B" will be typically understood to include the possibilities of "A" or "B" or "A and B."

[00396] With respect to the appended claims, those skilled in the art will appreciate that recited operations therein may generally be performed in any order. Also, although various operational flows are presented in a sequence(s), it should be understood that the various operations may be performed in other orders than those which are illustrated, or may be performed concurrently. Examples of such alternate orderings may include overlapping, interleaved, interrupted, reordered, incremental, preparatory, supplemental, simultaneous, reverse, or other variant orderings, unless context dictates otherwise. Furthermore, terms like "responsive to," "related to," or other past-tense adjectives are generally not intended to exclude such variants, unless context dictates otherwise.

[00397] This application may make reference to one or more trademarks, e.g., a word, letter, symbol, or device adopted by one manufacturer or merchant and used to identify and/or distinguish his or her product from those of others. Trademark names used herein are set forth in such language that makes clear their identity, that distinguishes them from common descriptive nouns, that have fixed and definite meanings, or, in many if not all cases, are accompanied by other specific identification using terms not covered by trademark. In addition, trademark names used herein have meanings that are well-known and defined in the literature, or do not refer to products or compounds for which knowledge of one or more trade secrets is required in order to divine their meaning. All trademarks referenced in this application are the property of their respective owners, and the appearance of one or more trademarks in this application does not diminish or otherwise adversely affect the validity of the one or more trademarks. All trademarks, registered or unregistered, that appear in this application are assumed to include a proper trademark symbol, e.g., the circle R or bracketed capitalization (e.g., [trademark name]), even when such trademark symbol does not explicitly appear next to the trademark. To the extent a trademark is used in a descriptive manner to refer to a product or process, that trademark should be interpreted to represent the corresponding product or process as of the date of the filing of this patent application.

[00398] Throughout this application, the terms "in an embodiment," "in one embodiment," "in some embodiments," "in several embodiments," "in at least one embodiment," "in various embodiments," and the like, may be used. Each of these terms, and all such similar terms, should be construed as "in at least one embodiment, and possibly but not necessarily all embodiments," unless explicitly stated otherwise. Specifically, unless explicitly stated otherwise, the intent of phrases like these is to provide non-exclusive and non-limiting examples of implementations of the invention. The mere statement that one, some, or many embodiments include one or more things or have one or more features, does not imply that all embodiments include one or more things or have one or more features, but also does not imply that such embodiments must exist. It is a mere indicator of an example and should not be interpreted otherwise, unless explicitly stated as such.

[00399] Those skilled in the art will appreciate that the foregoing specific exemplary processes and/or devices and/or technologies are representative of more general processes and/or devices and/or technologies taught elsewhere herein, such as in the claims filed herewith and/or elsewhere in the present application.

STRUCTURED DISCLOSURE DIRECTED TOWARD ONE OF SKILL IN THE ART [START]

Those skilled in the art will appreciate the indentations and cross-referencing of the following as instructive. In general, each immediately following numbered paragraph in this "Structured Disclosure Directed Toward One Of Skill In The Art Section" should be treated as preceded by the word "Clause," as can be seen by the Clauses that cross-reference against them, and each cross-referencing paragraph is to be understood as permissively incorporating its parent clause(s) by reference.

As used herein, and in particular in the following thing/operation disclosures, the word "comprising" can generally be interpreted as "including but not limited to":

1. A computationally-implemented thing/operation disclosure, comprising:

capturing a scene that includes one or more images, through use of an array of more than one image sensor;

selecting a particular portion of the scene that includes at least one image, wherein the selected particular portion is smaller than the scene;

transmitting only the selected particular portion from the scene to a remote location; and

de-emphasizing pixels from the scene that are not part of the selected particular portion of the scene.

2. The computationally-implemented thing/operation disclosure of clause 1, wherein said capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

capturing the scene that includes the one or more images, through use of an array of image sensors.

3. The computationally-implemented thing/operation disclosure of clause 1, wherein said capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

capturing the scene that includes the one or more images, through use of two image sensors arranged side by side and angled toward each other.

4. The computationally-implemented thing/operation disclosure of clause 1, wherein said capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

capturing the scene that includes the one or more images, through use of the array of image sensors arranged in a grid.

5. The computationally-implemented thing/operation disclosure of clause 1, wherein said capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

capturing the scene that includes the one or more images, through use of the array of image sensors arranged in a line.

6. The computationally-implemented thing/operation disclosure of clause 5, wherein said capturing the scene that includes the one or more images, through use of the array of image sensors arranged in a line comprises:

capturing the scene that includes the one or more images, through use of the array of image sensors arranged in a line such that a field of view is greater than 120 degrees.

7. The computationally-implemented thing/operation disclosure of clause 5, wherein said capturing the scene that includes the one or more images, through use of the array of image sensors arranged in a line comprises:

capturing the scene that includes the one or more images, through use of the array of image sensors arranged in a line such that a field of view is 180 degrees.

8. The computationally-implemented thing/operation disclosure of clause 1, wherein said capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

capturing the scene that includes one or more images, through use of an array of more than one stationary image sensor.

9. The computationally-implemented thing/operation disclosure of clause 1, wherein said capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

capturing the scene that includes one or more images, through use of an array of more than one static image sensor.

10. The computationally-implemented thing/operation disclosure of clause 9, wherein said capturing the scene that includes one or more images, through use of an array of more than one static image sensor comprises:

capturing the scene that includes one or more images, through use of the array of more than one image sensor that has a fixed focal length and a fixed field of view.

11. The computationally-implemented thing/operation disclosure of clause 1, wherein said capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

capturing the scene that includes the one or more images, through use of an array of more than one image sensor mounted in a stationary location.

12. The computationally-implemented thing/operation disclosure of clause 1, wherein said capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

capturing the scene that includes one or more images, through use of an array of image sensors mounted on a movable platform.

13. The computationally-implemented thing/operation disclosure of clause 12, wherein said capturing the scene that includes one or more images, through use of an array of image sensors mounted on a movable platform comprises:

capturing the scene that includes one or more images, through use of an array of image sensors mounted on an unmanned aerial vehicle.

14. The computationally-implemented thing/operation disclosure of clause 1, wherein said capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

capturing the scene that includes one or more images, through use of an array of image sensors mounted on a satellite.

15. The computationally-implemented thing/operation disclosure of clause 1, wherein said capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

capturing the scene that includes one or more images, through use of an array of more than one image sensor, wherein the more than one image sensors each capture an image that represents a portion of the scene.

16. The computationally-implemented thing/operation disclosure of clause 15, wherein said capturing the scene that includes one or more images, through use of an array of more than one image sensor, wherein the more than one image sensors each capture an image that represents a portion of the scene comprises:

capturing the scene that includes one or more images, through use of the array of more than one image sensor, wherein the more than one image sensors each capture an image that represents a portion of the scene, and wherein the images are stitched together.

17. The computationally-implemented thing/operation disclosure of clause 1, wherein said capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

acquiring an image from each image sensor of the more than one image sensors; and

combining the acquired images from the more than one image sensors into the scene.

18. The computationally-implemented thing/operation disclosure of clause 17, wherein said acquiring an image from each image sensor of the more than one image sensors comprises:

acquiring images from each image sensor of the more than one image sensors, wherein each acquired image at least partially overlaps at least one other image.

19. The computationally-implemented thing/operation disclosure of clause 18, wherein said acquiring images from each image sensor of the more than one image sensors, wherein each acquired image at least partially overlaps at least one other image comprises:

acquiring the image from each image sensor of the more than one image sensors, wherein each acquired image at least partially overlaps at least one other image captured by an adjacent image sensor.
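By way of illustration of clauses 16 through 19, the following sketch assembles a scene from a line of image sensors whose adjacent tiles overlap by a known, fixed number of columns. A practical system would register and blend overlapping images; this fixed-overlap paste, and every constant in it, is an assumption for illustration only.

```python
import numpy as np

# Minimal sketch of the stitching in clauses 16-19: adjacent sensors on a line
# produce images that overlap by a known, fixed number of columns, and the
# scene is assembled by pasting each tile after discarding the overlapped part.

def stitch_line(tiles: list, overlap: int) -> np.ndarray:
    """Combine (H, W, 3) tiles captured left-to-right into one scene."""
    scene = tiles[0]
    for tile in tiles[1:]:
        scene = np.concatenate([scene, tile[:, overlap:]], axis=1)
    return scene


if __name__ == "__main__":
    sensors = [np.full((480, 640, 3), i * 40, dtype=np.uint8) for i in range(4)]
    scene = stitch_line(sensors, overlap=64)
    print(scene.shape)  # (480, 2368, 3): 640 + 3 * (640 - 64)
```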

20. The computationally-implemented thing/operation disclosure of clause 1, wherein said capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

capturing the scene that includes one or more images, through use of the array of more than one image sensor, wherein a size of the scene is greater than a capacity to transmit the scene.

21. The computationally-implemented thing/operation disclosure of clause 20, wherein said capturing the scene that includes one or more images, through use of the array of more than one image sensor, wherein a size of the scene is greater than a capacity to transmit the scene comprises:

capturing the scene that includes one or more images, through use of the array of more than one image sensor, wherein an amount of image data captured exceeds a bandwidth for transmitting the image data to a remote location.

22. The computationally-implemented thing/operation disclosure of clause 21, wherein said capturing the scene that includes one or more images, through use of the array of more than one image sensor, wherein an amount of image data captured exceeds a bandwidth for transmitting the image data to a remote location comprises:

capturing the scene that includes one or more images, through use of the array of more than one image sensor, wherein the amount of image data captured exceeds the bandwidth for transmitting the image data to the remote location by a factor of ten.
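The "factor of ten" of clause 22 can be made concrete with back-of-the-envelope arithmetic, sketched below. Every number (sensor count, per-sensor resolution, frame rate, link speed) is a hypothetical chosen only to illustrate the order-of-magnitude gap between captured data and available bandwidth.

```python
# Back-of-the-envelope sketch of clause 22's "factor of ten": a modest sensor
# array's raw output versus an uplink. Every number below is a hypothetical
# assumption chosen only to illustrate the order-of-magnitude gap.

SENSORS = 25                 # 5 x 5 grid of image sensors
WIDTH, HEIGHT = 1920, 1080   # per-sensor resolution
BYTES_PER_PIXEL = 3          # 24-bit color
FPS = 10                     # capture rate

captured_bps = SENSORS * WIDTH * HEIGHT * BYTES_PER_PIXEL * FPS * 8
uplink_bps = 1_000_000_000   # a 1 Gbit/s link

print(f"captured: {captured_bps / 1e9:.1f} Gbit/s")
print(f"uplink:   {uplink_bps / 1e9:.1f} Gbit/s")
print(f"ratio:    {captured_bps / uplink_bps:.1f}x")  # roughly a factor of ten
```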

23. The computationally-implemented thing/operation disclosure of clause 1, wherein said capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

capturing a scene of a tourist destination that includes one or more images, through use of an array of more than one image sensor.

24. The computationally-implemented thing/operation disclosure of clause 1, wherein said capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

capturing a scene of a highway across a bridge that includes one or more images, through use of an array of more than one image sensor, wherein the one or more images include one or more images of cars crossing the highway across the bridge.

25. The computationally-implemented thing/operation disclosure of clause 1, wherein said capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

capturing a scene of a home that includes one or more images, through use of an array of more than one image sensor, wherein the one or more images include an image of an appliance in the home.

26. The computationally-implemented thing/operation disclosure of clause 1, wherein said capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

capturing the scene that includes one or more images, through use of a grouping of more than one image sensor.

27. The computationally-implemented thing/operation disclosure of clause 1, wherein said capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

capturing the scene that includes one or more images, through use of multiple image sensors whose data is directed to a common source.

28. The computationally-implemented thing/operation disclosure of clause 1, wherein said capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

capturing the scene that includes one or more images, through use of an array of more than one image sensor, the more than one image sensor including one or more of a charge-coupled device and a complementary metal-oxide-semiconductor device.

29. The computationally-implemented thing/operation disclosure of clause 1, wherein said capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

capturing the scene that includes image data and sound data, through use of an array of more than one image sensor, wherein the array of more than one image sensor is configured to capture image data and sound data.

30. The computationally-implemented thing/operation disclosure of clause 29, wherein said capturing the scene that includes image data and sound data, through use of an array of more than one image sensor, wherein the array of more than one image sensor is configured to capture image data and sound data comprises:

capturing the scene that includes image data and sound data, through use of the array of more than one image sensor, wherein the array of more than one image sensor includes image sensors configured to capture image data and microphones configured to capture sound data.

31. The computationally-implemented thing/operation disclosure of clause 1, wherein said capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

capturing the scene that includes soundwave image data, through use of the array of more than one soundwave image sensor that captures soundwave data.

32. The computationally-implemented thing/operation disclosure of clause 1, wherein said capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

capturing a scene that includes video data, through use of an array of more than one video capture device.

33. The computationally-implemented thing/operation disclosure of clause 1, wherein said selecting a particular portion of the scene that includes at least one image, wherein the selected particular portion is smaller than the scene comprises:

selecting the particular image from the scene, wherein the selected particular image represents the request for the image that is smaller than the entire scene.

34. The computationally-implemented thing/operation disclosure of clause 33, wherein said selecting the particular image from the scene, wherein the selected particular image represents the request for the image that is smaller than the entire scene comprises:

selecting the particular image from the scene, wherein the selected particular image represents a particular remote user-requested image that is smaller than the scene.

35. The computationally-implemented thing/operation disclosure of clause 1, wherein said selecting a particular portion of the scene that includes at least one image, wherein the selected particular portion is smaller than the scene comprises:

selecting the particular portion of the scene that includes a requested image, wherein the selected particular portion is smaller than the scene.

36. The computationally-implemented thing/operation disclosure of clause 35, wherein said selecting the particular portion of the scene that includes a requested image, wherein the selected particular portion is smaller than the scene comprises:

selecting the particular portion of the scene that includes an image requested by a remote operator of the array of more than one image sensors, wherein the selected particular portion is smaller than the scene.

37. The computationally-implemented thing/operation disclosure of clause 1, wherein said selecting a particular portion of the scene that includes at least one image, wherein the selected particular portion is smaller than the scene comprises:

receiving a request for a particular image; and

selecting the particular image from the scene, wherein the particular image is smaller than the scene.

38. The computationally-implemented thing/operation disclosure of clause 1, wherein said selecting a particular portion of the scene that includes at least one image, wherein the selected particular portion is smaller than the scene comprises:

receiving a first request for a first particular image and a second request for a second particular image; and

selecting the particular portion of the scene that includes the first particular image and the second particular image, wherein the particular portion of the scene is smaller than the scene.

39. The computationally-implemented thing/operation disclosure of clause 38, wherein said receiving a first request for a first particular image and a second request for a second particular image comprises:

receiving the first request for the first particular image and the second request for the second particular image, wherein the first particular image and the second particular image do not overlap.

40. The computationally-implemented thing/operation disclosure of clause 38, wherein said receiving a first request for a first particular image and a second request for a second particular image comprises:

receiving a first request for a first particular image and a second request for a second particular image, wherein the first particular image and the second particular image overlap at an overlapping portion.

41. The computationally-implemented method of clause 40, wherein said receiving a first request for a first particular image and a second request for a second particular image, wherein the first particular image and the second particular image overlap at an overlapping portion comprises:

receiving the first request for the first particular image and the second request for the second particular image, wherein the first particular image and the second particular image overlap at the overlapping portion, and wherein the overlapping portion is configured to be transmitted once only.
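Editor's note (illustrative only, not part of the claims): the once-only transmission of an overlapping portion recited in clauses 40 and 41 could be planned as in the following Python sketch. The (x, y, width, height) rectangle representation and all function names are assumptions of the sketch, not taken from the disclosure.

def intersect(a, b):
    # Rectangles are (x, y, width, height); return the overlap or None.
    x1 = max(a[0], b[0]); y1 = max(a[1], b[1])
    x2 = min(a[0] + a[2], b[0] + b[2]); y2 = min(a[1] + a[3], b[1] + b[3])
    return (x1, y1, x2 - x1, y2 - y1) if x2 > x1 and y2 > y1 else None

def subtract(rect, hole):
    # Decompose rect-minus-hole into at most four bands (hole lies within rect).
    x, y, w, h = rect
    hx, hy, hw, hh = hole
    bands = [(x, y, w, hy - y),                       # above the hole
             (x, hy + hh, w, (y + h) - (hy + hh)),    # below the hole
             (x, hy, hx - x, hh),                     # left of the hole
             (hx + hw, hy, (x + w) - (hx + hw), hh)]  # right of the hole
    return [b for b in bands if b[2] > 0 and b[3] > 0]

def plan_transmission(first, second):
    # Send the first request whole; send only the remainder of the second,
    # so the overlapping portion is transmitted once only (clause 41).
    overlap = intersect(first, second)
    return [first] + (subtract(second, overlap) if overlap else [second])

print(plan_transmission((0, 0, 100, 100), (80, 80, 100, 100)))

The second request is decomposed into the bands that fall outside the first, so the shared pixels cross the link exactly once.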

42. The computationally-implemented method of clause 1, wherein said selecting a particular portion of the scene that includes at least one image, wherein the selected particular portion is smaller than the scene comprises:

selecting the particular portion of the scene that includes at least one image, wherein the selected particular portion is smaller than the scene and contains a particular image object.

43. The computationally-implemented method of clause 42, wherein said selecting the particular portion of the scene that includes at least one image, wherein the selected particular portion is smaller than the scene and contains a particular image object comprises:

selecting the particular portion of the scene that includes at least one image, wherein the selected particular portion is smaller than the scene and contains an image object of a person.

44. The computationally-implemented method of clause 42, wherein said selecting the particular portion of the scene that includes at least one image, wherein the selected particular portion is smaller than the scene and contains a particular image object comprises:

selecting the particular portion of the scene that includes at least one image, wherein the selected particular portion is smaller than the scene and contains an image object of a car.
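Editor's note (illustrative only, not part of the claims): the object-based selection of clauses 42 through 44 might reduce to cropping around a bounding box reported by an object detector. The detection tuple format, the margin parameter, and the example values below are hypothetical; the detector itself is outside the sketch.

def portion_containing(detections, label, scene_w, scene_h, margin=32):
    # detections: hypothetical (label, x, y, w, h) tuples from any object
    # detector; return a crop rectangle around the first match, with a
    # margin, clamped to the scene bounds.
    for det_label, x, y, w, h in detections:
        if det_label == label:
            x0, y0 = max(0, x - margin), max(0, y - margin)
            x1, y1 = min(scene_w, x + w + margin), min(scene_h, y + h + margin)
            return (x0, y0, x1 - x0, y1 - y0)
    return None

detections = [("person", 500, 900, 60, 160), ("car", 4200, 1100, 220, 120)]
print(portion_containing(detections, "car", 10000, 2000))  # (4168, 1068, 284, 184)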

45. The computationally-implemented method of clause 1, wherein said selecting a particular portion of the scene that includes at least one image, wherein the selected particular portion is smaller than the scene comprises:

selecting the particular portion of the scene that includes at least one image, wherein a size of the particular portion of the scene is at least partially based on a characteristic of a requesting device.

46. The computationally-implemented method of clause 45, wherein said selecting the particular portion of the scene that includes at least one image, wherein a size of the particular portion of the scene is at least partially based on a characteristic of a requesting device comprises:

selecting the particular portion of the scene that includes at least one image, wherein the size of the particular portion of the scene is at least partially based on a screen resolution of the requesting device.

47. The computationally-implemented method of clause 45, wherein said selecting the particular portion of the scene that includes at least one image, wherein a size of the particular portion of the scene is at least partially based on a characteristic of a requesting device comprises:

selecting the particular portion of the scene that includes at least one image, wherein the size of the particular portion of the scene is at least partially based on a combined size of screens of at least one requesting device.
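Editor's note (illustrative only, not part of the claims): one plausible reading of clauses 45 through 47 is that the portion size is derived from the screen geometry reported by the requesting device or devices. The combining rule below, the sum of widths and the tallest height, is an assumption of this sketch.

def portion_size_for_screens(screens):
    # screens: (width_px, height_px) per requesting device. For several
    # devices the sizes are combined so the portion covers all of them
    # (the "combined size of screens" of clause 47).
    return (sum(w for w, _ in screens), max(h for _, h in screens))

print(portion_size_for_screens([(1920, 1080)]))                # single requester
print(portion_size_for_screens([(1920, 1080), (2560, 1440)]))  # two requesters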

48. The computationally-implemented method of clause 1, wherein said transmitting only the selected particular portion from the scene to a remote location comprises:

transmitting only the selected particular portion to a remote server.

49. The computationally-implemented method of clause 48, wherein said transmitting only the selected particular portion to a remote server comprises:

transmitting only the selected particular portion to the remote server that is configured to receive one or more requests for one or more particular images from the scene.

50. The computationally-implemented method of clause 49, wherein said transmitting only the selected particular portion to the remote server that is configured to receive one or more requests for one or more particular images from the scene comprises:

transmitting only the selected particular portion to the remote server that is configured to receive multiple requests from discrete users for multiple particular images from the scene.
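Editor's note (illustrative only, not part of the claims): a remote server receiving requests from discrete users, as in clause 50, might consolidate them into a single portion to request from the array. The bounding-box policy below is an assumption; a real server could instead forward the requests separately.

def bounding_portion(requests):
    # requests: (x, y, w, h) rectangles from discrete users (clause 50).
    # Return one portion covering all of them.
    x0 = min(r[0] for r in requests)
    y0 = min(r[1] for r in requests)
    x1 = max(r[0] + r[2] for r in requests)
    y1 = max(r[1] + r[3] for r in requests)
    return (x0, y0, x1 - x0, y1 - y0)

print(bounding_portion([(0, 0, 100, 100), (300, 50, 80, 80)]))  # (0, 0, 380, 130)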

51. The computationally-implemented method of clause 48, wherein said transmitting only the selected particular portion to a remote server comprises:

transmitting only the selected particular portion to the remote server that requested the selected particular portion from the scene.

52. The computationally-implemented method of clause 1, wherein said transmitting only the selected particular portion from the scene to a remote location comprises:

transmitting only the selected particular portion from the scene at a particular resolution.

53. The computationally-implemented method of clause 52, wherein said transmitting only the selected particular portion from the scene at a particular resolution comprises:

determining an available bandwidth for transmission to the remote location; and

transmitting only the selected particular portion from the scene at the particular resolution that is at least partially based on the determined available bandwidth.
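Editor's note (illustrative only, not part of the claims): the bandwidth-dependent resolution of clause 53 could be chosen by scaling the portion down until its stream fits the measured link. The frame rate and bits-per-pixel constants below are illustrative assumptions.

def resolution_for_bandwidth(link_bps, width, height, fps=30, bits_per_pixel=12):
    # Halve the linear resolution until the stream fits the measured link.
    # The 30 fps / 12 bits-per-pixel figures are assumptions of this sketch.
    full_rate = width * height * fps * bits_per_pixel
    scale = 1.0
    while full_rate * scale * scale > link_bps and scale > 1 / 16:
        scale /= 2
    return int(width * scale), int(height * scale)

print(resolution_for_bandwidth(50_000_000, 4096, 2160))  # -> (512, 270)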

54. The computationally-implemented method of clause 52, wherein said transmitting only the selected particular portion from the scene at a particular resolution comprises:

transmitting only the selected particular portion from the scene at the particular resolution, wherein the particular resolution is less than a resolution at which the scene was captured.

55. The computationally-implemented method of clause 52, wherein said transmitting only the selected particular portion from the scene at a particular resolution comprises:

transmitting only the selected particular portion from the scene at the particular resolution, wherein the particular resolution is less than a resolution at which the selected particular portion was captured.

56. The computationally-implemented method of clause 1, wherein said transmitting only the selected particular portion from the scene to a remote location comprises:

transmitting a first segment of the selected particular portion at a first resolution, to the remote location; and

transmitting a second segment of the selected particular portion at a second resolution that is higher than the first resolution, to the remote location.
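Editor's note (illustrative only, not part of the claims): the two-segment transmission of clauses 56 through 65 might be planned as below, with the item of interest kept at full resolution and the surrounding bands downscaled. Rectangles are (x, y, w, h), and the region of interest is assumed to lie inside the selected portion.

def two_tier_plan(portion, roi, low_scale=0.25):
    # Full resolution for the segment containing the item of interest (the
    # second segment); a reduced scale for the surrounding bands (the first
    # segment, which surrounds the second as in clause 57).
    x, y, w, h = portion
    rx, ry, rw, rh = roi
    bands = [(x, y, w, ry - y),
             (x, ry + rh, w, (y + h) - (ry + rh)),
             (x, ry, rx - x, rh),
             (rx + rw, ry, (x + w) - (rx + rw), rh)]
    plan = [(roi, 1.0)]
    plan += [(b, low_scale) for b in bands if b[2] > 0 and b[3] > 0]
    return plan

# A tracked player occupying part of a 1920x1080 selected portion.
print(two_tier_plan((0, 0, 1920, 1080), (800, 400, 320, 280)))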

57. The computationally-implemented method of clause 56, wherein said transmitting a first segment of the selected particular portion at a first resolution, to the remote location comprises:

transmitting the first segment of the selected particular portion at the first resolution, wherein the first segment of the selected particular portion surrounds the second segment of the selected particular portion.

58. The computationally-implemented method of clause 56, wherein said transmitting a first segment of the selected particular portion at a first resolution, to the remote location comprises:

transmitting the first segment of the selected particular portion at the first resolution, wherein the first segment of the selected particular portion borders the second segment of the selected particular portion.

59. The computationally-implemented method of clause 56, wherein said transmitting a first segment of the selected particular portion at a first resolution, to the remote location comprises:

transmitting the first segment of the selected particular portion at the first resolution, wherein the first segment of the selected particular portion is determined at least in part by a content of the selected particular portion.

60. The computationally-implemented method of clause 59, wherein said transmitting the first segment of the selected particular portion at the first resolution, wherein the first segment of the selected particular portion is determined at least in part by a content of the selected particular portion comprises:

transmitting the first segment of the selected particular portion at the first resolution, wherein the first segment of the selected particular portion is an area that does not contain an item of interest.

61. The computationally-implemented method of clause 60, wherein said transmitting the first segment of the selected particular portion at the first resolution, wherein the first segment of the selected particular portion is an area that does not contain an item of interest comprises:

transmitting the first segment of the selected particular portion at the first resolution, wherein the first segment of the selected particular portion is an area that does not contain a selected person of interest.

62. The computationally-implemented method of clause 60, wherein said transmitting the first segment of the selected particular portion at the first resolution, wherein the first segment of the selected particular portion is an area that does not contain an item of interest comprises:

transmitting the first segment of the selected particular portion at the first resolution, wherein the first segment of the selected particular portion is an area that does not contain a selected object designated for tracking.

63. The computationally-implemented method of clause 56, wherein said transmitting a second segment of the selected particular portion at a second resolution that is higher than the first resolution, to the remote location comprises:

transmitting the second segment of the selected particular portion at the second resolution that is higher than the first resolution, wherein the second segment of the selected particular portion represents an area selected by a user, and the first segment of the selected particular portion represents a border area that borders the area selected by the user.

64. The computationally-implemented method of clause 56, wherein said transmitting a second segment of the selected particular portion at a second resolution that is higher than the first resolution, to the remote location comprises:

transmitting the second segment of the selected particular portion at the second resolution that is higher than the first resolution, wherein the second segment of the selected particular portion is surrounded by the first segment of the selected particular portion.

65. The computationally-implemented method of clause 56, wherein said transmitting a second segment of the selected particular portion at a second resolution that is higher than the first resolution, to the remote location comprises:

transmitting the second segment of the selected particular portion at the second resolution that is higher than the first resolution, wherein the second segment of the selected particular portion is determined to contain a selected item of interest.

66. The computationally-implemented method of clause 65, wherein said transmitting the second segment of the selected particular portion at the second resolution that is higher than the first resolution, wherein the second segment of the selected particular portion is determined to contain a selected item of interest comprises:

transmitting the second segment of the selected particular portion at the second resolution that is higher than the first resolution, wherein the second segment of the selected particular portion is determined to contain a selected football player.

67. The computationally-implemented method of clause 65, wherein said transmitting the second segment of the selected particular portion at the second resolution that is higher than the first resolution, wherein the second segment of the selected particular portion is determined to contain a selected item of interest comprises:

transmitting the second segment of the selected particular portion at the second resolution that is higher than the first resolution, wherein the second segment of the selected particular portion is determined to contain a selected landmark.

68. The computationally-implemented method of clause 65, wherein said transmitting the second segment of the selected particular portion at the second resolution that is higher than the first resolution, wherein the second segment of the selected particular portion is determined to contain a selected item of interest comprises:

transmitting the second segment of the selected particular portion at the second resolution that is higher than the first resolution, wherein the second segment of the selected particular portion is determined to contain a selected animal for observation.

69. The computationally-implemented method of clause 65, wherein said transmitting the second segment of the selected particular portion at the second resolution that is higher than the first resolution, wherein the second segment of the selected particular portion is determined to contain a selected item of interest comprises:

transmitting the second segment of the selected particular portion at the second resolution that is higher than the first resolution, wherein the second segment of the selected particular portion is determined to contain a selected object in a dwelling.

70. The computationally-implemented method of clause 1, wherein said transmitting only the selected particular portion from the scene to a remote location comprises:

transmitting only pixels associated with the selected particular portion to the remote location.

71. The computationally-implemented method of clause 1, wherein said transmitting only the selected particular portion from the scene to a remote location comprises:

transmitting only the selected particular portion from the scene to a particular device that requested the selected particular portion.

72. The computationally-implemented method of clause 71, wherein said transmitting only the selected particular portion from the scene to a particular device that requested the selected particular portion comprises:

transmitting only the selected particular portion from the scene to a particular device operated by a user that requested the selected particular portion.

73. The computationally-implemented method of clause 71, wherein said transmitting only the selected particular portion from the scene to a particular device that requested the selected particular portion comprises:

transmitting only the selected particular portion from the scene to a particular device that requested the selected particular portion through selection of the particular portion from the scene.

74. The computationally-implemented method of clause 1, wherein said de-emphasizing pixels from the scene that are not part of the selected particular portion of the scene comprises:

deleting pixels from the scene that are not part of the selected particular portion of the scene.

75. The computationally-implemented method of clause 1, wherein said de-emphasizing pixels from the scene that are not part of the selected particular portion of the scene comprises:

indicating that pixels from the scene that are not part of the selected particular portion of the scene are not transmitted with the selected particular portion.

76. The computationally-implemented method of clause 75, wherein said indicating that pixels from the scene that are not part of the selected particular portion of the scene are not transmitted with the selected particular portion comprises:

appending data to pixels from the scene that are not part of the selected portion of the scene, said appended data configured to indicate non-transmission of the pixels to which data was appended.

77. The computationally-implemented method of clause 1, wherein said de-emphasizing pixels from the scene that are not part of the selected particular portion of the scene comprises:

discarding pixels from the scene that are not part of the selected particular portion of the scene.

78. The computationally-implemented method of clause 1, wherein said de-emphasizing pixels from the scene that are not part of the selected particular portion of the scene comprises:

denying retention of pixels from the scene that are not part of the selected particular portion of the scene.

79. The computationally-implemented method of clause 1, wherein said de-emphasizing pixels from the scene that are not part of the selected particular portion of the scene comprises:

retaining a subset of pixels from the scene that are not part of the selected particular portion of the scene.

80. The computationally-implemented method of clause 79, wherein said retaining a subset of pixels from the scene that are not part of the selected particular portion of the scene comprises:

retaining ten percent of the pixels from the scene that are not part of the selected particular portion of the scene.
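Editor's note (illustrative only, not part of the claims): the de-emphasis options of clauses 74 through 80 could be combined as below, keeping every pixel inside the selection, retaining a fraction (ten percent in clause 80) of the rest, and discarding the remainder. The (x, y) pixel representation and the random sampling rule are assumptions of this sketch.

import random

def deemphasize(pixels, selected, keep_fraction=0.10, seed=0):
    # Keep every pixel inside the selected portion; retain only a sampled
    # fraction of those outside; the rest are discarded (not returned).
    sx, sy, sw, sh = selected
    inside, outside = [], []
    for px, py in pixels:
        target = inside if sx <= px < sx + sw and sy <= py < sy + sh else outside
        target.append((px, py))
    rng = random.Random(seed)
    retained = rng.sample(outside, int(len(outside) * keep_fraction))
    return inside + retained

scene = [(x, y) for x in range(100) for y in range(100)]
print(len(deemphasize(scene, (0, 0, 50, 100))))  # 5000 inside + 500 retained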

81. The computationally-implemented method of clause 79, wherein said retaining a subset of pixels from the scene that are not part of the selected particular portion of the scene comprises:

retaining pixels from the scene that have been identified as part of a targeted object that are not part of the selected particular portion of the scene.

82. The computationally-implemented method of clause 81, wherein said retaining pixels from the scene that have been identified as part of a targeted object that are not part of the selected particular portion of the scene comprises:

retaining pixels from the scene that have been identified as a scenic landmark of interest that is not part of the selected particular portion of the scene.

83. The computationally-implemented method of clause 79, wherein said retaining a subset of pixels from the scene that are not part of the selected particular portion of the scene comprises:

retaining pixels from the scene that have been identified as part of a targeted object through automated pattern recognition that are not part of the selected particular portion of the scene.

84. The computationally-implemented method of clause 1, wherein said de-emphasizing pixels from the scene that are not part of the selected particular portion of the scene comprises:

storing pixels from the scene that are not part of the selected particular portion of the scene in a separate storage.

85. The computationally-implemented method of clause 84, wherein said storing pixels from the scene that are not part of the selected particular portion of the scene in a separate storage comprises:

storing pixels from the scene that are not part of the selected particular portion of the scene in a separate storage that is local to the array of more than one image sensor.

86. The computationally-implemented method of clause 84, wherein said storing pixels from the scene that are not part of the selected particular portion of the scene in a separate storage comprises:

storing pixels from the scene that are not part of the selected particular portion of the scene in a separate storage, wherein pixels stored in the separate storage are configured to be transmitted to the remote location separately from the selected particular portion of the scene.

87. The computationally-implemented method of clause 86, wherein said storing pixels from the scene that are not part of the selected particular portion of the scene in a separate storage, wherein pixels stored in the separate storage are configured to be transmitted to the remote location separately from the selected particular portion of the scene comprises:

storing pixels from the scene that are not part of the selected particular portion of the scene in a separate storage, wherein pixels stored in the separate storage are configured to be transmitted to the remote location at a time when there are no selected particular portions to be transmitted.

88. The computationally-implemented method of clause 86, wherein said storing pixels from the scene that are not part of the selected particular portion of the scene in a separate storage, wherein pixels stored in the separate storage are configured to be transmitted to the remote location separately from the selected particular portion of the scene comprises:

storing pixels from the scene that are not part of the selected particular portion of the scene in a separate storage, wherein pixels stored in the separate storage are configured to be separately transmitted to the remote location based on assigning a lower priority to the stored pixels.
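Editor's note (illustrative only, not part of the claims): the lower-priority separate transmission of clauses 86 through 88 resembles a two-level priority queue, sketched below. The priority values and the FIFO tie-breaker are assumptions.

import heapq

class TransmissionQueue:
    # Stored residual pixels carry a lower priority and are sent only when
    # no selected portion is waiting (clauses 87-88).
    SELECTED, RESIDUAL = 0, 1  # lower value = sent first

    def __init__(self):
        self._heap, self._seq = [], 0

    def enqueue(self, payload, priority):
        # The sequence number preserves FIFO order within a priority level.
        heapq.heappush(self._heap, (priority, self._seq, payload))
        self._seq += 1

    def next_to_send(self):
        return heapq.heappop(self._heap)[2] if self._heap else None

q = TransmissionQueue()
q.enqueue("stored residual pixels", TransmissionQueue.RESIDUAL)
q.enqueue("selected portion #1", TransmissionQueue.SELECTED)
print(q.next_to_send())  # the selected portion preempts the stored pixels
print(q.next_to_send())  # the stored pixels go out when the queue is idle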

89. A computationally-implemented system, comprising

means for capturing a scene that includes one or more images, through use of an array of more than one image sensor;

means for selecting a particular portion of the scene that includes at least one image, wherein the selected particular portion is smaller than the scene;

means for transmitting only the selected particular portion from the scene to a remote location; and

means for de-emphasizing pixels from the scene that are not part of the selected particular portion of the scene.

90. The computationally-implemented system of clause 89, wherein said means for capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

means for capturing the scene that includes the one or more images, through use of an array of image sensors.

91. The computationally-implemented system of clause 89, wherein said means for capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

means for capturing the scene that includes the one or more images, through use of two image sensors arranged side by side and angled toward each other.

92. The computationally-implemented system of clause 89, wherein said means for capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

means for capturing the scene that includes the one or more images, through use of the array of image sensors arranged in a grid.

93. The computationally-implemented system of clause 89, wherein said means for capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

means for capturing the scene that includes the one or more images, through use of the array of image sensors arranged in a line.

94. The computationally-implemented system of clause 93, wherein said means for capturing the scene that includes the one or more images, through use of the array of image sensors arranged in a line comprises:

means for capturing the scene that includes the one or more images, through use of the array of image sensors arranged in a line such that a field of view is greater than 120 degrees.

95. The computationally-implemented system of clause 93, wherein said means for capturing the scene that includes the one or more images, through use of the array of image sensors arranged in a line comprises:

means for capturing the scene that includes the one or more images, through use of the array of image sensors arranged in a line such that a field of view is 180 degrees.

96. The computationally-implemented system of clause 89, wherein said means for capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

means for capturing the scene that includes one or more images, through use of an array of more than one stationary image sensor.

97. The computationally-implemented system of clause 89, wherein said means for capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

means for capturing the scene that includes one or more images, through use of an array of more than one static image sensor.

98. The computationally-implemented system of clause 97, wherein said means for capturing the scene that includes one or more images, through use of an array of more than one static image sensor comprises:

means for capturing the scene that includes one or more images, through use of the array of more than one image sensor that has a fixed focal length and a fixed field of view.

99. The computationally-implemented system of clause 89, wherein said means for capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

means for capturing the scene that includes the one or more images, through use of an array of more than one image sensor mounted in a stationary location.

100. The computationally-implemented system of clause 89, wherein said means for capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

means for capturing the scene that includes one or more images, through use of an array of image sensors mounted on a movable platform.

101. The computationally-implemented system of clause 100, wherein said means for capturing the scene that includes one or more images, through use of an array of image sensors mounted on a movable platform comprises:

means for capturing the scene that includes one or more images, through use of an array of image sensors mounted on an unmanned aerial vehicle.

102. The computationally-implemented system of clause 89, wherein said means for capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

means for capturing the scene that includes one or more images, through use of an array of image sensors mounted on a satellite.

103. The computationally-implemented system of clause 89, wherein said means for capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

means for capturing the scene that includes one or more images, through use of an array of more than one image sensor, wherein the more than one image sensors each capture an image that represents a portion of the scene.

104. The computationally-implemented system of clause 103, wherein said means for capturing the scene that includes one or more images, through use of an array of more than one image sensor, wherein the more than one image sensors each capture an image that represents a portion of the scene comprises:

means for capturing the scene that includes one or more images, through use of the array of more than one image sensor, wherein the more than one image sensors each capture an image that represents a portion of the scene, and wherein the images are stitched together.

105. The computationally-implemented system of clause 89, wherein said means for capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

means for acquiring an image from each image sensor of the more than one image sensors; and

means for combining the acquired images from the more than one image sensors into the scene.

106. The computationally-implemented system of clause 105, wherein said means for acquiring an image from each image sensor of the more than one image sensors comprises:

means for acquiring images from each image sensor of the more than one image sensors, wherein each acquired image at least partially overlaps at least one other image.

107. The computationally-implemented system of clause 106, wherein said means for acquiring images from each image sensor of the more than one image sensors, wherein each acquired image at least partially overlaps at least one other image comprises:

means for acquiring the image from each image sensor of the more than one image sensors, wherein each acquired image at least partially overlaps at least one other image captured by an adjacent image sensor.
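Editor's note (illustrative only, not part of the claims): the acquire-and-combine steps of clauses 105 through 107 might, for a single line of sensors with a known overlap, reduce to the following stitching sketch. A real array would register and blend the overlapping columns rather than simply dropping them.

def stitch_row(images, overlap):
    # images: per-sensor frames as lists of rows; adjacent sensors are
    # assumed to share `overlap` columns, dropped from each later frame.
    scene = [row[:] for row in images[0]]
    for frame in images[1:]:
        for scene_row, frame_row in zip(scene, frame):
            scene_row.extend(frame_row[overlap:])
    return scene

left = [[1, 2, 3, 4]]    # 1x4 image from the left sensor
right = [[4, 5, 6, 7]]   # 1x4 image from the adjacent sensor, one shared column
print(stitch_row([left, right], overlap=1))  # [[1, 2, 3, 4, 5, 6, 7]]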

108. The computationally-implemented system of clause 89, wherein said means for capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

means for capturing the scene that includes one or more images, through use of the array of more than one image sensor, wherein a size of the scene is greater than a capacity to transmit the scene.

109. The computationally-implemented system of clause 108, wherein said means for capturing the scene that includes one or more images, through use of the array of more than one image sensor, wherein a size of the scene is greater than a capacity to transmit the scene comprises:

means for capturing the scene that includes one or more images, through use of the array of more than one image sensor, wherein an amount of image data captured exceeds a bandwidth for transmitting the image data to a remote location.

110. The computationally-implemented system of clause 109, wherein said means for capturing the scene that includes one or more images, through use of the array of more than one image sensor, wherein an amount of image data captured exceeds a bandwidth for transmitting the image data to a remote location comprises:

means for capturing the scene that includes one or more images, through use of the array of more than one image sensor, wherein the amount of image data captured exceeds the bandwidth for transmitting the image data to the remote location by a factor of ten.
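Editor's note (illustrative only, not part of the claims): a back-of-envelope calculation shows how the captured data can exceed the available bandwidth by roughly the factor of ten recited in clause 110. Every figure below is an assumption.

# All constants are illustrative; none are taken from the disclosure.
sensors = 16                      # a 4x4 grid of image sensors
pixels_per_frame = 1920 * 1080    # each sensor captures a 1080p frame
fps = 30
bits_per_pixel = 12               # compressed video stream, assumed
capture_bps = sensors * pixels_per_frame * fps * bits_per_pixel
uplink_bps = 1_200_000_000        # a 1.2 Gbit/s link, assumed
print(capture_bps / uplink_bps)   # ~10x: cf. the factor of ten in clause 110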

111. The computationally-implemented system of clause 89, wherein said means for capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

means for capturing a scene of a tourist destination that includes one or more images, through use of an array of more than one image sensor.

112. The computationally-implemented system of clause 89, wherein said means for capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

means for capturing a scene of a highway across a bridge that includes one or more images, through use of an array of more than one image sensor, wherein the one or more images include one or more images of cars traveling on the highway across the bridge.

113. The computationally-implemented system of clause 89, wherein said means for capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

means for capturing a scene of a home that includes one or more images, through use of an array of more than one image sensor, wherein the one or more images include an image of an appliance in the home.

114. The computationally-implemented system of clause 89, wherein said means for capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

means for capturing the scene that includes one or more images, through use of a grouping of more than one image sensor.

115. The computationally-implemented system of clause 89, wherein said means for capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

means for capturing the scene that includes one or more images, through use of multiple image sensors whose data is directed to a common source.

116. The computationally-implemented system of clause 89, wherein said means for capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

means for capturing the scene that includes one or more images, through use of an array of more than one image sensor, the more than one image sensor including one or more of a charge-coupled device and a complementary metal-oxide-semiconductor device.

117. The computationally-implemented system of clause 89, wherein said means for capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

means for capturing the scene that includes image data and sound data, through use of an array of more than one image sensor, wherein the array of more than one image sensor is configured to capture image data and sound data.

118. The computationally-implemented system of clause 117, wherein said means for capturing the scene that includes image data and sound data, through use of an array of more than one image sensor, wherein the array of more than one image sensor is configured to capture image data and sound data comprises:

means for capturing the scene that includes image data and sound data, through use of the array of more than one image sensor, wherein the array of more than one image sensor includes image sensors configured to capture image data and microphones configured to capture sound data.

119. The computationally-implemented system of clause 89, wherein said means for capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

means for capturing the scene that includes soundwave image data, through use of the array of more than one soundwave image sensor that captures soundwave data.

120. The computationally-implemented system of clause 89, wherein said means for capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

means for capturing a scene that includes video data, through use of an array of more than one video capture device.

121. The computationally-implemented system of clause 89, wherein said means for selecting a particular portion of the scene that includes at least one image, wherein the selected particular portion is smaller than the scene comprises:

means for selecting the particular image from the scene, wherein the selected particular image represents the request for the image that is smaller than the entire scene.

122. The computationally-implemented system of clause 121, wherein said means for selecting the particular image from the scene, wherein the selected particular image represents the request for the image that is smaller than the entire scene comprises:

means for selecting the particular image from the scene, wherein the selected particular image represents a particular remote user-requested image that is smaller than the scene.

123. The computationally-implemented system of clause 89, wherein said means for selecting a particular portion of the scene that includes at least one image, wherein the selected particular portion is smaller than the scene comprises:

means for selecting the particular portion of the scene that includes a requested image, wherein the selected particular portion is smaller than the scene.

124. The computationally-implemented system of clause 123, wherein said means for selecting the particular portion of the scene that includes a requested image, wherein the selected particular portion is smaller than the scene comprises:

means for selecting the particular portion of the scene that includes an image requested by a remote operator of the array of more than one image sensors, wherein the selected particular portion is smaller than the scene.

125. The computationally-implemented system of clause 89, wherein said means for selecting a particular portion of the scene that includes at least one image, wherein the selected particular portion is smaller than the scene comprises:

means for receiving a request for a particular image; and

means for selecting the particular image from the scene, wherein the particular image is smaller than the scene.

126. The computationally-implemented system of clause 89, wherein said means for selecting a particular portion of the scene that includes at least one image, wherein the selected particular portion is smaller than the scene comprises:

means for receiving a first request for a first particular image and a second request for a second particular image; and

means for selecting the particular portion of the scene that includes the first particular image and the second particular image, wherein the particular portion of the scene is smaller than the scene.

127. The computationally-implemented system of clause 126, wherein said means for receiving a first request for a first particular image and a second request for a second particular image comprises:

means for receiving the first request for the first particular image and the second request for the second particular image, wherein the first particular image and the second particular image do not overlap.

128. The computationally-implemented system of clause 126, wherein said means for receiving a first request for a first particular image and a second request for a second particular image comprises:

means for receiving a first request for a first particular image and a second request for a second particular image, wherein the first particular image and the second particular image overlap at an overlapping portion.

129. The computationally-implemented system of clause 128, wherein said means for receiving a first request for a first particular image and a second request for a second particular image, wherein the first particular image and the second particular image overlap at an overlapping portion comprises:

means for receiving the first request for the first particular image and the second request for the second particular image, wherein the first particular image and the second particular image overlap at the overlapping portion, and wherein the overlapping portion is configured to be transmitted once only.

130. The computationally-implemented system of clause 89, wherein said means for selecting a particular portion of the scene that includes at least one image, wherein the selected particular portion is smaller than the scene comprises:

means for selecting the particular portion of the scene that includes at least one image, wherein the selected particular portion is smaller than the scene and contains a particular image object.

131. The computationally-implemented system of clause 130, wherein said means for selecting the particular portion of the scene that includes at least one image, wherein the selected particular portion is smaller than the scene and contains a particular image object comprises:

means for selecting the particular portion of the scene that includes at least one image, wherein the selected particular portion is smaller than the scene and contains an image object of a person.

132. The computationally-implemented system of clause 130, wherein said means for selecting the particular portion of the scene that includes at least one image, wherein the selected particular portion is smaller than the scene and contains a particular image object comprises:

means for selecting the particular portion of the scene that includes at least one image, wherein the selected particular portion is smaller than the scene and contains an image object of a car.

133. The computationally-implemented system of clause 89, wherein said means for selecting a particular portion of the scene that includes at least one image, wherein the selected particular portion is smaller than the scene comprises:

means for selecting the particular portion of the scene that includes at least one image, wherein a size of the particular portion of the scene is at least partially based on a characteristic of a requesting device.

134. The computationally-implemented system of clause 133, wherein said means for selecting the particular portion of the scene that includes at least one image, wherein a size of the particular portion of the scene is at least partially based on a characteristic of a requesting device comprises:

means for selecting the particular portion of the scene that includes at least one image, wherein the size of the particular portion of the scene is at least partially based on a screen resolution of the requesting device.

135. The computationally-implemented system of clause 133, wherein said means for selecting the particular portion of the scene that includes at least one image, wherein a size of the particular portion of the scene is at least partially based on a characteristic of a requesting device comprises:

means for selecting the particular portion of the scene that includes at least one image, wherein the size of the particular portion of the scene is at least partially based on a combined size of screens of at least one requesting device.

136. The computationally-implemented system of clause 89, wherein said means for transmitting only the selected particular portion from the scene to a remote location comprises:

means for transmitting only the selected particular portion to a remote server.

137. The computationally-implemented system of clause 136, wherein said means for transmitting only the selected particular portion to a remote server comprises:

means for transmitting only the selected particular portion to the remote server that is configured to receive one or more requests for one or more particular images from the scene.

138. The computationally-implemented system of clause 137, wherein said means for transmitting only the selected particular portion to the remote server that is configured to receive one or more requests for one or more particular images from the scene comprises:

means for transmitting only the selected particular portion to the remote server that is configured to receive multiple requests from discrete users for multiple particular images from the scene.

139. The computationally-implemented system of clause 136, wherein said means for transmitting only the selected particular portion to a remote server comprises:

means for transmitting only the selected particular portion to the remote server that requested the selected particular portion from the scene.

140. The computationally-implemented system of clause 89, wherein said means for transmitting only the selected particular portion from the scene to a remote location comprises:

means for transmitting only the selected particular portion from the scene at a particular resolution.

141. The computationally-implemented system of clause 140, wherein said means for transmitting only the selected particular portion from the scene at a particular resolution comprises:

means for determining an available bandwidth for transmission to the remote location; and

means for transmitting only the selected particular portion from the scene at the particular resolution that is at least partially based on the determined available bandwidth.

142. The computationally-implemented system of clause 140, wherein said means for transmitting only the selected particular portion from the scene at a particular resolution comprises:

means for transmitting only the selected particular portion from the scene at the particular resolution, wherein the particular resolution is less than a resolution at which the scene was captured.

143. The computationally-implemented system of clause 140, wherein said means for transmitting only the selected particular portion from the scene at a particular resolution comprises:

means for transmitting only the selected particular portion from the scene at the particular resolution, wherein the particular resolution is less than a resolution at which the selected particular portion was captured.

144. The computationally-implemented system of clause 89, wherein said means for transmitting only the selected particular portion from the scene to a remote location comprises:

means for transmitting a first segment of the selected particular portion at a first resolution, to the remote location; and

means for transmitting a second segment of the selected particular portion at a second resolution that is higher than the first resolution, to the remote location.

145. The computationally-implemented system of clause 144, wherein said means for transmitting a first segment of the selected particular portion at a first resolution, to the remote location comprises:

means for transmitting the first segment of the selected particular portion at the first resolution, wherein the first segment of the selected particular portion surrounds the second segment of the selected particular portion.

146. The computationally-implemented system of clause 144, wherein said means for transmitting a first segment of the selected particular portion at a first resolution, to the remote location comprises:

means for transmitting the first segment of the selected particular portion at the first resolution, wherein the first segment of the selected particular portion borders the second segment of the selected particular portion.

147. The computationally-implemented system of clause 144, wherein said means for transmitting a first segment of the selected particular portion at a first resolution, to the remote location comprises:

means for transmitting the first segment of the selected particular portion at the first resolution, wherein the first segment of the selected particular portion is determined at least in part by a content of the selected particular portion.

148. The computationally-implemented system of clause 147, wherein said means for transmitting the first segment of the selected particular portion at the first resolution, wherein the first segment of the selected particular portion is determined at least in part by a content of the selected particular portion comprises:

means for transmitting the first segment of the selected particular portion at the first resolution, wherein the first segment of the selected particular portion is an area that does not contain an item of interest.

149. The computationally-implemented system of clause 148, wherein said means for transmitting the first segment of the selected particular portion at the first resolution, wherein the first segment of the selected particular portion is an area that does not contain an item of interest comprises:

means for transmitting the first segment of the selected particular portion at the first resolution, wherein the first segment of the selected particular portion is an area that does not contain a selected person of interest.

150. The computationally-implemented system of clause 148, wherein said means for transmitting the first segment of the selected particular portion at the first resolution, wherein the first segment of the selected particular portion is an area that does not contain an item of interest comprises:

means for transmitting the first segment of the selected particular portion at the first resolution, wherein the first segment of the selected particular portion is an area that does not contain a selected object designated for tracking.

151. The computationally-implemented system of clause 144, wherein said means for transmitting a second segment of the selected particular portion at a second resolution that is higher than the first resolution, to the remote location comprises:

means for transmitting the second segment of the selected particular portion at the second resolution that is higher than the first resolution, wherein the second segment of the selected particular portion represents an area selected by a user, and the first segment of the selected particular portion represents a border area that borders the area selected by the user.

152. The computationally-implemented system of clause 144, wherein said means for transmitting a second segment of the selected particular portion at a second resolution that is higher than the first resolution, to the remote location comprises:

means for transmitting the second segment of the selected particular portion at the second resolution that is higher than the first resolution, wherein the second segment of the selected particular portion is surrounded by the first segment of the selected particular portion.

153. The computationally-implemented system of clause 144, wherein said means for transmitting a second segment of the selected particular portion at a second resolution that is higher than the first resolution, to the remote location comprises:

means for transmitting the second segment of the selected particular portion at the second resolution that is higher than the first resolution, wherein the second segment of the selected particular portion is determined to contain a selected item of interest.

154. The computationally-implemented system of clause 153, wherein said means for transmitting the second segment of the selected particular portion at the second resolution that is higher than the first resolution, wherein the second segment of the selected particular portion is determined to contain a selected item of interest comprises:

means for transmitting the second segment of the selected particular portion at the second resolution that is higher than the first resolution, wherein the second segment of the selected particular portion is determined to contain a selected football player.

155. The computationally-implemented system of clause 153, wherein said means for transmitting the second segment of the selected particular portion at the second resolution that is higher than the first resolution, wherein the second segment of the selected particular portion is determined to contain a selected item of interest comprises:

means for transmitting the second segment of the selected particular portion at the second resolution that is higher than the first resolution, wherein the second segment of the selected particular portion is determined to contain a selected landmark.

156. The computationally-implemented system of clause 153, wherein said means for transmitting the second segment of the selected particular portion at the second resolution that is higher than the first resolution, wherein the second segment of the selected particular portion is determined to contain a selected item of interest comprises:

means for transmitting the second segment of the selected particular portion at the second resolution that is higher than the first resolution, wherein the second segment of the selected particular portion is determined to contain a selected animal for observation.

157. The computationally-implemented system of clause 153, wherein said means for transmitting the second segment of the selected particular portion at the second resolution that is higher than the first resolution, wherein the second segment of the selected particular portion is determined to contain a selected item of interest comprises:

means for transmitting the second segment of the selected particular portion at the second resolution that is higher than the first resolution, wherein the second segment of the selected particular portion is determined to contain a selected object in a dwelling.

158. The computationally-implemented system of clause 89, wherein said means for transmitting only the selected particular portion from the scene to a remote location comprises:

means for transmitting only pixels associated with the selected particular portion to the remote location.

159. The computationally-implemented system of clause 89, wherein said means for transmitting only the selected particular portion from the scene to a remote location comprises:

means for transmitting only the selected particular portion from the scene to a particular device that requested the selected particular portion.

160. The computationally-implemented system of clause 159, wherein said means for transmitting only the selected particular portion from the scene to a particular device that requested the selected particular portion comprises:

means for transmitting only the selected particular portion from the scene to a particular device operated by a user that requested the selected particular portion.

161. The computationally-implemented system of clause 159, wherein said means for transmitting only the selected particular portion from the scene to a particular device that requested the selected particular portion comprises:

means for transmitting only the selected particular portion from the scene to a particular device that requested the selected particular portion through selection of the particular portion from the scene.

162. The computationally-implemented system of clause 89, wherein said means for de-emphasizing pixels from the scene that are not part of the selected particular portion of the scene comprises:

means for deleting pixels from the scene that are not part of the selected particular portion of the scene.

163. The computationally-implemented system of clause 89, wherein said means for de-emphasizing pixels from the scene that are not part of the selected particular portion of the scene comprises:

means for indicating that pixels from the scene that are not part of the selected particular portion of the scene are not transmitted with the selected particular portion.

164. The computationally-implemented system of clause 163, wherein said means for indicating that pixels from the scene that are not part of the selected particular portion of the scene are not transmitted with the selected particular portion comprises:

means for appending data to pixels from the scene that are not part of the selected portion of the scene, said appended data configured to indicate non-transmission of the pixels to which data was appended.

165. The computationally-implemented system of clause 89, wherein said means for de-emphasizing pixels from the scene that are not part of the selected particular portion of the scene comprises:

means for discarding pixels from the scene that are not part of the selected particular portion of the scene.

166. The computationally-implemented system of clause 89, wherein said means for de-emphasizing pixels from the scene that are not part of the selected particular portion of the scene comprises:

means for denying retention of pixels from the scene that are not part of the selected particular portion of the scene.

167. The computationally-implemented system of clause 166, wherein said means for denying retention of pixels from the scene that are not part of the selected particular portion of the scene comprises:

means for retaining a subset of pixels from the scene that are not part of the selected particular portion of the scene.

168. The computationally-implemented thing/operation disclosure of clause 167, wherein said means for retaining a subset of pixels from the scene that are not part of the selected particular portion of the scene comprises:

means for retaining ten percent of the pixels from the scene that are not part of the

selected particular portion of the scene.

169. The computationally-implemented thing/operation disclosure of clause 167, wherein said means for retaining a subset of pixels from the scene that are not part of the selected particular portion of the scene comprises:

means for retaining pixels from the scene that have been identified as part of a targeted object that are not part of the selected particular portion of the scene.

170. The computationally-implemented thing/operation disclosure of clause 169, wherein said means for retaining pixels from the scene that have been identified as part of a targeted object that are not part of the selected particular portion of the scene comprises:

means for retaining pixels from the scene that have been identified as a scenic landmark of interest that is not part of the selected particular portion of the scene.

171. The computationally-implemented thing/operation disclosure of clause 167, wherein said means for retaining a subset of pixels from the scene that are not part of the selected particular portion of the scene comprises:

means for retaining pixels from the scene that have been identified as part of a targeted object through automated pattern recognition that are not part of the selected particular portion of the scene.

172. The computationally-implemented thing/operation disclosure of clause 89, wherein said means for

de-emphasizing pixels from the scene that are not part of the selected particular portion of the scene comprises:

means for storing pixels from the scene that are not part of the selected particular portion of the scene in a separate storage.

173. The computationally-implemented thing/operation disclosure of clause 172, wherein said means for storing pixels from the scene that are not part of the selected particular portion of the scene in a separate storage comprises:

means for storing pixels from the scene that are not part of the selected particular portion of the scene in a separate storage that is local to the array of more than

one image sensor.

174. The computationally-implemented thing/operation disclosure of clause 172, wherein said means for storing pixels from the scene that are not part of the selected particular portion of the scene in a separate storage comprises:

means for storing pixels from the scene that are not part of the selected particular portion of the scene in a separate storage, wherein pixels stored in the separate storage are configured to be transmitted to the remote location separately from the selected particular portion of the scene.

175. The computationally-implemented thing/operation disclosure of clause 174, wherein said means for storing pixels from the scene that are not part of the selected particular portion of the scene in a separate storage, wherein pixels stored in the separate storage are configured to be transmitted to the remote location separately from the selected particular portion of the scene comprises:

means for storing pixels from the scene that are not part of the selected particular portion of the scene in a separate storage, wherein pixels stored in the separate storage are configured to be transmitted to the remote location at a time when there are no selected particular portions to be transmitted.

176. The computationally-implemented thing/operation disclosure of clause 174, wherein said means for storing pixels from the scene that are not part of the selected particular portion of the scene in a separate storage, wherein pixels stored in the separate storage are configured to be transmitted to the remote location separately from the selected particular portion of the scene comprises:

means for storing pixels from the scene that are not part of the selected particular portion of the scene in a separate storage, wherein pixels stored in the separate storage are configured to be separately transmitted to the remote location based on assigning a lower priority to the stored pixels.
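A minimal sketch of the lower-priority deferred transmission described in clauses 172 through 176, assuming a simple two-level priority queue (the class name, priority values, and payload format are illustrative assumptions, not drawn from the application):

```python
import heapq

class DeferredTransmitter:
    """Send selected portions first; residual pixels only afterward.

    Selected portions get priority 0, pixels from the separate storage get
    priority 1, so residual data drains only when no selection is pending.
    """
    def __init__(self):
        self._queue = []
        self._seq = 0  # tie-breaker so heapq never compares payloads

    def enqueue(self, payload, selected: bool):
        priority = 0 if selected else 1  # residual pixels wait behind selections
        heapq.heappush(self._queue, (priority, self._seq, payload))
        self._seq += 1

    def drain(self):
        while self._queue:
            _, _, payload = heapq.heappop(self._queue)
            yield payload

tx = DeferredTransmitter()
tx.enqueue("residual-pixels-block-1", selected=False)
tx.enqueue("selected-portion-A", selected=True)
print(list(tx.drain()))  # ['selected-portion-A', 'residual-pixels-block-1']
```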

177. A computationally-implemented thing/operation disclosure, comprising

circuitry for capturing a scene that includes one or more images, through use of an array of more than one image sensor;

circuitry for selecting a particular portion of the scene that includes at least one image, wherein the selected particular portion is smaller than the scene;

circuitry for transmitting only the selected particular portion from the scene to a remote location; and

circuitry for de-emphasizing pixels from the scene that are not part of the selected particular portion of the scene.

178. The computationally-implemented thing/operation disclosure of clause 177, wherein said circuitry for capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

circuitry for capturing the scene that includes the one or more images, through use of an array of image sensors.

179. The computationally-implemented thing/operation disclosure of clause 177, wherein said circuitry for capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

circuitry for capturing the scene that includes the one or more images, through use of two image sensors arranged side by side and angled toward each other.

180. The computationally-implemented thing/operation disclosure of clause 177, wherein said circuitry for capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

circuitry for capturing the scene that includes the one or more images, through use of the array of image sensors arranged in a grid.

181. The computationally-implemented thing/operation disclosure of clause 177, wherein said circuitry for capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

circuitry for capturing the scene that includes the one or more images, through use of the array of image sensors arranged in a line.

182. The computationally-implemented thing/operation disclosure of clause 181, wherein circuitry for capturing the scene that includes the one or more images, through use of the array of image sensors arranged in a line comprises:

circuitry for capturing the scene that includes the one or more images, through use of the array of image sensors arranged in a line such that a field of view is greater than 120 degrees.

183. The computationally-implemented thing/operation disclosure of clause 181, wherein circuitry for capturing the scene that includes the one or more images, through use of the array of image sensors arranged in a line comprises:

circuitry for capturing the scene that includes the one or more images, through use of the array of image sensors arranged in a line such that a field of view is 180 degrees.

184. The computationally-implemented thing/operation disclosure of clause 177, wherein said circuitry for capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

circuitry for capturing the scene that includes one or more images, through use of an array of more than one stationary image sensor.

185. The computationally-implemented thing/operation disclosure of clause 177, wherein said circuitry for capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

circuitry for capturing the scene that includes one or more images, through use of an array of more than one static image sensor.

186. The computationally-implemented thing/operation disclosure of clause 185, wherein said circuitry for capturing the scene that includes one or more images, through use of an array of more than one static image sensor comprises:

circuitry for capturing the scene that includes one or more images, through use of the array of more than one image sensor that has a fixed focal length and a fixed field of view.

187. The computationally-implemented thing/operation disclosure of clause 177, wherein said circuitry for capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

circuitry for capturing the scene that includes the one or more images, through use of an array of more than one image sensor mounted in a stationary location.

188. The computationally-implemented thing/operation disclosure of clause 177, wherein said circuitry for capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

circuitry for capturing the scene that includes one or more images, through use of an array of image sensors mounted on a movable platform.

189. The computationally-implemented thing/operation disclosure of clause 188, wherein said circuitry for capturing the scene that includes one or more images, through use of an array of image sensors mounted on a movable platform comprises:

circuitry for capturing the scene that includes one or more images, through use of an array of image sensors mounted on an unmanned aerial vehicle.

190. The computationally-implemented thing/operation disclosure of clause 177, wherein said circuitry for capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

circuitry for capturing the scene that includes one or more images, through use of an array of image sensors mounted on a satellite.

191. The computationally-implemented thing/operation disclosure of clause 177, wherein said circuitry for capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

circuitry for capturing the scene that includes one or more images, through use of an array of more than one image sensor, wherein the more than one image sensors each capture an image that represents a portion of the scene.

192. The computationally-implemented thing/operation disclosure of clause 191, wherein said circuitry for capturing the scene that includes one or more images, through use of an array of more than one image sensor, wherein the more than one image sensors each capture an image that represents a portion of the scene comprises:

circuitry for capturing the scene that includes one or more images, through use of the array of more than one image sensor, wherein the more than one image sensors each capture an image that represents a portion of the scene, and wherein the images are stitched together.

193. The computationally-implemented thing/operation disclosure of clause 177, wherein said circuitry for capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

circuitry for acquiring an image from each image sensor of the more than one image sensors; and

circuitry for combining the acquired images from the more than one image sensors into the scene.
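The acquire-and-combine step of clause 193 could, under simplifying assumptions (identically sized, non-overlapping tiles in row-major order), be sketched as below; real arrays would typically overlap and be stitched with registration and blending, as clauses 194 and 195 contemplate:

```python
import numpy as np

def combine_grid(tiles, rows, cols):
    """Assemble per-sensor images (all the same shape) into one scene.

    tiles: list of rows*cols arrays, ordered row-major, one per sensor.
    A simplified abutting-grid combination for illustration only.
    """
    h, w = tiles[0].shape[:2]
    scene = np.zeros((rows * h, cols * w) + tiles[0].shape[2:],
                     dtype=tiles[0].dtype)
    for idx, tile in enumerate(tiles):
        r, c = divmod(idx, cols)          # row-major placement in the grid
        scene[r*h:(r+1)*h, c*w:(c+1)*w] = tile
    return scene

# Twelve hypothetical sensors in a 3x4 grid (tiny stand-in tiles here).
tiles = [np.full((2, 3), i) for i in range(12)]
scene = combine_grid(tiles, rows=3, cols=4)
print(scene.shape)  # (6, 12)
```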

194. The computationally-implemented thing/operation disclosure of clause 193, wherein said circuitry for acquiring an image from each image sensor of the more than one image sensors comprises:

circuitry for acquiring images from each image sensor of the more than one image sensors, wherein each acquired image at least partially overlaps at least one other image.

195. The computationally-implemented thing/operation disclosure of clause 194, wherein said circuitry for acquiring images from each image sensor of the more than one image sensors, wherein each acquired image at least partially overlaps at least one other image comprises:

circuitry for acquiring the image from each image sensor of the more than one image sensors, wherein each acquired image at least partially overlaps at least one other image captured by an adjacent image sensor.

196. The computationally-implemented thing/operation disclosure of clause 177, wherein said circuitry for capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

circuitry for capturing the scene that includes one or more images, through use of the array of more than one image sensor, wherein a size of the scene is greater than a capacity to transmit the scene.

197. The computationally-implemented thing/operation disclosure of clause 196, wherein said circuitry for capturing the scene that includes one or more images, through use of the array of more than one image sensor, wherein a size of the scene is greater than a capacity to transmit the scene comprises:

circuitry for capturing the scene that includes one or more images, through use of the array of more than one image sensor, wherein an amount of image data captured exceeds a bandwidth for transmitting the image data to a remote location.

198. The computationally-implemented thing/operation disclosure of clause 197, wherein said circuitry for capturing the scene that includes one or more images, through use of the array of more than one image sensor, wherein an amount of image data captured exceeds a bandwidth for transmitting the image data to a remote location comprises:

circuitry for capturing the scene that includes one or more images, through use of the array of more than one image sensor, wherein the amount of image data captured exceeds the bandwidth for transmitting the image data to the remote

location by a factor of ten.

199. The computationally-implemented thing/operation disclosure of clause 177, wherein said circuitry for capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

circuitry for capturing a scene of a tourist destination that includes one or more images, through use of an array of more than one image sensor.

200. The computationally-implemented thing/operation disclosure of clause 177, wherein said circuitry for capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

circuitry for capturing a scene of a highway across a bridge that includes one or more images, through use of an array of more than one image sensor, wherein the one or more images include one or more images of cars crossing the highway across the bridge.

201. The computationally-implemented thing/operation disclosure of clause 177, wherein said circuitry for capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

circuitry for capturing a scene of a home that includes one or more images, through use of an array of more than one image sensor, wherein the one or more images include an image of an appliance in the home.

202. The computationally-implemented thing/operation disclosure of clause 177, wherein said circuitry for capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

circuitry for capturing the scene that includes one or more images, through use of a grouping of more than one image sensor.

203. The computationally-implemented thing/operation disclosure of clause 177, wherein said circuitry for capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

circuitry for capturing the scene that includes one or more images, through use of multiple image sensors whose data is directed to a common source.

204. The computationally-implemented thing/operation disclosure of clause 177, wherein said circuitry for capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

circuitry for capturing the scene that includes one or more images, through use of an array of more than one image sensor, the more than one image sensor including one or more of a charge-coupled device and a complementary metal-oxide-semiconductor device.

205. The computationally-implemented thing/operation disclosure of clause 177, wherein said circuitry for capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

circuitry for capturing the scene that includes image data and sound data, through use of an array of more than one image sensor, wherein the array of more than one image sensor is configured to capture image data and sound data.

206. The computationally-implemented thing/operation disclosure of clause 205, wherein said circuitry for capturing the scene that includes image data and sound data, through use of an array of more than one image sensor, wherein the array of more than one image sensor is configured to capture image data and sound data comprises:

circuitry for capturing the scene that includes image data and sound data, through use of the array of more than one image sensor, wherein the array of more than one image sensor includes image sensors configured to capture image data and microphones configured to capture sound data.

207. The computationally-implemented thing/operation disclosure of clause 177, wherein said circuitry for capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

circuitry for capturing the scene that includes soundwave image data, through use of the array of more than one soundwave image sensor that captures soundwave data.

208. The computationally-implemented thing/operation disclosure of clause 177, wherein said circuitry for capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

circuitry for capturing a scene that includes video data, through use of an array of more than one video capture device.

209. The computationally-implemented thing/operation disclosure of clause 177, wherein said circuitry for selecting a particular portion of the scene that includes at least one image, wherein the selected particular portion is smaller than the scene comprises:

circuitry for selecting the particular image from the scene, wherein the selected particular image represents the request for the image that is smaller than the entire scene.

210. The computationally-implemented thing/operation disclosure of clause 209, wherein said circuitry for selecting the particular image from the scene, wherein the selected particular image represents the request for the image that is smaller than the entire scene comprises:

circuitry for selecting the particular image from the scene, wherein the selected particular image represents a particular remote user-requested image that is smaller than the scene.

211. The computationally-implemented thing/operation disclosure of clause 177, wherein said circuitry for selecting a particular portion of the scene that includes at least one image, wherein the selected particular portion is smaller than the scene comprises:

circuitry for selecting the particular portion of the scene that includes a requested image, wherein the selected particular portion is smaller than the scene.

212. The computationally-implemented thing/operation disclosure of clause 211, wherein said circuitry for selecting the particular portion of the scene that includes a requested image, wherein the selected particular portion is smaller than the scene comprises:

circuitry for selecting the particular portion of the scene that includes an image requested by a remote operator of the array of more than one image sensors, wherein the selected particular portion is smaller than the scene.

213. The computationally-implemented thing/operation disclosure of clause 177, wherein said circuitry for selecting a particular portion of the scene that includes at least one image, wherein the selected particular portion is smaller than the scene comprises:

circuitry for receiving a request for a particular image; and

circuitry for selecting the particular image from the scene, wherein the particular image is smaller than the scene.

214. The computationally-implemented thing/operation disclosure of clause 177, wherein said circuitry for selecting a particular portion of the scene that includes at least one image, wherein the selected particular portion is smaller than the scene comprises:

circuitry for receiving a first request for a first particular image and a second request for a second particular image; and

circuitry for selecting the particular portion of the scene that includes the first particular image and the second particular image, wherein the particular portion of the scene is smaller than the scene.

215. The computationally-implemented thing/operation disclosure of clause 214, wherein said circuitry for receiving a first request for a first particular image and a second request for a second particular image comprises:

circuitry for receiving the first request for the first particular image and the second request for the second particular image, wherein the first particular image and the second particular image do not overlap.

216. The computationally-implemented thing/operation disclosure of clause 214, wherein said circuitry for receiving a first request for a first particular image and a second request for a second particular image comprises:

circuitry for receiving a first request for a first particular image and a second request for a second particular image, wherein the first particular image and the second particular image overlap at an overlapping portion.

217. The computationally-implemented thing/operation disclosure of clause 216, wherein said circuitry for receiving a first request for a first particular image and a second request for a second particular image, wherein the first particular image and the second particular image overlap at an overlapping portion comprises:

circuitry for receiving the first request for the first particular image and the second request for the second particular image, wherein the first particular image and the second particular image overlap at the overlapping portion, and wherein the overlapping portion is configured to be transmitted once only.
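One plausible reading of the "transmitted once only" behavior recited in clause 217 is tile-level deduplication of overlapping requests, sketched below; the tile size and function names are assumptions for illustration, not the mechanism the application necessarily intends:

```python
def tiles_for_requests(requests, tile=16):
    """Resolve overlapping rectangle requests to a set of tiles.

    requests: iterable of (row0, col0, row1, col1) pixel rectangles.
    Returns the set of (tile_row, tile_col) indices covering all requests;
    a tile shared by two requests appears once, so its pixels are sent once.
    """
    wanted = set()
    for r0, c0, r1, c1 in requests:
        for tr in range(r0 // tile, (r1 - 1) // tile + 1):
            for tc in range(c0 // tile, (c1 - 1) // tile + 1):
                wanted.add((tr, tc))
    return wanted

first = (0, 0, 32, 32)
second = (16, 16, 48, 48)          # overlaps the first request
print(len(tiles_for_requests([first, second])))  # 7 tiles, not 8: overlap sent once
```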

218. The computationally-implemented thing/operation disclosure of clause 177, wherein said circuitry for selecting a particular portion of the scene that includes at least one image, wherein the selected particular portion is smaller than the scene comprises:

circuitry for selecting the particular portion of the scene that includes at least one image, wherein the selected particular portion is smaller than the scene and contains a particular image object.

219. The computationally-implemented thing/operation disclosure of clause 218, wherein said circuitry for selecting the particular portion of the scene that includes at least one image, wherein the selected particular portion is smaller than the scene and contains a particular image object comprises:

circuitry for selecting the particular portion of the scene that includes at least one image, wherein the selected particular portion is smaller than the scene and contains an image object of a person.

220. The computationally-implemented thing/operation disclosure of clause 218, wherein said circuitry for selecting the particular portion of the scene that includes at least one image, wherein the selected particular portion is smaller than the scene and contains a particular image object comprises:

circuitry for selecting the particular portion of the scene that includes at least one image, wherein the selected particular portion is smaller than the scene and contains an image object of a car.

221. The computationally-implemented thing/operation disclosure of clause 177, wherein said circuitry for selecting a particular portion of the scene that includes at least one image, wherein the selected particular portion is smaller than the scene comprises:

circuitry for selecting the particular portion of the scene that includes at least one image, wherein a size of the particular portion of the scene is at least partially based on a characteristic of a requesting device.

222. The computationally-implemented thing/operation disclosure of clause 221, wherein said circuitry for selecting the particular portion of the scene that includes at least one image, wherein a size of the particular portion of the scene is at least partially based on a characteristic of a requesting device comprises:

circuitry for selecting the particular portion of the scene that includes at least one image, wherein the size of the particular portion of the scene is at least partially based on a screen resolution of the requesting device.

223. The computationally-implemented thing/operation disclosure of clause 221, wherein said circuitry for selecting the particular portion of the scene that includes at least one image, wherein a size of the particular portion of the scene is at least partially based on a characteristic of a requesting device comprises:

circuitry for selecting the particular portion of the scene that includes at least one image, wherein the size of the particular portion of the scene is at least partially based on a combined size of screens of at least one requesting device.

224. The computationally-implemented thing/operation disclosure of clause 177, wherein said circuitry for transmitting only the selected particular portion from the scene to a remote location comprises:

circuitry for transmitting only the selected particular portion to a remote server.

225. The computationally-implemented thing/operation disclosure of clause 224, wherein said circuitry for transmitting only the selected particular portion to a remote server comprises:

circuitry for transmitting only the selected particular portion to the remote server that is configured to receive one or more requests for one or more particular images from the scene.

226. The computationally-implemented thing/operation disclosure of clause 225, wherein said circuitry for transmitting only the selected particular portion to the remote server that is configured to receive one or more requests for one or more particular images from the scene comprises:

circuitry for transmitting only the selected particular portion to the remote server that is configured to receive multiple requests from discrete users for multiple particular images from the scene.

227. The computationally-implemented thing/operation disclosure of clause 224, wherein said circuitry for transmitting only the selected particular portion to a remote server comprises:

circuitry for transmitting only the selected particular portion to the remote server that requested the selected particular portion from the image.

228. The computationally-implemented thing/operation disclosure of clause 177, wherein said circuitry for transmitting only the selected particular portion from the scene to a remote location comprises:

circuitry for transmitting only the selected particular portion from the scene at a

particular resolution.

229. The computationally-implemented thing/operation disclosure of clause 228, wherein said circuitry for transmitting only the selected particular portion from the scene at a particular resolution comprises:

circuitry for determining an available bandwidth for transmission to the remote location; and

circuitry for transmitting only the selected particular portion from the scene at the particular resolution that is at least partially based on the determined available bandwidth.
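The bandwidth-adaptive resolution choice of clause 229 might reduce to arithmetic like the following sketch, which assumes an uncompressed pixel stream (compression would change the numbers) and whose function and parameter names are illustrative:

```python
def pick_resolution(native_wh, bytes_per_pixel, frame_rate_hz, bandwidth_bps):
    """Scale the transmitted resolution down until it fits the link.

    Returns (width, height) no larger than native that keeps the raw
    pixel stream within the determined available bandwidth.
    """
    w, h = native_wh
    budget_bytes = bandwidth_bps / 8 / frame_rate_hz  # bytes available per frame
    need = w * h * bytes_per_pixel                    # bytes required per frame
    if need <= budget_bytes:
        return w, h
    scale = (budget_bytes / need) ** 0.5              # shrink both axes equally
    return max(1, int(w * scale)), max(1, int(h * scale))

# A 4K selection over a 25 Mbit/s link at 30 fps, 3 bytes per pixel:
print(pick_resolution((3840, 2160), 3, 30, 25_000_000))
```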

230. The computationally-implemented thing/operation disclosure of clause 228, wherein said circuitry for transmitting only the selected particular portion from the scene at a particular resolution comprises:

circuitry for transmitting only the selected particular portion from the scene at the particular resolution, wherein the particular resolution is less than a resolution at which the scene was captured.

231. The computationally-implemented thing/operation disclosure of clause 228, wherein said circuitry for transmitting only the selected particular portion from the scene at a particular resolution comprises:

circuitry for transmitting only the selected particular portion from the scene at the particular resolution, wherein the particular resolution is less than a resolution at which the selected particular portion was captured.

232. The computationally-implemented thing/operation disclosure of clause 177, wherein said circuitry for transmitting only the selected particular portion from the scene to a remote location comprises:

circuitry for transmitting a first segment of the selected particular portion at a first resolution, to the remote location; and

circuitry for transmitting a second segment of the selected particular portion at a second resolution that is higher than the first resolution, to the remote location.

233. The computationally-implemented thing/operation disclosure of clause 232, wherein said circuitry for transmitting a first segment of the selected particular portion at a first resolution, to the remote location comprises:

circuitry for transmitting the first segment of the selected particular portion at the first resolution, wherein the first segment of the selected particular portion surrounds the second segment of the selected particular portion.

234. The computationally-implemented thing/operation disclosure of clause 232, wherein said circuitry for transmitting a first segment of the selected particular portion at a first resolution, to the remote location comprises:

circuitry for transmitting the first segment of the selected particular portion at the first resolution, wherein the first segment of the selected particular portion borders the second segment of the selected particular portion.

235. The computationally-implemented thing/operation disclosure of clause 232, wherein said circuitry for transmitting a first segment of the selected particular portion at a first resolution, to the remote location comprises:

circuitry for transmitting the first segment of the selected particular portion at the first resolution, wherein the first segment of the selected particular portion is determined at least in part by a content of the selected particular portion.

236. The computationally-implemented thing/operation disclosure of clause 235, wherein said circuitry for transmitting the first segment of the selected particular portion at the first resolution, wherein the first segment of the selected particular portion is determined at least in part by a content of the selected particular portion comprises:

circuitry for transmitting the first segment of the selected particular portion at the first resolution, wherein the first segment of the selected particular portion is an area that does not contain an item of interest.

237. The computationally-implemented thing/operation disclosure of clause 236, wherein said circuitry for transmitting the first segment of the selected particular portion at the first resolution,

wherein the first segment of the selected particular portion is an area that does not contain an item of interest comprises:

circuitry for transmitting the first segment of the selected particular portion at the first resolution, wherein the first segment of the selected particular portion is an area that does not contain a selected person of interest.

238. The computationally-implemented thing/operation disclosure of clause 236, wherein said circuitry for transmitting the first segment of the selected particular portion at the first resolution, wherein the first segment of the selected particular portion is an area that does not contain an item of interest comprises:

circuitry for transmitting the first segment of the selected particular portion at the first resolution, wherein the first segment of the selected particular portion is an area that does not contain a selected object designated for tracking.

239. The computationally-implemented thing/operation disclosure of clause 232, wherein said circuitry for transmitting a second segment of the selected particular portion at a second resolution that is higher than the first resolution, to the remote location comprises:

circuitry for transmitting the second segment of the selected particular portion at the second resolution that is higher than the first resolution, wherein the second segment of the selected particular portion represents an area selected by a user, and the first segment of the selected particular portion represents a border area that borders the area selected by the user.

240. The computationally-implemented thing/operation disclosure of clause 232, wherein said circuitry for transmitting a second segment of the selected particular portion at a second resolution that is higher than the first resolution, to the remote location comprises:

circuitry for transmitting the second segment of the selected particular portion at the second resolution that is higher than the first resolution, wherein the second segment of the selected particular portion is surrounded by the first segment of the selected particular portion.
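Clauses 232 through 240 describe a two-tier transmission in which a full-resolution segment is surrounded by a coarser border segment. A simplified sketch follows, using plain decimation for the coarse tier (a production system would low-pass filter before downsampling; the names and the 4x factor are assumptions):

```python
import numpy as np

def two_tier_payload(portion: np.ndarray, inner: tuple, border_downscale: int = 4):
    """Split a selected portion into a full-resolution inner segment and a
    decimated coarse segment covering the whole portion.

    inner: (row0, col0, row1, col1) of the user-selected high-priority area.
    The receiver would display the coarse layer and overlay the inner
    segment at full resolution.
    """
    r0, c0, r1, c1 = inner
    high = portion[r0:r1, c0:c1].copy()      # second segment: full resolution
    low = portion[::border_downscale, ::border_downscale].copy()  # first segment
    return {"inner_box": inner, "inner": high, "surround": low}

portion = np.arange(64 * 64, dtype=np.uint16).reshape(64, 64)
payload = two_tier_payload(portion, inner=(24, 24, 40, 40))
print(payload["inner"].shape, payload["surround"].shape)  # (16, 16) (16, 16)
```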

241. The computationally-implemented thing/operation disclosure of clause 232, wherein said circuitry for transmitting a second segment of the selected particular portion at a second resolution that is higher than the first resolution, to the remote location comprises:

circuitry for transmitting the second segment of the selected particular portion at the second resolution that is higher than the first resolution, wherein the second segment of the selected particular portion is determined to contain a selected item of interest.

242. The computationally-implemented thing/operation disclosure of clause 241, wherein said circuitry for transmitting the second segment of the selected particular portion at the second resolution that is higher than the first resolution, wherein the second segment of the selected particular portion is determined to contain a selected item of interest comprises:

circuitry for transmitting the second segment of the selected particular portion at the second resolution that is higher than the first resolution, wherein the second segment of the selected particular portion is determined to contain a selected football player.

243. The computationally-implemented thing/operation disclosure of clause 241, wherein said circuitry for transmitting the second segment of the selected particular portion at the second resolution that is higher than the first resolution, wherein the second segment of the selected particular portion is determined to contain a selected item of interest comprises:

circuitry for transmitting the second segment of the selected particular portion at the second resolution that is higher than the first resolution, wherein the second segment of the selected particular portion is determined to contain a selected landmark.

244. The computationally-implemented thing/operation disclosure of clause 241, wherein said circuitry for transmitting the second segment of the selected particular portion at the second resolution that is higher than the first resolution, wherein the second

segment of the selected particular portion is determined to contain a selected item of interest comprises:

circuitry for transmitting the second segment of the selected particular portion at the second resolution that is higher than the first resolution, wherein the second segment of the selected particular portion is determined to contain a selected animal for observation.

245. The computationally-implemented thing/operation disclosure of clause 241, wherein said circuitry for transmitting the second segment of the selected particular portion at the second resolution that is higher than the first resolution, wherein the second segment of the selected particular portion is determined to contain a selected item of interest comprises:

circuitry for transmitting the second segment of the selected particular portion at the second resolution that is higher than the first resolution, wherein the second segment of the selected particular portion is determined to contain a selected object in a dwelling.

246. The computationally-implemented thing/operation disclosure of clause 177, wherein said circuitry for de-emphasizing pixels from the scene that are not part of the selected particular portion of the scene comprises:

circuitry for transmitting only pixels associated with the selected particular portion to the remote location.

247. The computationally-implemented thing/operation disclosure of clause 177, wherein said circuitry for de-emphasizing pixels from the scene that are not part of the selected particular portion of the scene comprises:

circuitry for transmitting only the selected particular portion from the scene to a particular device that requested the selected particular portion.

248. The computationally-implemented thing/operation disclosure of clause 247, wherein said circuitry for transmitting only the selected particular portion from the scene to a particular thing/operation disclosure that requested the selected particular portion comprises:

circuitry for transmitting only the selected particular portion from the scene to a

particular device operated by a user that requested the selected particular portion.

249. The computationally-implemented thing/operation disclosure of clause 247, wherein said circuitry for transmitting only the selected particular portion from the scene to a particular thing/operation disclosure that requested the selected particular portion comprises:

circuitry for transmitting only the selected particular portion from the scene to a particular device that requested the selected particular portion through selection of the particular portion from the scene.

250. The computationally-implemented thing/operation disclosure of clause 177, wherein said circuitry for de-emphasizing pixels from the scene that are not part of the selected particular portion of the scene comprises:

circuitry for deleting pixels from the scene that are not part of the selected particular portion of the scene.

251. The computationally-implemented thing/operation disclosure of clause 177, wherein said circuitry for de-emphasizing pixels from the scene that are not part of the selected particular portion of the scene comprises:

circuitry for indicating that pixels from the scene that are not part of the selected particular portion of the scene are not transmitted with the selected particular portion.

252. The computationally-implemented thing/operation disclosure of clause 251, wherein said circuitry for indicating that pixels from the scene that are not part of the selected particular portion of the scene are not transmitted with the selected particular portion comprises:

circuitry for appending data to pixels from the scene that are not part of the selected portion of the scene, said appended data configured to indicate non-transmission of the pixels to which data was appended.

253. The computationally-implemented thing/operation disclosure of clause 177, wherein said circuitry for de-emphasizing pixels from the scene that are not part of the

selected particular portion of the scene comprises:

circuitry for discarding pixels from the scene that are not part of the selected particular portion of the scene.

254. The computationally-implemented thing/operation disclosure of clause 177, wherein said circuitry for de-emphasizing pixels from the scene that are not part of the selected particular portion of the scene comprises:

circuitry for denying retention of pixels from the scene that are not part of the selected particular portion of the scene.

255. The computationally-implemented thing/operation disclosure of clause 254, wherein said circuitry for denying retention of pixels from the scene that are not part of the selected particular portion of the scene comprises:

circuitry for retaining a subset of pixels from the scene that are not part of the selected particular portion of the scene.

256. The computationally-implemented thing/operation disclosure of clause 255, wherein said circuitry for retaining a subset of pixels from the scene that are not part of the selected particular portion of the scene comprises:

circuitry for retaining ten percent of the pixels from the scene that are not part of the selected particular portion of the scene.

257. The computationally-implemented thing/operation disclosure of clause 255, wherein said circuitry for retaining a subset of pixels from the scene that are not part of the selected particular portion of the scene comprises:

circuitry for retaining pixels from the scene that have been identified as part of a targeted object that are not part of the selected particular portion of the scene.

258. The computationally-implemented thing/operation disclosure of clause 257, wherein said circuitry for retaining pixels from the scene that have been identified as part of a targeted object that are not part of the selected particular portion of the scene comprises:

circuitry for retaining pixels from the scene that have been identified as a scenic landmark of interest that is not part of the selected particular portion of the scene.

259. The computationally-implemented thing/operation disclosure of clause 255, wherein said circuitry for retaining a subset of pixels from the scene that are not part of the selected particular portion of the scene comprises:

circuitry for retaining pixels from the scene that have been identified as part of a targeted object through automated pattern recognition that are not part of the selected particular portion of the scene.

260. The computationally-implemented thing/operation disclosure of clause 177, wherein said circuitry for de-emphasizing pixels from the scene that are not part of the selected particular portion of the scene comprises:

circuitry for storing pixels from the scene that are not part of the selected particular portion of the scene in a separate storage.

261. The computationally-implemented thing/operation disclosure of clause 260, wherein said circuitry for storing pixels from the scene that are not part of the selected particular portion of the scene in a separate storage comprises:

circuitry for storing pixels from the scene that are not part of the selected particular portion of the scene in a separate storage that is local to the array of more than one image sensor.

262. The computationally-implemented thing/operation disclosure of clause 260, wherein said circuitry for storing pixels from the scene that are not part of the selected particular portion of the scene in a separate storage comprises:

circuitry for storing pixels from the scene that are not part of the selected particular portion of the scene in a separate storage, wherein pixels stored in the separate storage are configured to be transmitted to the remote location separately from the selected particular portion of the scene.

263. The computationally-implemented thing/operation disclosure of clause 262, wherein said circuitry for storing pixels from the scene that are not part of the selected particular portion of the scene in a separate storage, wherein pixels stored in the separate storage are configured to be transmitted to the remote location separately from the selected particular portion of the scene comprises:

circuitry for storing pixels from the scene that are not part of the selected particular portion of the scene in a separate storage, wherein pixels stored in the separate storage are configured to be transmitted to the remote location at a time when there are no selected particular portions to be transmitted.

264. The computationally-implemented thing/operation disclosure of clause 262, wherein said circuitry for storing pixels from the scene that are not part of the selected particular portion of the scene in a separate storage, wherein pixels stored in the separate storage are configured to be transmitted to the remote location separately from the selected particular portion of the scene comprises:

circuitry for storing pixels from the scene that are not part of the selected particular portion of the scene in a separate storage, wherein pixels stored in the separate storage are configured to be separately transmitted to the remote location based on assigning a lower priority to the stored pixels.

265. A thing/operation disclosure, comprising:

a signal-bearing medium bearing:

one or more instructions for capturing a scene that includes one or more images, through use of an array of more than one image sensor;

one or more instructions for selecting a particular portion of the scene that includes at least one image, wherein the selected particular portion is smaller than the scene;

one or more instructions for transmitting only the selected particular portion from the scene to a remote location; and

one or more instructions for de-emphasizing pixels from the scene that are not part of the selected particular portion of the scene.

266. A thing/operation disclosure defined by a computational language comprising:

one or more interchained physical machines ordered for capturing a scene that includes one or more images, through use of an array of more than one image sensor;

one or more interchained physical machines ordered for selecting a particular portion of the scene that includes at least one image, wherein the selected particular portion is smaller than the scene;

one or more interchained physical machines ordered for transmitting only the selected particular portion from the scene to a remote location; and

one or more interchained physical machines ordered for de-emphasizing pixels from the scene that are not part of the selected particular portion of the scene.

STRUCTURED DISCLOSURE DIRECTED TOWARD ONE OF SKILL IN THE ART [END]

This Roman Numeral Section, And the Corresponding Figures, Were Copied From a Pending United States Application into this PCT

Application in View of Meeting Bar Date, So Each Figure Under this Roman Numeral Section Should Be Read "As If Having Roman Numeral Denotation Sufficient to Distinguish from Other Figures Copied From Other Applications in View of Bar Date," Such Clerical Issues to Be Cured by Subsequent Amendment.

Devices, Methods and Systems for Visual Imaging Arrays


BRIEF DESCRIPTION OF THE FIGURES

High-Level System Architecture

[00142] Fig. 1, including Figs. 1-A through 1-AN, shows partial views that, when assembled, form a complete view of an entire system, of which at least a portion will be described in more detail. An overview of the entire system of Fig. 1 is now described herein, with a more specific reference to at least one subsystem of Fig. 1 to be described later with respect to Figs. 2-14D.

[00143] Fig. 1 shows various implementations of the overall system. At a high level, Fig. 1 shows various implementations of a multiple user video imaging array (hereinafter

interchangeably referred to as a "MUVIA"). It is noted that the designation "MUVIA" is merely shorthand and descriptive of an exemplary embodiment, and not a limiting term. Although "multiple user" appears in the name MUVIA, multiple users or even a single user are not required. Further, "video" is used in the designation "MUVIA," but MUVIA systems also may capture still images, multiple images, audio data, electromagnetic waves outside the visible spectrum, and other data as will be described herein. Further, "imaging array" may be used in the MUVIA designation, but the image sensor in MUVIA is not necessarily an array or even multiple sensors (although commonly implemented as larger groups of image sensors, single-sensor implementations are also contemplated), and "array" here does not necessarily imply any specific structure, but rather any grouping of one or more sensors.

[00144] Generally, although not necessarily required, a MUVIA system may include one or more of a user device (e.g., hereinafter interchangeably referred to as a "client device," in recognition that a user may not necessarily be a human, living, or organic), a server, and an image sensor array. A "server" in the context of this application may refer to any device, program, or module that is not directly connected to the image sensor array or to the client device, including any and all "cloud" storage, applications, and/or processing.

[00145] For example, in an embodiment, e.g., as shown in Fig. 1-A, Fig. 1-K, Fig. 1-U, Fig. 1-AE, and Fig. 1-AF, the system may include one or more of image sensor array 3200, array local storage and processing module 3300, server 4000, and user device 5200. Each of these portions will be discussed in more detail herein.

[00146] Referring now to Fig. 1-A, Fig. 1-A depicts user device 5200, which is a device that may be operated or controlled by a user of a MUVIA system. It is noted here that "user" is merely provided as a designation for ease of understanding, and does not imply control by a human or other organism, sentient or otherwise. In an embodiment, for example, in a security-type embodiment, the user device 5200 may be mostly or completely unmonitored, or may be monitored by an artificial intelligence, or by a combination of artificial intelligence, pseudo-artificial intelligence (that is, intelligence amplification), and human intelligence.

[00147] User device 5200 may be, but is not limited to, a wearable device (e.g., glasses, goggles, headgear, a watch, clothing), an implant (e.g., a retinal-implant display), a computer of any kind (e.g., a laptop computer, desktop computer, mainframe, server, etc.), a tablet or other portable device, a phone or other similar device (e.g., smartphone, personal digital assistant), a personal electronic device (e.g., music player, CD player), a home appliance (e.g., a television, a refrigerator, or any other so-called "smart" device), a piece of office equipment (e.g., a copier, scanner, fax device, etc.), a camera or other camera-like device, a video game system, an entertainment/media center, or any other electrical equipment that has a functionality of presenting an image, whether visually, e.g., on a screen, or through other sensory-stimulating means.

[00148] User device 5200 may be capable of presenting an image, which, for purposes of clarity and conciseness, will be referred to as displaying an image, although the image may be communicated through forms other than generating light waves in the visible light spectrum, and the image is not required to be presented at all times or even at all. For example, in an embodiment, user device 5200 may receive images from server 4000 (or directly from the image sensor array 3200, as will be discussed herein), and may store the images for later viewing, or for processing internally, or for any other reason.

[00149] Referring again to Fig. 1-A, in an embodiment, user device 5200 may include a user selection accepting module 5210. User selection accepting module 5210 may be configured to receive user input about what the user wants to see. For example, as shown in Fig. 1-A in the exemplary interface 5212, the user selection accepting module 5210

may show an image from image sensor array 3200, and the user may "pan" and "zoom" the image using any known interface, including, but not limited to, keyboard, mouse, touch, haptic, augmented reality interface, voice command, and nonverbal motion commands (e.g., as part of a video game system interface, e.g., the Microsoft Kinect). It is noted, as will be discussed in more detail herein, that the camera itself is not "zooming" or "panning," because the camera does not move. Rather, different pixels captured by the image sensor array 3200 are retained by the image sensor array 3200 and transmitted to the server 4000.
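Because panning and zooming here amount to choosing which captured pixels to keep, the selection can be reduced to window arithmetic. The following sketch (function and parameter names are assumptions, not from this application) maps a pan/zoom state to the pixel block to retain:

```python
def view_window(center, zoom, viewport, scene_wh):
    """Map a pan/zoom state to the block of array pixels to keep.

    center:   (x, y) pan position in scene pixels.
    zoom:     magnification; 2.0 keeps half the width/height of the
              viewport's 1:1 footprint in scene pixels.
    viewport: (width, height) of the requesting display.
    Returns (x0, y0, x1, y1), clamped to the scene bounds.
    """
    vw, vh = viewport
    sw, sh = scene_wh
    half_w, half_h = vw / (2 * zoom), vh / (2 * zoom)
    x0 = max(0, int(center[0] - half_w)); x1 = min(sw, int(center[0] + half_w))
    y0 = max(0, int(center[1] - half_h)); y1 = min(sh, int(center[1] + half_h))
    return x0, y0, x1, y1

# Zooming 2x on a 1280x720 viewport near the middle of a 12000x8000 scene:
print(view_window((6000, 4000), 2.0, (1280, 720), (12000, 8000)))
# (5680, 3820, 6320, 4180)
```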

[00150] In an embodiment, the user selection accepting module may accept a selection of a particular thing, e.g., a building, an animal, or any other object whose representation is present on the screen. Moreover, a user may use a text box to "search" the image for a particular thing, and processing, done at the user device 5200 or at the server 4000, may determine the image and the zoom level for viewing that thing. The search for a particular thing may include a generic search, e.g., "search for people" or "search for penguins," or a more specific search, e.g., "search for the Space Needle" or "search for the White House." The search for a particular thing may take on any known contextual search, e.g., an address, a text string, or any other input.

[00151] In an embodiment, the "user selection" facilitated by the user selection accepting module 5210 may not involve a user at all. For example, in an embodiment, e.g., in a security embodiment, the user selection may be handled completely by machine, and may include "select any portion of the image with movement," or "select any portion of the image in which a person is recognized," or "select any portion of the image in which a particular person, e.g., a person on the FBI most wanted list, is recognized."

[00152] Referring again to Fig. 1-A, in an embodiment, user device 5200 may include a user selection transmitting module 5220. The user selection transmitting module 5220 may take the user selection from user selection accepting module 5210, and transmit the selection to the server 4000. The transmission may include some pre-processing, for example, the user device 5200 may determine the size and parameters of the image prior to sending the request to the server 4000, or that processing may be handled by the server

4000. Following the thick-line arrow leftward from user selection transmitting module 5220 through to Fig. 1-K, the transmission goes to server 4000, as will be discussed herein. It is noted that the transmission to the server 4000 may also include data about the user device, for example, the screen resolution, the window size, the type of device, an identity of the user, a level of service the user has paid for (in embodiments in which such services are prioritized by the camera/server), other capabilities of the device, e.g., framerate, and the like.
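One plausible shape for such a request, carrying the device data described above, is sketched below; the application does not fix a wire format, so every field name here is an assumption:

```python
import json

# Hypothetical request built by the user selection transmitting module.
request = {
    "selection": {"x0": 5680, "y0": 3820, "x1": 6320, "y1": 4180},
    "device": {
        "screen_resolution": [1334, 750],  # lets the server cap the reply size
        "window_size": [1280, 720],
        "type": "smartphone",
        "max_framerate_hz": 30,
    },
    "user": {"id": "user-123", "service_tier": "standard"},
}
print(json.dumps(request))  # serialized and sent to server 4000
```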

[00153] Referring again to Fig. 1-A, Fig. 1-A also includes a selected image receiving module 5230 and a user selection presenting module 5240, which will be discussed in more detail herein, with respect to the dataflow of this embodiment.

[00154] Referring now to Fig. 1-K, Figs. 1-K and 1-U show an embodiment of a server 4000 that communicates with one or both of user device 5200 and array local storage and processing module 3300. Server 4000 may be a single computing device, or may be many computing devices, which may or may not be in proximity to each other.

[00155] Referring again to Fig. 1-K, server 4000 may include a user request reception module 4010. The user request reception module 4010 may receive the transmitted request from user selection transmitting module 5220. The user request reception module 4010 may then turn over processing to user request validation module 4020, which may perform, among other things, a check to make sure the user is not requesting more resolution than what their device can handle. For example, if the server has learned (e.g., through gathered information, or through information that was transmitted with the user request or in a same session as the user request) that the user is requesting a 1920x1080 resolution image, and the maximum resolution for the device is 1334x750, then the request will be modified so that no more than the maximum resolution that can be handled by the device is requested. In an embodiment, this may conserve the bandwidth required to transmit from the MUVIA to the server 4000 and/or the user device 5200.
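The validation step described above reduces to capping the requested size at the device's capability, e.g. as in this sketch (the real module may perform many more checks):

```python
def validate_request(requested_wh, device_max_wh):
    """Cap a requested image size at what the requesting device can display.

    A 1920x1080 request from a 1334x750 device is shrunk, aspect preserved,
    so no wasted pixels cross the lower-bandwidth link.
    """
    rw, rh = requested_wh
    mw, mh = device_max_wh
    scale = min(1.0, mw / rw, mh / rh)  # never upscale, only cap
    return int(rw * scale), int(rh * scale)

print(validate_request((1920, 1080), (1334, 750)))  # (1333, 750)
```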

[00156] Referring again to Fig. 1-K, in an embodiment, server 4000 may include a user request latency management module 4030. User request latency management module 4030 may, in conjunction with user device 5200, attempt to reduce the latency from the

time a specific image is requested by user device 5200 to the time the request is acted upon and data is transmitted to the user. The details for this latency management will be described in more detail herein, with varying techniques that may be carried out by any or all of the devices in the chain (e.g., user device, camera array, and server). As an example, in an embodiment, a lower resolution version of the image, e.g., one that is stored locally or on the server, may be sent to the user immediately upon the request, and then that image is updated with the actual image taken by the camera. In an embodiment, user request latency management module 4030 also may handle static gap-filling; that is, if the image captured by the camera is unchanging, e.g., has not changed for a particular period of time, then a new image need not be captured, and an older image, which may be stored on server 4000, may be transmitted to the user device 5200. This process also will be discussed in more detail herein.
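A minimal sketch of the static gap-filling idea, assuming a per-region frame cache and an externally supplied scene-change signal (both assumptions made for illustration):

```python
class LatencyManager:
    """Answer from a cached frame when the scene has not changed."""

    def __init__(self):
        self.cache = {}  # region -> most recently transmitted frame

    def respond(self, region, capture_fn, scene_changed: bool):
        cached = self.cache.get(region)
        if cached is not None and not scene_changed:
            return cached              # static gap-fill: reuse the older image
        frame = capture_fn(region)     # otherwise pay the capture latency
        self.cache[region] = frame
        return frame

mgr = LatencyManager()
fake_capture = lambda region: f"pixels:{region}"
print(mgr.respond((0, 0, 64, 64), fake_capture, scene_changed=True))
print(mgr.respond((0, 0, 64, 64), fake_capture, scene_changed=False))  # cached
```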

[00157] Referring now to Fig. 1-U, which shows more of server 4000, in an embodiment, server 4000 may include a consolidated user request transmission module 4040, which may be configured to consolidate all the user requests, perform any necessary pre-processing on those requests, and send the request for particular sets of pixels to the array local storage and processing module 3300. The process for consolidating the user requests and performing pre-processing will be described in more detail herein with respect to some of the other exemplary embodiments. In this embodiment, however, consolidated user request transmission module 4040 transmits the request (exiting leftward from Fig. 1-U and traveling downward to Fig. 1-AE, through a pathway identified in Fig. 1-AE as lower-bandwidth communication from remote server 3515). It is noted here that "lower bandwidth communication" does not necessarily mean "low bandwidth" or imply any specific number about the bandwidth; it is simply lower than the relatively higher bandwidth communication 3505 from the actual image sensor array to the array local storage and processing module 3300, which will be discussed in more detail herein.

[00158] Referring again to Fig. 1-U, server 4000 also may include requested pixel reception module 4050, user request preparation module 4060, and user request transmission module 4070 (shown in Fig. 1-T), which will be discussed in more detail herein, with respect to the dataflow of this embodiment.

[00159] Referring now to Figs. 1-AE and 1-AF, Figs. 1-AE and 1-AF show an image sensor array ("ISA") 3200 and an array local storage and processing module 3300, each of which will now be described in more detail.

[00160] Image sensor array 3200 may include one or more image sensors that may, in an embodiment, be statically pointed at a particular object or scene. Image sensor array 3200 may be a single image sensor, or more commonly, may be a group of individual image sensors 3201 that are combined to create a larger field of view. For example, in an embodiment, ten megapixel sensors may be used for each individual image sensor 3201. With twelve of these sensors, the effective field of view, loss-less zoom, and so forth may be increased substantially. These numbers are for example only, and any number of sensors and/or megapixel image sensor capacities may be used.

[00161] The use of many individual sensors may create a very large number of pixels captured for each exposure of the image sensor array 3200. Thus, these pixels are transmitted via a higher bandwidth communication 3505 to the array local storage and processing module 3300. In an embodiment, array local storage and processing module 3300 is integrated into the image sensor array 3200. In another embodiment, the array local storage and processing module 3300 is separate from, but directly connected (e.g., via a USB 3.0 cable) to, the image sensor array 3200. It is noted that "higher bandwidth communication 3505" does not require a specific amount of bandwidth, but only that the bandwidth for this communication is relatively higher than the bandwidth of the communication from the array local processing module 3300 to the remote server, which may be, but is not required to be, located further away temporally.

[00162] It is noted that, because of the large number of pixels captured by image sensor array 3200, mechanical changes to the image sensor array are not generally required, although such mechanical changes are not excluded from these embodiments. For example, because the array has a very large field of view, with very high resolution, "pan" and "zoom" functions may be handled optically, rather than by mechanically changing the focal point of the lenses or by physically pointing the array at a different location. This may reduce the complexity required of the device, and also may improve the speed at which different views may be generated by the image sensor array 3200.
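
Because "pan" and "zoom" here reduce to choosing which captured pixels to keep, the mapping can be sketched in a few lines. The following Python function is a minimal illustration, assuming a zoom factor of at least 1 and window sizes given in output pixels; none of these names appear in the figures.

```python
def view_window(scene_w, scene_h, center_x, center_y, zoom, out_w, out_h):
    """Map a pan/zoom request onto a pixel window of the fixed array.

    Because the array's full field of view is always captured, "pan"
    moves the window and "zoom" shrinks it; no lens or motor moves.
    Returns (left, top, width, height) clamped to the scene bounds."""
    zoom = max(zoom, 1.0)  # this sketch assumes zoom-in only
    win_w = int(out_w / zoom)
    win_h = int(out_h / zoom)
    left = min(max(center_x - win_w // 2, 0), scene_w - win_w)
    top = min(max(center_y - win_h // 2, 0), scene_h - win_h)
    return (left, top, win_w, win_h)

# Zooming 2x on a point of interest selects a half-size window that is
# later scaled up, all without repointing the array.
print(view_window(12000, 8000, 6000, 4000, 2.0, 1920, 1080))
# -> (5520, 3730, 960, 540)
```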

[00163] Referring again to Fig. 1-AE, the image sensor array 3200 may capture an image that is received by image capturing module 3305. Image capturing module 3305 may take the captured image and compare it to a consolidated user request, e.g., which is provided by a consolidated user request reception module 3310. Consolidated user request reception module 3310 may receive the communication from server 4000 regarding which pixels of the image have been requested. Through use of the consolidated user request and the captured image, pixel selection module 3320 may select the pixels that have been specifically requested by the user, and mark those pixels for transmission back to the server.

[00164] After the pixels to be kept are identified, the other pixels that are not to be kept are removed, e.g., decimated at unused pixel decimation module 3330. In an embodiment, these pixels are simply discarded, e.g., not stored in a long-term memory but instead routed to a digital trash 3317. In another embodiment, some or all of these pixels are stored in a local memory, e.g., local memory 3315. From here, these pixels may be transmitted to various locations at off-peak times, may be kept for image processing by the array local processing module 3300, or may be subject to other manipulations or processing separate from the user requests.
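
A minimal sketch of the keep/discard split described above follows; it assumes NumPy arrays for frames and a plain Python list standing in for local memory 3315, which are illustrative choices rather than elements of the disclosure.

```python
import numpy as np

def decimate_unused_pixels(frame, keep_rects, local_store=None):
    """Keep only the requested rectangles of a captured frame.

    frame      : H x W x 3 array straight from the sensor array.
    keep_rects : iterable of (left, top, width, height) selections.
    local_store: optional list standing in for local memory; when given,
                 the discarded remainder is parked there instead of
                 being dropped (the "digital trash" path).
    Returns the list of cropped pixel blocks to transmit."""
    kept = [frame[t:t + h, l:l + w].copy() for (l, t, w, h) in keep_rects]

    if local_store is not None:
        # Retain everything else for off-peak upload or later processing.
        mask = np.ones(frame.shape[:2], dtype=bool)
        for (l, t, w, h) in keep_rects:
            mask[t:t + h, l:l + w] = False
        local_store.append((mask, frame))
    # Otherwise the non-selected pixels simply go out of scope (trash).
    return kept
```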

[00165] Referring again to Fig. 1-AE, the selected pixels then may be transmitted to the server 4000 using selected pixel transmission module 3340. Selected pixel transmission module 3340 may include any transmission equipment necessary, e.g., cellular radio, wireless adapter, and the like, depending on the format of communication. In an embodiment, only those pixels which have been requested are transmitted to the server via lower-bandwidth communication 3510. Similarly to lower-bandwidth communication 3515, the lower-bandwidth communication 3510 does not refer to a specific amount of bandwidth, just that the amount of bandwidth is relatively lower than higher-bandwidth communication 3505.

[00166] It is noted that more pixels than what are specifically requested by the user may be transmitted, in certain embodiments. For example, the array local processing module 3300 may send pixels that border the user's requested area, but are outside the user's requested area. In an embodiment, as will be discussed herein, those pixels may be sent at a different resolution or using a different kind of compression. In another embodiment, the additional pixels may simply be sent in the same manner as the requested pixels. In still another embodiment, server 4000 may expand the user requested areas, so that array local processing module 3300 may send only the requested pixels, but the requested pixels cover more area than what the user originally requested. These additional pixels may be transmitted and "cached" by the server or local device, which may be used to decrease latency times, in a process that will be discussed more herein.
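
One way to realize the expanded-request variant is sketched below, for illustration only; the margin parameter and the clamping behavior are assumptions of this example.

```python
def expand_request(rect, margin, scene_w, scene_h):
    """Grow a requested rectangle by a border of `margin` pixels.

    The extra ring of pixels can be transmitted alongside the request
    (possibly at lower resolution) and cached so that small pans are
    served without a round trip to the sensor array."""
    left, top, w, h = rect
    new_left = max(left - margin, 0)
    new_top = max(top - margin, 0)
    new_right = min(left + w + margin, scene_w)
    new_bottom = min(top + h + margin, scene_h)
    return (new_left, new_top, new_right - new_left, new_bottom - new_top)

# A 100-pixel border around the user's window is fetched and cached.
print(expand_request((5520, 3730, 960, 540), 100, 12000, 8000))
# -> (5420, 3630, 1160, 740)
```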

[00167] Referring back to Fig. 1-U, the pixels transmitted from selected pixel transmission module 3340 of array local processing module 3300 may be received by server 4000, e.g., at requested pixel reception module 4050. Requested pixel reception module 4050 may receive the requested pixels and turn them over to user request preparation module 4060, which may "unpack" the requested pixels, e.g., determining which pixels go to which user, and at what resolutions, along with any post-processing, including image adjustment, adding in missing cached data, or adding additional data to the images (e.g., advertisements or other data). In an embodiment, server 4000 also may include a user request transmission module 4070, which may be configured to transmit the requested pixels back to the user device 5200.

[00168] Referring again to Fig. 1-A, user device 5200 may include a selected image receiving module 5230, which may receive the pixels that were sent by the server 4000, and user selection presenting module 5240, which may display the requested pixels to the user, e.g., by showing them on a screen of the device. In an embodiment, the display of the image may be carried out through the exemplary interface, which allows a cycle of user requests and new images to be shown as the user navigates through what is seen on the MUVIA, e.g., as shown in Fig. 1-A.

[00169] Figs. 1-B, 1-C, 1-M, 1-W, 1-AG, and 1-AH show another embodiment of the MUVIA system, in which multiple user devices 5510, 5520, and 5530 may request images captured by the same image sensor array 3200.

[00170] Referring now to Figs. 1-B and 1-C, user device 5510, user device 5520, and user device 5530 are shown. In an embodiment, user devices 5510, 5520, and 5530 may have some or all of the same components as user device 5200, but these components are not shown here for clarity and ease of understanding the drawing. For each of user devices 5510, 5520, and 5530, exemplary screen resolutions have been chosen. Nothing about these particular numbers is significant; they are merely chosen for exemplary purposes, and any other numbers could have been chosen in their place.

[00171] For example, in an embodiment, referring to Fig. 1-B, user device 5510 may have a screen resolution of 1920x1080 (e.g., colloquially referred to as "HD quality"). User device 5510 may send an image request to the server 4000, and may also send data regarding the screen resolution of the device.

[00172] Referring now to Fig. 1-C, user device 5520 may have a screen resolution of 1334x750. User device 5520 may send another image request to the server 4000, and, in an embodiment, instead of sending data regarding the screen resolution of the device, may send data that identifies what kind of device it is (e.g., an Apple-branded smartphone). Server 4000 may use this data to determine the screen resolution for user device 5520 through an internal database, or through contacting an external source, e.g., a manufacturer of the device or a third party supplier of data about devices.

[00173] Referring again to Fig. 1-C, user device 5530 may have a screen resolution of 640x480, and may send the request by itself to the server 4000, without any additional data. In addition, server 4000 may receive independent requests from various users to change their current viewing area on the device.

[00174] Referring now to Fig. 1-M, server 4000 may include user request reception module 4110. User request reception module 4110 may receive requests from multiple user devices, e.g., user devices 5510, 5520, and 5530. Server 4000 also may include an independent user view change request reception module 4115, which, in an embodiment, may be a part of user request reception module 4110, and may be configured to receive requests from users that are already connected to the system, to change the view of what they are currently seeing.

[00175] Referring again to Fig. 1-M, server 4000 may include relevant pixel selection module 4120 configured to combine the user selections into a single area, as shown in Fig. 1-M. It is noted that, in an embodiment, the different user devices may request areas that overlap each other. In this case, there may be one or more overlapping areas, e.g., overlapping areas 4122. In an embodiment, the overlapping areas are only transmitted once, in order to save data/transmission costs and increase efficiency.
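
The combination of selections, with overlapping areas counted once, can be illustrated with a simple tiling scheme; the tile size and the set-based bookkeeping below are assumptions of this sketch, not the disclosed implementation.

```python
def consolidate_requests(rects, tile=64):
    """Combine several users' rectangles into one set of tiles so that
    pixels inside overlapping areas are requested from the array once.

    Returns the set of (tile_x, tile_y) indices covering all requests;
    a tile shared by N users still appears exactly once."""
    tiles = set()
    for (left, top, w, h) in rects:
        for ty in range(top // tile, (top + h - 1) // tile + 1):
            for tx in range(left // tile, (left + w - 1) // tile + 1):
                tiles.add((tx, ty))
    return tiles

# Two heavily overlapping HD windows cost far less than two full fetches.
a = (0, 0, 1920, 1080)
b = (960, 540, 1920, 1080)
print(len(consolidate_requests([a])), len(consolidate_requests([a, b])))
# -> 510 915   (versus 1050 tiles if each request were fetched alone)
```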

[00176] Referring now to Fig. 1-W, server 4000 may include selected pixel transmission to ISA module 4130. Module 4130 may take the relevant selected pixels, and transmit them to the array local processing module 3400 of image sensor array 3200. Selected pixel transmission to ISA module 4130 may include communication components, which may be shared with other transmission and/or reception modules.

[00177] Referring now to Fig. 1-AG, array local processing module 3400 may communicate with image sensor array 3200. Similarly to Figs. 1-AE and 1-AF, Figs. 1-AG and 1-AH show array local processing module 3400 and image sensor array 3200, respectively.

[00178] Image sensor array 3200 may include one or more image sensors that may, in an embodiment, be statically pointed at a particular object or scene. Image sensor array 3200 may be a single image sensor, or more commonly, may be a group of individual image sensors 3201 that are combined to create a larger field of view. For example, in an embodiment, ten megapixel sensors may be used for each individual image sensor 3201. With twelve of these sensors, the effective field of view, loss-less zoom, and so forth may be increased substantially. These numbers are for example only, and any number of sensors and/or megapixel image sensor capacities may be used.

[00179] The use of many individual sensors may create a very large number of pixels captured for each exposure of the image sensor array 3200. Thus, these pixels are transmitted via a higher bandwidth communication 3505 to the array local storage and processing module 3400. In an embodiment, array local storage and processing module 3400 is integrated into the image sensor array. In another embodiment, the array local storage and processing module 3400 is separate from, but directly connected (e.g., via a USB 3.0 cable) to, the image sensor array 3200. It is noted that "higher bandwidth communication 3505" does not require a specific amount of bandwidth, but only that the bandwidth for this communication is relatively higher than the bandwidth of the communication from the array local processing module 3400 to the remote server, which may be, but is not required to be, located further away temporally.

[00180] It is noted that, because of the large number of pixels captured by image sensor array 3200, mechanical changes to the image sensor array are not generally required, although such mechanical changes are not excluded from these embodiments. For example, because the array has a very large field of view, with very high resolution, "pan" and "zoom" functions may be handled optically, rather than by mechanically changing the focal point of the lenses or by physically pointing the array at a different location. This may reduce the complexity required of the device, and also may improve the speed at which different views may be generated by the image sensor array 3200.

[00181] Referring again to Fig. 1-AG, the image sensor array 3200 may capture an image and compare it to a consolidated user request, e.g., which is provided by a consolidated user request reception module 3410. Consolidated user request reception module 3410 may receive the communication from server 4000 regarding which pixels of the image have been requested. Through use of the consolidated user request and the captured image, pixel selection module 3420 may select the pixels that have been specifically requested by the user, and mark those pixels for transmission back to the server.

[00182] After the pixels to be kept are identified, the other pixels that are not to be kept are removed, e.g., decimated at unused pixel decimation module 3430. In an embodiment, these pixels are simply discarded, e.g., not stored in a long-term memory but instead routed to a digital trash 3417. In another embodiment, some or all of these pixels are stored in a local memory, e.g., local memory 3415. From here, these pixels may be transmitted to various locations at off-peak times, may be kept for image processing by the array local processing module 3400, or may be subject to other manipulations or processing separate from the user requests.

[00183] Referring again to Fig. 1-AG, array local processing module 3400 may include flagged selected pixel transmission module 3440, which takes the pixels identified as requested (e.g., "flagged") and transmits them back to the server 4000 for further processing. As previously described, this transmission may utilize a lower-bandwidth channel, and module 3440 may include all necessary hardware to effect that lower-bandwidth transmission to server 4000.

[00184] Referring again to Fig. 1-W, the flagged selected pixel transmission module 3440 of array local processing module 3400 may transmit the flagged pixels to server 4000. Specifically, flagged selected pixel transmission module 3440 may transmit the pixels to flagged selected pixel reception from ISA module 4140 of server 4000, as shown in Fig. 1-W.

[00185] Referring again to Fig. 1-W, server 4000 also may include flagged selected pixel separation and duplication module 4150, which may, effectively, reverse the process of combining the pixels from the various selections, duplicating overlapping areas where necessary, and creating the requested images for each of the user devices that requested images. Flagged selected pixel separation and duplication module 4150 also may include the post-processing done to the image, including filling in cached versions of images, image adjustments based on the device preferences and/or the user preferences, and any other image post-processing.
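
As an illustrative counterpart to the consolidation step, the following sketch cuts each user's requested window back out of the consolidated block of pixels, duplicating overlapping areas once per requesting user; the coordinate conventions and names are assumptions of this example.

```python
import numpy as np

def split_consolidated(frame, origin, user_rects):
    """Reverse the consolidation step: cut each user's requested window
    back out of the single block of pixels returned by the array.

    frame      : pixels for the consolidated region (H x W x 3).
    origin     : (left, top) of that region within the full scene.
    user_rects : user_id -> (left, top, width, height) in scene
                 coordinates; each rect is assumed to lie inside frame.
    Overlapping areas are duplicated here, once per requesting user."""
    ox, oy = origin
    views = {}
    for user_id, (l, t, w, h) in user_rects.items():
        views[user_id] = frame[t - oy:t - oy + h, l - ox:l - ox + w].copy()
    return views
```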

[00186] Referring now to Fig. 1-M (as data flows "northward" from Fig. 1-W from module 4150), server 4000 may include pixel transmission to user device module 4160, which may be configured to transmit the pixels that have been separated out and processed to the specific users that requested the image. Pixel transmission to user device module 4160 may handle the transmission of images to the user devices 5510, 5520, and 5530. In an embodiment, pixel transmission to user device module 4160 may have some or all components in common with user request reception module 4110.

[00187] Following the arrow of data flow to the right and upward from module 4160 of server 4000, the requested user images arrive at user device 5510, user device 5520, and user device 5530, as shown in Figs. 1-B and 1-C. The user devices 5510, 5520, and 5530 may present the received images as previously discussed and/or as further discussed herein.

[00188] Referring again to Fig. 1, Figs. 1-E, 1-O, 1-Y, 1-AH, and 1-AI depict a MUVIA implementation according to an embodiment. In an embodiment, referring now to Fig. 1-E, a user device 5600 may include a target selection reception module 5610. Target selection reception module 5610 may be a component that allows the user to select a "target" from the image, that is, a point of interest from the image. For example, in the shown example, the MUVIA array is pointed at a football stadium, e.g., CenturyLink Field. As an example, a user may select one of the football players visible on the field as a "target." This may be facilitated by a target presentation module, e.g., target presentation module 5612, which may present one or more images (e.g., which may be various versions of images from MUVIA, at different resolutions or not up-to-date) from which the user may select the target, e.g., the football player.

[00189] In an embodiment, target selection reception module 5610 may include an audible target selection module 5614 which may be configured to allow the user to select a target using audible commands, without requiring physical interaction with a device.

[00190] Referring again to Fig. 1, e.g., Fig. 1-E, in an embodiment, user device 5600 may include selected target transmission module 5620. Selected target transmission module 5620 may be configured to take the target selected by the user, and transmit the selected target to the server 4000.

[00191] Referring now to Fig. 1-O, Fig. 1-O (and Fig. 1-Y to the direct "south" of Fig. 1-O) shows an embodiment of server 4000. For example, in an embodiment, server 4000 may include a selected target reception module 4210. In an embodiment, selected target reception module 4210 of server 4000 may receive the selected target from the user device 5600. The selected target data may take various formats, e.g., it may be image data, it may be metadata that identifies the selected target, it may be some other designation, e.g., an ID number, a tracking number, or a piece of information, like a license plate or a social security number. The selected target data may be an address or a physical description, or any other instantiation of data that can be used to identify something.

[00192] Referring again to Fig. 1-O, in an embodiment, server 4000 may include selected target identification module 4220, which may be configured to take the target data received by selected target reception module 4210 and determine an image that needs to be captured in order to obtain an image that contains the selected target (e.g., in the shown example, the football player). In an embodiment, selected target identification module 4220 may use images previously received (or, in an embodiment, current images) from the image sensor array 3200 to determine the parameters of an image that contains the selected target. For example, in an embodiment, lower-resolution images from the image sensor array 3200 may be transmitted to server 4000 for determining where the target is located within the image, and then specific requests for portions of the image may be transmitted to the image sensor array 3200, as will be discussed herein.

[00193] In an embodiment, server 4000 may perform processing on the selected target data, and/or on image data that is received, in order to create a request that is to be transmitted to the image sensor array 3200. In the given example, the selected target data identifies a football player. The server 4000, that is, selected target identification module 4220, may perform image recognition on one or more images captured from the image sensor array to determine a "sector" of the entire scene that contains the selected target. In another embodiment, the selected target identification module 4220 may use other, external sources of data to determine where the target is. In yet another embodiment, the selected target data may have been selected by the user directly from the scene displayed by the image sensor array, so such processing may not be necessary.

[00194] Referring again to Fig. 1-O, in an embodiment, server 4000 may include pixel information selection module 4230, which may select the pixels needed to capture the target, and which may determine the size of the image that should be transmitted from the image sensor array. The size of the image may be determined based on a type of target that is selected, one or more parameters (set by the user, by the device, or by the server, which may or may not be based on the selected target), by the screen resolution of the device, or by any other algorithm. Pixel information selection module 4230 may determine the pixels to be captured in order to express the target, and may update based on changes in the target's status (e.g., if the target is moving, e.g., in the football example, once a play has started and the football player is moving in a certain direction).
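
A minimal sketch of how such a window might be derived from a tracked target's bounding box follows; the padding heuristic and the screen-size floor are invented for illustration and are not taken from the disclosure.

```python
def target_capture_rect(target_bbox, pad_ratio, screen, scene):
    """Derive the pixel window to request around a tracked target.

    target_bbox : (left, top, width, height) of the detected target.
    pad_ratio   : how much context to keep around it (e.g., 3.0 means
                  the window is three times the target's size).
    screen      : (width, height) of the requesting device, used as a
                  floor so the window is never smaller than one screen.
    scene       : (width, height) of the full captured scene.
    Re-running this as the target moves yields the updated request."""
    l, t, w, h = target_bbox
    win_w = max(int(w * pad_ratio), screen[0])
    win_h = max(int(h * pad_ratio), screen[1])
    cx, cy = l + w // 2, t + h // 2
    left = min(max(cx - win_w // 2, 0), max(scene[0] - win_w, 0))
    top = min(max(cy - win_h // 2, 0), max(scene[1] - win_h, 0))
    return (left, top, win_w, win_h)
```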

[00195] Referring now to Fig. 1-Y, Fig. 1-Y includes more of server 4000 according to an embodiment. In an embodiment, server 4000 may include pixel information transmission to ISA module 4240. Pixel information transmission to ISA module 4240 may transmit the selected pixels to the array local processing module 3500 associated with image sensor array 3200.

[00196] Referring now to Figs. 1-AH and 1-AI, Fig. 1-AH depicts an image sensor array 3200, which in this example is pointed at a football stadium, e.g., CenturyLink Field. Image sensor array 3200 may include one or more image sensors that may, in an embodiment, be statically pointed at a particular object or scene. Image sensor array 3200 may be a single image sensor, or more commonly, may be a group of individual image sensors 3201 that are combined to create a larger field of view. For example, in an embodiment, ten megapixel sensors may be used for each individual image sensor 3201. With twelve of these sensors, the effective field of view, loss-less zoom, and so forth may be increased substantially. These numbers are for example only, and any number of sensors and/or megapixel image sensor capacities may be used.

[00197] The use of many individual sensors may create a very large number of pixels captured for each exposure of the image sensor array 3200. Thus, these pixels are transmitted via a higher bandwidth communication 3505 to the array local storage and processing module 3500. In an embodiment, array local storage and processing module 3500 is integrated into the image sensor array. In another embodiment, the array local storage and processing module 3500 is separate from, but directly connected (e.g., via a USB 3.0 cable) to, the image sensor array 3200. It is noted that "higher bandwidth communication 3505" does not require a specific amount of bandwidth, but only that the bandwidth for this communication is relatively higher than the bandwidth of the communication from the array local processing module 3500 to the remote server, which may be, but is not required to be, located further away temporally.

[00198] It is noted that, because of the large number of pixels captured by image sensor array 3200, mechanical changes to the image sensor array are not generally required, although such mechanical changes are not excluded from these embodiments. For example, because the array has a very large field of view, with very high resolution, "pan" and "zoom" functions may be handled optically, rather than by mechanically changing the focal point of the lenses or by physically pointing the array at a different location. This may reduce the complexity required of the device, and also may improve the speed at which different views may be generated by the image sensor array 3200.

[00199] Referring again to Fig. 1-AE, the image sensor array 3200 may capture an image that is received by image capturing module 3305. Image capturing module 3305 may take the captured image and compare it to a consolidated user request, e.g., which is provided by a consolidated user request reception module 3510. Consolidated user request reception module 3510 may receive the communication from server 4000 regarding which pixels of the image have been requested. Through use of the consolidated user request and the captured image, pixel selection module 3320 may select the pixels that have been specifically requested by the user, and mark those pixels for transmission back to the server.

[00200] After the pixels to be kept are identified, the other pixels that are not to be kept are removed, e.g., decimated at unused pixel decimation module 3330. In an embodiment, these pixels are simply discarded, e.g., not stored in a long-term memory but instead routed to a digital trash 3317. In another embodiment, some or all of these pixels are stored in a local memory, e.g., local memory 3315. From here, these pixels may be transmitted to various locations at off-peak times, may be kept for image processing by the array local processing module 3300, or may be subject to other manipulations or processing separate from the user requests. In an embodiment, unused pixel decimation module 3330 may include or communicate with a lower resolution module 3314, which may, in some embodiments, be used to transmit a lower-resolution version of more of the image (e.g., an entire scene, or more of the field of view surrounding the target) to the server 4000, so that the server 4000 may accurately determine which images are required to capture the target selected by the user.

[00201] Referring again to Fig. 1-AE, the selected pixels then may be transmitted to the server 4000 using selected pixel transmission module 3340. Selected pixel transmission module 3340 may include any transmission equipment necessary, e.g., cellular radio, wireless adapter, and the like, depending on the format of communication. In an embodiment, only those pixels which have been requested are transmitted to the server via lower-bandwidth communication 3510. Similarly to lower-bandwidth communication 3515, the lower-bandwidth communication 3510 does not refer to a specific amount of bandwidth, just that the amount of bandwidth is relatively lower than higher-bandwidth communication 3505.

[00202] Referring now again to Fig. 1-Y, server 4000 may include a requested image reception from ISA module 4250. Requested image reception from ISA module 4250 may receive the image data from the array local processing module 3500 (e.g., in the arrow coming "north" from Fig. 1-AI). That image, as depicted in Fig. 1-Y, may include the target (e.g., the football player), as well as some surrounding area (e.g., the area of the field around the football player). The "surrounding area" and the specifics of what is included/transmitted from the array local processing module may be specified by the user (directly or indirectly, e.g., through a set of preferences), or may be determined by the server, e.g., in the pixel information selection module 4230 (shown in Fig. 1-O).

[00203] Referring again to Fig. 1-Y, server 4000 may also include a requested image transmission to user device module 4260. Requested image transmission to user device module 4260 may transmit the requested image to the user device 5600. Requested image transmission to user device module 4260 may include components necessary to communicate with user device 5600 and may, in some embodiments, share components with one or more other modules of server 4000, e.g., a network interface, or a wireless antenna.

[00204] Referring again to Fig. 1-Y, server 4000 may include a server cached image updating module 4270. Server cached image updating module 4270 may take the images received from the array local processing module 3500 (e.g., which may include the image to be sent to the user), and compare the received images with stored or "cached" images on the server, in order to determine if the cached images should be updated. This process may happen frequently or infrequently, depending on the embodiment, and may be continuously ongoing as long as there is a data connection, in some embodiments. In some embodiments, the frequency of the process may depend on the available bandwidth to the array local processing module 3500, e.g., at off-peak times, the frequency may be increased. In an embodiment, server cached image updating module 4270 compares an image received from the array local processing module 3500, and, if the image has changed, replaces the cached version of the image with the newer image.
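
For illustration, a content-hash comparison is one simple change test such a cache-updating step could use; the digest-based approach below is an assumption of this sketch rather than the disclosed mechanism.

```python
import hashlib

class ServerImageCache:
    """Replace a cached view only when the newly received pixels differ.

    A content hash stands in for whatever change metric is used; real
    deployments might instead compare downsampled frames or per-block
    deltas."""

    def __init__(self):
        self._cache = {}  # region -> (digest, pixel bytes)

    def update(self, region, pixels):
        digest = hashlib.sha256(pixels).hexdigest()
        cached = self._cache.get(region)
        if cached and cached[0] == digest:
            return False          # scene unchanged; keep cached copy
        self._cache[region] = (digest, pixels)
        return True               # cache refreshed with newer image

cache = ServerImageCache()
print(cache.update("sector-7", b"\x00" * 64))  # True  (first sight)
print(cache.update("sector-7", b"\x00" * 64))  # False (no change)
```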

[00205] Referring now again to Fig. 1-E, Fig. 1-E shows user device 5600. In an embodiment, user device 5600 includes image containing selected target receiving module 5630 that may be configured to receive the image from server 4000, e.g., from requested image transmission to user device module 4260 of server 4000 (e.g., depicted in Fig. 1-Y, with the data transmission indicated by a rightward-upward arrow passing through Fig. 1-Y and Fig. 1-O (to the north) before arriving at Fig. 1-E).

[00206] Referring again to Fig. 1-E, Fig. 1-E shows received image presentation module 5640, which may display the requested pixels that include the selected target to the user, e.g., by showing them on a screen of the device. In an embodiment, the display of the image may be carried out through an exemplary interface that allows the user to monitor the target, and which also may display information about the target (e.g., in an embodiment, as shown in the figures, the game statistics for the football player also may be shown), which allows a cycle of user requests and new images to be shown as the user navigates through what is seen on the MUVIA, e.g., as shown in Fig. 1-A.

[00207] Referring again to Fig. 1, Figs. 1-F, 1-P, 1-Z, and 1-AJ depict a MUVIA implementation according to an embodiment. This embodiment may be colloquially known as "live street view," in which one or more MUVIA systems allow a user to move through an area similarly to the well-known Google-branded Maps (or Google-Street), except with the cameras working in real time. For example, in an embodiment, referring now to Fig. 1-F, a user device 5700 may include a target selection reception module 5710. Target selection reception module 5710 may be a component that allows the user to select a "target" from the image, that is, a point of interest from the image. For example, in the shown example, the MUVIA may be focused on a city, and the target may be an address, a building, a car, or a person in the city. As an example, a user may select a street address as a "target." This may be facilitated by a target presentation module, e.g., image selection presentation module 5712, which may present one or more images (e.g., which may be various versions of images from MUVIA, at different resolutions or not up-to-date) from which the user may select the target, e.g., the street address. In an embodiment, image selection presentation module 5712 may use static images that may or may not be sourced by the MUVIA system, and, in another embodiment, image selection presentation module 5712 may use current or cached views from the MUVIA system.

[00208] In an embodiment, image selection presentation module 5712 may include an audible target selection module 5714 which may be configured to allow the user to select a target using audible commands, without requiring physical interaction with a device.

[00209] Referring again to Fig. 1, e.g., Fig. 1-F, in an embodiment, user device 5700 may include selected target transmission module 5720. Selected target transmission module 5720 may be configured to take the target selected by the user, and transmit the selected target to the server 4000.

[00210] Referring now to Fig. 1-P, Fig. 1-P depicts a server 4000 of the MUVIA system according to embodiments. In an embodiment, server 4000 may include a selected target reception module 4310. Selected target reception module 4310 may receive the selected target from the user device 5700. In an embodiment, server 4000 may provide all or most of the data that facilitates the selection of the target, that is, the images and the interface, which may be provided, e.g., through a web portal.

[00211] Referring again to Fig. 1-P, in an embodiment, server 4000 may include a selected image pre-processing module 4320. Selected image pre-processing module 4320 may perform one or more tasks of pre-processing the image, some of which are described herein for exemplary purposes. For example, in an embodiment, selected image pre-processing module 4320 may include a resolution determination module 4322 which may be configured to determine the resolution for the image in order to show the target (and here, resolution is merely a stand-in for any facet of the image, e.g., color depth, size, shadow, pixelation, filter, etc.). In an embodiment, selected image pre-processing module 4320 may include a cached pixel fill-in module 4324. Cached pixel fill-in module 4324 may be configured to manage which portions of the requested image are recovered from a cache, and which are updated, in order to improve performance. For example, if a view of a street is requested, certain features of the street (e.g., buildings, trees, etc.) may not need to be retrieved each time, but can be filled in with a cached version, or, in another embodiment, can be filled in by an earlier version. A check can be done to see if a red parked car is still in the same spot as it was an hour ago; if so, that part of the image may not need to be updated. Using lower resolution/prior images stored in a memory 4215, as well as other image processing techniques, cached pixel fill-in module 4324 determines which portions of the image do not need to be retrieved, thus reducing bandwidth load on the connection between the array local processing module 3600 and the server 4000.
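
The parked-car check generalizes to a per-block change test against a cached preview. The block size, threshold, and mean-absolute-difference metric in the following sketch are illustrative assumptions only.

```python
import numpy as np

def blocks_needing_refresh(preview, cached_preview, block=32, threshold=8.0):
    """Decide which blocks of a view must be re-fetched from the array.

    Compares a current low-resolution preview against the cached one and
    flags blocks whose mean absolute difference exceeds `threshold`;
    the parked red car that has not moved stays below it, so that block
    is filled in from cache instead of being re-transmitted."""
    h, w = preview.shape[:2]
    stale = []
    for top in range(0, h, block):
        for left in range(0, w, block):
            cur = preview[top:top + block, left:left + block].astype(float)
            old = cached_preview[top:top + block,
                                 left:left + block].astype(float)
            if np.abs(cur - old).mean() > threshold:
                stale.append((left, top))
    return stale  # only these blocks are requested over the link
```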

[00212] Referring again to Fig. 1-P, in an embodiment, selected image pre-processing module 4320 of server 4000 may include a static object obtaining module 4326, which may operate similarly to cached pixel fill-in module 4324. For example, as in the example shown in Fig. 1-B, static object obtaining module 4326 may obtain prior versions of static objects, e.g., buildings, trees, fixtures, landmarks, etc., which may save bandwidth load on the connection between the array local processing module 3600 and the server 4000.

[00213] Referring again to Fig. 1-P, in an embodiment, pixel information transmission to ISA module 4330 may transmit the request for pixels (e.g., an image, after the pre-processing) to the array local processing module 3600 (e.g., as shown in Figs. 1-Z and 1-AI, with the downward extending dataflow arrow).

[00214] Referring now to Figs. 1-Z and 1-AI, in an embodiment, an array local processing module 3600, which may be connected by a higher bandwidth connection to an image sensor array 3200, may be present.

[00215] Image sensor array 3200 may include one or more image sensors that may, in an embodiment, be statically pointed at a particular object or scene. Image sensor array 3200 may be a single image sensor, or more commonly, may be a group of individual image sensors 3201 that are combined to create a larger field of view. For example, in an embodiment, ten megapixel sensors may be used for each individual image sensor 3201. With twelve of these sensors, the effective field of view, loss-less zoom, and so forth may be increased substantially. These numbers are for example only, and any number of sensors and/or megapixel image sensor capacities may be used.

[00216] The use of many individual sensors may create a very large number of pixels captured for each exposure of the image sensor array 3200. Thus, these pixels are transmitted via a higher bandwidth communication 3605 to the array local storage and processing module 3600. In an embodiment, array local storage and processing module 3600 is integrated into the image sensor array. In another embodiment, the array local storage and processing module 3600 is separate from, but directly connected (e.g., via a USB 3.0 cable) to, the image sensor array 3200. It is noted that "higher bandwidth communication 3605" does not require a specific amount of bandwidth, but only that the bandwidth for this communication is relatively higher than the bandwidth of the communication from the array local processing module 3600 to the remote server, which may be, but is not required to be, located further away temporally.

[00217] It is noted that, because of the large number of pixels captured by image sensor array 3200, mechanical changes to the image sensor array are not generally required, although such mechanical changes are not excluded from these embodiments. For example, because the array has a very large field of view, with very high resolution, "pan" and "zoom" functions may be handled optically, rather than by mechanically changing the focal point of the lenses or by physically pointing the array at a different location. This may reduce the complexity required of the device, and also may improve the speed at which different views may be generated by the image sensor array 3200.

[00218] Referring again to Fig. 1-AJ and Fig. 1-Z, the image sensor array 3200 may capture an image that is received by image capturing module 3605. Image capturing module 3605 may take the captured image and compare it to a consolidated user request, e.g., which is provided by a consolidated user request reception module 3610. Consolidated user request reception module 3610 may receive the communication from server 4000 regarding which pixels of the image have been requested. Through use of the consolidated user request and the captured image, pixel selection module 3620 may select the pixels that have been specifically requested by the user, and mark those pixels for transmission back to the server.

[00219] After the pixels to be kept are identified, the other pixels that are not to be kept are removed, e.g., decimated at unused pixel decimation module 3630. In an embodiment, these pixels are simply discarded, e.g., not stored in a long-term memory but instead routed to a digital trash 3617. In another embodiment, some or all of these pixels are stored in a local memory, e.g., local memory 3615. From here, these pixels may be transmitted to various locations at off-peak times, may be kept for image processing by the array local processing module 3600, or may be subject to other manipulations or processing separate from the user requests. In an embodiment, unused pixel decimation module 3630 may include or communicate with a lower resolution module 3614, which may, in some embodiments, be used to transmit a lower-resolution version of more of the image (e.g., an entire scene, or more of the field of view surrounding the target) to the server 4000, so that the server 4000 may accurately determine which images are required to capture the target selected by the user.

[00220] Referring again to Fig. 1-AJ, the selected pixels then may be transmitted to the server 4000 using selected pixel transmission module 3640. Selected pixel transmission module 3640 may include any transmission equipment necessary, e.g., cellular radio, wireless adapter, and the like, depending on the format of communication. In an embodiment, only those pixels which have been requested are transmitted to the server via lower-bandwidth communication 3510. Similarly to lower-bandwidth communication 3515, the lower-bandwidth communication 3510 does not refer to a specific amount of bandwidth, just that the amount of bandwidth is relatively lower than higher-bandwidth communication 3505.

[00221] It is noted that more pixels than what are specifically requested by the user may be transmitted, in certain embodiments. For example, the array local processing module 3600 may send pixels that border the user's requested area, but are outside the user's requested area. In an embodiment, as will be discussed herein, those pixels may be sent at a different resolution or using a different kind of compression. In another embodiment, the additional pixels may simply be sent in the same manner as the requested pixels. In still another embodiment, server 4000 may expand the user requested areas, so that array local processing module 3600 may send only the requested pixels, but the requested pixels cover more area than what the user originally requested. These additional pixels may be transmitted and "cached" by the server or local device, which may be used to decrease latency times, in a process that will be discussed more herein.

[00222] Referring now again to Fig. 1-P, in an embodiment, server 4000 may include image receiving from ISA module 4340. Image receiving from ISA module 4340 may receive the image data from the array local processing module 3600 (e.g., in the arrow coming "north" from Fig. 1-AJ via Fig. 1-Z). The image may include the pixels that were requested from the image sensor array 3200. In an embodiment, server 4000 also may include received image post-processing module 4350, which may, among other post-processing tasks, fill in objects and pixels into the image that were determined not to be needed by selected image pre-processing module 4320, as previously described. In an embodiment, server 4000 may include received image transmission to user device module 4360, which may be configured to transmit the requested image to the user device 5700. Received image transmission to user device module 4360 may include components necessary to communicate with user device 5700 and may, in some embodiments, share components with one or more other modules of server 4000, e.g., a network interface, or a wireless antenna.

[00223] Referring now again to Fig. 1-F, user device 5700 may include a server image reception module 5730. Server image reception module 5730 may receive an image sent by the server 4000, and user selection presenting module 5240 may display the requested pixels to the user, e.g., by showing them on a screen of the device. In an embodiment, the display of the image may be carried out through the exemplary interface, which allows a cycle of user requests and new images to be shown as the user navigates through what is seen on the MUVIA, e.g., as shown in Fig. 1-F.

[00224] In an embodiment, as shown in Figs. 1-F and 1-G, server image reception module 5730 may include an audio stream reception module 5732 and a video stream reception module 5734. In an embodiment, as discussed throughout this application, the MUVIA system may capture still images, video, and also sound, as well as other electromagnetic waves and other signals and data. In an embodiment, the audio signals and the video signals may be handled together, or they may be handled separately, as separate streams. Although not every module in the instant diagram separately shows audio streams and video streams, it is noted here that all implementations of MUVIA contemplate both audio and video coverage, as well as still image and other data collection.

[00225] Referring now to Fig. 1-G, which shows another portion of user device 5700, Fig. 1-G may include a display 5755 and a memory 5765, which may be used to facilitate presentation and/or storage of the received images.

[00226] Figs. 1-H, 1-R, 1-AA, and 1-AB show an embodiment of a MUVIA implementation. For example, referring now to Fig. 1-H, Fig. 1-H shows an embodiment of a user device 5800. For exemplary purposes, the user device 5800 may be an augmented reality device that shows a user looking down a "street" at which the user is not actually present, e.g., a "virtual tourism" application in which the user may use their augmented reality device (e.g., goggles, e.g., an Oculus Rift-type headgear device), which may be a wearable computer. It is noted that this embodiment is not limited to wearable computers or augmented reality, but as in all of the embodiments described in this disclosure, may be any device. The use of a wearable augmented/virtual reality device is merely for illustrative and exemplary purposes.

[00227] In an embodiment, user device 5800 may have a field of view 5810, as shown in Fig. 1-H. The field of view for the user 5810 may be illustrated in Fig. 1-H as follows. The most internal rectangle, shown by the dot hatching, represents the user's "field of view" as they look at their "virtual world." The second most internal rectangle, with the straight line hatching, represents the "nearest" objects to the user, that is, a range where the user is likely to "look" next, by turning their head or moving their eyes. In an embodiment, this area of the image may already be loaded on the device, e.g., through use of a particular codec, which will be discussed in more detail herein. The outermost rectangle, which is the image without hatching, represents the area further outside the user's viewpoint. This area, too, may already be loaded on the device. By loading areas where the user may eventually look, the system can reduce latency and make a user's motions, e.g., movement of head, eyes, and body, appear "natural" to the system.

[00228] Referring now to Figs. 1-AA and 1-AB, these figures show an array local processing module 3700 that is connected to an image sensor array 3200 (e.g., as shown in Fig. 1-AK, and "viewing" a city as shown in Fig. 1-AJ). The image sensor array 3200 may operate as previously described in this document. In an embodiment, array local processing module 3700 may include a captured image receiving module 3710, which may receive the entire scene captured by the image sensor array 3200, through the higher-bandwidth communication channel 3505. As described previously in this application, these pixels may be "cropped" or "decimated" into the relevant portion of the captured image, as described by one or more of the user device 5800, the server 4000, and the processing done at the array local processing module 3700. This process may occur as previously described. The relevant pixels may be handled by relevant portion of captured image receiving module 3720.

[00229] Referring now to Fig. 1-AB, in an embodiment, the relevant pixels for the image that are processed by relevant portion of captured image receiving module 3720 may be encoded using a particular codec at relevant portion encoding module 3730. In an embodiment, the codec may be configured to encode the innermost rectangle, e.g., the portion that represents the current user's field of view, e.g., portion 3716, at a higher resolution, or a different compression, or a combination of both. The codec may be further configured to encode the second rectangle, e.g., with the vertical line hatching, e.g., portion 3714, at a different resolution and/or a different (e.g., a higher) compression. Similarly, the outermost portion of the image, e.g., the clear portion 3712, may again be coded at still another resolution and/or a different compression. In an embodiment, the codec itself handles the algorithm for encoding the image, and as such, in an embodiment, the codec may include information about user device 5800.
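
The three-tier encoding can be illustrated with ordinary JPEG quality settings standing in for the codec's resolution/compression choices; the sketch below assumes the Pillow imaging library, an RGB input image, and invented region names, and is not the disclosed codec.

```python
import io
from PIL import Image

def encode_tiered(view, gaze_box, near_box):
    """Encode the three nested regions of Fig. 1-H at different costs.

    view     : full RGB PIL image covering the outermost rectangle.
    gaze_box : (left, upper, right, lower) current field of view.
    near_box : (left, upper, right, lower) where the user is likely to
               look next.
    For brevity, the peripheral layer here is simply the whole view
    downsampled to half size; a real codec would mask out the inner
    regions. Returns a dict of region name -> JPEG bytes."""
    def jpeg(img, quality):
        buf = io.BytesIO()
        img.save(buf, format="JPEG", quality=quality)
        return buf.getvalue()

    return {
        "gaze": jpeg(view.crop(gaze_box), quality=90),   # sharpest tier
        "near": jpeg(view.crop(near_box), quality=60),   # middle tier
        "periphery": jpeg(view.resize((view.width // 2,
                                       view.height // 2)), quality=30),
    }
```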

[00230] As shown in Fig. 1-AB, the encoded portion of the image, including portions 3716, 3714, and 3712, may be transmitted using encoded relevant portion transmitting module 3740. It is noted that "lower compression," "more compression," and "higher compression," are merely used as one example for the kind of processing done by the codec. For example, instead of lower compression, a different sampling algorithm or compacting algorithm may be used, or a lossier algorithm may be implemented for various parts of the encoded relevant portion.

[00231] Referring now to Fig. 1-R, Fig. 1-R depicts a server 4000 in a MUVIA system according to an embodiment. For example, as shown in Fig. 1-R, server 4000 may include, in addition to portions previously described, an encoded image receiving module 4410. Encoded image receiving module 4410 may receive the encoded image, encoded as previously described, from encoded relevant portion transmitting module 3740 of array local processing module 3700.

[00232] Referring again to Fig. 1-R, server 4000 may include an encoded image transmission controlling module 4420. Encoded image transmission controlling module 4420 may transmit portions of the image to the user device 5800. In an embodiment, at least partially depending on the bandwidth and the particulars of the user device, the server may send all of the encoded image to the user device, and let the user device decode the portions as needed, or may decode the image and send portions in piecemeal, or with a different encoding, depending on the needs of the user device, and the complexity that can be handled by the user device.

[00233] Referring again to Fig. 1-H, user device 5800 may include an encoded image transmission receiving module 5820, which may be configured to receive the image that is coded in a particular way, e.g., as will be disclosed in more detail herein. Fig. 1-H also may include an encoded image processing module 5830 that may handle the processing of the image, that is, encoding and decoding portions of the image, or other processing necessary to provide the image to the user.

[00234] Referring now to Fig. 1-AL, Fig. 1-AL shows an implementation of an Application Programming Interface (API) for the various MUVIA components. Specifically, image sensor array API 7800 may include, among other elements, a programming specification 7810, that may include, for example, libraries, classes, specifications, templates, or other coding elements that generally make up an API, and an access authentication module 7820 that governs API access to the various image sensor arrays. The API allows third party developers to access the workings of the image sensor array and the array local processing module 3700, so that the third party developers can write applications for the array local processing module 3700, as well as determine which data captured by the image sensor array 3200 (which often may be multiple gigabytes or more of data per second) should be kept or stored or transmitted. In an embodiment, API access to certain functions may be limited. For example, a tiered system may allow a certain number of API calls to the MUVIA data per second, per minute, per hour, or per day. In an embodiment, a third party might pay fees or perform a registration that would allow more or less access to the MUVIA data. In an embodiment, the third party could host their application on a separate web site, and let that web site access the image sensor array 3200 and/or the array local processing module 3700 directly.
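
A token-bucket limiter is one conventional way to realize such tiered API access; the tier names and per-minute rates below are invented for illustration and are not part of the disclosure.

```python
import time

class TieredApiLimiter:
    """Token-bucket sketch of tiered API access to MUVIA data."""

    RATES = {"free": 10, "registered": 100, "paid": 1000}  # calls/minute

    def __init__(self):
        self._buckets = {}  # api_key -> (tokens, last_refill_time)

    def allow(self, api_key, tier):
        rate = self.RATES[tier]
        tokens, last = self._buckets.get(api_key, (rate, time.time()))
        now = time.time()
        # Refill in proportion to elapsed time, capped at one minute's worth.
        tokens = min(rate, tokens + (now - last) * rate / 60.0)
        if tokens < 1:
            return False          # over quota: reject this MUVIA call
        self._buckets[api_key] = (tokens - 1, now)
        return True
```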

[00235] Referring again to Fig. 1, Figs. 1-I, 1-J, 1-S, 1-T, 1-AC, 1-AD, 1-AM, and 1-AN, in an embodiment, show a MUVIA implementation that allows insertion of advertising (or other context-sensitive material) into the images displayed to the user.

[00236] Referring again to Fig. 1-I, in an embodiment, user device 5900 may include a user selection accepting module 5910. User selection accepting module 5910 may be configured to receive user input about what the user wants to see. For example, as shown in Fig. 1-I, the user selection accepting module 5910 may show an image from image sensor array 3200, and the user may "pan" and "zoom" the image using any known interface, including, but not limited to, keyboard, mouse, touch, haptic, augmented reality interface, voice command, or nonverbal motion commands (e.g., as part of a video game system interface, e.g., the Microsoft Kinect). It is noted, as will be discussed in more detail herein, that the camera itself is not "zooming" or "panning," because the camera does not move. What is happening is that different pixels that are captured by the image sensor array 3200 are kept by the image sensor array 3200 and transmitted to the server 4000.

[00237] In an embodiment, the "user selection" facilitated by the user selection accepting module 5910 may not involve a user at all. For example, in an embodiment, e.g., in a security embodiment, the user selection may be handled completely by machine, and may include "select any portion of the image with movement," or "select any portion of the image in which a person is recognized," or "select any portion of the image in which a particular person, e.g., a person on the FBI most wanted list" is recognized.
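
A machine-driven selection of "any portion of the image with movement" can be sketched as a frame-differencing pass; the block size and threshold below are illustrative assumptions, and recognition-based selections would substitute a detector for the difference test.

```python
import numpy as np

def motion_regions(prev_frame, frame, block=64, threshold=12.0):
    """Machine-driven 'user selection': flag every block of the scene in
    which movement is detected between consecutive frames, so those
    pixels can be requested without any human input."""
    diff = np.abs(frame.astype(float) - prev_frame.astype(float))
    h, w = diff.shape[:2]
    selected = []
    for top in range(0, h, block):
        for left in range(0, w, block):
            if diff[top:top + block, left:left + block].mean() > threshold:
                selected.append((left, top, block, block))
    return selected  # rectangles handed to the selection pipeline
```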

[00238] Referring again to Fig. 1-I, in an embodiment, user device 5900 may include a user selection transmitting module 5920. The user selection transmitting module 5920 may take the user selection from user selection accepting module 5910, and transmit the selection to the server 4000. The transmission may include some pre-processing, for example, the user device 5900 may determine the size and parameters of the image prior to sending the request to the server 4000, or that processing may be handled by the server 4000. Following the thick-line arrow leftward from user selection transmitting module 5920 through to Fig. 1-K, the transmission goes to server 4000, as will be discussed herein. It is noted that the transmission to the server 4000 may also include data about the user device, for example, the screen resolution, the window size, the type of device, an identity of the user, a level of service the user has paid for (in embodiments in which such services are prioritized by the camera/server), other capabilities of the device, e.g., framerate, and the like.

[00239] Referring again to Fig. 1-I, Fig. 1-I also includes a selected image receiving module 5930 and a user selection presenting module 5940, which will be discussed in more detail herein, with respect to the dataflow of this embodiment.

[00240] Referring now to Fig. 1-T (graphically represented as "down" and "to the right" of Fig. 1-I), in an embodiment, a server 4000 may include a selected image reception module 4510. In an embodiment, selected image reception module 4510 of server 4000 may receive the selected target from the user device 5900. The selected target data may take various formats, e.g., it may be image data, it may be metadata that identifies the selected target, it may be some other designation, e.g., an ID number, a tracking number, or a piece of information, like a license plate or a social security number. The selected target data may be an address or a physical description, or any other instantiation of data that can be used to identify something.

[00241] Referring again to Fig. 1-T, in an embodiment, server 4000 may include selected image pre-processing module 4520. Selected image pre-processing module 4520 may perform one or more tasks of pre-processing the image, some of which have been previously described with respect to other embodiments. In an embodiment, server 4000 also may include pixel information transmission to ISA module 4330 configured to transmit the image request data to the image sensor array 3200, as has been previously described.

[00242] Referring now to Figs. 1-AD and 1-AN, array local processing module 3700 may be connected to an image sensor array 3200 through a higher-bandwidth communication link 3505, e.g., a USB or PCI port. In an embodiment, array local processing module 3700 may include a request reception module 3710. Request reception module 3710 may receive the request for an image from the server 4000, as previously described. Request reception module 3710 may transmit the data to a pixel selection module 3720, which may receive the pixels captured from image sensor array 3200 and select the ones that are to be kept. That is, in an embodiment, through use of the (sometimes consolidated) user requests and the captured image, pixel selection module 3720 may select the pixels that have been specifically requested by the user and mark those pixels for transmission back to the server.
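A minimal sketch of the pixel-selection step follows: from the full captured frame, only the rectangle named by the (possibly consolidated) requests is kept. This assumes Python with numpy and a simple (x, y, width, height) request; the actual module 3720 is not limited to rectangular selections.

    import numpy as np

    def select_pixels(captured, x, y, width, height):
        """Return only the requested pixels; everything else is left
        for the decimation step."""
        return captured[y:y + height, x:x + width].copy()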

[00243] After the pixels to be kept are identified, the other pixels that are not to be kept are removed, e.g., decimated at unused pixel decimation module 3730. In an embodiment, these pixels are simply discarded, e.g., not stored in a long-term memory, that is, removed to a digital trash 3717. In another embodiment, some or all of these pixels are stored in a local memory, e.g., local memory 3715. From there, these pixels may be transmitted to various locations at off-peak times, may be kept for image processing by the array local processing module 3700, or may be subject to other manipulations or processing separate from the user requests, as described in previous embodiments. In an embodiment, unused pixel decimation module 3730 may be used to transmit a lower-resolution version of more of the image (e.g., an entire scene, or more of the field of view surrounding the target) to the server 4000, so that the server 4000 may accurately determine which images are required to fulfill the request of the user.
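As a rough illustration of the decimation step, the sketch below separates nonselected pixels (candidates for discard or local storage) from a stride-downsampled overview of the whole scene that could be sent to the server for request refinement. Python and numpy are assumed, the scene is assumed grayscale, and the downsampling factor is arbitrary; the disclosure does not mandate any particular decimation method.

    import numpy as np

    def decimate(scene, keep_mask, overview_factor=8):
        """keep_mask is a boolean array, True where pixels were requested."""
        unused = scene[~keep_mask]  # flattened nonselected pixels: discard
                                    # ("digital trash") or hold in local memory
        overview = scene[::overview_factor, ::overview_factor].copy()
        return unused, overview     # overview may be sent to the server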

[00244] Referring again to Fig. 1-AN, the selected pixels then may be transmitted to the server 4000 using selected pixel transmission module 3740. Selected pixel transmission module 3740 may include any transmission equipment necessary, e.g., cellular radio, wireless adapter, and the like, depending on the format of communication. In an embodiment, only those pixels which have been requested are transmitted to the server via lower-bandwidth communication 3710. As with lower-bandwidth communication 3715, "lower-bandwidth" does not refer to a specific amount of bandwidth, only that the amount of bandwidth is relatively lower than that of higher-bandwidth communication 3505.

[00245] It is noted that, in certain embodiments, more pixels than those specifically requested by the user may be transmitted. For example, the array local processing module 3700 may send pixels that border the user's requested area but are outside the user's requested area. In an embodiment, as will be discussed herein, those pixels may be sent at a different resolution or using a different kind of compression. In another embodiment, the additional pixels may simply be sent the same as the requested pixels. In still another embodiment, server 4000 may expand the user-requested areas, so that array local processing module 3700 sends only the requested pixels, but the requested pixels cover more area than what the user originally requested. These additional pixels may be transmitted and "cached" by the server or local device, which may be used to decrease latency times, in a process that will be discussed more herein.
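One simple way to realize the expanded-request behavior described above is to grow the requested rectangle by a fixed margin, clamped to the scene bounds, so the bordering pixels can be cached against future pans. This is a sketch under assumed names; the margin value and the rectangle representation are illustrative.

    def expand_region(x, y, w, h, scene_w, scene_h, margin=64):
        """Grow a request by `margin` pixels on each side, clamped to the scene."""
        x0, y0 = max(0, x - margin), max(0, y - margin)
        x1 = min(scene_w, x + w + margin)
        y1 = min(scene_h, y + h + margin)
        return x0, y0, x1 - x0, y1 - y0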

[00246] Referring now again to Fig. 1-T, in an embodiment, server 4000 may include received image post-processing module 4550. Received image post-processing module 4550 may receive the image data from the array local processing module 3700 (e.g., in the arrow coming "north" from Fig. 1-AN via Fig. 1-AD). The image may include the pixels that were requested from the image sensor array 3200.

[00247] In an embodiment, server 4000 also may include advertisement insertion module 4560. Advertisement insertion module 4560 may insert an advertisement into the received image. The advertisement may be based on one or more of the contents of the image, a characteristic of a user or the user device, or a setting of the advertisement server component 7700 (see, e.g., Fig. 1-AC, as will be discussed in more detail herein). The advertisement insertion module 4560 may place the advertisement into the image using any known image combination techniques, or, in another embodiment, the advertisement image may be in a separate layer, overlay, or any other data structure. In an embodiment, advertisement insertion module 4560 may include context-based advertisement insertion module 4562, which may be configured to add advertisements that are based on the context of the image. For example, if the image is a live street view of a department store, the context of the image may show advertisements related to products sold by that department store, e.g., clothing, cosmetics, or power tools.
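A minimal sketch of one "known image combination technique" follows: pasting an advertisement tile into a corner of the received image. Python and numpy are assumed, and the bottom-right placement is arbitrary; as the text notes, the advertisement could instead travel as a separate layer or overlay.

    import numpy as np

    def insert_ad(image, ad):
        """Return a copy of `image` with `ad` pasted into its bottom-right
        corner. Shapes are assumed compatible (same channel count)."""
        out = image.copy()
        h, w = ad.shape[:2]
        out[-h:, -w:] = ad  # overwrite the corner region with the ad tile
        return out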

[00248] Referring again to Fig. 1-T, server 4000 may include a received image with advertisement transmission to user device module 4570 configured to transmit the image to the user device 5900. Received image with advertisement transmission to user device module 4570 may include components necessary to communicate with user device 5900 and may, in some embodiments, share components with one or more other modules of server 4000, e.g., a network interface, or a wireless antenna.

[00249] Referring again to Fig. 1-I, user device 5900 may include a selected image receiving module 5930, which may receive the pixels that were sent by the server 4000, and a user selection presenting module 5940, which may display the requested pixels, including the advertisement, to the user, e.g., by showing them on a screen of the device. In an embodiment, the display of the image may be carried out through the exemplary interface, which allows a cycle of user requests and new images to be shown as the user navigates through what is seen on the MUVIA, e.g., as shown in Fig. 1-I.

[00250] Referring now to Fig. 1-AC, Fig. 1-AC shows an advertisement server component 7700 configured to deliver advertisements to the server 4000 for insertion into the images prior to delivery to the user. In an embodiment, advertisement server component 7700 may be integrated with server 4000. In another embodiment, advertisement server component 7700 may be separate from server 4000 and may communicate with server 4000. In yet another embodiment, rather than interacting with server 4000, advertisement server component 7700 may interact directly with the user device 5900 and insert the advertisement into the image after the image has been received, or, in another embodiment, cause the user device to display the advertisement concurrently with the image (e.g., overlapping or adjacent to it). In such embodiments, some of the described modules of server 4000 may be incorporated into user device 5900, but the functionality of those modules would operate similarly to what has previously been described.

[00251] Referring again to Fig. 1-AC, in an embodiment, advertisement server component 7700 may include a user data collection module 7705. User data collection module 7705 may collect data from user device 5900 and use that data to drive placement of advertisements (e.g., based on a user's browser history, e.g., visits to sports sites, and the like).

[00252] Referring again to Fig. 1-AC, in an embodiment, advertisement server component 7700 may include an advertisement database 7715 which includes advertisements that are ready to be inserted into images. In an embodiment, these advertisements may be created on the fly.

[00253] Referring again to Fig. 1-AC, in an embodiment, advertisement server component 7700 may include an advertisement request reception module 7710 which receives a request to add an advertisement into the image (the receipt of the request is not shown to ease understanding of the drawings). In an embodiment, advertisement server component 7700 may include advertisement selection module 7720, which may include an image analysis module 7722 configured to analyze the image to determine the best context-based advertisement to place into the image. In an embodiment, that decision may be made by the server 4000, or partly at the server 4000 and partly at the advertisement server component 7700 (e.g., the advertisement server component may have a set of advertisements from which a particular one may be chosen). In an embodiment, various third parties may compensate the operators of advertisement server component 7700, server 4000, or any other component of the system, in order to receive preferential treatment.
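The sketch below illustrates one plausible, deliberately simplified policy for context-based selection: match labels produced by image analysis against keyword-tagged advertisements and, where third parties compensate for preferential treatment, break ties by bid. All data structures, labels, and bids here are invented for illustration and are not part of this disclosure.

    def select_ad(image_labels, ad_db):
        """ad_db: iterable of (ad_id, keywords, bid) tuples.
        Return the highest-bid ad whose keywords match, else None."""
        matches = [(bid, ad_id) for ad_id, keywords, bid in ad_db
                   if keywords & set(image_labels)]
        return max(matches)[1] if matches else None

    # e.g., a department-store street view might yield:
    ad = select_ad({"department_store", "clothing"},
                   [("ad-1", {"clothing"}, 0.50), ("ad-2", {"cars"}, 0.80)])
    # ad == "ad-1": the only keyword match, despite the lower bid elsewhere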

[00254] Referring again to Fig. 1-AC, in an embodiment, advertisement server component 7700 may include a selected advertisement transmission module 7730, which may transmit the selected advertisement (or a set of selected advertisements) to the server 4000. In an embodiment, selected advertisement transmission module 7730 may send the complete image with the advertisement overlaid, e.g., in an implementation in which the advertisement server component 7700 also handles the placement of the advertisement. In an embodiment in which advertisement server component 7700 is integrated with server 4000, this module may be an internal transmission module, as may all such transmission/reception modules.

Exemplary Environment 200

[00255] Referring now to Fig. 2A, Fig. 2A illustrates an example environment 200 in which methods, systems, circuitry, articles of manufacture, and computer program products and architecture, in accordance with various embodiments, may be implemented by at least one image device 220. Image device 220 may include a number of individual sensors that capture data. Although commonly referred to throughout this application as "image data," this is merely shorthand for data that can be collected by the sensors. Other data, including video data, audio data, electromagnetic spectrum data (e.g., infrared, ultraviolet, radio, microwave data), thermal data, and the like, may be collected by the sensors.

[00256] Referring again to Fig. 2A, in an embodiment, image device 220 may operate in an environment 200. Specifically, in an embodiment, image device 220 may capture a scene 215. The scene 215 may be captured by a number of sensors 243. Sensors 243 may be grouped in an array, which in this context means they may be grouped in any pattern, on any plane, but have a fixed position relative to one another. Sensors 243 may capture the image in parts, which may be stitched back together by processor 222. There may be overlap in the images captured by sensors 243 of scene 215, which may be removed.
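As a simplified sketch of the stitching just described, the function below concatenates a row of tiles captured by sensors in a fixed line, cropping a known overlap from each tile after the first. Python and numpy are assumed; real stitching would typically also align and blend the seams, and the fixed-overlap assumption is an illustrative simplification.

    import numpy as np

    def stitch_row(tiles, overlap):
        """tiles: list of equal-height arrays captured left to right,
        each overlapping its left neighbor by `overlap` columns."""
        cropped = [tiles[0]] + [t[:, overlap:] for t in tiles[1:]]
        return np.concatenate(cropped, axis=1)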

[00257] Upon capture of the scene in image device 220, in processes and systems that will be described in more detail herein, the requested pixels are selected. Specifically, pixels that have been identified by a remote user, by a server, by the local device, by another device, by a program written by an outside user with an API, by a component or other hardware or software in communication with the image device, and the like, are transmitted to a remote location via a communications network 240. The pixels that are to be transmitted are illustrated in Fig. 2A as selected portion 255; however, this is a simplified depiction meant for illustrative purposes only.

[00258] Referring again to Fig. 2A, in an embodiment, server device 230 may be any device or group of devices that is connected to a communication network. Although in some examples server device 230 is distant from image device 220, that is not required. Server device 230 may be "remote" from image device 220 in the sense that they are separate components; "remote" does not necessarily imply a specific distance. The communications network may be a local transmission component, e.g., a PCI bus. Server device 230 may include a request handling module 232 that handles requests for images from user devices, e.g., user devices 250A and 250B. Request handling module 232 also may handle requests from other remote computers and/or users that want to take active control of the image device, e.g., through an API, or through more direct control.

[00259] Server device 230 also may include an image device management module 234, which may perform some of the processing to determine which of the captured pixels of image device 220 are kept. For example, image device management module 234 may do some pattern recognition, e.g., to recognize objects of interest in the scene, e.g., a particular football player, as shown in the example of Fig. 2A. In other embodiments, this processing may be handled at the image device 220 or at the user device 250. In an embodiment, server device 230 may limit the size of the selected portion based on the screen resolution of the requesting user device.
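A sketch of the screen-resolution limit just described: the server never returns a portion larger than the requesting device's screen, scaling the requested dimensions down when needed. The uniform-scaling policy is an assumption; the disclosure leaves the limiting mechanism open.

    def clamp_to_screen(width, height, screen_w, screen_h):
        """Return (width, height) scaled so the portion fits the screen."""
        scale = min(1.0, screen_w / width, screen_h / height)
        return int(width * scale), int(height * scale)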

[00260] Server device 230 then may transmit the requested portions to the user devices, e.g., user device 250A and user device 250B. In another embodiment, the user device or devices may directly communicate with image device 220, cutting out server device 230 from the system.

[00261] In an embodiment, user devices 250A and 250B are shown; however, user devices may be any electronic device or combination of devices, which may be located together or spread across multiple devices and/or locations. Image device 220 may be a server device, or may be a user-level device, e.g., including, but not limited to, a cellular phone, a network phone, a smartphone, a tablet, a music player, a walkie-talkie, a radio, an augmented reality device (e.g., augmented reality glasses and/or headphones), wearable electronics, e.g., watches, belts, earphones, or "smart" clothing, headphones, audio/visual equipment, a media player, a television, a projection screen, a flat screen, a monitor, a clock, an appliance (e.g., microwave, convection oven, stove, refrigerator, freezer), a navigation system (e.g., a Global Positioning System ("GPS") system), a medical alert device, a remote control, a peripheral, an electronic safe, an electronic lock, an electronic security system, a video camera, a personal video recorder, a personal audio recorder, and the like. Device 220 may include a device interface 243, which may allow the device 220 to output data to the client in sensory (e.g., visual or any other sense) form, and/or allow the device 220 to receive data from the client, e.g., through touch, typing, or moving a pointing device (e.g., a mouse). User device 250 may include a viewfinder or a viewport that allows a user to "look" through the lens of image device 220, regardless of whether the user device 250 is spatially close to the image device 220.

[00262] Referring again to Fig. 2A, in various embodiments, the communication network 240 may include one or more of a local area network (LAN), a wide area network (WAN), a metropolitan area network (MAN), a wireless local area network (WLAN), a personal area network (PAN), a Worldwide Interoperability for Microwave Access (WiMAX) network, a public switched telephone network (PSTN), a general packet radio service (GPRS) network, a cellular network, and so forth. The communication networks 240 may be wired, wireless, or a combination of wired and wireless networks. It is noted that "communication network" as it is used in this application refers to one or more communication networks, which may or may not interact with each other.

[00263] Referring now to Fig. 2B, Fig. 2B shows a more detailed version of image device 220, according to an embodiment. The image device 220 may include a device memory 245. In an embodiment, device memory 245 may include random access memory ("RAM"), read-only memory ("ROM"), flash memory, hard drives, disk-based media, disc-based media, magnetic storage, optical storage, volatile memory, nonvolatile memory, and any combination thereof. In an embodiment, device memory 245 may be separated from the device, e.g., available on a different device on a network, or over the air. For example, in a networked system, there may be more than one image device 220 whose device memories 245 may be located at a central server that may be a few feet away or located across an ocean. In an embodiment, device memory 245 may include one or more of one or more mass storage devices, read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), cache memory such as random access memory (RAM), flash memory, static random access memory (SRAM), dynamic random access memory (DRAM), and/or other types of memory devices. In an embodiment, memory 245 may be located at a single network site. In an embodiment, memory 245 may be located at multiple network sites, including sites that are distant from each other.

[00264] Referring again to Fig. 2B, in an embodiment, image device 220 may include one or more image sensors 243 and may communicate with a communication network 240. Although sensors 243 are referred to as "image" sensors, this is merely shorthand for sensors that collect data, including image data, video data, sound data, electromagnetic spectrum data, and other data. The image sensors 243 may be in an array, which merely means that the image sensors may have a specific location relative to each other.

[00265] Referring again to Fig. 2B, Fig. 2B shows a more detailed description of image device 220. In an embodiment, device 220 may include a processor 222. Processor 222 may include one or more microprocessors, Central Processing Units ("CPUs"), Graphics Processing Units ("GPUs"), Physics Processing Units, Digital Signal Processors, Network Processors, Floating Point Processors, and the like. In an embodiment, processor 222 may be a server. In an embodiment, processor 222 may be a distributed-core processor. Although processor 222 is illustrated as a single processor that is part of a single device 220, processor 222 may be multiple processors distributed over one or many devices 220, which may or may not be configured to operate together.

[00266] Processor 222 is illustrated as being configured to execute computer readable instructions in order to execute one or more operations described above, and as illustrated in Fig. 10, Figs. 11A-11F, Figs. 12A-12C, Figs. 13A-13D, and Figs. 14A-14D. In an embodiment, processor 222 is designed to be configured to operate as processing module 250, which may include one or more of a multiple image sensor based scene capturing module 252 configured to capture a scene that includes one or more images, through use of more than one image sensor; a scene particular portion selecting module 254 configured to select a particular portion of the scene that includes at least one image, wherein the selected particular portion is smaller than the scene; a selected particular portion transmitting module 256 configured to transmit the selected particular portion from the scene to a remote location; and a scene pixel de-emphasizing module 258 configured to de-emphasize pixels from the scene that are not part of the selected particular portion of the scene.
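The sketch below ties the selecting, transmitting, and de-emphasizing submodules together as one pass over an already-captured scene (capture by module 252 is assumed to have filled the scene array upstream). Function and parameter names are stand-ins, not the disclosed implementation; Python and numpy are assumed.

    import numpy as np

    def process_request(scene, region, send):
        """region is (y, x, h, w); `send` stands in for transmission."""
        y, x, h, w = region
        portion = scene[y:y + h, x:x + w].copy()  # module 254: select
        send(portion)                             # module 256: transmit only it
        scene[...] = 0                            # module 258: de-emphasize by
                                                  # clearing the capture buffer
        return portion

    # e.g.: process_request(np.zeros((2160, 3840), np.uint8),
    #                       (500, 900, 480, 640), lambda p: None)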

Exemplary Environment 300

[00267] Referring now to Fig. 3, Fig. 3 shows an exemplary embodiment of an image device, e.g., image device 220A, operating in an environment 300. In an embodiment, image device 220A may include an array 310 of image sensors 312, as shown in Fig. 3. The array of image sensors is shown here in a rectangular grid; however, this is merely exemplary, and image sensors 312 may be arranged in any format. In an embodiment, each image sensor 312 may capture a portion of scene 315, which portions are then processed by processor 350. Although processor 350 is shown as local to image device 220A, it may be remote to image device 220A, with a sufficiently high-bandwidth connection to receive all of the data from the array of image sensors (e.g., multiple USB 3.0 lines). In an embodiment, the selected portions from the scene (e.g., the portions shown in the shaded box, e.g., selected portion 315) may be transmitted to a remote device 330, which may be a user device or a server device, as previously described. In an embodiment, the pixels that are not transmitted to remote device 330 may be stored in a local memory 340 or discarded.

Exemplary Environment 400

[00268] Referring now to Fig. 4, Fig. 4 shows an exemplary embodiment of an image device, e.g., image device 420, operating in an environment 400. In an embodiment, image device 420 may include an image sensor array 420, e.g., an array of image sensors, which, in this example, are arranged around a polygon to increase the field of view that can be captured; that is, they can capture scene 415, illustrated in Fig. 4 as a natural landmark that can be viewed in a virtual tourism setting. Processor 422 receives the scene 415 and selects the pixels from the scene 415 that have been requested by a user, e.g., requested portions 417. Requested portions 417 may include an overlapping area 424 that is only transmitted once. In an embodiment, the requested portions 417 may be transmitted to a remote location via communications network 240.
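One way the overlapping area 424 might be "transmitted once" is sketched below: compute the intersection of two requested rectangles, send the intersection with the first portion, and share it with the second requestor. Rectangles are (x, y, w, h); the names are illustrative, and handling of non-rectangular remainders is omitted for brevity.

    def intersection(a, b):
        """Return the overlap of rectangles a and b, or None if disjoint."""
        ax, ay, aw, ah = a
        bx, by, bw, bh = b
        x0, y0 = max(ax, bx), max(ay, by)
        x1 = min(ax + aw, bx + bw)
        y1 = min(ay + ah, by + bh)
        if x1 <= x0 or y1 <= y0:
            return None                    # no overlap: transmit both in full
        return (x0, y0, x1 - x0, y1 - y0)  # transmit once, reuse for both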

Exemplary Environment 500

[00269] Referring now to Fig. 5, Fig. 5 shows an exemplary embodiment of an image device, e.g., image device 520, operating in an environment 500. In an embodiment, image device 520 may capture a scene, a part of which, e.g., scene portion 515, is shown, as previously described in other embodiments (e.g., some parts of image device 520 are omitted for simplicity of drawing). In an embodiment, e.g., scene portion 515 may show a street-level view of a busy road, e.g., for a virtual tourism or virtual reality simulator. In an embodiment, different portions of the scene portion 515 may be transmitted at different resolutions or at different times. For example, in an embodiment, a central part of the scene portion 515, e.g., portion 516, which may correspond to what a user's eyes would see, is transmitted at a first resolution, e.g., "full" resolution relative to what the user's device can handle. In an embodiment, an outer border outside portion 516, e.g., portion 514, may be transmitted at a second resolution, which may be lower than the first resolution. In another embodiment, a further outside portion, e.g., portion 512, may be discarded, transmitted at a still lower rate, or transmitted asynchronously.
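The tiered transmission just described might be sketched as follows: the central portion 516 is returned at full resolution, a surrounding ring (portion 514) is downsampled, and everything further out (portion 512) is dropped or deferred. Python and numpy are assumed; the ring width and downsampling factor are illustrative.

    import numpy as np

    def tiered_encode(scene, center, ring=128, factor=4):
        """center is (y, x, h, w) for the full-resolution region."""
        y, x, h, w = center
        full = scene[y:y + h, x:x + w].copy()     # portion 516, first resolution
        y0, x0 = max(0, y - ring), max(0, x - ring)
        border = scene[y0:y + h + ring, x0:x + w + ring]
        low = border[::factor, ::factor].copy()   # portion 514, lower resolution
        return full, low  # portion 512 (the rest) is discarded or sent later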

Exemplary Embodiments of the Various Modules of Portions of Processor 250

[00270] Figs. 6-9 illustrate exemplary embodiments of the various modules that form portions of processor 250. In an embodiment, the modules represent hardware, either hardware that is hard-coded, e.g., as in an application-specific integrated circuit ("ASIC"), or hardware that is physically reconfigured through gate activation described by computer instructions, e.g., as in a central processing unit.

[00271] Referring now to Fig. 6, Fig. 6 illustrates an exemplary implementation of the multiple image sensor based scene capturing module 252. As illustrated in Fig. 6, the multiple image sensor based scene capturing module may include one or more sub-logic modules in various alternative implementations and embodiments. For example, as shown in Fig. 6, e.g., Fig. 6A, in an embodiment, module 252 may include one or more of multiple image sensor based scene that includes the one or more images capturing through use of an array of image sensors module 602, multiple image sensor based scene that includes the one or more images capturing through use of two image sensors arranged side by side and angled toward each other module 604, multiple image sensor based scene that includes the one or more images capturing through use of image sensors aligned in a grid pattern module 606, and multiple image sensor based scene that includes the one or more images capturing through use of image sensors aligned in a line pattern module 608. In an embodiment, module 608 may include one or more of multiple image sensor based scene that includes the one or more images capturing through use of image sensors aligned in a line such that a field of view is greater than 120 degrees module 610 and multiple image sensor based scene that includes the one or more images capturing through use of image sensors aligned in a line such that a field of view is 180 degrees module 612.

[00272] Referring again to Fig. 6, e.g., Fig. 6B, as described above, in an embodiment, module 252 may include one or more of multiple image sensor based scene that includes the one or more images capturing through use of more than one stationary image sensor module 614, multiple image sensor based scene that includes the one or more images capturing through use of more than one static image sensor module 616, multiple image sensor based scene that includes the one or more images capturing through use of more than one image sensor mounted in a fixed location module 620, and multiple image sensor based scene that includes the one or more images capturing through use of more than one image sensor mounted on a movable platform module 622. In an embodiment, module 616 may include multiple image sensor based scene that includes the one or more images capturing through use of more than one static image sensor that has a fixed focal length and a fixed field of view module 618. In an embodiment, module 622 may include multiple image sensor based scene that includes the one or more images capturing through use of more than one image sensor mounted on an unmanned aerial vehicle module 624.

[00273] Referring again to Fig. 6, e.g., Fig. 6C, in an embodiment, module 252 may include one or more of multiple image sensor based scene capturing module configured to capture a scene that includes one or more images, through use of more than one image sensor 626, multiple image sensor based scene capturing module configured to capture a scene that includes one or more images that represent portions of the scene, through use of more than one image sensor 628, particular image from each image sensor acquiring module 632, and acquired particular image from each image sensor combining into the scene module 634. In an embodiment, module 628 may include multiple image sensor based scene capturing module configured to capture a scene that includes one or more images that represent portions of the scene that are configured to be stitched together, through use of more than one image sensor module 630. In an embodiment, module 632 may include particular image with at least partial other image overlap from each image sensor acquiring module 636. In an embodiment, module 636 may include particular image with at least partial adjacent image overlap from each image sensor acquiring module 638.

[00274] Referring again to Fig. 6, e.g., Fig. 6D, in an embodiment, module 252 may include one or more of multiple image sensor based scene that is larger than a bandwidth for remote transmission of the scene capturing through use of more than one image sensor module 640, multiple image sensor based scene of a tourist destination capturing through use of more than one image sensor module 646, multiple image sensor based scene of a highway bridge capturing through use of more than one image sensor module 648, and multiple image sensor based scene of a home interior capturing through use of more than one image sensor module 650. In an embodiment, module 640 may include multiple image sensor based scene that contains more image data than a bandwidth for remote transmission of the scene capturing through use of more than one image sensor module 642. In an embodiment, module 642 may include multiple image sensor based scene that contains more image data than a bandwidth for remote transmission of the scene by a factor of ten capturing through use of more than one image sensor module 644.

[00275] Referring again to Fig. 6, e.g., Fig. 6E, in an embodiment, module 252 may include one or more of multiple image sensor based scene that includes the one or more images capturing through use of a grouping of image sensors module 652, multiple image sensor based scene that includes the one or more images capturing through use of a grouping of image sensors that direct image data to a common collector module 654, and multiple image sensor based scene that includes the one or more images capturing through use of a grouping of image sensors that include charge-coupled devices and complementary metal-oxide-semiconductor devices module 656.

[00276] Referring again to Fig. 6, e.g., Fig. 6F, in an embodiment, module 252 may include one or more of multiple image and sound data based scene that includes the one or more images capturing through use of image and sound sensors module 658, multiple image sensor based scene that includes sound wave image data capturing through use of an array of sound wave image sensors module 662, and multiple video capture sensor based scene that includes the one or more images capturing through use of one or more video capture sensors module 664. In an embodiment, module 658 may include multiple image and sound data based scene that includes the one or more images capturing through use of image sensors and sound microphones module 660.

[00277] Referring now to Fig. 7, Fig. 7 illustrates an exemplary implementation of scene particular portion selecting module 254. As illustrated in Fig. 7, the scene particular portion selecting module 254 may include one or more sub-logic modules in various alternative implementations and embodiments. For example, as shown in Fig. 7, e.g., Fig. 7A, in an embodiment, module 254 may include one or more of scene particular portion that is smaller than the scene and that includes at least one image selecting module 702, scene particular portion that is smaller than the scene and that includes at least one requested image selecting module 706, particular image request receiving module 710, and received request for particular image selecting from scene module 712. In an embodiment, module 702 may include scene particular portion that is smaller than the scene and that includes at least one remote user-requested image selecting module 704. In an embodiment, module 706 may include scene particular portion that is smaller than the scene and that includes at least one remote-operator requested image selecting module 708.

[00278] Referring again to Fig. 7, e.g., Fig. 7B, in an embodiment, module 254 may include one or more of first request for a first particular image and second request for a second particular image receiving module 714 and scene particular portion that is first particular image and second particular image selecting module 716. In an embodiment, module 714 may include one or more of first request for a first particular image and second request for a second particular image that does not overlap the first particular image receiving module 718 and first request for a first particular image and second request for a second particular image that overlaps the first particular image receiving module 720. In an embodiment, module 720 may include first request for a first particular image and second request for a second particular image that overlaps the first particular image and an overlapping portion is configured to be transmitted once only receiving module 722.

[00279] Referring again to Fig. 7, e.g., Fig. 7C, in an embodiment, module 254 may include scene particular portion that is smaller than the scene and that contains a particular image object selecting module 724. In an embodiment, module 724 may include one or more of scene particular portion that is smaller than the scene and that contains a particular image object that is a person selecting module 726 and scene particular portion that is smaller than the scene and that contains a particular image object that is a vehicle selecting module 728.

[00280] Referring now to Fig. 8, Fig. 8 illustrates an exemplary implementation of selected particular portion transmitting module 256. As illustrated in Fig. 8A, the selected particular portion transmitting module 256 may include one or more sub-logic modules in various alternative implementations and embodiments. For example, as shown in Fig. 8, e.g., Fig. 8A, in an embodiment, module 256 may include selected particular portion transmitting to a remote server module 802. In an embodiment, module 802 may include one or more of selected particular portion transmitting to a remote server configured to receive particular image requests module 804 and selected particular portion transmitting to a remote server that requested a particular image module 808. In an embodiment, module 804 may include selected particular portion transmitting to a remote server configured to receive particular discrete image requests module 806.

[00281] Referring again to Fig. 8, e.g., Fig. 8B, in an embodiment, module 256 may include selected particular portion transmitting at a particular resolution module 810. In an embodiment, module 810 may include one or more of available bandwidth to remote location determining module 812, selected particular portion transmitting at a particular resolution based on determined available bandwidth module 814, selected particular portion transmitting at a particular resolution less than a scene resolution module 816, and selected particular portion transmitting at a particular resolution less than a captured particular portion resolution module 818.

[00282] Referring again to Fig. 8, e.g., Fig. 8C, in an embodiment, module 256 may include one or more of first segment of selected particular portion transmitting at a first resolution module 820 and second segment of selected particular portion transmitting at a second resolution module 822. In an embodiment, module 820 may include one or more of first segment of selected particular portion that surrounds the second segment transmitting at a first resolution module 823, first segment of selected particular portion that borders the second segment transmitting at a first resolution module 824, and first segment of selected particular portion that is determined by selected particular portion content transmitting at a first resolution module 826. In an embodiment, module 826 may include first segment of selected particular portion that is determined as not containing an item of interest transmitting at a first resolution module 828. In an embodiment, module 828 may include one or more of first segment of selected particular portion that is determined as not containing a person of interest transmitting at a first resolution module 830 and first segment of selected particular portion that is determined as not containing an object designated for tracking transmitting at a first resolution module 832. In an embodiment, module 822 may include one or more of second segment that is a user-selected area of selected particular portion transmitting at a second resolution that is higher than the first resolution module 834 and second segment that is surrounded by the first segment transmitting at a second resolution that is higher than the first resolution module 836.

[00283] Referring again to Fig. 8, e.g., Fig. 8D, in an embodiment, module 256 may include modules 820 and 822, as previously described. In an embodiment, module 822 may include second segment of selected particular portion that contains an object of interest transmitting at the second resolution module 838. In an embodiment, module 838 may include one or more of second segment of selected particular portion that contains an object of interest that is a football player transmitting at the second resolution module 840, second segment of selected particular portion that contains an object of interest that is a landmark transmitting at the second resolution module 842, second segment of selected particular portion that contains an object of interest that is an animal transmitting at the second resolution module 844, and second segment of selected particular portion that contains an object of interest that is a selected object in a dwelling transmitting at the second resolution module 846.

[00284] Referring now to Fig. 9, Fig. 9 illustrates an exemplary implementation of scene pixel de-emphasizing module 258. As illustrated in Fig. 9A, the scene pixel de-emphasizing module 258 may include one or more sub-logic modules in various alternative implementations and embodiments. For example, as shown in Fig. 9, e.g., Fig. 9A, in an embodiment, module 258 may include one or more of scene pixel nonselected nontransmitting module 902, scene pixel exclusive pixels selected from the particular portion transmitting to a particular device module 904, and scene nonselected pixel deleting module 910. In an embodiment, module 904 may include one or more of scene pixel exclusive pixels selected from the particular portion transmitting to a particular device that requested the particular portion module 906 and scene pixel exclusive pixels selected from the particular portion transmitting to a particular device user that requested the particular portion module 908.

[00285] Referring again to Fig. 9, e.g., Fig. 9B, in an embodiment, module 258 may include one or more of scene nonselected pixels indicating as nontransmitted module 912, scene nonselected pixels discarding module 916, and scene nonselected retention preventing module 918. In an embodiment, module 912 may include scene nonselected pixels appending data that indicates nontransmission module 914.

[00286] Referring again to Fig. 9, e.g., Fig. 9C, in an embodiment, module 258 may include scene pixel subset retaining module 920. In an embodiment, module 920 may include one or more of scene pixel ten percent subset retaining module 922, scene pixel targeted object subset retaining module 924, and scene pixel targeted automation identified object subset retaining module 928. In an embodiment, module 924 may include scene pixel targeted scenic landmark object subset retaining module 926.

[00287] Referring again to Fig. 9, e.g., Fig. 9D, in an embodiment, module 258 may include scene pixel subset storing in separate storage module 930. In an embodiment, module 930 may include scene pixel subset storing in separate local storage module 932 and scene pixel subset storing in separate storage for separate transmission module 934. In an embodiment, module 934 may include one or more of scene pixel subset storing in separate storage for separate transmission at off-peak time module 936 and scene pixel subset storing in separate storage for separate transmission as lower-priority data module 938.

[00288] In some implementations described herein, logic and similar implementations may include software or other control structures. Electronic circuitry, for example, may have one or more paths of electrical current constructed and arranged to implement various functions as described herein. In some implementations, one or more media may be configured to bear a device-detectable implementation when such media hold or transmit device detectable instructions operable to perform as described herein. In some variants, for example, implementations may include an update or modification of existing software or firmware, or of gate arrays or programmable hardware, such as by performing a reception of or a transmission of one or more instructions in relation to one or more operations described herein. Alternatively or additionally, in some variants, an implementation may include special-purpose hardware, software, firmware components, and/or general-purpose components executing or otherwise invoking special-purpose components. Specifications or other implementations may be transmitted by one or more instances of tangible transmission media as described herein, optionally by packet transmission or otherwise by passing through distributed media at various times.

[00289] Following are a series of flowcharts depicting implementations. For ease of understanding, the flowcharts are organized such that the initial flowcharts present implementations via an example implementation and thereafter the following flowcharts present alternate implementations and/or expansions of the initial flowchart(s) as either sub-component operations or additional component operations building on one or more earlier-presented flowcharts. Those having skill in the art will appreciate that the style of presentation utilized herein (e.g., beginning with a presentation of a flowchart(s) presenting an example implementation and thereafter providing additions to and/or further details in subsequent flowcharts) generally allows for a rapid and easy understanding of the various process implementations. In addition, those skilled in the art will further appreciate that the style of presentation used herein also lends itself well to modular and/or object-oriented program design paradigms.

Exemplary Operational Implementation of Processor 250 and Exemplary Variants

[00290] Further, in Fig. 10 and in the figures to follow thereafter, various operations may be depicted in a box-within-a-box manner. Such depictions may indicate that an operation in an internal box may comprise an optional example embodiment of the operational step illustrated in one or more external boxes. However, it should be understood that internal box operations may be viewed as independent operations separate from any associated external boxes and may be performed in any sequence with respect to all other illustrated operations, or may be performed concurrently. Still further, these operations illustrated in Fig. 10 as well as the other operations to be described herein may be performed by at least one of a machine, an article of manufacture, or a composition of matter.

[00291] Those having skill in the art will recognize that the state of the art has progressed to the point where there is little distinction left between hardware, software, and/or firmware implementations of aspects of systems; the use of hardware, software, and/or firmware is generally (but not always, in that in certain contexts the choice between hardware and software can become significant) a design choice representing cost vs. efficiency tradeoffs. Those having skill in the art will appreciate that there are various vehicles by which processes and/or systems and/or other technologies described herein can be effected (e.g., hardware, software, and/or firmware), and that the preferred vehicle will vary with the context in which the processes and/or systems and/or other technologies are deployed. For example, if an implementer determines that speed and accuracy are paramount, the implementer may opt for a mainly hardware and/or firmware vehicle; alternatively, if flexibility is paramount, the implementer may opt for a mainly software implementation; or, yet again alternatively, the implementer may opt for some combination of hardware, software, and/or firmware. Hence, there are several possible vehicles by which the processes and/or devices and/or other technologies described herein may be effected, none of which is inherently superior to the others in that any vehicle to be utilized is a choice dependent upon the context in which the vehicle will be deployed and the specific concerns (e.g., speed, flexibility, or predictability) of the implementer, any of which may vary. Those skilled in the art will recognize that optical aspects of implementations will typically employ optically-oriented hardware, software, and/or firmware.

[00292] Throughout this application, examples and lists are given, with parentheses, the abbreviation "e.g.," or both. Unless explicitly otherwise stated, these examples and lists are merely exemplary and are non-exhaustive. In most cases, it would be prohibitive to list every example and every combination. Thus, smaller, illustrative lists and examples are used, with focus on imparting understanding of the claim terms rather than limiting the scope of such terms.

[00293] Referring now to Fig. 10, Fig. 10 shows operation 1000, e.g., an example operation of server device 230 operating in an environment 200. In an embodiment, operation 1000 may include operation 1002 depicting capturing a scene that includes one or more images, through use of an array of more than one image sensor. For example, Fig. 6, e.g., Fig. 6B, shows multiple image sensor based scene capturing module 252 capturing a scene (e.g., collecting data that includes visual data, e.g., pixel data, sound data, electromagnetic data, nonvisible spectrum data, and the like) that includes one or more images (e.g., graphical representations of things, e.g., that are composed of pixels or other electronic data, or that are captured by an image sensor, e.g., a CCD or a CMOS sensor), through use of an array (e.g., any grouping configured to work together in unison, regardless of arrangement, symmetry, or appearance) of more than one image sensor (e.g., a device, component, or collection of components configured to collect light, sound, or other electromagnetic spectrum data, and/or to convert the collected data into digital data, or perform at least a portion of the foregoing actions).

[00294] Referring again to Fig. 10, operation 1000 may include operation 1004 depicting selecting a particular portion of the scene that includes at least one image, wherein the selected particular portion is smaller than the scene. For example, Fig. 2, e.g., Fig. 2B, shows scene particular portion selecting module 254 selecting (e.g., whether actively or passively, choosing, flagging, designating, denoting, signifying, marking for, taking some action with regard to, changing a setting in a database, creating a pointer to, storing in a particular memory or part/address of a memory, etc.) a particular portion (e.g., some subset of the entire scene that includes some pixel data, whether pre- or post-processing, which may or may not include data from multiple of the array of more than one image sensor) of the scene (e.g., the data, e.g., image data or otherwise (e.g., sound, electromagnetic), captured by the array of more than one image sensor, combined or stitched together, at any stage of processing, pre or post) that includes at least one image (e.g., a portion of pixel or other data that is related temporally or spatially (e.g., contiguous or partly contiguous)), wherein the selected particular portion is smaller (e.g., some objectively measurable feature has a lower value, e.g., size, resolution, color, color depth, pixel data granularity, number of colors, hue, saturation, alpha value, shading) than the scene (e.g., the data, e.g., image data or otherwise (e.g., sound, electromagnetic), captured by the array of more than one image sensor, combined or stitched together, at any stage of processing, pre or post).

[00295] Referring again to Fig. 10, operation 1000 may include operation 1006 depicting transmitting only the selected particular portion from the scene to a remote location. For example, Fig. 2, e.g., Fig. 2B, shows selected particular portion transmitting module 256 transmitting only (e.g., not transmitting the parts of the scene that are not part of the selected particular portion) the selected particular portion (e.g., the designated pixel data) from the scene (e.g., the data, e.g., image data or otherwise (e.g., sound, electromagnetic), captured by the array of more than one image sensor, combined or stitched together, at any stage of processing, pre or post) to a remote location (e.g., a device or other component that is separate from the image device; "remote" here does not necessarily imply or exclude any particular distance, e.g., the remote device may be a server device, some combination of cloud devices, an individual user device, or some combination of devices).

[00296] Referring again to Fig. 10, operation 1000 may include operation 1008 depicting de-emphasizing pixels from the scene that are not part of the selected particular portion of the scene. For example, Fig. 2, e.g., Fig. 2B, shows scene pixel de-emphasizing module 258 de-emphasizing (e.g., whether actively or passively, taking some action to separate pixels from the scene that are not part of the selected particular portion, including deleting, marking for deletion, storing in a separate location or memory address, flagging, moving, or, in an embodiment, simply not saving the pixels in an area in which they can be readily retained).

[00297] Figs. 11A-11F depict various implementations of operation 1002, depicting capturing a scene that includes one or more images, through use of an array of more than one image sensor, according to embodiments. Referring now to Fig. 11A, operation 1002 may include operation 1102 depicting capturing the scene that includes the one or more images, through use of an array of image sensors. For example, Fig. 6, e.g., Fig. 6A, shows multiple image sensor based scene that includes the one or more images capturing through use of an array of image sensors module 602 capturing the scene that includes the one or more images, through use of an array of image sensors (e.g., one hundred three-megapixel image sensors attached to a metal plate and arranged in a consistent pattern).

[00298] Referring again to Fig. 11A, as a further example of operation 1102, module 602 may capture the scene (e.g., a live street view of a busy intersection in Alexandria, VA) that includes the one or more images (e.g., images of cars passing by in the live street view, images of shops on the intersection, images of people in the crosswalk, images of trees growing on the side, images of a particular person that has been designated for watching).

[00299] Referring again to Fig. 11A, operation 1002 may include operation 1104 depicting capturing the scene that includes the one or more images, through use of two image sensors arranged side by side and angled toward each other. For example, Fig. 6, e.g., Fig. 6A, shows multiple image sensor based scene that includes the one or more images capturing through use of two image sensors arranged side by side and angled toward each other module 604 capturing the scene (e.g., a virtual tourism scene, e.g., a camera array pointed at the Great Pyramids) that includes the one or more images (e.g., images that have been selected by users that are using the virtual tourism scene, e.g., to view the pyramid entrance or the top vents in the pyramid), through use of two image sensors (e.g., two digital cameras with a 100 megapixel rating) arranged side by side (e.g., in a line, when viewed from a particular perspective) and angled toward each other (e.g., the digital cameras are pointed toward each other).

[00300] Referring again to Fig. 11A, operation 1002 may include operation 1106 depicting capturing the scene that includes the one or more images, through use of the array of image sensors arranged in a grid. For example, Fig. 6, e.g., Fig. 6A, shows multiple image sensor based scene that includes the one or more images capturing through use of image sensors aligned in a grid pattern module 606 capturing the scene (e.g., a football stadium area) that includes the one or more images (e.g., images of the field, images of a person in the crowd, images of a specific football player), through use of the array of image sensors (e.g., one hundred image sensors) arranged in a grid (e.g., the image sensors are attached to a rigid object and formed in a 10x10 grid pattern).

[00301] Referring again to Fig. 11A, operation 1002 may include operation 1108 depicting capturing the scene that includes the one or more images, through use of the array of image sensors arranged in a line. For example, Fig. 6, e.g., Fig. 6A, shows multiple image sensor based scene that includes the one or more images capturing through use of image sensors aligned in a line pattern module 608 capturing the scene (e.g., a checkout line at a grocery) that includes the one or more images (e.g., images of the shoppers and the items in the shoppers' carts), through use of the array of image sensors arranged in a line.

[00302] Referring again to Fig. 11A, operation 1108 may include operation 1110 depicting capturing the scene that includes the one or more images, through use of the array of image sensors arranged in a line such that a field of view is greater than 120 degrees. For example, Fig. 6, e.g., Fig. 6A, shows multiple image sensor based scene that includes the one or more images capturing through use of image sensors aligned in a line such that a field of view is greater than 120 degrees module 610 capturing the scene (e.g., a highway bridge) that includes the one or more images (e.g., automatically tracked images of the license plates of every car that crosses the highway bridge), through use of the array of image sensors arranged in a line such that a field of view (e.g., the viewable area of the camera array) is greater than 120 degrees.

[00303] Referring again to Fig. 11A, operation 1108 may include operation 1112 depicting capturing the scene that includes the one or more images, through use of the array of image sensors arranged in a line such that a field of view is 180 degrees. For example, Fig. 6, e.g., Fig. 6A, shows multiple image sensor based scene that includes the one or more images capturing through use of image sensors aligned in a line such that a field of view is 180 degrees module 612 capturing the scene (e.g., a room of a home) that includes the one or more images (e.g., images of the appliances in the home and recordation of when those appliances are used, e.g., when a refrigerator door is opened, when a microwave is used, when a load of laundry is placed into a washing machine), through use of the array of image sensors arranged in a line such that a field of view is 180 degrees.

[00304] Referring now to Fig. 11B, operation 1002 may include operation 1114 depicting capturing the scene that includes one or more images, through use of an array of more than one stationary image sensor. For example, Fig. 6, e.g., Fig. 6B, shows multiple image sensor based scene that includes the one or more images capturing through use of more than one stationary image sensor module 614 capturing the scene (e.g., a warehouse that is a target for break-ins) that includes one or more images (e.g., images of each person that walks past the warehouse), through use of an array of more than one stationary image sensor (e.g., the image sensor does not move independently of the other image sensors).

[00306] Referring again to Fig. 11B, operation 1116 may include operation 1118 depicting capturing the scene that includes one or more images, through use of the array of more than one image sensor that has a fixed focal length and a fixed field of view. For example, Fig. 6, e.g., Fig. 6B, shows multiple image sensor based scene that includes the one or more images capturing through use of more than one static image sensor that has a fixed focal length and a fixed field of view module 618 capturing the scene (e.g., a virtual tourism scene of a mountain trail) that includes one or more images, through use of the array of more than one image sensor that has a fixed focal length and a fixed field of view.

[00306] Referring again to Fig. 11B, operation 1116 may include operation 1118 depicting capturing the scene that includes one or more images, through use of the array of more than one image sensor that has a fixed focal length and a fixed field of view. For example, Fig. 6, e.g., Fig. 6B, shows multiple image sensor based scene that includes the one or more images capturing through use of more than one static image sensor that has a fixed focal length and a fixed field of view module 618 capturing the scene (e.g., a virtual tourism scene of a mountain trail) that includes one or more images, through use of the array of more than one image sensor that has a fixed focal length and a fixed field of view.

[00307] Referring again to Fig. 11B, operation 1002 may include operation 1120 depicting capturing the scene that includes the one or more images, through use of an array of more than one image sensor mounted in a stationary location. For example, Fig. 6, e.g., Fig. 6B, shows multiple image sensor based scene that includes the one or more images capturing through use of more than one image sensor mounted in a fixed location module 620 capturing the scene (e.g., a juvenile soccer field) that includes the one or more images (e.g., images of each player on the youth soccer team), through use of an array of more than one image sensor mounted in a stationary location (e.g., on a pole, or on the side of a building or structure).

[00308] Referring again to Fig. 11B, operation 1002 may include operation 1122 depicting capturing the scene that includes one or more images, through use of an array of image sensors mounted on a movable platform. For example, Fig. 6, e.g., Fig. 6B, shows multiple image sensor based scene that includes the one or more images capturing through use of more than one image sensor mounted on a movable platform module 622 capturing the scene (e.g., a line of people leaving a warehouse-type store with a set of items that has to be compared against a receipt) that includes one or more images (e.g., images of the person's receipt that is visible in the cart, and images of the items in the cart), through use of an array of image sensors mounted on a movable platform (e.g., like a rotating camera, or on a drone, or on a remote controlled vehicle, or simply on a portable stand that can be picked up and set down).

[00309] Referring again to Fig. 11B, operation 1122 may include operation 1124 depicting capturing the scene that includes one or more images, through use of an array of image sensors mounted on an unmanned aerial vehicle. For example, Fig. 6, e.g., Fig. 6B, shows multiple image sensor based scene that includes the one or more images capturing through use of more than one image sensor mounted on an unmanned aerial vehicle module 624 capturing the scene (e.g., a view of a campsite where soldiers are gathered) that includes one or more images (e.g., images of the soldiers and their equipment caches), through use of an array of image sensors mounted on an unmanned aerial vehicle (e.g., a drone).

[00310] Referring now to Fig. 11C, operation 1002 may include operation 1126 depicting capturing the scene that includes one or more images, through use of an array of image sensors mounted on a satellite. For example, Fig. 6, e.g., Fig. 6C, shows multiple image sensor based scene that includes the one or more images capturing through use of more than one image sensor mounted on a satellite module 626 capturing the scene (e.g., a live street view inside the city of Seattle, WA) that includes one or more images, through use of an array of image sensors mounted on a satellite.

[00311] Referring again to Fig. 11C, operation 1002 may include operation 1128 depicting capturing the scene that includes one or more images, through use of an array of more than one image sensor, wherein the more than one image sensors each capture an image that represents a portion of the scene. For example, Fig. 6, e.g., Fig. 6C, shows multiple image sensor based scene capturing module configured to capture a scene that includes one or more images that represent portions of the scene, through use of more than one image sensor 628 capturing the scene that includes one or more images, through use of an array of more than one image sensor, wherein the more than one image sensors (e.g., two-megapixel CMOS sensors) each capture an image that represents a portion of the scene (e.g., a live street view of a parade downtown).

[00312] Referring again to Fig. 11C, operation 1128 may include operation 1130 depicting capturing the scene that includes one or more images, through use of the array of more than one image sensor, wherein the more than one image sensors each capture an image that represents a portion of the scene, and wherein the images are stitched together. For example, Fig. 6, e.g., Fig. 6C, shows multiple image sensor based scene capturing module configured to capture a scene that includes one or more images that represent portions of the scene that are configured to be stitched together, through use of more than one image sensor 630 capturing the scene (e.g., a protest in a city square) that includes one or more images (e.g., images of people in the protest), through use of the array of more than one image sensor (e.g., twenty-five ten-megapixel sensors), wherein the more than one image sensors each capture an image that represents a portion of the scene, and wherein the images are stitched together (e.g., after capturing, an automated process lines up the individually captured images, removes overlap, and merges them into a single image).
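
By way of example and not limitation, the stitching step described above may be sketched as follows; this is a minimal illustration that assumes equally sized tiles captured in a single left-to-right row with a fixed, known overlap (all names are illustrative, and a deployed system would typically align tiles by feature matching rather than by a fixed offset):

    import numpy as np

    def stitch_row(tiles, overlap_fraction=0.05):
        # Drop the overlapping left edge of every tile after the first,
        # then merge the remaining strips into a single panorama.
        overlap_px = int(tiles[0].shape[1] * overlap_fraction)
        pieces = [tiles[0]] + [t[:, overlap_px:] for t in tiles[1:]]
        return np.concatenate(pieces, axis=1)

    # e.g., twenty-five equally sized tiles with five percent overlap
    tiles = [np.zeros((1000, 1000, 3), dtype=np.uint8) for _ in range(25)]
    scene = stitch_row(tiles)   # 1000 x 23800 x 3 merged image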

[00313] Referring again to Fig. 11C, operation 1002 may include operation 1132 depicting acquiring an image from each image sensor of the more than one image sensors. For example, Fig. 6, e.g., Fig. 6C, shows particular image from each image sensor acquiring module 632 acquiring an image from each image sensor of the more than one image sensors.

[00314] Referring again to Fig. 11C, operation 1002 may include operation 1134, which may appear in conjunction with operation 1132, operation 1134 depicting combining the acquired images from the more than one image sensors into the scene. For example, Fig. 6, e.g., Fig. 6C, shows acquired particular image from each image sensor combining into the scene module 634 combining the acquired images (e.g., images from a scene of a plains oasis) from the more than one image sensors (e.g., three thousand one-megapixel sensors) into the scene (e.g., the scene of a plains oasis).

[00315] Referring again to Fig. 11C, operation 1132 may include operation 1136 depicting acquiring images from each image sensor of the more than one image sensors, wherein each acquired image at least partially overlaps at least one other image. For example, Fig. 6, e.g., Fig. 6C, shows particular image with at least partial other image overlap from each image sensor acquiring module 636 acquiring images (e.g., a portion of the overall image, e.g., which is a view of a football stadium during a game) from each image sensor (e.g., a CMOS sensor), wherein each acquired image at least partially overlaps at least one other image (e.g., each image overlaps its neighboring image by five percent).

[00316] Referring again to Fig. 11C, operation 1136 may include operation 1138 depicting acquiring the image from each image sensor of the more than one image sensors, wherein each acquired image at least partially overlaps at least one other image captured by an adjacent image sensor. For example, Fig. 6, e.g., Fig. 6C, shows particular image with at least partial adjacent image overlap from each image sensor acquiring module 638 acquiring the image from each image sensor of the more than one image sensors, wherein each acquired image (e.g., each image is a portion of a street corner that will form a live street view) at least partially overlaps at least one other image captured by an adjacent (e.g., there are no intervening image sensors in between) image sensor.

[00317] Referring now to Fig. 11D, operation 1002 may include operation 1140 depicting capturing the scene that includes one or more images, through use of the array of more than one image sensor, wherein a size of the scene is greater than a capacity to transmit the scene. For example, Fig. 6, e.g., Fig. 6D, shows multiple image sensor based scene that is larger than a bandwidth for remote transmission of the scene capturing through use of more than one image sensor module 640 capturing the scene that includes one or more images, through use of the array of more than one image sensor, wherein a size of the scene (e.g., 1 gigapixel, captured 60 times per second) is greater than a capacity to transmit the scene (e.g., the capacity to transmit may be 30 megapixels/second).

[00318] Referring again to Fig. 11D, operation 1140 may include operation 1142 depicting capturing the scene that includes one or more images, through use of the array of more than one image sensor, wherein an amount of image data captured exceeds a bandwidth for transmitting the image data to a remote location. For example, Fig. 6, e.g., Fig. 6D, shows multiple image sensor based scene that contains more image data than a bandwidth for remote transmission of the scene capturing through use of more than one image sensor module 642 capturing the scene that includes one or more images, through use of the array of more than one image sensor, wherein an amount of image data captured (e.g., 750 megapixels/second) exceeds a bandwidth for transmitting the image data to a remote location (e.g., the capacity to transmit may be 25 megapixels/second).

[00319] Referring again to Fig. 11D, operation 1142 may include operation 1144 depicting capturing the scene that includes one or more images, through use of the array of more than one image sensor, wherein the amount of image data captured exceeds the bandwidth for transmitting the image data to the remote location by a factor of ten. For example, Fig. 6, e.g., Fig. 6D, shows multiple image sensor based scene that contains more image data than a bandwidth for remote transmission of the scene by a factor of ten capturing through use of more than one image sensor module 644 capturing the scene that includes one or more images, through use of the array of more than one image sensor, wherein an amount of image data captured (e.g., 750 megapixels/second) exceeds a bandwidth for transmitting the image data to a remote location by a factor of ten (e.g., the capacity to transmit may be 75 megapixels/second).
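
By way of example and not limitation, the relationship between captured image data and transmission capacity recited above is straightforward arithmetic; a minimal sketch using the figures quoted in operations 1142 and 1144:

    def capture_to_bandwidth_factor(captured_mpps, link_mpps):
        # Ratio of image data captured to image data the link can carry,
        # both expressed in megapixels per second.
        return captured_mpps / link_mpps

    print(capture_to_bandwidth_factor(750, 25))   # 30.0
    print(capture_to_bandwidth_factor(750, 75))   # 10.0 (the factor-of-ten case)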

[00320] Referring again to Fig. 11D, operation 1002 may include operation 1146 depicting capturing a scene of a tourist destination that includes one or more images, through use of an array of more than one image sensor. For example, Fig. 6, e.g., Fig. 6D, shows multiple image sensor based scene of a tourist destination capturing through use of more than one image sensor module 646 capturing a scene of a tourist destination that includes one or more images (e.g., images of Mount Rushmore), through use of an array of more than one image sensor (e.g., fifteen CMOS 10 megapixel sensors aligned in two staggered arc-shaped rows).

[00321] Referring again to Fig. 11D, operation 1002 may include operation 1148 depicting capturing a scene of a highway across a bridge that includes one or more images, through use of an array of more than one image sensor, wherein the one or more images include one or more images of cars crossing the highway across the bridge. For example, Fig. 6, e.g., Fig. 6D, shows multiple image sensor based scene of a highway bridge capturing through use of more than one image sensor module 648 capturing a scene of a highway across a bridge that includes one or more images, through use of an array of more than one image sensor (e.g., twenty-two inexpensive, lightweight 2 megapixel sensors, arranged in a circular grid), wherein the one or more images include one or more images of cars crossing the highway across the bridge.

[00322] Referring again to Fig. 11D, operation 1002 may include operation 1150 depicting capturing a scene of a home that includes one or more images, through use of an array of more than one image sensor, wherein the one or more images include an image of an appliance in the home. For example, Fig. 6, e.g., Fig. 6D, shows multiple image sensor based scene of a home interior capturing through use of more than one image sensor module 650 capturing a scene of a home that includes one or more images, through use of an array of more than one image sensor (e.g., multiple sensors working in conjunction but placed at different angles and positions throughout the house, which may feed into a same source, and which may provide larger views of the house), wherein the one or more images include an image of an appliance (e.g., a refrigerator) in the home.

[00323] Referring now to Fig. 11E, operation 1002 may include operation 1152 depicting capturing the scene that includes one or more images, through use of a grouping of more than one image sensor. For example, Fig. 6, e.g., Fig. 6E, shows multiple image sensor based scene that includes the one or more images capturing through use of a grouping of image sensors module 652 capturing the scene (e.g., a historic landmark) that includes one or more images (e.g., images of the entrance to the historic landmark and points of interest at the historic landmark), through use of a grouping of more than one image sensor (e.g., a grouping of one thousand two-megapixel camera sensors arranged on a concave surface).

[00324] Referring again to Fig. 11E, operation 1002 may include operation 1154 depicting capturing the scene that includes one or more images, through use of multiple image sensors whose data is directed to a common source. For example, Fig. 6, e.g., Fig. 6E, shows multiple image sensor based scene that includes the one or more images capturing through use of a grouping of image sensors that direct image data to a common collector module 654 capturing the scene that includes one or more images, through use of multiple image sensors (e.g., 15 CMOS sensors) whose data is directed to a common source (e.g., a common processor, e.g., the fifteen CMOS sensors all transmit their digitized data to be processed by a common processor, or a common architecture, e.g., multiprocessor or multi-core processors).

[00325] Referring again to Fig. 11E, operation 1002 may include operation 1156 depicting capturing the scene that includes one or more images, through use of an array of more than one image sensor, the more than one image sensor including one or more of a charge-coupled device and a complementary metal-oxide-semiconductor device. For example, Fig. 6, e.g., Fig. 6E, shows multiple image sensor based scene that includes the one or more images capturing through use of a grouping of image sensors that include charge-coupled devices and complementary metal-oxide-semiconductor devices module 656 capturing the scene (e.g., a museum interior for a virtual tourism site) that includes one or more images (e.g., images of one or more exhibits and/or artifacts in the museum), through use of an array of more than one image sensor, the more than one image sensor including one or more of a charge-coupled device and a complementary metal-oxide-semiconductor device.

[00326] Referring now to Fig. 11F, operation 1002 may include operation 1158 depicting capturing the scene that includes image data and sound data, through use of an array of more than one image sensor, wherein the array of more than one image sensor is configured to capture image data and sound data. For example, Fig. 6, e.g., Fig. 6F, shows multiple image and sound data based scene that includes the one or more images capturing through use of image and sound sensors module 658 capturing the scene that includes image data (e.g., images of a waterfall and oasis for animals) and sound data (e.g., sound of the waterfall and the cries and calls of the various animals at the oasis), through use of an array of more than one image sensor (e.g., a grouping of twelve sensors each rated at 25 megapixels, and twelve further sensors, each rated at 2 megapixels), wherein the array of more than one image sensor is configured to capture image data (e.g., the images at the waterfall) and sound data (e.g., the sounds from the waterfall and the animals that are there).

[00327] Referring again to Fig. 11F, operation 1158 may include operation 1160 depicting capturing the scene that includes image data and sound data, through use of the array of more than one image sensor, wherein the array of more than one image sensor includes image sensors configured to capture image data and microphones configured to capture sound data. For example, Fig. 6, e.g., Fig. 6F, shows multiple image and sound data based scene that includes the one or more images capturing through use of image and sound microphones module 660 capturing the scene (e.g., images of an old battlefield for virtual tourism) that includes image data and sound data, through use of the array of more than one image sensor (e.g., alternately placed CMOS sensors and microphones), wherein the array of more than one image sensor includes image sensors configured to capture image data and microphones configured to capture sound data.

[00328] Referring again to Fig. 11F, operation 1002 may include operation 1162 depicting capturing the scene that includes soundwave image data, through use of the array of more than one soundwave image sensor that captures soundwave data. For example, Fig. 6, e.g., Fig. 6F, shows multiple image sensor based scene that includes sound wave image data capturing through use of an array of sound wave image sensors module 662 capturing the scene that includes soundwave image data, through use of the array of more than one soundwave image sensor (e.g., audio sensors) that captures soundwave data (e.g., sound data, whether in visualization form or convertible to visualization form).

[00329] Referring again to Fig. 11F, operation 1002 may include operation 1164 depicting capturing a scene that includes video data, through use of an array of more than one video capture device. For example, Fig. 6, e.g., Fig. 6F, shows multiple video capture sensor based scene that includes the one or more images capturing through use of one or more video capture sensors module 664 capturing a scene (e.g., a scene of a watering hole) that includes video data (e.g., a video of a lion sunning itself), through use of an array of more than one video capture device (e.g., a video camera, or a group of CMOS sensors capturing video at 15 frames per second).

[00330] Figs. 12A-12C depict various implementations of operation 1004, depicting selecting a particular portion of the scene that includes at least one image, wherein the selected particular portion is smaller than the scene, according to embodiments. Referring now to Fig. 12A, operation 1004 may include operation 1202 depicting selecting the particular image from the scene, wherein the selected particular image represents the request for the image that is smaller than the entire scene. For example, Fig. 7, e.g., Fig. 7A, shows scene particular portion that is smaller than the scene and that includes at least one image selecting module 702 selecting the particular image (e.g., a picture of a bear) from the scene (e.g., a tropical oasis), wherein the selected particular image (e.g., the image of the bear) represents the request for the image that is smaller than the entire scene (e.g., the entire scene, when combined, may be close to 1,000,000 x 1,000,000, but the image of the bear may be matched to the user device's resolution, e.g., 640x480).
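
By way of example and not limitation, matching a requested portion of a very large scene to a requesting device's resolution may be sketched as follows; the window coordinates, the stand-in scene array, and the nearest-neighbour decimation are all assumptions made to keep the sketch self-contained:

    import numpy as np

    def select_particular_image(scene, x, y, w, h, out_w=640, out_h=480):
        # Crop the requested window from the scene, then decimate it to the
        # device's resolution (nearest-neighbour, to avoid dependencies).
        window = scene[y:y + h, x:x + w]
        rows = np.linspace(0, h - 1, out_h).astype(int)
        cols = np.linspace(0, w - 1, out_w).astype(int)
        return window[rows][:, cols]

    scene = np.zeros((4000, 6000, 3), dtype=np.uint8)  # stand-in scene array
    bear = select_particular_image(scene, 2500, 1200, 1280, 960)  # 480 x 640 x 3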

[00331] Referring again to Fig. 12A, operation 1202 may include operation 1204 depicting selecting the particular image from the scene, wherein the selected particular image represents a particular remote user-requested image that is smaller than the scene. For example, Fig. 7, e.g., Fig. 7A, shows scene particular portion that is smaller than the scene and that includes at least one remote user-requested image selecting module 704 selecting the particular image (e.g., an image of the quarterback) from the scene (e.g., a football game at a stadium), wherein the selected particular image (e.g., the image of the quarterback) represents a particular remote user-requested image (e.g., a user, watching the game back home, has selected that the images focus on the quarterback) that is smaller than the scene (e.g., the image of the quarterback is transmitted at 1920 x 1080 resolution at 30 fps, whereas the scene is captured at approximately 1,000,000 x 1,000,000 at 60 fps).

[00332] Referring again to Fig. 12A, operation 1004 may include operation 1206 depicting selecting the particular portion of the scene that includes a requested image, wherein the selected particular portion is smaller than the scene. For example, Fig. 7, e.g., Fig. 7A, shows scene particular portion that is smaller than the scene and that includes at least one requested image selecting module 706 selecting the particular portion (e.g., a particular restaurant, e.g., Ray's Steakhouse, and a particular person, e.g., President Obama) of the scene (e.g., a street corner during live street view) that includes a requested image, wherein the selected particular portion (e.g., the steakhouse and the President) is smaller than the scene (e.g., a city block where the steakhouse is located).

[00333] Referring again to Fig. 12A, operation 1206 may include operation 1208 depicting selecting the particular portion of the scene that includes an image requested by a remote operator of the array of more than one image sensors, wherein the selected particular portion is smaller than the scene. For example, Fig. 7, e.g., Fig. 7A, shows scene particular portion that is smaller than the scene and that includes at least one remote-operator requested image selecting module 708 selecting the particular portion of the scene that includes an image requested by a remote operator of the array (e.g., by "remote operator" here it is meant the remote operator selects from the images available from the scene, giving the illusion of "zooming" and "panning"; in other embodiments, the array of image sensors or the individual image sensors may be moved by remote command, if so equipped (e.g., on movable platforms or mounted on UAVs or satellites, or if each image sensor is wired to hydraulics or servos)) of more than one image sensors, wherein the selected particular portion is smaller than the scene.

[00334] Referring again to Fig. 12A, operation 1004 may include operation 1210 depicting receiving a request for a particular image. For example, Fig. 7, e.g., Fig. 7A, shows particular image request receiving module 710 receiving a request for a particular image (e.g., an image of a soccer player on the field as a game is going on).

[00335] Referring again to Fig. 12A, operation 1004 may include operation 1212, which may appear in conjunction with operation 1210, operation 1212 depicting selecting the particular image from the scene, wherein the particular image is smaller than the scene. For example, Fig. 7, e.g., Fig. 7A, shows received request for particular image selecting from scene module 712 selecting the particular image (e.g., the player on the soccer field) from the scene (e.g., the soccer field and the stadium, and, e.g., the surrounding parking lots), wherein the particular image (e.g., the image of the player) is smaller than (e.g., is expressed in fewer pixels than) the scene (e.g., the soccer field and the stadium, and, e.g., the surrounding parking lots).

[00336] Referring now to Fig. 12B, operation 1004 may include operation 1214 depicting receiving a first request for a first particular image and a second request for a second particular image. For example, Fig. 7, e.g., Fig. 7B, shows first request for a first particular image and second request for a second particular image receiving module 714 receiving a first request for a first particular image (e.g., a quarterback player on the football field) and a second request for a second particular image (e.g., a defensive lineman on the football field).

[00337] Referring again to Fig. 12B, operation 1004 may include operation 1216, which may appear in conjunction with operation 1214, operation 1216 depicting selecting the particular portion of the scene that includes the first particular image and the second particular image, wherein the particular portion of the scene is smaller than the scene. For example, Fig. 7, e.g., Fig. 7B, shows scene particular portion that includes the first particular image and the second particular image selecting module 716 selecting the particular portion of the scene that includes the first particular image (e.g., the area at which the quarterback is standing) and the second particular image (e.g., the area at which the defensive lineman is standing), wherein the particular portion of the scene is smaller than the scene. In an embodiment, the detection of the image that contains the quarterback is done by automation. In another embodiment, the user selects the person they wish to follow (e.g., by voice), and the system tracks that person as they move through the scene, capturing that person as the "particular image" regardless of their location in the scene.

[00338] Referring again to Fig. 12B, operation 1214 may include operation 1218 depicting receiving the first request for the first particular image and the second request for the second particular image, wherein the first particular image and the second particular image do not overlap. For example, Fig. 7, e.g., Fig. 7B, shows first request for a first particular image and second request for a second particular image that does not overlap the first particular image receiving module 718 receiving the first request for the first particular image (e.g., a request to see the first exhibit in a museum) and the second request for the second particular image (e.g., a request to see the last exhibit in the museum), wherein the first particular image and the second particular image do not overlap (e.g., the first particular image and the second particular image are both part of the scene, but do not share any common pixels).

[00339] Referring again to Fig. 12B, operation 1214 may include operation 1220 depicting receiving a first request for a first particular image and a second request for a second particular image, wherein the first particular image and the second particular image overlap at an overlapping portion. For example, Fig. 7, e.g., Fig. 7B, shows first request for a first particular image and second request for a second particular image that overlaps the first particular image receiving module 720 receiving a first request for a first particular image (e.g., a request to watch a particular animal at a watering hole) and a second request for a second particular image (e.g., a request to watch a different animal, e.g., an alligator), wherein the first particular image and the second particular image overlap at an overlapping portion (e.g., a portion of the pixels in the first particular image are the same ones as used in the second particular image).

[00340] Referring again to Fig. 12B, operation 1220 may include operation 1222 depicting receiving the first request for the first particular image and the second request for the second particular image, wherein the first particular image and the second particular image overlap at the overlapping portion, and wherein the overlapping portion is configured to be transmitted once only. For example, Fig. 7, e.g., Fig. 7B, shows first request for a first particular image and second request for a second particular image that overlaps the first particular image and an overlapping portion is configured to be transmitted once only receiving module 722 receiving the first request for the first particular image (e.g., a request to watch a particular animal at a watering hole) and the second request for the second particular image (e.g., a request to watch a different animal, e.g., an alligator), wherein the first particular image and the second particular image overlap at an overlapping portion (e.g., a portion of the pixels in the first particular image are the same ones as used in the second particular image), and wherein the overlapping portion is configured to be transmitted once only (e.g., the shared pixels are transmitted once only to a remote server, where they are used to transmit both the first particular image and the second particular image to their ultimate destinations).
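
By way of example and not limitation, determining the overlapping portion that need be transmitted only once may be sketched as a rectangle intersection; the region tuples and names below are illustrative:

    def overlap(a, b):
        # Regions are (x, y, width, height); returns the shared rectangle,
        # or None if the two requests share no pixels.
        ax, ay, aw, ah = a
        bx, by, bw, bh = b
        x1, y1 = max(ax, bx), max(ay, by)
        x2 = min(ax + aw, bx + bw)
        y2 = min(ay + ah, by + bh)
        if x2 <= x1 or y2 <= y1:
            return None
        return (x1, y1, x2 - x1, y2 - y1)

    first = (100, 100, 800, 600)     # e.g., the first requested animal
    second = (700, 300, 800, 600)    # e.g., the alligator
    shared = overlap(first, second)  # (700, 300, 200, 400): sent a single time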

[00341] Referring now to Fig. 12C, operation 1004 may include operation 1224 depicting selecting the particular portion of the scene that includes at least one image, wherein the selected particular portion is smaller than the scene and contains a particular image object. For example, Fig. 7, e.g., Fig. 7C, shows scene particular portion that is smaller than the scene and that contains a particular image object selecting module 724 selecting the particular portion (e.g., a person walking down the street) of the scene (e.g., a street view of a busy intersection) that includes at least one image (e.g., the image of the person), wherein the selected particular portion is smaller than the scene and contains a particular image object (e.g., the person walking down the street).

[00342] Referring again to Fig. 12C, operation 1224 may include operation 1226 depicting selecting the particular portion of the scene that includes at least one image, wherein the selected particular portion is smaller than the scene and contains an image object of a person. For example, Fig. 7, e.g., Fig. 7C, shows scene particular portion that is smaller than the scene and that contains a particular image object that is a person selecting module 726 selecting the particular portion (e.g., a person walking down the street) of the scene (e.g., a street view of a busy intersection) that includes at least one image (e.g., the image of the person), wherein the selected particular portion is smaller than the scene and contains an image object of a person (e.g., the person walking down the street).

[00343] Referring again to Fig. 12C, operation 1224 may include operation 1228 depicting selecting the particular portion of the scene that includes at least one image, wherein the selected particular portion is smaller than the scene and contains an image object of a car. For example, Fig. 7, e.g., Fig. 7C, shows scene particular portion that is smaller than the scene and that contains a particular image object that is a vehicle selecting module 728 selecting the particular portion (e.g., an area of a bridge) of the scene (e.g., a highway bridge) that includes at least one image (e.g., an image of a car, with license plates and silhouettes of occupants of the car), wherein the selected particular portion is smaller (e.g., occupies less space in memory) than the scene and contains an image object of a car.

[00344] Referring again to Fig. 12C, operation 1004 may include operation 1230 depicting selecting the particular portion of the scene that includes at least one image, wherein a size of the particular portion of the scene is at least partially based on a characteristic of a requesting device. For example, Fig. 7, e.g., Fig. 7C, shows scene particular portion that is smaller than the scene and is size-defined by a characteristic of a requesting device selecting module 730 selecting the particular portion of the scene (e.g., a picture of a lion at a scene of a watering hole) that includes at least one image (e.g., a picture of a lion), wherein a size of the particular portion (e.g., a number of pixels) of the scene (e.g., the watering hole) is at least partially based on a characteristic (e.g., an available bandwidth) of a requesting device (e.g., a smartphone device).

[00345] Referring again to Fig. 12C, operation 1004 may include operation 1232 depicting selecting the particular portion of the scene that includes at least one image, wherein the size of the particular portion of the scene is at least partially based on a screen resolution of the requesting device. For example, Fig. 7, e.g., Fig. 7C, shows scene particular portion that is smaller than the scene and is size-defined by a screen resolution of a requesting device selecting module 732 selecting the particular portion of the scene (e.g., a rock concert) that includes at least one image, wherein the size of the particular portion of the scene is at least partially based on a screen resolution (e.g., 1920 pixels by 1080 pixels, e.g., "HD" quality) of the requesting device (e.g., a smart TV).

[00346] Referring again to Fig. 12C, operation 1004 may include operation 1234 depicting selecting the particular portion of the scene that includes at least one image, wherein the size of the particular portion of the scene is at least partially based on a combined size of screens of at least one requesting device. For example, Fig. 7, e.g., Fig. 7C, shows scene particular portion that is smaller than the scene and is size-defined by a combined screen size of at least one requesting device selecting module 734 selecting the particular portion of the scene (e.g., a historic battlefield) that includes at least one image (e.g., particular areas of the battlefield), wherein the size (e.g., the number of pixels) of the particular portion of the scene is at least partially based on a combined size of screens of at least one requesting device (e.g., if there are five devices that are requesting 2000x1000 size images, then the particular portion may be 10000x1000 pixels (that is, 5x as large), less any overlap, for example).
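
By way of example and not limitation, the combined-screen sizing described above is simple arithmetic; a minimal sketch of the five-device example (ignoring any overlap between requested regions):

    def combined_portion(screens):
        # Screens are (width, height) pairs tiled side by side; any overlap
        # between the requested regions would further reduce the total.
        width = sum(w for w, h in screens)
        height = max(h for w, h in screens)
        return width, height

    print(combined_portion([(2000, 1000)] * 5))   # (10000, 1000)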

[00347] Figs. 13A-13D depict various implementations of operation 1006, depicting transmitting only the selected particular portion from the scene to a remote location, according to embodiments. Referring now to Fig. 13A, operation 1006 may include operation 1302 depicting transmitting only the selected particular portion to a remote server. For example, Fig. 8, e.g., Fig. 8A, shows selected particular portion transmitting to a remote server module 802 transmitting only the selected particular portion (e.g., an image of a drummer at a live show) to a remote server (e.g., a remote location that receives requests for various parts of the scene and transmits the requests to the camera array).

[00348] Referring again to Fig. 13A, operation 1302 may include operation 1304 depicting transmitting only the selected particular portion to the remote server that is configured to receive one or more requests for one or more particular images from the scene. For example, Fig. 8, e.g., Fig. 8A, shows selected particular portion transmitting to a remote server configured to receive particular image requests module 804 transmitting only the selected particular portion to the remote server that is configured to receive one or more requests for one or more particular images (e.g., points of interest in a virtual tourism setting) from the scene.

[00349] Referring again to Fig. 13A, operation 1304 may include operation 1306 depicting transmitting only the selected particular portion to the remote server that is configured to receive multiple requests from discrete users for multiple particular images from the scene. For example, Fig. 8, e.g., Fig. 8A, shows selected particular portion transmitting to a remote server configured to receive particular discrete image requests module 806 transmitting only the selected particular portion (the portion that includes the areas designated by discrete users as ones to watch) to the remote server that is configured to receive multiple requests from discrete users for multiple images (e.g., each discrete user may want to view a different player in a game or a different area of a field for a football game) from the scene (e.g., a football game played inside a stadium).

[00350] Referring again to Fig. 13A, operation 1302 may include operation 1308 depicting transmitting only the selected particular portion to the remote server that requested the selected particular portion from the image. For example, Fig. 8, e.g., Fig. 8A, shows selected particular portion transmitting to a remote server that requested a particular image module 808 transmitting only the selected particular portion to the remote server that requested the selected particular portion from the image.

[00351] Referring now to Fig. 13B, operation 1006 may include operation 1310 depicting transmitting only the selected particular portion from the scene at a particular resolution. For example, Fig. 8, e.g., Fig. 8B, shows selected particular portion transmitting at a particular resolution module 810 transmitting only the selected particular portion (e.g., an image that the user selected) from the scene (e.g., a virtual tourism scene of the Eiffel Tower) at a particular resolution (e.g., 1920 x 1080 pixels, e.g., "HD" resolution).

[00352] Referring again to Fig. 13B, operation 1310 may include operation 1312 depicting determining an available bandwidth for transmission to the remote location. For example, Fig. 8, e.g., Fig. 8B, shows available bandwidth to remote location determining module 812 determining an available bandwidth (e.g., how much data can be transmitted over a particular network at a particular time, e.g., whether compensating for conditions or component-based) for transmission to the remote location (e.g., a remote server that handles requests from users).

[00353] Referring again to Fig. 13B, operation 1310 may include operation 1314, which may appear in conjunction with operation 1312, operation 1314 depicting transmitting only the selected particular portion from the scene at the particular resolution that is at least partially based on the determined available bandwidth. For example, Fig. 8, e.g., Fig. 8B, shows selected particular portion transmitting at a particular resolution based on determined available bandwidth module 814 transmitting only the selected particular portion (e.g., an image of a lion) from the scene (e.g., a watering hole) at the particular resolution (e.g., resolution the size of a web browser that the user is using to watch the lion) that is at least partially based on the determined available bandwidth (e.g., as the bandwidth decreases, the resolution also decreases).
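
By way of example and not limitation, scaling the delivered resolution with the determined bandwidth may be sketched as follows; the frame rate and bits-per-pixel figures are assumptions chosen only for the illustration:

    def fit_resolution(native_w, native_h, bandwidth_bps,
                       frame_rate=30, bits_per_pixel=24):
        # Shrink both dimensions equally until a frame fits the link budget.
        budget_px = bandwidth_bps / (frame_rate * bits_per_pixel)
        scale = min(1.0, (budget_px / (native_w * native_h)) ** 0.5)
        return int(native_w * scale), int(native_h * scale)

    # As the available bandwidth decreases, so does the delivered resolution.
    print(fit_resolution(1920, 1080, 50_000_000))   # larger delivered frame
    print(fit_resolution(1920, 1080, 5_000_000))    # smaller delivered frame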

[00354] Referring again to Fig. 13B, operation 1310 may include operation 1316 depicting transmitting only the selected particular portion from the scene at the particular resolution, wherein the particular resolution is less than a resolution at which the scene was captured. For example, Fig. 8, e.g., Fig. 8B, shows selected particular portion transmitting at a particular resolution less than a scene resolution module 816 transmitting only the selected particular portion from the scene (e.g., a picture of a busy street from a street view) at the particular resolution, wherein the particular resolution is less than a resolution at which the scene was captured.

[00355] Referring again to Fig. 13B, operation 1310 may include operation 1318 depicting transmitting only the selected particular portion from the scene at the particular resolution, wherein the particular resolution is less than a resolution at which the selected particular portion was captured. For example, Fig. 8, e.g., Fig. 8B, shows selected particular portion transmitting at a particular resolution less than a captured particular portion resolution module 818 transmitting only the selected particular portion from the scene (e.g., a picture of a busy street from a live street view), wherein the particular resolution is less than a resolution at which the particular portion (e.g., a specific person walking across the street, or a specific car (e.g., a Lamborghini) parked on the corner) was captured.

[00356] Referring now to Fig. 13C, operation 1006 may include operation 1320 depicting transmitting a first segment of the selected particular portion at a first resolution, to the remote location. For example, Fig. 8, e.g., Fig. 8C, shows first segment of selected particular portion transmitting at a first resolution module 820 transmitting a first segment (e.g., an exterior portion) of the selected particular portion (e.g., a view of a street) at a first resolution (e.g., at full high-definition resolution), to the remote location (e.g., to a server that is handling user requests for a virtual reality environment).

[00357] Referring again to Fig. 13C, operation 1006 may include operation 1322, which may appear in conjunction with operation 1320, operation 1322 depicting transmitting a second segment of the selected particular portion at a second resolution that is higher than the first resolution, to the remote location. For example, Fig. 8, e.g., Fig. 8C, shows second segment of selected particular portion transmitting at a second resolution module 822 transmitting a second segment of the selected particular portion (e.g., an interior portion, e.g., the portion that the user selected) at a second resolution that is higher than the first resolution, to the remote location (e.g., a remote server that is handling user requests and specifying to the image device which pixels are to be captured).

[00358] Referring again to Fig. 13C, operation 1320 may include operation 1323 depicting transmitting the first segment of the selected particular portion at the first resolution, wherein the first segment of the selected particular portion surrounds the second segment of the selected particular portion. For example, Fig. 8, e.g., Fig. 8C, shows first segment of selected particular portion that surrounds the second segment transmitting at a first resolution module 823 transmitting the first segment of the selected particular portion (e.g., a portion of the image that surrounds the portion requested by the user, e.g., a portion that an automated calculation has determined is likely to contain the lion from the scene of the watering hole at some point) at the first resolution, wherein the first segment of the selected particular portion surrounds (e.g., is around the second particular portion in at least two opposite directions) the second segment of the selected particular portion (e.g., the lion from the watering hole, e.g., the portion requested by the user).
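
By way of example and not limitation, a first segment that surrounds the user-selected second segment may be computed as a clipped border ring; the margin and scene dimensions below are illustrative:

    def surrounding_segment(selected, margin, scene_w, scene_h):
        # Expand the selected (x, y, width, height) rectangle by a margin,
        # clipped to the scene; the inner box is sent at the higher resolution.
        x, y, w, h = selected
        x0, y0 = max(0, x - margin), max(0, y - margin)
        x1 = min(scene_w, x + w + margin)
        y1 = min(scene_h, y + h + margin)
        return (x0, y0, x1 - x0, y1 - y0)

    lion = (3200, 1800, 640, 480)                        # user-selected box
    ring = surrounding_segment(lion, 320, 10000, 10000)  # (2880, 1480, 1280, 1120)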

[00359] Referring again to Fig. 13C, operation 1320 may include operation 1324 depicting transmitting the first segment of the selected particular portion at the first resolution, wherein the first segment of the selected particular portion borders the second segment of the selected particular portion. For example, Fig. 8, e.g., Fig. 8C, shows first segment of selected particular portion that borders the second segment transmitting at a first resolution module 824 transmitting the first segment of the selected particular portion (e.g., an area of a live street view that a person is walking towards) at the first resolution, wherein the first segment of the selected particular portion borders the second segment of the selected particular portion (e.g., the second segment includes the person the user wants to watch, and the first segment, which borders the second segment, is where the device automation is calculating that the person will be, based on a direction the person is moving).

[00360] Referring again to Fig. 13C, operation 1320 may include operation 1326 depicting transmitting the first segment of the selected particular portion at the first resolution, wherein the first segment of the selected particular portion is determined at least in part by a content of the selected particular portion. For example, Fig. 8, e.g., Fig. 8C, shows first segment of selected particular portion that is determined by selected particular portion content transmitting at a first resolution module 826 transmitting the first segment of the selected particular portion (e.g., the segment surrounding a soccer player at a game) at the first resolution, wherein the first segment of the selected particular portion is determined at least in part by a content (e.g., the first segment does not contain the person of interest, e.g., the soccer player) of the selected particular portion.

[00361] Referring again to Fig. 13C, operation 1326 may include operation 1328 depicting transmitting the first segment of the selected particular portion at the first resolution, wherein the first segment of the selected particular portion is an area that does not contain an item of interest. For example, Fig. 8, e.g., Fig. 8C, shows first segment of selected particular portion that is determined as not containing an item of interest transmitting at a first resolution module 828 transmitting the first segment of the selected particular portion at the first resolution, wherein the first segment of the selected particular portion is an area that does not contain an item of interest (e.g., no animal activity at the watering hole in the first segment of the selected particular portion, where the scene is a watering hole).

[00362] Referring again to Fig. 13C, operation 1328 may include operation 1330 depicting transmitting the first segment of the selected particular portion at the first resolution, wherein the first segment of the selected particular portion is an area that does not contain a selected person of interest. For example, Fig. 8, e.g., Fig. 8C, shows first segment of selected particular portion that is determined as not containing a person of interest transmitting at a first resolution module 830 transmitting the first segment of the selected particular portion at the first resolution, wherein the first segment of the selected particular portion is an area that does not contain a selected person of interest (e.g., a user is watching the football game and wants to see the quarterback, and the first segment does not contain the quarterback, but may contain a lineman, a receiver, or a running back).

[00363] Referring again to Fig. 13C, operation 1328 may include operation 1332 depicting transmitting the first segment of the selected particular portion at the first resolution, wherein the first segment of the selected particular portion is an area that does not contain a selected object designated for tracking. For example, Fig. 8, e.g., Fig. 8C, shows first segment of selected particular portion that is determined as not containing an object designated for tracking transmitting at a first resolution module 832 transmitting the first segment of the selected particular portion at the first resolution (e.g., 1920x1080, e.g., HD resolution), wherein the first segment of the selected particular portion is an area that does not contain a selected object designated for tracking (e.g., in a virtual safari, tracking an elephant moving across the plains, and the first segment does not contain the elephant, but may be a prediction about where the elephant will be).
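
By way of example and not limitation, predicting where a tracked object will be, so that the low-resolution first segment can be placed ahead of it, may be sketched with a simple linear extrapolation (the positions and step count are illustrative, and a deployed tracker could use a more sophisticated motion model):

    def predict_position(prev, curr, steps=1):
        # Extrapolate (x, y) pixel positions linearly from the last two fixes.
        vx, vy = curr[0] - prev[0], curr[1] - prev[1]
        return curr[0] + vx * steps, curr[1] + vy * steps

    print(predict_position((1000, 400), (1040, 410), steps=5))   # (1240, 460)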

[00364] Referring now to Fig. 13D, operation 1322 may include operation 1334 depicting transmitting the second segment of the selected particular portion at the second resolution that is higher than the first resolution, wherein the second segment of the selected particular portion represents an area selected by a user, and the first segment of the selected particular portion represents a border area that borders the area selected by the user. For example, Fig. 8, e.g., Fig. 8C, shows second segment that is a user-selected area of selected particular portion transmitting at a second resolution that is higher than the first resolution module 834 transmitting the second segment of the selected particular portion at the second resolution that is higher than the first resolution, wherein the second segment of the selected particular portion represents an area selected by a user, and the first segment of the selected particular portion represents a border area that borders the area selected by the user.

[00365] Referring again to Fig. 13D, operation 1322 may include operation 1336 depicting transmitting the second segment of the selected particular portion at the second resolution that is higher than the first resolution, wherein the second segment of the selected particular portion is surrounded by the first segment of the selected particular portion. For example, Fig. 8, e.g., Fig. 8C, shows second segment that is surrounded by the first segment transmitting at a second resolution that is higher than the first resolution module 836 transmitting the second segment of the selected particular portion (e.g., a person walking down the street in a live street view) at the second resolution (e.g., 1920x1080 pixel count at 64-bit depth) that is higher than the first resolution (e.g., 1920x1080 pixel count at 8-bit depth), wherein the second segment of the selected particular portion is surrounded by the first segment of the selected particular portion.

[00366] Referring again to Fig. 13D, operation 1322 may include operation 1338 depicting transmitting the second segment of the selected particular portion at the second resolution that is higher than the first resolution, wherein the second segment of the selected particular portion is determined to contain a selected item of interest. For example, Fig. 8, e.g., Fig. 8C, shows second segment of selected particular portion that contains an object of interest transmitting at the second resolution module 838 transmitting the second segment of the selected particular portion (e.g., on a security camera array, the second segment is the portion that contains the person that is walking around the building at night) at the second resolution (e.g., 640x480 pixel resolution) that is higher than the first resolution (e.g., 320x200 pixel resolution), wherein the second segment of the selected particular portion is determined to contain a selected item of interest (e.g., a person or other moving thing (e.g., animal, robot, car) moving around a perimeter of a building).

[00367] Referring again to Fig. 13D, operation 1338 may include operation 1340 depicting transmitting the second segment of the selected particular portion at the second resolution that is higher than the first resolution, wherein the second segment of the selected particular portion is determined to contain a selected football player. For example, Fig. 8, e.g., Fig. 8D, shows second segment of selected particular portion that contains an object of interest that is a football player transmitting at the second resolution module 840 transmitting the second segment of the selected particular portion at the second resolution (e.g., 3840x2160, e.g., "4K resolution") that is higher than the first resolution (e.g., 640x480 resolution), wherein the second segment of the selected particular portion is determined to contain a selected football player.

[00368] Referring again to Fig. 13D, operation 1338 may include operation 1342 depicting transmitting the second segment of the selected particular portion at the second resolution that is higher than the first resolution, wherein the second segment of the selected particular portion is determined to contain a selected landmark. For example, Fig. 8, e.g., Fig. 8D, shows second segment of selected particular portion that contains an object of interest that is a landmark transmitting at the second resolution module 842 transmitting the second segment of the selected particular portion (e.g., a nose of the Sphinx, where the selected particular portion also includes the first segment which is the area around the Sphinx's nose) at the second resolution (e.g., 2560x1400) that is higher than the first resolution (e.g., 1920x1080), wherein the second segment of the selected particular portion is determined to contain a selected landmark (e.g., the Sphinx's nose).

[00369] Referring again to Fig. 13D, operation 1338 may include operation 1344 depicting transmitting the second segment of the selected particular portion at the second resolution that is higher than the first resolution, wherein the second segment of the selected particular portion is determined to contain a selected animal for observation. For example, Fig. 8, e.g., Fig. 8D, shows second segment of selected particular portion that contains an object of interest that is an animal transmitting at the second resolution module 844 transmitting the second segment of the selected particular portion at the second resolution (e.g., 640x480) that is higher than the first resolution (e.g., 320x240), wherein the second segment of the selected particular portion is determined to contain a selected animal (e.g., a panda bear at an oasis) for observation (e.g., a user has requested to see the panda bear).

[00370] Referring again to Fig. 13D, operation 1338 may include operation 1346 depicting transmitting the second segment of the selected particular portion at the second resolution that is higher than the first resolution, wherein the second segment of the selected particular portion is determined to contain a selected object in a dwelling. For example, Fig. 8, e.g., Fig. 8D, shows second segment of selected particular portion that contains an object of interest that is a selected object in a dwelling transmitting at the second resolution module 846 transmitting the second segment of the selected particular portion (e.g., the second segment contains the refrigerator) at the second resolution (e.g., full HD resolution) that is higher than the first resolution (e.g., the first resolution is 60% of the second resolution), wherein the second segment of the selected particular portion is determined to contain a selected object (e.g., a refrigerator) in a dwelling.

[00371] Figs. 14A-14D depict various implementations of operation 1008, depicting de- emphasizing pixels from the scene that are not part of the selected particular portion of the scene, according to embodiments. Referring now to Fig. 14A, operation 1008 may include operation 1402 depicting transmitting only pixels associated with the selected particular portion to the remote location. For example, Fig. 9, e.g., Fig. 9A, shows scene pixel nonselected nontransmitting module 902 transmitting only pixels associated with the selected particular portion to the remote location (e.g., a remote server that handles access requests and determines which portions of the captured scene will be transmitted).

[00372] Referring again to Fig. 14A, operation 1008 may include operation 1404 depicting transmitting only the selected particular portion from the scene to a particular device that requested the selected particular portion. For example, Fig. 9, e.g., Fig. 9A, shows scene pixel exclusive pixels selected from the particular portion transmitting to a particular device module 904 transmitting only the selected particular portion (e.g., a person wheeling a shopping cart out of a discount store) from the scene (e.g., an exit area of a discount store at which people's receipts are compared against the items they have purchased) to a particular device (e.g., a television screen of a remote manager who is not on site at the discount store) that requested the selected particular portion.

[00373] Referring again to Fig. 14A, operation 1404 may include operation 1406 depicting transmitting only the selected particular portion from the scene to a particular device operated by a user that requested the selected particular portion. For example, Fig. 9, e.g., Fig. 9A, shows scene pixel exclusive pixels selected from the particular portion transmitting to a particular device that requested the particular portion module 906 transmitting only the selected particular portion (e.g., the picture of the panda from the zoo) from the scene (e.g., the panda area at a zoo) to a particular device (e.g., a smartphone device) operated by a user that requested the selected particular portion (e.g., the panda area).

[00374] Referring again to Fig. 14A, operation 1404 may include operation 1408 depicting transmitting only the selected particular portion from the scene to a particular device that requested the selected particular portion through selection of the particular portion from the scene. For example, Fig. 9, e.g., Fig. 9A, shows scene pixel exclusive pixels selected from the particular portion transmitting to a particular device user that requested the particular portion module 908 transmitting only the selected particular portion (e.g., a drummer from a rock concert) from the scene (e.g., a concert venue where a rock concert is taking place) to a particular device (e.g., a television set in a person's house over a thousand miles away) that requested the selected particular portion (e.g., the drummer from the rock concert) through selection of the particular portion from the scene (e.g., the person watching the television used the remote to navigate through the scene and draw a box around the drummer, then gave the television a verbal command to focus on the drummer, where the verbal command was picked up by the television remote and translated into a command to the television, which transmitted the command to a remote server, which caused the drummer of the rock band to be selected as the selected particular portion).

[00375] Referring again to Fig. 14A, operation 1008 may include operation 1410 depicting deleting pixels from the scene that are not part of the selected particular portion of the scene. For example, Fig. 9, e.g., Fig. 9A, shows scene nonselected pixel deleting module 910 deleting (e.g., not storing in permanent memory) pixels from the scene that are not part of the selected particular portion of the scene. It is noted here that "deleting" does not necessarily imply an active "deletion" of the pixels out of memory, but rather may include allowing the pixels to be overwritten or otherwise not retained, without an active step.

[00376] Referring now to Fig. 14B, operation 1008 may include operation 1412 depicting indicating that pixels from the scene that are not part of the selected particular portion of the scene are not transmitted with the selected particular portion. For example, Fig. 9, e.g., Fig. 9B, shows scene nonselected pixels indicating as nontransmitted module 912 indicating that pixels from the scene that are not part of the selected particular portion of the scene (e.g., a concert venue during a show) are not transmitted with the selected particular portion (e.g., a selection of the drummer in a band).

[00377] Referring again to Fig. 14B, operation 1412 may include operation 1414 depicting appending data to pixels from the scene that are not part of the selected portion of the scene, said appended data configured to indicate non-transmission of the pixels to which data was appended. For example, Fig. 9, e.g., Fig. 9B, shows scene nonselected pixels appending data that indicates nontransmission module 914 appending data to pixels (e.g., a "flag," e.g., setting a bit to zero rather than one) from the scene that are not part of the selected portion of the scene, said appended data (e.g., the bit is set to zero) configured to indicate non-transmission of the pixels to which data was appended (e.g., only those pixels to which the appended bit is "1" will be transmitted).
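
A minimal sketch of the appended-bit scheme of operation 1414 (Python with NumPy; the one-bit-per-pixel mask layout is an illustrative assumption) follows, in which only pixels whose bit is set to 1 are gathered for transmission:

```python
import numpy as np

def mark_for_transmission(scene_shape, top, left, height, width):
    mask = np.zeros(scene_shape[:2], dtype=np.uint8)  # all bits start at 0
    mask[top:top + height, left:left + width] = 1     # selected pixels -> 1
    return mask

def transmit(scene, mask):
    # Only pixels whose appended bit is 1 are collected for transmission.
    return scene[mask.astype(bool)]

scene = np.zeros((1080, 1920, 3), dtype=np.uint8)
mask = mark_for_transmission(scene.shape, top=100, left=200,
                             height=50, width=80)
payload = transmit(scene, mask)   # 50 x 80 pixels, nothing else
```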

[00378] Referring again to Fig. 14B, operation 1008 may include operation 1416 depicting discarding pixels from the scene that are not part of the selected particular portion of the scene. For example, Fig. 9, e.g., Fig. 9B, shows scene nonselected pixels discarding module 916 discarding (e.g., not making any particular attempt to save in a manner which would allow ease of retrieval) pixels from the scene (e.g., a scene of a football stadium) that are not part of the selected particular portion of the scene (e.g., the quarterback has been selected).

[00379] Referring again to Fig. 14B, operation 1008 may include operation 1418 depicting denying retention of pixels from the scene that are not part of the selected particular portion of the scene. For example, Fig. 9, e.g., Fig. 9B, shows scene nonselected retention preventing module 918 denying retention (e.g., preventing transmission to a remote server or long term storage) of pixels from the scene (e.g., a scene of a desert oasis) that are not part of the selected particular portion of the scene (e.g., are not being "watched" by any users or that have not been instructed to be retained).

[00380] Referring now to Fig. 14C, operation 1008 may include operation 1420 depicting retaining a subset of pixels from the scene that are not part of the selected particular portion of the scene. For example, Fig. 9, e.g., Fig. 9C, shows scene pixel subset retaining module 920 retaining a subset of pixels (e.g., edge pixels and 25% sampling inside color blocks) from the scene (e.g., a virtual tourism of the Sphinx in Egypt) that are not part of the selected particular portion of the scene (e.g., the nose of the Sphinx).

[00381] Referring again to Fig. 14C, operation 1420 may include operation 1422 depicting retaining ten percent of the pixels from the scene that are not part of the selected particular portion of the scene. For example, Fig. 9, e.g., Fig. 9C, shows scene pixel ten percent subset retaining module 922 retaining ten percent (e.g., one in ten) of the pixels from the scene (e.g., a scene of an interior of a grocery store) that are not part of the selected particular portion of the scene (e.g., a specific shopper whom the owner has identified as a potential shoplifter).
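
A non-limiting sketch of operation 1422 (Python with NumPy; the strided "one in ten" rule is one of several plausible readings) keeps every tenth non-selected pixel and stores nothing else:

```python
import numpy as np

def retain_one_in_ten(nonselected_pixels):
    # Flatten to a list of pixels and keep every tenth one ("one in
    # ten"); the remaining ninety percent are simply not stored.
    flat = nonselected_pixels.reshape(-1, nonselected_pixels.shape[-1])
    return flat[::10].copy()

background = np.zeros((500, 500, 3), dtype=np.uint8)
retained = retain_one_in_ten(background)   # 25,000 of 250,000 pixels
```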

[00382] Referring again to Fig. 14C, operation 1420 may include operation 1424 depicting retaining pixels from the scene that have been identified as part of a targeted object that are not part of the selected particular portion of the scene. For example, Fig. 9, e.g., Fig. 9C, shows scene pixel targeted object subset retaining module 924 retaining pixels from the scene that have been identified as part of a targeted object (e.g., a particular person of interest, e.g., a popular football player on a football field, or a person flagged by the government as a spy outside of a warehouse) that are not part of the selected particular portion of the scene (e.g., a person using the camera array is not watching that particular person).

[00383] Referring again to Fig. 14C, operation 1424 may include operation 1426 depicting retaining pixels from the scene that have been identified as a scenic landmark of interest that is not part of the selected particular portion of the scene. For example, Fig. 9, e.g., Fig. 9C, shows scene pixel targeted scenic landmark object subset retaining module 926 retaining pixels from the scene that have been identified as a scenic landmark of interest that is not part of the selected particular portion of the scene (e.g., a live street view of Yellowstone National Park).

[00384] Referring again to Fig. 14C, operation 1424 may include operation 1428 depicting retaining pixels from the scene that have been identified as part of a targeted object through automated pattern recognition that are not part of the selected particular portion of the scene. For example, Fig. 9, e.g., Fig. 9C, shows scene pixel targeted automation identified object subset retaining module 928 retaining pixels from the scene (e.g., a virtual tourism scene through a national monument) that have been identified as part of a targeted object (e.g., part of the monument) through automated pattern recognition (e.g., automated analysis performed on the image to recognize known shapes and/or patterns, including faces, bodies, persons, objects, cars, structures, tools, etc.) that are not part of the selected particular portion (e.g., the selected particular portion was a different part of the monument) of the scene.
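
The following non-limiting sketch of operation 1428 (Python with NumPy) retains pixels inside detector-reported boxes even though they fall outside the selected particular portion; detect_targets is a hypothetical stand-in for any automated pattern recognizer, not a real library call:

```python
import numpy as np

def detect_targets(scene):
    # Hypothetical stand-in for automated pattern recognition (e.g., a
    # face or object detector); returns (top, left, height, width) boxes.
    return [(40, 60, 120, 90)]  # placeholder result for illustration

def retain_targeted_pixels(scene, selected_box):
    retained = []
    for (t, l, h, w) in detect_targets(scene):
        # Retain recognized objects even though they lie outside the
        # user-selected particular portion (selected_box).
        retained.append(((t, l, h, w), scene[t:t + h, l:l + w].copy()))
    return retained

scene = np.zeros((480, 640, 3), dtype=np.uint8)
kept = retain_targeted_pixels(scene, selected_box=(0, 0, 100, 100))
```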

[00385] Referring now to Fig. 14D, operation 1008 may include operation 1430 depicting storing pixels from the scene that are not part of the selected particular portion of the scene in a separate storage. For example, Fig. 9, e.g., Fig. 9D, shows scene pixel subset storing in separate storage module 930 storing pixels from the scene that are not part of the selected particular portion of the scene (e.g., pixels that are not part of the areas selected by the user for transmission to the user screen) in a separate storage (e.g., a local storage attached to a device that houses the array of image sensors).

[00386] Referring again to Fig. 14D, operation 1430 may include operation 1432 depicting storing pixels from the scene that are not part of the selected particular portion of the scene in a separate storage that is local to the array of more than one image sensor. For example, Fig. 9, e.g., Fig. 9D, shows scene pixel subset storing in separate local storage module 932 storing pixels from the scene (e.g., a live street view of a busy intersection in Washington, DC) that are not part of the selected particular portion of the scene (e.g., that are not part of the selected person or area of interest) in a separate storage (e.g., a local memory, e.g., a hard drive that is connected to a processor that receives data from the array of image sensors) that is local to the array of more than one image sensor (e.g., is in the same general vicinity, without specifying a specific type of connection).

[00387] Referring again to Fig. 14D, operation 1430 may include operation 1434 depicting storing pixels from the scene that are not part of the selected particular portion of the scene in a separate storage, wherein pixels stored in the separate storage are configured to be transmitted to the remote location separately from the selected particular portion of the scene. For example, Fig. 9, e.g., Fig. 9D, shows scene pixel subset storing in separate storage for separate transmission module 934 storing pixels from the scene that are not part of the selected particular portion of the scene in a separate storage, wherein pixels stored in the separate storage are configured to be transmitted to the remote location (e.g., a remote server) separately from the selected particular portion of the scene (e.g., at a different time, or through a different communications channel or medium, or at a different bandwidth, transmission speed, error checking rate, etc.).

[00388] Referring again to Fig. 14D, operation 1434 may include operation 1436 depicting storing pixels from the scene that are not part of the selected particular portion of the scene in a separate storage, wherein pixels stored in the separate storage are configured to be transmitted to the remote location at a time when there are no selected particular portions to be transmitted. For example, Fig. 9, e.g., Fig. 9D, shows scene pixel subset storing in separate storage for separate transmission at off-peak time module 936 storing pixels from the scene (e.g., a waterfall scene) that are not part of the selected particular portion of the scene (e.g., parts that are not the selected animals drinking at the bottom of the waterfall) in a separate storage (e.g., a local memory, e.g., a solid state memory that is local to the array of image sensors), wherein pixels stored in the separate storage (e.g., the solid state memory) are configured to be transmitted to the remote location (e.g., a remote server, which will determine if the pixels have use for training pattern detection algorithms or using as caching copies, or determining hue and saturation values for various parts of the scene) at a time when there are no selected particular portions to be transmitted.

[00389] Referring again to Fig. 14D, operation 1434 may include operation 1438 depicting storing pixels from the scene that are not part of the selected particular portion of the scene in a separate storage, wherein pixels stored in the separate storage are configured to be separately transmitted to the remote location based on assigning a lower priority to the stored pixels. For example, Fig. 9, e.g., Fig. 9D, shows scene pixel subset storing in separate storage for separate transmission as lower-priority data module 938 storing (e.g., in a separate storage, e.g., in a local disk drive or memory that is local to the array of image sensors) pixels from the scene that are not part of the selected particular portion (e.g., not requested to be transmitted to an external location) of the scene in a separate storage (e.g., a local disk drive or other memory that is local to the array of image sensors), wherein pixels stored in the separate storage are configured to be separately transmitted to the remote location (e.g., to a server, which will analyze the content and determine if any of the content is useful for caching or image analysis, e.g., pattern recognition, training automated recognition algorithms, etc.) based on assigning a lower priority to the stored pixels (e.g., the stored pixels will be sent when there is available bandwidth to the remote location that is not being used to transmit requested portions, e.g., the particular portion).
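
Operations 1434 through 1438 can be pictured with the following non-limiting priority-queue sketch (Python standard library only; the class name and priority labels are illustrative assumptions): requested portions go out first, and stored non-selected pixels are transmitted only when nothing of higher priority is queued:

```python
import heapq

HIGH, LOW = 0, 1  # smaller number = higher transmission priority

class TransmitQueue:
    def __init__(self):
        self._heap, self._counter = [], 0

    def put(self, priority, payload):
        heapq.heappush(self._heap, (priority, self._counter, payload))
        self._counter += 1

    def drain(self, send):
        # Sends everything in priority order; lower-priority stored
        # pixels leave only after all requested portions have been sent.
        while self._heap:
            _, _, payload = heapq.heappop(self._heap)
            send(payload)

q = TransmitQueue()
q.put(LOW, b"stored non-selected pixels")    # separate, deferred data
q.put(HIGH, b"selected particular portion")  # requested portion
q.drain(print)  # the selected portion is sent first
```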

[00390] It is noted that, in the foregoing examples, various concrete, real-world examples of terms that appear in the following claims are described. These examples are meant to be exemplary only and non-limiting. Moreover, any example of any term may be combined or added to any example of the same term in a different place, or a different term in a different place, unless context dictates otherwise.

[00391] All of the above U.S. patents, U.S. patent application publications, U.S. patent applications, foreign patents, foreign patent applications and non-patent publications referred to in this specification and/or listed in any Application Data Sheet, are incorporated herein by reference, to the extent not inconsistent herewith.

[00392] The foregoing detailed description has set forth various embodiments of the devices and/or processes via the use of block diagrams, flowcharts, and/or examples. Insofar as such block diagrams, flowcharts, and/or examples contain one or more functions and/or operations, it will be understood by those within the art that each function and/or operation within such block diagrams, flowcharts, or examples can be implemented, individually and/or collectively, by a wide range of hardware, software (e.g., a high-level computer program serving as a hardware specification), firmware, or virtually any combination thereof, limited to patentable subject matter under 35 U.S.C. 101. In an embodiment, several portions of the subject matter described herein may be implemented via Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs), digital signal processors (DSPs), or other integrated formats. However, those skilled in the art will recognize that some aspects of the embodiments disclosed herein, in whole or in part, can be equivalently implemented in integrated circuits, as one or more computer programs running on one or more computers (e.g., as one or more programs running on one or more computer systems), as one or more programs running on one or more processors (e.g., as one or more programs running on one or more microprocessors), as firmware, or as virtually any combination thereof, limited to patentable subject matter under 35 U.S.C. 101, and that designing the circuitry and/or writing the code for the software (e.g., a high-level computer program serving as a hardware specification) and/or firmware would be well within the skill of one skilled in the art in light of this disclosure. In addition, those skilled in the art will appreciate that the mechanisms of the subject matter described herein are capable of being distributed as a program product in a variety of forms, and that an illustrative embodiment of the subject matter described herein applies regardless of the particular type of signal bearing medium used to actually carry out the distribution. Examples of a signal bearing medium include, but are not limited to, the following: a recordable type medium such as a floppy disk, a hard disk drive, a Compact Disc (CD), a Digital Video Disk (DVD), a digital tape, a computer memory, etc.; and a transmission type medium such as a digital and/or an analog communication medium (e.g., a fiber optic cable, a waveguide, a wired communications link, a wireless communication link (e.g., transmitter, receiver, transmission logic, reception logic, etc.), etc.).

[00393] While particular aspects of the present subject matter described herein have been shown and described, it will be apparent to those skilled in the art that, based upon the teachings herein, changes and modifications may be made without departing from the subject matter described herein and its broader aspects and, therefore, the appended claims are to encompass within their scope all such changes and modifications as are within the true spirit and scope of the subject matter described herein. It will be understood by those within the art that, in general, terms used herein, and especially in the appended claims (e.g., bodies of the appended claims) are generally intended as "open" terms (e.g., the term "including" should be interpreted as "including but not limited to," the term "having" should be interpreted as "having at least," the term "includes" should be interpreted as "includes but is not limited to," etc.).

[00394] It will be further understood by those within the art that if a specific number of an introduced claim recitation is intended, such an intent will be explicitly recited in the claim, and in the absence of such recitation no such intent is present. For example, as an aid to understanding, the following appended claims may contain usage of the introductory phrases "at least one" and "one or more" to introduce claim recitations. However, the use of such phrases should not be construed to imply that the introduction of a claim recitation by the indefinite articles "a" or "an" limits any particular claim containing such introduced claim recitation to claims containing only one such recitation, even when the same claim includes the introductory phrases "one or more" or "at least one" and indefinite articles such as "a" or "an" (e.g., "a" and/or "an" should typically be interpreted to mean "at least one" or "one or more"); the same holds true for the use of definite articles used to introduce claim recitations. In addition, even if a specific number of an introduced claim recitation is explicitly recited, those skilled in the art will recognize that such recitation should typically be interpreted to mean at least the recited number (e.g., the bare recitation of "two recitations," without other modifiers, typically means at least two recitations, or two or more recitations).

[00395] Furthermore, in those instances where a convention analogous to "at least one of A, B, and C, etc." is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B, and C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.). In those instances where a convention analogous to "at least one of A, B, or C, etc." is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B, or C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.). It will be further understood by those within the art that typically a disjunctive word and/or phrase presenting two or more alternative terms, whether in the description, claims, or drawings, should be understood to contemplate the possibilities of including one of the terms, either of the terms, or both terms unless context dictates otherwise. For example, the phrase "A or B" will be typically understood to include the possibilities of "A" or "B" or "A and B."

[00396] With respect to the appended claims, those skilled in the art will appreciate that recited operations therein may generally be performed in any order. Also, although various operational flows are presented in a sequence(s), it should be understood that the various operations may be performed in other orders than those which are illustrated, or may be performed concurrently. Examples of such alternate orderings may include overlapping, interleaved, interrupted, reordered, incremental, preparatory, supplemental, simultaneous, reverse, or other variant orderings, unless context dictates otherwise. Furthermore, terms like "responsive to," "related to," or other past-tense adjectives are generally not intended to exclude such variants, unless context dictates otherwise.

[00397] This application may make reference to one or more trademarks, e.g., a word, letter, symbol, or device adopted by one manufacturer or merchant and used to identify and/or distinguish his or her product from those of others. Trademark names used herein are set forth in such language that makes clear their identity, that distinguishes them from common descriptive nouns, that have fixed and definite meanings, or, in many if not all cases, are accompanied by other specific identification using terms not covered by trademark. In addition, trademark names used herein have meanings that are well-known and defined in the literature, or do not refer to products or compounds for which knowledge of one or more trade secrets is required in order to divine their meaning. All trademarks referenced in this application are the property of their respective owners, and the appearance of one or more trademarks in this application does not diminish or otherwise adversely affect the validity of the one or more trademarks. All trademarks, registered or unregistered, that appear in this application are assumed to include a proper trademark symbol, e.g., the circle R or bracketed capitalization (e.g., [trademark name]), even when such trademark symbol does not explicitly appear next to the trademark. To the extent a trademark is used in a descriptive manner to refer to a product or process, that trademark should be interpreted to represent the corresponding product or process as of the date of the filing of this patent application.

[00398] Throughout this application, the terms "in an embodiment," "in one embodiment," "in several embodiments," "in at least one embodiment," "in various embodiments," and the like, may be used. Each of these terms, and all such similar terms, should be construed as "in at least one embodiment, and possibly but not necessarily all embodiments," unless explicitly stated otherwise. Specifically, unless explicitly stated otherwise, the intent of phrases like these is to provide non-exclusive and non-limiting examples of implementations of the invention. The mere statement that one, some, or many embodiments include one or more things or have one or more features does not imply that all embodiments include one or more things or have one or more features, but also does not imply that such embodiments must exist. It is a mere indicator of an example and should not be interpreted otherwise, unless explicitly stated as such.

[00399] Those skilled in the art will appreciate that the foregoing specific exemplary processes and/or devices and/or technologies are representative of more general processes and/or devices and/or technologies taught elsewhere herein, such as in the claims filed herewith and/or elsewhere in the present application.

STRUCTURED DISCLOSURE DIRECTED TOWARD ONE OF SKILL IN THE ART [START]

Those skilled in the art will appreciate the indentations and cross-referencing of the following as instructive. In general, each immediately following numbered paragraph in this "Structured Disclosure Directed Toward One Of Skill In The Art Section" should be treated as preceded by the word "Clause," as can be seen by the Clauses that cross-reference against them, and each cross-referencing paragraph is to be understood as permissively incorporating its parent clause(s) by reference.

As used herein, and in particular in the following thing/operation disclosures, the word "comprising" can generally be interpreted as "including but not limited to":

1. A computationally-implemented thing/operation disclosure, comprising:

capturing a scene that includes one or more images, through use of an array of more than one image sensor;

selecting a particular portion of the scene that includes at least one image, wherein the selected particular portion is smaller than the scene;

transmitting only the selected particular portion from the scene to a remote location; and

de-emphasizing pixels from the scene that are not part of the selected particular portion of the scene.

2. The computationally-implemented thing/operation disclosure of clause 1, wherein said capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

capturing the scene that includes the one or more images, through use of an array of image sensors.

3. The computationally-implemented thing/operation disclosure of clause 1, wherein said capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

capturing the scene that includes the one or more images, through use of two image sensors arranged side by side and angled toward each other.

4. The computationally-implemented thing/operation disclosure of clause 1, wherein said capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

capturing the scene that includes the one or more images, through use of the array of image sensors arranged in a grid.

5. The computationally-implemented thing/operation disclosure of clause 1, wherein said capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

capturing the scene that includes the one or more images, through use of the array of image sensors arranged in a line.

6. The computationally-implemented thing/operation disclosure of clause 5, wherein said capturing the scene that includes the one or more images, through use of the array of image sensors arranged in a line comprises:

capturing the scene that includes the one or more images, through use of the array of image sensors arranged in a line such that a field of view is greater than 120 degrees.

7. The computationally-implemented thing/operation disclosure of clause 5, wherein said capturing the scene that includes the one or more images, through use of the array of image sensors arranged in a line comprises:

capturing the scene that includes the one or more images, through use of the array of image sensors arranged in a line such that a field of view is 180 degrees.

8. The computationally-implemented thing/operation disclosure of clause 1, wherein said capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

capturing the scene that includes one or more images, through use of an array of more than one stationary image sensor.

9. The computationally-implemented thing/operation disclosure of clause 1, wherein said capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

capturing the scene that includes one or more images, through use of an array of more than one static image sensor.

10. The computationally-implemented thing/operation disclosure of clause 9, wherein said capturing the scene that includes one or more images, through use of an array of more than one static image sensor comprises:

capturing the scene that includes one or more images, through use of the array of more than one image sensor that has a fixed focal length and a fixed field of view.

11. The computationally-implemented thing/operation disclosure of clause 1, wherein said capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

capturing the scene that includes the one or more images, through use of an array of more than one image sensor mounted in a stationary location.

12. The computationally-implemented thing/operation disclosure of clause 1, wherein said capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

capturing the scene that includes one or more images, through use of an array of image sensors mounted on a movable platform.

13. The computationally-implemented thing/operation disclosure of clause 12, wherein said capturing the scene that includes one or more images, through use of an array of image sensors mounted on a movable platform comprises:

capturing the scene that includes one or more images, through use of an array of image sensors mounted on an unmanned aerial vehicle.

14. The computationally-implemented thing/operation disclosure of clause 1, wherein said capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

capturing the scene that includes one or more images, through use of an array of image sensors mounted on a satellite.

15. The computationally-implemented thing/operation disclosure of clause 1, wherein said capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

capturing the scene that includes one or more images, through use of an array of more than one image sensor, wherein the more than one image sensors each capture an image that represents a portion of the scene.

16. The computationally-implemented thing/operation disclosure of clause 15, wherein said capturing the scene that includes one or more images, through use of an array of more than one image sensor, wherein the more than one image sensors each capture an image that represents a portion of the scene comprises:

capturing the scene that includes one or more images, through use of the array of more than one image sensor, wherein the more than one image sensors each capture an image that represents a portion of the scene, and wherein the images are stitched together.

17. The computationally-implemented thing/operation disclosure of clause 1, wherein said capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

acquiring an image from each image sensor of the more than one image sensors; and

combining the acquired images from the more than one image sensors into the scene.

18. The computationally-implemented thing/operation disclosure of clause 17, wherein said acquiring an image from each image sensor of the more than one image sensors comprises:

acquiring images from each image sensor of the more than one image sensors, wherein each acquired image at least partially overlaps at least one other image.

19. The computationally-implemented thing/operation disclosure of clause 18, wherein said acquiring images from each image sensor of the more than one image sensors, wherein each acquired image at least partially overlaps at least one other image comprises:

acquiring the image from each image sensor of the more than one image sensors, wherein each acquired image at least partially overlaps at least one other image captured by an adjacent image sensor.
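
As a non-limiting sketch of clauses 17 through 19 (Python with NumPy; the fixed column overlap and the naive abutting are illustrative simplifications of real registration and blending), images acquired from adjacent sensors arranged in a line are combined into one scene:

```python
import numpy as np

def combine_row(images, overlap):
    # Naively combine images from sensors arranged in a line, where
    # each image overlaps its left neighbor by `overlap` columns; the
    # overlapping columns are kept once only.
    combined = images[0]
    for img in images[1:]:
        combined = np.concatenate([combined, img[:, overlap:]], axis=1)
    return combined

sensors = [np.full((480, 640, 3), i * 40, dtype=np.uint8) for i in range(4)]
scene = combine_row(sensors, overlap=64)  # 640 + 3 * (640 - 64) columns wide
```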

20. The computationally-implemented thing/operation disclosure of clause 1, wherein said capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

capturing the scene that includes one or more images, through use of the array of more than one image sensor, wherein a size of the scene is greater than a capacity to transmit the scene.

21. The computationally-implemented thing/operation disclosure of clause 20, wherein said capturing the scene that includes one or more images, through use of the array of more than one image sensor, wherein a size of the scene is greater than a capacity to transmit the scene comprises:

capturing the scene that includes one or more images, through use of the array of more than one image sensor, wherein an amount of image data captured exceeds a bandwidth for transmitting the image data to a remote location.

22. The computationally-implemented thing/operation disclosure of clause 21, wherein said capturing the scene that includes one or more images, through use of the array of more than one image sensor, wherein an amount of image data captured exceeds a bandwidth for transmitting the image data to a remote location comprises:

capturing the scene that includes one or more images, through use of the array of more than one image sensor, wherein the amount of image data captured exceeds the bandwidth for transmitting the image data to the remote location by a factor of ten.
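
A rough, non-limiting arithmetic check of clauses 20 through 22 (Python; the 25-sensor 1080p array and the 12.5 MB/s uplink are assumed example figures) shows how easily captured data can exceed transmission capacity:

```python
def capture_rate(width, height, sensors, fps, bytes_per_pixel=3):
    # Raw bytes per second produced by the whole image sensor array.
    return width * height * sensors * fps * bytes_per_pixel

captured = capture_rate(1920, 1080, sensors=25, fps=30)  # ~4.67 GB/s
uplink = 12.5e6                                          # ~12.5 MB/s assumed
print(captured / uplink)  # roughly 370x the available bandwidth, so the
# whole scene cannot be sent and a particular portion must be selected
```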

23. The computationally-implemented thing/operation disclosure of clause 1, wherein said capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

capturing a scene of a tourist destination that includes one or more images, through use of an array of more than one image sensor.

24. The computationally-implemented thing/operation disclosure of clause 1, wherein said capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

capturing a scene of a highway across a bridge that includes one or more images, through use of an array of more than one image sensor, wherein the one or more images include one or more images of cars crossing the highway across the bridge.

25. The computationally-implemented thing/operation disclosure of clause 1, wherein said capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

capturing a scene of a home that includes one or more images, through use of an array of more than one image sensor, wherein the one or more images include an image of an appliance in the home.

26. The computationally-implemented thing/operation disclosure of clause 1, wherein said capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

capturing the scene that includes one or more images, through use of a grouping of more than one image sensor.

27. The computationally-implemented thing/operation disclosure of clause 1, wherein said capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

capturing the scene that includes one or more images, through use of multiple image sensors whose data is directed to a common source.

28. The computationally-implemented thing/operation disclosure of clause 1, wherein said capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

capturing the scene that includes one or more images, through use of an array of more than one image sensor, the more than one image sensor including one or more of a charge-coupled device and a complementary metal-oxide-semiconductor device.

29. The computationally-implemented thing/operation disclosure of clause 1, wherein said capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

capturing the scene that includes image data and sound data, through use of an array of more than one image sensor, wherein the array of more than one image sensor is configured to capture image data and sound data.

30. The computationally-implemented thing/operation disclosure of clause 29, wherein said capturing the scene that includes image data and sound data, through use of an array of more than one image sensor, wherein the array of more than one image sensor is configured to capture image data and sound data comprises:

capturing the scene that includes image data and sound data, through use of the array of more than one image sensor, wherein the array of more than one image sensor includes image sensors configured to capture image data and microphones configured to capture sound data.

31. The computationally-implemented thing/operation disclosure of clause 1, wherein said capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

capturing the scene that includes soundwave image data, through use of the array of more than one soundwave image sensor that captures soundwave data.

32. The computationally-implemented thing/operation disclosure of clause 1, wherein said capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

capturing a scene that includes video data, through use of an array of more than one video capture device.

33. The computationally-implemented thing/operation disclosure of clause 1, wherein said selecting a particular portion of the scene that includes at least one image, wherein the selected particular portion is smaller than the scene comprises:

selecting the particular image from the scene, wherein the selected particular image represents the request for the image that is smaller than the entire scene.

34. The computationally-implemented thing/operation disclosure of clause 33, wherein said selecting the particular image from the scene, wherein the selected particular image represents the request for the image that is smaller than the entire scene comprises:

selecting the particular image from the scene, wherein the selected particular image represents a particular remote user-requested image that is smaller than the scene.

35. The computationally-implemented thing/operation disclosure of clause 1, wherein said selecting a particular portion of the scene that includes at least one image, wherein the selected particular portion is smaller than the scene comprises:

selecting the particular portion of the scene that includes a requested image, wherein the selected particular portion is smaller than the scene.

36. The computationally-implemented thing/operation disclosure of clause 35, wherein said selecting the particular portion of the scene that includes a requested image, wherein the selected particular portion is smaller than the scene comprises:

selecting the particular portion of the scene that includes an image requested by a remote operator of the array of more than one image sensors, wherein the selected particular portion is smaller than the scene.

37. The computationally-implemented thing/operation disclosure of clause 1, wherein said selecting a particular portion of the scene that includes at least one image, wherein the selected particular portion is smaller than the scene comprises:

receiving a request for a particular image; and

selecting the particular image from the scene, wherein the particular image is smaller than the scene.

38. The computationally-implemented thing/operation disclosure of clause 1, wherein said selecting a particular portion of the scene that includes at least one image, wherein the selected particular portion is smaller than the scene comprises:

receiving a first request for a first particular image and a second request for a second particular image; and

selecting the particular portion of the scene that includes the first particular image and the second particular image, wherein the particular portion of the scene is smaller than the scene.

39. The computationally-implemented thing/operation disclosure of clause 38, wherein said receiving a first request for a first particular image and a second request for a second particular image comprises:

receiving the first request for the first particular image and the second request for the second particular image, wherein the first particular image and the second particular image do not overlap.

40. The computationally-implemented thing/operation disclosure of clause 38, wherein said receiving a first request for a first particular image and a second request for a second particular image comprises:

receiving a first request for a first particular image and a second request for a second particular image, wherein the first particular image and the second particular image overlap at an overlapping portion.

41. The computationally-implemented thing/operation disclosure of clause 40, wherein said receiving a first request for a first particular image and a second request for a second particular image, wherein the first particular image and the second particular image overlap at an overlapping portion comprises:

receiving the first request for the first particular image and the second request for the second particular image, wherein the first particular image and the second particular image overlap at the overlapping portion, and wherein the overlapping portion is configured to be transmitted once only.
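
Clauses 40 and 41 can be illustrated with the following non-limiting sketch (plain Python; the rectangle encoding is an assumption), where the union of two overlapping requests counts, and hence would transmit, each overlapping pixel once only:

```python
def union_pixels(box_a, box_b):
    # Boxes are (top, left, height, width); the union covers pixels in
    # either request, so the overlapping portion appears once only.
    def pixels(t, l, h, w):
        return {(r, c) for r in range(t, t + h) for c in range(l, l + w)}
    return pixels(*box_a) | pixels(*box_b)

first = (0, 0, 4, 4)    # first requested particular image
second = (2, 2, 4, 4)   # second request, overlapping the first
merged = union_pixels(first, second)
assert len(merged) == 16 + 16 - 4   # the 2x2 overlap is transmitted once
```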

42. The computationally-implemented thing/operation disclosure of clause 1, wherein said selecting a particular portion of the scene that includes at least one image, wherein the selected particular portion is smaller than the scene comprises:

selecting the particular portion of the scene that includes at least one image, wherein the selected particular portion is smaller than the scene and contains a particular image object.

43. The computationally-implemented thing/operation disclosure of clause 42, wherein said selecting the particular portion of the scene that includes at least one image, wherein the selected particular portion is smaller than the scene and contains a particular image object comprises:

selecting the particular portion of the scene that includes at least one image, wherein the selected particular portion is smaller than the scene and contains an image object of a person.

44. The computationally-implemented thing/operation disclosure of clause 42, wherein said selecting the particular portion of the scene that includes at least one image, wherein the selected particular portion is smaller than the scene and contains a particular image object comprises:

selecting the particular portion of the scene that includes at least one image, wherein the selected particular portion is smaller than the scene and contains an image object of a car.

45. The computationally-implemented thing/operation disclosure of clause 1, wherein said selecting a particular portion of the scene that includes at least one image, wherein the selected particular portion is smaller than the scene comprises:

selecting the particular portion of the scene that includes at least one image, wherein a size of the particular portion of the scene is at least partially based on a characteristic of a requesting thing/operation.

46. The computationally-implemented thing/operation disclosure of clause 45, wherein said selecting the particular portion of the scene that includes at least one image, wherein a size of the particular portion of the scene is at least partially based on a characteristic of a requesting thing/operation disclosure comprises:

selecting the particular portion of the scene that includes at least one image, wherein the size of the particular portion of the scene is at least partially based on a screen resolution of the requesting device.

47. The computationally-implemented thing/operation disclosure of clause 45, wherein said selecting the particular portion of the scene that includes at least one image, wherein a size of the particular portion of the scene is at least partially based on a characteristic of a requesting thing/operation disclosure comprises:

selecting the particular portion of the scene that includes at least one image, wherein the size of the particular portion of the scene is at least partially based on a combined size of screens of at least one requesting thing/operation.
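
A non-limiting sketch of clauses 45 through 47 (plain Python; the parameter names are illustrative assumptions) clamps the transmitted portion to what the requesting device's screen can actually display:

```python
def portion_size_for_device(screen_w, screen_h, scene_w, scene_h):
    # Pixels beyond the requesting device's screen resolution need not
    # be transmitted, so the portion is clamped to the screen size.
    return min(screen_w, scene_w), min(screen_h, scene_h)

# A 1280x720 phone viewing a 10000x10000 scene is served at most a
# 1280x720 portion of that scene.
print(portion_size_for_device(1280, 720, 10000, 10000))
```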

48. The computationally-implemented thing/operation disclosure of clause 1, wherein said transmitting only the selected particular portion from the scene to a remote location comprises:

transmitting only the selected particular portion to a remote server.

49. The computationally-implemented thing/operation disclosure of clause 48, wherein said transmitting only the selected particular portion to a remote server comprises:

transmitting only the selected particular portion to the remote server that is configured to receive one or more requests for one or more particular images from the scene.

50. The computationally-implemented thing/operation disclosure of clause 49, wherein said transmitting only the selected particular portion to the remote server that is configured to receive one or more requests for one or more particular images from the scene comprises:

transmitting only the selected particular portion to the remote server that is configured to receive multiple requests from discrete users for multiple particular images from the scene.

51. The computationally-implemented thing/operation disclosure of clause 48, wherein said transmitting only the selected particular portion to a remote server comprises:

transmitting only the selected particular portion to the remote server that requested the selected particular portion from the image.

52. The computationally-implemented thing/operation disclosure of clause 1, wherein said transmitting only the selected particular portion from the scene to a remote location comprises:

transmitting only the selected particular portion from the scene at a particular resolution.

53. The computationally-implemented thing/operation disclosure of clause 52, wherein said transmitting only the selected particular portion from the scene at a particular resolution comprises:

determining an available bandwidth for transmission to the remote location; and

transmitting only the selected particular portion from the scene at the particular resolution that is at least partially based on the determined available bandwidth.
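
Clause 53 can be pictured with the following non-limiting sketch (plain Python; the resolution ladder and the raw-data-rate model are assumed simplifications), which picks the highest resolution whose data rate fits the determined available bandwidth:

```python
def pick_resolution(available_bytes_per_s, fps, bytes_per_pixel=3,
                    ladder=((1920, 1080), (1280, 720), (640, 480))):
    # Walk an assumed resolution ladder from highest to lowest and
    # return the first rung whose raw data rate fits the bandwidth.
    for width, height in ladder:
        if width * height * fps * bytes_per_pixel <= available_bytes_per_s:
            return width, height
    return ladder[-1]

# At ~50 MB/s available, raw 1080p30 (~187 MB/s) and 720p30 (~83 MB/s)
# do not fit, so 640x480 (~28 MB/s) is chosen.
print(pick_resolution(50e6, fps=30))
```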

54. The computationally-implemented thing/operation disclosure of clause 52, wherein said transmitting only the selected particular portion from the scene at a particular resolution comprises:

transmitting only the selected particular portion from the scene at the particular resolution, wherein the particular resolution is less than a resolution at which the scene was captured.

55. The computationally-implemented thing/operation disclosure of clause 52, wherein said transmitting only the selected particular portion from the scene at a particular resolution comprises:

transmitting only the selected particular portion from the scene at the particular resolution, wherein the particular resolution is less than a resolution at which the selected particular portion was captured.

56. The computationally-implemented thing/operation disclosure of clause 1, wherein said transmitting only the selected particular portion from the scene to a remote location comprises:

transmitting a first segment of the selected particular portion at a first resolution, to the remote location; and

transmitting a second segment of the selected particular portion at a second resolution that is higher than the first resolution, to the remote location.
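
Clause 56 corresponds to a foveated, two-tier transmission; the non-limiting sketch below (Python with NumPy; the 4x decimation and the whole-portion low tier are illustrative simplifications) encodes the selected inner segment at full resolution and its surroundings at reduced resolution:

```python
import numpy as np

def downsample(img, factor):
    return img[::factor, ::factor]  # crude decimation, for illustration

def encode_two_tiers(portion, inner):
    # `inner` is (top, left, height, width) within the portion. The
    # second segment (inner) keeps full resolution; the first segment
    # (here the whole portion, which surrounds the inner area) is sent
    # at a quarter of the linear resolution.
    t, l, h, w = inner
    high = portion[t:t + h, l:l + w].copy()
    low = downsample(portion, 4)
    return low, high, inner

frame = np.zeros((720, 1280, 3), dtype=np.uint8)
low, high, box = encode_two_tiers(frame, (200, 400, 240, 320))
```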

57. The computationally-implemented thing/operation disclosure of clause 56, wherein said transmitting a first segment of the selected particular portion at a first resolution, to the remote location comprises:

transmitting the first segment of the selected particular portion at the first resolution, wherein the first segment of the selected particular portion surrounds the second segment of the selected particular portion.

58. The computationally-implemented thing/operation disclosure of clause 56, wherein said transmitting a first segment of the selected particular portion at a first resolution, to the remote location comprises:

transmitting the first segment of the selected particular portion at the first resolution, wherein the first segment of the selected particular portion borders the second segment of the selected particular portion.

59. The computationally-implemented thing/operation disclosure of clause 56, wherein said transmitting a first segment of the selected particular portion at a first resolution, to the remote location comprises:

transmitting the first segment of the selected particular portion at the first resolution, wherein the first segment of the selected particular portion is determined at least in part by a content of the selected particular portion.

60. The computationally-implemented thing/operation disclosure of clause 59, wherein said transmitting the first segment of the selected particular portion at the first resolution, wherein the first segment of the selected particular portion is determined at least in part by a content of the selected particular portion comprises:

transmitting the first segment of the selected particular portion at the first resolution, wherein the first segment of the selected particular portion is an area that does not contain an item of interest.

61. The computationally-implemented thing/operation disclosure of clause 60, wherein said transmitting the first segment of the selected particular portion at the first resolution, wherein the first segment of the selected particular portion is an area that does not contain an item of interest comprises:

transmitting the first segment of the selected particular portion at the first resolution, wherein the first segment of the selected particular portion is an area that does not contain a selected person of interest.

62. The computationally-implemented thing/operation disclosure of clause 60, wherein said transmitting the first segment of the selected particular portion at the first resolution, wherein the first segment of the selected particular portion is an area that does not contain an item of interest comprises:

transmitting the first segment of the selected particular portion at the first resolution, wherein the first segment of the selected particular portion is an area that does not contain a selected object designated for tracking.

63. The computationally-implemented thing/operation disclosure of clause 56, wherein said transmitting a second segment of the selected particular portion at a second resolution that is higher than the first resolution, to the remote location comprises:

transmitting the second segment of the selected particular portion at the second resolution that is higher than the first resolution, wherein the second segment of the selected particular portion represents an area selected by a user, and the first segment of the selected particular portion represents a border area that borders the area selected by the user.

64. The computationally-implemented thing/operation disclosure of clause 56, wherein said transmitting a second segment of the selected particular portion at a second resolution that is higher than the first resolution, to the remote location comprises:

transmitting the second segment of the selected particular portion at the second resolution that is higher than the first resolution, wherein the second segment of the selected particular portion is surrounded by the first segment of the selected particular portion.

65. The computationally-implemented thing/operation disclosure of clause 56, wherein said transmitting a second segment of the selected particular portion at a second resolution that is higher than the first resolution, to the remote location comprises:

transmitting the second segment of the selected particular portion at the second resolution that is higher than the first resolution, wherein the second segment of the selected particular portion is determined to contain a selected item of interest.

66. The computationally-implemented thing/operation disclosure of clause 65, wherein said transmitting the second segment of the selected particular portion at the second resolution that is higher than the first resolution, wherein the second segment of the selected particular portion is determined to contain a selected item of interest comprises:

transmitting the second segment of the selected particular portion at the second resolution that is higher than the first resolution, wherein the second segment of the selected particular portion is determined to contain a selected football player.

67. The computationally-implemented thing/operation disclosure of clause 65, wherein said transmitting the second segment of the selected particular portion at the second resolution that is higher than the first resolution, wherein the second segment of the selected particular portion is determined to contain a selected item of interest comprises:

transmitting the second segment of the selected particular portion at the second resolution that is higher than the first resolution, wherein the second segment of the selected particular portion is determined to contain a selected landmark.

68. The computationally-implemented thing/operation disclosure of clause 65, wherein said transmitting the second segment of the selected particular portion at the second resolution that is higher than the first resolution, wherein the second segment of the selected particular portion is determined to contain a selected item of interest comprises:

transmitting the second segment of the selected particular portion at the second resolution that is higher than the first resolution, wherein the second segment of the selected particular portion is determined to contain a selected animal for observation.

69. The computationally-implemented thing/operation disclosure of clause 65, wherein said transmitting the second segment of the selected particular portion at the second resolution that is higher than the first resolution, wherein the second segment of the selected particular portion is determined to contain a selected item of interest comprises:

transmitting the second segment of the selected particular portion at the second resolution that is higher than the first resolution, wherein the second segment of the selected particular portion is determined to contain a selected object in a dwelling.

70. The computationally-implemented thing/operation disclosure of clause 1, wherein said transmitting only the selected particular portion from the scene to a remote location comprises:

transmitting only pixels associated with the selected particular portion to the remote location.

71. The computationally-implemented thing/operation disclosure of clause 1, wherein said transmitting only the selected particular portion from the scene to a remote location comprises:

transmitting only the selected particular portion from the scene to a particular device that requested the selected particular portion.

72. The computationally-implemented thing/operation disclosure of clause 71, wherein said transmitting only the selected particular portion from the scene to a particular thing/operation disclosure that requested the selected particular portion comprises:

transmitting only the selected particular portion from the scene to a particular device operated by a user that requested the selected particular portion.

73. The computationally-implemented thing/operation disclosure of clause 71, wherein said transmitting only the selected particular portion from the scene to a particular thing/operation disclosure that requested the selected particular portion comprises:

transmitting only the selected particular portion from the scene to a particular device that requested the selected particular portion through selection of the particular portion from the scene.

74. The computationally-implemented thing/operation disclosure of clause 1, wherein said de-emphasizing pixels from the scene that are not part of the selected particular portion of the scene comprises:

deleting pixels from the scene that are not part of the selected particular portion of the scene.

75. The computationally-implemented thing/operation disclosure of clause 1, wherein said de-emphasizing pixels from the scene that are not part of the selected particular portion of the scene comprises:

indicating that pixels from the scene that are not part of the selected particular portion of the scene are not transmitted with the selected particular portion.

76. The computationally-implemented thing/operation disclosure of clause 75, wherein said indicating that pixels from the scene that are not part of the selected particular portion of the scene are not transmitted with the selected particular portion comprises:

appending data to pixels from the scene that are not part of the selected portion of the scene, said appended data configured to indicate non-transmission of the pixels to which data was appended.

77. The computationally-implemented thing/operation disclosure of clause 1, wherein said de-emphasizing pixels from the scene that are not part of the selected particular portion of the scene comprises:

discarding pixels from the scene that are not part of the selected particular portion of the scene.

78. The computationally-implemented thing/operation disclosure of clause 1, wherein said de-emphasizing pixels from the scene that are not part of the selected particular portion of the scene comprises:

denying retention of pixels from the scene that are not part of the selected particular portion of the scene.

79. The computationally-implemented thing/operation disclosure of clause 1, wherein said de-emphasizing pixels from the scene that are not part of the selected particular portion of the scene comprises:

retaining a subset of pixels from the scene that are not part of the selected particular portion of the scene.

80. The computationally-implemented thing/operation disclosure of clause 79, wherein said retaining a subset of pixels from the scene that are not part of the selected particular portion of the scene comprises:

retaining ten percent of the pixels from the scene that are not part of the selected particular portion of the scene.

81. The computationally-implemented thing/operation disclosure of clause 79, wherein said retaining a subset of pixels from the scene that are not part of the selected particular portion of the scene comprises:

retaining pixels from the scene that have been identified as part of a targeted object that are not part of the selected particular portion of the scene.

82. The computationally-implemented thing/operation disclosure of clause 81, wherein said retaining pixels from the scene that have been identified as part of a targeted object that are not part of the selected particular portion of the scene comprises:

retaining pixels from the scene that have been identified as a scenic landmark of interest that is not part of the selected particular portion of the scene.

83. The computationally-implemented thing/operation disclosure of clause 79, wherein said retaining a subset of pixels from the scene that are not part of the selected particular portion of the scene comprises:

retaining pixels from the scene that have been identified as part of a targeted object through automated pattern recognition that are not part of the selected particular portion of the scene.
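
Purely as an illustrative sketch of clauses 79 through 83, retention of a subset of non-selected pixels might combine the selected region with a recognizer's mask. The masks, the zeroing policy, and the brightness-threshold stand-in for automated pattern recognition below are assumptions for illustration only.

import numpy as np

def retain_subset(scene, selected_mask, target_mask):
    # Zero out pixels that are neither in the selected portion nor part of
    # a recognized targeted object; retained pixels survive de-emphasis.
    keep = selected_mask | target_mask
    return np.where(keep, scene, 0)

scene = np.random.randint(0, 256, (4, 6))
selected = np.zeros((4, 6), dtype=bool)
selected[1:3, 1:4] = True
target = scene > 240        # stand-in for automated pattern recognition
print(retain_subset(scene, selected, target))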

84. The computationally-implemented method of clause 1, wherein said de-emphasizing pixels from the scene that are not part of the selected particular portion of the scene comprises:

storing pixels from the scene that are not part of the selected particular portion of the scene in a separate storage.

85. The computationally-implemented method of clause 84, wherein said storing pixels from the scene that are not part of the selected particular portion of the scene in a separate storage comprises:

storing pixels from the scene that are not part of the selected particular portion of the scene in a separate storage that is local to the array of more than one image sensor.

86. The computationally-implemented method of clause 84, wherein said storing pixels from the scene that are not part of the selected particular portion of the scene in a separate storage comprises:

storing pixels from the scene that are not part of the selected particular portion of the scene in a separate storage, wherein pixels stored in the separate storage are configured to be transmitted to the remote location separately from the selected particular portion of the scene.

87. The computationally-implemented method of clause 86, wherein said storing pixels from the scene that are not part of the selected particular portion of the scene in a separate storage, wherein pixels stored in the separate storage are configured to be transmitted to the remote location separately from the selected particular portion of the scene comprises:

storing pixels from the scene that are not part of the selected particular portion of the scene in a separate storage, wherein pixels stored in the separate storage are configured to be transmitted to the remote location at a time when there are no selected particular portions to be transmitted.

88. The computationally-implemented method of clause 86, wherein said storing pixels from the scene that are not part of the selected particular portion of the scene in a separate storage, wherein pixels stored in the separate storage are configured to be transmitted to the remote location separately from the selected particular portion of the scene comprises:

storing pixels from the scene that are not part of the selected particular portion of the scene in a separate storage, wherein pixels stored in the separate storage are configured to be separately transmitted to the remote location based on assigning a lower priority to the stored pixels.
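
As one hypothetical reading of clauses 84 through 88, pixels outside the selected portion might be held in a separate, lower-priority store so that selected portions always transmit first. The priority-queue sketch below is illustrative only; none of its names come from the disclosure.

import itertools
import queue

outbox = queue.PriorityQueue()
_seq = itertools.count()    # tie-breaker so equal priorities stay ordered
HIGH, LOW = 0, 1            # lower number transmits sooner

def store_selected_portion(payload):
    outbox.put((HIGH, next(_seq), payload))

def store_deemphasized_pixels(payload):
    # Separate storage; assigned a lower priority, so these pixels are
    # transmitted only when no selected portions are waiting.
    outbox.put((LOW, next(_seq), payload))

store_selected_portion(b"portion-1")
store_deemphasized_pixels(b"background")
store_selected_portion(b"portion-2")
while not outbox.empty():
    print(outbox.get()[2])   # portion-1, portion-2, then background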

89. A computationally-implemented system, comprising

means for capturing a scene that includes one or more images, through use of an array of more than one image sensor;

means for selecting a particular portion of the scene that includes at least one image, wherein the selected particular portion is smaller than the scene;

means for transmitting only the selected particular portion from the scene to a remote location; and

means for de-emphasizing pixels from the scene that are not part of the selected particular portion of the scene.

90. The computationally-implemented system of clause 89, wherein said means for capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

means for capturing the scene that includes the one or more images, through use of an array of image sensors.

91. The computationally-implemented system of clause 89, wherein said means for capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

means for capturing the scene that includes the one or more images, through use of two image sensors arranged side by side and angled toward each other.

92. The computationally-implemented system of clause 89, wherein said means for capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

means for capturing the scene that includes the one or more images, through use of the array of image sensors arranged in a grid.

93. The computationally-implemented system of clause 89, wherein said means for capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

means for capturing the scene that includes the one or more images, through use of the array of image sensors arranged in a line.

94. The computationally-implemented system of clause 93, wherein said means for capturing the scene that includes the one or more images, through use of the array of image sensors arranged in a line comprises:

means for capturing the scene that includes the one or more images, through use of the array of image sensors arranged in a line such that a field of view is greater than 120 degrees.

95. The computationally-implemented system of clause 93, wherein said means for capturing the scene that includes the one or more images, through use of the array of image sensors arranged in a line comprises:

means for capturing the scene that includes the one or more images, through use of the array of image sensors arranged in a line such that a field of view is 180 degrees.
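
The field-of-view figures in clauses 94 and 95 follow from simple angular arithmetic: sensors arranged in a line each contribute their own field of view minus the overlap shared with a neighbor. The sketch below illustrates this with assumed per-sensor values (60 degrees with 10 to 20 degrees of overlap) that do not appear in the disclosure.

def combined_fov_degrees(n_sensors, per_sensor_fov, overlap):
    # Sensors arranged in a line, each angled so that neighbors share
    # `overlap` degrees of coverage; the array's total field of view
    # grows with each sensor added.
    return n_sensors * per_sensor_fov - (n_sensors - 1) * overlap

print(combined_fov_degrees(4, 60, 20))   # 180 degrees, the clause 95 case
print(combined_fov_degrees(3, 60, 10))   # 160 degrees, i.e. greater than 120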

96. The computationally-implemented system of clause 89, wherein said means for capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

means for capturing the scene that includes one or more images, through use of an array of more than one stationary image sensor.

97. The computationally-implemented system of clause 89, wherein said means for capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

means for capturing the scene that includes one or more images, through use of an array of more than one static image sensor.

98. The computationally-implemented system of clause 97, wherein said means for capturing the scene that includes one or more images, through use of an array of more than one static image sensor comprises:

means for capturing the scene that includes one or more images, through use of the array of more than one image sensor that has a fixed focal length and a fixed field of view.

99. The computationally-implemented system of clause 89, wherein said means for capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

means for capturing the scene that includes the one or more images, through use of an array of more than one image sensor mounted in a stationary location.

100. The computationally-implemented system of clause 89, wherein said means for capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

means for capturing the scene that includes one or more images, through use of an array of image sensors mounted on a movable platform.

101. The computationally-implemented system of clause 100, wherein said means for capturing the scene that includes one or more images, through use of an array of image sensors mounted on a movable platform comprises:

means for capturing the scene that includes one or more images, through use of an array of image sensors mounted on an unmanned aerial vehicle.

102. The computationally-implemented system of clause 89, wherein said means for capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

means for capturing the scene that includes one or more images, through use of an array of image sensors mounted on a satellite.

103. The computationally-implemented system of clause 89, wherein said means for capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

means for capturing the scene that includes one or more images, through use of an array of more than one image sensor, wherein the more than one image sensors each capture an image that represents a portion of the scene.

104. The computationally-implemented system of clause 103, wherein said means for capturing the scene that includes one or more images, through use of an array of more than one image sensor, wherein the more than one image sensors each capture an image that represents a portion of the scene comprises:

means for capturing the scene that includes one or more images, through use of the array of more than one image sensor, wherein the more than one image sensors each capture an image that represents a portion of the scene, and wherein the images are stitched together.

105. The computationally-implemented system of clause 89, wherein said means for capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

means for acquiring an image from each image sensor of the more than one image sensors; and

means for combining the acquired images from the more than one image sensors into the scene.

106. The computationally-implemented system of clause 105, wherein said means for acquiring an image from each image sensor of the more than one image sensors comprises:

means for acquiring images from each image sensor of the more than one image sensors, wherein each acquired image at least partially overlaps at least one other image.

107. The computationally-implemented system of clause 106, wherein said means for acquiring images from each image sensor of the more than one image sensors, wherein each acquired image at least partially overlaps at least one other image comprises:

means for acquiring the image from each image sensor of the more than one image sensors, wherein each acquired image at least partially overlaps at least one other image captured by an adjacent image sensor.
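
Clauses 103 through 107 describe overlapping per-sensor images being stitched into one scene. A minimal sketch, assuming a one-dimensional sensor line and a fixed column overlap (both illustrative choices not drawn from the disclosure), might drop the duplicated columns when joining adjacent images:

import numpy as np

def stitch_line(images, overlap):
    # Each image partially overlaps the image captured by the adjacent
    # sensor by `overlap` columns; drop the duplicated columns when joining.
    scene = images[0]
    for img in images[1:]:
        scene = np.hstack([scene, img[:, overlap:]])
    return scene

left = np.arange(12).reshape(3, 4)
right = left + 100
print(stitch_line([left, right], overlap=2).shape)   # (3, 6)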

108. The computationally-implemented system of clause 89, wherein said means for capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

means for capturing the scene that includes one or more images, through use of the array of more than one image sensor, wherein a size of the scene is greater than a capacity to transmit the scene.

109. The computationally-implemented system of clause 108, wherein said means for capturing the scene that includes one or more images, through use of the array of more than one image sensor, wherein a size of the scene is greater than a capacity to transmit the scene comprises:

means for capturing the scene that includes one or more images, through use of the array of more than one image sensor, wherein an amount of image data captured exceeds a bandwidth for transmitting the image data to a remote location.

110. The computationally-implemented system of clause 109, wherein said means for capturing the scene that includes one or more images, through use of the array of more than one image sensor, wherein an amount of image data captured exceeds a bandwidth for transmitting the image data to a remote location comprises:

means for capturing the scene that includes one or more images, through use of the array of more than one image sensor, wherein the amount of image data captured exceeds the bandwidth for transmitting the image data to the remote location by a factor of ten.
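
The factor-of-ten relationship in clause 110 is easy to reproduce with illustrative numbers, none of which are taken from the disclosure: for example, a 5x5 sensor grid capturing 10-megapixel frames at 30 frames per second against an 18 Gbit/s link.

# Illustrative numbers only: a 5x5 grid of 10-megapixel sensors capturing
# 3 bytes per pixel at 30 frames per second, against an 18 Gbit/s link.
sensors, pixels, bytes_per_pixel, fps = 25, 10_000_000, 3, 30
captured_bits_per_s = sensors * pixels * bytes_per_pixel * fps * 8
link_bits_per_s = 18e9
print(captured_bits_per_s / link_bits_per_s)   # 10.0: exceeds by a factor of ten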

111. The computationally-implemented system of clause 89, wherein said means for capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

means for capturing a scene of a tourist destination that includes one or more images, through use of an array of more than one image sensor.

112. The computationally-implemented system of clause 89, wherein said means for capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

means for capturing a scene of a highway across a bridge that includes one or more images, through use of an array of more than one image sensor, wherein the one or more images include one or more images of cars crossing the highway across the bridge.

113. The computationally-implemented system of clause 89, wherein said means for capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

means for capturing a scene of a home that includes one or more images, through use of an array of more than one image sensor, wherein the one or more images include an image of an appliance in the home.

114. The computationally-implemented system of clause 89, wherein said means for capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

means for capturing the scene that includes one or more images, through use of a grouping of more than one image sensor.

115. The computationally-implemented system of clause 89, wherein said means for capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

means for capturing the scene that includes one or more images, through use of multiple image sensors whose data is directed to a common source.

116. The computationally-implemented system of clause 89, wherein said means for capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

means for capturing the scene that includes one or more images, through use of an array of more than one image sensor, the more than one image sensor including one or more of a charge-coupled device and a complementary metal-oxide-semiconductor device.

117. The computationally-implemented system of clause 89, wherein said means for capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

means for capturing the scene that includes image data and sound data, through use of an array of more than one image sensor, wherein the array of more than one image sensor is configured to capture image data and sound data.

118. The computationally-implemented system of clause 117, wherein said means for capturing the scene that includes image data and sound data, through use of an array of more than one image sensor, wherein the array of more than one image sensor is configured to capture image data and sound data comprises:

means for capturing the scene that includes image data and sound data, through use of the array of more than one image sensor, wherein the array of more than one image sensor includes image sensors configured to capture image data and microphones configured to capture sound data.

119. The computationally-implemented system of clause 89, wherein said means for capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

means for capturing the scene that includes soundwave image data, through use of the array of more than one soundwave image sensor that captures soundwave data.

120. The computationally-implemented system of clause 89, wherein said means for capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

means for capturing a scene that includes video data, through use of an array of more than one video capture device.

121. The computationally-implemented system of clause 89, wherein said means for selecting a particular portion of the scene that includes at least one image, wherein the selected particular portion is smaller than the scene comprises:

means for selecting the particular image from the scene, wherein the selected particular image represents the request for the image that is smaller than the entire scene.

122. The computationally-implemented system of clause 121, wherein said means for selecting the particular image from the scene, wherein the selected particular image represents the request for the image that is smaller than the entire scene comprises:

means for selecting the particular image from the scene, wherein the selected particular image represents a particular remote user-requested image that is smaller than the scene.

123. The computationally-implemented system of clause 89, wherein said means for selecting a particular portion of the scene that includes at least one image, wherein the selected particular portion is smaller than the scene comprises:

means for selecting the particular portion of the scene that includes a requested image, wherein the selected particular portion is smaller than the scene.

124. The computationally-implemented system of clause 123, wherein said means for selecting the particular portion of the scene that includes a requested image, wherein the selected particular portion is smaller than the scene comprises:

means for selecting the particular portion of the scene that includes an image requested by a remote operator of the array of more than one image sensors, wherein the selected particular portion is smaller than the scene.

125. The computationally-implemented system of clause 89, wherein said means for selecting a particular portion of the scene that includes at least one image, wherein the selected particular portion is smaller than the scene comprises:

means for receiving a request for a particular image; and

means for selecting the particular image from the scene, wherein the particular image is smaller than the scene.

126. The computationally-implemented system of clause 89, wherein said means for selecting a particular portion of the scene that includes at least one image, wherein the selected particular portion is smaller than the scene comprises:

means for receiving a first request for a first particular image and a second request for a second particular image; and

means for selecting the particular portion of the scene that includes the first particular image and the second particular image, wherein the particular portion of the scene is smaller than the scene.

127. The computationally-implemented system of clause 126, wherein said means for receiving a first request for a first particular image and a second request for a second particular image comprises:

means for receiving the first request for the first particular image and the second request for the second particular image, wherein the first particular image and the second particular image do not overlap.

128. The computationally-implemented system of clause 126, wherein said means for receiving a first request for a first particular image and a second request for a second particular image comprises:

means for receiving a first request for a first particular image and a second request for a second particular image, wherein the first particular image and the second particular image overlap at an overlapping portion.

129. The computationally-implemented system of clause 128, wherein said means for receiving a first request for a first particular image and a second request for a second particular image, wherein the first particular image and the second particular image overlap at an overlapping portion comprises:

means for receiving the first request for the first particular image and the second request for the second particular image, wherein the first particular image and the second particular image overlap at the overlapping portion, and wherein the overlapping portion is configured to be transmitted once only.
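
Transmitting the overlapping portion once only, per clause 129, amounts to taking the union of the two requested regions. A toy sketch with two assumed 4x4-pixel rectangles overlapping in a 2x2 region (the coordinates are illustrative only):

def rect_pixels(rect):
    # Enumerate the pixel coordinates covered by an (x0, y0, x1, y1) rectangle.
    x0, y0, x1, y1 = rect
    return {(x, y) for x in range(x0, x1) for y in range(y0, y1)}

first = rect_pixels((0, 0, 4, 4))
second = rect_pixels((2, 2, 6, 6))
to_transmit = first | second           # set union: the overlap appears once
print(len(first) + len(second), len(to_transmit))   # 32 versus 28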

130. The computationally-implemented system of clause 89, wherein said means for selecting a particular portion of the scene that includes at least one image, wherein the selected particular portion is smaller than the scene comprises:

means for selecting the particular portion of the scene that includes at least one image, wherein the selected particular portion is smaller than the scene and contains a particular image object.

131. The computationally-implemented system of clause 130, wherein said means for selecting the particular portion of the scene that includes at least one image, wherein the selected particular portion is smaller than the scene and contains a particular image object comprises:

means for selecting the particular portion of the scene that includes at least one image, wherein the selected particular portion is smaller than the scene and contains an image object of a person.

132. The computationally-implemented system of clause 130, wherein said means for selecting the particular portion of the scene that includes at least one image, wherein the selected particular portion is smaller than the scene and contains a particular image object comprises:

means for selecting the particular portion of the scene that includes at least one image, wherein the selected particular portion is smaller than the scene and contains an image object of a car.

133. The computationally-implemented system of clause 89, wherein said means for selecting a particular portion of the scene that includes at least one image, wherein the selected particular portion is smaller than the scene comprises:

means for selecting the particular portion of the scene that includes at least one image, wherein a size of the particular portion of the scene is at least partially based on a characteristic of a requesting device.

134. The computationally-implemented system of clause 133, wherein said means for selecting the particular portion of the scene that includes at least one image, wherein a size of the particular portion of the scene is at least partially based on a characteristic of a requesting device comprises:

means for selecting the particular portion of the scene that includes at least one image, wherein the size of the particular portion of the scene is at least partially based on a screen resolution of the requesting device.

135. The computationally-implemented system of clause 133, wherein said means for selecting the particular portion of the scene that includes at least one image, wherein a size of the particular portion of the scene is at least partially based on a characteristic of a requesting device comprises:

means for selecting the particular portion of the scene that includes at least one image, wherein the size of the particular portion of the scene is at least partially based on a combined size of screens of at least one requesting device.
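
Clauses 133 through 135 tie the portion's size to screen characteristics of the requesting devices. One plausible, but entirely assumed, policy is to size the portion to the largest width and height among the reported screens:

def portion_size(screens):
    # screens: (width, height) pairs reported by the requesting devices.
    # Illustrative policy: size the portion to the largest width and
    # height among the requesting screens, so every device is covered.
    return max(w for w, _ in screens), max(h for _, h in screens)

print(portion_size([(1920, 1080)]))                   # a single requester
print(portion_size([(1920, 1080), (3840, 2160)]))     # combined screens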

136. The computationally-implemented system of clause 89, wherein said means for transmitting only the selected particular portion from the scene to a remote location comprises:

means for transmitting only the selected particular portion to a remote server.

137. The computationally-implemented system of clause 136, wherein said means for transmitting only the selected particular portion to a remote server comprises:

means for transmitting only the selected particular portion to the remote server that is configured to receive one or more requests for one or more particular images from the scene.

138. The computationally-implemented system of clause 137, wherein said means for transmitting only the selected particular portion to the remote server that is configured to receive one or more requests for one or more particular images from the scene comprises:

means for transmitting only the selected particular portion to the remote server that is configured to receive multiple requests from discrete users for multiple particular images from the scene.

139. The computationally-implemented system of clause 136, wherein said means for transmitting only the selected particular portion to a remote server comprises:

means for transmitting only the selected particular portion to the remote server that requested the selected particular portion from the image.

140. The computationally-implemented system of clause 89, wherein said means for transmitting only the selected particular portion from the scene to a remote location comprises:

means for transmitting only the selected particular portion from the scene at a particular resolution.

141. The computationally-implemented system of clause 140, wherein said means for transmitting only the selected particular portion from the scene at a particular resolution comprises:

means for determining an available bandwidth for transmission to the remote location; and

means for transmitting only the selected particular portion from the scene at the particular resolution that is at least partially based on the determined available bandwidth.
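
Clause 141 determines the transmission resolution from measured bandwidth. A minimal sketch, assuming uncompressed pixels and a uniform linear downscale (both simplifying assumptions not stated in the disclosure):

def downscale_factor(portion_pixels, available_bps, bits_per_pixel, fps):
    # Uniform linear downscale so that the portion's uncompressed data rate
    # fits within the determined available bandwidth (compression ignored).
    budget_pixels = available_bps / (bits_per_pixel * fps)
    return min(1.0, (budget_pixels / portion_pixels) ** 0.5)

# A 4000x3000 portion at 24 bits/pixel and 30 fps over a 100 Mbit/s link:
print(downscale_factor(4000 * 3000, 100e6, 24, 30))   # ~0.11 linear scale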

142. The computationally-implemented system of clause 140, wherein said means for transmitting only the selected particular portion from the scene at a particular resolution comprises:

means for transmitting only the selected particular portion from the scene at the particular resolution, wherein the particular resolution is less than a resolution at which the scene was captured.

143. The computationally-implemented system of clause 140, wherein said means for transmitting only the selected particular portion from the scene at a particular resolution comprises:

means for transmitting only the selected particular portion from the scene at the particular resolution, wherein the particular resolution is less than a resolution at which the selected particular portion was captured.

144. The computationally-implemented system of clause 89, wherein said means for transmitting only the selected particular portion from the scene to a remote location comprises:

means for transmitting a first segment of the selected particular portion at a first resolution, to the remote location; and

means for transmitting a second segment of the selected particular portion at a second resolution that is higher than the first resolution, to the remote location.

145. The computationally-implemented system of clause 144, wherein said means for transmitting a first segment of the selected particular portion at a first resolution, to the remote location comprises:

means for transmitting the first segment of the selected particular portion at the first resolution, wherein the first segment of the selected particular portion surrounds the second segment of the selected particular portion.

146. The computationally-implemented system of clause 144, wherein said means for transmitting a first segment of the selected particular portion at a first resolution, to the remote location comprises:

means for transmitting the first segment of the selected particular portion at the first resolution, wherein the first segment of the selected particular portion borders the second segment of the selected particular portion.

147. The computationally-implemented system of clause 144, wherein said means for transmitting a first segment of the selected particular portion at a first resolution, to the remote location comprises:

means for transmitting the first segment of the selected particular portion at the first resolution, wherein the first segment of the selected particular portion is determined at least in part by a content of the selected particular portion.

148. The computationally-implemented system of clause 147, wherein said means for transmitting the first segment of the selected particular portion at the first resolution, wherein the first segment of the selected particular portion is determined at least in part by a content of the selected particular portion comprises:

means for transmitting the first segment of the selected particular portion at the first resolution, wherein the first segment of the selected particular portion is an area that does not contain an item of interest.

149. The computationally-implemented system of clause 148, wherein said means for transmitting the first segment of the selected particular portion at the first resolution, wherein the first segment of the selected particular portion is an area that does not contain an item of interest comprises:

means for transmitting the first segment of the selected particular portion at the first resolution, wherein the first segment of the selected particular portion is an area that does not contain a selected person of interest.

150. The computationally-implemented system of clause 148, wherein said means for transmitting the first segment of the selected particular portion at the first resolution, wherein the first segment of the selected particular portion is an area that does not contain an item of interest comprises:

means for transmitting the first segment of the selected particular portion at the first resolution, wherein the first segment of the selected particular portion is an area that does not contain a selected object designated for tracking.

151. The computationally-implemented system of clause 144, wherein said means for transmitting a second segment of the selected particular portion at a second resolution that is higher than the first resolution, to the remote location comprises:

means for transmitting the second segment of the selected particular portion at the second resolution that is higher than the first resolution, wherein the second segment of the selected particular portion represents an area selected by a user, and the first segment of the selected particular portion represents a border area that borders the area selected by the user.

152. The computationally-implemented system of clause 144, wherein said means for transmitting a second segment of the selected particular portion at a second resolution that is higher than the first resolution, to the remote location comprises:

means for transmitting the second segment of the selected particular portion at the second resolution that is higher than the first resolution, wherein the second segment of the selected particular portion is surrounded by the first segment of the selected particular portion.
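
Clauses 144 through 152 describe a two-tier transmission in which a user-selected inner segment travels at full resolution while the surrounding border travels at a coarser one. The sketch below illustrates one assumed realization, with simple subsampling standing in for whatever encoding the system might actually use:

import numpy as np

def two_tier_segments(portion, inner, coarse_factor=4):
    # Second segment: the user-selected inner rectangle at full resolution.
    # First segment: the surrounding border, subsampled to a lower resolution.
    x0, y0, x1, y1 = inner
    second = portion[y0:y1, x0:x1].copy()
    border = portion.copy()
    border[y0:y1, x0:x1] = 0                       # exclude the inner region
    first = border[::coarse_factor, ::coarse_factor]
    return first, second

portion = np.random.randint(0, 256, (64, 64))
first, second = two_tier_segments(portion, inner=(16, 16, 48, 48))
print(first.shape, second.shape)                   # (16, 16) (32, 32)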

153. The computationally-implemented system of clause 144, wherein said means for transmitting a second segment of the selected particular portion at a second resolution that is higher than the first resolution, to the remote location comprises:

means for transmitting the second segment of the selected particular portion at the second resolution that is higher than the first resolution, wherein the second segment of the selected particular portion is determined to contain a selected item of interest.

154. The computationally-implemented system of clause 153, wherein said means for transmitting the second segment of the selected particular portion at the second resolution that is higher than the first resolution, wherein the second segment of the selected particular portion is determined to contain a selected item of interest comprises:

means for transmitting the second segment of the selected particular portion at the second resolution that is higher than the first resolution, wherein the second segment of the selected particular portion is determined to contain a selected football player.

155. The computationally-implemented system of clause 153, wherein said means for transmitting the second segment of the selected particular portion at the second resolution that is higher than the first resolution, wherein the second segment of the selected particular portion is determined to contain a selected item of interest comprises:

means for transmitting the second segment of the selected particular portion at the second resolution that is higher than the first resolution, wherein the second segment of the selected particular portion is determined to contain a selected landmark.

156. The computationally-implemented system of clause 153, wherein said means for transmitting the second segment of the selected particular portion at the second resolution that is higher than the first resolution, wherein the second segment of the selected particular portion is determined to contain a selected item of interest comprises:

means for transmitting the second segment of the selected particular portion at the second resolution that is higher than the first resolution, wherein the second segment of the selected particular portion is determined to contain a selected animal for observation.

157. The computationally-implemented system of clause 153, wherein said means for transmitting the second segment of the selected particular portion at the second resolution that is higher than the first resolution, wherein the second segment of the selected particular portion is determined to contain a selected item of interest comprises:

means for transmitting the second segment of the selected particular portion at the second resolution that is higher than the first resolution, wherein the second segment of the selected particular portion is determined to contain a selected object in a dwelling.

158. The computationally-implemented system of clause 89, wherein said means for de-emphasizing pixels from the scene that are not part of the selected particular portion of the scene comprises:

means for transmitting only pixels associated with the selected particular portion to the remote location.

159. The computationally-implemented system of clause 89, wherein said means for de-emphasizing pixels from the scene that are not part of the selected particular portion of the scene comprises:

means for transmitting only the selected particular portion from the scene to a particular device that requested the selected particular portion.

160. The computationally-implemented system of clause 159, wherein said means for transmitting only the selected particular portion from the scene to a particular device that requested the selected particular portion comprises:

means for transmitting only the selected particular portion from the scene to a particular device operated by a user that requested the selected particular portion.

161. The computationally-implemented system of clause 159, wherein said means for transmitting only the selected particular portion from the scene to a particular device that requested the selected particular portion comprises:

means for transmitting only the selected particular portion from the scene to a particular device that requested the selected particular portion through selection of the particular portion from the scene.

162. The computationally-implemented system of clause 89, wherein said means for de-emphasizing pixels from the scene that are not part of the selected particular portion of the scene comprises:

means for deleting pixels from the scene that are not part of the selected particular portion of the scene.

163. The computationally-implemented system of clause 89, wherein said means for de-emphasizing pixels from the scene that are not part of the selected particular portion of the scene comprises:

means for indicating that pixels from the scene that are not part of the selected particular portion of the scene are not transmitted with the selected particular portion.

164. The computationally-implemented system of clause 163, wherein said means for indicating that pixels from the scene that are not part of the selected particular portion of the scene are not transmitted with the selected particular portion comprises:

means for appending data to pixels from the scene that are not part of the selected portion of the scene, said appended data configured to indicate non-transmission of the pixels to which data was appended.

165. The computationally-implemented system of clause 89, wherein said means for de-emphasizing pixels from the scene that are not part of the selected particular portion of the scene comprises:

means for discarding pixels from the scene that are not part of the selected particular portion of the scene.

166. The computationally-implemented system of clause 89, wherein said means for de-emphasizing pixels from the scene that are not part of the selected particular portion of the scene comprises:

means for denying retention of pixels from the scene that are not part of the selected particular portion of the scene.

167. The computationally-implemented system of clause 166, wherein said means for denying retention of pixels from the scene that are not part of the selected particular portion of the scene comprises:

means for retaining a subset of pixels from the scene that are not part of the selected particular portion of the scene.

168. The computationally-implemented system of clause 167, wherein said means for retaining a subset of pixels from the scene that are not part of the selected particular portion of the scene comprises:

means for retaining ten percent of the pixels from the scene that are not part of the selected particular portion of the scene.

169. The computationally-implemented system of clause 167, wherein said means for retaining a subset of pixels from the scene that are not part of the selected particular portion of the scene comprises:

means for retaining pixels from the scene that have been identified as part of a targeted object that are not part of the selected particular portion of the scene.

170. The computationally-implemented system of clause 169, wherein said means for retaining pixels from the scene that have been identified as part of a targeted object that are not part of the selected particular portion of the scene comprises:

means for retaining pixels from the scene that have been identified as a scenic landmark of interest that is not part of the selected particular portion of the scene.

171. The computationally-implemented system of clause 167, wherein said means for retaining a subset of pixels from the scene that are not part of the selected particular portion of the scene comprises:

means for retaining pixels from the scene that have been identified as part of a targeted object through automated pattern recognition that are not part of the selected particular portion of the scene.

172. The computationally-implemented system of clause 89, wherein said means for de-emphasizing pixels from the scene that are not part of the selected particular portion of the scene comprises:

means for storing pixels from the scene that are not part of the selected particular portion of the scene in a separate storage.

173. The computationally-implemented system of clause 172, wherein said means for storing pixels from the scene that are not part of the selected particular portion of the scene in a separate storage comprises:

means for storing pixels from the scene that are not part of the selected particular portion of the scene in a separate storage that is local to the array of more than one image sensor.

174. The computationally-implemented system of clause 172, wherein said means for storing pixels from the scene that are not part of the selected particular portion of the scene in a separate storage comprises:

means for storing pixels from the scene that are not part of the selected particular portion of the scene in a separate storage, wherein pixels stored in the separate storage are configured to be transmitted to the remote location separately from the selected particular portion of the scene.

175. The computationally-implemented system of clause 174, wherein said means for storing pixels from the scene that are not part of the selected particular portion of the scene in a separate storage, wherein pixels stored in the separate storage are configured to be transmitted to the remote location separately from the selected particular portion of the scene comprises:

means for storing pixels from the scene that are not part of the selected particular portion of the scene in a separate storage, wherein pixels stored in the separate storage are configured to be transmitted to the remote location at a time when there are no selected particular portions to be transmitted.

176. The computationally-implemented system of clause 174, wherein said means for storing pixels from the scene that are not part of the selected particular portion of the scene in a separate storage, wherein pixels stored in the separate storage are configured to be transmitted to the remote location separately from the selected particular portion of the scene comprises:

means for storing pixels from the scene that are not part of the selected particular portion of the scene in a separate storage, wherein pixels stored in the separate storage are configured to be separately transmitted to the remote location based on assigning a lower priority to the stored pixels.

177. A computationally-implemented system, comprising

circuitry for capturing a scene that includes one or more images, through use of an array of more than one image sensor;

circuitry for selecting a particular portion of the scene that includes at least one image, wherein the selected particular portion is smaller than the scene;

circuitry for transmitting only the selected particular portion from the scene to a remote location; and

circuitry for de-emphasizing pixels from the scene that are not part of the selected particular portion of the scene.

178. The computationally-implemented system of clause 177, wherein said circuitry for capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

circuitry for capturing the scene that includes the one or more images, through use of an array of image sensors.

179. The computationally-implemented system of clause 177, wherein said circuitry for capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

circuitry for capturing the scene that includes the one or more images, through use of two image sensors arranged side by side and angled toward each other.

180. The computationally-implemented system of clause 177, wherein said circuitry for capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

circuitry for capturing the scene that includes the one or more images, through use of the array of image sensors arranged in a grid.

181. The computationally-implemented system of clause 177, wherein said circuitry for capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

circuitry for capturing the scene that includes the one or more images, through use of the array of image sensors arranged in a line.

182. The computationally-implemented system of clause 181, wherein said circuitry for capturing the scene that includes the one or more images, through use of the array of image sensors arranged in a line comprises:

circuitry for capturing the scene that includes the one or more images, through use of the array of image sensors arranged in a line such that a field of view is greater than 120 degrees.

183. The computationally-implemented system of clause 181, wherein said circuitry for capturing the scene that includes the one or more images, through use of the array of image sensors arranged in a line comprises:

circuitry for capturing the scene that includes the one or more images, through use of the array of image sensors arranged in a line such that a field of view is 180 degrees.

184. The computationally-implemented system of clause 177, wherein said circuitry for capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

circuitry for capturing the scene that includes one or more images, through use of an array of more than one stationary image sensor.

185. The computationally-implemented system of clause 177, wherein said circuitry for capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

circuitry for capturing the scene that includes one or more images, through use of an array of more than one static image sensor.

186. The computationally-implemented system of clause 185, wherein said circuitry for capturing the scene that includes one or more images, through use of an array of more than one static image sensor comprises:

circuitry for capturing the scene that includes one or more images, through use of the array of more than one image sensor that has a fixed focal length and a fixed field of view.

187. The computationally-implemented system of clause 177, wherein said circuitry for capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

circuitry for capturing the scene that includes the one or more images, through use of an array of more than one image sensor mounted in a stationary location.

188. The computationally-implemented system of clause 177, wherein said circuitry for capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

circuitry for capturing the scene that includes one or more images, through use of an array of image sensors mounted on a movable platform.

189. The computationally-implemented system of clause 188, wherein said circuitry for capturing the scene that includes one or more images, through use of an array of image sensors mounted on a movable platform comprises:

circuitry for capturing the scene that includes one or more images, through use of an array of image sensors mounted on an unmanned aerial vehicle.

190. The computationally-implemented system of clause 177, wherein said circuitry for capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

circuitry for capturing the scene that includes one or more images, through use of an array of image sensors mounted on a satellite.

191. The computationally-implemented system of clause 177, wherein said circuitry for capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

circuitry for capturing the scene that includes one or more images, through use of an array of more than one image sensor, wherein the more than one image sensors each capture an image that represents a portion of the scene.

192. The computationally-implemented system of clause 191, wherein said circuitry for capturing the scene that includes one or more images, through use of an array of more than one image sensor, wherein the more than one image sensors each capture an image that represents a portion of the scene comprises:

circuitry for capturing the scene that includes one or more images, through use of the array of more than one image sensor, wherein the more than one image sensors each capture an image that represents a portion of the scene, and wherein the images are stitched together.

193. The computationally-implemented system of clause 177, wherein said circuitry for capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

circuitry for acquiring an image from each image sensor of the more than one image sensors; and

circuitry for combining the acquired images from the more than one image sensors into the scene.

194. The computationally-implemented system of clause 193, wherein said circuitry for acquiring an image from each image sensor of the more than one image sensors comprises:

circuitry for acquiring images from each image sensor of the more than one image sensors, wherein each acquired image at least partially overlaps at least one other image.

195. The computationally-implemented system of clause 194, wherein said circuitry for acquiring images from each image sensor of the more than one image sensors, wherein each acquired image at least partially overlaps at least one other image comprises:

circuitry for acquiring the image from each image sensor of the more than one image sensors, wherein each acquired image at least partially overlaps at least one other image captured by an adjacent image sensor.

196. The computationally-implemented system of clause 177, wherein said circuitry for capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

circuitry for capturing the scene that includes one or more images, through use of the array of more than one image sensor, wherein a size of the scene is greater than a capacity to transmit the scene.

197. The computationally-implemented system of clause 196, wherein said circuitry for capturing the scene that includes one or more images, through use of the array of more than one image sensor, wherein a size of the scene is greater than a capacity to transmit the scene comprises:

circuitry for capturing the scene that includes one or more images, through use of the array of more than one image sensor, wherein an amount of image data captured exceeds a bandwidth for transmitting the image data to a remote location.

198. The computationally-implemented system of clause 197, wherein said circuitry for capturing the scene that includes one or more images, through use of the array of more than one image sensor, wherein an amount of image data captured exceeds a bandwidth for transmitting the image data to a remote location comprises:

circuitry for capturing the scene that includes one or more images, through use of the array of more than one image sensor, wherein the amount of image data captured exceeds the bandwidth for transmitting the image data to the remote location by a factor of ten.

199. The computationally-implemented system of clause 177, wherein said circuitry for capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

circuitry for capturing a scene of a tourist destination that includes one or more images, through use of an array of more than one image sensor.

200. The computationally-implemented system of clause 177, wherein said circuitry for capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

circuitry for capturing a scene of a highway across a bridge that includes one or more images, through use of an array of more than one image sensor, wherein the one or more images include one or more images of cars crossing the highway across the bridge.

201. The computationally-implemented system of clause 177, wherein said circuitry for capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

circuitry for capturing a scene of a home that includes one or more images, through use of an array of more than one image sensor, wherein the one or more images include an image of an appliance in the home.

202. The computationally-implemented system of clause 177, wherein said circuitry for capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

circuitry for capturing the scene that includes one or more images, through use of a grouping of more than one image sensor.

203. The computationally-implemented system of clause 177, wherein said circuitry for capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

circuitry for capturing the scene that includes one or more images, through use of multiple image sensors whose data is directed to a common source.

204. The computationally-implemented system of clause 177, wherein said circuitry for capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

circuitry for capturing the scene that includes one or more images, through use of an array of more than one image sensor, the more than one image sensor including one or more of a charge-coupled device and a complementary metal-oxide-semiconductor device.

205. The computationally-implemented system of clause 177, wherein said circuitry for capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

circuitry for capturing the scene that includes image data and sound data, through use of an array of more than one image sensor, wherein the array of more than one image sensor is configured to capture image data and sound data.

206. The computationally-implemented system of clause 205, wherein said circuitry for capturing the scene that includes image data and sound data, through use of an array of more than one image sensor, wherein the array of more than one image sensor is configured to capture image data and sound data comprises:

circuitry for capturing the scene that includes image data and sound data, through use of the array of more than one image sensor, wherein the array of more than one image sensor includes image sensors configured to capture image data and microphones configured to capture sound data.

207. The computationally-implemented system of clause 177, wherein said circuitry for capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

circuitry for capturing the scene that includes soundwave image data, through use of the array of more than one soundwave image sensor that captures soundwave data.

208. The computationally-implemented system of clause 177, wherein said circuitry for capturing a scene that includes one or more images, through use of an array of more than one image sensor comprises:

circuitry for capturing a scene that includes video data, through use of an array of more than one video capture device.

209. The computationally-implemented system of clause 177, wherein said circuitry for selecting a particular portion of the scene that includes at least one image, wherein the selected particular portion is smaller than the scene comprises:

circuitry for selecting the particular image from the scene, wherein the selected particular image represents the request for the image that is smaller than the entire scene.

210. The computationally-implemented system of clause 209, wherein said circuitry for selecting the particular image from the scene, wherein the selected particular image represents the request for the image that is smaller than the entire scene comprises:

circuitry for selecting the particular image from the scene, wherein the selected particular image represents a particular remote user-requested image that is smaller than the scene.

211. The computationally-implemented thing/operation disclosure of clause 177, wherein said circuitry for selecting a particular portion of the scene that includes at least one image, wherein the selected particular portion is smaller than the scene comprises:

circuitry for selecting the particular portion of the scene that includes a requested image, wherein the selected particular portion is smaller than the scene.

212. The computationally-implemented thing/operation disclosure of clause 211, wherein said circuitry for selecting the particular portion of the scene that includes a requested image, wherein the selected particular portion is smaller than the scene comprises:

circuitry for selecting the particular portion of the scene that includes an image requested by a remote operator of the array of more than one image sensors, wherein the selected particular portion is smaller than the scene.

213. The computationally-implemented thing/operation disclosure of clause 177, wherein said circuitry for selecting a particular portion of the scene that includes at least one image, wherein the selected particular portion is smaller than the scene comprises:

circuitry for receiving a request for a particular image; and

circuitry for selecting the particular image from the scene, wherein the particular image is smaller than the scene.

214. The computationally-implemented thing/operation disclosure of clause 177, wherein said circuitry for selecting a particular portion of the scene that includes at least one image, wherein the selected particular portion is smaller than the scene comprises:

circuitry for receiving a first request for a first particular image and a second request for a second particular image; and

circuitry for selecting the particular portion of the scene that includes the first particular image and the second particular image, wherein the particular portion of the scene is smaller than the scene.

215. The computationally-implemented thing/operation disclosure of clause 214, wherein said circuitry for receiving a first request for a first particular image and a second request for a second particular image comprises:

circuitry for receiving the first request for the first particular image and the second request for the second particular image, wherein the first particular image and the second particular image do not overlap.

216. The computationally-implemented thing/operation disclosure of clause 214, wherein said circuitry for receiving a first request for a first particular image and a second request for a second particular image comprises:

circuitry for receiving a first request for a first particular image and a second request for a second particular image, wherein the first particular image and the second particular image overlap at an overlapping portion.

217. The computationally-implemented thing/operation disclosure of clause 216, wherein said circuitry for receiving a first request for a first particular image and a second request for a second particular image, wherein the first particular image and the second particular image overlap at an overlapping portion comprises:

circuitry for receiving the first request for the first particular image and the second request for the second particular image, wherein the first particular image and the second particular image overlap at the overlapping portion, and wherein the overlapping portion is configured to be transmitted once only.
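For illustration only: one way to realize the once-only transmission of an overlapping portion recited above is to expand each rectangular request into tile coordinates and send the set union, so shared tiles leave the sensor array a single time. This minimal Python sketch assumes a hypothetical 16-pixel tile grid and example request rectangles; none of these names appear in the disclosure.

```python
# Minimal sketch (assumed tile-based representation, not from the disclosure):
# two requested regions are expanded into tile coordinates, and the union
# ensures any overlapping tiles are transmitted only once.

def tiles_for(region, tile=16):
    """Yield (col, row) tile coordinates covering an (x, y, w, h) region."""
    x, y, w, h = region
    for ty in range(y // tile, (y + h + tile - 1) // tile):
        for tx in range(x // tile, (x + w + tile - 1) // tile):
            yield (tx, ty)

first_request = (0, 0, 64, 64)      # hypothetical first particular image
second_request = (48, 48, 64, 64)   # overlaps the first near (48..64, 48..64)

union = set(tiles_for(first_request)) | set(tiles_for(second_request))
overlap = set(tiles_for(first_request)) & set(tiles_for(second_request))
print(f"{len(union)} tiles transmitted; {len(overlap)} overlapping tiles sent once")
```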

218. The computationally-implemented thing/operation disclosure of clause 177, wherein said circuitry for selecting a particular portion of the scene that includes at least one image, wherein the selected particular portion is smaller than the scene comprises:

circuitry for selecting the particular portion of the scene that includes at least one image, wherein the selected particular portion is smaller than the scene and contains a particular image object.

219. The computationally-implemented thing/operation disclosure of clause 218, wherein said circuitry for selecting the particular portion of the scene that includes at least one image, wherein the selected particular portion is smaller than the scene and contains a particular image object comprises:

circuitry for selecting the particular portion of the scene that includes at least one image, wherein the selected particular portion is smaller than the scene and contains an image object of a person.

220. The computationally-implemented thing/operation disclosure of clause 218, wherein said circuitry for selecting the particular portion of the scene that includes at least one image, wherein the selected particular portion is smaller than the scene and contains a particular image object comprises:

circuitry for selecting the particular portion of the scene that includes at least one image, wherein the selected particular portion is smaller than the scene and contains an image object of a car.
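For illustration only: selecting a portion that contains a particular image object (a person, a car) could be driven by any object detector. In this minimal Python sketch, detect_objects is a hypothetical stand-in that returns canned labeled boxes; the disclosure names no detection algorithm.

```python
# Minimal sketch of object-driven portion selection; detect_objects is a
# hypothetical placeholder for any detector, since none is named here.

def detect_objects(scene):
    """Placeholder detector: returns labeled bounding boxes (x, y, w, h)."""
    return [("person", (120, 40, 60, 140)), ("car", (400, 200, 180, 90))]

def select_portion(scene, wanted_label, margin=20):
    """Return a crop rectangle around the first object with the wanted label."""
    for label, (x, y, w, h) in detect_objects(scene):
        if label == wanted_label:
            return (max(0, x - margin), max(0, y - margin),
                    w + 2 * margin, h + 2 * margin)
    return None

print(select_portion(scene=None, wanted_label="person"))
```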

221. The computationally-implemented thing/operation disclosure of clause 177, wherein said circuitry for selecting a particular portion of the scene that includes at least one image, wherein the selected particular portion is smaller than the scene comprises:

circuitry for selecting the particular portion of the scene that includes at least one image, wherein a size of the particular portion of the scene is at least partially based on a characteristic of a requesting device.

222. The computationally-implemented thing/operation disclosure of clause 221, wherein said circuitry for selecting the particular portion of the scene that includes at least one image, wherein a size of the particular portion of the scene is at least partially based on a characteristic of a requesting device comprises:

circuitry for selecting the particular portion of the scene that includes at least one image, wherein the size of the particular portion of the scene is at least partially based on a screen resolution of the requesting device.

223. The computationally-implemented thing/operation disclosure of clause 221, wherein said circuitry for selecting the particular portion of the scene that includes at least one image, wherein a size of the particular portion of the scene is at least partially based on a characteristic of a requesting device comprises:

circuitry for selecting the particular portion of the scene that includes at least one image, wherein the size of the particular portion of the scene is at least partially based on a combined size of screens of at least one requesting device.
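For illustration only: sizing the portion from a characteristic of the requesting device, such as one screen resolution or the combined size of several screens, might be sketched as below. The screen dimensions are hypothetical examples.

```python
# Minimal sketch: the selected portion is sized from the requesting devices'
# screen resolutions (the sizing rule and values are assumptions).

def portion_size(screens):
    """Size a portion to cover the combined width and tallest height of screens."""
    width = sum(w for w, h in screens)
    height = max(h for w, h in screens)
    return (width, height)

print(portion_size([(1920, 1080)]))               # single requesting device
print(portion_size([(1920, 1080), (1280, 720)]))  # combined screens
```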

224. The computationally-implemented thing/operation disclosure of clause 177, wherein said circuitry for transmitting only the selected particular portion from the scene to a remote location comprises:

circuitry for transmitting only the selected particular portion to a remote server.

225. The computationally-implemented thing/operation disclosure of clause 224, wherein said circuitry for transmitting only the selected particular portion to a remote server comprises:

circuitry for transmitting only the selected particular portion to the remote server that is configured to receive one or more requests for one or more particular images from the scene.

226. The computationally-implemented thing/operation disclosure of clause 225, wherein said circuitry for transmitting only the selected particular portion to the remote server that is configured to receive one or more requests for one or more particular images from the scene comprises:

circuitry for transmitting only the selected particular portion to the remote server that is configured to receive multiple requests from discrete users for multiple particular images from the scene.

227. The computationally-implemented thing/operation disclosure of clause 224, wherein said circuitry for transmitting only the selected particular portion to a remote server comprises:

circuitry for transmitting only the selected particular portion to the remote server that requested the selected particular portion from the image.

228. The computationally-implemented thing/operation disclosure of clause 177, wherein said circuitry for transmitting only the selected particular portion from the scene to a remote location comprises:

circuitry for transmitting only the selected particular portion from the scene at a particular resolution.

229. The computationally-implemented thing/operation disclosure of clause 228, wherein said circuitry for transmitting only the selected particular portion from the scene at a particular resolution comprises:

circuitry for determining an available bandwidth for transmission to the remote location; and

circuitry for transmitting only the selected particular portion from the scene at the particular resolution that is at least partially based on the determined available bandwidth.
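For illustration only: one plausible reading of bandwidth-based resolution selection is to downscale the portion until its data rate fits the measured bandwidth. The scaling rule, frame rate, and pixel depth below are assumptions, not taken from the disclosure.

```python
# Minimal sketch, assuming a measured bandwidth figure and 24-bit pixels at
# 30 frames per second; the 10%-step scaling rule is illustrative only.

def resolution_for_bandwidth(native_w, native_h, bandwidth_bps,
                             bits_per_pixel=24, fps=30):
    """Downscale until the portion's data rate fits the available bandwidth."""
    scale = 1.0
    while native_w * scale * native_h * scale * bits_per_pixel * fps > bandwidth_bps:
        scale *= 0.9
    return int(native_w * scale), int(native_h * scale)

print(resolution_for_bandwidth(1920, 1080, bandwidth_bps=50_000_000))
```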

230. The computationally-implemented thing/operation disclosure of clause 228, wherein said circuitry for transmitting only the selected particular portion from the scene at a particular resolution comprises:

circuitry for transmitting only the selected particular portion from the scene at the particular resolution, wherein the particular resolution is less than a resolution at which the scene was captured.

231. The computationally-implemented thing/operation disclosure of clause 228, wherein said circuitry for transmitting only the selected particular portion from the scene at a particular resolution comprises:

circuitry for transmitting only the selected particular portion from the scene at the particular resolution, wherein the particular resolution is less than a resolution at which the selected particular portion was captured.

232. The computationally-implemented thing/operation disclosure of clause 177, wherein said circuitry for transmitting only the selected particular portion from the scene to a remote location comprises:

circuitry for transmitting a first segment of the selected particular portion at a first resolution, to the remote location; and

circuitry for transmitting a second segment of the selected particular portion at a second resolution that is higher than the first resolution, to the remote location.
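For illustration only: the two-segment transmission recited above, a low-resolution first segment and a higher-resolution second segment, might look like the following NumPy sketch, where the inner rectangle and the quarter-resolution downsampling factor are assumed.

```python
# Minimal sketch: the inner segment travels at full resolution while a
# downsampled copy stands in for the low-resolution first segment.
import numpy as np

def split_and_encode(portion, inner):
    """Return (low-res first segment, full-res second segment) for a portion."""
    x, y, w, h = inner
    second = portion[y:y + h, x:x + w].copy()  # high-resolution inner segment
    first = portion[::4, ::4].copy()           # whole portion at quarter resolution
    return first, second

portion = np.zeros((400, 600, 3), dtype=np.uint8)
border, focus = split_and_encode(portion, inner=(200, 100, 200, 200))
print(border.shape, focus.shape)
```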

233. The computationally-implemented thing/operation disclosure of clause 232, wherein said circuitry for transmitting a first segment of the selected particular portion at a first resolution, to the remote location comprises:

circuitry for transmitting the first segment of the selected particular portion at the first resolution, wherein the first segment of the selected particular portion surrounds the second segment of the selected particular portion.

234. The computationally-implemented thing/operation disclosure of clause 232, wherein said circuitry for transmitting a first segment of the selected particular portion at a first resolution, to the remote location comprises:

circuitry for transmitting the first segment of the selected particular portion at the first resolution, wherein the first segment of the selected particular portion borders the second segment of the selected particular portion.

235. The computationally-implemented thing/operation disclosure of clause 232, wherein said circuitry for transmitting a first segment of the selected particular portion at a first resolution, to the remote location comprises:

circuitry for transmitting the first segment of the selected particular portion at the first resolution, wherein the first segment of the selected particular portion is determined at least in part by a content of the selected particular portion.

236. The computationally-implemented thing/operation disclosure of clause 235, wherein said circuitry for transmitting the first segment of the selected particular portion at the first resolution, wherein the first segment of the selected particular portion is determined at least in part by a content of the selected particular portion comprises:

circuitry for transmitting the first segment of the selected particular portion at the first resolution, wherein the first segment of the selected particular portion is an area that does not contain an item of interest.

237. The computationally-implemented thing/operation disclosure of clause 236, wherein said circuitry for transmitting the first segment of the selected particular portion at the first resolution, wherein the first segment of the selected particular portion is an area that does not contain an item of interest comprises:

circuitry for transmitting the first segment of the selected particular portion at the first resolution, wherein the first segment of the selected particular portion is an area that does not contain a selected person of interest.

238. The computationally-implemented thing/operation disclosure of clause 236, wherein said circuitry for transmitting the first segment of the selected particular portion at the first resolution, wherein the first segment of the selected particular portion is an area that does not contain an item of interest comprises:

circuitry for transmitting the first segment of the selected particular portion at the first resolution, wherein the first segment of the selected particular portion is an area that does not contain a selected object designated for tracking.

239. The computationally-implemented thing/operation disclosure of clause 232, wherein said circuitry for transmitting a second segment of the selected particular portion at a second resolution that is higher than the first resolution, to the remote location comprises:

circuitry for transmitting the second segment of the selected particular portion at the second resolution that is higher than the first resolution, wherein the second segment of the selected particular portion represents an area selected by a user, and the first segment of the selected particular portion represents a border area that borders the area selected by the user.

240. The computationally-implemented thing/operation disclosure of clause 232, wherein said circuitry for transmitting a second segment of the selected particular portion at a second resolution that is higher than the first resolution, to the remote location comprises:

circuitry for transmitting the second segment of the selected particular portion at the second resolution that is higher than the first resolution, wherein the second segment of the selected particular portion is surrounded by the first segment of the selected particular portion.

241. The computationally-implemented thing/operation disclosure of clause 232, wherein said circuitry for transmitting a second segment of the selected particular portion at a second resolution that is higher than the first resolution, to the remote location comprises:

circuitry for transmitting the second segment of the selected particular portion at the second resolution that is higher than the first resolution, wherein the second segment of the selected particular portion is determined to contain a selected item of interest.

242. The computationally-implemented thing/operation disclosure of clause 241, wherein said circuitry for transmitting the second segment of the selected particular portion at the second resolution that is higher than the first resolution, wherein the second segment of the selected particular portion is determined to contain a selected item of interest comprises:

circuitry for transmitting the second segment of the selected particular portion at the second resolution that is higher than the first resolution, wherein the second segment of the selected particular portion is determined to contain a selected football player.

243. The computationally-implemented thing/operation disclosure of clause 241, wherein said circuitry for transmitting the second segment of the selected particular portion at the second resolution that is higher than the first resolution, wherein the second segment of the selected particular portion is determined to contain a selected item of interest comprises:

circuitry for transmitting the second segment of the selected particular portion at the second resolution that is higher than the first resolution, wherein the second segment of the selected particular portion is determined to contain a selected landmark.

244. The computationally-implemented thing/operation disclosure of clause 241, wherein said circuitry for transmitting the second segment of the selected particular portion at the second resolution that is higher than the first resolution, wherein the second segment of the selected particular portion is determined to contain a selected item of interest comprises:

circuitry for transmitting the second segment of the selected particular portion at the second resolution that is higher than the first resolution, wherein the second segment of the selected particular portion is determined to contain a selected animal for observation.

245. The computationally-implemented thing/operation disclosure of clause 241, wherein said circuitry for transmitting the second segment of the selected particular portion at the second resolution that is higher than the first resolution, wherein the second segment of the selected particular portion is determined to contain a selected item of interest comprises:

circuitry for transmitting the second segment of the selected particular portion at the second resolution that is higher than the first resolution, wherein the second segment of the selected particular portion is determined to contain a selected object in a dwelling.

246. The computationally-implemented thing/operation disclosure of clause 177, wherein said circuitry for de-emphasizing pixels from the scene that are not part of the selected particular portion of the scene comprises:

circuitry for transmitting only pixels associated with the selected particular portion to the remote location.

247. The computationally-implemented thing/operation disclosure of clause 177, wherein said circuitry for de-emphasizing pixels from the scene that are not part of the selected particular portion of the scene comprises:

circuitry for transmitting only the selected particular portion from the scene to a particular device that requested the selected particular portion.

248. The computationally-implemented thing/operation disclosure of clause 247, wherein said circuitry for transmitting only the selected particular portion from the scene to a particular device that requested the selected particular portion comprises:

circuitry for transmitting only the selected particular portion from the scene to a particular device operated by a user that requested the selected particular portion.

249. The computationally-implemented thing/operation disclosure of clause 247, wherein said circuitry for transmitting only the selected particular portion from the scene to a particular device that requested the selected particular portion comprises:

circuitry for transmitting only the selected particular portion from the scene to a particular device that requested the selected particular portion through selection of the particular portion from the scene.

250. The computationally-implemented thing/operation disclosure of clause 177, wherein said circuitry for de-emphasizing pixels from the scene that are not part of the selected particular portion of the scene comprises:

circuitry for deleting pixels from the scene that are not part of the selected particular portion of the scene.

251. The computationally-implemented thing/operation disclosure of clause 177, wherein said circuitry for de-emphasizing pixels from the scene that are not part of the selected particular portion of the scene comprises:

circuitry for indicating that pixels from the scene that are not part of the selected particular portion of the scene are not transmitted with the selected particular portion.

252. The computationally-implemented thing/operation disclosure of clause 251, wherein said circuitry for indicating that pixels from the scene that are not part of the selected particular portion of the scene are not transmitted with the selected particular portion comprises:

circuitry for appending data to pixels from the scene that are not part of the selected portion of the scene, said appended data configured to indicate non-transmission of the pixels to which data was appended.
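For illustration only: the appended data indicating non-transmission could be modeled as a per-pixel boolean flag alongside the scene buffer, as in this minimal NumPy sketch; the flag-array representation is an assumption.

```python
# Minimal sketch: a per-pixel flag array stands in for the "appended data"
# marking pixels outside the selected portion as not-for-transmission.
import numpy as np

scene = np.zeros((100, 150, 3), dtype=np.uint8)
transmit = np.zeros(scene.shape[:2], dtype=bool)   # appended flag per pixel

x, y, w, h = 30, 20, 60, 40                        # selected particular portion
transmit[y:y + h, x:x + w] = True

payload = scene[transmit]                          # only flagged pixels leave
print(payload.shape[0], "pixels transmitted of", scene.shape[0] * scene.shape[1])
```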

253. The computationally-implemented thing/operation disclosure of clause 177, wherein said circuitry for de-emphasizing pixels from the scene that are not part of the selected particular portion of the scene comprises:

circuitry for discarding pixels from the scene that are not part of the selected particular portion of the scene.

254. The computationally-implemented thing/operation disclosure of clause 177, wherein said circuitry for de-emphasizing pixels from the scene that are not part of the selected particular portion of the scene comprises:

circuitry for denying retention of pixels from the scene that are not part of the selected particular portion of the scene.

255. The computationally-implemented thing/operation disclosure of clause 254, wherein said circuitry for denying retention of pixels from the scene that are not part of the selected particular portion of the scene comprises:

circuitry for retaining a subset of pixels from the scene that are not part of the selected particular portion of the scene.

256. The computationally-implemented thing/operation disclosure of clause 255, wherein said circuitry for retaining a subset of pixels from the scene that are not part of the selected particular portion of the scene comprises:

circuitry for retaining ten percent of the pixels from the scene that are not part of the selected particular portion of the scene.
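For illustration only: retaining ten percent of the non-selected pixels could use any sampling rule; uniform striding, as below, is one assumed choice since the disclosure names none.

```python
# Minimal sketch: keep a fixed ten-percent sample of the non-selected pixels
# (uniform striding is one possible sampling rule among many).
import numpy as np

rng = np.random.default_rng(0)
non_selected = rng.integers(0, 256, size=(10_000, 3), dtype=np.uint8)

keep = non_selected[::10]                # every tenth pixel, i.e. ten percent
print(f"retained {len(keep)} of {len(non_selected)} non-selected pixels")
```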

257. The computationally-implemented thing/operation disclosure of clause 255, wherein said circuitry for retaining a subset of pixels from the scene that are not part of the selected particular portion of the scene comprises:

circuitry for retaining pixels from the scene that have been identified as part of a targeted object that are not part of the selected particular portion of the scene.

258. The computationally-implemented thing/operation disclosure of clause 257, wherein said circuitry for retaining pixels from the scene that have been identified as part of a targeted object that are not part of the selected particular portion of the scene comprises:

circuitry for retaining pixels from the scene that have been identified as a scenic landmark of interest that is not part of the selected particular portion of the scene.

259. The computationally-implemented thing/operation disclosure of clause 255, wherein said circuitry for retaining a subset of pixels from the scene that are not part of the selected particular portion of the scene comprises:

circuitry for retaining pixels from the scene that have been identified as part of a targeted object through automated pattern recognition that are not part of the selected particular portion of the scene.

260. The computationally-implemented thing/operation disclosure of clause 177, wherein said circuitry for de-emphasizing pixels from the scene that are not part of the selected particular portion of the scene comprises:

circuitry for storing pixels from the scene that are not part of the selected particular portion of the scene in a separate storage.

261. The computationally-implemented thing/operation disclosure of clause 260, wherein said circuitry for storing pixels from the scene that are not part of the selected particular portion of the scene in a separate storage comprises:

circuitry for storing pixels from the scene that are not part of the selected particular portion of the scene in a separate storage that is local to the array of more than one image sensor.

262. The computationally-implemented thing/operation disclosure of clause 260, wherein said circuitry for storing pixels from the scene that are not part of the selected particular portion of the scene in a separate storage comprises:

circuitry for storing pixels from the scene that are not part of the selected particular portion of the scene in a separate storage, wherein pixels stored in the separate storage are configured to be transmitted to the remote location separately from the selected particular portion of the scene.

263. The computationally-implemented thing/operation disclosure of clause 262, wherein said circuitry for storing pixels from the scene that are not part of the selected particular portion of the scene in a separate storage, wherein pixels stored in the separate storage are configured to be transmitted to the remote location separately from the selected particular portion of the scene comprises:

circuitry for storing pixels from the scene that are not part of the selected particular portion of the scene in a separate storage, wherein pixels stored in the separate storage are configured to be transmitted to the remote location at a time when there are no selected particular portions to be transmitted.

264. The computationally-implemented thing/operation disclosure of clause 262, wherein said circuitry for storing pixels from the scene that are not part of the selected particular portion of the scene in a separate storage, wherein pixels stored in the separate storage are configured to be transmitted to the remote location separately from the selected particular portion of the scene comprises:

circuitry for storing pixels from the scene that are not part of the selected particular portion of the scene in a separate storage, wherein pixels stored in the separate storage are configured to be separately transmitted to the remote location based on assigning a lower priority to the stored pixels.
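For illustration only: assigning a lower priority to stored pixels so they are transmitted separately can be modeled with a priority queue, as in this minimal sketch; the priority values and labels are assumptions.

```python
# Minimal sketch: selected portions and stored leftover pixels share one
# transmit queue, with leftovers assigned a lower priority (higher number).
import heapq

queue = []
heapq.heappush(queue, (0, "selected portion, frame 1"))   # priority 0: requested
heapq.heappush(queue, (9, "stored non-selected pixels"))  # priority 9: deferred
heapq.heappush(queue, (0, "selected portion, frame 2"))

while queue:
    priority, item = heapq.heappop(queue)
    print(f"transmitting (priority {priority}): {item}")
```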

265. A thing/operation disclosure, comprising:

a signal-bearing medium bearing:

one or more instructions for capturing a scene that includes one or more images, through use of an array of more than one image sensor;

one or more instructions for selecting a particular portion of the scene that includes at least one image, wherein the selected particular portion is smaller than the scene;

one or more instructions for transmitting only the selected particular portion from the scene to a remote location; and

one or more instructions for de-emphasizing pixels from the scene that are not part of the selected particular portion of the scene.

266. A thing/operation disclosure defined by a computational language comprising:

one or more interchained physical machines ordered for capturing a scene that includes one or more images, through use of an array of more than one image sensor;

one or more interchained physical machines ordered for selecting a particular portion of the scene that includes at least one image, wherein the selected particular portion is smaller than the scene;

one or more interchained physical machines ordered for transmitting only the selected particular portion from the scene to a remote location; and

one or more interchained physical machines ordered for de-emphasizing pixels from the scene that are not part of the selected particular portion of the scene.
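For illustration only: the four operations recited in clauses 265 and 266, capture, select, transmit, and de-emphasize, can be read as one pipeline. Every function body in this minimal Python sketch is a stand-in, since the clauses recite operations rather than algorithms.

```python
# Minimal sketch of the four recited operations as one pipeline; all bodies
# below are assumed stand-ins, not taken from the disclosure.
import numpy as np

def capture_scene(sensors):
    """Capture: concatenate one frame per sensor into a single wide scene."""
    return np.concatenate([s() for s in sensors], axis=1)

def select_portion(scene, x, y, w, h):
    """Select: a portion smaller than the scene."""
    return scene[y:y + h, x:x + w]

def transmit(portion):
    """Transmit: stand-in that reports what would be sent."""
    print("transmitting", portion.shape)

sensors = [lambda: np.zeros((240, 320, 3), dtype=np.uint8)] * 4
scene = capture_scene(sensors)
portion = select_portion(scene, 100, 50, 320, 120)
transmit(portion)
scene = None   # de-emphasize: drop non-selected pixels after transmission
```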

STRUCTURED DISCLOSURE DIRECTED TOWARD ONE OF SKILL IN THE ART [END]


STRUCTURED DISCLOSURE DIRECTED TOWARD ONE OF SKILL IN THE ART [START]

Those skilled in the art will appreciate the indentations and cross-referencing of the following as instructive. In general, each immediately following numbered paragraph in this "Structured Disclosure Directed Toward One Of Skill In The Art Section" should be treated as preceded by the word "Clause," as can be seen by the Clauses that cross-reference against them, and each cross-referencing paragraph is to be understood as permissively incorporating its parent clause(s) by reference.

As used herein, and in particular in the following thing/operation disclosures, the word "comprising" can generally be interpreted as "including but not limited to":

267. (NEW) A thing/operation disclosure, comprising:

a multiple image sensor based scene capturing module configured to capture a scene that includes one or more images, through use of more than one image sensor;

a scene particular portion selecting module configured to select a particular portion of the scene that includes at least one image, wherein the selected particular portion is smaller than the scene;

a selected particular portion transmitting module configured to transmit the selected particular portion from the scene to a remote location; and

a scene pixel de-emphasizing module configured to de-emphasize pixels from the scene that are not part of the selected particular portion of the scene.

268. (NEW) The thing/operation disclosure of clause 267, wherein said multiple image sensor based scene capturing module configured to capture a scene that includes one or more images, through use of more than one image sensor comprises:

a multiple image sensor based scene capturing module configured to capture a scene that includes one or more images, through use of an array of image sensors.

269. (NEW) The thing/operation disclosure of clause 267, wherein said multiple image sensor based scene capturing module configured to capture a scene that includes one or more images, through use of more than one image sensor comprises:

a multiple image sensor based scene capturing module configured to capture a scene that includes one or more images through use of two image sensors arranged side by side and angled toward each other.

270. (NEW) The thing/operation disclosure of clause 267, wherein said multiple image sensor based scene capturing module configured to capture a scene that includes one or more images, through use of more than one image sensor comprises:

a multiple image sensor based scene capturing module configured to capture a scene that includes the one or more images through use of image sensors aligned in a grid pattern.

271. (NEW) The thing/operation disclosure of clause 267, wherein said multiple image sensor based scene capturing module configured to capture a scene that includes one or more images, through use of more than one image sensor comprises:

a multiple image sensor based scene capturing module configured to capture a scene that includes the one or more images through use of image sensors aligned in a line pattern.

272. (NEW) The thing/operation disclosure of clause 271, wherein said multiple image sensor based scene capturing module configured to capture a scene that includes the one or more images through use of image sensors aligned in a line pattern comprises:

a multiple image sensor based scene capturing module configured to capture a scene that includes the one or more images through use of image sensors aligned in a line pattern such that a field of view is greater than 120 degrees.

273. (NEW) The thing/operation disclosure of clause 271, wherein said multiple image sensor based scene capturing module configured to capture a scene that includes the one or more images through use of image sensors aligned in a line pattern comprises:

a multiple image sensor based scene capturing module configured to capture a scene that includes the one or more images through use of image sensors aligned in a line pattern such that a field of view is 180 degrees.
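For illustration only: the combined field of view of image sensors aligned in a line pattern, as in clauses 272 and 273, follows from each sensor's field of view and the overlap between neighbors. The angles below are hypothetical examples.

```python
# Minimal sketch: combined horizontal field of view for sensors in a line,
# assuming each sensor's view overlaps its neighbor by a fixed angle.

def combined_fov(sensor_fov_deg, sensor_count, overlap_deg):
    """Total field of view of a line of sensors with pairwise overlap."""
    return sensor_fov_deg + (sensor_count - 1) * (sensor_fov_deg - overlap_deg)

print(combined_fov(60, 3, 10))   # 160 degrees, greater than 120
print(combined_fov(60, 4, 20))   # 180 degrees
```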

274. (NEW) The thing/operation disclosure of clause 267, wherein said multiple image sensor based scene capturing module configured to capture a scene that includes one or more images, through use of more than one image sensor comprises:

a multiple image sensor based scene capturing module configured to capture a scene that includes the one or more images through use of more than one stationary image sensor.

275. (NEW) The thing/operation disclosure of clause 267, wherein said multiple image sensor based scene capturing module configured to capture a scene that includes one or more images, through use of more than one image sensor comprises:

a multiple image sensor based scene capturing module configured to capture a scene that includes the one or more images through use of more than one static image sensor.

276. (NEW) The thing/operation disclosure of clause 275, wherein said multiple image sensor based scene capturing module configured to capture a scene that includes the one or more images through use of more than one static image sensor comprises: a multiple image sensor based scene capturing module configured to capture a scene that includes the one or more images through use of more than one image sensor that has a fixed focal length and a fixed field of view.

277. (NEW) The thing/operation disclosure of clause 267, wherein said multiple image sensor based scene capturing module configured to capture a scene that includes one or more images, through use of more than one image sensor comprises:

a multiple image sensor based scene capturing module configured to capture a scene that includes the one or more images through use of more than one image sensor mounted in a fixed location.

278. (NEW) The thing/operation disclosure of clause 267, wherein said multiple image sensor based scene capturing module configured to capture a scene that includes one or more images, through use of more than one image sensor comprises:

a multiple image sensor based scene capturing module configured to capture a scene that includes the one or more images through use of more than one image sensor mounted on a movable platform.

279. (NEW) The thing/operation disclosure of clause 278, wherein said multiple image sensor based scene capturing module configured to capture a scene that includes the one or more images through use of more than one image sensor mounted on a movable platform comprises:

a multiple image sensor based scene capturing module configured to capture a scene that includes the one or more images through use of more than one image sensor mounted on an unmanned aerial vehicle.

280. (NEW) The thing/operation disclosure of clause 267, wherein said multiple image sensor based scene capturing module configured to capture a scene that includes one or more images, through use of more than one image sensor comprises:

a multiple image sensor based scene capturing module configured to capture a scene that includes the one or more images through use of more than one image sensor mounted on a satellite.

281. (NEW) The thing/operation disclosure of clause 267, wherein said multiple image sensor based scene capturing module configured to capture a scene that includes one or more images, through use of more than one image sensor comprises:

a multiple image sensor based scene capturing module configured to capture a scene that includes one or more images that represent portions of the scene, through use of more than one image sensor.

282. (NEW) The thing/operation disclosure of clause 281, wherein said multiple image sensor based scene capturing module configured to capture a scene that includes one or more images that represent portions of the scene, through use of more than one image sensor comprises:

a multiple image sensor based scene capturing module configured to capture a scene that includes one or more images that represent portions of the scene that are stitched together.

283. (NEW) The thing/operation disclosure of clause 267, wherein said multiple image sensor based scene capturing module configured to capture a scene that includes one or more images, through use of more than one image sensor comprises:

a particular image from each image sensor acquiring module; and

an acquired particular image from each image sensor combining into the scene module configured to combine the acquired particular images into the scene.

284. (NEW) The thing/operation disclosure of clause 283, wherein said particular image from each image sensor acquiring module comprises:

a particular images from each image sensor acquiring module, wherein at least one of the acquired particular images at least partially overlaps another of the acquired particular images.

285. (NEW) The thing/operation disclosure of clause 284, wherein said particular images from each image sensor acquiring module comprises:

a particular images from each image sensor acquiring module, wherein at least one of the acquired particular images at least partially overlaps another of the acquired particular images captured by an adjacent image sensor.
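For illustration only: combining partially overlapping images from adjacent sensors into the scene, as recited in clauses 283 through 285, might trim the duplicated columns before concatenation. The fixed 32-pixel overlap stands in for real image registration, which the disclosure does not specify.

```python
# Minimal sketch: adjacent sensors' images overlap by a known pixel width, so
# combining trims the duplicated columns (a stand-in for real registration).
import numpy as np

def combine(images, overlap_px=32):
    """Concatenate sensor images left-to-right, dropping overlapped columns."""
    parts = [images[0]] + [img[:, overlap_px:] for img in images[1:]]
    return np.concatenate(parts, axis=1)

frames = [np.zeros((240, 320, 3), dtype=np.uint8) for _ in range(3)]
print(combine(frames).shape)   # (240, 896, 3): 320 + 2 * (320 - 32)
```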

286. (NEW) The thing/operation disclosure of clause 267, wherein said multiple image sensor based scene capturing module configured to capture a scene that includes one or more images, through use of more than one image sensor comprises:

a multiple image sensor based scene capturing module configured to capture a scene that includes one or more images, through use of more than one image sensor, wherein the scene is larger than a bandwidth capacity for scene transmission to a remote location.

287. (NEW) The thing/operation disclosure of clause 286, wherein said multiple image sensor based scene capturing module configured to capture a scene that includes one or more images, through use of more than one image sensor, wherein the scene is larger than a bandwidth capacity for scene transmission to a remote location comprises:

a multiple image sensor based scene capturing module configured to capture a scene that includes one or more images, through use of more than one image sensor, wherein the scene contains more image data than available bandwidth for image data transmission to a remote location.

288. (NEW) The thing/operation disclosure of clause 287, wherein said multiple image sensor based scene capturing module configured to capture a scene that includes one or more images, through use of more than one image sensor, wherein the scene contains more image data than available bandwidth for image data transmission to a remote location comprises:

a multiple image sensor based scene capturing module configured to capture a scene that includes one or more images, through use of more than one image sensor, wherein the scene contains more image data than available bandwidth for image data transmission to a remote location by a factor of ten.

289. (NEW) The thing/operation disclosure of clause 267, wherein said multiple image sensor based scene capturing module configured to capture a scene that includes one or more images, through use of more than one image sensor comprises:

a multiple image sensor based scene capturing module configured to capture a scene that includes one or more images of a tourist destination, through use of more than one image sensor.

290. (NEW) The thing/operation disclosure of clause 267, wherein said multiple image sensor based scene capturing module configured to capture a scene that includes one or more images, through use of more than one image sensor comprises:

a multiple image sensor based scene capturing module configured to capture a scene that includes one or more images of one or more cars on a highway bridge.

291. (NEW) The thing/operation disclosure of clause 267, wherein said multiple image sensor based scene capturing module configured to capture a scene that includes one or more images, through use of more than one image sensor comprises:

a multiple image sensor based scene capturing module configured to capture a scene that includes one or more images of a home interior.

292. (NEW) The thing/operation disclosure of clause 267, wherein said multiple image sensor based scene capturing module configured to capture a scene that includes one or more images, through use of more than one image sensor comprises:

a multiple grouped image sensor based scene capturing module configured to capture a scene that includes one or more images, through use of more than one image sensor.

293. (NEW) The thing/operation disclosure of clause 267, wherein said multiple image sensor based scene capturing module configured to capture a scene that includes one or more images, through use of more than one image sensor comprises:

a multiple grouped image sensor based scene capturing module configured to capture a scene that includes one or more images, through use of a group of image sensors whose data is directed to a common collector.

294. (NEW) The thing/operation disclosure of clause 267, wherein said multiple image sensor based scene capturing module configured to capture a scene that includes one or more images, through use of more than one image sensor comprises:

a multiple grouped image sensor based scene capturing module configured to capture a scene that includes one or more images, through use of a group of image sensors that include one or more of a charge-coupled device and a complementary metal-oxide-semiconductor device.

295. (NEW) The thing/operation disclosure of clause 267, wherein said multiple image sensor based scene capturing module configured to capture a scene that includes one or more images, through use of more than one image sensor comprises:

a multiple image and sound data based scene that includes the one or more images capturing through use of image and sound sensors module, wherein the array of more than one image sensor is configured to capture image data and sound data.

296. (NEW) The thing/operation disclosure of clause 295, wherein said multiple image and sound data based scene that includes the one or more images capturing through use of image and sound sensors module, wherein the array of more than one image sensor is configured to capture image data and sound data comprises:

a multiple image and sound data based scene that includes the one or more images capturing through use of image and sound sensors module, wherein the array of more than one image sensor is configured to capture image data and sound data through use of one or more image capture devices and microphone devices.

297. (NEW) The thing/operation disclosure of clause 267, wherein said multiple image sensor based scene capturing module configured to capture a scene that includes one or more images, through use of more than one image sensor comprises:

a multiple soundwave image sensor based scene capturing module configured to capture a scene that includes soundwave data, through use of more than one soundwave image sensor that captures soundwave data.

298. (NEW) The thing/operation disclosure of clause 267, wherein said multiple image sensor based scene capturing module configured to capture a scene that includes one or more images, through use of more than one image sensor comprises:

a video capture sensor based scene capturing module configured to capture a scene that includes video data, through use of more than one video capture sensor device.

299. (NEW) The thing/operation disclosure of clause 267, wherein said scene particular portion selecting module configured to select a particular portion of the scene that includes at least one image, wherein the selected particular portion is smaller than the scene comprises:

a scene particular portion that is smaller than the scene selecting module configured to select a particular portion of the scene that includes at least one image.

300. (NEW) The thing/operation disclosure of clause 299, wherein said scene particular portion that is smaller than the scene selecting module configured to select a particular portion of the scene that includes at least one image comprises:

a scene user-requested particular portion scene selecting module configured to select a particular portion of the scene that includes at least one image that was requested by a remote user.

301. (NEW) The thing/operation disclosure of clause 267, wherein said scene particular portion selecting module configured to select a particular portion of the scene that includes at least one image, wherein the selected particular portion is smaller than the scene comprises:

a scene user-requested particular portion scene selecting module configured to select a particular portion of the scene that includes at least one requested image that is smaller than the scene.

302. (NEW) The thing/operation disclosure of clause 301, wherein said scene user-requested particular portion scene selecting module configured to select a particular portion of the scene that includes at least one requested image that is smaller than the scene comprises:

a scene user-requested particular portion scene selecting module configured to select a particular portion of the scene that includes at least one requested image that was requested by a remote operator of the more than one image sensors.

303. (NEW) The thing/operation disclosure of clause 267, wherein said scene particular portion selecting module configured to select a particular portion of the scene that includes at least one image, wherein the selected particular portion is smaller than the scene comprises:

a particular image request receiving module; and

a received request for particular image selecting from scene module configured to select the particular image from the scene, wherein the particular image is smaller than the scene.

304. (NEW) The thing/operation disclosure of clause 267, wherein said scene particular portion selecting module configured to select a particular portion of the scene that includes at least one image, wherein the selected particular portion is smaller than the scene comprises:

a first request for a first particular image and second request for a second particular image receiving module; and

a scene particular portion that is first particular image and second particular image selecting module, wherein the particular portion of the scene is smaller than the scene.

305. (NEW) The thing/operation disclosure of clause 304, wherein said first request for a first particular image and second request for a second particular image receiving module comprises:

a first request for a first particular image and second request for a second particular image that does not overlap the first particular image receiving module, wherein the first particular image and the second particular image do not overlap.

306. (NEW) The thing/operation disclosure of clause 304, wherein said first request for a first particular image and second request for a second particular image receiving module comprises:

a first request for a first particular image and a second request for a second particular image receiving module, wherein the first particular image and the second particular image overlap at an overlapping image portion.

307. (NEW) The thing/operation disclosure of clause 306, wherein said first request for a first particular image and a second request for a second particular image receiving module, wherein the first particular image and the second particular image overlap at an overlapping image portion comprises:

a first request for a first particular image and a second request for a second particular image receiving module, wherein the first particular image and the second particular image overlap at the overlapping portion, and wherein the overlapping portion is configured to be transmitted once only.

308. (NEW) The thing/operation disclosure of clause 267, wherein said scene particular portion selecting module configured to select a particular portion of the scene that includes at least one image, wherein the selected particular portion is smaller than the scene comprises:

a scene particular portion selecting module configured to select a particular portion of the scene that includes at least one image, wherein the selected particular portion contains a particular image object.

309. (NEW) The thing/operation disclosure of clause 308, wherein said scene particular portion selecting module configured to select a particular portion of the scene that includes at least one image, wherein the selected particular portion contains a particular image object comprises:

a scene particular portion selecting module configured to select a particular portion of the scene that includes at least one image, wherein the selected particular portion contains a particular image object that is a person.

310. (NEW) The thing/operation disclosure of clause 308, wherein said scene particular portion selecting module configured to select a particular portion of the scene that includes at least one image, wherein the selected particular portion contains a particular image object comprises:

a scene particular portion selecting module configured to select a particular portion of the scene that includes at least one image, wherein the selected particular portion contains a particular image object that is a car.

311. (NEW) The thing/operation disclosure of clause 267, wherein said scene particular portion selecting module configured to select a particular portion of the scene that includes at least one image, wherein the selected particular portion is smaller than the scene comprises:

a scene particular portion that is smaller than the scene and is size-defined by a characteristic of a requesting device selecting module configured to select the particular portion of the scene that includes at least one image, wherein a size of the particular portion of the scene is at least partially based on a characteristic of a requesting device.

312. (NEW) The thing/operation disclosure of clause 311, wherein said scene particular portion that is smaller than the scene and is size-defined by a characteristic of a requesting device selecting module configured to select the particular portion of the scene that includes at least one image, wherein a size of the particular portion of the scene is at least partially based on a characteristic of a requesting device comprises:

a scene particular portion that is smaller than the scene and is size-defined by a screen resolution of a requesting device selecting module configured to select the particular portion of the scene that includes at least one image, wherein the size of the particular portion of the scene is at least partially based on a screen resolution of the requesting device.

313. (NEW) The thing/operation disclosure of clause 311, wherein said scene particular portion that is smaller than the scene and is size-defined by a characteristic of a requesting device selecting module configured to select the particular portion of the scene that includes at least one image, wherein a size of the particular portion of the scene is at least partially based on a characteristic of a requesting device comprises:

a scene particular portion that is smaller than the scene and is size-defined by a combined screen size of at least one requesting device selecting module configured to select the particular portion of the scene that includes at least one image, wherein the size of the particular portion of the scene is at least partially based on a combined size of screens of at least one requesting device.

314. (NEW) The thing/operation disclosure of clause 267, wherein said selected particular portion transmitting module configured to transmit the selected particular portion from the scene to a remote location comprises:

a selected particular portion transmitting module configured to transmit the selected particular portion from the scene to a remote server.

315. (NEW) The thing/operation disclosure of clause 314, wherein said selected particular portion transmitting module configured to transmit the selected particular portion from the scene to a remote server comprises:

a selected particular portion transmitting module configured to transmit the selected particular portion from the scene to the remote server that is configured to receive one or more requests for one or more particular images from the scene.

316. (NEW) The thing/operation disclosure of clause 315, wherein said selected particular portion transmitting module configured to transmit the selected particular portion from the scene to the remote server that is configured to receive one or more requests for one or more particular images from the scene comprises:

a selected particular portion transmitting module configured to transmit the selected particular portion from the scene to the remote server that is configured to receive multiple requests from discrete users for multiple particular images from the scene.

317. (NEW) The thing/operation disclosure of clause 314, wherein said selected particular portion transmitting module configured to transmit the selected particular portion from the scene to a remote server comprises:

a selected particular portion transmitting module configured to transmit the selected particular portion from the scene to the remote server that requested the selected particular portion from the image.

318. (NEW) The thing/operation disclosure of clause 267, wherein said selected particular portion transmitting module configured to transmit the selected particular portion from the scene to a remote location comprises:

a selected particular portion transmitting at a particular resolution module.

319. (NEW) The thing/operation disclosure of clause 318, wherein said selected particular portion transmitting at a particular resolution module comprises:

an available bandwidth to remote location determining module; and

a selected particular portion transmitting at a particular resolution based on determined available bandwidth module configured to transmit only the selected particular portion from the scene at the particular resolution that is at least partially based on the determined available bandwidth.

320. (NEW) The thing/operation disclosure of clause 318, wherein said selected particular portion transmitting at a particular resolution module comprises:

a selected particular portion transmitting at a particular resolution less than a scene resolution module configured to transmit only the selected particular portion from the scene at the particular resolution, wherein the particular resolution is less than a resolution at which the scene was captured.

321. (NEW) The thing/operation disclosure of clause 318, wherein said selected particular portion transmitting at a particular resolution module comprises:

a selected particular portion transmitting at a lower particular resolution module configured to transmit only the selected particular portion from the scene at the particular resolution, wherein the particular resolution is less than a resolution at which the selected particular portion was captured.

322. (NEW) The thing/operation disclosure of clause 267, wherein said selected particular portion transmitting module configured to transmit the selected particular portion from the scene to a remote location comprises:

a first segment of selected particular portion transmitting at a first resolution module configured to transmit a first segment of the selected particular portion to the remote location at the first resolution; and

a second segment of selected particular portion transmitting at a second resolution module configured to transmit a second segment of the selected particular portion to the remote location at a second resolution that is higher than the first resolution.

323. (NEW) The thing/operation disclosure of clause 322, wherein said first segment of selected particular portion transmitting at a first resolution module configured to transmit a first segment of the selected particular portion to the remote location at the first resolution comprises:

a first segment of selected particular portion transmitting at the first resolution module configured to transmit the first segment of the selected particular portion to the remote location at the first resolution, wherein the first segment of the selected particular portion surrounds the second segment of the selected particular portion.

324. (NEW) The thing/operation disclosure of clause 322, wherein said first segment of selected particular portion transmitting at a first resolution module configured to transmit a first segment of the selected particular portion to the remote location at the first resolution comprises:

a first segment of selected particular portion transmitting at the first resolution module configured to transmit the first segment of the selected particular portion to the remote location at the first resolution, wherein the first segment of the selected particular portion borders the second segment of the selected particular portion.

325. (NEW) The thing/operation disclosure of clause 322, wherein said first segment of selected particular portion transmitting at a first resolution module configured to transmit a first segment of the selected particular portion to the remote location at the first resolution comprises:

a first segment of selected particular portion transmitting at the first resolution module configured to transmit the first segment of the selected particular portion to the remote location at the first resolution, wherein the first segment of the selected particular portion is determined at least in part by a content of the selected particular portion.

326. (NEW) The thing/operation disclosure of clause 325, wherein said first segment of selected particular portion transmitting at the first resolution module configured to transmit the first segment of the selected particular portion to the remote location at the first resolution, wherein the first segment of the selected particular portion is determined at least in part by a content of the selected particular portion comprises:

a first segment of selected particular portion transmitting at the first resolution module configured to transmit the first segment of the selected particular portion to the remote location at the first resolution, wherein the first segment of the selected particular portion is an area that does not contain an item of interest.

327. (NEW) The thing/operation disclosure of clause 326, wherein said first segment of selected particular portion transmitting at the first resolution module configured to transmit the first segment of the selected particular portion to the remote location at the first resolution, wherein the first segment of the selected particular portion is an area that does not contain an item of interest comprises:

a first segment of selected particular portion transmitting at the first resolution module configured to transmit the first segment of the selected particular portion to the remote location at the first resolution, wherein the first segment of the selected particular portion is an area that does not contain a selected person of interest.

328. (NEW) The thing/operation disclosure of clause 326, wherein said first segment of selected particular portion transmitting at the first resolution module configured to transmit the first segment of the selected particular portion to the remote location at the first resolution, wherein the first segment of the selected particular portion is an area that does not contain an item of interest comprises:

a first segment of selected particular portion transmitting at the first resolution module configured to transmit the first segment of the selected particular portion to the remote location at the first resolution, wherein the first segment of the selected particular portion is an area that does not contain a selected item of interest.

329. (NEW) The thing/operation disclosure of clause 322, wherein said second segment of selected particular portion transmitting at a second resolution module configured to transmit a second segment of the selected particular portion to the remote location at a second resolution that is higher than the first resolution comprises:

330. (NEW) The thing/operation disclosure of clause 322, wherein said second segment of selected particular portion transmitting at a second resolution module configured to transmit a second segment of the selected particular portion to the remote location at a second resolution that is higher than the first resolution comprises:

a second segment that is a user-selected area of selected particular portion transmitting at the second resolution module, wherein the second segment of the selected particular portion is surrounded by the first segment of the selected particular portion, and wherein the second resolution is greater than the first resolution.

331. (NEW) The thing/operation disclosure of clause 322, wherein said second segment of selected particular portion transmitting at a second resolution module configured to transmit a second segment of the selected particular portion to the remote location at a second resolution that is higher than the first resolution comprises:

a second segment that is a user-selected area of selected particular portion transmitting at the second resolution module, wherein the second segment of the selected particular portion is determined to contain an identified object of interest.

332. (NEW) The thing/operation disclosure of clause 331, wherein said second segment that is a user-selected area of selected particular portion transmitting at the second resolution module, wherein the second segment of the selected particular portion is determined to contain an identified object of interest comprises:

a second segment that is a user-selected area of selected particular portion transmitting at the second resolution module, wherein the second segment of the selected particular portion is determined to contain a selected football player.

333. (NEW) The thing/operation disclosure of clause 331, wherein said second segment that is a user-selected area of selected particular portion transmitting at the second resolution module, wherein the second segment of the selected particular portion is determined to contain an identified object of interest comprises:

a second segment that is a user-selected area of selected particular portion transmitting at the second resolution module, wherein the second segment of the selected particular portion is determined to contain a selected landmark.

334. (NEW) The thing/operation disclosure of clause 331, wherein said second segment that is a user-selected area of selected particular portion transmitting at the second resolution module, wherein the second segment of the selected particular portion is determined to contain an identified object of interest comprises:

a second segment that is a user-selected area of selected particular portion transmitting at the second resolution module, wherein the second segment of the selected particular portion is determined to contain a selected animal for observation.

335. (NEW) The thing/operation disclosure of clause 331, wherein said second segment that is a user-selected area of selected particular portion transmitting at the second resolution module, wherein the second segment of the selected particular portion is determined to contain an identified object of interest comprises:

a second segment that is a user-selected area of selected particular portion transmitting at the second resolution module, wherein the second segment of the selected particular portion is determined to contain a selected object in a dwelling.

336. (NEW) The thing/operation disclosure of clause 267, wherein said selected particular portion transmitting module configured to transmit the selected particular portion from the scene to a remote location comprises:

a scene pixel nonselected nontransmitting module configured to transmit only pixels associated with the selected particular portion to the remote location.

337. (NEW) The thing/operation disclosure of clause 267, wherein said selected particular portion transmitting module configured to transmit the selected particular portion from the scene to a remote location comprises:

a scene pixel exclusive pixels selected from the particular portion transmitting to a particular thing/operation disclosure module configured to transmit only the selected particular portion from the scene to a particular thing/operation disclosure that requested the selected particular portion.

338. (NEW) The thing/operation disclosure of clause 337, wherein said scene pixel exclusive pixels selected from the particular portion transmitting to a particular thing/operation disclosure module configured to transmit only the selected particular portion from the scene to a particular thing/operation disclosure that requested the selected particular portion comprises:

a scene pixel exclusive pixels selected from the particular portion transmitting to the particular thing/operation disclosure that requested the particular portion module configured to transmit only the selected particular portion from the scene to the particular thing/operation disclosure operated by a user that requested the selected particular portion.

339. (NEW) The thing/operation disclosure of clause 337, wherein said scene pixel exclusive pixels selected from the particular portion transmitting to a particular thing/operation disclosure module configured to transmit only the selected particular portion from the scene to a particular thing/operation disclosure that requested the selected particular portion comprises:

a scene pixel exclusive pixels selected from the particular portion transmitting to a particular thing/operation disclosure user that requested the particular portion module configured to transmit only the selected particular portion from the scene to a particular thing/operation disclosure that requested the selected particular portion through selection of the particular portion from the scene.

340. (NEW) The thing/operation disclosure of clause 267, wherein said scene pixel de-emphasizing module configured to de-emphasize pixels from the scene that are not part of the selected particular portion of the scene comprises:

a scene nonselected pixel deleting module configured to delete pixels from the scene that are not part of the selected particular portion of the scene.

341. (NEW) The thing/operation disclosure of clause 267, wherein said scene pixel de-emphasizing module configured to de-emphasize pixels from the scene that are not part of the selected particular portion of the scene comprises:

a scene nonselected pixels indicating as nontransmitted module configured to indicate that pixels from the scene that are not part of the selected particular portion of the scene are not transmitted with the selected particular portion.

342. (NEW) The thing/operation disclosure of clause 341, wherein said scene nonselected pixels indicating as nontransmitted module configured to indicate that pixels from the scene that are not part of the selected particular portion of the scene are not transmitted with the selected particular portion comprises:

a scene nonselected pixels data appending module configured to append data to pixels from the scene that are not part of the selected particular portion of the scene, said appended data configured to indicate non-transmission of the pixels to which the data was appended.

343. (NEW) The thing/operation disclosure of clause 267, wherein said scene pixel de-emphasizing module configured to de-emphasize pixels from the scene that are not part of the selected particular portion of the scene comprises:

a scene nonselected pixels discarding module configured to discard pixels from the scene that are not part of the selected particular portion of the scene.

344. (NEW) The thing/operation disclosure of clause 267, wherein said scene pixel de-emphasizing module configured to de-emphasize pixels from the scene that are not part of the selected particular portion of the scene comprises:

a scene nonselected retention preventing module configured to prevent retention of pixels from the scene that are not part of the selected particular portion of the scene.

345. (NEW) The thing/operation disclosure of clause 267, wherein said scene pixel de-emphasizing module configured to de-emphasize pixels from the scene that are not part of the selected particular portion of the scene comprises:

a scene nonselected pixel sampling retaining module configured to retain a sampling of pixels from the scene that are not part of the selected particular portion of the scene.

346. (NEW) The thing/operation disclosure of clause 345, wherein said retaining a sampling of pixels from the scene that are not part of the selected particular portion of the scene comprises:

a scene pixel ten percent subset retaining module configured to retain ten percent of the pixels from the scene that are not part of the selected particular portion of the scene.

347. (NEW) The thing/operation disclosure of clause 345, wherein said retaining a sampling of pixels from the scene that are not part of the selected particular portion of the scene comprises:

a scene pixel targeted object subset retaining module configured to retain pixels from the scene that have been identified as part of a targeted object that are not part of the selected particular portion of the scene.

348. (NEW) The thing/operation disclosure of clause 347, wherein said scene pixel targeted object subset retaining module configured to retain pixels from the scene that have been identified as part of a targeted object that are not part of the selected particular portion of the scene comprises:

a scene pixel targeted scenic landmark object subset retaining module configured to retain pixels from the scene that have been identified as a scenic landmark of interest that is not part of the selected particular portion of the scene.

349. (NEW) The thing/operation disclosure of clause 345, wherein said retaining a sampling of pixels from the scene that are not part of the selected particular portion of the scene comprises:

a scene pixel targeted automation identified object subset retaining module configured to retain pixels from the scene that have been identified as part of a targeted object through automated pattern recognition that are not part of the selected particular portion of the scene.

350. (NEW) The thing/operation disclosure of clause 267, wherein said scene pixel de-emphasizing module configured to de-emphasize pixels from the scene that are not part of the selected particular portion of the scene comprises:

a scene pixel subset storing in separate storage module configured to store pixels from the scene that are not part of the selected particular portion of the scene in a separate storage.

351. (NEW) The thing/operation disclosure of clause 350, wherein said scene pixel subset storing in separate storage module configured to store pixels from the scene that are not part of the selected particular portion of the scene in a separate storage comprises:

a scene pixel subset storing in local separate storage module configured to store pixels from the scene that are not part of the selected particular portion of the scene in a separate storage that is local to the array of more than one image sensor.

352. (NEW) The thing/operation disclosure of clause 350, wherein said scene pixel subset storing in separate storage module configured to store pixels from the scene that are not part of the selected particular portion of the scene in a separate storage comprises:

a scene pixel subset storing in separate storage for separate transmission module configured to store pixels from the scene that are not part of the selected particular portion of the scene in a separate storage, wherein pixels stored in the separate storage are configured to be transmitted to the remote location separately from the selected particular portion of the scene.

353. (NEW) The thing/operation disclosure of clause 352, wherein said scene pixel subset storing in separate storage for separate transmission module configured to store pixels from the scene that are not part of the selected particular portion of the scene in a separate storage, wherein pixels stored in the separate storage are configured to be transmitted to the remote location separately from the selected particular portion of the scene comprises:

a scene pixel subset storing in separate storage for separate transmission at off-peak time module configured to store pixels from the scene that are not part of the selected particular portion of the scene in a separate storage, wherein pixels stored in the separate storage are configured to be transmitted to the remote location at a time when there are no selected particular portions to be transmitted.

354. (NEW) The thing/operation disclosure of clause 352, wherein said scene pixel subset storing in separate storage for separate transmission module configured to store pixels from the scene that are not part of the selected particular portion of the scene in a separate storage, wherein pixels stored in the separate storage are configured to be transmitted to the remote location separately from the selected particular portion of the scene comprises:

a scene pixel subset storing in separate storage for separate transmission as lower-priority data module configured to store pixels from the scene that are not part of the selected particular portion of the scene in a separate storage, wherein pixels stored in the separate storage are configured to be separately transmitted to the remote location based on assigning a lower priority to the stored pixels.

355. (NEW) A thing/operation disclosure defined by a computational language comprising:

one or more interchained physical machines ordered for multiple image sensor based scene capturing module configured to capture a scene that includes one or more images, through use of more than one image sensor;

one or more interchained physical machines ordered for scene particular portion selecting module configured to select a particular portion of the scene that includes at least one image, wherein the selected particular portion is smaller than the scene;

one or more interchained physical machines ordered for selected particular portion transmitting module configured to transmit the selected particular portion from the scene to a remote location; and

one or more interchained physical machines ordered for scene pixel de-emphasizing module configured to de-emphasize pixels from the scene that are not part of the selected particular portion of the scene.

356. (NEW) A thing/operation disclosure, comprising:

one or more general purpose integrated circuits configured to receive instructions to configure as a multiple image sensor based scene capturing module configured to capture a scene that includes one or more images, through use of more than one image sensor at one or more first particular times;

one or more general purpose integrated circuits configured to receive instructions to configure as a scene particular portion selecting module configured to select a particular portion of the scene that includes at least one image, wherein the selected particular portion is smaller than the scene at one or more second particular times;

one or more general purpose integrated circuits configured to receive instructions to configure as a selected particular portion transmitting module configured to transmit the selected particular portion from the scene to a remote location at one or more third particular times; and

one or more general purpose integrated circuits configured to receive instructions to configure as a scene pixel de-emphasizing module configured to de-emphasize pixels from the scene that are not part of the selected particular portion of the scene at one or more fourth particular times.

357. (NEW) The thing/operation disclosure of clause 356, wherein said one or more second particular times occur prior to the one or more third particular times and one or more fourth particular times and after the one or more first particular times.

358. (NEW) A thing/operation disclosure comprising:

an integrated circuit configured to purpose itself as a multiple image sensor based scene capturing module configured to capture a scene that includes one or more images, through use of more than one image sensor at a first time;

the integrated circuit configured to purpose itself as a scene particular portion selecting module configured to select a particular portion of the scene that includes at least one image, wherein the selected particular portion is smaller than the scene at a second time;

the integrated circuit configured to purpose itself as a selected particular portion transmitting module configured to transmit the selected particular portion from the scene to a remote location at a third time; and

the integrated circuit configured to purpose itself as a scene pixel de-emphasizing module configured to de-emphasize pixels from the scene that are not part of the selected particular portion of the scene at a fourth time.

359. (NEW) A thing/operation disclosure, comprising:

one or more elements of programmable hardware programmed to function as a multiple image sensor based scene capturing module configured to capture a scene that includes one or more images, through use of more than one image sensor;

the one or more elements of programmable hardware programmed to function as a scene particular portion selecting module configured to select a particular portion of the scene that includes at least one image, wherein the selected particular portion is smaller than the scene;

the one or more elements of programmable hardware programmed to function as a selected particular portion transmitting module configured to transmit the selected particular portion from the scene to a remote location; and

the one or more elements of programmable hardware programmed to function as a scene pixel de-emphasizing module configured to de-emphasize pixels from the scene that are not part of the selected particular portion of the scene.

STRUCTURED DISCLOSURE DIRECTED TOWARD ONE OF SKILL IN THE ART [END]

— This Roman Numeral Section, And the Corresponding Figures, Were Copied From a Pending United States Application into this PCT Application in View of Meeting Bar Date, So Each Figure Under this Roman Numeral Section Should Be Read "As If Having Roman Numeral Denotation Sufficient to Distinguish from Other Figures Copied From Other Applications in View of Bar Date," Such Clerical Issues to Be Cured by Subsequent Amendment.

Devices, Methods and Systems for Visual Imaging Arrays

DETAILED DESCRIPTION

High-Level System Architecture

[00151] Fig. 1, including Figs. 1-A through 1-AN, shows partial views that, when assembled, form a complete view of an entire system, of which at least a portion will be described in more detail. An overview of the entire system of Fig. 1 is now described herein, with a more specific reference to at least one subsystem of Fig. 1 to be described later with respect to Figs. 2-14D.

[00152] Fig. 1 shows various implementations of the overall system. At a high level, Fig. 1 shows various implementations of a multiple user video imaging array (hereinafter interchangeably referred to as a "MUVIA"). It is noted that the designation "MUVIA" is merely shorthand and descriptive of an exemplary embodiment, and not a limiting term. Although "multiple user" appears in the name MUVIA, neither multiple users nor even a single user is required. Further, "video" is used in the designation "MUVIA," but MUVIA systems also may capture still images, multiple images, audio data, electromagnetic waves outside the visible spectrum, and other data as will be described herein. Further, "imaging array" may be used in the MUVIA designation, but the image sensor in MUVIA is not necessarily an array or even multiple sensors (although commonly implemented as larger groups of image sensors, single-sensor implementations are also contemplated), and "array" here does not necessarily imply any specific structure, but rather any grouping of one or more sensors.

[00153] Generally, although not necessarily required, a MUVIA system may include one or more of a user device (e.g., hereinafter interchangeably referred to as a "client device," in recognition that a user may not necessarily be a human, living, or organic), a server, and an image sensor array. A "server" in the context of this application may refer to any device, program, or module that is not directly connected to the image sensor array or to the client device, including any and all "cloud" storage, applications, and/or processing.

[00154] For example, in an embodiment, e.g., as shown in Fig. 1-A, Fig. 1-K, Fig. 1-U, Fig. 1-AE, and Fig. 1-AF, the system may include one or more of image sensor array 3200, array local storage and processing module 3300, server 4000, and user device 5200. Each of these portions will be discussed in more detail herein.

[00155] Referring now to Fig. 1-A, Fig. 1-A depicts user device 5200, which is a device that may be operated or controlled by a user of a MUVIA system. It is noted here that "user" is merely provided as a designation for ease of understanding, and does not imply control by a human or other organism, sentient or otherwise. In an embodiment, for example, in a security-type embodiment, the user device 5200 may be mostly or completely unmonitored, or may be monitored by an artificial intelligence, or by a combination of artificial intelligence, pseudo-artificial intelligence (e.g., intelligence amplification), and human intelligence.

[00156] User device 5200 may be, but is not limited to, a wearable device (e.g., glasses, goggles, headgear, a watch, clothing), an implant (e.g., a retinal-implant display), a computer of any kind (e.g., a laptop computer, desktop computer, mainframe, server, etc.), a tablet or other portable device, a phone or other similar device (e.g., smartphone, personal digital assistant), a personal electronic device (e.g., music player, CD player), a home appliance (e.g., a television, a refrigerator, or any other so-called "smart" device), a piece of office equipment (e.g., a copier, scanner, fax device, etc.), a camera or other camera-like device, a video game system, an entertainment/media center, or any other electrical equipment that has a functionality of presenting an image (whether visually, e.g., via a screen, or through other sensory-stimulating means).

[00157] User device 5200 may be capable of presenting an image, which, for purposes of clarity and conciseness, will be referred to as displaying an image, although the presentation may occur through forms other than generating light waves in the visible spectrum, and the image is not required to be presented at all times or even at all. For example, in an embodiment, user device 5200 may receive images from server 4000 (or directly from the image sensor array 3200, as will be discussed herein), and may store the images for later viewing, or for processing internally, or for any other reason.

[00158] Referring again to Fig. 1-A, in an embodiment, user device 5200 may include a user selection accepting module 5210. User selection accepting module 5210 may be configured to receive user input about what the user wants to see. For example, as shown in Fig. 1-A in the exemplary interface 5212, the user selection accepting module 5210 may show an image from image sensor array 3200, and the user may "pan" and "zoom" the image using any known interface, including, but not limited to, keyboard, mouse, touch, haptic, augmented reality interface, voice command, and nonverbal motion commands (e.g., as part of a video game system interface, e.g., the Microsoft Kinect). It is noted, and as will be discussed in more detail herein, that the camera itself is not "zooming" or "panning," because the camera does not move. What is happening is that different pixels that are captured by the image sensor array 3200 are kept by the image sensor array 3200 and transmitted to the server 4000.
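
For illustration only (this sketch is not part of the disclosure), the following Python/NumPy fragment models the pixel-selection behavior described above; the scene dimensions, the select_view name, and the HD-sized window are hypothetical choices.

```python
import numpy as np

# Hypothetical composite frame from image sensor array 3200; a modest
# stand-in size is used so the sketch runs quickly.
SCENE_H, SCENE_W = 3000, 4000
scene = np.zeros((SCENE_H, SCENE_W, 3), dtype=np.uint8)

def select_view(scene, center_x, center_y, view_w, view_h):
    """Return the pixel window for a virtual pan/zoom request.

    Panning moves (center_x, center_y); zooming changes (view_w, view_h).
    No optics move: different captured pixels are simply kept and sent.
    """
    x0 = max(0, min(scene.shape[1] - view_w, center_x - view_w // 2))
    y0 = max(0, min(scene.shape[0] - view_h, center_y - view_h // 2))
    return scene[y0:y0 + view_h, x0:x0 + view_w]

# "Zoom" to an HD-sized window centered in the scene.
view = select_view(scene, SCENE_W // 2, SCENE_H // 2, 1920, 1080)
print(view.shape)  # (1080, 1920, 3)
```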

[00159] In an embodiment, the user selection accepting module may accept a selection of a particular thing, e.g., a building, an animal, or any other object whose representation is present on the screen. Moreover, a user may use a text box to "search" the image for a particular thing, and processing, done at the user device 5200 or at the server 4000, may determine the image and the zoom level for viewing that thing. The search for a particular thing may include a generic search, e.g., "search for people," or "search for penguins," or a more specific search, e.g., "search for the Space Needle," or "search for the White House." The search for a particular thing may take on any known contextual search, e.g., an address, a text string, or any other input.

[00160] In an embodiment, the "user selection" facilitated by the user selection accepting module 5210 may not involve a user at all. For example, in an embodiment, e.g., in a security embodiment, the user selection may be handled completely by machine, and may include "select any portion of the image with movement," or "select any portion of the image in which a person is recognized," or "select any portion of the image in which a particular person, e.g., a person on the FBI most wanted list, is recognized."

[00161] Referring again to Fig. 1-A, in an embodiment, user device 5200 may include a user selection transmitting module 5220. The user selection transmitting module 5220 may take the user selection from user selection accepting module 5210, and transmit the selection to the server 4000. The transmission may include some pre-processing; for example, the user device 5200 may determine the size and parameters of the image prior to sending the request to the server 4000, or that processing may be handled by the server 4000. Following the thick-line arrow leftward from user selection transmitting module 5220 through to Fig. 1-K, the transmission goes to server 4000, as will be discussed herein. It is noted that the transmission to the server 4000 may also include data about the user device, for example, the screen resolution, the window size, the type of device, an identity of the user, a level of service the user has paid for (in embodiments in which such services are prioritized by the camera/server), other capabilities of the device, e.g., framerate, and the like.

[00162] Referring again to Fig. 1-A, Fig. 1-A also includes a selected image receiving module 5230 and a user selection presenting module 5240, which will be discussed in more detail herein, with respect to the dataflow of this embodiment.

[00163] Referring now to Fig. 1-K, Figs. 1-K and 1-U show an embodiment of a server 4000 that communicates with one or both of user device 5200 and array local storage and processing module 3300. Server 4000 may be a single computing device, or may be many computing devices, which may or may not be in proximity with each other.

[00164] Referring again to Fig. 1-K, server 4000 may include a user request reception module 4010. The user request reception module 4010 may receive the transmitted request from user selection transmitting module 5220. The user request reception module 4010 may then turn over processing to user request validation module 4020, which may perform, among other things, a check to make sure the user is not requesting more resolution than what their device can handle. For example, if the server has learned (e.g., through gathered information, or through information that was transmitted with the user request or in a same session as the user request) that the user is requesting a 1920x1080 resolution image, and the maximum resolution for the device is 1334x750, then the request will be modified so that no more than the maximum resolution that can be handled by the device is requested. In an embodiment, this may conserve the bandwidth required to transmit from the MUVIA to the server 4000 and/or the user device 5200.
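
A minimal sketch, assuming the server already knows both the requested size and the device maximum, of the resolution check just described; the clamp_request function and its aspect-ratio-preserving behavior are illustrative assumptions rather than the actual module 4020 logic.

```python
def clamp_request(requested_w, requested_h, device_max_w, device_max_h):
    """Cap a requested image size at the device's maximum resolution,
    preserving aspect ratio, so no more pixels are requested than the
    device can display."""
    scale = min(device_max_w / requested_w, device_max_h / requested_h, 1.0)
    return int(requested_w * scale), int(requested_h * scale)

# The example from the text: a 1920x1080 request from a 1334x750 device.
print(clamp_request(1920, 1080, 1334, 750))  # (1333, 750)
```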

[00165] Referring again to Fig. 1-K, in an embodiment, server 4000 may include a user request latency management module 4030. User request latency management module 4030 may, in conjunction with user device 5200, attempt to reduce the latency from the time a specific image is requested by user device 5200 to the time the request is acted upon and data is transmitted to the user. The details for this latency management will be described in more detail herein, with varying techniques that may be carried out by any or all of the devices in the chain (e.g., user device, camera array, and server). As an example, in an embodiment, a lower resolution version of the image, e.g., that is stored locally or on the server, may be sent to the user immediately upon the request, and then that image is updated with the actual image taken by the camera. In an embodiment, user request latency management module 4030 also may handle static gap-filling; that is, if the image captured by the camera is unchanging, e.g., has not changed for a particular period of time, then a new image does not need to be captured, and an older image, that may be stored on server 4000, may be transmitted to the user device 5200. This process also will be discussed in more detail herein.
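
The following sketch suggests, under stated assumptions, how the two latency techniques above might fit together; the LatencyManager class, its cache layout, and the staleness threshold are hypothetical, not the actual module 4030 design.

```python
import time

class LatencyManager:
    """Sketch of the two techniques described above: answer a request
    immediately with a cached lower-resolution image, and skip a fresh
    capture when the scene is static (gap-filling)."""

    def __init__(self, static_threshold_s=5.0):
        self.cache = {}  # view_id -> (timestamp, image)
        self.static_threshold_s = static_threshold_s

    def respond(self, view_id, capture_fn, scene_changed):
        now = time.time()
        cached = self.cache.get(view_id)
        preview = cached[1] if cached else None  # sent to the user at once
        # Static gap-filling: an unchanged scene needs no new capture,
        # so the stored image is simply re-served.
        if cached and not scene_changed and now - cached[0] < self.static_threshold_s:
            return preview, cached[1]
        fresh = capture_fn()  # request the actual pixels from the array
        self.cache[view_id] = (now, fresh)
        return preview, fresh

mgr = LatencyManager()
preview, final = mgr.respond("view-1", capture_fn=lambda: "frame-0001",
                             scene_changed=True)
print(preview, final)  # None frame-0001 (no cache yet, so a fresh capture)
```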

[00166] Referring now to Fig. 1-U, which shows more of server 4000, in an embodiment, server 4000 may include a consolidated user request transmission module 4040, which may be configured to consolidate all the user requests, perform any necessary pre-processing on those requests, and send the request for particular sets of pixels to the array local storage and processing module 3300. The process for consolidating the user requests and performing pre-processing will be described in more detail herein with respect to some of the other exemplary embodiments. In this embodiment, however, consolidated user request transmission module 4040 transmits the request (exiting leftward from Fig. 1-U and traveling downward to Fig. 1-AE) through a pathway identified in Fig. 1-AE as lower-bandwidth communication from remote server 3515. It is noted here that "lower bandwidth communication" does not necessarily mean "low bandwidth" or imply any specific number about the bandwidth; it is simply lower than the relatively higher bandwidth communication from the actual image sensor array 3505 to the array local storage and processing module 3300, which will be discussed in more detail herein.

[00167] Referring again to Fig. 1-U, server 4000 also may include requested pixel reception module 4050, user request preparation module 4060, and user request transmission module 4070 (shown in Fig. 1-T), which will be discussed in more detail herein, with respect to the dataflow of this embodiment.

[00168] Referring now to Figs. 1-AE and 1-AF, Figs. 1-AE and 1-AF show an image sensor array ("ISA") 3200 and an array local storage and processing module 3300, each of which will now be described in more detail.

[00169] Image sensor array 3200 may include one or more image sensors that may, in an embodiment, be statically pointed at a particular object or scene. Image sensor array 3200 may be a single image sensor, or more commonly, may be a group of individual image sensors 3201 that are combined to create a larger field of view. For example, in an embodiment, ten megapixel sensors may be used for each individual image sensor 3201. With twelve of these sensors, the effective field of view, loss-less zoom, and so forth may be increased substantially. These numbers are for example only, and any number of sensors and/or megapixel image sensor capacities may be used.

[00170] The use of many individual sensors may create a very large number of pixels captured for each exposure of the image sensor array 3200. Thus, these pixels are transmitted via a higher bandwidth communication 3505 to the array local storage and processing module 3300. In an embodiment, array local storage and processing module 3300 is integrated into the image sensor array 3200. In another embodiment, the array local storage and processing module 3300 is separate from, but directly connected to (e.g., via a USB 3.0 cable), the image sensor array 3200. It is noted that "higher bandwidth communication 3505" does not require a specific amount of bandwidth, but only that the bandwidth for this communication is relatively higher than the bandwidth communication from the array local processing module 3300 to the remote server, which may be, but is not required to be, located further away temporally.
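
Writing out the arithmetic implied by the example above (with illustrative assumptions of 24-bit color and 30 frames per second, neither of which is given in the text) shows why the link from the array to its local processing module is the relatively higher-bandwidth one:

```python
SENSOR_MEGAPIXELS = 10    # each individual image sensor 3201, per the example
SENSOR_COUNT = 12         # sensors combined in the array, per the example
BYTES_PER_PIXEL = 3       # 24-bit color, an illustrative assumption
FRAMES_PER_SECOND = 30    # illustrative assumption

pixels_per_exposure = SENSOR_MEGAPIXELS * 1_000_000 * SENSOR_COUNT
raw_bytes_per_second = pixels_per_exposure * BYTES_PER_PIXEL * FRAMES_PER_SECOND
print(f"{pixels_per_exposure / 1e6:.0f} MP per exposure")     # 120 MP
print(f"{raw_bytes_per_second / 1e9:.1f} GB/s uncompressed")  # ~10.8 GB/s
```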

[00171] It is noted that, because of the large number of pixels captured by image sensor array 3200, mechanical changes to the image sensor array are not generally required, although such mechanical changes are not excluded from these embodiments. For example, because the array has a very large field of view, with very high resolution, "pan" and "zoom" functions may be handled electronically (e.g., by selecting different captured pixels), rather than by mechanically changing the focal point of the lenses or by physically pointing the array at a different location. This may reduce the complexity required of the device, and also may improve the speed at which different views may be generated by the image sensor array 3200.

[00172] Referring again to Fig. 1-AE, the image sensor array 3200 may capture an image that is received by image capturing module 3305. Image capturing module 3305 may take the captured image and compare it to a consolidated user request, e.g., which is provided by a consolidated user request reception module 3310. Consolidated user request reception module 3310 may receive the communication from server 4000 regarding which pixels of the image have been requested. Through use of the consolidated user request and the captured image, pixel selection module 3320 may select the pixels that have been specifically requested by the user, and mark those pixels for transmission back to the server.

[00173] After the pixels to be kept are identified, the other pixels that are not to be kept are removed, e.g., decimated at unused pixel decimation module 3330. In an embodiment, these pixels are simply discarded, e.g., not stored in a long-term memory but rather removed to a digital trash 3317. In another embodiment, some or all of these pixels are stored in a local memory, e.g., local memory 3315. From here, these pixels may be transmitted to various locations at off-peak times, may be kept for image processing by the array local processing module 3300, or may be subject to other manipulations or processing separate from the user requests.
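
A hedged sketch of the keep/decimate split described above; the decimate function, the boolean-mask representation of the consolidated request, and the optional sampling rate are assumptions for illustration, not the actual module 3320/3330 implementation.

```python
import numpy as np

def decimate(scene, keep_mask, keep_sample_rate=0.0):
    """Split a captured frame into requested pixels (to transmit) and
    unrequested pixels (to discard, or to sample into local memory).

    keep_mask marks the consolidated user request; keep_sample_rate
    optionally retains a fraction of the rest, as the local-memory
    variant above describes.
    """
    selected = np.where(keep_mask[..., None], scene, 0)  # flagged for transmission
    if keep_sample_rate > 0.0:
        rng = np.random.default_rng()
        sampled = ~keep_mask & (rng.random(keep_mask.shape) < keep_sample_rate)
        local_store = np.where(sampled[..., None], scene, 0)
    else:
        local_store = None  # unrequested pixels go straight to digital trash
    return selected, local_store

scene = np.zeros((1080, 1920, 3), dtype=np.uint8)
mask = np.zeros((1080, 1920), dtype=bool)
mask[400:700, 800:1200] = True  # the requested window
to_send, kept_locally = decimate(scene, mask, keep_sample_rate=0.1)
```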

[00174] Referring again to Fig. 1-AE, the selected pixels then may be transmitted to the server 4000 using selected pixel transmission module 3340. Selected pixel transmission module 3340 may include any transmission equipment necessary, e.g., cellular radio, wireless adapter, and the like, depending on the format of communication. In an embodiment, only those pixels which have been requested are transmitted to the server via lower-bandwidth communication 3510. Similarly to lower-bandwidth communication 3515, the lower-bandwidth communication 3510 does not refer to a specific amount of bandwidth, just that the amount of bandwidth is relatively lower than higher-bandwidth communication 3505.

[00175] It is noted that more pixels than what are specifically requested by the user may be transmitted, in certain embodiments. For example, the array local processing module 3300 may send pixels that border the user's requested area, but are outside the user's requested area. In an embodiment, as will be discussed herein, those pixels may be sent at a different resolution or using a different kind of compression. In another embodiment, the additional pixels may merely be sent the same as the requested pixels. In still another embodiment, server 4000 may expand the user requested areas, so that array local processing module 3300 may send only the requested pixels, but the requested pixels cover more area than what the user originally requested. These additional pixels may be transmitted and "cached" by the server or local device, which may be used to decrease latency times, in a process that will be discussed more herein.
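
One possible reading of the area-expansion variant, sketched with a hypothetical expand_request helper and an assumed 25% margin; the actual expansion policy is not specified in the text.

```python
def expand_request(x0, y0, w, h, scene_w, scene_h, margin=0.25):
    """Grow a requested region by a margin so bordering pixels can be
    sent along and cached, reducing latency on the next small pan."""
    dx, dy = int(w * margin), int(h * margin)
    ex0, ey0 = max(0, x0 - dx), max(0, y0 - dy)
    ex1 = min(scene_w, x0 + w + dx)
    ey1 = min(scene_h, y0 + h + dy)
    return ex0, ey0, ex1 - ex0, ey1 - ey0

print(expand_request(800, 400, 400, 300, 4000, 3000))  # (700, 325, 600, 450)
```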

[00176] Referring back to Fig. 1-U, the transmitted pixels transmitted from selected pixel transmission module 3340 of array local processing module 3300 may be received by server 4000, e.g., at requested pixel reception module 4050. Requested pixel reception module 4050 may receive the requested pixels and turn them over to user request preparation module 4060, which may "unpack" the requested pixels, e.g., determining which pixels go to which user, and at what resolutions, along with any postprocessing, including image adjustment, adding in missing cached data, or adding additional data to the images (e.g., advertisements or other data). In an embodiment, server 4000 also may include a user request transmission module 4070, which may be configured to transmit the requested pixels back to the user device 5200.

[00177] Referring again to Fig. 1-A, user device 5200 may include a selected image receiving module 5230, which may receive the pixels that were sent by the server 4000, and user selection presenting module 5240, which may display the requested pixels to the user, e.g., by showing them on a screen of the device. In an embodiment, the display of the image may be carried out through the exemplary interface, which allows a cycle of user requests and new images to be shown as the user navigates through what is seen on the MUVIA, e.g., as shown in Fig. 1-A.

[00178] Figs. 1-B, 1-C, 1-M, 1-W, 1-AG, and 1-AH show another embodiment of the MUVIA system, in which multiple user devices 5510, 5520, and 5530 may request images captured by the same image sensor array 3200.

[00179] Referring now to Figs. 1-B and 1-C, user device 5510, user device 5520, and user device 5530 are shown. In an embodiment, user devices 5510, 5520, and 5530 may have some or all of the same components as user device 5200, but these are not shown here for clarity and ease of understanding the drawing. For each of user devices 5510, 5520, and 5530, exemplary screen resolutions have been chosen. There is nothing significant about the particular numbers chosen; they are merely illustrative, and any other numbers could have been chosen in their place.

[00180] For example, in an embodiment, referring to Fig. 1-B, user device 5510 may have a screen resolution of 1920x1080 (e.g., colloquially referred to as "HD quality"). User device 5510 may send an image request to the server 4000, and may also send data regarding the screen resolution of the device.

[00181] Referring now to Fig. 1-C, user device 5520 may have a screen resolution of 1334x750. User device 5520 may send another image request to the server 4000, and, in an embodiment, instead of sending data regarding the screen resolution of the device, may send data that identifies what kind of device it is (e.g., an Apple-branded smartphone). Server 4000 may use this data to determine the screen resolution for user device 5520 through an internal database, or through contacting an external source, e.g., a manufacturer of the device or a third party supplier of data about devices.

[00182] Referring again to Fig. 1-C, user device 5530 may have a screen resolution of 640x480, and may send the request by itself to the server 4000, without any additional data. In addition, server 4000 may receive independent requests from various users to change their current viewing area on the device.

[00183] Referring now to Fig. 1-M, server 4000 may include user request reception module 4110. User request reception module 4110 may receive requests from multiple user devices, e.g., user devices 5510, 5520, and 5530. Server 4000 also may include an independent user view change request reception module 4115, which, in an embodiment, may be a part of user request reception module 4110, and may be configured to receive requests from users that are already connected to the system, to change the view of what they are currently seeing.

[00184] Referring again to Fig. 1-M, server 4000 may include relevant pixel selection module 4120 configured to combine the user selections into a single area, as shown in Fig. 1-M. It is noted that, in an embodiment, the different user devices may request areas that overlap each other. In this case, there may be one or more overlapping areas, e.g., overlapping areas 4122. In an embodiment, the overlapping areas are only transmitted once, in order to save data/transmission costs and increase efficiency.
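
A small sketch of the consolidation just described, under the assumption that user selections are axis-aligned rectangles; the consolidate helper and the boolean-mask representation are illustrative, but the printout demonstrates the stated property that overlapping areas (e.g., overlapping areas 4122) are requested from the array only once.

```python
import numpy as np

def consolidate(requests, scene_w, scene_h):
    """Combine per-user rectangle requests into one boolean mask, so a
    pixel inside several users' views is requested only once."""
    mask = np.zeros((scene_h, scene_w), dtype=bool)
    for (x0, y0, w, h) in requests:
        mask[y0:y0 + h, x0:x0 + w] = True
    return mask

# Hypothetical requests echoing the three example devices' resolutions.
requests = [(0, 0, 1920, 1080), (1500, 800, 1334, 750), (1800, 900, 640, 480)]
mask = consolidate(requests, scene_w=4000, scene_h=3000)
individual = sum(w * h for (_, _, w, h) in requests)
print(f"pixels if sent per-user:   {individual}")
print(f"pixels actually requested: {int(mask.sum())}")  # smaller: overlap sent once
```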

[00185] Referring now to Fig. 1-W, server 4000 may include selected pixel transmission to ISA module 4130. Module 4130 may take the relevant selected pixels, and transmit them to the array local processing module 3400 of image sensor array 3200. Selected pixel transmission to ISA module 4130 may include communication components, which may be shared with other transmission and/or reception modules.

[00186] Referring now to Fig. 1-AG, array local processing module 3400 may communicate with image sensor array 3200. Similarly to Figs. 1-AE and 1-AF, Figs. 1-AG and 1-AH show array local processing module 3400 and image sensor array 3200, respectively.

[00187] Image sensor array 3200 may include one or more image sensors that may, in an embodiment, be statically pointed at a particular object or scene. Image sensor array 3200 may be a single image sensor, or more commonly, may be a group of individual image sensors 3201 that are combined to create a larger field of view. For example, in an embodiment, ten megapixel sensors may be used for each individual image sensor 3201. With twelve of these sensors, the effective field of view, loss-less zoom, and so forth may be increased substantially. These numbers are for example only, and any number of sensors and/or megapixel image sensor capacities may be used.

[00188] The use of many individual sensors may create a very large number of pixels captured for each exposure of the image sensor array 3200. Thus, these pixels are transmitted via a higher bandwidth communication 3505 to the array local storage and processing module 3400. In an embodiment, array local storage and processing module 3400 is integrated into the image sensor array. In another embodiment, the array local storage and processing module 3400 is separate from, but directly connected to (e.g., via a USB 3.0 cable), the image sensor array 3200. It is noted that "higher bandwidth communication 3505" does not require a specific amount of bandwidth, but only that the bandwidth for this communication is relatively higher than the bandwidth communication from the array local processing module 3400 to the remote server, which may be, but is not required to be, located further away temporally.

[00189] It is noted that, because of the large number of pixels captured by image sensor array 3200, mechanical changes to the image sensor array are not generally required, although such mechanical changes are not excluded from these embodiments. For example, because the array has a very large field of view, with very high resolution, "pan" and "zoom" functions may be handled electronically (e.g., by selecting different captured pixels), rather than by mechanically changing the focal point of the lenses or by physically pointing the array at a different location. This may reduce the complexity required of the device, and also may improve the speed at which different views may be generated by the image sensor array 3200.

[00190] Referring again to Fig. 1-AG, the image sensor array 3200 may capture an image and compare it to a consolidated user request, e.g., which is provided by a consolidated user request reception module 3410. Consolidated user request reception module 3410 may receive the communication from server 4000 regarding which pixels of the image have been requested. Through use of the consolidated user request and the captured image, pixel selection module 3420 may select the pixels that have been specifically requested by the user, and mark those pixels for transmission back to the server.

[00191] After the pixels to be kept are identified, the other pixels that are not to be kept are removed, e.g., decimated at unused pixel decimation module 3430. In an embodiment, these pixels are simply discarded, e.g., not stored in a long-term memory but rather removed to a digital trash 3417. In another embodiment, some or all of these pixels are stored in a local memory, e.g., local memory 3415. From here, these pixels may be transmitted to various locations at off-peak times, may be kept for image processing by the array local processing module 3400, or may be subject to other manipulations or processing separate from the user requests.

[00192] Referring again to Fig. 1-AG, array local processing module 3400 may include flagged selected pixel transmission module 3440, which takes the pixels identified as requested (e.g., "flagged") and transmits them back to the server 4000 for further processing. As previously described, this transmission may utilize a lower-bandwidth channel, and module 3440 may include all necessary hardware to effect that lower-bandwidth transmission to server 4000.

[00193] Referring again to Fig. 1-W, the flagged selected pixel transmission module 3440 of array local processing module 3400 may transmit the flagged pixels to server 4000. Specifically, flagged selected pixel transmission module 3440 may transmit the pixels to flagged selected pixel reception from ISA module 4140 of server 4000, as shown in Fig. 1-W.

[00194] Referring again to Fig. 1-W, server 4000 also may include flagged selected pixel separation and duplication module 4150, which may, effectively, reverse the process of combining the pixels from the various selections, duplicating overlapping areas where necessary, and creating the requested images for each of the user devices that requested images. Flagged selected pixel separation and duplication module 4150 also may include the post-processing done to the image, including filling in cached versions of images, image adjustments based on the device preferences and/or the user preferences, and any other image post-processing.
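
Conversely, the separation-and-duplication step might look like the following sketch, assuming the combined transmission is first unpacked into a scene-sized buffer; the separate helper is hypothetical and omits the post-processing the text describes.

```python
import numpy as np

def separate(combined, requests):
    """Reverse of consolidation: crop each user's rectangle back out of
    the combined pixel buffer, duplicating overlapping areas where two
    users requested the same region."""
    return [combined[y0:y0 + h, x0:x0 + w].copy() for (x0, y0, w, h) in requests]

combined = np.zeros((3000, 4000, 3), dtype=np.uint8)  # buffer of flagged pixels
views = separate(combined, [(0, 0, 1920, 1080), (1500, 800, 1334, 750)])
print([v.shape for v in views])  # [(1080, 1920, 3), (750, 1334, 3)]
```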

[00195] Referring now to Fig. 1-M (as data flows "northward" from Fig. 1-W from module 4150), server 4000 may include pixel transmission to user device module 4160, which may be configured to transmit the pixels that have been separated out and processed to the specific users that requested the image. Pixel transmission to user device module 4160 may handle the transmission of images to the user devices 5510, 5520, and 5530. In an embodiment, pixel transmission to user device module 4160 may have some or all components in common with user request reception module 4110.

[00196] Following the arrow of data flow to the right and upward from module 4160 of server 4000, the requested user images arrive at user device 5510, user device 5520, and user device 5530, as shown in Figs. 1-B and 1-C. The user devices 5510, 5520, and 5530 may present the received images as previously discussed and/or as further discussed herein.

[00197] Referring again to Fig. 1, Figs. 1-E, 1-O, 1-Y, 1-AH, and 1-AI depict a MUVIA implementation according to an embodiment. In an embodiment, referring now to Fig. 1-E, a user device 5600 may include a target selection reception module 5610. Target selection reception module 5610 may be a component that allows the user to select a "target" from the image, that is, a point of interest from the image. For example, in the shown example, the MUVIA array is pointed at a football stadium, e.g., CenturyLink Field. As an example, a user may select one of the football players visible on the field as a "target." This may be facilitated by a target presentation module, e.g., target presentation module 5612, which may present one or more images (e.g., which may be various versions of images from MUVIA, at different resolutions or not up-to-date) from which the user may select the target, e.g., the football player.

[00198] In an embodiment, target selection reception module 5610 may include an audible target selection module 5614 which may be configured to allow the user to select a target using audible commands, without requiring physical interaction with a device.

[00199] Referring again to Fig. 1, e.g., Fig. 1-E, in an embodiment, user device 5600 may include selected target transmission module 5620. Selected target transmission module 5620 may be configured to take the target selected by the user, and transmit the selected target to the server 4000.

[00200] Referring now to Fig. 1-O, Fig. 1-O (and Fig. 1-Y to the direct "south" of Fig. 1-O) shows an embodiment of server 4000. For example, in an embodiment, server 4000 may include a selected target reception module 4210. In an embodiment, selected target reception module 4210 of server 4000 may receive the selected target from the user device 5600. The selected target data may take various formats, e.g., it may be image data, it may be metadata that identifies the selected target, it may be some other designation, e.g., an ID number, a tracking number, or a piece of information, like a license plate or a social security number. The selected target data may be an address or a physical description, or any other instantiation of data that can be used to identify something.

[00201] Referring again to Fig. 1-O, in an embodiment, server 4000 may include selected target identification module 4220, which may be configured to take the target data received by selected target reception module 4210 and determine an image that needs to be captured in order to obtain an image that contains the selected target (e.g., in the shown example, the football player). In an embodiment, selected target identification module 4220 may use images previously received (or, in an embodiment, current images) from the image sensor array 3200 to determine the parameters of an image that contains the selected target. For example, in an embodiment, lower-resolution images from the image sensor array 3200 may be transmitted to server 4000 for determining where the target is located within the image, and then specific requests for portions of the image may be transmitted to the image sensor array 3200, as will be discussed herein.

[00202] In an embodiment, server 4000 may perform processing on the selected target data, and/or on image data that is received, in order to create a request that is to be transmitted to the image sensor array 3200. For example, in the given example, the selected target data is a football player. The server 4000, that is, selected target identification module 4220 may perform image recognition on one or more images captured from the image sensor array to determine a "sector" of the entire scene that contains the selected target. In another embodiment, the selected target identification module 4220 may use other, external sources of data to determine where the target is. In yet another embodiment, the selected target data was selected by the user from the scene displayed by the image sensor array, so such processing may not be necessary.

[00203] Referring again to Fig. 1-O, in an embodiment, server 4000 may include pixel information selection module 4230, which may select the pixels needed to capture the target, and which may determine the size of the image that should be transmitted from the image sensor array. The size of the image may be determined based on a type of target that is selected, one or more parameters (set by the user, by the device, or by the server, which may or may not be based on the selected target), by the screen resolution of the device, or by any other algorithm. Pixel information selection module 4230 may determine the pixels to be captured in order to express the target, and may update based on changes in the target's status (e.g., if the target is moving, e.g., in the football example, once a play has started and the football player is moving in a certain direction).
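
A sketch of one plausible selection rule, assuming the target's position within the scene is already known and the window is sized from the requesting device's screen resolution; the pixels_for_target helper and the zoom parameter are illustrative assumptions, not the actual module 4230 algorithm.

```python
def pixels_for_target(target_x, target_y, screen_w, screen_h,
                      scene_w, scene_h, zoom=1.0):
    """Choose the window of array pixels needed to show a selected
    target: centered on the target's last known position and sized from
    the requesting device's screen resolution (scaled by a zoom factor),
    clamped to the scene boundaries."""
    w, h = int(screen_w * zoom), int(screen_h * zoom)
    x0 = max(0, min(scene_w - w, target_x - w // 2))
    y0 = max(0, min(scene_h - h, target_y - h // 2))
    return x0, y0, w, h

# A football player tracked at (2500, 1400) on a 4000x3000 scene,
# viewed on a 1334x750 phone:
print(pixels_for_target(2500, 1400, 1334, 750, 4000, 3000))  # (1833, 1025, 1334, 750)
```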

[00204] Referring now to Fig. 1-Y, Fig. 1-Y includes more of server 4000 according to an embodiment. In an embodiment, server 4000 may include pixel information transmission to ISA module 4240. Pixel information transmission to ISA module 4240 may transmit the selected pixels to the array local processing module 3500 associated with image sensor array 3200.

[00205] Referring now to Figs. 1-AH and 1-AI, Fig. 1-AH depicts an image sensor array 3200, which in this example is pointed at a football stadium, e.g., CenturyLink Field. Image sensor array 3200 may include one or more image sensors that may, in an embodiment, be statically pointed at a particular object or scene. Image sensor array 3200 may be a single image sensor, or more commonly, may be a group of individual image sensors 3201 that are combined to create a larger field of view. For example, in an embodiment, ten megapixel sensors may be used for each individual image sensor 3201. With twelve of these sensors, the effective field of view, loss-less zoom, and so forth may be increased substantially. These numbers are for example only, and any number of sensors and/or megapixel image sensor capacities may be used.

[00206] The use of many individual sensors may create a very large number of pixels captured for each exposure of the image sensor array 3200. Thus, these pixels are transmitted via a higher bandwidth communication 3505 to the array local storage and processing module 3500. In an embodiment, array local storage and processing module 3500 is integrated into the image sensor array. In another embodiment, the array local storage and processing module 3500 is separate from, but directly connected to (e.g., via a USB 3.0 cable), the image sensor array 3200. It is noted that "higher bandwidth communication 3505" does not require a specific amount of bandwidth, but only that the bandwidth for this communication is relatively higher than the bandwidth of the communication from the array local processing module 3500 to the remote server, which may be, but is not required to be, located further away temporally.

[00207] It is noted that, because of the large number of pixels captured by image sensor array 3200, mechanical changes to the image sensor array are not generally required, although such mechanical changes are not excluded from these embodiments. For example, because the array has a very large field of view, with very high resolution, "pan" and "zoom" functions may be handled optically, rather than by mechanically changing the focal point of the lenses or by physically pointing the array at a different location. This may reduce the complexity required of the device, and also may improve the speed at which different views may be generated by the image sensor array 3200.

[00208] Referring again to Fig. 1-AE, the image sensor array 3200 may capture an image that is received by image capturing module 3305. Image capturing module 3305 may take the captured image and compare it to a consolidated user request, e.g., which is provided by a consolidated user request reception module 3510. Consolidated user request reception module 3510 may receive the communication from server 4000 regarding which pixels of the image have been requested. Through use of the consolidated user request and the captured image, pixel selection module 3320 may select the pixels that have been specifically requested by the user, and mark those pixels for transmission back to the server.

[00209] After the pixels to be kept are identified, the other pixels that are not to be kept are removed, e.g., decimated at unused pixel decimation module 3330. In an embodiment, these pixels are simply discarded, e.g., not stored in a long-term memory but rather removed to a digital trash 3317. In another embodiment, some or all of these pixels are stored in a local memory, e.g., local memory 3315. From here, these pixels may be transmitted to various locations at off-peak times, may be kept for image processing by the array local processing module 3300, or may be subject to other manipulations or processing separate from the user requests. In an embodiment, unused pixel decimation module 3330 may include or communicate with a lower resolution module 3314, which may, in some embodiments, be used to transmit a lower-resolution version of more of the image (e.g., an entire scene, or more of the field of view surrounding the target) to the server 4000, so that the server 4000 may accurately determine which images are required to capture the target selected by the user.
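The select-and-decimate step described in the last two paragraphs can be illustrated with a minimal sketch; the function name, the single-rectangle request format, and the use of NumPy arrays are all illustrative assumptions, not the implementation described herein:

```python
import numpy as np

# Hypothetical sketch of the select/decimate step: keep only the pixels
# inside the consolidated request, push everything else to local storage
# (or discard it), assuming the request is a single rectangle.

def select_and_decimate(frame, request):
    """frame: HxWx3 array; request: (top, left, height, width)."""
    top, left, h, w = request
    selected = frame[top:top + h, left:left + w].copy()

    # "Decimate" the unused pixels: here we zero them out in a copy that
    # stands in for the local memory / digital trash of the description.
    leftover = frame.copy()
    leftover[top:top + h, left:left + w] = 0
    return selected, leftover

frame = np.random.randint(0, 256, (2000, 3000, 3), dtype=np.uint8)
selected, leftover = select_and_decimate(frame, (500, 800, 240, 320))
print(selected.shape)  # (240, 320, 3) -- only these go over the link
```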

[00210] Referring again to Fig. 1-AE, the selected pixels then may be transmitted to the server 4000 using selected pixel transmission module 3340. Selected pixel transmission module 3340 may include any transmission equipment necessary, e.g., cellular radio, wireless adapter, and the like, depending on the format of communication. In an embodiment, only those pixels which have been requested are transmitted to the server via lower-bandwidth communication 3510. Similarly to lower-bandwidth communication 3515, the lower-bandwidth communication 3510 does not refer to a specific amount of bandwidth, just that the amount of bandwidth is relatively lower than higher-bandwidth communication 3505.

[00211] Referring now again to Fig. 1-Y, server 4000 may include a requested image reception from ISA module 4250. Requested image reception from ISA module 4250 may receive the image data from the array local processing module 3500 (e.g., in the arrow coming "north" from Fig. 1-AI). That image, as depicted in Fig. 1-Y, may include the target (e.g., the football player), as well as some surrounding area (e.g., the area of the field around the football player). The "surrounding area" and the specifics of what is included/transmitted from the array local processing module may be specified by the user (directly or indirectly, e.g., through a set of preferences), or may be determined by the server, e.g., in the pixel information selection module 4230 (shown in Fig. 1-O).

[00212] Referring again to Fig. 1-Y, server 4000 may also include a requested image transmission to user device module 4260. Requested image transmission to user device module 4260 may transmit the requested image to the user device 5600. Requested image transmission to user device module 4260 may include components necessary to communicate with user device 5600 and may, in some embodiments, share components with one or more other modules of server 4000, e.g., a network interface, or a wireless antenna.

[00213] Referring again to Fig. 1-Y, server 4000 may include a server cached image updating module 4270. Server cached image updating module 4270 may take the images received from the array local processing module 3500 (e.g., which may include the image to be sent to the user), and compare the received images with stored or "cached" images on the server, in order to determine if the cached images should be updated. This process may happen frequently or infrequently, depending on the embodiment, and may be continuously ongoing as long as there is a data connection, in some embodiments. In some embodiments, the frequency of the process may depend on the available bandwidth to the array local processing module 3500, e.g., at off-peak times, the frequency may be increased. In an embodiment, server cached image updating module 4270 compares an image received from the array local processing module 3500, and, if the image has changed, replaces the cached version of the image with the newer image.
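One hedged way to realize the compare-and-replace behavior of such a cache-updating module is a digest comparison over the received image bytes; the names and the choice of SHA-256 below are illustrative assumptions:

```python
import hashlib

# Hypothetical sketch of the cache-updating comparison: replace the cached
# bytes for a scene region only when the newly received bytes differ.

cache = {}  # region id -> (digest, image bytes)

def update_cache(region_id, image_bytes):
    digest = hashlib.sha256(image_bytes).hexdigest()
    cached = cache.get(region_id)
    if cached is None or cached[0] != digest:
        cache[region_id] = (digest, image_bytes)
        return True   # cache was updated
    return False      # image unchanged; keep cached copy

update_cache("sector-2-1", b"...pixels v1...")         # True (new entry)
print(update_cache("sector-2-1", b"...pixels v1..."))  # False (unchanged)
print(update_cache("sector-2-1", b"...pixels v2..."))  # True (replaced)
```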

[00214] Referring now again to Fig. 1-E, Fig. 1-E shows user device 5600. In an embodiment, user device 5600 includes image containing selected target receiving module 5630 that may be configured to receive the image from server 4000, e.g., from requested image transmission to user device module 4260 of server 4000 (e.g., depicted in Fig. 1-Y, with the data transmission indicated by a rightward-upward arrow passing through Fig. 1-Y and Fig. 1-O (to the north) before arriving at Fig. 1-E).

[00215] Referring again to Fig. 1-E, Fig. 1-E shows received image presentation module 5640, which may display the requested pixels that include the selected target to the user, e.g., by showing them on a screen of the device. In an embodiment, the display of the image may be carried out through an exemplary interface that allows the user to monitor the target, and which also may display information about the target (e.g., in an embodiment, as shown in the figures, the game statistics for the football player also may be shown), which allows a cycle of user requests and new images to be shown as the user navigates through what is seen on the MUVIA, e.g., as shown in Fig. 1-A.

[00216] Referring again to Fig. 1, Figs. 1-F, 1-P, 1-Z, and 1-AJ depict a MUVIA implementation according to an embodiment. This embodiment may be colloquially known as "live street view," in which one or more MUVIA systems allow a user to move through an area similarly to the well-known Google-branded Maps (or Google Street View), except with the cameras working in real time. For example, in an embodiment, referring now to Fig. 1-F, a user device 5700 may include a target selection reception module 5710. Target selection reception module 5710 may be a component that allows the user to select a "target" from the image, that is, a point of interest from the image. For example, in the shown example, the MUVIA may be focused on a city, and the target may be an address, a building, a car, or a person in the city. As an example, a user may select a street address as a "target." This may be facilitated by a target presentation module, e.g., image selection presentation module 5712, which may present one or more images (e.g., which may be various versions of images from MUVIA, at different resolutions or not up-to-date) from which the user may select the target, e.g., the street address. In an embodiment, image selection presentation module 5712 may use static images that may or may not be sourced by the MUVIA system, and, in another embodiment, image selection presentation module 5712 may use current or cached views from the MUVIA system.

[00217] In an embodiment, image selection presentation module 5712 may include an audible target selection module 5714 which may be configured to allow the user to select a target using audible commands, without requiring physical interaction with a device.

[00218] Referring again to Fig. 1, e.g., Fig. 1-F, in an embodiment, user device 5700 may include selected target transmission module 5720. Selected target transmission module 5720 may be configured to take the target selected by the user, and transmit the selected target to the server 4000.

[00219] Referring now to Fig. 1-P, Fig. 1-P depicts a server 4000 of the MUVIA system according to embodiments. In an embodiment, server 4000 may include a selected target reception module 4310. Selected target reception module 4310 may receive the selected target from the user device 5700. In an embodiment, server 4000 may provide all or most of the data that facilitates the selection of the target, that is, the images and the interface, which may be provided, e.g., through a web portal.

[00220] Referring again to Fig. 1-P, in an embodiment, server 4000 may include a selected image pre-processing module 4320. Selected image pre-processing module 4320 may perform one or more tasks of pre-processing the image, some of which are described herein for exemplary purposes. For example, in an embodiment, selected image pre-processing module 4320 may include a resolution determination module 4322 which may be configured to determine the resolution for the image in order to show the target (and here, resolution is merely a stand-in for any facet of the image, e.g., color depth, size, shadow, pixelation, filter, etc.). In an embodiment, selected image pre-processing module 4320 may include a cached pixel fill-in module 4324. Cached pixel fill-in module 4324 may be configured to manage which portions of the requested image are recovered from a cache, and which are updated, in order to improve performance. For example, if a view of a street is requested, certain features of the street (e.g., buildings, trees, etc.) may not need to be retrieved each time, but can be filled in with a cached version or, in another embodiment, with an earlier version. A check can be done to see if a red parked car is still in the same spot as it was an hour ago; if so, that part of the image may not need to be updated. Using lower resolution/prior images stored in a memory 4215, as well as other image processing techniques, cached pixel fill-in module 4324 determines which portions of the image do not need to be retrieved, thus reducing bandwidth load on the connection between the array local processing module 3600 and the server 4000.
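A minimal sketch of that fill-in decision follows, assuming (hypothetically) that the server keeps a low-resolution cached copy of the scene and compares it tile-by-tile against a fresh low-resolution probe; the tile size, threshold, and names are illustrative:

```python
import numpy as np

# Hypothetical sketch of the fill-in decision: compare a low-resolution
# probe of each tile against the cached low-resolution tile, and only
# request full-resolution pixels for tiles that appear to have changed
# (e.g., the red parked car has moved).

def tiles_to_refresh(probe, cached, tile=64, threshold=8.0):
    """probe/cached: HxW grayscale arrays; returns list of changed tiles."""
    changed = []
    for y in range(0, probe.shape[0], tile):
        for x in range(0, probe.shape[1], tile):
            a = probe[y:y + tile, x:x + tile].astype(np.float32)
            b = cached[y:y + tile, x:x + tile].astype(np.float32)
            if np.abs(a - b).mean() > threshold:
                changed.append((y, x))
    return changed  # only these tiles need fresh pixels from the array

cached = np.zeros((256, 256), dtype=np.uint8)
probe = cached.copy()
probe[64:128, 64:128] = 200             # the "car" moved in one tile
print(tiles_to_refresh(probe, cached))  # [(64, 64)]
```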

[00221] Referring again to Fig. 1-P, in an embodiment, selected image pre-processing module 4320 of server 4000 may include a static object obtaining module 4326, which may operate similarly to cached pixel fill-in module 4324. For example, as in the example shown in Fig. 1-B, static object obtaining module 4326 may obtain prior versions of static objects, e.g., buildings, trees, fixtures, landmarks, etc., which may save bandwidth load on the connection between the array local processing module 3600 and the server 4000.

[00222] Referring again to Fig. 1-P, in an embodiment, pixel information transmission to ISA module 4330 may transmit the request for pixels (e.g., an image, after the pre-processing) to the array local processing module 3600 (e.g., as shown in Figs. 1-Z and 1-AI, with the downward-extending dataflow arrow).

[00223] Referring now to Figs. 1-Z and 1-AI, in an embodiment, an array local processing module 3600, which may be connected by a higher bandwidth connection to an image sensor array 3200, may be present.

[00224] Image sensor array 3200 may include one or more image sensors that may, in an embodiment, be statically pointed at a particular object or scene. Image sensor array 3200 may be a single image sensor, or more commonly, may be a group of individual image sensors 3201 that are combined to create a larger field of view. For example, in an embodiment, ten-megapixel sensors may be used for each individual image sensor 3201. With twelve of these sensors, the effective field of view, loss-less zoom, and so forth may be increased substantially. These numbers are for example only, and any number of sensors and/or megapixel image sensor capacities may be used.

[00225] The use of many individual sensors may create a very large number of pixels captured for each exposure of the image sensor array 3200. Thus, these pixels are transmitted via a higher bandwidth communication 3605 to the array local storage and processing module 3600. In an embodiment, array local storage and processing module 3600 is integrated into the image sensor array. In another embodiment, the array local storage and processing module 3600 is separate from, but directly connected to (e.g., via a USB 3.0 cable), the image sensor array 3200. It is noted that "higher bandwidth communication 3605" does not require a specific amount of bandwidth, but only that the bandwidth for this communication is relatively higher than the bandwidth of the communication from the array local processing module 3600 to the remote server, which may be, but is not required to be, located further away temporally.

[00226] It is noted that, because of the large number of pixels captured by image sensor array 3200, mechanical changes to the image sensor array are not generally required, although such mechanical changes are not excluded from these embodiments. For example, because the array has a very large field of view, with very high resolution, "pan" and "zoom" functions may be handled optically, rather than by mechanically changing the focal point of the lenses or by physically pointing the array at a different location. This may reduce the complexity required of the device, and also may improve the speed at which different views may be generated by the image sensor array 3200.

[00227] Referring again to Fig. 1-AJ and Fig. 1-Z, the image sensor array 3200 may capture an image that is received by image capturing module 3605. Image capturing module 3605 may take the captured image and compare it to a consolidated user request, e.g., which is provided by a consolidated user request reception module 3610. Consolidated user request reception module 3610 may receive the communication from server 4000 regarding which pixels of the image have been requested. Through use of the consolidated user request and the captured image, pixel selection module 3620 may select the pixels that have been specifically requested by the user, and mark those pixels for transmission back to the server.

[00228] After the pixels to be kept are identified, the other pixels that are not to be kept are removed, e.g., decimated at unused pixel decimation module 3630. In an embodiment, these pixels are simply discarded, e.g., not stored in a long-term memory but rather removed to a digital trash 3617. In another embodiment, some or all of these pixels are stored in a local memory, e.g., local memory 3615. From here, these pixels may be transmitted to various locations at off-peak times, may be kept for image processing by the array local processing module 3600, or may be subject to other manipulations or processing separate from the user requests. In an embodiment, unused pixel decimation module 3630 may include or communicate with a lower resolution module 3614, which may, in some embodiments, be used to transmit a lower-resolution version of more of the image (e.g., an entire scene, or more of the field of view surrounding the target) to the server 4000, so that the server 4000 may accurately determine which images are required to capture the target selected by the user.

[00229] Referring again to Fig. 1-AJ, the selected pixels then may be transmitted to the server 4000 using selected pixel transmission module 3640. Selected pixel transmission module 3640 may include any transmission equipment necessary, e.g., cellular radio, wireless adapter, and the like, depending on the format of communication. In an embodiment, only those pixels which have been requested are transmitted to the server via lower-bandwidth communication 3510. Similarly to lower-bandwidth communication 3515, the lower-bandwidth communication 3510 does not refer to a specific amount of bandwidth, just that the amount of bandwidth is relatively lower than higher-bandwidth communication 3505.

[00230] It is noted that more pixels than what are specifically requested by the user may be transmitted, in certain embodiments. For example, the array local processing module 3600 may send pixels that border the user's requested area, but are outside the user's requested area. In an embodiment, as will be discussed herein, those pixels may be sent at a different resolution or using a different kind of compression. In another embodiment, the additional pixels may simply be sent in the same manner as the requested pixels. In still another embodiment, server 4000 may expand the user requested areas, so that array local processing module 3600 may send only the requested pixels, but the requested pixels cover more area than what the user originally requested. These additional pixels may be transmitted and "cached" by the server or local device, which may be used to decrease latency times, in a process that will be discussed more herein.
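One way the server-side expansion described in the last sentences could work is by padding the requested rectangle with a fixed margin before forwarding it to the array; the margin value and function below are illustrative assumptions:

```python
# Hypothetical sketch of the server-side expansion: grow the user's
# requested rectangle by a margin so bordering pixels arrive with the
# request and can be cached against future panning.

def expand_request(rect, margin, scene_w, scene_h):
    """rect: (top, left, height, width); returns the expanded rectangle,
    clamped to the scene bounds."""
    top, left, h, w = rect
    new_top = max(0, top - margin)
    new_left = max(0, left - margin)
    new_bottom = min(scene_h, top + h + margin)
    new_right = min(scene_w, left + w + margin)
    return (new_top, new_left, new_bottom - new_top, new_right - new_left)

# A 240x320 request with a 64-pixel margin on a 12,000 x 6,750 scene:
print(expand_request((500, 800, 240, 320), 64, 12000, 6750))
# -> (436, 736, 368, 448)
```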

[00231] Referring now again to Fig. 1-P, in an embodiment, server 4000 may include image receiving from ISA module 4340. Image receiving from ISA module 4340 may receive the image data from the array local processing module 3600 (e.g., in the arrow coming "north" from Fig. 1-AJ via Fig. 1-Z). The image may include the pixels that were requested from the image sensor array 3200. In an embodiment, server 4000 also may include received image post-processing module 4350, which may, among other postprocessing tasks, fill in objects and pixels into the image that were determined not to be needed by selected image pre-processing module 4320, as previously described. In an embodiment, server 4000 may include received image transmission to user device module 4360, which may be configured to transmit the requested image to the user device 5700. Received image transmission to user device module 4360 may include components necessary to communicate with user device 5700 and may, in some embodiments, share components with one or more other modules of server 4000, e.g., a network interface, or a wireless antenna.

[00232] Referring now again to Fig. 1-F, user device 5700 may include a server image reception module 5730. Server image reception module 5730 may receive an image sent by the server 4000. User device 5700 also may include user selection presenting module 5740, which may display the requested pixels to the user, e.g., by showing them on a screen of the device. In an embodiment, the display of the image may be carried out through the exemplary interface, which allows a cycle of user requests and new images to be shown as the user navigates through what is seen on the MUVIA, e.g., as shown in Fig. 1-F.

[00233] In an embodiment, as shown in Figs. 1-F and 1-G, server image reception module 5730 may include an audio stream reception module 5732 and a video stream reception module 5734. In an embodiment, as discussed throughout this application, the MUVIA system may capture still images, video, and also sound, as well as other electromagnetic waves and other signals and data. In an embodiment, the audio signals and the video signals may be handled together, or they may be handled separately, as separate streams. Although not every module in the instant diagram separately shows audio streams and video streams, it is noted here that all implementations of MUVIA contemplate both audio and video coverage, as well as still image and other data collection.

[00234] Referring now to Fig. 1-G, which shows another portion of user device 5700, user device 5700 may include a display 5755 and a memory 5765, which may be used to facilitate presentation and/or storage of the received images.

[00235] Figs. 1-H, 1-R, 1-AA, and 1-AB show an embodiment of a MUVIA implementation. For example, referring now to Fig. 1-H, Fig. 1-H shows an embodiment of a user device 5800. For exemplary purposes, the user device 5800 may be an augmented reality device that shows a user looking down a "street" at which the user is not actually present, e.g., a "virtual tourism" application in which the user may use an augmented reality device (e.g., goggles, e.g., an Oculus Rift-type headgear device), which may be a wearable computer. It is noted that this embodiment is not limited to wearable computers or augmented reality, but, as in all of the embodiments described in this disclosure, may be any device. The use of a wearable augmented/virtual reality device is merely for illustrative and exemplary purposes.

[00236] In an embodiment, user device 5800 may have a field of view 5810, as shown in Fig. 1-H. The field of view for the user 5810 may be illustrated in Fig. 1-H as follows. The most internal rectangle, shown by the dot hatching, represents the user's "field of view" as they look at their "virtual world." The second most internal rectangle, with the straight line hatching, represents the "nearest" objects to the user, that is, a range where the user is likely to "look" next, by turning their head or moving their eyes. In an embodiment, this area of the image may already be loaded on the device, e.g., through use of a particular codec, which will be discussed in more detail herein. The outermost rectangle, which is the image without hatching, represents further outside the user's viewpoint. This area, too, may already be loaded on the device. By loading areas where the user may eventually look, the system can reduce latency and make a user's motions, e.g., movement of head, eyes, and body, appear "natural" to the system.

[00237] Referring now to Figs. 1-AA and 1-AB, these figures show an array local processing module 3700 that is connected to an image sensor array 3200 (e.g., as shown in Fig. 1-AK, and "viewing" a city as shown in Fig. 1-AJ). The image sensor array 3200 may operate as previously described in this document. In an embodiment, array local processing module 3700 may include a captured image receiving module 3710, which may receive the entire scene captured by the image sensor array 3200, through the higher-bandwidth communication channel 3505. As described previously in this application, these pixels may be "cropped" or "decimated" into the relevant portion of the captured image, as described by one or more of the user device 5800, the server 4000, and the processing done at the array local processing module 3700. This process may occur as previously described. The relevant pixels may be handled by relevant portion of captured image receiving module 3720.

[00238] Referring now to Fig. 1-AB, in an embodiment, the relevant pixels for the image that are processed by relevant portion of captured image receiving module 3720 may be encoded using a particular codec at relevant portion encoding module 3730. In an embodiment, the codec may be configured to encode the innermost rectangle, e.g., the portion that represents the current user's field of view, e.g., portion 3716, at a higher resolution, or a different compression, or a combination of both. The codec may be further configured to encode the second rectangle, e.g., with the vertical line hatching, e.g., portion 3714, at a different resolution and/or a different (e.g., a higher) compression. Similarly, the outermost portion of the image, e.g., the clear portion 3712, may again be coded at still another resolution and/or a different compression. In an embodiment, the codec itself handles the algorithm for encoding the image, and as such, in an embodiment, the codec may include information about user device 5800.

[00239] As shown in Fig. 1-AB, the encoded portion of the image, including portions 3716, 3714, and 3712, may be transmitted using encoded relevant portion transmitting module 3740. It is noted that "lower compression," "more compression," and "higher compression," are merely used as one example for the kind of processing done by the codec. For example, instead of lower compression, a different sampling algorithm or compacting algorithm may be used, or a lossier algorithm may be implemented for various parts of the encoded relevant portion.
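A minimal sketch of the concentric-region idea follows; it substitutes naive stride-based downsampling for a real codec's per-region resolution/compression settings, and all names and region sizes are illustrative assumptions:

```python
import numpy as np

# Hypothetical sketch of concentric-region encoding: the innermost
# rectangle (current field of view) keeps full resolution, the middle
# region is downsampled 2x, and the whole outer frame 4x. A real
# implementation would feed these regions into an actual codec; the
# downsampling merely stands in for "different resolution/compression."

def downsample(img, factor):
    return img[::factor, ::factor]  # naive stride-based decimation

def encode_regions(img, inner, middle):
    """inner/middle: (top, left, height, width) nested rectangles."""
    t, l, h, w = inner
    inner_px = img[t:t + h, l:l + w]                  # full resolution
    t2, l2, h2, w2 = middle
    middle_px = downsample(img[t2:t2 + h2, l2:l2 + w2], 2)
    outer_px = downsample(img, 4)                     # whole frame, coarse
    return inner_px, middle_px, outer_px

img = np.random.randint(0, 256, (1080, 1920), dtype=np.uint8)
i, m, o = encode_regions(img, (400, 700, 280, 520), (200, 400, 680, 1120))
print(i.shape, m.shape, o.shape)  # (280, 520) (340, 560) (270, 480)
```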

[00240] Referring now to Fig. 1-R, Fig. 1-R depicts a server 4000 in a MUVIA system according to an embodiment. For example, as shown in Fig. 1-R, server 4000 may include, in addition to portions previously described, an encoded image receiving module 4410. Encoded image receiving module 4410 may receive the encoded image, encoded as previously described, from encoded relevant portion transmitting module 3740 of array local processing module 3700.

[00241] Referring again to Fig. 1-R, server 4000 may include an encoded image transmission controlling module 4420. Encoded image transmission controlling module 4420 may transmit portions of the image to the user device 5800. In an embodiment, at least partially depending on the bandwidth and the particulars of the user device, the server may send all of the encoded image to the user device and let the user device decode the portions as needed, or may decode the image and send portions piecemeal, or with a different encoding, depending on the needs of the user device and the complexity that can be handled by the user device.

[00242] Referring again to Fig. 1-H, user device 5800 may include an encoded image transmission receiving module 5820, which may be configured to receive the image that is coded in a particular way, e.g., as will be disclosed in more detail herein. User device 5800 also may include an encoded image processing module 5830 that may handle the processing of the image, that is, encoding and decoding portions of the image, or other processing necessary to provide the image to the user.

[00243] Referring now to Fig. 1-AL, Fig. 1-AL shows an implementation of an Application Programming Interface (API) for the various MUVIA components. Specifically, image sensor array API 7800 may include, among other elements, a programming specification 7810, which may include, for example, libraries, classes, specifications, templates, or other coding elements that generally make up an API, and an access authentication module 7820 that governs API access to the various image sensor arrays. The API allows third-party developers to access the workings of the image sensor array and the array local processing module 3700, so that the third-party developers can write applications for the array local processing module 3700, as well as determine which data captured by the image sensor array 3200 (which often may be multiple gigabytes or more of data per second) should be kept, stored, or transmitted. In an embodiment, API access to certain functions may be limited. For example, a tiered system may allow a certain number of API calls to the MUVIA data per second, per minute, per hour, or per day. In an embodiment, a third party might pay fees or perform a registration that would allow more or less access to the MUVIA data. In an embodiment, the third party could host their application on a separate web site, and let that web site access the image sensor array 3200 and/or the array local processing module 3700 directly.
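The tiered call limits mentioned above could be enforced with a simple per-key sliding-window counter; the tier names, limits, and class below are illustrative assumptions rather than any actual MUVIA API:

```python
import time

# Hypothetical sketch of the tiered access limit: a per-key sliding-window
# counter capping API calls per minute by tier.

TIER_LIMITS = {"free": 60, "registered": 600, "paid": 6000}  # calls/minute

class ApiRateLimiter:
    def __init__(self):
        self.calls = {}  # api_key -> list of recent call timestamps

    def allow(self, api_key, tier):
        now = time.time()
        # Keep only calls from the last 60 seconds.
        window = [t for t in self.calls.get(api_key, []) if now - t < 60]
        if len(window) >= TIER_LIMITS[tier]:
            self.calls[api_key] = window
            return False  # over the per-minute quota for this tier
        window.append(now)
        self.calls[api_key] = window
        return True

limiter = ApiRateLimiter()
print(limiter.allow("dev-123", "free"))  # True until 60 calls this minute
```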

[00244] Referring again to Fig. 1, Figs. 1-I, 1-J, 1-S, 1-T, 1-AC, 1-AD, 1-AM, and 1-AN, in an embodiment, show a MUVIA implementation that allows insertion of advertising (or other context-sensitive material) into the images displayed to the user.

[00245] Referring again to Fig. 1-I, in an embodiment, user device 5900 may include a user selection accepting module 5910. User selection accepting module 5910 may be configured to receive user input about what the user wants to see. For example, as shown in Fig. 1-I, the user selection accepting module 5910 may show an image from image sensor array 3200, and the user may "pan" and "zoom" the image using any known interface, including, but not limited to, keyboard, mouse, touch, haptic, augmented reality interface, voice command, or nonverbal motion commands (e.g., as part of a video game system interface, e.g., the Microsoft Kinect). It is noted, as will be discussed in more detail herein, that the camera itself is not "zooming" or "panning," because the camera does not move. What is happening is that different pixels that are captured by the image sensor array 3200 are kept by the image sensor array 3200 and transmitted to the server 4000.

[00246] In an embodiment, the "user selection" facilitated by the user selection accepting module 5910 may not involve a user at all. For example, in an embodiment, e.g., in a security embodiment, the user selection may be handled completely by machine, and may include "select any portion of the image with movement," or "select any portion of the image in which a person is recognized," or "select any portion of the image in which a particular person, e.g., a person on the FBI most wanted list" is recognized.

[00247] Referring again to Fig. 1-I, in an embodiment, user device 5900 may include a user selection transmitting module 5920. The user selection transmitting module 5920 may take the user selection from user selection accepting module 5910, and transmit the selection to the server 4000. The transmission may include some pre-processing; for example, the user device 5900 may determine the size and parameters of the image prior to sending the request to the server 4000, or that processing may be handled by the server 4000. Following the thick-line arrow leftward from user selection transmitting module 5920 through to Fig. 1-K, the transmission goes to server 4000, as will be discussed herein. It is noted that the transmission to the server 4000 may also include data about the user device, for example, the screen resolution, the window size, the type of device, an identity of the user, a level of service the user has paid for (in embodiments in which such services are prioritized by the camera/server), other capabilities of the device, e.g., framerate, and the like.
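A hedged sketch of what such a request, bundled with device data, might look like as a JSON payload; every field name here is an illustrative assumption, not a format defined by this disclosure:

```python
import json

# Hypothetical sketch of the request a user device might send alongside
# its selection, including the device data mentioned above.

request = {
    "selection": {"top": 500, "left": 800, "height": 240, "width": 320},
    "device": {
        "screen_resolution": [1920, 1080],
        "window_size": [1280, 720],
        "device_type": "smartphone",
        "user_id": "user-42",
        "service_tier": "paid",   # for prioritization, where supported
        "max_framerate": 30,
    },
}
payload = json.dumps(request)  # what would travel to the server
print(payload)
```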

[00248] Referring again to Fig. 1-I, Fig. 1-I also includes a selected image receiving module 5930 and a user selection presenting module 5940, which will be discussed in more detail herein, with respect to the dataflow of this embodiment.

[00249] Referring now to Fig. 1-T (graphically represented as "down" and "to the right" of Fig. 1-I), in an embodiment, a server 4000 may include a selected image reception module 4510. In an embodiment, selected image reception module 4510 of server 4000 may receive the selected target from the user device 5900. The selected target data may take various formats, e.g., it may be image data, it may be metadata that identifies the selected target, it may be some other designation, e.g., an ID number, a tracking number, or a piece of information, like a license plate or a social security number. The selected target data may be an address or a physical description, or any other instantiation of data that can be used to identify something.

[00250] Referring again to Fig. 1-T, in an embodiment, server 4000 may include selected image pre-processing module 4520. Selected image pre-processing module 4520 may perform one or more tasks of pre-processing the image, some of which have been previously described with respect to other embodiments. In an embodiment, server 4000 also may include pixel information transmission to ISA module 4330 configured to transmit the image request data to the image sensor array 3200, as has been previously described.

[00251] Referring now to Figs. 1-AD and 1-AN, array local processing module 3700 may be connected to an image sensor array 3200 through a higher-bandwidth communication link 3505, e.g., a USB or PCI port. In an embodiment, array local processing module 3700 may include a request reception module 3710. Request reception module 3710 may receive the request for an image from the server 4000, as previously described. Request reception module 3710 may transmit the data to a pixel selection module 3720, which may receive the pixels captured from image sensor array 3200, and select the ones that are to be kept. That is, in an embodiment, through use of the (sometimes consolidated) user requests and the captured image, pixel selection module 3720 may select the pixels that have been specifically requested by the user, and mark those pixels for transmission back to the server.

[00252] After the pixels to be kept are identified, the other pixels that are not to be kept are removed, e.g., decimated at unused pixel decimation module 3730. In an embodiment, these pixels are simply discarded, e.g., not stored in a long-term memory but rather removed to a digital trash 3717. In another embodiment, some or all of these pixels are stored in a local memory, e.g., local memory 3715. From here, these pixels may be transmitted to various locations at off-peak times, may be kept for image processing by the array local processing module 3700, or may be subject to other manipulations or processing separate from the user requests, as described in previous embodiments. In an embodiment, unused pixel decimation module 3730 may be used to transmit a lower-resolution version of more of the image (e.g., an entire scene, or more of the field of view surrounding the target) to the server 4000, so that the server 4000 may accurately determine which images are required to fulfill the request of the user.

[00253] Referring again to Fig. 1-AN, the selected pixels then may be transmitted to the server 4000 using selected pixel transmission module 3740. Selected pixel transmission module 3740 may include any transmission equipment necessary, e.g., cellular radio, wireless adapter, and the like, depending on the format of communication. In an embodiment, only those pixels which have been requested are transmitted to the server via lower-bandwidth communication 3710. Similarly to lower-bandwidth communication 3715, the lower-bandwidth communication 3710 does not refer to a specific amount of bandwidth, just that the amount of bandwidth is relatively lower than higher-bandwidth communication 3505.

[00254] It is noted that more pixels than what are specifically requested by the user may be transmitted, in certain embodiments. For example, the array local processing module 3700 may send pixels that border the user's requested area, but are outside the user's requested area. In an embodiment, as will be discussed herein, those pixels may be sent at a different resolution or using a different kind of compression. In another embodiment, the additional pixels may simply be sent in the same manner as the requested pixels. In still another embodiment, server 4000 may expand the user requested areas, so that array local processing module 3700 may send only the requested pixels, but the requested pixels cover more area than what the user originally requested. These additional pixels may be transmitted and "cached" by the server or local device, which may be used to decrease latency times, in a process that will be discussed more herein.

[00255] Referring now again to Fig. 1-T, in an embodiment, server 4000 may include received image post-processing module 4550. Received image post-processing module 4550 may receive the image data from the array local processing module 3700 (e.g., in the arrow coming "north" from Fig. 1-AN via Fig. 1-AD). The image may include the pixels that were requested from the image sensor array 3200.

[00256] In an embodiment, server 4000 also may include advertisement insertion module 4560. Advertisement insertion module 4560 may insert an advertisement into the received image. The advertisement may be based on one or more of the contents of the image, a characteristic of a user or the user device, or a setting of the advertisement server component 7700 (see, e.g., Fig. 1-AC, as will be discussed in more detail herein). The advertisement insertion module 4560 may place the advertisement into the image using any known image combination techniques, or, in another embodiment, the advertisement image may be in a separate layer, overlay, or any other data structure. In an embodiment, advertisement insertion module 4560 may include context-based advertisement insertion module 4562, which may be configured to add advertisements that are based on the context of the image. For example, if the image is a live street view of a department store, the context of the image may show advertisements related to products sold by that department store, e.g., clothing, cosmetics, or power tools.
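As one illustrative sketch of the "image combination" route (as opposed to the separate-layer route), an advertisement could be alpha-blended into a corner of the received frame; the placement, blend factor, and names are assumptions:

```python
import numpy as np

# Hypothetical sketch of compositing an advertisement into the received
# image as a simple alpha blend in the bottom-right corner; a
# separate-layer design would instead ship the ad plus placement metadata.

def insert_ad(image, ad, alpha=0.8):
    """image: HxWx3 array; ad: hxwx3 array, blended into one corner."""
    h, w = ad.shape[:2]
    region = image[-h:, -w:].astype(np.float32)
    blended = alpha * ad.astype(np.float32) + (1 - alpha) * region
    out = image.copy()
    out[-h:, -w:] = blended.astype(np.uint8)
    return out

frame = np.zeros((720, 1280, 3), dtype=np.uint8)
ad = np.full((90, 320, 3), 255, dtype=np.uint8)  # a white banner
print(insert_ad(frame, ad)[-1, -1])  # corner pixel is now mostly banner
```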

[00257] Referring again to Fig. 1-T, server 4000 may include a received image with advertisement transmission to user device module 4570 configured to transmit the image, with the advertisement, to the user device 5900. Received image with advertisement transmission to user device module 4570 may include components necessary to communicate with user device 5900 and may, in some embodiments, share components with one or more other modules of server 4000, e.g., a network interface, or a wireless antenna.

[00258] Referring again to Fig. 1-I, user device 5900 may include a selected image receiving module 5930, which may receive the pixels that were sent by the server 4000, and user selection presenting module 5940, which may display the requested pixels to the user, including the advertisement, e.g., by showing them on a screen of the device. In an embodiment, the display of the image may be carried out through the exemplary interface, which allows a cycle of user requests and new images to be shown as the user navigates through what is seen on the MUVIA, e.g., as shown in Fig. 1-I.

[00259] Referring now to Fig. 1-AC, Fig. 1-AC shows an advertisement server component 7700 configured to deliver advertisements to the server 4000 for insertion into the images prior to delivery to the user. In an embodiment, advertisement server component 7700 may be integrated with server 4000. In another embodiment, advertisement server component 7700 may be separate from server 4000 and may communicate with server 4000. In yet another embodiment, rather than interacting with server 4000, advertisement server component 7700 may interact directly with the user device 5900, and insert the advertisement into the image after the image has been received, or, in another embodiment, cause the user device to display the advertisement concurrently with the image (e.g., overlapping or adjacent to the image). In such embodiments, some of the described modules of server 4000 may be incorporated into user device 5900, but the functionality of those modules would operate similarly to what was previously described.

[00260] Referring again to Fig. 1-AC, in an embodiment, advertisement server component 7700 may include a user data collection module 7705. User data collection module 7705 may collect data from user device 5900, and use that data to drive placement of advertisements (e.g., based on a user's browser history, e.g., to sports sites, and the like).

[00261] Referring again to Fig. 1-AC, in an embodiment, advertisement server component 7700 may include advertisement database 7715, which includes advertisements that are ready to be inserted into images. In an embodiment, these advertisements may be created on the fly.

[00262] Referring again to Fig. 1-AC, in an embodiment, advertisement server component 7700 may include an advertisement request reception module 7710 which receives a request to add an advertisement into the image (the receipt of the request is not shown to ease understanding of the drawings). In an embodiment, advertisement server component 7700 may include advertisement selection module 7720, which may include an image analysis module 7722 configured to analyze the image to determine the best context-based advertisement to place into the image. In an embodiment, that decision may be made by the server 4000, or partly at the server 4000 and partly at the advertisement server component 7700 (e.g., the advertisement server component may have a set of advertisements from which a particular one may be chosen). In an embodiment, various third parties may compensate the operators of server component 7700, server 4000, or any other component of the system, in order to receive preferential treatment.

[00263] Referring again to Fig. 1-AC, in an embodiment, advertisement server component 7700 may include a selected advertisement transmission module 7730, which may transmit the selected advertisement (or a set of selected advertisements) to the server 4000. In an embodiment, selected advertisement transmission module 7730 may send the complete image with the advertisement overlaid, e.g., in an implementation in which the advertisement server component 7700 also handles the placement of the advertisement. In an embodiment in which advertisement server component 7700 is integrated with server 4000, this module may be an internal transmission module, as may all such transmission/reception modules.

Exemplary Environment 200

[00264] Referring now to Fig. 2A, Fig. 2A illustrates an example environment 200 in which methods, systems, circuitry, articles of manufacture, and computer program products and architecture, in accordance with various embodiments, may be implemented by at least one server device 230. Image device 220 may include a number of individual sensors that capture data. Although commonly referred to throughout this application as "image data," this is merely shorthand for data that can be collected by the sensors. Other data, including video data, audio data, electromagnetic spectrum data (e.g., infrared, ultraviolet, radio, microwave data), thermal data, and the like, may be collected by the sensors.

[00265] Referring again to Fig. 2A, in an embodiment, image device 220 may operate in an environment 200. Specifically, in an embodiment, image device 220 may capture a scene 215. The scene 215 may be captured by a number of sensors 243. Sensors 243 may be grouped in an array, which in this context means they may be grouped in any pattern, on any plane, but have a fixed position relative to one another. Sensors 243 may capture the image in parts, which may be stitched back together by processor 222. There may be overlap in the images captured by sensors 243 of scene 215, which may be removed.

[00266] Upon capture of the scene by image device 220, in processes and systems that will be described in more detail herein, the requested pixels are selected. Specifically, pixels that have been identified by a remote user, by a server, by the local device, by another device, by a program written by an outside user with an API, by a component or other hardware or software in communication with the image device, and the like, are transmitted to a remote location via a communications network 240. The pixels that are to be transmitted may be illustrated in Fig. 2A as selected portion 255; however, this is a simplified expression meant for illustrative purposes only.

[00267] Referring again to Fig. 2A, in an embodiment, server device 230 may be any device or group of devices that is connected to a communication network. Although in some examples server device 230 is distant from image device 220, that is not required. Server device 230 may be "remote" from image device 220, which may mean that they are separate components, but does not necessarily imply a specific distance. The communications network may be a local transmission component, e.g., a PCI bus. Server device 230 may include a request handling module 232 that handles requests for images from user devices, e.g., user devices 250A and 250B. Request handling module 232 also may handle other remote computers and/or users that want to take active control of the image device, e.g., through an API, or through more direct control.

[00268] Server device 230 also may include an image device management module 234, which may perform some of the processing to determine which of the captured pixels of image device 220 are kept. For example, image device management module 234 may do some pattern recognition, e.g., to recognize objects of interest in the scene, e.g., a particular football player, as shown in the example of Fig. 2A. In other embodiments, this processing may be handled at the image device 220 or at the user device 250. In an embodiment, server device 230 limits a size of the selected portion by a screen resolution of the requesting user device.

[00269] Server device 230 then may transmit the requested portions to the user devices, e.g., user device 250A and user device 250B. In another embodiment, the user device or devices may directly communicate with image device 220, cutting out server device 230 from the system.

[00270] In an embodiment, user devices 250A and 250B are shown; however, user devices may be any electronic device or combination of devices, which may be located together or spread across multiple devices and/or locations. Image device 220 may be a server device, or may be a user-level device, e.g., including, but not limited to, a cellular phone, a network phone, a smartphone, a tablet, a music player, a walkie-talkie, a radio, an augmented reality device (e.g., augmented reality glasses and/or headphones), wearable electronics (e.g., watches, belts, earphones, or "smart" clothing), headphones, audio/visual equipment, media player, television, projection screen, flat screen, monitor, clock, appliance (e.g., microwave, convection oven, stove, refrigerator, freezer), a navigation system (e.g., a Global Positioning System ("GPS") system), a medical alert device, a remote control, a peripheral, an electronic safe, an electronic lock, an electronic security system, a video camera, a personal video recorder, a personal audio recorder, and the like. Device 220 may include a device interface 243, which may allow the device 220 to output data to the client in sensory (e.g., visual or any other sense) form, and/or allow the device 220 to receive data from the client, e.g., through touch, typing, or moving a pointing device (e.g., a mouse). User device 250 may include a viewfinder or a viewport that allows a user to "look" through the lens of image device 220, regardless of whether the user device 250 is spatially close to the image device 220.

[00271] Referring again to Fig. 2A, in various embodiments, the communication network 240 may include one or more of a local area network (LAN), a wide area network (WAN), a metropolitan area network (MAN), a wireless local area network (WLAN), a personal area network (PAN), a Worldwide Interoperability for Microwave Access (WiMAX) network, a public switched telephone network (PSTN), a general packet radio service (GPRS) network, a cellular network, and so forth. The communication networks 240 may be wired, wireless, or a combination of wired and wireless networks. It is noted that "communication network" as it is used in this application refers to one or more communication networks, which may or may not interact with each other.

[00272] Referring now to Fig. 2B, Fig. 2B shows a more detailed version of server device 230, according to an embodiment. The server device 230 may include a device memory 245. In an embodiment, device memory 245 may include memory, random access memory ("RAM"), read only memory ("ROM"), flash memory, hard drives, disk-based media, disc-based media, magnetic storage, optical storage, volatile memory, nonvolatile memory, and any combination thereof. In an embodiment, device memory 245 may be separated from the device, e.g., available on a different device on a network, or over the air. For example, in a networked system, there may be more than one server device 230, whose device memories 245 may be located at a central server that may be a few feet away or located across an ocean. In an embodiment, device memory 245 may include one or more of: one or more mass storage devices, read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), cache memory such as random access memory (RAM), flash memory, static random access memory (SRAM), dynamic random access memory (DRAM), and/or other types of memory devices. In an embodiment, memory 245 may be located at a single network site. In an embodiment, memory 245 may be located at multiple network sites, including sites that are distant from each other. In an embodiment, device memory 245 may include one or more of cached images 245A and previously retained image data 245B, as will be discussed in more detail further herein.

[00273] Referring again to Fig. 2B, in an embodiment, server device 230 may include an optional viewport 247, which may be used to view images received by server device 230. This optional viewport 247 may be physical (e.g., glass) or electronic (e.g., an LCD screen), or may be at a distance from server device 230.

[00274] Referring again to Fig. 2B, Fig. 2B shows a more detailed description of server device 230. In an embodiment, device 220 may include a processor 222. Processor 222 may include one or more microprocessors, Central Processing Units ("CPU"), Graphics Processing Units ("GPU"), Physics Processing Units, Digital Signal Processors, Network Processors, Floating Point Processors, and the like. In an embodiment, processor 222 may be a server. In an embodiment, processor 222 may be a distributed-core processor. Although processor 222 is illustrated as a single processor that is part of a single device 220, processor 222 may be multiple processors distributed over one or many devices 220, which may or may not be configured to operate together.

[00275] Processor 222 is illustrated as being configured to execute computer readable instructions in order to execute one or more operations described above, and as illustrated in Fig. 10, Figs. 11A-11G, Figs. 12A-12E, Figs. 13A-13C, and Figs. 14A-14E. In an embodiment, processor 222 is designed to be configured to operate as processing module 250, which may include one or more of a request for particular image data that is part of a scene acquiring module 252, a request for particular image data transmitting to an image sensor array module 254 configured to transmit the request to the image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data, a particular image data from the image sensor array exclusive receiving module 256 configured to receive only the particular image data from the image sensor array, and a received particular image data transmitting to at least one requestor module 258 configured to transmit the received particular image data to the at least one requestor.

Exemplary Environment 300A

[00276] Referring now to Fig. 3A, Fig. 3A shows an exemplary embodiment of an image device, e.g., image device 220A, operating in an environment 300A. In an embodiment, image device 220A may include an array 310 of image sensors 312, as shown in Fig. 3A. The array of image sensors in this image is shown in a rectangular grid; however, this is merely exemplary to show that image sensors 312 may be arranged in any format. In an embodiment, each image sensor 312 may capture a portion of scene 315, which portions are then processed by processor 350. Although processor 350 is shown as local to image device 220A, it may be remote to image device 220A, with a sufficiently high-bandwidth connection to receive all of the data from the array of image sensors (e.g., multiple USB 3.0 lines). In an embodiment, the selected portions from the scene (e.g., the portions shown in the shaded box, e.g., selected portion 315) may be transmitted to a remote device 330, which may be a user device or a server device, as previously described. In an embodiment, the pixels that are not transmitted to remote device 330 may be stored in a local memory 340 or discarded.

Exemplary Environment 300B

[00277] Referring now to Fig. 3B, Fig. 3B shows an exemplary embodiment of an image device, e.g., image device 320B, operating in an environment 300B. In an embodiment, image device 320B may include an image sensor array 320B, e.g., an array of image sensors, which, in this example, are arranged around a polygon to increase the field of view that can be captured, that is, they can capture scene 315B, illustrated in Fig. 3B as a natural landmark that can be viewed in a virtual tourism setting. Processor 322 receives the scene 315B and selects the pixels from the scene 315B that have been requested by a user, e.g., requested portions 317B. Requested portions 317B may include an overlapping area 324B that is only transmitted once. In an embodiment, the requested portions 317B may be transmitted to a remote location via communications network 240.
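A hedged sketch of how two overlapping requests might be consolidated so that the overlap crosses the link only once, assuming (hypothetically) that the scene is divided into fixed-size tiles; tile size and names are illustrative:

```python
# Hypothetical sketch of consolidating two users' overlapping requests so
# the shared pixels are transmitted only once: take the union of the
# requested tiles rather than sending each request independently.

def tiles(rect, tile=64):
    top, left, h, w = rect
    return {(y, x)
            for y in range(top // tile, (top + h - 1) // tile + 1)
            for x in range(left // tile, (left + w - 1) // tile + 1)}

req_a = (0, 0, 256, 256)      # user A's requested portion
req_b = (128, 128, 256, 256)  # user B's portion, overlapping A's corner
union = tiles(req_a) | tiles(req_b)
print(len(tiles(req_a)) + len(tiles(req_b)), "tiles if sent separately")
print(len(union), "tiles when the overlap is transmitted once")
# -> 32 tiles if sent separately
# -> 28 tiles when the overlap is transmitted once
```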

Exemplary Environment 300C

[00278] Referring now to Fig. 3C, Fig. 3C shows an exemplary embodiment of an image device, e.g., image device 320C, operating in an environment 300C. In an embodiment, image device 320C may capture a scene, a part of which, e.g., scene portion 315C, is shown, as previously described in other embodiments (e.g., some parts of image device 320C are omitted for simplicity of drawing). In an embodiment, e.g., scene portion 315C may show a street-level view of a busy road, e.g., for a virtual tourism or virtual reality simulator. In an embodiment, different portions of the scene portion 315C may be transmitted at different resolutions or at different times. For example, in an embodiment, a central part of the scene portion 315C, e.g., portion 316, which may correspond to what a user's eyes would see, is transmitted at a first resolution, or "full" resolution relative to what the user's device can handle. In an embodiment, an outer border outside portion 316, e.g., portion 314, may be transmitted at a second resolution, which may be lower, e.g., lower than the first resolution. In another embodiment, a further outside portion, e.g., portion 312, may be discarded, transmitted at a still lower rate, or transmitted asynchronously.

Exemplary Environment 400A

[00279] Referring now to Fig. 4A, Fig. 4A shows an exemplary embodiment of a server device, e.g., server device 430A. In an embodiment, an image device, e.g., image device 420A, may capture a scene 415. Scene 415 may be stored in local memory 440. The portions of scene 415 that are requested by the server device 430A may be transmitted (e.g., through requested image transfer 465) to requested pixel reception module 432 of server device 430A. In an embodiment, the requested pixels transmitted to requested pixel reception module 432 may correspond to images that were requested by various users and/or devices (not shown) in communication with server device 430A.

[00280] Referring again to Fig. 4A, in an embodiment, pixels not transmitted from local memory 440 of image device 420A may be stored in untransmitted pixel temporary storage 440B. These untransmitted pixels may be stored and transmitted to the server device 430A at a later time, e.g., an off-peak time for requests for images of scene 415. For example, in an embodiment, the pixels stored in untransmitted pixel temporary storage 440B may be transmitted to the unrequested pixel reception module 434 of server device 430A at night, or when other users are disconnected from the system, or when the available bandwidth to transfer pixels between image device 420A and server device 430A reaches a certain threshold value.

[00281] In an embodiment, server device 430A may analyze the pixels received by unrequested pixel reception module 434, for example, to provide a repository of static images from the scene 415 that do not need to be transmitted from the image device 420A each time certain portions of scene 415 are requested.

Exemplary Environment 400B

[00282] Referring now to Fig. 4B, Fig. 4B shows an exemplary embodiment of a server device, e.g., server device 430B. In an embodiment, an image device, e.g., image device 420B, may capture a scene 415B. Scene 415B may be stored in local memory 440B. In an embodiment, image device 420B may capture the same scene 415B multiple times. In an embodiment, scene 415B may include an unchanged area 416A, which is a portion of the image that has not changed since the last time the scene 415B was captured by the image device 420B. In an embodiment, scene 415B also may include a changed area 416B, which may be a portion of the image that has changed since the last time the scene 415B was captured by the image device 420B. Although changed area 416B is illustrated as polygonal and contiguous in Fig. 4B, this is merely for illustrative purposes, and changed area 416B may be, in some embodiments, nonpolygonal and/or noncontiguous.

[00283] In an embodiment, image device 420B, upon capturing scene 415B, may use an image previously stored in local memory 440B to compare the previous image, e.g., previous image 441B, to the current image, e.g., current image 442B, and may determine which areas of the scene 415B have changed. The changed areas may be transmitted to server device 430B, e.g., to changed area reception module 432B. This may occur through a changed area transmission 465, as indicated in Fig. 4B.

[00284] Referring again to Fig. 4B, in an embodiment, server device 430B receives the changed area at changed area reception module 432B. Server device 430B also may include an unchanged area addition module 434B, which adds the unchanged areas that were previously stored in a memory of server device 430B (not shown) from a previous transmission from image device 420B. In an embodiment, server device 430B also may include a complete image transmission module 436B configured to transmit the completed image to a user device, e.g., user device 450B, that requested the image.
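
The round trip of paragraphs [00283]-[00284] (device-side change detection, server-side reassembly) could be sketched as follows. The fixed tile grid, the mean-absolute-difference test, and the threshold are assumptions made for the example.

```python
import numpy as np

BLOCK = 64  # tile edge in pixels; an assumed tuning choice

def changed_tiles(previous, current, threshold=8.0):
    """Compare the prior capture (cf. 441B) to the new one (cf. 442B)
    tile by tile; keep only tiles whose mean absolute difference
    exceeds the threshold (cf. changed area 416B)."""
    changed = {}
    h, w = current.shape[:2]
    for y in range(0, h, BLOCK):
        for x in range(0, w, BLOCK):
            prev_t = previous[y:y+BLOCK, x:x+BLOCK].astype(np.int16)
            curr_t = current[y:y+BLOCK, x:x+BLOCK].astype(np.int16)
            if np.abs(curr_t - prev_t).mean() > threshold:
                changed[(y, x)] = current[y:y+BLOCK, x:x+BLOCK]
    return changed

def reassemble(cached_scene, changed):
    """Server side (cf. modules 432B/434B): paste the received changed
    tiles over the stored unchanged image to complete it."""
    complete = cached_scene.copy()
    for (y, x), tile in changed.items():
        complete[y:y+tile.shape[0], x:x+tile.shape[1]] = tile
    return complete
```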

Exemplary Environment 500A

[00285] Referring now to Fig. 5A, Fig. 5A shows an exemplary embodiment of a server device, e.g., server device 530A. In an embodiment, an image device 520A may capture a scene 515 through use of an image sensor array 540, as previously described. The image may be temporarily stored in a local memory 540 (as pictured), or may be partially or wholly stored in a local memory before transmission to a server device 530A. In an embodiment, server device 530A may include an image data reception module 532A. Image data reception module 532A may receive the image from image device 520A. In an embodiment, server device 530A may include data addition module 534A, which may add additional data to the received image data. In an embodiment, the additional data may be visible or invisible, e.g., pixel data or metadata, for example. In an embodiment, the additional data may be advertising data. In an embodiment, the additional data may be context-dependent upon the image data; for example, if the image data is of a football player, the additional data may be statistics about that player, or an advertisement for an online shop that sells that player's jersey.

[00286] In an embodiment, the additional data may be stored in a memory of server device 530A (not shown). In another embodiment, the additional data may be retrieved from an advertising server or another data server. In an embodiment, the additional data may be tailored to one or more characteristics of the user or the user device, e.g., the user may have a setting that labels each player displayed on the screen with that player's last name. Referring again to Fig. 5A, in an embodiment, server device 530A may include a modified data transmission module 536A, which may receive the modified data from data addition module 534A, and transmit the modified data to a user device, e.g., a user device that requested the image data, e.g., user device 550A.
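
A rough sketch of the data addition behavior of modules 534A/536A follows, assuming the supplemental sources are simple lookup tables keyed by a detected subject; the packet layout and the source names are hypothetical.

```python
def add_supplemental_data(image_packet, detected_subject, data_sources):
    """Attach context-dependent additions (visible overlays or metadata)
    before the modified data is sent to the requesting user device."""
    additions = {}
    stats = data_sources.get("player_stats", {}).get(detected_subject)
    if stats:
        additions["statistics"] = stats      # e.g., label with a last name
    ad = data_sources.get("ads", {}).get(detected_subject)
    if ad:
        additions["advertisement"] = ad      # e.g., a jersey-shop link
    image_packet["supplemental"] = additions
    return image_packet

packet = add_supplemental_data(
    {"pixels": "..."}, "QB12",
    {"player_stats": {"QB12": {"yards": 310}},
     "ads": {"QB12": "jersey-shop"}})
```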

Exemplary Environment 500B

[00288] Referring now to Fig. 5B, Fig. 5B shows an exemplary embodiment of a server device, e.g., server device 530B. In an embodiment, multiple user devices, e.g., user device 502A, user device 502B, and user device 502C, each may send a request for image data from a scene, e.g., scene 515B. Each user device may send a request to a server device, e.g., server device 530B. Server device 530B may consolidate the requests, which may be for various resolutions, shapes, sizes, and other features, into a single combined request 570. Overlapping portions of the request, e.g., as shown in overlapping area 572, may be combined.

[00289] In an embodiment, server device 530B transmits the combined request 570 to the image device 520B. In an embodiment, image device 520B uses the combined request 570 to designate selected pixels 574, which then may be transmitted back to the server device 530B, where the process of combining the requests may be reversed, and each user device 502A, 502B, and 502C may receive the requested image. This process will be discussed in more detail further herein.
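
The consolidation and its reversal might look like the sketch below, assuming rectangular per-user requests and a mask-based combined request; the scene dimensions, names, and response layout are illustrative assumptions.

```python
import numpy as np

SCENE_SHAPE = (2160, 3840)   # assumed full-scene pixel grid

def combine_requests(requests):
    """requests: {user_id: (top, left, bottom, right)}. One mask covers
    every rectangle, so an overlapping area (cf. 572) is merged and its
    pixels are requested from the image device only once."""
    mask = np.zeros(SCENE_SHAPE, dtype=bool)
    for top, left, bottom, right in requests.values():
        mask[top:bottom, left:right] = True
    return mask                               # cf. combined request 570

def split_response(selected, requests):
    """Reverse the combination: selected is a scene-sized array in which
    only the requested pixels (cf. 574) were filled in; each user gets
    back exactly the rectangle it asked for."""
    return {uid: selected[top:bottom, left:right]
            for uid, (top, left, bottom, right) in requests.items()}
```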

Exemplary Embodiments of the Various Modules of Portions of Processor 250

[00290] Figs. 6-9 illustrate exemplary embodiments of the various modules that form portions of processor 250. In an embodiment, the modules represent hardware, either hardware that is hard-coded, e.g., as in an application-specific integrated circuit ("ASIC"), or hardware that is physically reconfigured through gate activation described by computer instructions, e.g., as in a central processing unit.

[00291] Referring now to Fig. 6, Fig. 6 illustrates an exemplary implementation of the request for particular image data that is part of a scene acquiring module 252. As illustrated in Fig. 6, the request for particular image data that is part of a scene acquiring module may include one or more sub-logic modules in various alternative implementations and embodiments. For example, as shown in Fig. 6, e.g., Fig. 6A, in an embodiment, module 252 may include one or more of request for particular image data that is part of a scene and includes one or more images acquiring module 602 and request for particular image data that is part of a scene receiving module 604. In an embodiment, module 604 may include request for particular image data that is part of a scene receiving from a client device module 606. In an embodiment, module 606 may include one or more of request for particular image data that is part of a scene receiving from a client device configured to display at least a portion of the scene module 608 and request for particular image data that is part of a scene receiving from a client device configured to receive a selection of a particular image module 612. In an embodiment, module 608 may include request for particular image data that is part of a scene receiving from a client device configured to display at least a portion of the scene in a viewfinder module 610. In an embodiment, module 612 may include one or more of request for particular image data that is part of a scene receiving from a client device configured to receive a scene-based selection of a particular image module 614 and request for particular image data that is part of a scene receiving from one or more various devices configured to receive a scene-based selection of a particular image module 616.

[00292] Referring again to Fig. 6, e.g., Fig. 6B, as described above, in an embodiment, module 252 may include one or more of request for particular image data that is part of a scene that is the image data collected by the array of more than one image sensor acquiring module 618 and request for particular image data that is part of a scene that a representation of the image data collected by the array of more than one image sensor acquiring module 620. In an embodiment, module 620 may include one or more of request for particular image data that is part of a scene that a sampling of the image data collected by the array of more than one image sensor acquiring module 622, request for particular image data that is part of a scene that is a subset of the image data collected by the array of more than one image sensor acquiring module 624, and request for particular image data that is part of a scene that is a low-resolution version of the image data collected by the array of more than one image sensor acquiring module 626.

[00293] Referring again to Fig. 6, e.g., Fig. 6C, in an embodiment, module 252 may include one or more of request for particular image data that is part of a scene that is a football game acquiring module 628, request for particular image data that is part of a scene that is an area street view acquiring module 630, request for particular image data that is part of a scene that is a tourist destination acquiring module 632, and request for particular image data that is part of a scene that is inside a home acquiring module 634.

[00294] Referring again to Fig. 6, e.g., Fig. 6D, in an embodiment, module 252 may include request for particular image data that is an image that is a portion of the scene acquiring module 636. In an embodiment, module 636 may include one or more of request for particular image data that is an image that is a particular football player and a scene that is a football field acquiring module 638 and request for particular image data that is an image that is a vehicle license plate and a scene that is a highway bridge acquiring module 640.

[00295] Referring again to Fig. 6, e.g., Fig. 6E, in an embodiment, module 252 may include one or more of request for particular image object located in the scene acquiring module 642 and particular image data of the scene that contains the particular image object determining module 644. In an embodiment, module 642 may include one or more of request for particular person located in the scene acquiring module 646, request for a basketball located in the scene acquiring module 648, request for a motor vehicle located in the scene acquiring module 650, and request for a human object representation located in the scene acquiring module 652.

[00296] Referring again to Fig. 6, e.g., Fig. 6F, in an embodiment, module 252 may include one or more of first request for first particular image data from a first requestor receiving module 662, second request for first particular image data from a different second requestor receiving module 664, first received request for first particular image data and second received request for second particular image data combining module 666, first request for first particular image data and second request for second particular image data receiving module 670, and received first request and received second request combining module 672. In an embodiment, module 666 may include first received request for first particular image data and second received request for second particular image data combining into the request for particular image data module 668. In an embodiment, module 672 may include received first request and received second request common pixel deduplicating module 674.

[00297] Referring again to Fig. 6, e.g., Fig. 6G, in an embodiment, module 252 may include one or more of request for particular video data that is part of a scene acquiring module 676, request for particular audio data that is part of a scene acquiring module 678, request for particular image data that is part of a scene receiving from a user device with an audio interface module 680, and request for particular image data that is part of a scene receiving from a microphone-equipped user device with an audio interface module 682.

[00298] Referring now to Fig. 7, Fig. 7 illustrates an exemplary implementation of request for particular image data transmitting to an image sensor array module 254. As illustrated in Fig. 7, the request for particular image data transmitting to an image sensor array module 254 may include one or more sub-logic modules in various alternative implementations and embodiments. For example, as shown in Fig. 7, e.g., Fig. 7A, in an embodiment, module 254 may include one or more of request for particular image data transmitting to an image sensor array module configured to transmit the request to the image sensor array that includes more than one image sensor and that is configured to capture an image that is larger than the requested particular image data 702, request for particular image data transmitting to an image sensor array module configured to transmit the request to the image sensor array that includes two angled image sensors and that is configured to capture the scene that is larger than the requested particular image data 704, request for particular image data transmitting to an image sensor array module configured to transmit the request to the image sensor array that includes more than one image sensor arranged in a grid and that is configured to capture the scene that is larger than the requested particular image data 706, request for particular image data transmitting to an image sensor array module configured to transmit the request to the image sensor array that includes more than one image sensor arranged in a line and that is configured to capture the scene that is larger than the requested particular image data 708, and request for particular image data transmitting to an image sensor array module configured to transmit the request to the image sensor array that includes more than one nonlinearly arranged stationary image sensor and that is configured to capture the scene that is larger than the requested particular image data 710.

[00299] Referring again to Fig. 7, e.g., Fig. 7B, in an embodiment, module 254 may include one or more of request for particular image data transmitting to an image sensor array that includes an array of static image sensors module 712, request for particular image data transmitting to an image sensor array that includes an array of image sensors mounted on a movable platform module 716, request for particular image data transmitting to an image sensor array module configured to transmit the request to the image sensor array that includes more than one image sensor and that is configured to capture the scene that represents more data than the requested particular image data 718, and request for particular image data transmitting to an image sensor array module configured to transmit the request to the image sensor array that includes more than one image sensor and that is configured to capture the scene that represents a greater field of view than the requested particular image data 724. In an embodiment, module 712 may include request for particular image data transmitting to an image sensor array that includes an array of static image sensors that have fixed focal length and fixed field of view module 714. In an embodiment, module 718 may include one or more of request for particular image data transmitting to an image sensor array module configured to transmit the request to the image sensor array that includes more than one image sensor and that is configured to capture the scene that represents ten times as much data as the requested particular image data 720 and request for particular image data transmitting to an image sensor array module configured to transmit the request to the image sensor array that includes more than one image sensor and that is configured to capture the scene that represents more than one hundred times as much data as the requested particular image data 722.

[00300] Referring again to Fig. 7, e.g., Fig. 7C, in an embodiment, module 254 may include one or more of request for particular image data modifying module 726 and modified request for particular image data transmitting to an image sensor array module 728. In an embodiment, module 726 may include designated image data removing from request for particular image data module 730. In an embodiment, module 730 may include designated image data removing from request for particular image data based on previously stored image data module 732. In an embodiment, module 732 may include one or more of designated image data removing from request for particular image data based on previously stored image data retrieved from the image sensor array module 734 and designated image data removing from request for particular image data based on previously stored image data that is an earlier-in-time version of the designated image data module 736. In an embodiment, module 736 may include designated image data removing from request for particular image data based on previously stored image data that is an earlier-in-time version of the designated image data that is a static object module 738.

[00301] Referring again to Fig. 7, e.g., Fig. 7D, in an embodiment, module 254 may include module 726 and module 728, as previously discussed. In an embodiment, module 726 may include one or more of designated image data removing from request for particular image data based on pixel data interpolation/extrapolation module 740, portion of the request for particular image data that was previously stored in memory identifying module 744, and identified portion of the request for the particular image data removing module 746. In an embodiment, module 740 may include designated image data corresponding to one or more static image objects removing from request for particular image data based on pixel data interpolation/extrapolation module 742. In an embodiment, module 744 may include one or more of portion of the request for the particular image data that was previously captured by the image sensor array identifying module 748 and portion of the request for the particular image data that includes at least one static image object that was previously captured by the image sensor array identifying module 750. In an embodiment, module 750 may include portion of the request for the particular image data that includes at least one static image object of a rock outcropping that was previously captured by the image sensor array identifying module 752.
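
A minimal sketch of the request pruning these modules describe, assuming requests arrive as hashable region descriptors and that membership in the previously stored data can be tested directly (both assumptions for illustration):

```python
def prune_request(requested_regions, cached_regions):
    """Remove regions whose pixels were captured and stored earlier
    (e.g., a static rock outcropping), so the modified request sent to
    the image sensor array is smaller than the original request."""
    to_capture, served_from_cache = [], []
    for region in requested_regions:
        if region in cached_regions:     # exact-match lookup for brevity
            served_from_cache.append(region)
        else:
            to_capture.append(region)
    return to_capture, served_from_cache
```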

[00302] Referring again to Fig. 7, e.g., Fig. 7E, in an embodiment, module 254 may include one or more of size of request for particular image data determining module 754 and determined-size request for particular image data transmitting to the image sensor array module 756. In an embodiment, module 754 may include one or more of size of request for particular image determining at least partially based on user device property module 758, size of request for particular image determining at least partially based on user device access level module 762, size of request for particular image determining at least partially based on available bandwidth module 764, size of request for particular image determining at least partially based on device usage time module 766, and size of request for particular image determining at least partially based on device available bandwidth module 768. In an embodiment, module 758 may include size of request for particular image determining at least partially based on user device resolution module 760.
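
One plausible way to size a request from the factors these modules list (device resolution, access level, and available bandwidth) is sketched below; the caps, access tiers, and time budget are invented for illustration.

```python
def request_size_px(device_px, access_level, free_bps,
                    seconds_budget=0.5, bytes_per_px=3):
    """Cap the request at what the device can display, what its access
    level permits, and what the link can carry within the time budget."""
    level_caps = {"basic": 1_000_000, "premium": 8_000_000}  # assumed tiers
    px_cap = min(device_px, level_caps.get(access_level, 1_000_000))
    bandwidth_cap = int(free_bps / 8 * seconds_budget / bytes_per_px)
    return min(px_cap, bandwidth_cap)

size = request_size_px(device_px=2_073_600,   # a 1920x1080 display
                       access_level="basic",
                       free_bps=20_000_000)
```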

[00303] Referring now to Fig. 8, Fig. 8 illustrates an exemplary implementation of particular image data from the image sensor array exclusive receiving module 256. As illustrated in Fig. 8A, the particular image data from the image sensor array exclusive receiving module 256 may include one or more sub-logic modules in various alternative implementations and embodiments. For example, as shown in Fig. 8, e.g., Fig. 8A, in an embodiment, module 256 may include one or more of particular image data from the image sensor array in which other image data is discarded receiving module 802, particular image data from the image sensor array in which other image data is stored at the image sensor array receiving module 804, and particular image data from the image sensor array exclusive near-real-time receiving module 806.

[00304] Referring again to Fig. 8, e.g., Fig. 8B, in an embodiment, module 256 may include one or more of particular image data from the image sensor array exclusive near-real-time receiving module 808 and data from the scene other than the particular image data retrieving at a later time module 810. In an embodiment, module 810 may include one or more of data from the scene other than the particular image data retrieving at a time of available bandwidth module 812, data from the scene other than the particular image data retrieving at an off-peak usage time of the image sensor array module 814, data from the scene other than the particular image data retrieving at a time when no particular image data requests are present at the image sensor array module 816, and data from the scene other than the particular image data retrieving at a time of available image sensor array capacity module 818.

[00305] Referring again to Fig. 8, e.g., Fig. 8C, in an embodiment, module 256 may include one or more of particular image data that includes audio data from the image sensor array exclusive receiving module 820 and particular image data that was determined to contain a particular requested image object from the image sensor array exclusive receiving module 822. In an embodiment, module 822 may include particular image data that was determined to contain a particular requested image object by the image sensor array exclusive receiving module 824.

[00306] Referring now to Fig. 9, Fig. 9 illustrates an exemplary implementation of received particular image data transmitting to at least one requestor module 258. As illustrated in Fig. 9A, the received particular image data transmitting to at least one requestor module 258 may include one or more sub-logic modules in various alternative implementations and embodiments. For example, as shown in Fig. 9, e.g., Fig. 9A, in an embodiment, module 258 may include one or more of received particular image data transmitting to at least one user device requestor module 902, separation of the received particular data into set of one or more requested images executing module 906, and received particular image data transmitting to at least one user device that requested image data that is part of the received particular image data module 912. In an embodiment, module 902 may include received particular image data transmitting to at least one user device that requested at least a portion of the received particular data requestor module 904. In an embodiment, module 906 may include separation of the received particular data into a first requested image and a second requested image executing module 910.

[00307] Referring again to Fig. 9, e.g., Fig. 9B, in an embodiment, module 258 may include one or more of first portion of received particular image data transmitting to the first requestor module 914, second portion of received particular image data transmitting to a second requestor module 916, and received particular image data unaltered transmitting to at least one requestor module 926. In an embodiment, module 914 may include first portion of received particular image data transmitting to the first requestor that requested the first portion module 918. In an embodiment, module 918 may include portion of received particular image data that includes a particular football player transmitting to a television device that requested the football player from a football game module 920. In an embodiment, module 916 may include second portion of received particular image data transmitting to the second requestor that requested the second portion module 922. In an embodiment, module 922 may include portion that contains a view of a motor vehicle transmitting to the second requestor that is a tablet device that requested the view of the motor vehicle module 924.

[00308] Referring again to Fig. 9, e.g., Fig. 9C, in an embodiment, module 258 may include one or more of supplemental data addition to at least a portion of the received particular image data to generate transmission image data facilitating module 928 and generated transmission image data transmitting to at least one requestor module 930. In an embodiment, module 928 may include one or more of advertisement data addition to at least a portion of the received particular image data to generate transmission image data facilitating module 932 and related visual data addition to at least a portion of the received particular image data to generate transmission image data facilitating module 938. In an embodiment, module 932 may include context-based advertisement data addition to at least a portion of the received particular image data to generate transmission image data facilitating module 934. In an embodiment, module 934 may include animal rights donation fund advertisement data addition to at least a portion of the received particular image data that includes a jungle tiger at an oasis to generate transmission image data facilitating module 936. In an embodiment, module 938 may include related fantasy football statistical data addition to at least a portion of the received particular image data of a quarterback to generate transmission image data facilitating module 940.

[00309] Referring again to Fig. 9, e.g., Fig. 9D, in an embodiment, module 258 may include one or more of portion of received particular image data modification to generate transmission image data facilitating module 942 and generated transmission image data transmitting to at least one requestor module 944. In an embodiment, module 942 may include one or more of portion of received particular image data image manipulation modification to generate transmission image data facilitating module 946 and portion of received particular image data redaction to generate transmission image data facilitating module 952. In an embodiment, module 946 may include one or more of portion of received particular image data contrast balancing modification to generate transmission image data facilitating module 948 and portion of received particular image data color modification balancing to generate transmission image data facilitating module 950. In an embodiment, module 952 may include portion of received particular image data redaction to generate transmission image data based on a security clearance level of the requestor facilitating module 954. In an embodiment, module 954 may include portion of received satellite image data that includes a tank redaction to generate transmission image data based on a security clearance level of the requestor facilitating module 956.
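
The clearance-based redaction of modules 952-956 might be sketched as follows, assuming numeric clearance levels and sensitive regions that have already been located as bounding boxes (both assumptions for the example).

```python
import numpy as np

def redact(image, sensitive_boxes, requestor_clearance, required_level=3):
    """Black out sensitive regions (e.g., a tank in satellite imagery)
    unless the requestor's clearance meets the required level."""
    if requestor_clearance >= required_level:
        return image                       # full image for cleared users
    redacted = image.copy()
    for top, left, bottom, right in sensitive_boxes:
        redacted[top:bottom, left:right] = 0
    return redacted
```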

[00310] Referring again to Fig. 9, e.g., Fig. 9D, in an embodiment, module 258 may include one or more of lower-resolution version of received particular image data transmitting to at least one requestor module 958 and full-resolution version of received particular image data transmitting to at least one requestor module 960.

[00311] In some implementations described herein, logic and similar implementations may include software or other control structures. Electronic circuitry, for example, may have one or more paths of electrical current constructed and arranged to implement various functions as described herein. In some implementations, one or more media may be configured to bear a device-detectable implementation when such media hold or transmit device-detectable instructions operable to perform as described herein. In some variants, for example, implementations may include an update or modification of existing software or firmware, or of gate arrays or programmable hardware, such as by performing a reception of or a transmission of one or more instructions in relation to one or more operations described herein. Alternatively or additionally, in some variants, an implementation may include special-purpose hardware, software, firmware components, and/or general-purpose components executing or otherwise invoking special-purpose components. Specifications or other implementations may be transmitted by one or more instances of tangible transmission media as described herein, optionally by packet transmission or otherwise by passing through distributed media at various times.

[00312] Following are a series of flowcharts depicting implementations. For ease of understanding, the flowcharts are organized such that the initial flowcharts present implementations via an example implementation and thereafter the following flowcharts present alternate implementations and/or expansions of the initial flowchart(s) as either sub-component operations or additional component operations building on one or more earlier-presented flowcharts. Those having skill in the art will appreciate that the style of presentation utilized herein (e.g., beginning with a presentation of a flowchart(s) presenting an example implementation and thereafter providing additions to and/or further details in subsequent flowcharts) generally allows for a rapid and easy understanding of the various process implementations. In addition, those skilled in the art will further appreciate that the style of presentation used herein also lends itself well to modular and/or object-oriented program design paradigms.

Exemplary Operational Implementation of Processor 250 and Exemplary Variants

[00313] Further, in Fig. 10 and in the figures to follow thereafter, various operations may be depicted in a box-within-a-box manner. Such depictions may indicate that an operation in an internal box may comprise an optional example embodiment of the operational step illustrated in one or more external boxes. However, it should be understood that internal box operations may be viewed as independent operations separate from any associated external boxes and may be performed in any sequence with respect to all other illustrated operations, or may be performed concurrently. Still further, these operations illustrated in Fig. 10 as well as the other operations to be described herein may be performed by at least one of a machine, an article of manufacture, or a composition of matter.

[00314] Those having skill in the art will recognize that the state of the art has progressed to the point where there is little distinction left between hardware, software, and/or firmware implementations of aspects of systems; the use of hardware, software, and/or firmware is generally (but not always, in that in certain contexts the choice between hardware and software can become significant) a design choice representing cost vs. efficiency tradeoffs. Those having skill in the art will appreciate that there are various vehicles by which processes and/or systems and/or other technologies described herein can be effected (e.g., hardware, software, and/or firmware), and that the preferred vehicle will vary with the context in which the processes and/or systems and/or other technologies are deployed. For example, if an implementer determines that speed and accuracy are paramount, the implementer may opt for a mainly hardware and/or firmware vehicle; alternatively, if flexibility is paramount, the implementer may opt for a mainly software implementation; or, yet again alternatively, the implementer may opt for some combination of hardware, software, and/or firmware. Hence, there are several possible vehicles by which the processes and/or devices and/or other technologies described herein may be effected, none of which is inherently superior to the other in that any vehicle to be utilized is a choice dependent upon the context in which the vehicle will be deployed and the specific concerns (e.g., speed, flexibility, or predictability) of the implementer, any of which may vary. Those skilled in the art will recognize that optical aspects of implementations will typically employ optically-oriented hardware, software, and/or firmware.

[00315] Throughout this application, examples and lists are given, with parentheses, the abbreviation "e.g.," or both. Unless explicitly otherwise stated, these examples and lists are merely exemplary and are non-exhaustive. In most cases, it would be prohibitive to list every example and every combination. Thus, smaller, illustrative lists and examples are used, with focus on imparting understanding of the claim terms rather than limiting the scope of such terms.

[00316] Referring now to Fig. 10, Fig. 10 shows operation 1000, e.g., an example operation of message processing device 230 operating in an environment 200. In an embodiment, operation 1000 may include operation 1002 depicting acquiring a request for particular image data that is part of a scene. For example, Fig. 6, e.g., Fig. 6B, shows request for particular image data that is part of a scene acquiring module 252 acquiring (e.g., receiving, e.g., from a device that requested an image, that is any device or set of devices capable of displaying, storing, analyzing, or operating upon an image, e.g., television, computer, laptop, smartphone, etc., e.g., or from an entity that requested an image, e.g., a person, an automated monitoring system, an artificial intelligence, an intelligence amplification (e.g., a computer designed to watch for persons appearing on video or still shots)), or otherwise obtaining (e.g., acquiring includes receiving, retrieving, creating, generating, generating a portion of, receiving a location of, receiving access instructions for, receiving a password for, etc.) a request (e.g., data, in any format that indicates a computationally-based request for data, e.g., image data, from any source, whether properly-formed or not, and which may come from a communications network or port, or an input/output port, or from a human or other entity, or any device) for particular image data (e.g., a set of image data, e.g., graphical representations of things, e.g., that are composed of pixels or other electronic data, or that are captured by an image sensor, e.g., a CCD or a CMOS sensor, or other data, such as audio data and other data on the electromagnetic spectrum, e.g., infrared data, microwave data, etc.) that is part of a scene (e.g., a particular area, and/or data (including graphical data, audio data, and factual/derived data) that makes up the particular area, which may in some embodiments be all of the data, pixel data or otherwise, that is captured by the image sensor array or portions of the image sensor array).

[00317] capturing (e.g., collecting data that includes visual data, e.g., pixel data, sound data, electromagnetic data, nonvisible spectrum data, and the like) that includes one or more images (e.g., graphical representations of things, e.g., that are composed of pixels or other electronic data, or that are captured by an image sensor, e.g., a CCD or a CMOS sensor), through use of an array (e.g., any grouping configured to work together in unison, regardless of arrangement, symmetry, or appearance) of more than one image sensor (e.g., a device, component, or collection of components configured to collect light, sound, or other electromagnetic spectrum data, and/or to convert the collected data into digital data, or perform at least a portion of the foregoing actions).

The Following Paragraphs [00318]-[00320] Reflect Changes Made Via the Second Preliminary Amendment in the 1114-003-003-000000 Application.

[00318] Referring again to Fig. 10, operation 1000 may include operation 1004 depicting transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data. For example, Fig. 2, e.g., Fig. 2B, shows request for particular image data transmitting to an image sensor array module 254 transmitting the request (e.g., data, in any format that indicates a computationally-based request for data, e.g., image data, from any source, whether properly-formed or not, and which may come from a communications network or port, or an input/output port, or from a human or other entity, or any device) for particular image data (e.g., a set of image data, e.g., graphical representations of things, e.g., that are composed of pixels or other electronic data, or that are captured by an image sensor, e.g., a CCD or a CMOS sensor, or other data, such as audio data and other data on the electromagnetic spectrum, e.g., infrared data, microwave data, etc., and which may be some subset of the entire scene that includes some pixel data, whether pre- or post-processing, and which may or may not include data from multiple sensors of the array of more than one image sensor) to an image sensor array that includes more than one image sensor (e.g., a device, component, or collection of components configured to collect light, sound, or other electromagnetic spectrum data, and/or to convert the collected data into digital data, or perform at least a portion of the foregoing actions) and that is configured to capture the scene (e.g., the data, e.g., image data or otherwise, e.g., sound or electromagnetic data, captured by the array of more than one image sensor, which may be or may be capable of being combined or stitched together at any stage of processing, pre or post) that is larger (e.g., some objectively measurable feature has a greater value, e.g., size, resolution, color, color depth, pixel data granularity, number of colors, hue, saturation, alpha value, shading) than the requested particular image data.

[00319] Referring again to Fig. 10, operation 1000 may include operation 1006 depicting receiving only the particular image data from the image sensor array. For example, Fig. 2, e.g., Fig. 2B, shows particular image data from the image sensor array exclusive receiving module 256 receiving only (e.g., not transmitting the parts of the scene that are not part of the selected particular portion) the particular image data (e.g., the designated pixel data that was transmitted from the image sensor array) from the image sensor array (e.g., a set of one or more image sensors that are grouped together, whether spatially grouped or linked electronically or through a network, in any arrangement or configuration, whether contiguous or noncontiguous, and whether in a pattern or not, and which image sensors may or may not be uniform throughout the array).

[00320] Referring again to Fig. 10, operation 1000 may include operation 1008 depicting transmitting the received particular image data to at least one requestor. For example, Fig. 2, e.g., Fig. 2B, shows received particular image data transmitting to at least one requestor module 258 transmitting the received particular image data (e.g., at least partially, but not solely, the designated pixel data that was transmitted from the image sensor array, which data may be modified, added to, subtracted from, or changed, as will be discussed herein) to at least one requestor (the particular image data may be separated into requested data and sent to the requesting entity that requested the data, whether that requesting entity is a device, person, artificial intelligence, or part of a computer routine or system, or the like).

THIS ENDS THE CHANGES MADE BY THE SECOND PRELIMINARY AMENDMENT IN THE 1114-003-003-000000 APPLICATION.

[00321] Figs. 11A-11G depict various implementations of operation 1002, depicting acquiring a request for particular image data that is part of a scene according to embodiments. Referring now to Fig. 11A, operation 1002 may include operation 1102 depicting acquiring the request for particular image data of the scene that includes one or more images. For example, Fig. 6, e.g., Fig. 6A, shows request for particular image data that is part of a scene and includes one or more images acquiring module 602 acquiring (e.g., receiving, e.g., from a device that requested an image, that is any device or set of devices capable of displaying, storing, analyzing, or operating upon an image, e.g., television, computer, laptop, smartphone, etc., e.g., or from an entity that requested an image, e.g., a person, an automated monitoring system, an artificial intelligence, an intelligence amplification (e.g., a computer designed to watch for persons appearing on video or still shots)) a request (e.g., data, in any format that requests an image) for particular image data of the scene that includes one or more images (e.g., the scene, e.g., a street corner, includes one or more images, e.g., images of a wristwatch worn by a person crossing the street corner, images of the building on the street corner, etc.).

[00322] Referring again to Fig. 11A, operation 1002 may include operation 1104 depicting receiving the request for particular image data of the scene. For example, Fig. 6, e.g., Fig. 6A, shows request for particular image data that is part of a scene receiving module 604 receiving (e.g., from a device, e.g., a user's personal laptop device) the request for particular image data (e.g., a particular player from a game) of the scene (e.g., the portions of the game that are captured by the image sensor array).

[00323] Referring again to Fig. 11A, operation 1104 may include operation 1106 depicting receiving the request for particular image data of the scene from a user device. For example, Fig. 6, e.g., Fig. 6A, shows request for particular image data that is part of a scene receiving from a client device module 606 receiving (e.g., receiving a call from an API that is accessing a remote server that sends commands to the image sensor array) the request for particular image data (e.g., a still shot of an area outside a building where any movement has been detected, e.g., a security camera shot) of the scene from a user device (e.g., the API that was downloaded by an independent user is running on that user's device).

[00324] Referring again to Fig. 11A, operation 1106 may include operation 1108 depicting receiving the request for particular image data of the scene from a user device that is configured to display at least a portion of the scene. For example, Fig. 6, e.g., Fig. 6A, shows request for particular image data that is part of a scene receiving from a client device configured to display at least a portion of the scene module 608 receiving the request for particular image data (e.g., image data of a particular animal) of the scene (e.g., image data that includes the sounds and video from an animal oasis) from a user device (e.g., a smart television) that is configured to display at least a portion of the scene (e.g., the data captured by an image sensor array of the animal oasis).

[00325] Referring again to Fig. 11A, operation 1108 may include operation 1110 depicting receiving the request for particular image data of the scene from a user device that is configured to display at least a portion of the scene in a viewfinder. For example, Fig. 6, e.g., Fig. 6A, shows request for particular image data that is part of a scene receiving from a client device configured to display at least a portion of the scene in a viewfinder module 610 receiving the request for particular image data (e.g., images of St. Peter's Basilica in Rome, Italy) of the scene (e.g., image data captured by the image sensor array of the Vatican) from a user device (e.g., a smartphone device) that is configured to display at least a portion of the scene (e.g., the Basilica, to be displayed on the screen as part of a virtual tourism app running on the smartphone) in a viewfinder (e.g., a screen or set of screens, whether real or virtual, that can display and/or process image data). It is noted that a viewfinder may be remote from where the image is captured.

[00326] Referring again to Fig. 11A, operation 1106 may include operation 1112 depicting receiving the request for particular image data of the scene from the user device that is configured to receive a selection of a particular image. For example, Fig. 6, e.g., Fig. 6A, shows request for particular image data that is part of a scene receiving from a client device configured to receive a selection of a particular image module 612 receiving the request for particular image data of the scene from the user device that is configured to receive a selection of a particular image (e.g., the user device, e.g., a computer device, receives an audible command from a user regarding which portion of the scene the user wants to see (e.g., which may involve showing a "demo" version of the scene, e.g., a lower-resolution older version of the scene, for example), and the device receives this selection and then sends the request for the particular image data to the server device).

[00327] Referring again to Fig. 11A, operation 1112 may include operation 1114 depicting receiving the request for particular image data of the scene from the user device that is configured to receive the selection of the particular image, said selection based on a view of the scene. For example, Fig. 6, e.g., Fig. 6A, shows request for particular image data that is part of a scene receiving from a client device configured to receive a scene-based selection of a particular image module 614 receiving the request for particular image data of the scene from the user device that is configured to receive the selection of the particular image, said selection based on a view of the scene.

[00328] Referring again to Fig. 11A, operation 1112 may include operation 1116 depicting receiving the request for particular image data of the scene from the user device that is configured to receive the selection of the particular image, wherein the user device is one or more of a smartphone, television, computer screen, tablet, camera, appliance, and augmented reality device. For example, Fig. 6, e.g., Fig. 6A, shows request for particular image data that is part of a scene receiving from one or more various devices configured to receive a scene-based selection of a particular image module 616 receiving the request for particular image data of the scene from the user device that is configured to receive the selection of the particular image, wherein the user device is one or more of a smartphone, television, computer screen, tablet, camera, appliance, and augmented reality device.

[00329] Referring now to Fig. 11B, operation 1002 may include operation 1118 depicting acquiring the request for particular image data of the scene, wherein the scene is the image data collected by the array of more than one image sensor. For example, Fig. 6, e.g., Fig. 6B, shows request for particular image data that is part of a scene that is the image data collected by the array of more than one image sensor acquiring module 618 acquiring the request for particular image data of the scene (e.g., a live street view of a corner in New York City near Madison Square Garden), wherein the scene is the image data (e.g., video and audio data) collected by the array of more than one image sensor (e.g., a set of twenty-five ten-megapixel CMOS sensors arranged at an angle to provide a full view).

[00330] Referring again to Fig. 11B, operation 1002 may include operation 1120 depicting acquiring the request for particular image data of the scene, wherein the scene is a representation of the image data collected by the array of more than one image sensor. For example, Fig. 6, e.g., Fig. 6B, shows request for particular image data that is part of a scene that a representation of the image data collected by the array of more than one image sensor acquiring module 620 acquiring the request for particular image data (e.g., an image of a particular street vendor) of the scene (e.g., a city street in Alexandria, VA), wherein the scene is a representation (e.g., metadata, e.g., data about the image data, e.g., a sampling, a subset, a description, a retrieval location) of the image data (e.g., the pixel data) collected by the array of more than one image sensor (e.g., one thousand CMOS sensors of two megapixels each, mounted on a UAV).

[00331] Referring again to Fig. 11B, operation 1120 may include operation 1122 depicting acquiring the request for particular image data of the scene, wherein the scene is a sampling of the image data collected by the array of more than one image sensor. For example, Fig. 6, e.g., Fig. 6B, shows request for particular image data that is part of a scene that a sampling of the image data collected by the array of more than one image sensor acquiring module 622 acquiring the request for particular image data of the scene, wherein the scene is a sampling (e.g., a subset, selected randomly or through a pattern) of the image data (e.g., an image of a checkout line at a discount store) collected (e.g., gathered, read, stored) by the array of more than one image sensor (e.g., an array of two thirty-megapixel sensors angled towards each other).

[00332] Referring again to Fig. 11B, operation 1120 may include operation 1124 depicting acquiring the request for particular image data of the scene, wherein the scene is a subset of the image data collected by the array of more than one image sensor. For example, Fig. 6, e.g., Fig. 6B, shows request for particular image data that is part of a scene that is a subset of the image data collected by the array of more than one image sensor acquiring module 624 acquiring the request for particular image data (e.g., a particular object inside of a house, e.g., a refrigerator) of the scene (e.g., an interior of a house), wherein the scene is a subset of the image data (e.g., a half of, or a sampling of the whole, or a selected area of, or only the contrast data, etc.) collected by the array of more than one image sensor (e.g., a 10x10 grid of three-megapixel image sensors).

[00333] Referring again to Fig. 11B, operation 1120 may include operation 1126 depicting acquiring the request for particular image data of the scene, wherein the scene is a low-resolution version of the image data collected by the array of more than one image sensor. For example, Fig. 6, e.g., Fig. 6B, shows request for particular image data that is part of a scene that is a low-resolution version of the image data collected by the array of more than one image sensor acquiring module 626 acquiring the request for particular image data (e.g., an image of a particular car crossing a bridge) of the scene (e.g., a highway bridge), wherein the scene is a low-resolution (e.g., "low" here meaning "less than a possible resolution given the equipment that captured the image") version of the image data collected by the array of more than one image sensor.

[00334] Referring now to Fig. 11C, operation 1002 may include operation 1128 depicting acquiring the request for particular image data of a scene that is a football game. For example, Fig. 6, e.g., Fig. 6C, shows request for particular image data that is part of a scene that is a football game acquiring module 628 acquiring the request for particular image data of a scene that is a football game.

[00335] Referring again to Fig. 11C, operation 1002 may include operation 1130 depicting acquiring the request for particular image data of a scene that is a street view of an area. For example, Fig. 6, e.g., Fig. 6C, shows request for particular image data that is part of a scene that is an area street view acquiring module 630 acquiring the request for particular image data that is a street view (e.g., a live or short-delayed view) of an area (e.g., a street corner, a garden oasis, a mountaintop, an airport, etc.).

[00336] Referring again to Fig. 11C, operation 1002 may include operation 1132 depicting acquiring the request for particular image data of a scene that is a tourist destination. For example, Fig. 6, e.g., Fig. 6C, shows request for particular image data that is part of a scene that is a tourist destination acquiring module 632 acquiring the request for particular image data of a scene that is a tourist destination (e.g., the Great Pyramids of Giza).

[00337] Referring again to Fig. 11C, operation 1002 may include operation 1134 depicting acquiring the request for particular image data of a scene that is an inside of a home. For example, Fig. 6, e.g., Fig. 6C, shows request for particular image data that is part of a scene that is inside a home acquiring module 634 acquiring the request for particular image data of a scene that is inside of a home (e.g., inside a kitchen).

[00338] Referring now to Fig. 11D, operation 1002 may include operation 1136 depicting acquiring the request for particular image data of the scene, wherein the particular image data is an image that is a portion of the scene. For example, Fig. 6, e.g., Fig. 6D, shows request for particular image data that is an image that is a portion of the scene acquiring module 636 acquiring the request for particular image data (e.g., an image of a tiger in a wildlife preserve), wherein the particular image data (e.g., the image of the tiger) is an image that is a portion of the scene (e.g., image data of the wildlife preserve).

[00339] Referring again to Fig. 11D, operation 1136 may include operation 1138 depicting acquiring the request for particular image data of the scene, wherein the particular image data includes image data of a particular football player and the scene is a football field. For example, Fig. 6, e.g., Fig. 6D, shows request for particular image data that is an image that is a particular football player and a scene that is a football field acquiring module 638 acquiring the request for particular image data of the scene, wherein the particular image data includes image data of a particular football player and the scene is a football field.

[00340] Referring again to Fig. 11D, operation 1136 may include operation 1140 depicting acquiring the request for particular image data of the scene, wherein the particular image data includes image data of a license plate of a vehicle, and the scene is an image representation of a highway bridge. For example, Fig. 6, e.g., Fig. 6D, shows request for particular image data that is an image that is a vehicle license plate and a scene that is a highway bridge acquiring module 640 acquiring the request for particular image data of the scene, wherein the particular image data includes image data of a license plate of a vehicle, and the scene is a highway bridge (e.g., an image of the highway bridge).

[00341] Referring now to Fig. 11E, operation 1002 may include operation 1142 depicting acquiring a request for a particular image object located in the scene. For example, Fig. 6, e.g., Fig. 6E, shows request for particular image object located in the scene acquiring module 642 acquiring a request for a particular image object (e.g., a particular type of bird) located in the scene (e.g., a bird sanctuary).

[00342] Referring again to Fig. 11E, operation 1002 may include operation 1144, which may appear in conjunction with operation 1142, operation 1144 depicting determining the particular image data of the scene that contains the particular image object. For example, Fig. 6, e.g., Fig. 6E, shows particular image data of the scene that contains the particular image object determining module 644 determining the particular image data (e.g., a 1920x1080 image that contains the particular type of bird) of the scene (e.g., the image of the bird sanctuary) that contains the particular image object (e.g., the particular type of bird).

[00343] Referring again to Fig. 11E, operation 1142 may include operation 1146 depicting acquiring a request for a particular person located in the scene. For example, Fig. 6, e.g., Fig. 6E, shows request for particular person located in the scene acquiring module 646 acquiring a request for a particular person (e.g., a person dressed a certain way, or loitering outside of a warehouse, or a particular celebrity or athlete, or a business tracking a specific worker) located in the scene (e.g., the captured image data).

[00344] Referring again to Fig. 11E, operation 1142 may include operation 1148 depicting acquiring a request for a basketball located in the scene that is a basketball arena. For example, Fig. 6, e.g., Fig. 6E, shows request for a basketball located in the scene acquiring module 648 acquiring a request for a basketball (e.g., the image data corresponding to a basketball) located in the scene that is a basketball arena.

[00345] Referring again to Fig. 11E, operation 1142 may include operation 1150 depicting acquiring a request for a motor vehicle located in the scene. For example, Fig. 6, e.g., Fig. 6E, shows request for a motor vehicle located in the scene acquiring module 650 acquiring a request for a motor vehicle located in the scene.

[00346] Referring again to Fig. 11E, operation 1142 may include operation 1152 depicting acquiring a request for any human object representations located in the scene. For example, Fig. 6, e.g., Fig. 6E, shows request for a human object representation located in the scene acquiring module 652 acquiring a request for any human object representations (e.g., when any image data corresponding to a human walks by, e.g., for a security camera application, or an application that takes an action when a person approaches, e.g., an automated terminal) located in the scene.

[00347] Referring again to Fig. 11E, operation 1142 may include operation 1153 depicting determining the particular image data of the scene that contains the particular image object through application of automated pattern recognition to scene image data. For example, Fig. 6, e.g., Fig. 6E, shows particular image data of the scene that contains the particular image object determining through automated pattern recognition application to scene data module 653 determining the particular image data of the scene (e.g., a tennis match) that contains the particular image object (e.g., a tennis player) through application of automated pattern recognition (e.g., recognizing human images through machine recognition, e.g., shape-based classification, head-shoulder detection, motion-based detection, and component-based detection) to scene image data.
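
As a purely illustrative aside, one way the pattern-recognition step above might be realized is with template matching, which is just one of the techniques named in the paragraph. This minimal sketch assumes OpenCV and NumPy are available; the function name and threshold are invented for illustration, not taken from the specification.

```python
# Minimal sketch: locate a requested image object inside scene image data
# using OpenCV template matching as one stand-in for automated pattern
# recognition. All names and values here are illustrative assumptions.
import cv2
import numpy as np

def find_object_region(scene: np.ndarray, template: np.ndarray, threshold: float = 0.8):
    """Return (x, y, w, h) of the best template match in the scene, or None."""
    result = cv2.matchTemplate(scene, template, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    if max_val < threshold:
        return None  # the object (e.g., the tennis player) was not found
    h, w = template.shape[:2]
    return (max_loc[0], max_loc[1], w, h)
```

A returned rectangle could then serve as the "particular image data" boundary; a production system would more likely use a trained detector than a fixed template.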

[00348] Referring again to Fig. 11E, operation 1144 may include operation 1154 depicting determining the particular image data of the scene that contains the particular image object through identification of the particular image object in previous scene data that is image data of the scene from at least one previous moment in time. For example, Fig. 6, e.g., Fig. 6E, shows particular image data of the scene that contains the particular image object determining through object identification in previous scene data module 654 determining the particular image data of the scene that contains the particular image object through identification of the particular image object in previous scene data that is image data of the scene from at least one previous moment in time.

[00349] Referring again to Fig. 11E, operation 1144 may include operation 1156 depicting determining the particular image data of the scene that contains the particular image object through identification of the particular image object in cached scene data. For example, Fig. 6, e.g., Fig. 6E, shows particular image data of the scene that contains the particular image object determining through object identification in cached previous scene data module 656 determining the particular image data of the scene that contains the particular image object through identification of the particular image object in cached scene data.

[00350] Referring again to Fig. 11E, operation 1156 may include operation 1158 depicting determining the particular image data of the scene that contains the particular image object through identification of the particular image object in cached scene data that was previously transmitted from the image sensor array that includes more than one image sensor. For example, Fig. 6, e.g., Fig. 6E, shows particular image data of the scene that contains the particular image object determining through object identification in cached previous scene data previously transmitted from the image sensor array module 658 determining the particular image data of the scene that contains the particular image object (e.g., a particular landmark, or animal at a watering hole) through identification of the particular image object (e.g., a lion at a watering hole) in cached scene data that was previously transmitted from the image sensor array (e.g., a set of twenty-five image sensors) that includes more than one image sensor (e.g., a three megapixel CMOS sensor).

[00351] Referring again to Fig. 11E, operation 1156 may include operation 1160 depicting determining the particular image data of the scene that contains the particular image object through identification of the particular image object in cached scene data that was previously transmitted from the image sensor array that includes more than one image sensor at a time when bandwidth was available for a connection to the image sensor array that includes more than one image sensor. For example, Fig. 6, e.g., Fig. 6E, shows particular image data of the scene that contains the particular image object determining through object identification in cached previous scene data previously transmitted from the image sensor array at a particular time module 660 determining the particular image data of the scene that contains the particular image object (e.g., a specific item in a shopping cart that doesn't match a cash-register generated list of what was purchased by the person wheeling the cart) through identification of the particular image object (e.g., the specific item, e.g., a toaster oven) in cached scene data (e.g., data that is stored in the server that was from a previous point in time, whether one-millionth of a second previously or years previously, although in the example the cached scene data is from a previous frame, e.g., less than one second prior) that was previously transmitted from the image sensor array that includes more than one image sensor at a time when bandwidth was available for connection to the image sensor array that includes more than one image sensor.
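
To make the cached-scene-data idea concrete, here is a hedged sketch of a server-side frame cache that stores data previously transmitted from the array (e.g., when bandwidth was available) and serves lookups without re-contacting the array. The class and method names are invented; nothing here is from the patent text.

```python
# Illustrative server-side cache of scene frames previously transmitted from
# the image sensor array; object searches can run against cached frames
# instead of requesting fresh data over a constrained link.
import time

class SceneCache:
    def __init__(self):
        self._frames = []  # list of (timestamp, frame) tuples, oldest first

    def store(self, frame):
        """Store a frame received while bandwidth to the array was available."""
        self._frames.append((time.time(), frame))

    def most_recent(self, max_age_seconds: float = 1.0):
        """Return the newest cached frame no older than max_age_seconds, else None
        (the example above uses a previous frame less than one second old)."""
        if not self._frames:
            return None
        ts, frame = self._frames[-1]
        return frame if time.time() - ts <= max_age_seconds else None
```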

[00352] Referring now to Fig. 11F, operation 1002 may include operation 1162 depicting receiving a first request for first particular image data from the scene from a first requestor. For example, Fig. 6, e.g., Fig. 6F, shows first request for first particular image data from a first requestor receiving module 662 receiving a first request (e.g., a request for a 1920x1080 "HD" view) for first particular image data (e.g., a first animal, e.g., a tiger, at a watering hole scene) from the scene (e.g., a watering hole) from a first requestor (e.g., a family watching the watering hole from an internet-connected television).

[00353] Referring again to Fig. 11F, operation 1002 may include operation 1164, which may appear in conjunction with operation 1162, operation 1164 depicting receiving a second request for second particular image data from the scene from a second requestor that is different than the first requestor. For example, Fig. 6, e.g., Fig. 6F, shows second request for second particular image data from a different second requestor receiving module receiving a second request (e.g., a 640x480 view for a smartphone) for second particular image data (e.g., a second animal, e.g., a pelican) from the scene (e.g., a watering hole) from a second requestor (e.g., a person watching a stream of the watering hole on their smartphone).

[00354] Referring again to Fig. 11F, operation 1002 may include operation 1166 depicting combining a received first request for first particular image data from the scene and a received second request for second particular image data from the scene. For example, Fig. 6, e.g., Fig. 6F, shows first received request for first particular image data and second received request for second particular image data combining module 666 combining a received first request for first particular image data (e.g., a request to watch a running back of a football team) from the scene and a received second request for second particular image data (e.g., a request to watch a quarterback of the same football team) from the scene (e.g., a football game).

[00355] Referring again to Fig. 11F, operation 1166 may include operation 1168 depicting combining the received first request for first particular image data from the scene and the received second request for second particular image data from the scene into the request for particular image data that consolidates overlapping requested image data. For example, Fig. 6, e.g., Fig. 6F, shows first received request for first particular image data and second received request for second particular image data combining into the request for particular image data module 668 combining the received first request for first particular image data (e.g., request from device 502A, as shown in Fig. 5B) from the scene and the received second request for second particular image data (e.g., the request from device 502B, as shown in Fig. 5B) from the scene into the request for particular image data that consolidates overlapping requested image data (e.g., the selected pixels 574, as shown in Fig. 5B).
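
One plausible reading of the request-combining step in operations 1166 and 1168 is a rectangle union: two requested pixel regions collapse into a single consolidated region sent to the array. The sketch below is an illustrative assumption of that strategy, not the claimed method; regions are (x, y, w, h) tuples and the names are invented.

```python
# Hypothetical helper: combine two requested pixel regions into one
# consolidated request covering both, so the array is queried once.
def combine_requests(a, b):
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    x1, y1 = min(ax, bx), min(ay, by)
    x2, y2 = max(ax + aw, bx + bw), max(ay + ah, by + bh)
    return (x1, y1, x2 - x1, y2 - y1)

# e.g., a request to watch the running back and a request to watch the
# quarterback collapse into one rectangle of selected pixels:
merged = combine_requests((100, 200, 640, 480), (600, 250, 640, 480))
```

A bounding-box union may include some pixels neither requestor asked for; a finer-grained system could instead consolidate at the tile or pixel level, as sketched after operation 1174 below.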

[00356] Referring again to Fig. 11F, operation 1166 may include operation 1170 depicting receiving a first request for first particular image data from the scene and a second request for second particular image data from the scene. For example, Fig. 6, e.g., Fig. 6F, shows first request for first particular image data and second request for second particular image data receiving module 670 receiving a first request for first particular image data (e.g., image data of a particular street corner from a street view) from the scene (e.g., a live street view of DoG street in Alexandria, VA) and a second request for second particular image data (e.g., image data of the opposite corner of the live street view) from the scene (e.g., the live street view of DoG street in Alexandria, VA).

[00357] Referring again to Fig. 11F, operation 1002 may include operation 1172, which may appear in conjunction with operation 1170, operation 1172 depicting combining the received first request and the received second request into the request for particular image data. For example, Fig. 6, e.g., Fig. 6F, shows received first request and received second request combining module 672 combining the received first request (e.g., a 1920x1080 request for a virtual tourism view of the Sphinx) and the received second request (e.g., a 410x210 request for a virtual tourism view of an overlapping, but different part of the Sphinx) into the request for particular image data (e.g., the request that will be sent to the image sensor array that regards which pixels will be kept).

[00358] Referring again to Fig. 11F, operation 1172 may include operation 1174 depicting removing common pixel data between the received first request and the received second request. For example, Fig. 6, e.g., Fig. 6F, shows received first request and received second request common pixel deduplicating module 674 removing (e.g., deleting, marking, flagging, erasing, storing in a different format, storing in a different place, coding/compressing using a different algorithm, changing but not necessarily destroying, destroying, allowing to be written over by new data, etc.) common pixel data (e.g., pixel data that was part of more than one request) between the received first request (e.g., a request to view the left fielder of the Washington Nationals from a baseball game) and the received second request (e.g., a request to view the right fielder of the Washington Nationals from a baseball game).
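
A tiny sketch of the deduplication in operation 1174: pixels requested by both requestors appear only once in the consolidated request. Modeling regions as sets of pixel coordinates is purely for illustration; a real system would likely operate on rectangles or tile indices for efficiency.

```python
# Illustrative pixel-level deduplication between two overlapping requests.
def pixels(rect):
    """Expand an (x, y, w, h) rectangle into a set of pixel coordinates."""
    x, y, w, h = rect
    return {(i, j) for i in range(x, x + w) for j in range(y, y + h)}

first = pixels((0, 0, 4, 4))    # e.g., the left fielder's region
second = pixels((2, 0, 4, 4))   # e.g., the right fielder's region
consolidated = first | second   # each common pixel is requested only once
overlap = first & second        # pixels that would otherwise be sent twice
```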

[00359] Referring now to Fig. 11G, operation 1002 may include operation 1176 depicting acquiring the request for particular image data that is part of the scene, wherein the particular image data includes video data. For example, Fig. 6, e.g., Fig. 6G, shows request for particular video data that is part of a scene acquiring module 676 acquiring the request for particular image data that is part of the scene, wherein the particular image data includes video data (e.g., streaming data, e.g., as in a live street view of a corner near the Verizon Center in Washington, DC).

[00360] Referring again to Fig. 11G, operation 1002 may include operation 1178 depicting acquiring the request for particular image data that is part of the scene, wherein the particular image data includes audio data. For example, Fig. 6, e.g., Fig. 6G, shows request for particular audio data that is part of a scene acquiring module 678 acquiring the request for particular image data that is part of the scene, wherein the particular image data includes audio data (e.g., data of the sounds at an oasis, or of people in the image that are speaking).

[00361] Referring again to Fig. 11G, operation 1002 may include operation 1180 depicting acquiring the request for particular image data that is part of the scene from a user device that receives the request for particular image data through an audio interface. For example, Fig. 6, e.g., Fig. 6G, shows request for particular image data that is part of a scene receiving from a user device with an audio interface module 680 acquiring the request for particular image data (e.g., to watch a particular person on the field at a football game, e.g., the quarterback) that is part of the scene (e.g., the scene of a football stadium during a game) from a user device (e.g., an internet-connected television) that receives the request for particular image data through an audio interface (e.g., the person speaks to an interface built into the television to instruct the television regarding which player to follow).

[00362] Referring again to Fig. 11G, operation 1002 may include operation 1182 depicting acquiring the request for particular image data that is part of the scene from a user device that has a microphone that receives a spoken request for particular image data from the user. For example, Fig. 6, e.g., Fig. 6G, shows request for particular image data that is part of a scene receiving from a microphone-equipped user device with an audio interface module 682 acquiring the request for particular image data (e.g., an image of a cheetah at a jungle oasis) that is part of the scene (e.g., a jungle oasis) from a user device that has a microphone (e.g., a smartphone device) that receives a spoken request (e.g., "zoom in on the cheetah") for particular image data (e.g., an image of a cheetah at a jungle oasis) from the user (e.g., the person operating the smartphone device that wants to zoom in on the cheetah).

[00363] Figs. 12A-12E depict various implementations of operation 1004, depicting transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data, according to embodiments. Referring now to Fig. 12A, operation 1004 may include operation 1202 depicting transmitting the request for the particular image data of the scene to the image sensor array that includes more than one image sensor and that is configured to capture an image that is larger than the requested image data. For example, Fig. 7, e.g., Fig. 7A, shows request for particular image data transmitting to an image sensor array that includes more than one image sensor and to capture a larger image module 702 transmitting the request for the particular image data of the scene to the image sensor array (e.g., an array of twelve sensors of ten megapixels each) that includes more than one image sensor and that is configured to capture an image that is larger than the requested image data (e.g., the requested image data is 1920x1080 (e.g., roughly 2 million pixels), and the captured area is 120,000,000 pixels, minus overlap).
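
The numbers in that example work out as follows (the values come from the example itself and are not a specification):

```python
# Worked arithmetic for the twelve-sensor example above.
sensors = 12
pixels_per_sensor = 10_000_000
captured = sensors * pixels_per_sensor   # 120,000,000 pixels, before overlap
requested = 1920 * 1080                  # 2,073,600 pixels, roughly 2 million
fraction = requested / captured          # under 2% of the captured scene
```

The transmitted particular image data is thus a small fraction of what the array captures, which is the bandwidth saving the operation is aimed at.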

[00364] Referring again to Fig. 12A, operation 1004 may include operation 1204 depicting transmitting the request for the particular image data of the scene to the image sensor array that includes two image sensors arranged side by side and angled toward each other and that is configured to capture an image that is larger than the requested image data. For example, Fig. 7, e.g., Fig. 7A, shows request for particular image data transmitting to an image sensor array module configured to transmit the request to the image sensor array that includes two angled image sensors and that is configured to capture the scene that is larger than the requested particular image data 704 transmitting the request for the particular image data of the scene (e.g., a chemistry lab) to the image sensor array that includes two image sensors arranged side by side and angled toward each other and that is configured to capture an image that is larger than the requested image data (e.g., the requested image is a zoomed-out view of the lab that can be expressed in 1.7 million pixels, but the cameras capture 10.5 million pixels).

[00365] Referring again to Fig. 12A, operation 1004 may include operation 1206 depicting transmitting the request for the particular image data of the scene to the image sensor array that includes the array of image sensors arranged in a grid and that is configured to capture an image that is larger than the requested image data. For example, Fig. 7, e.g., Fig. 7A, shows request for particular image data transmitting to an image sensor array module configured to transmit the request to the image sensor array that includes more than one image sensor arranged in a grid and that is configured to capture the scene that is larger than the requested particular image data 706 transmitting the request for the particular image data of the scene to the image sensor array that includes the array of image sensors arranged in a grid and that is configured to capture an image that is larger than the requested image data (e.g., the image data requested is of a smaller area (e.g., the area around a football player) than the image (e.g., the entire football field)).

[00366] Referring again to Fig. 12A, operation 1004 may include operation 1208 depicting transmitting the request for the particular image data of the scene to the image sensor array that includes the array of image sensors arranged in a line such that a field of view is greater than 120 degrees and that is configured to capture an image that is larger than the requested image data. For example, Fig. 7, e.g., Fig. 7A, shows request for particular image data transmitting to an image sensor array module configured to transmit the request to the image sensor array that includes more than one image sensor arranged in a line and that is configured to capture the scene that is larger than the requested particular image data 708 transmitting the request for the particular image data of the scene to the image sensor array that includes the array of image sensors arranged in a line such that a field of view is greater than 120 degrees and that is configured to capture an image (e.g., an image of a highway) that is larger than the requested image data (e.g., an image of one or more of the cars on the highway).

[00367] Referring again to Fig. 12A, operation 1004 may include operation 1210 depicting transmitting the request for the particular image data of the scene to the image sensor array that includes a grouping of nonlinearly, nonsequentially arranged stationary image sensors and that is configured to capture an image that is larger than the requested image data. For example, Fig. 7, e.g., Fig. 7A, shows request for particular image data transmitting to an image sensor array module configured to transmit the request to the image sensor array that includes more than one nonlinearly arranged stationary image sensor and that is configured to capture the scene that is larger than the requested particular image data 710 transmitting the request for the particular image data of the scene to the image sensor array that includes a grouping of nonlinearly, nonsequentially arranged stationary image sensors (e.g., five-megapixel CCD sensors) and that is configured to capture an image that is larger than the requested image data.

[00368] Referring now to Fig. 12B, operation 1004 may include operation 1212 depicting transmitting the request for the particular image data of the scene to the image sensor array that includes the array of static image sensors and that is configured to capture an image that is larger than the requested image data. For example, Fig. 7, e.g., Fig. 7B, shows request for particular image data transmitting to an image sensor array that includes an array of static image sensors module 712 transmitting the request for the particular image data of the scene to the image sensor array that includes the array of static image sensors and that is configured to capture an image that is larger than the requested image data.

[00369] Referring again to Fig. 12B, operation 1212 may include operation 1214 depicting transmitting the request for the particular image data of the scene to the image sensor array that includes the array of image sensors that have a fixed focal length and a fixed field of view and that is configured to capture an image that is larger than the requested image data. For example, Fig. 7, e.g., Fig. 7B, shows request for particular image data transmitting to an image sensor array that includes an array of static image sensors that have fixed focal length and fixed field of view module 714 transmitting the request for the particular image data (e.g., an image of a black bear) of the scene (e.g., a mountain watering hole) to the image sensor array that includes the array of image sensors (e.g., twenty-five megapixel CMOS sensors) that have a fixed focal length and a fixed field of view and that is configured to capture an image that is larger than the requested image data (e.g., the image requested is ultra high resolution but represents a smaller area than what is captured in the scene).

[00370] Referring again to Fig. 12B, operation 1004 may include operation 1216 depicting transmitting the request for the particular image data of the scene to the image sensor array that includes the array of image sensors mounted on a movable platform and that is configured to capture the scene that is larger than the requested image data. For example, Fig. 7, e.g., Fig. 7B, shows request for particular image data transmitting to an image sensor array that includes an array of image sensors mounted on a movable platform module 716 transmitting the request for the particular image data of the scene to the image sensor array that includes the array of image sensors mounted on a movable platform (e.g., a movable dish, or a UAV) and that is configured to capture the scene (e.g., the scene is a wide angle view of a city) that is larger than the requested image data (e.g., one building or street corner of the city).

[00371] Referring again to Fig. 12B, operation 1004 may include operation 1218 depicting transmitting the request for the particular image data of the scene to the image sensor array that includes more than one image sensor and that is configured to capture the scene that represents more image data than the requested particular image data. For example, Fig. 7, e.g., Fig. 7B, shows request for particular image data transmitting to an image sensor array module configured to transmit the request to the image sensor array that includes more than one image sensor and that is configured to capture the scene that represents more data than the requested particular image data 718 transmitting the request for the particular image data of the scene to the image sensor array that includes more than one image sensor and that is configured to capture the scene that represents more image data than the requested particular image data.

[00372] Referring again to Fig. 12B, operation 1218 may include operation 1220 depicting transmitting the request for the particular image data of the scene to the image sensor array that includes more than one image sensor and that is configured to capture the scene that represents more than ten times as much image data as the requested particular image data. For example, Fig. 7, e.g., Fig. 7B, shows request for particular image data transmitting to an image sensor array module configured to transmit the request to the image sensor array that includes more than one image sensor and that is configured to capture the scene that represents ten times as much data as the requested particular image data 720 transmitting the request for the particular image data of the scene to the image sensor array that includes more than one image sensor and that is configured to capture the scene that represents more than ten times (e.g., twenty million pixels) as much image data as the requested particular image data (e.g., 1.8 million pixels).

[00373] Referring again to Fig. 12B, operation 1218 may include operation 1222 depicting transmitting the request for the particular image data of the scene to the image sensor array that includes more than one image sensor and that is configured to capture the scene that represents more than one hundred times as much image data as the requested particular image data. For example, Fig. 7, e.g., Fig. 7B, shows request for particular image data transmitting to an image sensor array module configured to transmit the request to the image sensor array that includes more than one image sensor and that is configured to capture the scene that represents more than one hundred times as much data as the requested particular image data 722 transmitting the request for the particular image data (e.g., a 1920x1080 image of a red truck crossing a bridge) of the scene (e.g., a highway bridge) to the image sensor array (e.g., a set of one hundred sensors) that includes more than one image sensor (e.g., twenty sensors each of two megapixel, four megapixel, six megapixel, eight megapixel, and ten megapixel) and that is configured to capture the scene that represents more than one hundred times (e.g., 600 million pixels vs. the requested two million pixels) as much image data as the requested particular image data.

[00374] Referring again to Fig. 12B, operation 1004 may include operation 1224 depicting transmitting the request for the particular image data to the image sensor array that is configured to capture the scene that represents a greater field of view than the requested particular image data. For example, Fig. 7, e.g., Fig. 7B, shows request for particular image data transmitting to an image sensor array module configured to transmit the request to the image sensor array that includes more than one image sensor and that is configured to capture the scene that represents a greater field of view than the requested particular image data 724 transmitting the request for the particular image data (e.g., an image of a bakery shop on a corner) to the image sensor array that is configured to capture the scene (e.g., a live street view of a busy street corner) that represents a greater field of view (e.g., the entire corner) than the requested image data (e.g., just the bakery).

[00375] Referring now to Fig. 12C, operation 1004 may include operation 1226 depicting modifying the request for the particular image data. For example, Fig. 7, e.g., Fig. 7C, shows request for particular image data modifying module 726 modifying (e.g., altering, changing, adding to, subtracting from, deleting, supplementing, changing the form of, changing an attribute of, etc.) the request for the particular image data (e.g., a request for an image of a baseball player).

[00376] Referring again to Fig. 12C, operation 1004 may include operation 1228, which may appear in conjunction with operation 1226, operation 1228 depicting transmitting the modified request for the particular image data to the image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data. For example, Fig. 7, e.g., Fig. 7C, shows modified request for particular image data transmitting to an image sensor array module 728 transmitting the modified request (e.g., the request increases the area around the baseball player that was requested) for the particular image data to the image sensor array that includes more than one image sensor and that is configured to capture the scene (e.g., a baseball game at a baseball stadium) that is larger than the requested particular image data.

[00377] Referring again to Fig. 12C, operation 1226 may include operation 1230 depicting removing designated image data from the request for the particular image data. For example, Fig. 7, e.g., Fig. 7C, shows designated image data removing from request for particular image data module 730 removing designated image data (e.g., image data of a static object that has already been captured and stored in memory, e.g., a building from a live street view, or a car that has not moved since the last request) from the request for the particular image data (e.g., a request to see a part of the live street view).

[00378] Referring again to Fig. 12C, operation 1230 may include operation 1232 depicting removing designated image data from the request for the particular image data based on previously stored image data. For example, Fig. 7, e.g., Fig. 7C, shows designated image data removing from request for particular image data based on previously stored image data module 732 removing designated image data (e.g., image data of a static object that has already been captured and stored in memory, e.g., a building from a live street view, or a car that has not moved since the last request) from the request for the particular image data (e.g., a request to see a part of the live street view) based on previously stored image data (e.g., the most recently requested image has the car in it already, and so it will not be checked again for another sixty frames of captured image data, for example).
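
A minimal sketch of the decision in operation 1232, assuming the sixty-frame recheck interval from the example above. The function name, the `static_regions` bookkeeping, and the modulo-based schedule are all illustrative assumptions rather than the claimed mechanism.

```python
# Illustrative request filter: a region marked static (e.g., the parked car)
# is dropped from the request and only re-requested every `recheck` frames.
RECHECK_INTERVAL = 60  # frames between forced rechecks, per the example

def should_request(region, frame_index, static_regions, recheck=RECHECK_INTERVAL):
    """Include a region in the request unless previously stored image data
    marks it static and it is not yet due for a periodic recheck."""
    if region not in static_regions:
        return True
    return frame_index % recheck == 0
```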

[00379] Referring again to Fig. 12C, operation 1232 may include operation 1234 depicting removing designated image data from the request for the particular image data based on previously stored image data retrieved from the image sensor array. For example, Fig. 7, e.g., Fig. 7C, shows designated image data removing from request for particular image data based on previously stored image data retrieved from the image sensor array module 734 removing designated image data from the request for the particular image data based on previously stored image data retrieved from the image sensor array (e.g., the image sensor array sent an older version of the data that included a static object, e.g., a part of a bridge when the scene is a highway bridge, and so the static part of the bridge is removed from the request for the scene).

[00380] Referring again to Fig. 12C, operation 1232 may include operation 1236 depicting removing designated image data from the request for the particular image data based on previously stored image data that is an earlier-in-time version of the designated image data. For example, Fig. 7, e.g., Fig. 7C, shows designated image data removing from request for particular image data based on previously stored image data that is an earlier-in-time version of the designated image data module 736 removing designated image data (e.g., portions of a stadium) from the request for the particular image data (e.g., a request to view a player inside a stadium for a game) based on previously stored image data (e.g., image data of the stadium) that is an earlier-in-time version of the designated image data (e.g., the image data of the stadium from one hour previous, or from one frame previous).

[00381] Referring again to Fig. 12C, operation 1236 may include operation 1238 depicting removing image data of one or more static objects from the request for the particular image data that is a view of a street, based on previously stored image data of the one or more static objects from an earlier-in-time version captured by the image sensor array. For example, Fig. 7, e.g., Fig. 7C, shows designated image data removing from request for particular image data based on previously stored image data that is an earlier-in-time version of the designated image data that is a static object module 738 removing image data of one or more static objects from the request for the particular image data that is a view of a street, based on previously stored image data of the one or more static objects from an earlier-in-time version captured by the image sensor array.

[00382] Referring now to Fig. 12D, operation 1226 may include operation 1240 depicting removing portions of the request for the particular image data through pixel interpolation or extrapolation of portions of the request for the particular image data. For example, Fig. 7, e.g., Fig. 7D, shows designated image data removing from request for particular image data based on pixel data interpolation/extrapolation module 740 removing portions of the request for the particular image data (e.g., portions of a uniform building) through pixel interpolation (e.g., filling in the middle of the building based on extrapolation of a known pattern of the building) of portions of the request for the particular image data (e.g., a request for a live street view that includes a building).

[00383] Referring again to Fig. 12D, operation 1240 may include operation 1242 depicting removing one or more static objects from the request for the particular image data through pixel interpolation of the portions of the request for the particular image data. For example, Fig. 7, e.g., Fig. 7D, shows designated image data corresponding to one or more static image objects removing from request for particular image data based on pixel data interpolation/extrapolation module 742 removing one or more static objects (e.g., a brick of a pyramid) through pixel interpolation (e.g., filling in the middle of the pyramid based on extrapolation of a known pattern of the pyramid) of portions of the request for the particular image data (e.g., a request for a live street view that includes a building).
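
One concrete reading of the interpolation/extrapolation in operations 1240-1242 is to synthesize the uniform middle of a facade by tiling a small sampled pattern, so those pixels never need to be requested. This sketch assumes NumPy and a 2-D grayscale pattern; it is illustrative only.

```python
# Illustrative pattern extrapolation: tile one sampled unit (e.g., one brick)
# to cover a larger area, standing in for the pixels removed from the request.
import numpy as np

def fill_repeating(pattern: np.ndarray, out_h: int, out_w: int) -> np.ndarray:
    """Tile a small 2-D pattern to an out_h x out_w region."""
    reps_y = -(-out_h // pattern.shape[0])  # ceiling division
    reps_x = -(-out_w // pattern.shape[1])
    return np.tile(pattern, (reps_y, reps_x))[:out_h, :out_w]
```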

[00384] Referring again to Fig. 12D, operation 1226 may include operation 1244 depicting identifying at least one portion of the request for the particular image data that is already stored in a memory. For example, Fig. 7, e.g., Fig. 7D, shows portion of the request for particular image data that was previously stored in memory identifying module 744 identifying at least one portion of the request for the particular image data (e.g., a request for a virtual tourism exhibit of which a part has been cached in memory from a previous access) that is already stored in a memory (e.g., a memory of the server device, e.g., memory 245).

[00385] Referring again to Fig. 12D, operation 1226 may include operation 1246, which may appear in conjunction with operation 1244, operation 1246 depicting removing the identified portion of the request for the particular image data. For example, Fig. 7, e.g., Fig. 7D, shows identified portion of the request for the particular image data removing module 746 removing the identified portion of the request for the particular image data (e.g., removing the part of the request that requests the image data that is already stored in a memory of the server).

[00386] Referring again to Fig. 12D, operation 1226 may include operation 1248 depicting identifying at least one portion of the request for particular image data that was previously captured by the image sensor array. For example, Fig. 7, e.g., Fig. 7D, shows portion of the request for the particular image data that was previously captured by the image sensor array identifying module 748 identifying at least one portion of the request for particular image data that was previously captured by the image sensor array (e.g., an array of twenty-five two megapixel CMOS sensors).

[00387] Referring again to Fig. 12D, operation 1226 may include operation 1250 depicting identifying one or more static objects of the request for particular image data that was previously captured by the image sensor array. For example, Fig. 7, e.g., Fig. 7D, shows portion of the request for the particular image data that includes at least one static image object that was previously captured by the image sensor array identifying module 750 identifying one or more static objects (e.g., buildings, roads, trees, etc.) of the request for particular image data (e.g., image data of a part of a rural town) that was previously captured by the image sensor array.

[00388] Referring again to Fig. 12D, operation 1250 may include operation 1252 depicting identifying one or more natural rock outcroppings of the request for particular image data that was previously captured by the image sensor array. For example, Fig. 7, e.g., Fig. 7D, shows portion of the request for the particular image data that includes at least one static image object of a rock outcropping that was previously captured by the image sensor array identifying module 752 identifying one or more natural rock outcroppings of the request for particular image data that was previously captured by the image sensor array (e.g., an array of twenty-five two megapixel CMOS sensors).

[00389] Referring now to Fig. 12E, operation 1004 may include operation 1254 depicting determining a size of the request for the particular image data. For example, Fig. 7, e.g., Fig. 7E, shows size of request for particular image data determining module 754 determining a size (e.g., a number of pixels, or a transmission speed, or a number of frames per second) of the request for the particular image data (e.g., data of a lion at a jungle oasis).

[00390] Referring again to Fig. 12E, operation 1004 may include operation 1256, which may appear in conjunction with operation 1254, operation 1256 depicting transmitting the request for the particular image data for which the size has been determined to the image sensor array that is configured to capture the scene. For example, Fig. 7, e.g., Fig. 7E, shows determined-size request for particular image data transmitting to the image sensor array module 756 transmitting the request for the particular image data for which the size has been determined to the image sensor array that is configured to capture the scene (e.g., a scene of an interior of a home).

[00391] Referring again to Fig. 12E, operation 1254 may include operation 1258 depicting determining the size of the request for the particular image data at least partially based on a property of a user device that requested at least a portion of the particular image data. For example, Fig. 7, e.g., Fig. 7E, shows size of request for particular image determining at least partially based on user device property module 758 determining the size (e.g., the horizontal and vertical resolutions, e.g., 1920x1080) of the request for the particular image data at least partially based on a property of a user device that requested at least a portion of the particular image data.

[00392] Referring again to Fig. 12E, operation 1258 may include operation 1260 depicting determining the size of the request for the particular image data at least partially based on a resolution of the user device that requested the particular image data. For example, Fig. 7, e.g., Fig. 7E, shows size of request for particular image determining at least partially based on user device resolution module 760 determining the size of the request for the particular image data at least partially based on a resolution of the user device that requested the particular image data.

[00393] Referring again to Fig. 12E, operation 1254 may include operation 1262 depicting determining the size of the request for the particular image data at least partially based on a device access level of a user device that requested at least a portion of the particular image data. For example, Fig. 7, e.g., Fig. 7E, shows size of request for particular image determining at least partially based on user device access level module 762 determining the size of the request for the particular image data at least partially based on a device access level of a user device that requested at least a portion of the particular image data (e.g., whether the user has paid for the service, or what level of service the user has subscribed to, or whether other "superusers" are present that demand higher bandwidth and receive priority in receiving images).

[00394] Referring again to Fig. 12E, operation 1254 may include operation 1264 depicting determining the size of the request for the particular image data at least partially based on an available amount of bandwidth for communication with the image sensor array. For example, Fig. 7, e.g., Fig. 7E, shows size of request for particular image determining at least partially based on available bandwidth module 764 determining the size of the request for the particular image data at least partially based on an available amount of bandwidth for communication with the image sensor array (e.g., a set of twenty-five image sensors lined on each face of a twenty-five-sided polygonal structure).

[00395] Referring again to Fig. 12E, operation 1254 may include operation 1266 depicting determining the size of the request for the particular image data at least partially based on an amount of time that a user device that requested at least a portion of the particular image data has requested image data. For example, Fig. 7, e.g., Fig. 7E, shows size of request for particular image determining at least partially based on device usage time module 766 determining the size of the request for the particular image data at least partially based on an amount of time that a user device that requested at least a portion of the particular image data has requested image data (e.g., devices that have waited longer may get preference; or, once a device has been sent a requested image, that device may move to the back of a queue for image data requests).

[00396] Referring again to Fig. 12E, operation 1254 may include operation 1268 depicting determining the size of the request for the particular image data at least partially based on an available amount of bandwidth for communication with a user device that requested at least a portion of the particular image data. For example, Fig. 7, e.g., Fig. 7E, shows size of request for particular image determining at least partially based on device available bandwidth module 768 determining the size of the request for the particular image data at least partially based on an available amount of bandwidth for communication with a user device that requested at least a portion of the particular image data (e.g., based on a connection between the user device and the server, e.g., if the bandwidth to the user device is a limiting factor, that may be taken into account and used in setting the size of the request for the particular image data).
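
Operations 1258 through 1268 name several inputs to the size determination: device resolution, access level, and the bandwidth limits on both the array link and the device link. The sketch below combines them in one invented policy; the weighting, the access-level caps, and every name here are assumptions for illustration, not the patented method.

```python
# Illustrative size policy: clamp the requested pixel count to the device's
# resolution, its access level, and the tighter of the two bandwidth budgets.
def request_size(device_w, device_h, access_level,
                 array_link_budget_px, device_link_budget_px):
    wanted = device_w * device_h  # e.g., 1920 * 1080 for an HD television
    level_cap = {                 # hypothetical per-tier caps
        "free": 640 * 480,
        "paid": 1920 * 1080,
        "superuser": 4096 * 2160,
    }
    allowed = min(wanted, level_cap.get(access_level, 640 * 480))
    return min(allowed, array_link_budget_px, device_link_budget_px)

# e.g., a paid smartphone user on a constrained cellular link:
size = request_size(1920, 1080, "paid", 10_000_000, 640 * 480)
```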

[00397] Figs. 13A-13C depict various implementations of operation 1006, depicting receiving only the particular image data from the image sensor array, according to embodiments. Referring now to Fig. 13A, operation 1006 may include operation 1302 depicting receiving only the particular image data from the image sensor array, wherein data from the scene other than the particular image data is discarded. For example, Fig. 8, e.g., Fig. 8A, shows particular image data from the image sensor array in which other image data is discarded receiving module 802 receiving only the particular image data from the image sensor array, wherein data from the scene other than the particular image data is discarded (e.g., the data may be stored, at least temporarily, but is not stored in a place where overwriting will be prevented, as in a persistent memory).
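
On the array side, the behavior in operation 1302 might be sketched as a crop: only the requested slice is copied out for transmission, while the full capture buffer is simply left to be overwritten by the next frame. The sketch assumes NumPy arrays as stand-ins for sensor output; names are invented.

```python
# Illustrative array-side extraction: transmit only the requested pixels and
# let the rest of the captured scene be discarded (overwritten, not persisted).
import numpy as np

def extract_particular(scene: np.ndarray, request):
    """Return a copy of the requested (x, y, w, h) region of the capture."""
    x, y, w, h = request
    particular = scene[y:y + h, x:x + w].copy()  # only this slice is sent
    # `scene` stays in volatile memory and is overwritten by the next capture,
    # which matches "discarded" in the non-persistent sense described above.
    return particular
```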

[00398] Referring again to Fig. 13A, operation 1006 may include operation 1304 depicting receiving only the particular image data from the image sensor array, wherein data from the scene other than the particular image data is stored at the image sensor array. For example, Fig. 8, e.g., Fig. 8A, shows particular image data from the image sensor array in which other image data is stored at the image sensor array receiving module 804 receiving only the particular image data (e.g., an image of a polar bear and a penguin) from the image sensor array (e.g., twenty-five CMOS sensors), wherein data from the scene (e.g., an Antarctic ice floe) other than the particular image data is stored at the image sensor array (e.g., a grouping of twenty-five CMOS sensors).

[00399] Referring again to Fig. 13A, operation 1006 may include operation 1306 depicting receiving the particular image data from the image sensor array in near-real time. For example, Fig. 8, e.g., Fig. 8A, shows particular image data from the image sensor array exclusive near-real-time receiving module 806 receiving the particular image data from the image sensor array in near-real time (e.g., not necessarily as something is happening, but near enough to give an appearance of real-time).

[00400] Referring now to Fig. 13B, operation 1006 may include operation 1308 depicting receiving the particular image data from the image sensor array in near-real time. For example, Fig. 8, e.g., Fig. 8B, shows particular image data from the image sensor array exclusive near-real-time receiving module 808 receiving the particular image data (e.g., an image of a person walking across a street captured in a live street view setting) from the image sensor array (e.g., two hundred ten-megapixel sensors) in near-real time.

[00401] Referring again to Fig. 13B, operation 1006 may include operation 1310, which may appear in conjunction with operation 1308, operation 1310 depicting retrieving data from the scene other than the particular image data at a later time. For example, Fig. 8, e.g., Fig. 8B, shows data from the scene other than the particular image data retrieving at a later time module 810 retrieving data from the scene (e.g., a scene of a mountain pass) other than the particular image data (e.g., the data that was not requested, e.g., data that no user requested but that was captured by the image sensor array and not transmitted to the remote server) at a later time (e.g., at an off-peak time when more bandwidth is available, e.g., fewer users are using the system).

[00402] Referring again to Fig. 13B, operation 1310 may include operation 1312 depicting retrieving data from the scene other than the particular image data at a time at which bandwidth is available to the image sensor array. For example, Fig. 8, e.g., Fig. 8B, shows data from the scene other than the particular image data retrieving at a time of available bandwidth module 812 retrieving data from the scene other than the particular image data (e.g., data that was not requested) at a time at which bandwidth is available to the image sensor array (e.g., the image sensor array is not using all of its allotted bandwidth to handle requests for portions of the scene, and has available bandwidth to transmit data that can be retrieved that is other than the requested particular image data).

[00403] Referring again to Fig. 13B, operation 1310 may include operation 1314 depicting retrieving data from the scene other than the particular image data at a time that represents off-peak usage for the image sensor array. For example, Fig. 8, e.g., Fig. 8B, shows data from the scene other than the particular image data retrieving at an off-peak usage time of the image sensor array module 814 retrieving data from the scene other than the particular image data (e.g., data that was not requested) at a time that represents off-peak usage (e.g., the image sensor array may be capturing a city street, so off-peak usage would be at night; or the image sensor array may be a security camera, so off-peak usage may be the middle of the day, or off-peak usage may be flexible based on previous time period analysis, e.g., could also mean any time the image sensor array is not using all of its allotted bandwidth to handle requests for portions of the scene, and has available bandwidth to transmit data that can be retrieved that is other than the requested particular image data) for the image sensor array.
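
A hedged sketch of the scheduling idea running through operations 1310-1318: leftover (unrequested) scene data is fetched only when measured demand leaves headroom on the link to the array. The headroom margin and every name are invented for illustration.

```python
# Illustrative off-peak gate for retrieving unrequested scene data.
def can_retrieve_leftovers(current_demand_px_per_s, link_capacity_px_per_s,
                           active_requestors, margin=0.2):
    """True when the array link has at least `margin` spare capacity,
    e.g., overnight for a street-view array, or midday for a security camera."""
    if active_requestors == 0:
        return True  # no particular image data is being requested at all
    headroom = 1.0 - current_demand_px_per_s / link_capacity_px_per_s
    return headroom >= margin
```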

[00404] Referring again to Fig. 13B, operation 1310 may include operation 1316 depicting retrieving data from the scene other than the particular image data at a time when no particular image data is requested. For example, Fig. 8, e.g., Fig. 8B, shows data from the scene other than the particular image data retrieving at a time when no particular image data requests are present at the image sensor array module 816 retrieving data from the scene other than the particular image data at a time when no particular image data is requested.

[00405] Referring again to Fig. 13B, operation 1310 may include operation 1318 depicting retrieving data from the scene other than the particular image data at a time at which fewer users are requesting particular image data than for which the image sensor array has capacity. For example, Fig. 8, e.g., Fig. 8B, shows data from the scene other than the particular image data retrieving at a time of available image sensor array capacity module 818 retrieving data from the scene other than the particular image data at a time at which fewer users are requesting particular image data than for which the image sensor array has capacity.

[00406] Referring now to Fig. 13C, operation 1006 may include operation 1320 depicting receiving only the particular image data that includes audio data from the sensor array. For example, Fig. 8, e.g., Fig. 8C, shows particular image data that includes audio data from the image sensor array exclusive receiving module 820 receiving only the particular image data (e.g., image data from a watering hole) that includes audio data (e.g., sound data, e.g., as picked up by a microphone) from the sensor array (e.g., the image sensor array may include one or more microphones or other sound-collecting devices, either separately from or linked to image capturing sensors).

[00407] Referring again to Fig. 13C, operation 1006 may include operation 1322 depicting receiving only the particular image data that was determined to contain a particular requested image object from the image sensor array. For example, Fig. 8, e.g., Fig. 8C, shows particular image data that was determined to contain a particular requested image object from the image sensor array exclusive receiving module 822 receiving only the particular image data that was determined to contain a particular requested image object (e.g., a particular football player from a football game that is the scene) from the image sensor array.

[00408] Referring again to Fig. 13C, operation 1322 may include operation 1324 depicting receiving only the particular image data that was determined to contain a particular requested image object by the image sensor array. For example, Fig. 8, e.g., Fig. 8C, shows particular image data that was determined to contain a particular requested image object by the image sensor array exclusive receiving module 824 receiving only the particular image data that was determined to contain a particular requested image object (e.g., a lion at a watering hole) by the image sensor array (e.g., the image sensor array performs the pattern recognition and identifies the particular image data, which may only have been identified as "the image data that contains the lion," and only that particular image data is transmitted and thus received by the server).

[00409] Figs. 14A-14E depict various implementations of operation 1008, depicting transmitting the received particular image data to at least one requestor, according to embodiments. Referring now to Fig. 14A, operation 1008 may include operation 1402 depicting transmitting the received particular image data to a user device. For example, Fig. 9, e.g., Fig. 9A, shows received particular image data transmitting to at least one user device requestor module 902 transmitting the received particular image data (e.g., an image of a quarterback at a National Football League game) to a user device (e.g., a television connected to the internet).

[00410] Referring again to Fig. 14A, operation 1402 may include operation 1404 depicting transmitting at least a portion of the received particular image data to a user device that requested the particular image data. For example, Fig. 9, e.g., Fig. 9A, shows received particular image data transmitting to at least one user device that requested at least a portion of the received particular data requestor module 904 transmitting at least a portion of the received particular image data (e.g., a portion that corresponds to a particular request received from a device, e.g., a request for a particular segment of the scene that shows a lion at a watering hole) to a user device (e.g., a computer device with a CPU and monitor) that requested the particular image data (e.g., the computer device requested the portion of the scene at which the lion is visible).

[00411] Referring again to Fig. 14A, operation 1008 may include operation 1406 depicting separating the received particular image data into a set of one or more requested images. For example, Fig. 9, e.g., Fig. 9A, shows separation of the received particular data into set of one or more requested images executing module 906 separating the received particular image data into a set of one or more requested images (e.g., if there were five requests for portions of the scene data, and some of the requests overlapped, the image data may be duplicated and packaged such that each requesting device receives the pixels that were requested).
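
The separation in operation 1406 can be pictured as slicing the one consolidated block received from the array back into per-requestor crops, duplicating any overlapping pixels so each device gets exactly what it asked for. This NumPy sketch is illustrative; the coordinate convention and names are assumptions, and requests are assumed to lie within the consolidated block.

```python
# Illustrative splitter: carve the consolidated received block into the
# individual requested images, duplicating overlapping pixels as needed.
import numpy as np

def split_for_requestors(consolidated: np.ndarray, origin, requests):
    """`origin` is the (x, y) scene offset of the consolidated block;
    `requests` maps requestor id -> (x, y, w, h) in scene coordinates."""
    ox, oy = origin
    out = {}
    for requestor, (x, y, w, h) in requests.items():
        out[requestor] = consolidated[y - oy:y - oy + h,
                                      x - ox:x - ox + w].copy()
    return out
```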

[00412] Referring again to Fig. 14A, operation 1008 may include operation 1408, which may appear in conjunction with operation 1406, operation 1408 depicting transmitting at least one image of the set of one or more requested images to a particular requestor that requested the at least one image. For example, Fig. 9, e.g., Fig. 9A, shows at least one image of the set of one or more requested images transmitting to a particular requestor that requested the one or more images transmitting module 908 transmitting at least one image of the set of one or more requested images to a particular requestor (e.g., a person operating a "virtual camera" that lets the person "see" the scene through the lens of a camera, even though the camera is spatially separated from the image sensor array, possibly by a large distance, because the image is transmitted to the camera).

[00413] Referring again to Fig. 14A, operation 1406 may include operation 1410 depicting separating the received particular image data into a first requested image and a second requested image. For example, Fig. 9, e.g., Fig. 9A, shows separation of the received particular data into a first requested image and a second requested image executing module 910 separating the received particular image data (e.g., image data from a jungle oasis) into a first requested image (e.g., an image of a lion) and a second requested image (e.g., an image of a hippopotamus).

[00414] Referring again to Fig. 14A, operation 1008 may include operation 1412 depicting transmitting the received particular image data to a user device that requested an image that is part of the received particular image data. For example, Fig. 9, e.g., Fig. 9A, shows received particular image data transmitting to at least one user device that requested image data that is part of the received particular image data module 912 transmitting the received particular image data to a user device that requested an image that is part of the received particular image data.

[00415] Referring now to Fig. 14B, operation 1008 may include operation 1414 depicting transmitting a first portion of the received particular image data to a first requestor. For example, Fig. 9, e.g., Fig. 9B, shows first portion of received particular image data transmitting to a first requestor module 914 transmitting a first portion (e.g., a part of an animal oasis that contains a zebra) of the received particular image data (e.g., image data from the oasis that contains a zebra) to a first requestor (e.g., device that requested video feed that is the portion of the oasis that contains the zebra, e.g., a television device).

[00416] Referring again to Fig. 14B, operation 1008 may include operation 1416, which may appear in conjunction with operation 1414, operation 1416 depicting transmitting a second portion of the received particular image data to a second requestor. For example, Fig. 9, e.g., Fig. 9B, shows second portion of received particular image data transmitting to a second requestor module 916 transmitting a second portion (e.g., a portion of the oasis that contains birds) of the received particular image data (e.g., image data from the oasis) to a second requestor (e.g., a device that requested the image that is the portion of the oasis that contains birds, e.g., a tablet device).

[00417] Referring again to Fig. 14B, operation 1414 may include operation 1418 depicting transmitting the first portion of the received particular image data to the first requestor that requested the first portion of the received particular image data. For example, Fig. 9, e.g., Fig. 9B, shows first portion of received particular image data transmitting to a first requestor that requested the first portion module 918 transmitting the first portion of the received particular image data (e.g., a portion that contains a particular landmark in a virtual tourism setting) to the first requestor that requested the first portion of the received particular image data.

[00418] Referring again to Fig. 14B, operation 1418 may include operation 1420 depicting transmitting an image that contains a particular football player to a television device that is configured to display a football game and that requested the image that contains the particular football player. For example, Fig. 9, e.g., Fig. 9B, shows portion of received particular image data that includes a particular football player transmitting to a television device that requested the football player from a football game module 920 transmitting an image that contains a particular football player to a television device that is configured to display a football game and that requested the image that contains the particular football player.

[00419] Referring again to Fig. 14B, operation 1416 may include operation 1422 depicting transmitting the second portion of the received particular image data to the second requestor that requested the second portion of the received particular image data. For example, Fig. 9, e.g., Fig. 9B, shows second portion of received particular image data transmitting to the second requestor that requested the second portion module 922 transmitting the second portion of the received particular image data (e.g., the portion of the received particular image data that includes the lion) to the second requestor that requested the second portion of the received particular image data (e.g., a person watching the feed on their television).

[00420] Referring again to Fig. 14B, operation 1422 may include operation 1424 depicting transmitting an image that contains a view of a motor vehicle to a tablet device that requested a street view image of a particular corner of a city. For example, Fig. 9, e.g., Fig. 9B, shows portion that contains a view of a motor vehicle transmitting to the second requestor that is a tablet device that requested the view of the motor vehicle module 924 transmitting an image that contains a view of a motor vehicle (e.g., a Honda Accord) to a tablet device that requested a street view image of a particular corner of a city (e.g., Alexandria, VA).

[00421] Referring again to Fig. 14B, operation 1008 may include operation 1426 depicting transmitting at least a portion of the received particular image data without alteration to at least one requestor. For example, Fig. 9, e.g., Fig. 9B, shows received particular image data unaltered transmitting to at least one requestor module 926 transmitting at least a portion of the received particular image data (e.g., an image of animals at an oasis) without alteration (e.g., without altering how the image appears to human eyes, e.g., there may be data manipulation that is not visible) to at least one requestor (e.g., the device that requested the image, e.g., a mobile device).

[00422] Referring now to Fig. 14C, operation 1008 may include operation 1428 depicting adding supplemental data to at least a portion of the received particular image data to generate transmission image data. For example, Fig. 9, e.g., Fig. 9C, shows supplemental data addition to at least a portion of the received particular image data to generate transmission image data facilitating module 928 adding supplemental data (e.g., context data, or advertisement data, or processing assistance data, or data regarding how to display or cache the image, whether visible in the image or embedded therein, or otherwise associated with the image) to at least a portion of the received particular image data (e.g., images from an animal watering hole) to generate transmission image data (e.g., image data that will be transmitted to the requestor, e.g., a user of a desktop computer).
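A minimal sketch of how supplemental data might be bundled with the image data, assuming a simple dictionary-based packaging; add_supplemental_data and its fields are hypothetical illustrations, not the disclosed implementation:

    def add_supplemental_data(image_bytes, supplemental):
        # Bundle supplemental data (context, advertisement, or caching
        # hints) with the pixel data to form the transmission image data;
        # it may later be rendered visibly or kept embedded.
        return {"image": image_bytes, "supplemental": supplemental}

    transmission = add_supplemental_data(
        b"...pixels...",
        {"type": "advertisement", "text": "Buy tickets to the next game"})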

[00423] Referring again to Fig. 14C, operation 1008 may include operation 1430, which may appear in conjunction with operation 1428, operation 1430 depicting transmitting the generated transmission image data to at least one requestor. For example, Fig. 9, e.g., Fig. 9C, shows generated transmission image data transmitting to at least one requestor module 930 transmitting the generated transmission image data (e.g., image data of a football player at a football game with statistical data of that football player overlaid in the image) to at least one requestor (e.g., a person watching the game on their mobile tablet device).

[00424] Referring again to Fig. 14C, operation 1428 may include operation 1432 depicting adding advertisement data to at least a portion of the received particular image data to generate transmission image data. For example, Fig. 9, e.g., Fig. 9C, shows advertisement data addition to at least a portion of the received particular image data to generate transmission image data facilitating module 932 adding advertisement data (e.g., data for an advertisement for buying tickets to the next soccer game and an advertisement for buying a soccer jersey of the player that is pictured) to at least a portion of the received particular image data (e.g., images of a soccer game and/or players in the soccer game) to generate transmission image data.

[00425] Referring again to Fig. 14C, operation 1432 may include operation 1434 depicting adding context-based advertisement data that is at least partially based on the received particular image data to at least the portion of the received particular image data. For example, Fig. 9, e.g., Fig. 9C, shows context-based advertisement data addition to at least a portion of the received particular image data to generate transmission image data facilitating module 934 adding context-based advertisement data (e.g., an ad for travel services to a place that is being viewed in a virtual tourism setting, e.g., the Great Pyramids) that is at least partially based on the received particular image data (e.g., visual image data from the Great Pyramids) to at least the portion of the received particular image data.

[00426] Referring again to Fig. 14C, operation 1434 may include operation 1436 depicting adding an animal rights donation fund advertisement data that is at least partially based on the received particular image data of a tiger at a jungle oasis, to the received particular image data of the tiger at the jungle oasis. For example, Fig. 9, e.g., Fig. 9C, shows animal rights donation fund advertisement data addition to at least a portion of the received particular image data that includes a jungle tiger at an oasis to generate transmission image data facilitating module 936 adding an animal rights donation fund advertisement data that is at least partially based on the received particular image data of a tiger at a jungle oasis, to the received particular image data of the tiger at the jungle oasis.

[00427] Referring again to Fig. 14C, operation 1428 may include operation 1438 depicting adding related visual data related to the received particular image data to at least a portion of the received particular image data to generate transmission image data. For example, Fig. 9, e.g., Fig. 9C, shows related visual data addition to at least a portion of the received particular image data to generate transmission image data facilitating module 938 adding related visual data (e.g., the name of an animal being shown, or a make and model year of a car being shown, or, if a product is shown in the frame, the name of the website that has it for the cheapest price right now) related to the received particular image data (e.g., an animal, a car, or a product) to at least a portion of the received particular image data to generate transmission image data (e.g., data to be transmitted to the receiving device).

[00428] Referring again to Fig. 14C, operation 1438 may include operation 1440 depicting adding fantasy football statistical data to the received particular image data of a football quarterback that appears in the received particular image data of the quarterback in a football game to generate the transmission image data. For example, Fig. 9, e.g., Fig. 9C, shows related fantasy football statistical data addition to at least a portion of the received particular image data of a quarterback data to generate transmission image data facilitating module 940 adding fantasy football statistical data (e.g., passes completed, receptions, rushing yards gained, receiving yards gained, total points scored, player name, etc.) to the received particular image data of a football quarterback that appears in the received particular image data of the quarterback in a football game to generate the transmission image data (e.g., image data that is to be transmitted to the requesting device, e.g., a television).

[00429] Referring now to Fig. 14D, operation 1008 may include operation 1442 depicting modifying data of a portion of the received particular image data to generate transmission image data. For example, Fig. 9, e.g., Fig. 9D, shows portion of received particular image data modification to generate transmission image data facilitating module 942 modifying data (e.g., adding to, subtracting from, or changing the data of a characteristic of, e.g., alpha data, color data, saturation data, either on individual bytes or on the image as a whole) of a portion of the received particular image data (e.g., the image data sent from the camera array) to generate transmission image data (e.g., data to be transmitted to the device that requested the data, e.g., a smartphone device).

[00430] Referring again to Fig. 14D, operation 1008 may include operation 1444 depicting transmitting the generated transmission image data to at least one requestor. For example, Fig. 9, e.g., Fig. 9D, shows generated transmission image data transmitting to at least one requestor module 944 transmitting the generated transmission image data (e.g., the image data that was generated by the server device to transmit to the requesting device) to at least one requestor (e.g., the requesting device, e.g., a laptop computer of a family at home running a virtual tourism program in a web page).

[00431] Referring again to Fig. 14D, operation 1442 may include operation 1446 depicting performing image manipulation modifications of the received particular image data to generate transmission image data. For example, Fig. 9, e.g., Fig. 9D, shows portion of received particular image data image manipulation modification to generate transmission image data facilitating module 946 performing image manipulation modifications (e.g., editing a feature of a captured image) of the received particular image data (e.g., a live street view of an area with a lot of shading from tall skyscrapers) to generate transmission image data (e.g., the image data to be transmitted to the device that requested the data, e.g., a camera device).

[00432] Referring again to Fig. 14D, operation 1446 may include operation 1448 depicting performing contrast balancing on the received particular image data to generate transmission image data. For example, Fig. 9, e.g., Fig. 9D, shows portion of received particular image data contrast balancing modification to generate transmission image data facilitating module 948 performing contrast balancing on the received particular image data (e.g., a live street view of an area with a lot of shading from tall skyscrapers) to generate transmission image data (e.g., the image data to be transmitted to the device that requested the data, e.g., a camera device).
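For illustration only, a linear contrast stretch is one elementary form the contrast balancing of operation 1448 could take; contrast_balance is a hypothetical name, and a real implementation would likely use a more sophisticated method:

    def contrast_balance(pixels):
        # Linearly stretch grayscale values to span the full 0-255 range.
        lo, hi = min(pixels), max(pixels)
        if hi == lo:
            return list(pixels)   # flat image; nothing to stretch
        return [round((p - lo) * 255 / (hi - lo)) for p in pixels]

    balanced = contrast_balance([40, 60, 80, 100])   # -> [0, 85, 170, 255]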

[00433] Referring again to Fig. 14D, operation 1446 may include operation 1450 depicting performing color modification balancing on the received particular image data to generate transmission image data. For example, Fig. 9, e.g., Fig. 9D, shows portion of received particular image data color modification balancing to generate transmission image data facilitating module 950 performing color modification balancing on the received particular image data (e.g., an image of a lion at an animal watering hole) to generate transmission image data (e.g., the image data that will be transmitted to the device).

[00434] Referring again to Fig. 14D, operation 1442 may include operation 1452 depicting redacting at least a portion of the received particular image data to generate the transmission image data. For example, Fig. 9, e.g., Fig. 9D, shows portion of received particular image data redaction to generate transmission image data facilitating module 952 redacting (e.g., deleting, smoothing over, blurring, performing any sort of image manipulation operation upon, or any other operation designed to obscure any feature of the redacted data, including, but not limited to, removing or overwriting that data entirely) at least a portion of the received particular image data to generate the transmission image data (e.g., the image data that will be transmitted to the device or designated for transmission to the device).

[00435] Referring again to Fig. 14D, operation 1452 may include operation 1454 depicting redacting at least a portion of the received particular image data based on a security clearance level of the requestor. For example, Fig. 9, e.g., Fig. 9D, shows portion of received particular image data redaction to generate transmission image data based on a security clearance level of the requestor facilitating module 954 redacting (e.g., deleting, smoothing over, blurring, performing any sort of image manipulation operation upon, or any other operation designed to obscure any feature of the redacted data, including, but not limited to, removing or overwriting that data entirely) at least a portion of the received particular image data (e.g., the faces of people, or the license plates of cars) based on a security clearance level of the requestor (e.g., a device that requested the image may have a security clearance based on what that device is allowed to view, and if the security clearance level is below a certain threshold, data like license plates and people's faces may be redacted).
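A minimal sketch of clearance-gated redaction, assuming sensitive regions are known rectangles and clearance levels are simple integers; all names and the threshold are hypothetical:

    def redact(scene_rows, regions, clearance, threshold=3):
        # Black out each sensitive region (x, y, w, h) unless the
        # requestor's clearance meets the threshold; blurring or
        # smoothing could be substituted for the overwrite.
        if clearance >= threshold:
            return scene_rows
        rows = [list(r) for r in scene_rows]
        for (x, y, w, h) in regions:
            for row in rows[y:y + h]:
                row[x:x + w] = [0] * w
        return rows

    safe = redact([[255] * 8 for _ in range(8)], [(2, 2, 3, 3)], clearance=1)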

[00436] Referring again to Fig. 14D, operation 1454 may include operation 1456 depicting redacting a tank from the received particular image data that includes a satellite image that includes a military base, based on an insufficient security clearance level of a device that requested the particular image data. For example, Fig. 9, e.g., Fig. 9D, shows portion of received satellite image data that includes a tank redaction to generate transmission image data based on a security clearance level of the requestor facilitating module 956 redacting (e.g., deleting, smoothing over, blurring, performing any sort of image manipulation operation upon, or any other operation designed to obscure any feature of data associated with the tank, including, but not limited to, removing or overwriting the data associated with the tank entirely) a tank from the received particular image data that includes a satellite image (e.g., the image sensor array that captured the image is at least partially mounted on a satellite) that includes a military base, based on an insufficient security clearance level (e.g., some data indicates that the device does not have a security level sufficient to approve seeing the tank) of a device that requested the particular image data.

[00437] Referring now to Fig. 14E, operation 1008 may include operation 1458 depicting transmitting a lower-resolution version of the received particular image data to the at least one requestor. For example, Fig. 9, e.g., Fig. 9E, shows lower-resolution version of received particular image data transmitting to at least one requestor module 958 transmitting a lower-resolution version (e.g., a version of the image data that is at a lower resolution than what the device that requested the particular image data is capable of displaying) of the received particular image data (e.g., an image of a baseball player at a baseball game) to the at least one requestor (e.g., the device that requested the data).

[00438] Referring again to Fig. 14E, operation 1008 may include operation 1460, which may appear in conjunction with operation 1458, operation 1460 depicting transmitting a full-resolution version of the received particular image data to the at least one requestor. For example, Fig. 9, e.g., Fig. 9F, shows full-resolution version of received particular image data transmitting to at least one requestor module 960 transmitting a full-resolution version (e.g., a version that is at a resolution of the device that requested the image) of the received particular image data (e.g., an image of an animal at an animal oasis) to the at least one requestor.
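One conceivable way to derive the lower-resolution preview of operation 1458 before the full-resolution data of operation 1460 follows is simple pixel decimation; downsample is a hypothetical helper, not the disclosed mechanism:

    def downsample(scene_rows, factor):
        # Keep every factor-th pixel in each direction to form a
        # lower-resolution version; the untouched full_image can be
        # transmitted afterward as the full-resolution version.
        return [row[::factor] for row in scene_rows[::factor]]

    full_image = [[(x + y) % 256 for x in range(16)] for y in range(16)]
    preview = downsample(full_image, 4)   # send first; full_image follows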

[00439] All of the above U.S. patents, U.S. patent application publications, U.S. patent applications, foreign patents, foreign patent applications and non-patent publications referred to in this specification and/or listed in any Application Data Sheet, are incorporated herein by reference, to the extent not inconsistent herewith.

[00440] The foregoing detailed description has set forth various embodiments of the devices and/or processes via the use of block diagrams, flowcharts, and/or examples. Insofar as such block diagrams, flowcharts, and/or examples contain one or more functions and/or operations, it will be understood by those within the art that each function and/or operation within such block diagrams, flowcharts, or examples can be implemented, individually and/or collectively, by a wide range of hardware, software (e.g., a high-level computer program serving as a hardware specification), firmware, or virtually any combination thereof, limited to patentable subject matter under 35 U.S.C. 101. In an embodiment, several portions of the subject matter described herein may be implemented via Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs), digital signal processors (DSPs), or other integrated formats. However, those skilled in the art will recognize that some aspects of the embodiments disclosed herein, in whole or in part, can be equivalently implemented in integrated circuits, as one or more computer programs running on one or more computers (e.g., as one or more programs running on one or more computer systems), as one or more programs running on one or more processors (e.g., as one or more programs running on one or more microprocessors), as firmware, or as virtually any combination thereof, limited to patentable subject matter under 35 U.S.C. 101, and that designing the circuitry and/or writing the code for the software (e.g., a high-level computer program serving as a hardware specification) and/or firmware would be well within the skill of one of skill in the art in light of this disclosure. In addition, those skilled in the art will appreciate that the mechanisms of the subject matter described herein are capable of being distributed as a program product in a variety of forms, and that an illustrative embodiment of the subject matter described herein applies regardless of the particular type of signal bearing medium used to actually carry out the distribution. Examples of a signal bearing medium include, but are not limited to, the following: a recordable type medium such as a floppy disk, a hard disk drive, a Compact Disc (CD), a Digital Video Disk (DVD), a digital tape, a computer memory, etc.; and a transmission type medium such as a digital and/or an analog communication medium (e.g., a fiber optic cable, a waveguide, a wired communications link, a wireless communication link (e.g., transmitter, receiver, transmission logic, reception logic, etc.), etc.).

[00441] While particular aspects of the present subject matter described herein have been shown and described, it will be apparent to those skilled in the art that, based upon the teachings herein, changes and modifications may be made without departing from the subject matter described herein and its broader aspects and, therefore, the appended claims are to encompass within their scope all such changes and modifications as are within the true spirit and scope of the subject matter described herein. It will be understood by those within the art that, in general, terms used herein, and especially in the appended claims (e.g., bodies of the appended claims) are generally intended as "open" terms (e.g., the term "including" should be interpreted as "including but not limited to," the term "having" should be interpreted as "having at least," the term "includes" should be interpreted as "includes but is not limited to," etc.).

[00442] It will be further understood by those within the art that if a specific number of an introduced claim recitation is intended, such an intent will be explicitly recited in the claim, and in the absence of such recitation no such intent is present. For example, as an aid to understanding, the following appended claims may contain usage of the introductory phrases "at least one" and "one or more" to introduce claim recitations. However, the use of such phrases should not be construed to imply that the introduction of a claim recitation by the indefinite articles "a" or "an" limits any particular claim containing such introduced claim recitation to claims containing only one such recitation, even when the same claim includes the introductory phrases "one or more" or "at least one" and indefinite articles such as "a" or "an" (e.g., "a" and/or "an" should typically be interpreted to mean "at least one" or "one or more"); the same holds true for the use of definite articles used to introduce claim recitations. In addition, even if a specific number of an introduced claim recitation is explicitly recited, those skilled in the art will recognize that such recitation should typically be interpreted to mean at least the recited number (e.g., the bare recitation of "two recitations," without other modifiers, typically means at least two recitations, or two or more recitations).

[00443] Furthermore, in those instances where a convention analogous to "at least one of A, B, and C, etc." is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B, and C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.). In those instances where a convention analogous to "at least one of A, B, or C, etc." is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B, or C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.). It will be further understood by those within the art that typically a disjunctive word and/or phrase presenting two or more alternative terms, whether in the description, claims, or drawings, should be understood to contemplate the possibilities of including one of the terms, either of the terms, or both terms unless context dictates otherwise. For example, the phrase "A or B" will typically be understood to include the possibilities of "A" or "B" or "A and B."

[00444] With respect to the appended claims, those skilled in the art will appreciate that recited operations therein may generally be performed in any order. Also, although various operational flows are presented in a sequence(s), it should be understood that the various operations may be performed in other orders than those which are illustrated, or may be performed concurrently. Examples of such alternate orderings may include overlapping, interleaved, interrupted, reordered, incremental, preparatory, supplemental, simultaneous, reverse, or other variant orderings, unless context dictates otherwise. Furthermore, terms like "responsive to," "related to," or other past-tense adjectives are generally not intended to exclude such variants, unless context dictates otherwise.

[00445] This application may make reference to one or more trademarks, e.g., a word, letter, symbol, or device adopted by one manufacturer or merchant and used to identify and/or distinguish his or her product from those of others. Trademark names used herein are set forth in such language that makes clear their identity, that distinguishes them from common descriptive nouns, that have fixed and definite meanings, or, in many if not all cases, are accompanied by other specific identification using terms not covered by trademark. In addition, trademark names used herein have meanings that are well-known and defined in the literature, or do not refer to products or compounds for which knowledge of one or more trade secrets is required in order to divine their meaning. All trademarks referenced in this application are the property of their respective owners, and the appearance of one or more trademarks in this application does not diminish or otherwise adversely affect the validity of the one or more trademarks. All trademarks, registered or unregistered, that appear in this application are assumed to include a proper trademark symbol, e.g., the circle R or bracketed capitalization (e.g., [trademark name]), even when such trademark symbol does not explicitly appear next to the trademark. To the extent a trademark is used in a descriptive manner to refer to a product or process, that trademark should be interpreted to represent the corresponding product or process as of the date of the filing of this patent application.

[00446] Throughout this application, the terms "in an embodiment," "in one embodiment," "in several embodiments," "in at least one embodiment," "in various embodiments," and the like, may be used. Each of these terms, and all such similar terms, should be construed as "in at least one embodiment, and possibly but not necessarily all embodiments," unless explicitly stated otherwise. Specifically, unless explicitly stated otherwise, the intent of phrases like these is to provide non-exclusive and non-limiting examples of implementations of the invention. The mere statement that one, some, or many embodiments include one or more things or have one or more features does not imply that all embodiments include one or more things or have one or more features, but also does not imply that such embodiments must exist. It is a mere indicator of an example and should not be interpreted otherwise, unless explicitly stated as such.

[00447] Those skilled in the art will appreciate that the foregoing specific exemplary processes and/or devices and/or technologies are representative of more general processes and/or devices and/or technologies taught elsewhere herein, such as in the claims filed herewith and/or elsewhere in the present application.

STRUCTURED DISCLOSURE DIRECTED TOWARD ONE OF SKILL IN THE ART [START]

Those skilled in the art will appreciate the indentations and cross-referencing of the following as instructive. In general, each immediately following numbered paragraph in this "Structured Disclosure Directed Toward One Of Skill In The Art" section should be treated as preceded by the word "Clause," as can be seen by the Clauses that cross-reference against them, and each cross-referencing paragraph is to be understood as permissively incorporating its parent clause(s) by reference.

As used herein, and in particular in the following thing/operation disclosures, the word "comprising" can generally be interpreted as "including but not limited to":

1. A computationally-implemented thing/operation disclosure, comprising:

acquiring a request for particular image data that is part of a scene;

transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data;

receiving only the particular image data from the image sensor array; and transmitting the received particular image data to at least one requestor.

2. The computationally-implemented thing/operation disclosure of clause 1, wherein said acquiring a request for particular image data that is part of a scene comprises:

acquiring the request for particular image data of the scene that includes one or more images.

3. The computationally-implemented thing/operation disclosure of clause 1, wherein said acquiring a request for particular image data that is part of a scene comprises:

receiving the request for particular image data of the scene.

4. The computationally-implemented thing/operation disclosure of clause 3, wherein said receiving the request for particular image data of the scene comprises:

receiving the request for particular image data of the scene from a user device.

5. The computationally-implemented thing/operation disclosure of clause 4, wherein said receiving the request for particular image data of the scene from a user thing/operation disclosure comprises:

receiving the request for particular image data of the scene from a user device that is configured to display at least a portion of the scene.

6. The computationally-implemented thing/operation disclosure of clause 5, wherein said receiving the request for particular image data of the scene from a user thing/operation disclosure that is configured to display at least a portion of the scene comprises:

receiving the request for particular image data of the scene from a user device that is configured to display at least a portion of the scene in a viewfinder.

7. The computationally-implemented thing/operation disclosure of clause 4, wherein said receiving the request for particular image data of the scene from a user thing/operation disclosure comprises:

receiving the request for particular image data of the scene from the user device that is configured to receive a selection of a particular image.

8. The computationally-implemented thing/operation disclosure of clause 7, wherein said receiving the request for particular image data of the scene from the user thing/operation disclosure that is configured to receive a selection of a particular image comprises:

receiving the request for particular image data of the scene from the user device that is configured to receive the selection of the particular image, said selection based on a view of the scene.

9. The computationally-implemented thing/operation disclosure of clause 7, wherein said receiving the request for particular image data of the scene from the user thing/operation disclosure that is configured to receive a selection of a particular image comprises:

receiving the request for particular image data of the scene from the user device that is configured to receive the selection of the particular image, wherein the user device is one or more of a smartphone, television, computer screen, tablet, camera, appliance, and augmented reality device.

10. The computationally-implemented thing/operation disclosure of clause 1, wherein said acquiring a request for particular image data that is part of a scene comprises:

acquiring the request for particular image data of the scene, wherein the scene is the image data collected by the array of more than one image sensor.

11. The computationally-implemented thing/operation disclosure of clause 1, wherein said acquiring a request for particular image data that is part of a scene comprises:

acquiring the request for particular image data of the scene, wherein the scene is a representation of the image data collected by the array of more than one image sensor.

12. The computationally-implemented thing/operation disclosure of clause 11, wherein said acquiring the request for particular image data of the scene, wherein the scene is a representation of the image data collected by the array of more than one image sensor comprises:

acquiring the request for particular image data of the scene, wherein the scene is a sampling of the image data collected by the array of more than one image sensor.

13. The computationally-implemented thing/operation disclosure of clause 11, wherein said acquiring the request for particular image data of the scene, wherein the scene is a representation of the image data collected by the array of more than one image sensor comprises:

acquiring the request for particular image data of the scene, wherein the scene is a subset of the image data collected by the array of more than one image sensor.

14. The computationally-implemented thing/operation disclosure of clause 11, wherein said acquiring the request for particular image data of the scene, wherein the scene is a representation of the image data collected by the array of more than one image sensor comprises:

acquiring the request for particular image data of the scene, wherein the scene is a low-resolution version of the image data collected by the array of more than one image sensor.

15. The computationally-implemented thing/operation disclosure of clause 1, wherein said acquiring a request for particular image data that is part of a scene comprises:

acquiring the request for particular image data of a scene that is a football game.

16. The computationally-implemented thing/operation disclosure of clause 1, wherein said acquiring a request for particular image data that is part of a scene comprises:

acquiring the request for particular image data of a scene that is a street view of an area.

17. The computationally-implemented thing/operation disclosure of clause 1, wherein said acquiring a request for particular image data that is part of a scene comprises:

acquiring the request for particular image data of a scene that is a tourist destination.

18. The computationally-implemented thing/operation disclosure of clause 1, wherein said acquiring a request for particular image data that is part of a scene comprises:

acquiring the request for particular image data of a scene that is an inside of a home.

19. The computationally-implemented thing/operation disclosure of clause 1, wherein said acquiring a request for particular image data that is part of a scene comprises:

acquiring the request for particular image data of the scene, wherein the particular image data is an image that is a portion of the scene.

20. The computationally-implemented thing/operation disclosure of clause 19, wherein said acquiring the request for particular image data of the scene, wherein the particular image data is an image that is a portion of the scene comprises:

acquiring the request for particular image data of the scene, wherein the particular image data includes image data of a particular football player and the scene is a football field.

21. The computationally-implemented thing/operation disclosure of clause 19, wherein said acquiring the request for particular image data of the scene, wherein the particular image data is an image that is a portion of the scene comprises:

acquiring the request for particular image data of the scene, wherein the particular image data includes image data of a license plate of a vehicle, and the scene is an image representation of a highway bridge.

22. The computationally-implemented thing/operation disclosure of clause 1, wherein said acquiring a request for particular image data that is part of a scene comprises:

acquiring a request for a particular image object located in the scene; and determining the particular image data of the scene that contains the particular image object.

23. The computationally-implemented thing/operation disclosure of clause 22, wherein said acquiring a request for a particular image object located in the scene comprises:

acquiring a request for a particular person located in the scene.

24. The computationally-implemented thing/operation disclosure of clause 22, wherein said acquiring a request for a particular image object located in the scene comprises:

acquiring a request for a basketball located in the scene that is a basketball arena.

25. The computationally-implemented thing/operation disclosure of clause 22, wherein said acquiring a request for a particular image object located in the scene comprises:

acquiring a request for a motor vehicle located in the scene.

26. The computationally-implemented thing/operation disclosure of clause 22, wherein said acquiring a request for a particular image object located in the scene comprises:

acquiring a request for any human object representations located in the scene.

27. The computationally-implemented thing/operation disclosure of clause 22, wherein said determining the particular image data of the scene that contains the particular image object comprises:

determining the particular image data of the scene that contains the particular image object through application of automated pattern recognition to scene image data.
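As a toy stand-in for the automated pattern recognition of clause 27, an exact-match template search over a 2D grid is sketched below; find_object is a hypothetical name, and a practical system would use statistical or learned matching rather than exact equality:

    def find_object(scene, template):
        # Return the (x, y) offset where the template matches the scene
        # exactly, or None if the object is not present.
        th, tw = len(template), len(template[0])
        for y in range(len(scene) - th + 1):
            for x in range(len(scene[0]) - tw + 1):
                if all(scene[y + j][x + i] == template[j][i]
                       for j in range(th) for i in range(tw)):
                    return (x, y)
        return None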

28. The computationally-implemented thing/operation disclosure of clause 22, wherein said determining the particular image data of the scene that contains the particular image object comprises:

determining the particular image data of the scene that contains the particular image object through identification of the particular image object in previous scene data that is image data of the scene from at least one previous moment in time.

29. The computationally-implemented thing/operation disclosure of clause 22, wherein said determining the particular image data of the scene that contains the particular image object comprises:

determining the particular image data of the scene that contains the particular image object through identification of the particular image object in cached scene data.

30. The computationally-implemented thing/operation disclosure of clause 29, wherein said determining the particular image data of the scene that contains the particular image object through identification of the particular image object in cached scene data comprises:

determining the particular image data of the scene that contains the particular image object through identification of the particular image object in cached scene data that was previously transmitted from the image sensor array that includes more than one image sensor.

31. The computationally-implemented thing/operation disclosure of clause 29, wherein said determining the particular image data of the scene that contains the particular image object through identification of the particular image object in cached scene data comprises:

determining the particular image data of the scene that contains the particular image object through identification of the particular image object in cached scene data that was previously transmitted from the image sensor array that includes more than one image sensor at a time when bandwidth was available for a connection to the image sensor array that includes more than one image sensor.

32. The computationally-implemented thing/operation disclosure of clause 1, wherein said acquiring a request for particular image data that is part of a scene comprises:

receiving a first request for first particular image data from the scene from a first requestor; and

receiving a second request for second particular image data from the scene from a second requestor that is different than the first requestor.

33. The computationally-implemented thing/operation disclosure of clause 1, wherein said acquiring a request for particular image data that is part of a scene comprises:

combining a received first request for first particular image data from the scene and a received second request for second particular image data from the scene.

34. The computationally-implemented thing/operation disclosure of clause 33, wherein said combining a received first request for first particular image data from the scene and a received second request for second particular image data from the scene comprises:

combining the received first request for first particular image data from the scene and the received second request for second particular image data from the scene into the request for particular image data that consolidates overlapping requested image data.

35. The computationally-implemented thing/operation disclosure of clause 1, wherein said acquiring a request for particular image data that is part of a scene comprises:

receiving a first request for first particular image data from the scene and a second request for second particular image data from the scene; and

combining the received first request and the received second request into the request for particular image data.

36. The computationally-implemented thing/operation disclosure of clause 35, wherein said combining the received first request and the received second request into the request for particular image data comprises:

removing common pixel data between the received first request and the received second request.
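A minimal sketch of the request combination of clauses 33 through 36, assuming each request is represented as a set of (x, y) pixel coordinates; combine_requests is a hypothetical name, not the disclosed implementation:

    def combine_requests(first, second):
        # Union the two requested pixel sets so pixels common to both
        # requests are carried (and transmitted) only once.
        combined = set(first) | set(second)
        overlap = set(first) & set(second)
        return combined, overlap

    a = {(x, y) for x in range(0, 8) for y in range(0, 8)}
    b = {(x, y) for x in range(4, 12) for y in range(4, 12)}
    merged, shared = combine_requests(a, b)   # shared pixels requested once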

37. The computationally-implemented thing/operation disclosure of clause 1, wherein said acquiring a request for particular image data that is part of a scene comprises:

acquiring the request for particular image data that is part of the scene, wherein the particular image data includes video data.

38. The computationally-implemented thing/operation disclosure of clause 1, wherein said acquiring a request for particular image data that is part of a scene comprises:

acquiring the request for particular image data that is part of the scene, wherein the particular image data includes audio data.

39. The computationally-implemented thing/operation disclosure of clause 1, wherein said acquiring a request for particular image data that is part of a scene comprises:

acquiring the request for particular image data that is part of the scene from a user device that receives the request for particular image data through an audio interface.

40. The computationally-implemented thing/operation disclosure of clause 39, wherein said acquiring the request for particular image data that is part of the scene from a user thing/operation disclosure that receives the request for particular image data through an audio interface comprises:

acquiring the request for particular image data that is part of the scene from a user device that has a microphone that receives a spoken request for particular image data from the user.

41. The computationally-implemented thing/operation disclosure of clause 1, wherein said transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data comprises:

transmitting the request for the particular image data of the scene to the image sensor array that includes more than one image sensor and that is configured to capture an image that is larger than the requested image data.

42. The computationally-implemented thing/operation disclosure of clause 1, wherein said transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data comprises:

transmitting the request for the particular image data of the scene to the image sensor array that includes two image sensors arranged side by side and angled toward each other and that is configured to capture an image that is larger than the requested image data.

43. The computationally-implemented thing/operation disclosure of clause 1, wherein said transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data comprises:

transmitting the request for the particular image data of the scene to the image sensor array that includes the array of image sensors arranged in a grid and that is configured to capture an image that is larger than the requested image data.

44. The computationally-implemented thing/operation disclosure of clause 1, wherein said transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data comprises:

transmitting the request for the particular image data of the scene to the image sensor array that includes the array of image sensors arranged in a line such that a field of view is greater than 120 degrees and that is configured to capture an image that is larger than the requested image data.

45. The computationally-implemented thing/operation disclosure of clause 1, wherein said transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data comprises:

transmitting the request for the particular image data of the scene to the image sensor array that includes a grouping of nonlinearly, nonsequentially arranged stationary image sensors and that is configured to capture an image that is larger than the requested image data.

46. The computationally-implemented thing/operation disclosure of clause 1, wherein said transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data comprises:

transmitting the request for the particular image data of the scene to the image sensor array that includes the array of static image sensors and that is configured to capture an image that is larger than the requested image data.

47. The computationally-implemented thing/operation disclosure of clause 46, wherein said transmitting the request for the particular image data of the scene to the image sensor array that includes the array of static image sensors and that is configured to capture an image that is larger than the requested image data comprises:

transmitting the request for the particular image data of the scene to the image sensor array that includes the array of image sensors that have a fixed focal length and a fixed field of view and that is configured to capture an image that is larger than the requested image data.

48. The computationally-implemented thing/operation disclosure of clause 1, wherein said transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data comprises:

transmitting the request for the particular image data of the scene to the image sensor array that includes the array of image sensors mounted on a movable platform and that is configured to capture the scene that is larger than the requested image data.

49. The computationally-implemented thing/operation disclosure of clause 1, wherein said transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data comprises:

transmitting the request for the particular image data of the scene to the image sensor array that includes more than one image sensor and that is configured to capture the scene that represents more image data than the requested particular image data.

50. The computationally-implemented thing/operation disclosure of clause 49, wherein said transmitting the request for the particular image data of the scene to the image sensor array that includes more than one image sensor and that is configured to capture the scene that represents more image data than the requested particular image data comprises:

transmitting the request for the particular image data of the scene to the image sensor array that includes more than one image sensor and that is configured to capture the scene that represents more than ten times as much image data as the requested particular image data.

51. The computationally-implemented thing/operation disclosure of clause 49, wherein said transmitting the request for the particular image data of the scene to the image sensor array that includes more than one image sensor and that is configured to capture the scene that represents more image data than the requested particular image data comprises:

transmitting the request for the particular image data of the scene to the image sensor array that includes more than one image sensor and that is configured to capture the scene that represents more than one hundred times as much image data as the requested particular image data.

52. The computationally-implemented thing/operation disclosure of clause 1, wherein said transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data comprises:

transmitting the request for the particular image data to the image sensor array that is configured to capture the scene that represents a greater field of view than the requested particular image data.

53. The computationally-implemented thing/operation disclosure of clause 1, wherein said transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data comprises:

modifying the request for the particular image data; and

transmitting the modified request for the particular image data to the image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data.

54. The computationally-implemented thing/operation disclosure of clause 53, wherein said modifying the request for the particular image data comprises:

removing designated image data from the request for the particular image data.

55. The computationally-implemented thing/operation disclosure of clause 54, wherein said removing designated image data from the request for the particular image data comprises:

removing designated image data from the request for the particular image data based on previously stored image data.

56. The computationally-implemented thing/operation disclosure of clause 55, wherein said removing designated image data from the request for the particular image data based on previously stored image data comprises:

removing designated image data from the request for the particular image data based on previously stored image data retrieved from the image sensor array.

57. The computationally-implemented thing/operation disclosure of clause 55, wherein said removing designated image data from the request for the particular image data based on previously stored image data comprises:

removing designated image data from the request for the particular image data based on previously stored image data that is an earlier-in-time version of the designated image data.

58. The computationally-implemented thing/operation disclosure of clause 57, wherein said removing designated image data from the request for the particular image data based on previously stored image data that is an earlier-in-time version of the designated image data comprises:

removing image data of one or more static objects from the request for the particular image data that is a view of a street, based on previously stored image data of the one or more static objects from an earlier-in-time version captured by the image sensor array.

59. The computationally-implemented thing/operation disclosure of clause 58, wherein said removing image data of one or more static objects from the request for the particular image data that is a view of a street, based on previously stored image data of the one or more static objects from an earlier-in-time version captured by the image sensor array comprises:

removing image data of one or more buildings from the request for the particular image data of the one or more static objects from the request for the particular image data that is the view of the street, based on previously stored image data of the one or more static objects from an earlier-in-time version captured by the image sensor array.
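For illustration, the pruning described in clauses 54 through 59 might look like the following set difference over requested pixels, assuming an earlier-in-time cache of static objects such as buildings; prune_request is a hypothetical name:

    def prune_request(requested_pixels, cached_static_pixels):
        # Drop pixels already held in the earlier-in-time cache from the
        # outgoing request; the cached values are merged back in locally
        # once the response arrives.
        return requested_pixels - cached_static_pixels

    request = {(x, y) for x in range(10) for y in range(10)}
    cached = {(x, y) for x in range(10) for y in range(5)}   # e.g., buildings
    to_send = prune_request(request, cached)                 # lower half only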

60. The computationally-implemented thing/operation disclosure of clause 53, wherein said modifying the request for the particular image data comprises:

removing portions of the request for the particular image data through pixel interpolation or extrapolation of portions of the request for the particular image data.

61. The computationally-implemented thing/operation disclosure of clause 60, wherein said removing portions of the request for the particular image data through pixel interpolation or extrapolation of portions of the request for the particular image data comprises:

removing one or more static objects from the request for the particular image data through pixel interpolation of the portions of the request for the particular image data.

62. The computationally-implemented thing/operation disclosure of clause 53, wherein said modifying the request for the particular image data comprises:

identifying at least one portion of the request for the particular image data that is already stored in a memory; and

removing the identified portion of the request for the particular image data.

63. The computationally-implemented thing/operation disclosure of clause 62, wherein said identifying at least one portion of the request for the particular image data that is already stored in a memory comprises:

identifying at least one portion of the request for particular image data that was previously captured by the image sensor array.

64. The computationally-implemented thing/operation disclosure of clause 62, wherein said identifying at least one portion of the request for the particular image data that is already stored in a memory comprises:

identifying one or more static objects of the request for particular image data that was previously captured by the image sensor array.

65. The computationally-implemented thing/operation disclosure of clause 64, wherein said identifying one or more static objects of the request for particular image data that was previously captured by the image sensor array comprises:

identifying one or more natural rock outcroppings of the request for particular image data that was previously captured by the image sensor array.

66. The computationally-implemented thing/operation disclosure of clause 1, wherein said transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data comprises:

determining a size of the request for the particular image data; and

transmitting the request for the particular image data for which the size has been determined to the image sensor array that is configured to capture the scene.

67. The computationally-implemented thing/operation disclosure of clause 66, wherein said determining a size of the request for the particular image data comprises:

determining the size of the request for the particular image data at least partially based on a property of a user thing/operation disclosure that requested at least a portion of the particular image data.

68. The computationally-implemented thing/operation disclosure of clause 67, wherein said determining the size of the request for the particular image data at least partially based on a property of a user thing/operation disclosure that requested at least a portion of the particular image data comprises:

determining the size of the request for the particular image data at least partially based on a resolution of the user device that requested the particular image data.

69. The computationally-implemented method of clause 66, wherein said determining a size of the request for the particular image data comprises: determining the size of the request for the particular image data at least partially based on a device access level of a user device that requested at least a portion of the particular image data.

70. The computationally-implemented method of clause 66, wherein said determining a size of the request for the particular image data comprises: determining the size of the request for the particular image data at least partially based on an available amount of bandwidth for communication with the image sensor array.

71. The computationally-implemented method of clause 66, wherein said determining a size of the request for the particular image data comprises: determining the size of the request for the particular image data at least partially based on an amount of time that a user device that requested at least a portion of the particular image data has requested image data.

72. The computationally-implemented method of clause 66, wherein said determining a size of the request for the particular image data comprises: determining the size of the request for the particular image data at least partially based on an available amount of bandwidth for communication with a user device that requested at least a portion of the particular image data.
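
By way of a non-limiting illustration of the size-determination family of clauses 66-72, the sketch below caps a request by device resolution, access level, array-side bandwidth, user-link bandwidth, and session length; every constant, field name, and quota here is invented for illustration:

```python
# Hypothetical sketch of clauses 66-72: cap the size of an image-data
# request using properties of the requesting device and the link.

from dataclasses import dataclass

@dataclass
class UserDevice:
    resolution_px: int         # clause 68: display resolution of the device
    access_level: int          # clause 69: device access level (0 = lowest)
    minutes_requested: float   # clause 71: how long this device has requested data
    link_bandwidth_bps: float  # clause 72: bandwidth toward the user device

def determine_request_size(device: UserDevice,
                           array_bandwidth_bps: float,
                           duration_s: float = 1.0) -> int:
    """Return a byte budget for the request (all constants illustrative)."""
    # No point requesting more pixels than the device can display
    # (assume ~3 bytes per pixel for an uncompressed RGB frame).
    size = device.resolution_px * 3
    # Clause 69: scale the budget by a per-access-level quota.
    size = min(size, (device.access_level + 1) * 1_000_000)
    # Clause 70: stay within what the image sensor array link can carry.
    size = min(size, int(array_bandwidth_bps / 8 * duration_s))
    # Clause 72: stay within what the user-device link can carry.
    size = min(size, int(device.link_bandwidth_bps / 8 * duration_s))
    # Clause 71: long-running sessions are gradually throttled.
    if device.minutes_requested > 60:
        size //= 2
    return size

tablet = UserDevice(resolution_px=2048 * 1536, access_level=2,
                    minutes_requested=12.0, link_bandwidth_bps=20e6)
print(determine_request_size(tablet, array_bandwidth_bps=100e6))  # 2500000
```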

73. The computationally-implemented method of clause 1, wherein said receiving only the particular image data from the image sensor array comprises: receiving only the particular image data from the image sensor array, wherein data from the scene other than the particular image data is discarded.

74. The computationally-implemented method of clause 1, wherein said receiving only the particular image data from the image sensor array comprises: receiving only the particular image data from the image sensor array, wherein data from the scene other than the particular image data is stored at the image sensor array.

75. The computationally-implemented method of clause 1, wherein said receiving only the particular image data from the image sensor array comprises: receiving the particular image data from the image sensor array in near-real time.

76. The computationally-implemented method of clause 1, wherein said receiving only the particular image data from the image sensor array comprises: receiving the particular image data from the image sensor array in near-real time; and

retrieving data from the scene other than the particular image data at a later time.

77. The computationally-implemented method of clause 76, wherein said retrieving data from the scene other than the particular image data at a later time comprises:

retrieving data from the scene other than the particular image data at a time at which bandwidth is available to the image sensor array.

78. The computationally-implemented method of clause 76, wherein said retrieving data from the scene other than the particular image data at a later time comprises:

retrieving data from the scene other than the particular image data at a time that represents off-peak usage for the image sensor array.

79. The computationally-implemented method of clause 76, wherein said retrieving data from the scene other than the particular image data at a later time comprises:

retrieving data from the scene other than the particular image data at a time when no particular image data is requested.

80. The computationally-implemented method of clause 76, wherein said retrieving data from the scene other than the particular image data at a later time comprises:

retrieving data from the scene other than the particular image data at a time at which fewer users are requesting particular image data than for which the image sensor array has capacity.
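
By way of a non-limiting illustration of clauses 75-80, the sketch below serves the requested region in near-real time and queues the remainder of the scene for later retrieval when spare bandwidth, an off-peak hour, or an idle array allows; every class, threshold, and field name is invented for illustration:

```python
# Hypothetical sketch of clauses 75-80: near-real-time delivery of the
# requested region, deferred retrieval of the rest of the scene.

import heapq
import time
from typing import List, Tuple

Region = Tuple[int, int, int, int]  # x, y, w, h

class StubArray:
    """Stand-in for the image sensor array interface."""
    capacity = 8  # concurrent requests the array can serve (clause 80)

    def capture(self, region: Region) -> bytes:
        return b"\x00" * 4  # dummy payload for the region

    def scene_regions_excluding(self, region: Region) -> List[Region]:
        return [(0, 0, 128, 128)]  # pretend the remainder is one tile

class DeferredSceneFetcher:
    def __init__(self, offpeak_hours=range(1, 5), min_free_bps=10e6):
        self.offpeak_hours = offpeak_hours  # clause 78: off-peak window
        self.min_free_bps = min_free_bps    # clause 77: spare-bandwidth bar
        self._backlog: list = []            # (enqueued_at, region) heap

    def serve_request(self, region: Region, array: StubArray) -> bytes:
        """Clause 75: deliver only the requested region in near-real time."""
        data = array.capture(region)
        for leftover in array.scene_regions_excluding(region):
            heapq.heappush(self._backlog, (time.time(), leftover))
        return data

    def pump_backlog(self, array: StubArray, free_bps: float,
                     active_requests: int) -> int:
        """Clause 76: fetch deferred regions when clauses 77-80 permit."""
        hour = time.localtime().tm_hour
        allowed = (free_bps >= self.min_free_bps   # clause 77
                   or hour in self.offpeak_hours   # clause 78
                   or active_requests == 0)        # clause 79
        fetched = 0
        while (allowed and self._backlog
               and active_requests < array.capacity):  # clause 80
            _, region = heapq.heappop(self._backlog)
            array.capture(region)  # stored for later use, not streamed now
            fetched += 1
        return fetched

fetcher = DeferredSceneFetcher()
array = StubArray()
fetcher.serve_request((10, 10, 64, 64), array)
print(fetcher.pump_backlog(array, free_bps=50e6, active_requests=0))  # 1
```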

81. The computationally-implemented method of clause 1, wherein said receiving only the particular image data from the image sensor array comprises: receiving only the particular image data that includes audio data from the sensor array.

82. The computationally-implemented method of clause 1, wherein said receiving only the particular image data from the image sensor array comprises: receiving only the particular image data that was determined to contain a particular requested image object from the image sensor array.

83. The computationally-implemented method of clause 82, wherein said receiving only the particular image data that was determined to contain a particular requested image object from the image sensor array comprises:

receiving only the particular image data that was determined to contain a particular requested image object by the image sensor array.
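
By way of a non-limiting illustration of clauses 82-83, the sketch below runs the object test at the array side so that only matching tiles are ever transmitted; the trivial brightness predicate merely stands in for whatever recognizer a real deployment would use, and all names are invented:

```python
# Hypothetical sketch of clauses 82-83: filter for the requested object
# at the image sensor array; non-matching tiles never leave the array.

from typing import Callable, Dict, List, Tuple

Tile = List[List[int]]  # grayscale pixel rows
Pos = Tuple[int, int]   # tile coordinates within the scene

def tiles_containing(scene: Dict[Pos, Tile],
                     detector: Callable[[Tile], bool]) -> Dict[Pos, Tile]:
    """Run detection at the array; only matching tiles are transmitted."""
    return {pos: tile for pos, tile in scene.items() if detector(tile)}

def looks_like_requested_object(tile: Tile) -> bool:
    # Illustrative placeholder: mean brightness above mid-gray.
    flat = [p for row in tile for p in row]
    return sum(flat) / len(flat) > 128

scene = {
    (0, 0): [[200, 210], [190, 220]],  # bright tile: "object present"
    (0, 1): [[10, 20], [15, 5]],       # dark tile: discarded at the array
}
print(list(tiles_containing(scene, looks_like_requested_object)))  # [(0, 0)]
```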

84. The computationally-implemented method of clause 1, wherein said transmitting the received particular image data to at least one requestor comprises:

transmitting the received particular image data to a user device.

85. The computationally-implemented method of clause 84, wherein said transmitting the received particular image data to a user device comprises:

transmitting at least a portion of the received particular image data to a user device that requested the particular image data.

86. The computationally-implemented method of clause 1, wherein said transmitting the received particular image data to at least one requestor comprises:

separating the received particular image data into a set of one or more requested images; and

transmitting at least one image of the set of one or more requested images to a particular requestor that requested the at least one image.

87. The computationally-implemented method of clause 86, wherein said separating the received particular image data into a set of one or more requested images comprises:

separating the received particular image data into a first requested image and a second requested image.
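
By way of a non-limiting illustration of clauses 86-87, the sketch below splits one consolidated payload back into the individual images each requestor asked for; the frame model and requestor identifiers are invented for illustration:

```python
# Hypothetical sketch of clauses 86-87: split one received payload into
# per-requestor images, then route each image to its requestor.

from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class RequestRecord:
    requestor_id: str
    region: Tuple[int, int, int, int]  # x, y, w, h within the scene

def crop(frame: List[List[int]], region) -> List[List[int]]:
    x, y, w, h = region
    return [row[x:x + w] for row in frame[y:y + h]]

def separate_and_route(frame: List[List[int]],
                       requests: List[RequestRecord]) -> Dict[str, list]:
    """One received frame becomes one cropped image per requestor."""
    return {r.requestor_id: crop(frame, r.region) for r in requests}

frame = [[(x + y) % 256 for x in range(8)] for y in range(8)]
requests = [RequestRecord("tv-set", (0, 0, 4, 4)),   # first requested image
            RequestRecord("tablet", (4, 4, 4, 4))]   # second requested image
for requestor, image in separate_and_route(frame, requests).items():
    print(requestor, image[0])  # transmit(image, requestor) in a real system
```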

88. The computationally-implemented method of clause 1, wherein said transmitting the received particular image data to at least one requestor comprises:

transmitting the received particular image data to a user device that requested an image that is part of the received particular image data.

89. The computationally-implemented method of clause 1, wherein said transmitting the received particular image data to at least one requestor comprises:

transmitting a first portion of the received particular image data to a first requestor; and

transmitting a second portion of the received particular image data to a second requestor.

90. The computationally-implemented method of clause 89, wherein said transmitting a first portion of the received particular image data to a first requestor comprises:

transmitting the first portion of the received particular image data to the first requestor that requested the first portion of the received particular image data.

91. The computationally-implemented method of clause 90, wherein said transmitting the first portion of the received particular image data to the first requestor that requested the first portion of the received particular image data comprises:

transmitting an image that contains a particular football player to a television

device that is configured to display a football game and that requested the image that contains the particular football player.

92. The computationally-implemented method of clause 89, wherein said transmitting a second portion of the received particular image data to a second requestor comprises:

transmitting the second portion of the received particular image data to the second requestor that requested the second portion of the received particular image data.

93. The computationally-implemented method of clause 92, wherein said transmitting the second portion of the received particular image data to the second requestor that requested the second portion of the received particular image data comprises:

transmitting an image that contains a view of a motor vehicle to a tablet device that requested a street view image of a particular corner of a city.

94. The computationally-implemented method of clause 1, wherein said transmitting the received particular image data to at least one requestor comprises:

transmitting at least a portion of the received particular image data without alteration to at least one requestor.

95. The computationally-implemented method of clause 1, wherein said transmitting the received particular image data to at least one requestor comprises:

adding supplemental data to at least a portion of the received particular image data to generate transmission image data; and

transmitting the generated transmission image data to at least one requestor.

96. The computationally-implemented method of clause 95, wherein said adding supplemental data to at least a portion of the received particular image data to generate transmission image data comprises:

adding advertisement data to at least a portion of the received particular image data to generate transmission image data.

97. The computationally-implemented method of clause 96, wherein said adding advertisement data to at least a portion of the received particular image data to generate transmission image data comprises:

adding context-based advertisement data that is at least partially based on the received particular image data to at least the portion of the received particular image data.

98. The computationally-implemented method of clause 97, wherein said adding context-based advertisement data that is at least partially based on the received particular image data to at least the portion of the received particular image data comprises:

adding an animal rights donation fund advertisement data that is at least partially based on the received particular image data of a tiger at a jungle oasis, to the received particular image data of the tiger at the jungle oasis.
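
By way of a non-limiting illustration of clauses 95-98, the sketch below attaches supplemental data chosen from the image's context before transmission; the tag-to-advertisement table and packet layout are invented for illustration:

```python
# Hypothetical sketch of clauses 95-98: attach context-based supplemental
# data (here, an advertisement matched to the image's tags) before sending.

from typing import Dict, List

AD_TABLE: Dict[str, str] = {
    "tiger": "Animal rights donation fund",   # the pairing in clause 98
    "football": "Fantasy football statistics",
}

def add_supplemental_data(image_bytes: bytes, tags: List[str]) -> dict:
    """Wrap image data plus any context-matched supplement (clause 95)."""
    ads = [AD_TABLE[t] for t in tags if t in AD_TABLE]
    return {"image": image_bytes, "supplemental": ads}

packet = add_supplemental_data(b"\x89PNG...", ["tiger", "oasis"])
print(packet["supplemental"])  # ['Animal rights donation fund']
```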

99. The computationally-implemented method of clause 95, wherein said adding supplemental data to at least a portion of the received particular image data to generate transmission image data comprises:

adding related visual data related to the received particular image data to at least a portion of the received particular image data to generate transmission image data.

100. The computationally-implemented method of clause 99, wherein said adding related visual data related to the received particular image data to at least a portion of the received particular image data to generate transmission image data comprises:

adding fantasy football statistical data to the received particular image data of a football quarterback that appears in the received particular image data of the quarterback in a football game to generate the transmission image data.

101. The computationally-implemented method of clause 1, wherein said transmitting the received particular image data to at least one requestor comprises:

modifying data of a portion of the received particular image data to generate transmission image data; and

transmitting the generated transmission image data to at least one requestor.

102. The computationally-implemented method of clause 101, wherein said modifying data of a portion of the received particular image data to generate transmission image data comprises:

performing image manipulation modifications of the received particular image data to generate transmission image data.

103. The computationally-implemented method of clause 102, wherein said performing image manipulation modifications of the received particular image data to generate transmission image data comprises:

performing contrast balancing on the received particular image data to generate transmission image data.

104. The computationally-implemented method of clause 102, wherein said performing image manipulation modifications of the received particular image data to generate transmission image data comprises:

performing color modification balancing on the received particular image data to generate transmission image data.

105. The computationally-implemented method of clause 101, wherein said modifying data of a portion of the received particular image data to generate transmission image data comprises:

redacting at least a portion of the received particular image data to generate the transmission image data.

106. The computationally-implemented method of clause 105, wherein said redacting at least a portion of the received particular image data to generate the transmission image data comprises:

redacting at least a portion of the received particular image data based on a security clearance level of the requestor.

107. The computationally-implemented method of clause 106, wherein said redacting at least a portion of the received particular image data based on a security clearance level of the requestor comprises:

redacting a tank from the received particular image data that includes a satellite image that includes a military base, based on an insufficient security clearance level of a device that requested the particular image data.
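
By way of a non-limiting illustration of clauses 105-107, the sketch below blacks out labeled regions the requestor is not cleared to see; the clearance levels, labels, and frame model are invented for illustration:

```python
# Hypothetical sketch of clauses 105-107: redact regions whose required
# clearance exceeds the requestor's security clearance level.

from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class LabeledRegion:
    label: str
    required_clearance: int
    box: Tuple[int, int, int, int]  # x, y, w, h

def redact(frame: List[List[int]], regions: List[LabeledRegion],
           requestor_clearance: int) -> List[List[int]]:
    """Zero out any region above the requestor's clearance (clause 106)."""
    out = [row[:] for row in frame]
    for region in regions:
        if requestor_clearance < region.required_clearance:
            x, y, w, h = region.box
            for row in out[y:y + h]:
                row[x:x + w] = [0] * w  # the tank disappears (clause 107)
    return out

frame = [[255] * 6 for _ in range(6)]
tank = LabeledRegion("tank", required_clearance=3, box=(1, 1, 2, 2))
print(redact(frame, [tank], requestor_clearance=1)[1])  # [255, 0, 0, 255, 255, 255]
```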

108. The computationally-implemented method of clause 1, wherein said transmitting the received particular image data to at least one requestor comprises:

transmitting a lower-resolution version of the received particular image data to the at least one requestor; and

transmitting a full-resolution version of the received particular image data to the at least one requestor.
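
By way of a non-limiting illustration of clause 108, the sketch below sends a quick low-resolution preview followed by the full-resolution image; downsampling by pixel striding is one simple choice among many, since the clause itself does not fix a method:

```python
# Hypothetical sketch of clause 108: progressive transmission of a
# lower-resolution version first, then the full-resolution version.

from typing import List

def downsample(frame: List[List[int]], factor: int) -> List[List[int]]:
    """Keep every `factor`-th pixel in each dimension."""
    return [row[::factor] for row in frame[::factor]]

def transmit_progressive(frame: List[List[int]], send) -> None:
    send(downsample(frame, 4))  # lower-resolution version arrives first
    send(frame)                 # full-resolution version follows

frame = [[x + 10 * y for x in range(8)] for y in range(8)]
transmit_progressive(frame, lambda img: print(len(img), "rows sent"))
```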

109. A computationally-implemented system, comprising

means for acquiring a request for particular image data that is part of a scene; means for transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data;

means for receiving only the particular image data from the image sensor array; and

means for transmitting the received particular image data to at least one requestor.

110. The computationally-implemented system of clause 109, wherein said means for acquiring a request for particular image data that is part of a scene comprises:

means for acquiring the request for particular image data of the scene that includes one or more images.

111. The computationally-implemented system of clause 109, wherein said means for acquiring a request for particular image data that is part of a scene comprises:

means for receiving the request for particular image data of the scene.

112. The computationally-implemented system of clause 111, wherein said means for receiving the request for particular image data of the scene comprises:

means for receiving the request for particular image data of the scene from a user device.

113. The computationally-implemented system of clause 112, wherein said means for receiving the request for particular image data of the scene from a user device comprises:

means for receiving the request for particular image data of the scene from a user device that is configured to display at least a portion of the scene.

114. The computationally-implemented system of clause 113, wherein said means for receiving the request for particular image data of the scene from a user device that is configured to display at least a portion of the scene comprises:

means for receiving the request for particular image data of the scene from a user device that is configured to display at least a portion of the scene in a viewfinder.

115. The computationally-implemented system of clause 112, wherein said means for receiving the request for particular image data of the scene from a user device comprises:

means for receiving the request for particular image data of the scene from the user device that is configured to receive a selection of a particular image.

116. The computationally-implemented system of clause 115, wherein said means for receiving the request for particular image data of the scene from the user device that is configured to receive a selection of a particular image comprises:

means for receiving the request for particular image data of the scene from the user device that is configured to receive the selection of the particular image, said

selection based on a view of the scene.

117. The computationally-implemented system of clause 115, wherein said means for receiving the request for particular image data of the scene from the user device that is configured to receive a selection of a particular image comprises:

means for receiving the request for particular image data of the scene from the user device that is configured to receive the selection of the particular image, wherein the user device is one or more of a smartphone, television, computer screen, tablet, camera, appliance, and augmented reality device.

118. The computationally-implemented system of clause 109, wherein said means for acquiring a request for particular image data that is part of a scene comprises:

means for acquiring the request for particular image data of the scene, wherein the scene is the image data collected by the array of more than one image sensor.

119. The computationally-implemented system of clause 109, wherein said means for acquiring a request for particular image data that is part of a scene comprises:

means for acquiring the request for particular image data of the scene, wherein the scene is a representation of the image data collected by the array of more than one image sensor.

120. The computationally-implemented system of clause 119, wherein said means for acquiring the request for particular image data of the scene, wherein the scene is a representation of the image data collected by the array of more than one image sensor comprises:

means for acquiring the request for particular image data of the scene, wherein the scene is a sampling of the image data collected by the array of more than one image sensor.

121. The computationally-implemented system of clause 119, wherein said means for acquiring the request for particular image data of the scene, wherein the scene is a representation of the image data collected by the array of more than one image sensor comprises:

means for acquiring the request for particular image data of the scene, wherein the scene is a subset of the image data collected by the array of more than one image sensor.

122. The computationally-implemented system of clause 119, wherein said means for acquiring the request for particular image data of the scene, wherein the scene is a representation of the image data collected by the array of more than one image sensor comprises:

means for acquiring the request for particular image data of the scene, wherein the scene is a low-resolution version of the image data collected by the array of more than one image sensor.

123. The computationally-implemented system of clause 109, wherein said means for acquiring a request for particular image data that is part of a scene comprises:

means for acquiring the request for particular image data of a scene that is a football game.

124. The computationally-implemented system of clause 109, wherein said means for acquiring a request for particular image data that is part of a scene comprises:

means for acquiring the request for particular image data of a scene that is a street view of an area.

125. The computationally-implemented system of clause 109, wherein said means for acquiring a request for particular image data that is part of a scene comprises:

means for acquiring the request for particular image data of a scene that is a tourist destination.

126. The computationally-implemented system of clause 109, wherein said means for acquiring a request for particular image data that is part of a scene comprises:

means for acquiring the request for particular image data of a scene that is an inside of a home.

127. The computationally-implemented system of clause 109, wherein said means for acquiring a request for particular image data that is part of a scene comprises:

means for acquiring the request for particular image data of the scene, wherein the particular image data is an image that is a portion of the scene.

128. The computationally-implemented system of clause 127, wherein said means for acquiring the request for particular image data of the scene, wherein the particular image data is an image that is a portion of the scene comprises: means for acquiring the request for particular image data of the scene, wherein the particular image data includes image data of a particular football player and the scene is a football field.

129. The computationally-implemented system of clause 127, wherein said means for acquiring the request for particular image data of the scene, wherein the particular image data is an image that is a portion of the scene comprises: means for acquiring the request for particular image data of the scene, wherein the particular image data includes image data of a license plate of a vehicle, and the scene is an image representation of a highway bridge.

130. The computationally-implemented system of clause 109, wherein said means for acquiring a request for particular image data that is part of a scene comprises:

means for acquiring a request for a particular image object located in the scene; and

means for determining the particular image data of the scene that contains the particular image object.

131. The computationally-implemented system of clause 130, wherein said means for acquiring a request for a particular image object located in the scene comprises:

means for acquiring a request for a particular person located in the scene.

132. The computationally-implemented system of clause 130, wherein said means for acquiring a request for a particular image object located in the scene comprises:

means for acquiring a request for a basketball located in the scene that is a basketball arena.

133. The computationally-implemented system of clause 130, wherein said means for acquiring a request for a particular image object located in the scene comprises:

means for acquiring a request for a motor vehicle located in the scene.

134. The computationally-implemented system of clause 130, wherein said means for acquiring a request for a particular image object located in the scene comprises:

means for acquiring a request for any human object representations located in the scene.

135. The computationally-implemented system of clause 130, wherein said means for determining the particular image data of the scene that contains the particular image object comprises:

means for determining the particular image data of the scene that contains the particular image object through application of automated pattern recognition to scene image data.

136. The computationally-implemented system of clause 130, wherein said means for determining the particular image data of the scene that contains the particular image object comprises:

means for determining the particular image data of the scene that contains the particular image object through identification of the particular image object in previous scene data that is image data of the scene from at least one previous moment in time.

137. The computationally-implemented system of clause 130, wherein said means for determining the particular image data of the scene that contains the particular image object comprises:

means for determining the particular image data of the scene that contains the particular image object through identification of the particular image object in cached scene data.

138. The computationally-implemented system of clause 137, wherein said means for determining the particular image data of the scene that contains the particular image object through identification of the particular image object in cached scene data comprises:

means for determining the particular image data of the scene that contains the particular image object through identification of the particular image object in cached scene data that was previously transmitted from the image sensor array that includes more than one image sensor.

139. The computationally-implemented system of clause 137, wherein said means for determining the particular image data of the scene that contains the particular image object through identification of the particular image object in cached scene data comprises:

means for determining the particular image data of the scene that contains the particular image object through identification of the particular image object in cached scene data that was previously transmitted from the image sensor array that includes more than one image sensor at a time when bandwidth was available for a connection to the image sensor array that includes more than one image sensor.
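
By way of a non-limiting illustration of clauses 137-139, the sketch below checks cached scene data that was previously transmitted from the array (for example, while bandwidth was available) for the requested object before re-imaging the scene; the cache layout and labels are invented for illustration:

```python
# Hypothetical sketch of clauses 137-139: resolve a requested object
# from cached scene data previously transmitted from the array.

import time
from typing import Dict, Optional, Set, Tuple

Pos = Tuple[int, int]

class SceneCache:
    """Scene tiles previously transmitted from the image sensor array."""

    def __init__(self) -> None:
        self._tiles: Dict[Pos, dict] = {}

    def store(self, pos: Pos, tile: bytes, objects: Set[str]) -> None:
        self._tiles[pos] = {"tile": tile, "objects": objects,
                            "cached_at": time.time()}

    def locate(self, object_name: str) -> Optional[Pos]:
        """Find the object in the cache instead of tasking the array."""
        for pos, entry in self._tiles.items():
            if object_name in entry["objects"]:
                return pos
        return None

cache = SceneCache()
cache.store((3, 7), b"...", {"rock outcropping", "motor vehicle"})
print(cache.locate("motor vehicle"))  # (3, 7): request just this tile
```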

140. The computationally-implemented system of clause 109, wherein said means for acquiring a request for particular image data that is part of a scene comprises:

means for receiving a first request for first particular image data from the scene from a first requestor; and

means for receiving a second request for second particular image data from the scene from a second requestor that is different than the first requestor.

141. The computationally-implemented system of clause 109, wherein said means for acquiring a request for particular image data that is part of a scene comprises:

means for combining a received first request for first particular image data from the scene and a received second request for second particular image data from the scene.

142. The computationally-implemented system of clause 141, wherein said means for combining a received first request for first particular image data from the scene and a received second request for second particular image data from the scene comprises:

means for combining the received first request for first particular image data from the scene and the received second request for second particular image data from the scene into the request for particular image data that consolidates overlapping requested image data.

143. The computationally-implemented system of clause 142, wherein said means for acquiring a request for particular image data that is part of a scene comprises:

means for receiving a first request for first particular image data from the scene and a second request for second particular image data from the scene; and means for combining the received first request and the received second request into the request for particular image data.

144. The computationally-implemented system of clause 143, wherein said means for combining the received first request and the received second request into the request for particular image data comprises:

means for removing common pixel data between the received first request and the received second request.
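
By way of a non-limiting illustration of clauses 141-144, the sketch below merges two requests into one consolidated request so overlapping pixels are asked for only once; modeling requests as sets of pixel coordinates is a simplification chosen for brevity:

```python
# Hypothetical sketch of clauses 141-144: consolidate two requests,
# de-duplicating the pixel data they have in common.

def pixels(x, y, w, h):
    """Pixel coordinates covered by a rectangular request."""
    return {(px, py) for px in range(x, x + w) for py in range(y, y + h)}

def combine_requests(first: set, second: set) -> set:
    """Clause 144: common pixel data between the requests is kept once."""
    return first | second  # set union de-duplicates the overlap

first = pixels(0, 0, 100, 100)     # first requestor's region
second = pixels(50, 50, 100, 100)  # second requestor's overlapping region
combined = combine_requests(first, second)
print(len(first) + len(second) - len(combined))  # 2500 overlapping pixels
```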

145. The computationally-implemented system of clause 109, wherein said means for acquiring a request for particular image data that is part of a scene comprises:

means for acquiring the request for particular image data that is part of the scene, wherein the particular image data includes video data.

146. The computationally-implemented system of clause 109, wherein said means for acquiring a request for particular image data that is part of a scene comprises:

means for acquiring the request for particular image data that is part of the scene, wherein the particular image data includes audio data.

147. The computationally-implemented system of clause 109, wherein said means for acquiring a request for particular image data that is part of a scene comprises:

means for acquiring the request for particular image data that is part of the scene from a user device that receives the request for particular image data through an audio interface.

148. The computationally-implemented system of clause 147, wherein said means for acquiring the request for particular image data that is part of the scene from a user device that receives the request for particular image data through an audio interface comprises:

means for acquiring the request for particular image data that is part of the scene from a user device that has a microphone that receives a spoken request for particular image data from the user.

149. The computationally-implemented system of clause 109, wherein said means for transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data comprises:

means for transmitting the request for the particular image data of the scene to the image sensor array that includes more than one image sensor and that is configured to capture an image that is larger than the requested image data.

150. The computationally-implemented system of clause 109, wherein said means for transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data comprises:

means for transmitting the request for the particular image data of the scene to the image sensor array that includes two image sensors arranged side by side and angled toward each other and that is configured to capture an image that is larger than the requested image data.

151. The computationally-implemented system of clause 109, wherein said means for transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data comprises:

means for transmitting the request for the particular image data of the scene to the image sensor array that includes the array of image sensors arranged in a grid and that is configured to capture an image that is larger than the requested image data.

152. The computationally-implemented system of clause 109, wherein said means for transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data comprises:

means for transmitting the request for the particular image data of the scene to the image sensor array that includes the array of image sensors arranged in a line such that a field of view is greater than 120 degrees and that is configured to capture an image that is larger than the requested image data.
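
By way of a non-limiting illustration of the geometry recited in clause 152, the sketch below estimates the combined horizontal field of view of sensors fanned out along a line; the 30-degree per-sensor field of view and 25-degree angular spacing are invented numbers:

```python
# Hypothetical sketch of clause 152: combined field of view of image
# sensors arranged in a line, each angled outward from its neighbor.

def combined_fov(sensor_fov_deg: float, yaw_step_deg: float, n: int) -> float:
    """Total horizontal coverage of n sensors fanned out along a line.

    Adjacent sensors overlap when yaw_step < sensor_fov, so the span is
    the angular distance between outermost optical axes plus one FOV.
    """
    return (n - 1) * yaw_step_deg + sensor_fov_deg

# Six 30-degree sensors stepped 25 degrees apart cover 155 degrees,
# satisfying the "greater than 120 degrees" recited in clause 152.
print(combined_fov(30.0, 25.0, 6))  # 155.0
```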

153. The computationally-implemented system of clause 109, wherein said means for transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data comprises:

means for transmitting the request for the particular image data of the scene to the image sensor array that includes a grouping of nonlinearly, nonsequentially arranged stationary image sensors and that is configured to capture an image that is larger than the requested image data.

154. The computationally-implemented system of clause 109, wherein said means for transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data comprises:

means for transmitting the request for the particular image data of the scene to the image sensor array that includes the array of static image sensors and that is configured to capture an image that is larger than the requested image data.

155. The computationally-implemented system of clause 154, wherein said means for transmitting the request for the particular image data of the scene to the image sensor array that includes the array of static image sensors and that is configured to capture an image that is larger than the requested image data comprises:

means for transmitting the request for the particular image data of the scene to the image sensor array that includes the array of image sensors that have a fixed focal length and a fixed field of view and that is configured to capture an image that is larger than the requested image data.

156. The computationally-implemented system of clause 109, wherein said means for transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data comprises:

means for transmitting the request for the particular image data of the scene to the image sensor array that includes the array of image sensors mounted on a

movable platform and that is configured to capture the scene that is larger than the requested image data.

157. The computationally-implemented system of clause 109, wherein said means for transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data comprises:

means for transmitting the request for the particular image data of the scene to the image sensor array that includes more than one image sensor and that is configured to capture the scene that represents more image data than the requested particular image data.

158. The computationally-implemented system of clause 157, wherein said means for transmitting the request for the particular image data of the scene to the image sensor array that includes more than one image sensor and that is configured to capture the scene that represents more image data than the requested particular image data comprises:

means for transmitting the request for the particular image data of the scene to the image sensor array that includes more than one image sensor and that is configured to capture the scene that represents more than ten times as much image data as the requested particular image data.

159. The computationally-implemented system of clause 157, wherein said means for transmitting the request for the particular image data of the scene to the image sensor array that includes more than one image sensor and that is configured to capture the scene that represents more image data than the requested particular image data comprises:

means for transmitting the request for the particular image data of the scene to the image sensor array that includes more than one image sensor and that is configured to capture the scene that represents more than one hundred times as much image data as the requested particular image data.

160. The computationally-implemented system of clause 109, wherein said means for transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data comprises: means for transmitting the request for the particular image data to the image sensor array that is configured to capture the scene that represents a greater field of view than the requested particular image data.

161. The computationally-implemented system of clause 109, wherein said means for transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data comprises: means for modifying the request for the particular image data; and

means for transmitting the modified request for the particular image data to the image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data.

162. The computationally-implemented system of clause 161, wherein said means for modifying the request for the particular image data comprises: means for removing designated image data from the request for the particular image data.

163. The computationally-implemented system of clause 162, wherein said means for removing designated image data from the request for the particular image data comprises:

means for removing designated image data from the request for the particular image data based on previously stored image data.

164. The computationally-implemented system of clause 163, wherein said means for removing designated image data from the request for the particular image data based on previously stored image data comprises:

means for removing designated image data from the request for the particular image data based on previously stored image data retrieved from the image sensor array.

165. The computationally-implemented system of clause 163, wherein said means for removing designated image data from the request for the particular image data based on previously stored image data comprises:

means for removing designated image data from the request for the particular image data based on previously stored image data that is an earlier-in-time version of the designated image data.

166. The computationally-implemented system of clause 165, wherein said means for removing designated image data from the request for the particular image data based on previously stored image data that is an earlier-in-time version of the designated image data comprises:

means for removing image data of one or more static objects from the request for the particular image data that is a view of a street, based on previously stored image data of the one or more static objects from an earlier-in-time version captured by the image sensor array.

167. The computationally-implemented system of clause 166, wherein said means for removing image data of one or more static objects from the request for the particular image data that is a view of a street, based on previously stored image data of the one or more static objects from an earlier-in-time version captured by the image sensor array comprises:

means for removing image data of one or more buildings, of the one or more static objects, from the request for the particular image data that is the view of the street, based on previously stored image data of the one or more static objects from an earlier-in-time version captured by the image sensor array.

168. The computationally-implemented system of clause 161, wherein said means for modifying the request for the particular image data comprises: means for removing portions of the request for the particular image data through pixel interpolation or extrapolation of portions of the request for the particular image data.

169. The computationally-implemented system of clause 168, wherein said means for removing portions of the request for the particular image data through pixel interpolation or extrapolation of portions of the request for the particular image data comprises:

means for removing one or more static objects from the request for the particular image data through pixel interpolation of the portions of the request for the particular image data.

170. The computationally-implemented system of clause 161, wherein said means for modifying the request for the particular image data comprises: means for identifying at least one portion of the request for the particular image data that is already stored in a memory; and

means for removing the identified portion of the request for the particular image data.

171. The computationally-implemented system of clause 170, wherein said means for identifying at least one portion of the request for the particular image data that is already stored in a memory comprises:

means for identifying at least one portion of the request for particular image data that was previously captured by the image sensor array.

172. The computationally-implemented system of clause 170, wherein said means for identifying at least one portion of the request for the particular image data that is already stored in a memory comprises:

means for identifying one or more static objects of the request for particular image data that was previously captured by the image sensor array.

173. The computationally-implemented system of clause 172, wherein said means for identifying one or more static objects of the request for particular image data that was previously captured by the image sensor array comprises:

means for identifying one or more natural rock outcroppings of the request for particular image data that was previously captured by the image sensor array.

174. The computationally-implemented system of clause 109, wherein said means for transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data comprises:

means for determining a size of the request for the particular image data; and means for transmitting the request for the particular image data for which the size has been determined to the image sensor array that is configured to capture the scene.

175. The computationally-implemented system of clause 174, wherein said means for determining a size of the request for the particular image data comprises:

means for determining the size of the request for the particular image data at least partially based on a property of a user device that requested at least a portion of the particular image data.

176. The computationally-implemented system of clause 175, wherein said means for determining the size of the request for the particular image data at least partially based on a property of a user device that requested at least a portion of the particular image data comprises:

means for determining the size of the request for the particular image data at least partially based on a resolution of the user device that requested the particular image data.

177. The computationally-implemented system of clause 174, wherein said means for determining a size of the request for the particular image data comprises:

means for determining the size of the request for the particular image data at least partially based on a device access level of a user device that requested at least a portion of the particular image data.

178. The computationally-implemented system of clause 174, wherein said means for determining a size of the request for the particular image data comprises:

means for determining the size of the request for the particular image data at least partially based on an available amount of bandwidth for communication with the image sensor array.

179. The computationally-implemented system of clause 174, wherein said means for determining a size of the request for the particular image data comprises:

means for determining the size of the request for the particular image data at least partially based on an amount of time that a user device that requested at least a portion of the particular image data has requested image data.

180. The computationally-implemented system of clause 174, wherein said means for determining a size of the request for the particular image data comprises:

means for determining the size of the request for the particular image data at least partially based on an available amount of bandwidth for communication with a user device that requested at least a portion of the particular image data.

181. The computationally-implemented system of clause 109, wherein said means for receiving only the particular image data from the image sensor array comprises:

means for receiving only the particular image data from the image sensor array,

wherein data from the scene other than the particular image data is discarded.

182. The computationally-implemented system of clause 109, wherein said means for receiving only the particular image data from the image sensor array comprises:

means for receiving only the particular image data from the image sensor array, wherein data from the scene other than the particular image data is stored at the image sensor array.

183. The computationally-implemented system of clause 109, wherein said means for receiving only the particular image data from the image sensor array comprises:

means for receiving the particular image data from the image sensor array in near-real time.

184. The computationally-implemented system of clause 109, wherein said means for receiving only the particular image data from the image sensor array comprises:

means for receiving the particular image data from the image sensor array in near-real time; and

means for retrieving data from the scene other than the particular image data at a later time.

185. The computationally-implemented system of clause 184, wherein said means for retrieving data from the scene other than the particular image data at a later time comprises:

means for retrieving data from the scene other than the particular image data at a time at which bandwidth is available to the image sensor array.

186. The computationally-implemented system of clause 184, wherein said means for retrieving data from the scene other than the particular image data at a later time comprises:

means for retrieving data from the scene other than the particular image data at a time that represents off-peak usage for the image sensor array.

187. The computationally-implemented system of clause 184, wherein said means for retrieving data from the scene other than the particular image data at a later time comprises:

means for retrieving data from the scene other than the particular image data at a time when no particular image data is requested.

188. The computationally-implemented system of clause 184, wherein said means for retrieving data from the scene other than the particular image data at a later time comprises:

means for retrieving data from the scene other than the particular image data at a time at which fewer users are requesting particular image data than for which the image sensor array has capacity.

189. The computationally-implemented system of clause 109, wherein said means for receiving only the particular image data from the image sensor array comprises:

means for receiving only the particular image data that includes audio data from the sensor array.

190. The computationally-implemented system of clause 109, wherein said means for receiving only the particular image data from the image sensor array comprises:

means for receiving only the particular image data that was determined to contain a particular requested image object from the image sensor array.

191. The computationally-implemented system of clause 190, wherein said means for receiving only the particular image data that was determined to contain a particular requested image object from the image sensor array comprises:

means for receiving only the particular image data that was determined to contain a particular requested image object by the image sensor array.

192. The computationally-implemented system of clause 109, wherein said means for transmitting the received particular image data to at least one requestor comprises:

means for transmitting the received particular image data to a user device.

193. The computationally-implemented system of clause 192, wherein said means for transmitting the received particular image data to a user device comprises:

means for transmitting at least a portion of the received particular image data to a user device that requested the particular image data.

194. The computationally-implemented system of clause 109, wherein said means for transmitting the received particular image data to at least one requestor comprises:

means for separating the received particular image data into a set of one or more requested images; and

means for transmitting at least one image of the set of one or more requested images to a particular requestor that requested the at least one image.

195. The computationally-implemented system of clause 194, wherein said means for separating the received particular image data into a set of one or more requested images comprises:

means for separating the received particular image data into a first requested image and a second requested image.

196. The computationally-implemented system of clause 109, wherein said means for transmitting the received particular image data to at least one requestor comprises:

means for transmitting the received particular image data to a user device that requested an image that is part of the received particular image data.

197. The computationally-implemented system of clause 109, wherein said means for transmitting the received particular image data to at least one requestor comprises:

means for transmitting a first portion of the received particular image data to a first requestor; and

means for transmitting a second portion of the received particular image data to a second requestor.

198. The computationally-implemented system of clause 197, wherein said means for transmitting a first portion of the received particular image data to a first requestor comprises:

means for transmitting the first portion of the received particular image data to the first requestor that requested the first portion of the received particular image data.

199. The computationally-implemented system of clause 198, wherein said means for transmitting the first portion of the received particular image data to the first requestor that requested the first portion of the received particular image data comprises:

means for transmitting an image that contains a particular football player to a television device that is configured to display a football game and that requested the image that contains the particular football player.

200. The computationally-implemented system of clause 197, wherein said means for transmitting a second portion of the received particular image data to a second requestor comprises:

means for transmitting the second portion of the received particular image data to the second requestor that requested the second portion of the received particular image data.

201. The computationally-implemented system of clause 200, wherein said means for transmitting the second portion of the received particular image data to the second requestor that requested the second portion of the received particular image data comprises:

means for transmitting an image that contains a view of a motor vehicle to a tablet device that requested a street view image of a particular corner of a city.

202. The computationally-implemented system of clause 109, wherein said means for transmitting the received particular image data to at least one requestor comprises:

means for transmitting at least a portion of the received particular image data without alteration to at least one requestor.

203. The computationally-implemented system of clause 109, wherein said means for transmitting the received particular image data to at least one requestor comprises:

means for adding supplemental data to at least a portion of the received particular image data to generate transmission image data; and

means for transmitting the generated transmission image data to at least one requestor.

204. The computationally-implemented system of clause 203, wherein said means for adding supplemental data to at least a portion of the received particular image data to generate transmission image data comprises:

means for adding advertisement data to at least a portion of the received particular image data to generate transmission image data.

205. The computationally-implemented system of clause 204, wherein said means for adding advertisement data to at least a portion of the received particular image data to generate transmission image data comprises:

means for adding context-based advertisement data that is at least partially based on the received particular image data to at least the portion of the received particular image data.

206. The computationally-implemented system of clause 205, wherein said means for adding context-based advertisement data that is at least partially based on the received particular image data to at least the portion of the received particular image data comprises:

means for adding an animal rights donation fund advertisement data that is at least partially based on the received particular image data of a tiger at a jungle oasis, to the received particular image data of the tiger at the jungle oasis.

207. The computationally-implemented system of clause 203, wherein said means for adding supplemental data to at least a portion of the received particular image data to generate transmission image data comprises:

means for adding related visual data related to the received particular image data to at least a portion of the received particular image data to generate transmission image data.

208. The computationally-implemented system of clause 207, wherein said means for adding related visual data related to the received particular image data to at least a portion of the received particular image data to generate transmission image data comprises:

means for adding fantasy football statistical data to the received particular image data of a football quarterback that appears in the received particular image data of the quarterback in a football game to generate the transmission image data.

209. The computationally-implemented system of clause 109, wherein said means for transmitting the received particular image data to at least one requestor comprises:

means for modifying data of a portion of the received particular image data to generate transmission image data; and

means for transmitting the generated transmission image data to at least one requestor.

210. The computationally-implemented system of clause 209, wherein said means for modifying data of a portion of the received particular image data to generate transmission image data comprises:

means for performing image manipulation modifications of the received particular image data to generate transmission image data.

211. The computationally-implemented system of clause 210, wherein said means for performing image manipulation modifications of the received particular image data to generate transmission image data comprises:

means for performing contrast balancing on the received particular image data to generate transmission image data.

212. The computationally-implemented system of clause 210, wherein said means for performing image manipulation modifications of the received particular image data to generate transmission image data comprises:

means for performing color modification balancing on the received particular image data to generate transmission image data.

213. The computationally-implemented system of clause 209, wherein said means for modifying data of a portion of the received particular image data to generate transmission image data comprises:

means for redacting at least a portion of the received particular image data to generate the transmission image data.

214. The computationally-implemented system of clause 213, wherein said means for redacting at least a portion of the received particular image data to generate the transmission image data comprises:

means for redacting at least a portion of the received particular image data based on a security clearance level of the requestor.

215. The computationally-implemented system of clause 214, wherein said means for redacting at least a portion of the received particular image data based on a security clearance level of the requestor comprises:

means for redacting a tank from the received particular image data that includes a satellite image that includes a military base, based on an insufficient security clearance level of a device that requested the particular image data.

216. The computationally-implemented system of clause 109, wherein said means for transmitting the received particular image data to at least one requestor comprises:

means for transmitting a lower-resolution version of the received particular image data to the at least one requestor; and

means for transmitting a full-resolution version of the received particular image data to the at least one requestor.

217. A computationally-implemented system, comprising:

circuitry for acquiring a request for particular image data that is part of a scene;

circuitry for transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data;

circuitry for receiving only the particular image data from the image sensor array; and

circuitry for transmitting the received particular image data to at least one requestor.

218. The computationally-implemented system of clause 217, wherein said circuitry for acquiring a request for particular image data that is part of a scene comprises:

circuitry for acquiring the request for particular image data of the scene that includes one or more images.

219. The computationally-implemented system of clause 217, wherein said circuitry for acquiring a request for particular image data that is part of a scene comprises:

circuitry for receiving the request for particular image data of the scene.

220. The computationally-implemented system of clause 219, wherein said circuitry for receiving the request for particular image data of the scene comprises:

circuitry for receiving the request for particular image data of the scene from a user device.

221. The computationally-implemented system of clause 220, wherein said circuitry for receiving the request for particular image data of the scene from a user device comprises:

circuitry for receiving the request for particular image data of the scene from a user device that is configured to display at least a portion of the scene.

222. The computationally-implemented system of clause 221, wherein said circuitry for receiving the request for particular image data of the scene from a user device that is configured to display at least a portion of the scene comprises:

circuitry for receiving the request for particular image data of the scene from a user device that is configured to display at least a portion of the scene in a viewfinder.

223. The computationally-implemented system of clause 220, wherein said circuitry for receiving the request for particular image data of the scene from a user device comprises:

circuitry for receiving the request for particular image data of the scene from the user device that is configured to receive a selection of a particular image.

224. The computationally-implemented system of clause 223, wherein said circuitry for receiving the request for particular image data of the scene from the user device that is configured to receive a selection of a particular image comprises:

circuitry for receiving the request for particular image data of the scene from the user device that is configured to receive the selection of the particular image, said selection based on a view of the scene.

225. The computationally-implemented system of clause 223, wherein said circuitry for receiving the request for particular image data of the scene from the user device that is configured to receive a selection of a particular image comprises:

circuitry for receiving the request for particular image data of the scene from the user device that is configured to receive the selection of the particular image, wherein the user device is one or more of a smartphone, television, computer screen, tablet, camera, appliance, and augmented reality device.

226. The computationally-implemented system of clause 217, wherein said circuitry for acquiring a request for particular image data that is part of a scene comprises:

circuitry for acquiring the request for particular image data of the scene, wherein the scene is the image data collected by the array of more than one image sensor.

227. The computationally-implemented system of clause 217, wherein said circuitry for acquiring a request for particular image data that is part of a scene comprises:

circuitry for acquiring the request for particular image data of the scene, wherein the scene is a representation of the image data collected by the array of more than one image sensor.

228. The computationally-implemented system of clause 227, wherein said circuitry for acquiring the request for particular image data of the scene, wherein the scene is a representation of the image data collected by the array of more than one image sensor comprises:

circuitry for acquiring the request for particular image data of the scene, wherein the scene is a sampling of the image data collected by the array of more than one image sensor.

229. The computationally-implemented system of clause 227, wherein said circuitry for acquiring the request for particular image data of the scene, wherein the scene is a representation of the image data collected by the array of more than one image sensor comprises:

circuitry for acquiring the request for particular image data of the scene, wherein the scene is a subset of the image data collected by the array of more than one image sensor.

230. The computationally-implemented system of clause 227, wherein said circuitry for acquiring the request for particular image data of the scene, wherein the scene is a representation of the image data collected by the array of more than one image sensor comprises:

circuitry for acquiring the request for particular image data of the scene, wherein the scene is a low-resolution version of the image data collected by the array of more than one image sensor.

231. The computationally-implemented system of clause 217, wherein said circuitry for acquiring a request for particular image data that is part of a scene comprises:

circuitry for acquiring the request for particular image data of a scene that is a football game.

232. The computationally-implemented system of clause 217, wherein said circuitry for acquiring a request for particular image data that is part of a scene comprises:

circuitry for acquiring the request for particular image data of a scene that is a street view of an area.

233. The computationally-implemented system of clause 217, wherein said circuitry for acquiring a request for particular image data that is part of a scene comprises:

circuitry for acquiring the request for particular image data of a scene that is a tourist destination.

234. The computationally-implemented system of clause 217, wherein said circuitry for acquiring a request for particular image data that is part of a scene comprises:

circuitry for acquiring the request for particular image data of a scene that is an inside of a home.

235. The computationally-implemented system of clause 217, wherein said circuitry for acquiring a request for particular image data that is part of a scene comprises:

circuitry for acquiring the request for particular image data of the scene, wherein the particular image data is an image that is a portion of the scene.

236. The computationally-implemented system of clause 235, wherein said circuitry for acquiring the request for particular image data of the scene, wherein the particular image data is an image that is a portion of the scene comprises:

circuitry for acquiring the request for particular image data of the scene, wherein the particular image data includes image data of a particular football player and the scene is a football field.

237. The computationally-implemented system of clause 235, wherein said circuitry for acquiring the request for particular image data of the scene, wherein the particular image data is an image that is a portion of the scene comprises:

circuitry for acquiring the request for particular image data of the scene, wherein the particular image data includes image data of a license plate of a vehicle, and the scene is an image representation of a highway bridge.

238. The computationally-implemented system of clause 217, wherein said circuitry for acquiring a request for particular image data that is part of a scene comprises:

circuitry for acquiring a request for a particular image object located in the scene; and

circuitry for determining the particular image data of the scene that contains the particular image object.

239. The computationally-implemented system of clause 238, wherein said circuitry for acquiring a request for a particular image object located in the scene comprises:

circuitry for acquiring a request for a particular person located in the scene.

240. The computationally-implemented system of clause 238, wherein said circuitry for acquiring a request for a particular image object located in the scene comprises:

circuitry for acquiring a request for a basketball located in the scene that is a basketball arena.

241. The computationally-implemented system of clause 238, wherein said circuitry for acquiring a request for a particular image object located in the scene comprises:

circuitry for acquiring a request for a motor vehicle located in the scene.

242. The computationally-implemented system of clause 238, wherein said circuitry for acquiring a request for a particular image object located in the scene comprises:

circuitry for acquiring a request for any human object representations located in the scene.

243. The computationally-implemented system of clause 238, wherein said circuitry for determining the particular image data of the scene that contains the particular image object comprises:

circuitry for determining the particular image data of the scene that contains the particular image object through application of automated pattern recognition to scene image data.

244. The computationally-implemented system of clause 238, wherein said circuitry for determining the particular image data of the scene that contains the particular image object comprises:

circuitry for determining the particular image data of the scene that contains the particular image object through identification of the particular image object in previous scene data that is image data of the scene from at least one previous moment in time.

245. The computationally-implemented system of clause 238, wherein said circuitry for determining the particular image data of the scene that contains the particular image object comprises:

circuitry for determining the particular image data of the scene that contains the particular image object through identification of the particular image object in cached scene data.

246. The computationally-implemented system of clause 245, wherein said circuitry for determining the particular image data of the scene that contains the particular image object through identification of the particular image object in cached scene data comprises:

circuitry for determining the particular image data of the scene that contains the particular image object through identification of the particular image object in cached scene data that was previously transmitted from the image sensor array that includes more than one image sensor.

247. The computationally-implemented system of clause 245, wherein said circuitry for determining the particular image data of the scene that contains the particular image object through identification of the particular image object in cached scene data comprises:

circuitry for determining the particular image data of the scene that contains the particular image object through identification of the particular image object in cached scene data that was previously transmitted from the image sensor array that includes more than one image sensor at a time when bandwidth was available for a connection to the image sensor array that includes more than one image sensor.
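Clauses 243-247 describe determining the particular image data by locating a requested image object in cached scene data, e.g., tiles previously transmitted while spare bandwidth was available. As a non-limiting illustration only, the following Python sketch shows one plausible shape of that lookup; the names SceneCache, CachedTile, and matcher are hypothetical assumptions and do not appear in the application.

```python
# Hypothetical sketch of clauses 243-247: find the requested image object in
# cached scene data first, and only fall back to a live search on a miss.

from dataclasses import dataclass
from typing import Callable, List, Optional, Tuple

@dataclass
class CachedTile:
    region: Tuple[int, int, int, int]   # (x, y, width, height) within the scene
    captured_at: float                  # timestamp of the earlier capture
    pixels: bytes                       # raw tile data previously transmitted

class SceneCache:
    """Tiles previously transmitted from the image sensor array, e.g., during
    periods when bandwidth was available (clause 247)."""

    def __init__(self) -> None:
        self._tiles: List[CachedTile] = []

    def add(self, tile: CachedTile) -> None:
        self._tiles.append(tile)

    def find_object(self, matcher: Callable[[bytes], bool]
                    ) -> Optional[Tuple[int, int, int, int]]:
        # 'matcher' stands in for any automated pattern-recognition routine
        # (clause 243); it returns True when the object appears in a tile.
        for tile in self._tiles:
            if matcher(tile.pixels):
                return tile.region
        return None

def determine_particular_image_data(cache: SceneCache,
                                    matcher: Callable[[bytes], bool]):
    """Return the scene region believed to contain the requested object,
    or None, signalling that a fresh capture/search is needed instead."""
    return cache.find_object(matcher)
```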

248. The computationally-implemented system of clause 217, wherein said circuitry for acquiring a request for particular image data that is part of a scene comprises:

circuitry for receiving a first request for first particular image data from the scene from a first requestor; and

circuitry for receiving a second request for second particular image data from the scene from a second requestor that is different than the first requestor.

249. The computationally-implemented system of clause 217, wherein said circuitry for acquiring a request for particular image data that is part of a scene comprises:

circuitry for combining a received first request for first particular image data from the scene and a received second request for second particular image data from the scene.

250. The computationally-implemented system of clause 249, wherein said circuitry for combining a received first request for first particular image data from the scene and a received second request for second particular image data from the scene comprises:

circuitry for combining the received first request for first particular image data from the scene and the received second request for second particular image data from the scene into the request for particular image data that consolidates overlapping requested image data.

251. The computationally-implemented system of clause 250, wherein said circuitry for acquiring a request for particular image data that is part of a scene comprises:

circuitry for receiving a first request for first particular image data from the scene and a second request for second particular image data from the scene; and

circuitry for combining the received first request and the received second request into the request for particular image data.

252. The computationally-implemented system of clause 251, wherein said circuitry for combining the received first request and the received second request into the request for particular image data comprises:

circuitry for removing common pixel data between the received first request and the received second request.
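Clauses 249-252 describe combining two requests and removing the common pixel data between them. The following Python sketch is one minimal, assumed rendering of that consolidation using a set union over requested pixel coordinates; the region tuples and function names are illustrative only and are not prescribed by the application.

```python
# Hypothetical sketch of clauses 249-252: combine two requests for regions of
# the scene into one consolidated request in which overlapping pixels are
# requested only once. Regions are axis-aligned rectangles (x, y, w, h).

from typing import Set, Tuple

Region = Tuple[int, int, int, int]
Pixel = Tuple[int, int]

def to_pixels(region: Region) -> Set[Pixel]:
    x, y, w, h = region
    return {(px, py) for px in range(x, x + w) for py in range(y, y + h)}

def combine_requests(first: Region, second: Region) -> Set[Pixel]:
    """Union of the two requested regions; the set union removes the common
    pixel data between the first and second requests (clause 252)."""
    return to_pixels(first) | to_pixels(second)

# Two requestors ask for overlapping views of the same scene:
consolidated = combine_requests((0, 0, 100, 100), (50, 50, 100, 100))
print(len(consolidated))  # 17500, not 20000: 2500 shared pixels sent once
```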

253. The computationally-implemented system of clause 217, wherein said circuitry for acquiring a request for particular image data that is part of a scene comprises:

circuitry for acquiring the request for particular image data that is part of the scene, wherein the particular image data includes video data.

254. The computationally-implemented system of clause 217, wherein said circuitry for acquiring a request for particular image data that is part of a scene comprises:

circuitry for acquiring the request for particular image data that is part of the scene, wherein the particular image data includes audio data.

255. The computationally-implemented system of clause 217, wherein said circuitry for acquiring a request for particular image data that is part of a scene comprises:

circuitry for acquiring the request for particular image data that is part of the scene from a user device that receives the request for particular image data through an audio interface.

256. The computationally-implemented system of clause 255, wherein said circuitry for acquiring the request for particular image data that is part of the scene from a user device that receives the request for particular image data through an audio interface comprises:

circuitry for acquiring the request for particular image data that is part of the scene from a user device that has a microphone that receives a spoken request for particular image data from the user.

257. The computationally-implemented system of clause 217, wherein said circuitry for transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data comprises:

circuitry for transmitting the request for the particular image data of the scene to the image sensor array that includes more than one image sensor and that is configured to capture an image that is larger than the requested image data.

258. The computationally-implemented system of clause 217, wherein said circuitry for transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data comprises:

circuitry for transmitting the request for the particular image data of the scene to the image sensor array that includes two image sensors arranged side by side and angled toward each other and that is configured to capture an image that is larger than the requested image data.

259. The computationally-implemented system of clause 217, wherein said circuitry for transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data comprises:

circuitry for transmitting the request for the particular image data of the scene to the image sensor array that includes the array of image sensors arranged in a grid and that is configured to capture an image that is larger than the requested image data.

260. The computationally-implemented system of clause 217, wherein said circuitry for transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data comprises:

circuitry for transmitting the request for the particular image data of the scene to the image sensor array that includes the array of image sensors arranged in a line such that a field of view is greater than 120 degrees and that is configured to capture an image that is larger than the requested image data.

261. The computationally-implemented system of clause 217, wherein said circuitry for transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data comprises:

circuitry for transmitting the request for the particular image data of the scene to the image sensor array that includes a grouping of nonlinearly, nonsequentially arranged stationary image sensors and that is configured to capture an image that is larger than the requested image data.

262. The computationally-implemented system of clause 217, wherein said circuitry for transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data comprises:

circuitry for transmitting the request for the particular image data of the scene to the image sensor array that includes the array of static image sensors and that is configured to capture an image that is larger than the requested image data.

263. The computationally-implemented system of clause 262, wherein said circuitry for transmitting the request for the particular image data of the scene to the image sensor array that includes the array of static image sensors and that is configured to capture an image that is larger than the requested image data comprises:

circuitry for transmitting the request for the particular image data of the scene to the image sensor array that includes the array of image sensors that have a fixed focal length and a fixed field of view and that is configured to capture an image that is larger than the requested image data.

264. The computationally-implemented system of clause 217, wherein said circuitry for transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data comprises:

circuitry for transmitting the request for the particular image data of the scene to the image sensor array that includes the array of image sensors mounted on a movable platform and that is configured to capture the scene that is larger than the requested image data.

265. The computationally-implemented system of clause 217, wherein said circuitry for transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data comprises:

circuitry for transmitting the request for the particular image data of the scene to the image sensor array that includes more than one image sensor and that is configured to capture the scene that represents more image data than the requested particular image data.

266. The computationally-implemented system of clause 265, wherein said circuitry for transmitting the request for the particular image data of the scene to the image sensor array that includes more than one image sensor and that is configured to capture the scene that represents more image data than the requested particular image data comprises:

circuitry for transmitting the request for the particular image data of the scene to the image sensor array that includes more than one image sensor and that is configured to capture the scene that represents more than ten times as much image data as the requested particular image data.

267. The computationally-implemented system of clause 265, wherein said circuitry for transmitting the request for the particular image data of the scene to the image sensor array that includes more than one image sensor and that is configured to capture the scene that represents more image data than the requested particular image data comprises:

circuitry for transmitting the request for the particular image data of the scene to the image sensor array that includes more than one image sensor and that is configured to capture the scene that represents more than one hundred times as much image data as the requested particular image data.

268. The computationally-implemented system of clause 217, wherein said circuitry for transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data comprises:

circuitry for transmitting the request for the particular image data to the image sensor array that is configured to capture the scene that represents a greater field of view than the requested particular image data.

269. The computationally-implemented system of clause 217, wherein said circuitry for transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data comprises:

circuitry for modifying the request for the particular image data; and

circuitry for transmitting the modified request for the particular image data to the image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data.

270. The computationally-implemented system of clause 269, wherein said circuitry for modifying the request for the particular image data comprises:

circuitry for removing designated image data from the request for the particular image data.

271. The computationally-implemented system of clause 270, wherein said circuitry for removing designated image data from the request for the particular image data comprises:

circuitry for removing designated image data from the request for the particular image data based on previously stored image data.

272. The computationally-implemented system of clause 271, wherein said circuitry for removing designated image data from the request for the particular image data based on previously stored image data comprises:

circuitry for removing designated image data from the request for the particular image data based on previously stored image data retrieved from the image sensor array.

273. The computationally-implemented system of clause 271, wherein said circuitry for removing designated image data from the request for the particular image data based on previously stored image data comprises:

circuitry for removing designated image data from the request for the particular image data based on previously stored image data that is an earlier-in-time version of the designated image data.

274. The computationally-implemented system of clause 273, wherein said circuitry for removing designated image data from the request for the particular image data based on previously stored image data that is an earlier-in-time version of the designated image data comprises:

circuitry for removing image data of one or more static objects from the request for the particular image data that is a view of a street, based on previously stored image data of the one or more static objects from an earlier-in-time version captured by the image sensor array.

275. The computationally-implemented system of clause 274, wherein said circuitry for removing image data of one or more static objects from the request for the particular image data that is a view of a street, based on previously stored image data of the one or more static objects from an earlier-in-time version captured by the image sensor array comprises:

circuitry for removing image data of one or more buildings from the request for the particular image data that is the view of the street, based on previously stored image data of the one or more static objects from an earlier-in-time version captured by the image sensor array.

276. The computationally-implemented system of clause 269, wherein said circuitry for modifying the request for the particular image data comprises:

circuitry for removing portions of the request for the particular image data through pixel interpolation or extrapolation of portions of the request for the particular image data.

277. The computationally-implemented system of clause 276, wherein said circuitry for removing portions of the request for the particular image data through pixel interpolation or extrapolation of portions of the request for the particular image data comprises:

circuitry for removing one or more static objects from the request for the particular image data through pixel interpolation of the portions of the request for the particular image data.

278. The computationally-implemented system of clause 269, wherein said circuitry for modifying the request for the particular image data comprises:

circuitry for identifying at least one portion of the request for the particular image data that is already stored in a memory; and

circuitry for removing the identified portion of the request for the particular image data.

279. The computationally-implemented system of clause 278, wherein said circuitry for identifying at least one portion of the request for the particular image data that is already stored in a memory comprises:

circuitry for identifying at least one portion of the request for particular image data that was previously captured by the image sensor array.

280. The computationally-implemented system of clause 278, wherein said circuitry for identifying at least one portion of the request for the particular image data that is already stored in a memory comprises:

circuitry for identifying one or more static objects of the request for particular image data that was previously captured by the image sensor array.

281. The computationally-implemented system of clause 280, wherein said circuitry for identifying one or more static objects of the request for particular image data that was previously captured by the image sensor array comprises:

circuitry for identifying one or more natural rock outcroppings of the request for particular image data that was previously captured by the image sensor array.
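Clauses 278-281 describe pruning a request by identifying portions, e.g., static objects such as buildings or natural rock outcroppings, that are already stored in a memory. A minimal sketch follows, assuming a pixel-keyed cache; prune_request and the cache layout are hypothetical and not drawn from the application.

```python
# Hypothetical sketch of clauses 278-281: before transmitting, identify the
# portions of a request that are already stored in memory (e.g., static
# objects captured earlier by the image sensor array) and remove them.

from typing import Dict, Set, Tuple

Pixel = Tuple[int, int]

def prune_request(requested: Set[Pixel],
                  cached: Dict[Pixel, bytes]) -> Set[Pixel]:
    """Return only the pixels that must actually be fetched from the array;
    cached pixels are filled in locally when the response is assembled."""
    return {p for p in requested if p not in cached}

requested = {(x, y) for x in range(4) for y in range(4)}
cache = {(0, 0): b"...", (1, 0): b"..."}   # earlier-in-time static pixels
print(len(prune_request(requested, cache)))  # 14 of 16 pixels remain
```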

282. The computationally-implemented system of clause 217, wherein said circuitry for transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data comprises:

circuitry for determining a size of the request for the particular image data; and

circuitry for transmitting the request for the particular image data for which the size has been determined to the image sensor array that is configured to capture the scene.

283. The computationally-implemented system of clause 282, wherein said circuitry for determining a size of the request for the particular image data comprises:

circuitry for determining the size of the request for the particular image data at least partially based on a property of a user device that requested at least a portion of the particular image data.

284. The computationally-implemented system of clause 283, wherein said circuitry for determining the size of the request for the particular image data at least partially based on a property of a user device that requested at least a portion of the particular image data comprises:

circuitry for determining the size of the request for the particular image data at least partially based on a resolution of the user device that requested the particular image data.

285. The computationally-implemented system of clause 282, wherein said circuitry for determining a size of the request for the particular image data comprises:

circuitry for determining the size of the request for the particular image data at least partially based on a device access level of a user device that requested at least a portion of the particular image data.

286. The computationally-implemented system of clause 282, wherein said circuitry for determining a size of the request for the particular image data comprises:

circuitry for determining the size of the request for the particular image data at least partially based on an available amount of bandwidth for communication with the image sensor array.

287. The computationally-implemented system of clause 282, wherein said circuitry for determining a size of the request for the particular image data comprises:

circuitry for determining the size of the request for the particular image data at least partially based on an amount of time that a user device that requested at least a portion of the particular image data has requested image data.

288. The computationally-implemented system of clause 282, wherein said circuitry for determining a size of the request for the particular image data comprises:

circuitry for determining the size of the request for the particular image data at least partially based on an available amount of bandwidth for communication with a user device that requested at least a portion of the particular image data.
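Clauses 282-288 describe determining the size of the request from the device's resolution, its access level, and the bandwidth available on the links to the array and to the device. The sketch below assumes a simple minimum over per-factor budgets; the access-level tiering constant and the one-second delivery target are invented for illustration and are not taken from the application.

```python
# Hypothetical sketch of clauses 282-288: derive a pixel budget for the
# request from device and link properties. All parameter names are
# illustrative assumptions.

def determine_request_size(device_resolution: tuple,
                           device_access_level: int,
                           array_bandwidth_bps: float,
                           device_bandwidth_bps: float,
                           bytes_per_pixel: int = 3) -> int:
    """Return a pixel budget that never exceeds what the device can display,
    what its access level permits, or what either link can carry within an
    (assumed) one-second delivery target."""
    display_budget = device_resolution[0] * device_resolution[1]
    access_budget = device_access_level * 1_000_000   # assumed tiering
    link_budget = int(min(array_bandwidth_bps, device_bandwidth_bps)
                      / 8 / bytes_per_pixel)
    return min(display_budget, access_budget, link_budget)

# A 1334x750 device on a 20 Mbps link is limited by the link, not the screen:
print(determine_request_size((1334, 750), 2, 50e6, 20e6))  # 833333
```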

289. The computationally-implemented system of clause 217, wherein said circuitry for receiving only the particular image data from the image sensor array comprises:

circuitry for receiving only the particular image data from the image sensor array, wherein data from the scene other than the particular image data is discarded.

290. The computationally-implemented system of clause 217, wherein said circuitry for receiving only the particular image data from the image sensor array comprises:

circuitry for receiving only the particular image data from the image sensor array, wherein data from the scene other than the particular image data is stored at the image sensor array.

291. The computationally-implemented system of clause 217, wherein said circuitry for receiving only the particular image data from the image sensor array comprises:

circuitry for receiving the particular image data from the image sensor array in near-real time.

292. The computationally-implemented system of clause 217, wherein said circuitry for receiving only the particular image data from the image sensor array comprises:

circuitry for receiving the particular image data from the image sensor array in near-real time; and

circuitry for retrieving data from the scene other than the particular image data at a later time.

293. The computationally-implemented system of clause 292, wherein said circuitry for retrieving data from the scene other than the particular image data at a later time comprises:

circuitry for retrieving data from the scene other than the particular image data at a time at which bandwidth is available to the image sensor array.

294. The computationally-implemented system of clause 292, wherein said circuitry for retrieving data from the scene other than the particular image data at a later time comprises:

circuitry for retrieving data from the scene other than the particular image data at a time that represents off-peak usage for the image sensor array.

295. The computationally-implemented system of clause 292, wherein said circuitry for retrieving data from the scene other than the particular image data at a later time comprises:

circuitry for retrieving data from the scene other than the particular image data at a time when no particular image data is requested.

296. The computationally-implemented system of clause 292, wherein said circuitry for retrieving data from the scene other than the particular image data at a later time comprises:

circuitry for retrieving data from the scene other than the particular image data at a time at which fewer users are requesting particular image data than for which the image sensor array has capacity.
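Clauses 292-296 describe delivering the requested pixels in near-real time while retrieving the remainder of the scene later, when bandwidth is available, usage is off-peak, or no requests are pending. The following sketch shows one plausible deferral queue; the class name and priority scheme are assumptions made for illustration.

```python
# Hypothetical sketch of clauses 292-296: queue the non-requested remainder
# of the scene and drain the queue only during idle capacity.

import heapq

class DeferredSceneRetriever:
    def __init__(self) -> None:
        self._backlog = []  # min-heap of (priority, scene region)

    def defer(self, region, priority: int = 10) -> None:
        heapq.heappush(self._backlog, (priority, region))

    def retrieve_when_idle(self, active_requests: int, capacity: int):
        """Drain deferred regions only while fewer users are requesting
        image data than the array has capacity for (clause 296)."""
        fetched = []
        while self._backlog and active_requests < capacity:
            _, region = heapq.heappop(self._backlog)
            fetched.append(region)   # stand-in for a fetch from the array
        return fetched

r = DeferredSceneRetriever()
r.defer(("scene-remainder", 0, 0, 4096, 2160))
print(r.retrieve_when_idle(active_requests=1, capacity=8))
```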

297. The computationally-implemented system of clause 217, wherein said circuitry for receiving only the particular image data from the image sensor array comprises:

circuitry for receiving only the particular image data that includes audio data from the sensor array.

298. The computationally-implemented system of clause 217, wherein said circuitry for receiving only the particular image data from the image sensor array comprises:

circuitry for receiving only the particular image data that was determined to contain a particular requested image object from the image sensor array.

299. The computationally-implemented system of clause 298, wherein said circuitry for receiving only the particular image data that was determined to contain a particular requested image object from the image sensor array comprises:

circuitry for receiving only the particular image data that was determined to contain a particular requested image object by the image sensor array.

300. The computationally-implemented system of clause 217, wherein said circuitry for transmitting the received particular image data to at least one requestor comprises:

circuitry for transmitting the received particular image data to a user device.

301. The computationally-implemented system of clause 300, wherein said circuitry for transmitting the received particular image data to a user device comprises:

circuitry for transmitting at least a portion of the received particular image data to a user device that requested the particular image data.

302. The computationally-implemented system of clause 217, wherein said circuitry for transmitting the received particular image data to at least one requestor comprises:

circuitry for separating the received particular image data into a set of one or more requested images; and

circuitry for transmitting at least one image of the set of one or more requested images to a particular requestor that requested the at least one image.

303. The computationally-implemented system of clause 302, wherein said circuitry for separating the received particular image data into a set of one or more requested images comprises:

circuitry for separating the received particular image data into a first requested image and a second requested image.

304. The computationally-implemented system of clause 217, wherein said circuitry for transmitting the received particular image data to at least one requestor comprises:

circuitry for transmitting the received particular image data to a user device that requested an image that is part of the received particular image data.

305. The computationally-implemented system of clause 217, wherein said circuitry for transmitting the received particular image data to at least one requestor comprises:

circuitry for transmitting a first portion of the received particular image data to a first requestor; and

circuitry for transmitting a second portion of the received particular image data to a second requestor.

306. The computationally-implemented system of clause 305, wherein said circuitry for transmitting a first portion of the received particular image data to a first requestor comprises:

circuitry for transmitting the first portion of the received particular image data to the first requestor that requested the first portion of the received particular image data.

307. The computationally-implemented system of clause 306, wherein said circuitry for transmitting the first portion of the received particular image data to the first requestor that requested the first portion of the received particular image data comprises:

circuitry for transmitting an image that contains a particular football player to a television device that is configured to display a football game and that requested the image that contains the particular football player.

308. The computationally-implemented system of clause 305, wherein said circuitry for transmitting a second portion of the received particular image data to a second requestor comprises:

circuitry for transmitting the second portion of the received particular image data to the second requestor that requested the second portion of the received particular image data.

309. The computationally-implemented system of clause 308, wherein said circuitry for transmitting the second portion of the received particular image data to the second requestor that requested the second portion of the received particular image data comprises:

circuitry for transmitting an image that contains a view of a motor vehicle to a tablet device that requested a street view image of a particular corner of a city.

310. The computationally-implemented system of clause 217, wherein said circuitry for transmitting the received particular image data to at least one requestor comprises:

circuitry for transmitting at least a portion of the received particular image data without alteration to at least one requestor.

311. The computationally-implemented system of clause 217, wherein said circuitry for transmitting the received particular image data to at least one requestor comprises:

circuitry for adding supplemental data to at least a portion of the received particular image data to generate transmission image data; and

circuitry for transmitting the generated transmission image data to at least one requestor.

312. The computationally-implemented system of clause 311, wherein said circuitry for adding supplemental data to at least a portion of the received particular image data to generate transmission image data comprises:

circuitry for adding advertisement data to at least a portion of the received particular image data to generate transmission image data.

313. The computationally-implemented system of clause 312, wherein said circuitry for adding advertisement data to at least a portion of the received particular image data to generate transmission image data comprises:

circuitry for adding context-based advertisement data that is at least partially based on the received particular image data to at least the portion of the received particular image data.

314. The computationally-implemented system of clause 313, wherein said circuitry for adding context-based advertisement data that is at least partially based on the received particular image data to at least the portion of the received particular image data comprises:

circuitry for adding animal rights donation fund advertisement data that is at least partially based on the received particular image data of a tiger at a jungle oasis, to the received particular image data of the tiger at the jungle oasis.

315. The computationally-implemented system of clause 311, wherein said circuitry for adding supplemental data to at least a portion of the received particular image data to generate transmission image data comprises:

circuitry for adding related visual data related to the received particular image data to at least a portion of the received particular image data to generate transmission image data.

316. The computationally-implemented system of clause 315, wherein said circuitry for adding related visual data related to the received particular image data to at least a portion of the received particular image data to generate transmission image data comprises:

circuitry for adding fantasy football statistical data to the received particular image data of a football quarterback that appears in the received particular image data of the quarterback in a football game to generate the transmission image data.
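Clauses 311-316 describe generating transmission image data by attaching supplemental data, such as context-based advertisements or related statistics, to the received particular image data. A minimal sketch follows under assumed names (TransmissionImageData, select_context_advertisement); the application does not prescribe any particular data layout.

```python
# Hypothetical sketch of clauses 311-316: attach supplemental data to the
# received particular image data before sending it to the requestor.

from dataclasses import dataclass
from typing import List

@dataclass
class TransmissionImageData:
    pixels: bytes
    overlays: List[str]   # supplemental items rendered by the user device

def select_context_advertisement(detected_objects: List[str]) -> str:
    """Crude stand-in for the context-based selection of clause 313."""
    if "tiger" in detected_objects:
        return "Animal rights donation fund"
    return "Generic advertisement"

def add_supplemental_data(particular_image_data: bytes,
                          detected_objects: List[str],
                          related_stats: List[str]) -> TransmissionImageData:
    overlays = [select_context_advertisement(detected_objects)] + related_stats
    return TransmissionImageData(pixels=particular_image_data,
                                 overlays=overlays)

# E.g., fantasy-football statistics attached to an image of a quarterback:
out = add_supplemental_data(b"<image bytes>", ["quarterback"],
                            ["QB passing yards: 312", "TD: 3"])
print(out.overlays)
```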

317. The computationally-implemented system of clause 217, wherein said circuitry for transmitting the received particular image data to at least one requestor comprises:

circuitry for modifying data of a portion of the received particular image data to generate transmission image data; and

circuitry for transmitting the generated transmission image data to at least one requestor.

318. The computationally-implemented system of clause 317, wherein said circuitry for modifying data of a portion of the received particular image data to generate transmission image data comprises:

circuitry for performing image manipulation modifications of the received particular image data to generate transmission image data.

319. The computationally-implemented system of clause 318, wherein said circuitry for performing image manipulation modifications of the received particular image data to generate transmission image data comprises:

circuitry for performing contrast balancing on the received particular image data to generate transmission image data.

320. The computationally-implemented system of clause 318, wherein said circuitry for performing image manipulation modifications of the received particular image data to generate transmission image data comprises:

circuitry for performing color modification balancing on the received particular image data to generate transmission image data.

321. The computationally-implemented system of clause 317, wherein said circuitry for modifying data of a portion of the received particular image data to generate transmission image data comprises:

circuitry for redacting at least a portion of the received particular image data to generate the transmission image data.

322. The computationally-implemented system of clause 321, wherein said circuitry for redacting at least a portion of the received particular image data to generate the transmission image data comprises:

circuitry for redacting at least a portion of the received particular image data based on a security clearance level of the requestor.

323. The computationally-implemented system of clause 322, wherein said circuitry for redacting at least a portion of the received particular image data based on a security clearance level of the requestor comprises:

circuitry for redacting a tank from the received particular image data that includes a satellite image that includes a military base, based on an insufficient security clearance level of a device that requested the particular image data.
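Clauses 321-323 describe redacting portions of the received particular image data based on the requestor's security clearance. The sketch below assumes rectangular restricted regions and integer clearance levels, both invented for illustration; the application does not specify a redaction mechanism.

```python
# Hypothetical sketch of clauses 321-323: zero out restricted regions
# (e.g., a tank on a military base) whose required clearance exceeds the
# clearance of the device that requested the particular image data.

from typing import List, Tuple

Region = Tuple[int, int, int, int]          # (x, y, width, height)

def redact(image: List[List[int]],
           restricted: List[Tuple[Region, int]],
           requestor_clearance: int) -> List[List[int]]:
    """Blank any restricted region whose required level exceeds the
    requestor's clearance; permitted regions pass through untouched."""
    for (x, y, w, h), required_level in restricted:
        if requestor_clearance < required_level:
            for row in image[y:y + h]:
                row[x:x + w] = [0] * w
    return image

img = [[255] * 8 for _ in range(8)]
redacted = redact(img, [((2, 2, 3, 3), 5)], requestor_clearance=1)
print(redacted[2])  # [255, 255, 0, 0, 0, 255, 255, 255]
```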

324. The computationally-implemented system of clause 217, wherein said circuitry for transmitting the received particular image data to at least one requestor comprises:

circuitry for transmitting a lower-resolution version of the received particular image data to the at least one requestor; and

circuitry for transmitting a full-resolution version of the received particular image data to the at least one requestor.

325. A computer program product, comprising:

a signal-bearing medium bearing:

one or more instructions for acquiring a request for particular image data that is part of a scene;

one or more instructions for transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data;

one or more instructions for receiving only the particular image data from the image sensor array; and

one or more instructions for transmitting the received particular image data to at least one requestor.

326. A computationally-implemented system defined by a computational language, comprising:

one or more interchained physical machines ordered for acquiring a request for particular image data that is part of a scene;

one or more interchained physical machines ordered for transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data;

one or more interchained physical machines ordered for receiving only the particular image data from the image sensor array; and

one or more interchained physical machines ordered for transmitting the received particular image data to at least one requestor.

STRUCTURED DISCLOSURE DIRECTED TOWARD ONE OF SKILL IN THE ART [END]

Figures Were Copied From a Pending United States Application into this PCT Application in View of Meeting Bar Date, So Each Figure Under this Roman Numeral Section Should Be Read "As If" Having Roman Numeral Denotation Sufficient to Distinguish from Other Figures Copied From Other Applications in View of Bar Date, Such Clerical Issues to Be Cured by Subsequent Amendment

Devices, Methods and Systems for Visual Imaging Arrays

DETAILED DESCRIPTION

High-Level System Architecture

[00151] Fig. 1, including Figs. 1-A through 1-AN, shows partial views that, when assembled, form a complete view of an entire system, of which at least a portion will be described in more detail. An overview of the entire system of Fig. 1 is now described herein, with a more specific reference to at least one subsystem of Fig. 1 to be described later with respect to Figs. 2-14D.

[00152] Fig. 1 shows various implementations of the overall system. At a high level, Fig. 1 shows various implementations of a multiple user video imaging array (hereinafter interchangeably referred to as a "MUVIA"). It is noted that the designation "MUVIA" is merely shorthand and descriptive of an exemplary embodiment, and not a limiting term. Although "multiple user" appears in the name MUVIA, multiple users or even a single user are not required. Further, "video" is used in the designation "MUVIA," but MUVIA systems also may capture still images, multiple images, audio data, electromagnetic waves outside the visible spectrum, and other data as will be described herein. Further, "imaging array" may be used in the MUVIA designation, but the image sensor in MUVIA is not necessarily an array or even multiple sensors (although commonly implemented as larger groups of image sensors, single-sensor implementations are also contemplated), and "array" here does not necessarily imply any specific structure, but rather any grouping of one or more sensors.

[00153] Generally, although not necessarily required, a MUVIA system may include one or more of a user device (e.g., hereinafter interchangeably referred to as a "client device," in recognition that a user may not necessarily be a human, living, or organic), a server, and an image sensor array. A "server" in the context of this application may refer to any device, program, or module that is not directly connected to the image sensor array or to the client device, including any and all "cloud" storage, applications, and/or processing.

[00154] For example, in an embodiment, e.g., as shown in Fig. 1-A, Fig. 1-K, Fig. 1-U, Fig. 1-AE, and Fig. 1-AF, the system may include one or more of image sensor array 3200, array local storage and processing module 3300, server 4000, and user device 5200. Each of these portions will be discussed in more detail herein.
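As a non-limiting illustration of the dataflow among these four parts, the following Python sketch models the request/response path between them; the class and method names are assumptions made for illustration, not the application's interfaces.

```python
# Illustrative-only sketch of the high-level dataflow of [00154]:
# user device 5200 -> server 4000 -> image sensor array 3200 (with its
# local storage and processing module 3300), and back.

class ImageSensorArray:                      # 3200 + 3300
    def capture_scene(self) -> dict:
        return {"scene": "full pixel data"}  # far larger than any request

    def keep_only(self, scene: dict, region) -> bytes:
        # Only the requested pixels leave the array; the rest is discarded
        # or held in local storage (module 3300).
        return b"pixels-for-" + repr(region).encode()

class Server:                                # 4000
    def __init__(self, array: ImageSensorArray) -> None:
        self.array = array

    def handle_request(self, region) -> bytes:
        scene = self.array.capture_scene()
        return self.array.keep_only(scene, region)

class UserDevice:                            # 5200
    def request(self, server: Server, region) -> bytes:
        return server.handle_request(region)

device = UserDevice()
print(device.request(Server(ImageSensorArray()), (0, 0, 640, 480)))
```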

[00155] Referring now to Fig. 1-A, Fig. 1-A depicts user device 5200, which is a device that may be operated or controlled by a user of a MUVIA system. It is noted here that "user" is merely provided as a designation for ease of understanding, and does not imply control by a human or other organism, sentient or otherwise. In an embodiment, for example, in a security-type embodiment, the user device 5200 may be mostly or completely unmonitored, or may be monitored by an artificial intelligence, or by a combination of artificial intelligence, pseudo-artificial intelligence (e.g., intelligence amplification), and human intelligence.

[00156] User device 5200 may be, but is not limited to, a wearable device (e.g., glasses, goggles, headgear, a watch, clothing), an implant (e.g., a retinal-implant display), a computer of any kind (e.g., a laptop computer, desktop computer, mainframe, server, etc.), a tablet or other portable device, a phone or other similar device (e.g., smartphone, personal digital assistant), a personal electronic device (e.g., music player, CD player), a home appliance (e.g., a television, a refrigerator, or any other so-called "smart" device), a piece of office equipment (e.g., a copier, scanner, fax device, etc.), a camera or other camera-like device, a video game system, an entertainment/media center, or any other electrical equipment that has a functionality of presenting an image, whether visually, e.g., on a screen, or by other sensory-stimulating means.

[00157] User device 5200 may be capable of presenting an image, which, for purposes of clarity and conciseness, will be referred to as displaying an image, although communication through forms other than generating light waves in the visible light spectrum is also contemplated, and the image is not required to be presented at all times or even at all. For example, in an embodiment, user device 5200 may receive images from server 4000 (or directly from the image sensor array 3200, as will be discussed herein), and may store the images for later viewing, or for processing internally, or for any other reason.

[00158] Referring again to Fig. 1-A, in an embodiment, user device 5200 may include a user selection accepting module 5210. User selection accepting module 5210 may be configured to receive user input about what the user wants to see. For example, as shown in Fig. 1-A in the exemplary interface 5212, the user selection accepting module 5210 may show an image from image sensor array 3200, and the user may "pan" and "zoom" the image using any known interface, including, but not limited to, keyboard, mouse, touch, haptic, augmented reality interface, voice command, and nonverbal motion commands (e.g., as part of a video game system interface, e.g., the Microsoft Kinect). It is noted, and as will be discussed in more detail herein, that the camera itself is not "zooming" or "panning," because the camera does not move. What is happening is that different pixels that are captured by the image sensor array 3200 are kept by the image sensor array 3200 and transmitted to the server 4000.
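As a non-limiting illustration of this pixel-selection behavior, the sketch below maps an assumed pan/zoom viewport onto the scene region to be kept and transmitted; viewport_to_region and its parameters are hypothetical, not part of the application.

```python
# Hypothetical sketch of [00158]: "panning" and "zooming" select which
# captured pixels are kept and transmitted; the camera itself never moves.

def viewport_to_region(scene_w: int, scene_h: int,
                       center_x: float, center_y: float,
                       zoom: float, out_w: int, out_h: int):
    """Map a pan/zoom viewport to the (x, y, w, h) pixel region of the
    scene that the image sensor array should keep and transmit."""
    w = min(scene_w, int(out_w / zoom))
    h = min(scene_h, int(out_h / zoom))
    x = max(0, min(scene_w - w, int(center_x - w / 2)))
    y = max(0, min(scene_h - h, int(center_y - h / 2)))
    return x, y, w, h

# Zooming in 2x around (10000, 5000) on a gigapixel-class scene keeps only
# a 960x540 region for a 1920x1080 display:
print(viewport_to_region(50000, 20000, 10000, 5000, 2.0, 1920, 1080))
# -> (9520, 4730, 960, 540)
```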

[00159] In an embodiment, the user selection accepting module may accept a selection of a particular thing, e.g., a building, an animal, or any other object whose representation is present on the screen. Moreover, a user may use a text box to "search" the image for a particular thing, and processing, done at the user device 5200 or at the server 4000, may determine the image and the zoom level for viewing that thing. The search for a particular thing may be a generic search, e.g., "search for people," or "search for penguins," or a more specific search, e.g., "search for the Space Needle," or "search for the White House." The search input may take any known contextual form, e.g., an address, a text string, or any other input.

[00160] In an embodiment, the "user selection" facilitated by the user selection accepting module 5210 may not involve a user at all. For example, in an embodiment, e.g., in a security embodiment, the user selection may be handled completely by machine, and may include "select any portion of the image with movement," or "select any portion of the image in which a person is recognized," or "select any portion of the image in which a particular person, e.g., a person on the FBI most wanted list" is recognized.

[00161] Referring again to Fig. 1-A, in an embodiment, user device 5200 may include a user selection transmitting module 5220. The user selection transmitting module 5220 may take the user selection from user selection accepting module 5210, and transmit the selection to the server 4000. The transmission may include some pre-processing, for example, the user device 5200 may determine the size and parameters of the image prior to sending the request to the server 4000, or that processing may be handled by the server 4000. Following the thick-line arrow leftward from user selection transmitting module 5220 through to Fig. 1-K, the transmission goes to server 4000, as will be discussed herein. It is noted that the transmission to the server 4000 may also include data about the user device, for example, the screen resolution, the window size, the type of device, an identity of the user, a level of service the user has paid for (in embodiments in which such services are prioritized by the camera/server), other capabilities of the device, e.g., framerate, and the like.
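
To make the shape of such a transmission concrete, the following Python sketch shows one hypothetical way a user selection and device metadata might be packaged into a single request. All field and function names here are invented for illustration and do not appear in the figures.

    import json

    def build_image_request(x, y, width, height, device_info):
        # Package the selected region together with device metadata so the
        # server can validate and prioritize the request (hypothetical format).
        return json.dumps({
            "region": {"x": x, "y": y, "width": width, "height": height},
            "device": device_info,
        })

    payload = build_image_request(
        x=1024, y=768, width=1920, height=1080,
        device_info={"screen_resolution": [1920, 1080],
                     "device_type": "tablet",
                     "user_id": "user-123",
                     "service_tier": "premium",
                     "max_framerate": 30})
    print(payload)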

[00162] Referring again to Fig. 1-A, Fig. 1-A also includes a selected image receiving module 5230 and a user selection presenting module 5240, which will be discussed in more detail herein, with respect to the dataflow of this embodiment.

[00163] Referring now to Fig. 1-K, Figs. 1-K and 1-U show an embodiment of a server 4000 that communicates with one or both of user device 5200 and array local storage and processing module 3300. Server 4000 may be a single computing device, or may be many computing devices, which may or may not be in proximity with each other.

[00164] Referring again to Fig. 1-K, server 4000 may include a user request reception module 4010. The user request reception module 4010 may receive the transmitted request from user selection transmitting module 5220. The user request reception module 4010 may then turn over processing to user request validation module 4020, which may perform, among other things, a check to make sure the user is not requesting more resolution than what their device can handle. For example, if the server has learned (e.g., through gathered information, or through information that was transmitted with the user request or in a same session as the user request) that the user is requesting a 1920x1080 resolution image, and the maximum resolution for the device is 1334x750, then the request will be modified so that no more than the maximum resolution that can be handled by the device is requested. In an embodiment, this may conserve the bandwidth required to transmit from the MUVIA to the server 4000 and/or the user device 5200.
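
As an illustration of the validation step described above, the following Python sketch (with invented names; nothing here is module 4020's actual implementation) scales an over-large request down to the device's maximum resolution while preserving the aspect ratio.

    def clamp_request_to_device(req_w, req_h, max_w, max_h):
        # Never request more pixels than the device can display; scale the
        # request down uniformly so the aspect ratio is preserved.
        scale = min(max_w / req_w, max_h / req_h, 1.0)
        return int(req_w * scale), int(req_h * scale)

    # A 1920x1080 request from a device whose maximum is 1334x750 is reduced
    # before being forwarded toward the image sensor array.
    print(clamp_request_to_device(1920, 1080, 1334, 750))  # -> (1333, 750)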

[00165] Referring again to Fig. 1-K, in an embodiment, server 4000 may include a user request latency management module 4030. User request latency management module 4030 may, in conjunction with user device 5200, attempt to reduce the latency from the time a specific image is requested by user device 5200 to the time the request is acted upon and data is transmitted to the user. The details for this latency management will be described in more detail herein, with varying techniques that may be carried out by any or all of the devices in the chain (e.g., user device, camera array, and server). As an example, in an embodiment, a lower resolution version of the image, e.g., one that is stored locally or on the server, may be sent to the user immediately upon the request, and then that image is updated with the actual image taken by the camera. In an embodiment, user request latency management module 4030 also may handle static gap-filling, that is, if the image captured by the camera is unchanging, e.g., has not changed for a particular period of time, then a new image need not be captured, and an older image, which may be stored on server 4000, may be transmitted to the user device 5200. This process also will be discussed in more detail herein.
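
The following Python sketch illustrates, under assumed interfaces (the cache, camera_fetch, and changed callables are hypothetical), the two latency techniques just described: answering immediately from a cached lower-resolution copy, and skipping a fresh capture when the scene is static.

    def serve_with_low_latency(region, cache, camera_fetch, changed):
        # Yield a cached (possibly stale, lower-resolution) image at once;
        # if the scene has not changed, the cached copy is already current
        # (static gap-filling); otherwise follow up with a fresh capture.
        cached = cache.get(region)
        if cached is not None:
            yield cached
            if not changed(region):
                return
        fresh = camera_fetch(region)
        cache[region] = fresh
        yield fresh

    cache = {"stadium": "low-res placeholder"}
    for frame in serve_with_low_latency("stadium", cache,
                                        camera_fetch=lambda r: "fresh pixels",
                                        changed=lambda r: True):
        print(frame)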

[00166] Referring now to Fig. 1-U, which shows more of server 4000, in an embodiment, server 4000 may include a consolidated user request transmission module 4040, which may be configured to consolidate all the user requests, perform any necessary pre-processing on those requests, and send the request for particular sets of pixels to the array local storage and processing module 3300. The process for consolidating the user requests and performing pre-processing will be described in more detail herein with respect to some of the other exemplary embodiments. In this embodiment, however, consolidated user request transmission module 4040 transmits the request (exiting leftward from Fig. 1-U and traveling downward to Fig. 1-AE, through a pathway identified in Fig. 1-AE as lower-bandwidth communication from remote server 3515). It is noted here that "lower bandwidth communication" does not necessarily mean "low bandwidth" or imply any specific number about the bandwidth; it is simply lower than the relatively higher bandwidth communication 3505 from the actual image sensor array 3200 to the array local storage and processing module 3300, which will be discussed in more detail herein.
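
As one simplified illustration of request consolidation, the sketch below merges several users' rectangular selections into a single bounding region before transmission. A real implementation might instead keep a set of disjoint regions, so this is an assumption for illustration, not the method of module 4040.

    def consolidate_requests(rects):
        # Merge per-user rectangles (x1, y1, x2, y2) into one bounding region
        # so that pixels shared by several users are requested only once.
        return (min(r[0] for r in rects), min(r[1] for r in rects),
                max(r[2] for r in rects), max(r[3] for r in rects))

    # Three overlapping selections collapse into a single request region.
    print(consolidate_requests([(0, 0, 800, 600),
                                (700, 100, 1500, 700),
                                (400, 300, 900, 900)]))  # -> (0, 0, 1500, 900)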

[00167] Referring again to Fig. 1-U, server 4000 also may include requested pixel reception module 4050, user request preparation module 4060, and user request transmission module 4070 (shown in Fig. 1-T), which will be discussed in more detail herein, with respect to the dataflow of this embodiment.

[00168] Referring now to Figs. 1-AE and 1-AF, Figs. 1-AE and 1-AF show an image sensor array ("ISA") 3200 and an array local storage and processing module 3300, each of which will now be described in more detail.

[00169] Image sensor array 3200 may include one or more image sensors that may, in an embodiment, be statically pointed at a particular object or scene. Image sensor array 3200 may be a single image sensor, or more commonly, may be a group of individual image sensors 3201 that are combined to create a larger field of view. For example, in an embodiment, a ten-megapixel sensor may be used for each individual image sensor 3201. With twelve of these sensors, the effective field of view, loss-less zoom, and so forth may be increased substantially. These numbers are for example only, and any number of sensors and/or megapixel image sensor capacities may be used.
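
The scale of such an array can be made concrete with a little arithmetic. The three-bytes-per-pixel figure below is an assumption for uncompressed 24-bit color, not a number taken from this disclosure.

    sensors = 12
    pixels_per_sensor = 10_000_000         # twelve ten-megapixel sensors
    total_pixels = sensors * pixels_per_sensor
    print(total_pixels)                    # 120,000,000 pixels per exposure

    bytes_per_exposure = total_pixels * 3  # assuming 24-bit color
    print(bytes_per_exposure / 1e6, "MB")  # ~360 MB per uncompressed exposure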

[00170] The use of many individual sensors may create a very large number of pixels captured for each exposure of the image sensor array 3200. Thus, these pixels are transmitted via a higher bandwidth communication 3505 to the array local storage and processing module 3300. In an embodiment, array local storage and processing module 3300 is integrated into the image sensor array 3200. In another embodiment, the array local storage and processing module 3300 is separate from, but directly connected (e.g., via a USB 3.0 cable) to, the image sensor array 3200. It is noted that "higher bandwidth communication 3505" does not require a specific amount of bandwidth, but only that the bandwidth for this communication is relatively higher than the bandwidth of the communication from the array local processing module 3300 to the remote server, which may be, but is not required to be, located further away temporally.

[00171] It is noted that, because of the large number of pixels captured by image sensor array 3200, mechanical changes to the image sensor array are not generally required, although such mechanical changes are not excluded from these embodiments. For example, because the array has a very large field of view, with very high resolution, "pan" and "zoom" functions may be handled optically, rather than by mechanically changing the focal point of the lenses or by physically pointing the array at a different location. This may reduce the complexity required of the device, and also may improve the speed at which different views may be generated by the image sensor array 3200.

[00172] Referring again to Fig. 1-AE, the image sensor array 3200 may capture an image that is received by image capturing module 3305. Image capturing module 3305 may take the captured image and compare it to a consolidated user request, e.g., which is provided by a consolidated user request reception module 3310. Consolidated user request reception module 3310 may receive the communication from server 4000 regarding which pixels of the image have been requested. Through use of the consolidated user request and the captured image, pixel selection module 3320 may select the pixels that have been specifically requested by the user, and mark those pixels for transmission back to the server.

[00173] After the pixels to be kept are identified, the other pixels that are not to be kept are removed, e.g., decimated at unused pixel decimation module 3330. In an embodiment, these pixels are simply discarded, e.g., not stored in a long-term memory, but rather routed to a digital trash 3317. In another embodiment, some or all of these pixels are stored in a local memory, e.g., local memory 3315. From here, these pixels may be transmitted to various locations at off-peak times, may be kept for image processing by the array local processing module 3300, or may be subject to other manipulations or processing separate from the user requests.
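
A minimal sketch of the selection-and-decimation step, using NumPy and invented names, is shown below: only the requested rectangle is copied out for transmission, and the full frame can then be released or spooled to local memory.

    import numpy as np

    def select_and_decimate(captured, keep_region):
        # Copy out only the requested pixels; the rest of the frame can then
        # be discarded (digital trash) or retained in local memory.
        x1, y1, x2, y2 = keep_region
        return captured[y1:y2, x1:x2].copy()

    frame = np.zeros((1080, 1920, 3), dtype=np.uint8)  # one captured exposure
    selected = select_and_decimate(frame, (600, 200, 1000, 500))
    print(selected.shape)  # (300, 400, 3): only these pixels leave the array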

[00174] Referring again to Fig. 1-AE, the selected pixels then may be transmitted to the server 4000 using selected pixel transmission module 3340. Selected pixel transmission module 3340 may include any transmission equipment necessary, e.g., cellular radio, wireless adapter, and the like, depending on the format of communication. In an embodiment, only those pixels which have been requested are transmitted to the server via lower-bandwidth communication 3510. Similarly to lower-bandwidth communication 3515, the lower-bandwidth communication 3510 does not refer to a specific amount of bandwidth, just that the amount of bandwidth is relatively lower than higher-bandwidth communication 3505.

[00175] It is noted that more pixels than what are specifically requested by the user may be transmitted, in certain embodiments. For example, the array local processing module 3300 may send pixels that border the user's requested area, but are outside the user's requested area. In an embodiment, as will be discussed herein, those pixels may be sent at a different resolution or using a different kind of compression. In another embodiment, the additional pixels may merely be sent the same as the requested pixels. In still another embodiment, server 4000 may expand the user requested areas, so that array local processing module 3300 may send only the requested pixels, but the requested pixels cover more area than what the user originally requested. These additional pixels may be transmitted and "cached" by the server or local device, which may be used to decrease latency times, in a process that will be discussed more herein.
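
One way the server might expand a requested area for caching purposes is sketched below (hypothetical names; the margin value is arbitrary): the request grows by a border, clipped to the scene boundary.

    def expand_region(region, margin, scene_w, scene_h):
        # Grow the requested rectangle by a border so the extra pixels can be
        # cached against small pans near the original request.
        x1, y1, x2, y2 = region
        return (max(0, x1 - margin), max(0, y1 - margin),
                min(scene_w, x2 + margin), min(scene_h, y2 + margin))

    print(expand_region((600, 200, 1000, 500), margin=64,
                        scene_w=16000, scene_h=9000))  # -> (536, 136, 1064, 564)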

[00176] Referring back to Fig. 1-U, the transmitted pixels transmitted from selected pixel transmission module 3340 of array local processing module 3300 may be received by server 4000, e.g., at requested pixel reception module 4050. Requested pixel reception module 4050 may receive the requested pixels and turn them over to user request preparation module 4060, which may "unpack" the requested pixels, e.g., determining which pixels go to which user, and at what resolutions, along with any postprocessing, including image adjustment, adding in missing cached data, or adding additional data to the images (e.g., advertisements or other data). In an embodiment, server 4000 also may include a user request transmission module 4070, which may be configured to transmit the requested pixels back to the user device 5200.

[00177] Referring again to Fig. 1-A, user device 5200 may include a selected image receiving module 5230, which may receive the pixels that were sent by the server 4000, and user selection presenting module 5240, which may display the requested pixels to the user, e.g., by showing them on a screen of the device. In an embodiment, the display of the image may be carried out through the exemplary interface, which allows a cycle of user requests and new images to be shown as the user navigates through what is seen on the MUVIA, e.g., as shown in Fig. 1-A.

[00178] Figs. 1-B, 1-C, 1-M, 1-W, 1-AG, and 1-AH show another embodiment of the MUVIA system, in which multiple user devices 5510, 5520, and 5530 may request images captured by the same image sensor array 3200.

[00179] Referring now to Figs. 1-B and 1-C, user device 5510, user device 5520, and user device 5530 are shown. In an embodiment, user devices 5510, 5520, and 5530 may have some or all of the same components as user device 5200, but those components are not shown here for clarity and ease of understanding the drawing. For each of user devices 5510, 5520, and 5530, exemplary screen resolutions have been chosen. There is nothing significant about the specific numbers that have been chosen; they are merely illustrative, and any other numbers could have been chosen in their place.

[00180] For example, in an embodiment, referring to Fig. 1-B, user device 5510 may have a screen resolution of 1920x1080 (e.g., colloquially referred to as "HD quality"). User device 5510 may send an image request to the server 4000, and may also send data regarding the screen resolution of the device.

[00181] Referring now to Fig. 1-C, user device 5520 may have a screen resolution of 1334x750. User device 5520 may send another image request to the server 4000, and, in an embodiment, instead of sending data regarding the screen resolution of the device, may send data that identifies what kind of device it is (e.g., an Apple-branded smartphone). Server 4000 may use this data to determine the screen resolution for user device 5520 through an internal database, or through contacting an external source, e.g., a manufacturer of the device or a third party supplier of data about devices.
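
A toy version of that lookup is sketched below; the table contents and function names are fabricated for illustration, standing in for an internal database or an external query to the manufacturer.

    # Hypothetical device-type table; a production system might query the
    # manufacturer or a third-party device database instead.
    KNOWN_DEVICES = {
        "apple-smartphone-model-x": (1334, 750),
        "hd-tablet": (1920, 1080),
        "legacy-handset": (640, 480),
    }

    def resolve_screen_resolution(request):
        # Prefer an explicit resolution; fall back to the device-type table;
        # finally assume a conservative default when nothing is known.
        if "screen_resolution" in request:
            return tuple(request["screen_resolution"])
        if request.get("device_type") in KNOWN_DEVICES:
            return KNOWN_DEVICES[request["device_type"]]
        return (640, 480)

    print(resolve_screen_resolution({"device_type": "apple-smartphone-model-x"}))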

[00182] Referring again to Fig. 1-C, user device 5530 may have a screen resolution of 640x480, and may send the request by itself to the server 4000, without any additional data. In addition, server 4000 may receive independent requests from various users to change their current viewing area on the device.

[00183] Referring now to Fig. 1-M, server 4000 may include user request reception module 4110. User request reception module 4110 may receive requests from multiple user devices, e.g., user devices 5510, 5520, and 5530. Server 4000 also may include an independent user view change request reception module 4115, which, in an embodiment, may be a part of user request reception module 4110, and may be configured to receive requests from users that are already connected to the system, to change the view of what they are currently seeing.

[00184] Referring again to Fig. 1-M, server 4000 may include relevant pixel selection module 4120 configured to combine the user selections into a single area, as shown in Fig. 1-M. It is noted that, in an embodiment, the different user devices may request areas that overlap each other. In this case, there may be one or more overlapping areas, e.g., overlapping areas 4122. In an embodiment, the overlapping areas are only transmitted once, in order to save data/transmission costs and increase efficiency.

[00185] Referring now to Fig. 1-W, server 4000 may include selected pixel transmission to ISA module 4130. Module 4130 may take the relevant selected pixels, and transmit them to the array local processing module 3400 of image sensor array 3200. Selected pixel transmission to ISA module 4130 may include communication components, which may be shared with other transmission and/or reception modules.

[00186] Referring now to Fig. 1-AG, array local processing module 3400 may communicate with image sensor array 3200. Similarly to Figs. 1-AE and 1-AF, Figs. 1-AG and 1-AH show array local processing module 3400 and image sensor array 3200, respectively.

[00187] Image sensor array 3200 may include one or more image sensors that may, in an embodiment, be statically pointed at a particular object or scene. Image sensor array 3200 may be a single image sensor, or more commonly, may be a group of individual image sensors 3201 that are combined to create a larger field of view. For example, in an embodiment, a ten-megapixel sensor may be used for each individual image sensor 3201. With twelve of these sensors, the effective field of view, loss-less zoom, and so forth may be increased substantially. These numbers are for example only, and any number of sensors and/or megapixel image sensor capacities may be used.

[00188] The use of many individual sensors may create a very large number of pixels captured for each exposure of the image sensor array 3200. Thus, these pixels are transmitted via a higher bandwidth communication 3505 to the array local storage and processing module 3400. In an embodiment, array local storage and processing module 3400 is integrated into the image sensor array. In another embodiment, the array local storage and processing module 3400 is separate from, but directly connected (e.g., via a USB 3.0 cable) to, the image sensor array 3200. It is noted that "higher bandwidth communication 3505" does not require a specific amount of bandwidth, but only that the bandwidth for this communication is relatively higher than the bandwidth of the communication from the array local processing module 3400 to the remote server, which may be, but is not required to be, located further away temporally.

[00189] It is noted that, because of the large number of pixels captured by image sensor array 3200, mechanical changes to the image sensor array are not generally required, although such mechanical changes are not excluded from these embodiments. For example, because the array has a very large field of view, with very high resolution, "pan" and "zoom" functions may be handled optically, rather than by mechanically changing the focal point of the lenses or by physically pointing the array at a different location. This may reduce the complexity required of the device, and also may improve the speed at which different views may be generated by the image sensor array 3200.

[00190] Referring again to Fig. 1-AG, the image sensor array 3200 may capture an image and compare it to a consolidated user request, e.g., which is provided by a consolidated user request reception module 3410. Consolidated user request reception module 3410 may receive the communication from server 4000 regarding which pixels of the image have been requested. Through use of the consolidated user request and the captured image, pixel selection module 3420 may select the pixels that have been specifically requested by the user, and mark those pixels for transmission back to the server.

[00191] After the pixels to be kept are identified, the other pixels that are not to be kept are removed, e.g., decimated at unused pixel decimation module 3430. In an embodiment, these pixels are simply discarded, e.g., not stored in a long-term memory, but rather routed to a digital trash 3417. In another embodiment, some or all of these pixels are stored in a local memory, e.g., local memory 3415. From here, these pixels may be transmitted to various locations at off-peak times, may be kept for image processing by the array local processing module 3400, or may be subject to other manipulations or processing separate from the user requests.

[00192] Referring again to Fig. 1-AG, array local processing module 3400 may include flagged selected pixel transmission module 3440, which takes the pixels identified as requested (e.g., "flagged") and transmits them back to the server 4000 for further processing. Similarly to as previously described, this transmission may utilize a lower-bandwidth channel, and module 3440 may include all necessary hardware to effect that lower-bandwidth transmission to server 4000.

[00193] Referring again to Fig. 1-W, the flagged selected pixel transmission module 3440 of array local processing module 3400 may transmit the flagged pixels to server 4000. Specifically, flagged selected pixel transmission module 3440 may transmit the pixels to flagged selected pixel reception from ISA module 4140 of server 4000, as shown in Fig. 1-W.

[00194] Referring again to Fig. 1-W, server 4000 also may include flagged selected pixel separation and duplication module 4150, which may, effectively, reverse the process of combining the pixels from the various selections, duplicating overlapping areas where necessary, and creating the requested images for each of the user devices that requested images. Flagged selected pixel separation and duplication module 4150 also may include the post-processing done to the image, including filling in cached versions of images, image adjustments based on the device preferences and/or the user preferences, and any other image post-processing.
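
A simplified sketch of that reverse operation follows, assuming NumPy arrays and rectangle coordinates in scene space (all names invented): each user's rectangle is cut back out of the combined block, and overlapping pixels are duplicated into every view that needs them.

    import numpy as np

    def separate_and_duplicate(combined, origin, user_regions):
        # Cut each user's rectangle out of the combined pixel block that was
        # transmitted once; overlapping areas are duplicated per user.
        ox, oy = origin
        return {user: combined[y1 - oy:y2 - oy, x1 - ox:x2 - ox].copy()
                for user, (x1, y1, x2, y2) in user_regions.items()}

    block = np.zeros((900, 1500, 3), dtype=np.uint8)  # pixels received once
    views = separate_and_duplicate(block, (0, 0),
                                   {"5510": (0, 0, 800, 600),
                                    "5520": (700, 100, 1500, 700)})
    print({user: v.shape for user, v in views.items()})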

[00195] Referring now to Fig. 1-M (as data flows "northward" from Fig. 1-W from module 4150), server 4000 may include pixel transmission to user device module 4160, which may be configured to transmit the pixels that have been separated out and processed to the specific users that requested the image. Pixel transmission to user device module 4160 may handle the transmission of images to the user devices 5510, 5520, and 5530. In an embodiment, pixel transmission to user device module 4160 may have some or all components in common with user request reception module 4110.

[00196] Following the arrow of data flow to the right and upward from module 4160 of server 4000, the requested user images arrive at user device 5510, user device 5520, and user device 5530, as shown in Figs. 1-B and 1-C. The user devices 5510, 5520, and 5530 may present the received images as previously discussed and/or as further discussed herein.

[00197] Referring again to Fig. 1, Figs. 1-E, 1-O, 1-Y, 1-AH, and 1-AI depict a MUVIA implementation according to an embodiment. In an embodiment, referring now to Fig. 1-E, a user device 5600 may include a target selection reception module 5610. Target selection reception module 5610 may be a component that allows the user to select a "target" from the image, that is, a point of interest from the image. For example, in the shown example, the MUVIA array is pointed at a football stadium, e.g., CenturyLink Field. As an example, a user may select one of the football players visible on the field as a "target." This may be facilitated by a target presentation module, e.g., target presentation module 5612, which may present one or more images (e.g., which may be various versions of images from MUVIA, at different resolutions or not up-to-date) from which the user may select the target, e.g., the football player.

[00198] In an embodiment, target selection reception module 5610 may include an audible target selection module 5614 which may be configured to allow the user to select a target using audible commands, without requiring physical interaction with a device.

[00199] Referring again to Fig. 1, e.g., Fig. 1-E, in an embodiment, user device 5600 may include selected target transmission module 5620. Selected target transmission module 5620 may be configured to take the target selected by the user, and transmit the selected target to the server 4000.

[00200] Referring now to Fig. 1-O, Fig. 1-O (and Fig. 1-Y to the direct "south" of Fig. 1-O) shows an embodiment of server 4000. For example, in an embodiment, server 4000 may include a selected target reception module 4210. In an embodiment, selected target reception module 4210 of server 4000 may receive the selected target from the user device 5600. The selected target data may take various formats, e.g., it may be image data, it may be metadata that identifies the selected target, it may be some other designation, e.g., an ID number, a tracking number, or a piece of information, like a license plate or a social security number. The selected target data may be an address or a physical description, or any other instantiation of data that can be used to identify something.

[00201] Referring again to Fig. 1-O, in an embodiment, server 4000 may include selected target identification module 4220, which may be configured to take the target data received by selected target reception module 4210 and determine an image that needs to be captured in order to obtain an image that contains the selected target (e.g., in the shown example, the football player). In an embodiment, selected target identification module 4220 may use images previously received (or, in an embodiment, current images) from the image sensor array 3200 to determine the parameters of an image that contains the selected target. For example, in an embodiment, lower-resolution images from the image sensor array 3200 may be transmitted to server 4000 for determining where the target is located within the image, and then specific requests for portions of the image may be transmitted to the image sensor array 3200, as will be discussed herein.

[00202] In an embodiment, server 4000 may perform processing on the selected target data, and/or on image data that is received, in order to create a request that is to be transmitted to the image sensor array 3200. In the given example, the selected target data identifies a football player. The server 4000, that is, selected target identification module 4220, may perform image recognition on one or more images captured from the image sensor array to determine a "sector" of the entire scene that contains the selected target. In another embodiment, the selected target identification module 4220 may use other, external sources of data to determine where the target is. In yet another embodiment, the selected target data was selected by the user from the scene displayed by the image sensor array, so such processing may not be necessary.

[00203] Referring again to Fig. 1-O, in an embodiment, server 4000 may include pixel information selection module 4230, which may select the pixels needed to capture the target, and which may determine the size of the image that should be transmitted from the image sensor array. The size of the image may be determined based on a type of target that is selected, one or more parameters (set by the user, by the device, or by the server, which may or may not be based on the selected target), by the screen resolution of the device, or by any other algorithm. Pixel information selection module 4230 may determine the pixels to be captured in order to express the target, and may update based on changes in the target's status (e.g., if the target is moving, e.g., in the football example, once a play has started and the football player is moving in a certain direction).
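
For illustration, the sketch below computes a capture rectangle around a tracked target. The per-type window sizes are invented, and a real module 4230 might use entirely different criteria.

    def region_for_target(center, target_type, screen_resolution):
        # Size the capture window by target type, capped by what the device
        # can display; recompute as the target's position updates.
        window = {"person": (400, 600), "vehicle": (800, 500)}.get(
            target_type, (600, 600))
        w = min(window[0], screen_resolution[0])
        h = min(window[1], screen_resolution[1])
        cx, cy = center
        return (cx - w // 2, cy - h // 2, cx + w // 2, cy + h // 2)

    # As the football player moves, the rectangle is recomputed each update.
    print(region_for_target((5200, 3100), "person", (1334, 750)))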

[00204] Referring now to Fig. 1-Y, Fig. 1-Y includes more of server 4000 according to an embodiment. In an embodiment, server 4000 may include pixel information transmission to ISA module 4240. Pixel information transmission to ISA module 4240 may transmit the selected pixels to the array local processing module 3500 associated with image sensor array 3200.

[00205] Referring now to Figs. 1-AH and 1-AI, Fig. 1-AH depicts an image sensor array 3200, which in this example is pointed at a football stadium, e.g., CenturyLink Field. Image sensor array 3200 may include one or more image sensors that may, in an embodiment, be statically pointed at a particular object or scene. Image sensor array 3200 may be a single image sensor, or more commonly, may be a group of individual image sensors 3201 that are combined to create a larger field of view. For example, in an embodiment, a ten-megapixel sensor may be used for each individual image sensor 3201. With twelve of these sensors, the effective field of view, loss-less zoom, and so forth may be increased substantially. These numbers are for example only, and any number of sensors and/or megapixel image sensor capacities may be used.

[00206] The use of many individual sensors may create a very large number of pixels captured for each exposure of the image sensor array 3200. Thus, these pixels are transmitted via a higher bandwidth communication 3505 to the array local storage and processing module 3500. In an embodiment, array local storage and processing module 3500 is integrated into the image sensor array. In another embodiment, the array local storage and processing module 3500 is separate from, but directly connected (e.g., via a USB 3.0 cable) to, the image sensor array 3200. It is noted that "higher bandwidth communication 3505" does not require a specific amount of bandwidth, but only that the bandwidth for this communication is relatively higher than the bandwidth of the communication from the array local processing module 3500 to the remote server, which may be, but is not required to be, located further away temporally.

[00207] It is noted that, because of the large number of pixels captured by image sensor array 3200, mechanical changes to the image sensor array are not generally required, although such mechanical changes are not excluded from these embodiments. For example, because the array has a very large field of view, with very high resolution, "pan" and "zoom" functions may be handled optically, rather than by mechanically changing the focal point of the lenses or by physically pointing the array at a different location. This may reduce the complexity required of the device, and also may improve the speed at which different views may be generated by the image sensor array 3200.

[00208] Referring again to Fig. 1-AE, the image sensor array 3200 may capture an image that is received by image capturing module 3305. Image capturing module 3305 may take the captured image and compare it to a consolidated user request, e.g., which is provided by a consolidated user request reception module 3510. Consolidated user request reception module 3510 may receive the communication from server 4000 regarding which pixels of the image have been requested. Through use of the consolidated user request and the captured image, pixel selection module 3320 may select the pixels that have been specifically requested by the user, and mark those pixels for transmission back to the server.

[00209] After the pixels to be kept are identified, the other pixels that are not to be kept are removed, e.g., decimated at unused pixel decimation module 3330. In an embodiment, these pixels are simply discarded, e.g., not stored in a long-term memory, but rather routed to a digital trash 3317. In another embodiment, some or all of these pixels are stored in a local memory, e.g., local memory 3315. From here, these pixels may be transmitted to various locations at off-peak times, may be kept for image processing by the array local processing module 3300, or may be subject to other manipulations or processing separate from the user requests. In an embodiment, unused pixel decimation module 3330 may include or communicate with a lower resolution module 3314, which may, in some embodiments, be used to transmit a lower-resolution version of more of the image (e.g., an entire scene, or more of the field of view surrounding the target) to the server 4000, so that the server 4000 may accurately determine which images are required to capture the target selected by the user.

[00210] Referring again to Fig. 1-AE, the selected pixels then may be transmitted to the server 4000 using selected pixel transmission module 3340. Selected pixel transmission module 3340 may include any transmission equipment necessary, e.g., cellular radio, wireless adapter, and the like, depending on the format of communication. In an embodiment, only those pixels which have been requested are transmitted to the server via lower-bandwidth communication 3510. Similarly to lower-bandwidth communication 3515, the lower-bandwidth communication 3510 does not refer to a specific amount of bandwidth, just that the amount of bandwidth is relatively lower than higher-bandwidth communication 3505.

[00211] Referring now again to Fig. 1-Y, server 4000 may include a requested image reception from ISA module 4250. Requested image reception from ISA module 4250 may receive the image data from the array local processing module 3500 (e.g., in the arrow coming "north" from Fig. 1-AI). That image, as depicted in Fig. 1-Y, may include the target (e.g., the football player), as well as some surrounding area (e.g., the area of the field around the football player). The "surrounding area" and the specifics of what is included/transmitted from the array local processing module may be specified by the user (directly or indirectly, e.g., through a set of preferences), or may be determined by the server, e.g., in the pixel information selection module 4230 (shown in Fig. 1-O).

[00212] Referring again to Fig. 1-Y, server 4000 may also include a requested image transmission to user device module 4260. Requested image transmission to user device module 4260 may transmit the requested image to the user device 5600. Requested image transmission to user device module 4260 may include components necessary to communicate with user device 5600 and may, in some embodiments, share components with one or more other modules of server 4000, e.g., a network interface, or a wireless antenna.

[00213] Referring again to Fig. 1-Y, server 4000 may include a server cached image updating module 4270. Server cached image updating module 4270 may take the images received from the array local processing module 3500 (e.g., which may include the image to be sent to the user), and compare the received images with stored or "cached" images on the server, in order to determine if the cached images should be updated. This process may happen frequently or infrequently, depending on embodiments, and may be continuously ongoing as long as there is a data connection, in some embodiments. In some embodiments, the frequency of the process may depend on the available bandwidth to the array local processing module 3500, that is, at off-peak times, the frequency may be increased. In an embodiment, server cached image updating module 4270 compares an image received from the array local processing module 3500 with the cached version and, if the image has changed, replaces the cached version of the image with the newer image.
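
A minimal sketch of such a comparison, assuming grayscale NumPy images and an arbitrary difference threshold, is shown below; module 4270's actual criteria are not specified in this disclosure.

    import numpy as np

    def update_cache(cache, key, received, threshold=0.01):
        # Replace the cached image only when the received image differs by
        # more than `threshold` (normalized mean absolute difference).
        cached = cache.get(key)
        if cached is None or cached.shape != received.shape:
            cache[key] = received
            return True
        diff = np.abs(cached.astype(np.int16)
                      - received.astype(np.int16)).mean() / 255.0
        if diff > threshold:
            cache[key] = received
            return True
        return False  # effectively unchanged; keep the cached copy

    cache = {}
    img = np.zeros((64, 64), dtype=np.uint8)
    print(update_cache(cache, "sector-7", img))         # True (first sight)
    print(update_cache(cache, "sector-7", img.copy()))  # False (no change)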

[00214] Referring now again to Fig. 1-E, Fig. 1-E shows user device 5600. In an embodiment, user device 5600 includes image containing selected target receiving module 5630 that may be configured to receive the image from server 4000, e.g., from requested image transmission to user device module 4260 of server 4000 (e.g., depicted in Fig. 1-Y, with the data transmission indicated by a rightward-upward arrow passing through Fig. 1-Y and Fig. 1-O (to the north) before arriving at Fig. 1-E).

[00215] Referring again to Fig. 1-E, Fig. 1-E shows received image presentation module 5640, which may display the requested pixels that include the selected target to the user, e.g., by showing them on a screen of the device. In an embodiment, the display of the image may be carried out through an exemplary interface that allows the user to monitor the target, and which also may display information about the target (e.g., in an embodiment, as shown in the figures, the game statistics for the football player also may be shown), which allows a cycle of user requests and new images to be shown as the user navigates through what is seen on the MUVIA, e.g., as shown in Fig. 1-A.

[00216] Referring again to Fig. 1, Figs. 1-F, 1-P, 1-Z, and 1-AJ depict a MUVIA implementation according to an embodiment. This embodiment may be colloquially known as "live street view," in which one or more MUVIA systems allow a user to move through an area similarly to the well-known Google-branded Maps (or Google-Street), except with the cameras working in real time. For example, in an embodiment, referring now to Fig. 1-F, a user device 5700 may include a target selection reception module 5710. Target selection reception module 5710 may be a component that allows the user to select a "target" from the image, that is, a point of interest from the image. For example, in the shown example, the MUVIA may be focused on a city, and the target may be an address, a building, a car, or a person in the city. As an example, a user may select a street address as a "target." This may be facilitated by a target presentation module, e.g., image selection presentation module 5712, which may present one or more images (e.g., which may be various versions of images from MUVIA, at different resolutions or not up-to-date) from which the user may select the target, e.g., the street address. In an embodiment, image selection presentation module 5712 may use static images that may or may not be sourced by the MUVIA system, and, in another embodiment, image selection presentation module 5712 may use current or cached views from the MUVIA system.

[00217] In an embodiment, image selection presentation module 5712 may include an audible target selection module 5714 which may be configured to allow the user to select a target using audible commands, without requiring physical interaction with a device.

[00218] Referring again to Fig. 1, e.g., Fig. 1-F, in an embodiment, user device 5700 may include selected target transmission module 5720. Selected target transmission module 5720 may be configured to take the target selected by the user, and transmit the selected target to the server 4000.

[00219] Referring now to Fig. 1-P, Fig. 1-P depicts a server 4000 of the MUVIA system according to embodiments. In an embodiment, server 4000 may include a selected target reception module 4310. Selected target reception module 4310 may receive the selected target from the user device 5700. In an embodiment, server 4000 may provide all or most of the data that facilitates the selection of the target, that is, the images and the interface, which may be provided, e.g., through a web portal.

[00220] Referring again to Fig. 1-P, in an embodiment, server 4000 may include a selected image pre-processing module 4320. Selected image pre-processing module 4320 may perform one or more tasks of pre-processing the image, some of which are described herein for exemplary purposes. For example, in an embodiment, selected image pre-processing module 4320 may include a resolution determination module 4322 which may be configured to determine the resolution for the image in order to show the target (and here, resolution is merely a stand-in for any facet of the image, e.g., color depth, size, shadow, pixilation, filter, etc.). In an embodiment, selected image pre-processing module 4320 may include a cached pixel fill-in module 4324. Cached pixel fill-in module 4324 may be configured to manage which portions of the requested image are recovered from a cache, and which are updated, in order to improve performance. For example, if a view of a street is requested, certain features of the street (e.g., buildings, trees, etc.) may not need to be retrieved each time, but can be filled in with a cached version, or, in another embodiment, can be filled in by an earlier version. A check can be done to see if a red parked car is still in the same spot as it was an hour ago; if so, that part of the image may not need to be updated. Using lower resolution/prior images stored in a memory 4215, as well as other image processing techniques, cached pixel fill-in module 4324 determines which portions of the image do not need to be retrieved, thus reducing bandwidth load on the connection between the array local processing module 3600 and the server 4000.
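
One plausible mechanization of that decision is a tile-by-tile comparison, sketched below with invented names and thresholds: only tiles whose content has changed are fetched fresh, and everything else is filled from the cache.

    import numpy as np

    def tiles_needing_refresh(cached, preview, tile=64, threshold=8.0):
        # Compare the cached view against a low-resolution/prior preview and
        # return the (x, y) origins of tiles whose content has changed.
        stale = []
        h, w = cached.shape[:2]
        for y in range(0, h, tile):
            for x in range(0, w, tile):
                a = cached[y:y + tile, x:x + tile].astype(np.float32)
                b = preview[y:y + tile, x:x + tile].astype(np.float32)
                if np.abs(a - b).mean() > threshold:
                    stale.append((x, y))
        return stale

    cached = np.zeros((256, 256), dtype=np.uint8)
    preview = cached.copy()
    preview[64:128, 0:64] = 200  # e.g., the red parked car has moved
    print(tiles_needing_refresh(cached, preview))  # -> [(0, 64)]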

[00221] Referring again to Fig. 1-P, in an embodiment, selected image pre-processing module 4320 of server 4000 may include a static object obtaining module 4326, which may operate similarly to cached pixel fill-in module 4324. For example, as in the example shown in Fig. 1-B, static object obtaining module 4326 may obtain prior versions of static objects, e.g., buildings, trees, fixtures, landmarks, etc., which may save bandwidth load on the connection between the array local processing module 3600 and the server 4000.

[00222] Referring again to Fig. 1-P, in an embodiment, pixel information transmission to ISA module 4330 may transmit the request for pixels (e.g., an image, after the pre-processing) to the array local processing module 3600 (e.g., as shown in Figs. 1-Z and 1-AJ, with the downward extending dataflow arrow).

[00223] Referring now to Figs. 1-Z and 1-AJ, in an embodiment, an array local processing module 3600, that may be connected by a higher bandwidth connection to an image sensor array 3200, may be present.

[00224] Image sensor array 3200 may include one or more image sensors that may, in an embodiment, be statically pointed at a particular object or scene. Image sensor array 3200 may be a single image sensor, or more commonly, may be a group of individual image sensors 3201 that are combined to create a larger field of view. For example, in an embodiment, a ten-megapixel sensor may be used for each individual image sensor 3201. With twelve of these sensors, the effective field of view, loss-less zoom, and so forth may be increased substantially. These numbers are for example only, and any number of sensors and/or megapixel image sensor capacities may be used.

[00225] The use of many individual sensors may create a very large number of pixels captured for each exposure of the image sensor array 3200. Thus, these pixels are transmitted via a higher bandwidth communication 3605 to the array local storage and processing module 3600. In an embodiment, array local storage and processing module 3600 is integrated into the image sensor array. In another embodiment, the array local storage and processing module 3600 is separate from, but directly connected (e.g., via a USB 3.0 cable) to, the image sensor array 3200. It is noted that "higher bandwidth communication 3605" does not require a specific amount of bandwidth, but only that the bandwidth for this communication is relatively higher than the bandwidth of the communication from the array local processing module 3600 to the remote server, which may be, but is not required to be, located further away temporally.

[00226] It is noted that, because of the large number of pixels captured by image sensor array 3200, mechanical changes to the image sensor array are not generally required, although such mechanical changes are not excluded from these embodiments. For example, because the array has a very large field of view, with very high resolution, "pan" and "zoom" functions may be handled optically, rather than by mechanically changing the focal point of the lenses or by physically pointing the array at a different location. This may reduce the complexity required of the device, and also may improve the speed at which different views may be generated by the image sensor array 3200.

[00227] Referring again to Fig. 1-AJ and Fig. 1-Z, the image sensor array 3200 may capture an image that is received by image capturing module 3605. Image capturing module 3605 may take the captured image and compare it to a consolidated user request, e.g., which is provided by a consolidated user request reception module 3610. Consolidated user request reception module 3610 may receive the communication from server 4000 regarding which pixels of the image have been requested. Through use of the consolidated user request and the captured image, pixel selection module 3620 may select the pixels that have been specifically requested by the user, and mark those pixels for transmission back to the server.

[00228] After the pixels to be kept are identified, the other pixels that are not to be kept are removed, e.g., decimated at unused pixel decimation module 3630. In an embodiment, these pixels are simply discarded, e.g., not stored in a long-term memory, but rather routed to a digital trash 3617. In another embodiment, some or all of these pixels are stored in a local memory, e.g., local memory 3615. From here, these pixels may be transmitted to various locations at off-peak times, may be kept for image processing by the array local processing module 3600, or may be subject to other manipulations or processing separate from the user requests. In an embodiment, unused pixel decimation module 3630 may include or communicate with a lower resolution module 3614, which may, in some embodiments, be used to transmit a lower-resolution version of more of the image (e.g., an entire scene, or more of the field of view surrounding the target) to the server 4000, so that the server 4000 may accurately determine which images are required to capture the target selected by the user.

[00229] Referring again to Fig. 1-AJ, the selected pixels then may be transmitted to the server 4000 using selected pixel transmission module 3640. Selected pixel transmission module 3640 may include any transmission equipment necessary, e.g., cellular radio, wireless adapter, and the like, depending on the format of communication. In an embodiment, only those pixels which have been requested are transmitted to the server via lower-bandwidth communication 3510. Similarly to lower-bandwidth communication 3515, the lower-bandwidth communication 3510 does not refer to a specific amount of bandwidth, just that the amount of bandwidth is relatively lower than higher-bandwidth communication 3505.

[00230] It is noted that more pixels than what are specifically requested by the user may be transmitted, in certain embodiments. For example, the array local processing module 3600 may send pixels that border the user's requested area, but are outside the user's requested area. In an embodiment, as will be discussed herein, those pixels may be sent at a different resolution or using a different kind of compression. In another embodiment, the additional pixels may merely be sent the same as the requested pixels. In still another embodiment, server 4000 may expand the user requested areas, so that array local processing module 3600 may send only the requested pixels, but the requested pixels cover more area than what the user originally requested. These additional pixels may be transmitted and "cached" by the server or local device, which may be used to decrease latency times, in a process that will be discussed more herein.

[00231] Referring now again to Fig. 1-P, in an embodiment, server 4000 may include image receiving from ISA module 4340. Image receiving from ISA module 4340 may receive the image data from the array local processing module 3600 (e.g., in the arrow coming "north" from Fig. 1-AJ via Fig. 1-Z). The image may include the pixels that were requested from the image sensor array 3200. In an embodiment, server 4000 also may include received image post-processing module 4350, which may, among other post-processing tasks, fill in objects and pixels into the image that were determined not to be needed by selected image pre-processing module 4320, as previously described. In an embodiment, server 4000 may include received image transmission to user device module 4360, which may be configured to transmit the requested image to the user device 5700. Received image transmission to user device module 4360 may include components necessary to communicate with user device 5700 and may, in some embodiments, share components with one or more other modules of server 4000, e.g., a network interface, or a wireless antenna.

[00232] Referring now again to Fig. 1-F, user device 5700 may include a server image reception module 5730. Server image reception module 5730 may receive an image sent by the server 4000, and user device 5700 may also include a user selection presenting module 5240, which may display the requested pixels to the user, e.g., by showing them on a screen of the device. In an embodiment, the display of the image may be carried out through the exemplary interface, which allows a cycle of user requests and new images to be shown as the user navigates through what is seen on the MUVIA, e.g., as shown in Fig. 1-F.

[00233] In an embodiment, as shown in Figs. 1-F and 1-G, server image reception module 5730 may include an audio stream reception module 5732 and a video stream reception module 5734. In an embodiment, as discussed throughout this application, the MUVIA system may capture still images, video, and also sound, as well as other electromagnetic waves and other signals and data. In an embodiment, the audio signals and the video signals may be handled together, or they may be handled separately, as separate streams. Although not every module in the instant diagram separately shows audio streams and video streams, it is noted here that all implementations of MUVIA contemplate both audio and video coverage, as well as still image and other data collection.

[00234] Referring now to Fig. 1-G, which shows another portion of user device 5700, Fig. 1-G may include a display 5755 and a memory 5765, which may be used to facilitate presentation and/or storage of the received images.

[00235] Figs. 1-H, 1-R, 1-AA, and 1-AB show an embodiment of a MUVIA implementation. For example, referring now to Fig. 1-H, Fig. 1-H shows an embodiment of a user device 5800. For exemplary purposes, the user device 5800 may be an augmented reality device that shows a user looking down a "street" at which the user is not actually present, e.g., a "virtual tourism" experience in which the user may use their augmented reality device (e.g., goggles, e.g., an Oculus Rift-type headgear device), which may be a wearable computer. It is noted that this embodiment is not limited to wearable computers or augmented reality, but, as in all of the embodiments described in this disclosure, may be any device. The use of a wearable augmented/virtual reality device is merely for illustrative and exemplary purposes.

[00236] In an embodiment, user device 5800 may have a field of view 5810, as shown in Fig. 1-H. The field of view for the user 5810 may be illustrated in Fig. 1-H as follows. The most internal rectangle, shown by the dot hatching, represents the user's "field of view" as they look at their "virtual world." The second most internal rectangle, with the straight line hatching, represents the "nearest" objects to the user, that is, a range where the user is likely to "look" next, by turning their head or moving their eyes. In an embodiment, this area of the image may already be loaded on the device, e.g., through use of a particular codec, which will be discussed in more detail herein. The outermost rectangle, which is the image without hatching, represents further outside the user's viewpoint. This area, too, may already be loaded on the device. By loading areas where the user may eventually look, the system can reduce latency and make a user's motions, e.g., movement of head, eyes, and body, appear "natural" to the system.

[00237] Referring now to Figs. 1-AA and 1-AB, these figures show an array local processing module 3700 that is connected to an image sensor array 3200 (e.g., as shown in Fig. 1-AK, and "viewing" a city as shown in Fig. 1-AJ). The image sensor array 3200 may operate as previously described in this document. In an embodiment, array local processing module 3700 may include a captured image receiving module 3710, which may receive the entire scene captured by the image sensor array 3200, through the higher-bandwidth communication channel 3505. As described previously in this application, these pixels may be "cropped" or "decimated" into the relevant portion of the captured image, as described by one or more of the user device 5800, the server 4000, and the processing done at the array local processing module 3700. This process may occur as previously described. The relevant pixels may be handled by relevant portion of captured image receiving module 3720.

[00238] Referring now to Fig. 1-AB, in an embodiment, the relevant pixels for the image that are processed by relevant portion of captured image receiving module 3720 may be encoded using a particular codec at relevant portion encoding module 3730. In an embodiment, the codec may be configured to encode the innermost rectangle, e.g., the portion that represents the current user's field of view, e.g., portion 3716, at a higher resolution, or a different compression, or a combination of both. The codec may be further configured to encode the second rectangle, e.g., with the vertical line hatching, e.g., portion 3714, at a different resolution and/or a different (e.g., a higher) compression. Similarly, the outermost portion of the image, e.g., the clear portion 3712, may again be coded at still another resolution and/or a different compression. In an embodiment, the codec itself handles the algorithm for encoding the image, and as such, in an embodiment, the codec may include information about user device 5800.

[00239] As shown in Fig. 1-AB, the encoded portion of the image, including portions 3716, 3714, and 3712, may be transmitted using encoded relevant portion transmitting module 3740. It is noted that "lower compression," "more compression," and "higher compression," are merely used as one example for the kind of processing done by the codec. For example, instead of lower compression, a different sampling algorithm or compacting algorithm may be used, or a lossier algorithm may be implemented for various parts of the encoded relevant portion.
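
As a rough illustration of such region-dependent encoding, the Python sketch below uses the Pillow imaging library, with JPEG quality levels merely standing in for whatever codec parameters are actually used; the band width and quality values are arbitrary assumptions. The gaze region is compressed lightly and the periphery aggressively.

    import io
    from PIL import Image

    def encode_rings(image, gaze_box, quality=(95, 75, 40)):
        # Encode the current field of view at high quality, a surrounding
        # band at medium quality, and the full frame at heavy compression.
        x1, y1, x2, y2 = gaze_box
        pad = 100  # width of the "nearest objects" band (arbitrary)
        near_box = (max(0, x1 - pad), max(0, y1 - pad),
                    min(image.width, x2 + pad), min(image.height, y2 + pad))
        regions = {"inner": (image.crop(gaze_box), quality[0]),
                   "near": (image.crop(near_box), quality[1]),
                   "outer": (image, quality[2])}
        encoded = {}
        for name, (crop, q) in regions.items():
            buf = io.BytesIO()
            crop.save(buf, format="JPEG", quality=q)
            encoded[name] = buf.getvalue()
        return encoded

    img = Image.new("RGB", (1920, 1080))
    sizes = {k: len(v) for k, v in
             encode_rings(img, (760, 340, 1160, 740)).items()}
    print(sizes)  # the periphery compresses hardest despite its larger area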

[00240] Referring now to Fig. 1-R, Fig. 1-R depicts a server 4000 in a MUVIA system according to an embodiment. For example, as shown in Fig. 1-R, server 4000 may include, in addition to portions previously described, an encoded image receiving module 4410. Encoded image receiving module 4410 may receive the encoded image, encoded as previously described, from encoded relevant portion transmitting module 3740 of array local processing module 3700.

[00241] Referring again to Fig. 1-R, server 4000 may include an encoded image transmission controlling module 4420. Encoded image transmission controlling module 4420 may transmit portions of the image to the user device 5800. In an embodiment, at least partially depending on the bandwidth and the particulars of the user device, the server may send all of the encoded image to the user device, and let the user device decode the portions as needed, or may decode the image and send portions in piecemeal, or with a different encoding, depending on the needs of the user device, and the complexity that can be handled by the user device.

[00242] Referring again to Fig. 1-H, user device 5800 may include an encoded image transmission receiving module 5820, which may be configured to receive the image that is coded in a particular way, e.g., as will be disclosed in more detail herein. Fig. 1-H also may include an encoded image processing module 5830 that may handle the processing of the image, that is, encoding and decoding portions of the image, or other processing necessary to provide the image to the user.

[00243] Referring now to Fig. 1-AL, Fig. 1-AL shows an implementation of an Application Programming Interface (API) for the various MUVIA components. Specifically, image sensor array API 7800 may include, among other elements, a programming specification 7810, that may include, for example, libraries, classes, specifications, templates, or other coding elements that generally make up an API, and an access authentication module 7820 that governs API access to the various image sensor arrays. The API allows third party developers to access the workings of the image sensor array and the array local processing module 3700, so that the third party developers can write applications for the array local processing module 3700, as well as determine which data captured by the image sensor array 3200 (which often may be multiple gigabytes or more of data per second) should be kept or stored or transmitted. In an embodiment, API access to certain functions may be limited. For example, a tiered system may allow a certain number of API calls to the MUVIA data per second, per minute, per hour, or per day. In an embodiment, a third party might pay fees or perform a registration that would allow more or less access to the MUVIA data. In an embodiment, the third party could host their application on a separate web site, and let that web site access the image sensor array 3200 and/or the array local processing module 3700 directly.
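
A bare-bones sketch of such tiered limiting follows; the tier names, quotas, and sixty-second window are invented for illustration and are not drawn from this disclosure.

    import time

    class TieredRateLimiter:
        # Allow a fixed number of API calls per minute, with the quota
        # depending on the caller's (hypothetical) registration tier.
        QUOTAS = {"free": 10, "registered": 100, "paid": 1000}

        def __init__(self):
            self.calls = {}  # api_key -> timestamps within the window

        def allow(self, api_key, tier):
            now = time.time()
            recent = [t for t in self.calls.get(api_key, [])
                      if now - t < 60.0]
            if len(recent) >= self.QUOTAS.get(tier, 0):
                return False
            recent.append(now)
            self.calls[api_key] = recent
            return True

    limiter = TieredRateLimiter()
    print(limiter.allow("third-party-app", "free"))  # True until quota is hit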

[00244] Referring again to Fig. 1, in an embodiment, Figs. 1-I, 1-J, 1-S, 1-T, 1-AC, 1-AD, 1-AM, and 1-AN show a MUVIA implementation that allows insertion of advertising (or other context-sensitive material) into the images displayed to the user.

[00245] Referring again to Fig. 1-I, in an embodiment, user device 5900 may include a user selection accepting module 5910. User selection accepting module 5910 may be configured to receive user input about what the user wants to see. For example, as shown in Fig. 1-I, the user selection accepting module 5910 may show an image from image sensor array 3200, and the user may "pan" and "zoom" the image using any known interface, including, but not limited to, keyboard, mouse, touch, haptic, augmented reality interface, voice command, or nonverbal motion commands (e.g., as part of a video game system interface, e.g., the Microsoft Kinect). It is noted, and as will be discussed in more detail herein, that the camera itself is not "zooming" or "panning," because the camera does not move. What is happening is that different pixels that are captured by the image sensor array 3200 are kept by the image sensor array 3200 and transmitted to the server 4000.
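
To make this concrete, "panning" and "zooming" can be modeled as arithmetic on a requested pixel window over the stationary scene; nothing optical moves. The following Python is a minimal sketch, and all names in it are ours rather than the specification's:

    from dataclasses import dataclass

    @dataclass
    class Viewport:
        """A requested pixel window over the stationary sensor array."""
        x: int       # left edge, in scene pixels
        y: int       # top edge, in scene pixels
        width: int
        height: int

    def pan(v, dx, dy, scene_w, scene_h):
        # "Panning" translates the window, clamped to the scene bounds.
        x = max(0, min(v.x + dx, scene_w - v.width))
        y = max(0, min(v.y + dy, scene_h - v.height))
        return Viewport(x, y, v.width, v.height)

    def zoom(v, factor, scene_w, scene_h):
        # "Zooming" shrinks (factor > 1) or grows the window about its center.
        cx, cy = v.x + v.width // 2, v.y + v.height // 2
        w = max(1, min(int(v.width / factor), scene_w))
        h = max(1, min(int(v.height / factor), scene_h))
        return pan(Viewport(cx - w // 2, cy - h // 2, w, h), 0, 0, scene_w, scene_h)

Each user gesture simply produces a new Viewport, which becomes the "particular image data" requested from the array.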

[00246] In an embodiment, the "user selection" facilitated by the user selection accepting module 5910 may not involve a user at all. For example, in an embodiment, e.g., in a security embodiment, the user selection may be handled completely by machine, and may include "select any portion of the image with movement," or "select any portion of the image in which a person is recognized," or "select any portion of the image in which a particular person, e.g., a person on the FBI most wanted list, is recognized."
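
For the machine-driven "select any portion of the image with movement" case, the selection might reduce to tile-level frame differencing. The sketch below is illustrative only; the tile size, threshold, and frame representation are all assumptions:

    def select_moving_tiles(prev_frame, curr_frame, tile=64, threshold=12.0):
        """Return (row, col) indices of tiles whose mean absolute change
        between two grayscale frames exceeds a threshold.

        Frames are 2-D lists of pixel values; a real implementation would
        operate on the sensor array's native buffers.
        """
        rows, cols = len(curr_frame), len(curr_frame[0])
        selected = []
        for r in range(0, rows, tile):
            for c in range(0, cols, tile):
                diff = count = 0
                for y in range(r, min(r + tile, rows)):
                    for x in range(c, min(c + tile, cols)):
                        diff += abs(curr_frame[y][x] - prev_frame[y][x])
                        count += 1
                if diff / count > threshold:
                    selected.append((r // tile, c // tile))
        return selected

The returned tiles would then stand in for a human user's selection and flow through the rest of the request pipeline unchanged.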

[00247] Referring again to Fig. 1-I, in an embodiment, user device 5900 may include a user selection transmitting module 5920. The user selection transmitting module 5920 may take the user selection from user selection accepting module 5910, and transmit the selection to the server 4000. The transmission may include some pre-processing; for example, the user device 5900 may determine the size and parameters of the image prior to sending the request to the server 4000, or that processing may be handled by the server 4000. Following the thick-line arrow leftward from user selection transmitting module 5920 through to Fig. 1-K, the transmission goes to server 4000, as will be discussed herein. It is noted that the transmission to the server 4000 may also include data about the user device, for example, the screen resolution, the window size, the type of device, an identity of the user, a level of service the user has paid for (in embodiments in which such services are prioritized by the camera/server), other capabilities of the device, e.g., framerate, and the like.
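
One plausible shape for such a transmission, carrying the selection together with the device-describing data enumerated above, is sketched here; every field and function name is an assumption made for illustration:

    from dataclasses import dataclass, asdict
    import json

    @dataclass
    class SelectionRequest:
        region: tuple              # (x, y, width, height) of requested pixels
        screen_resolution: tuple   # e.g., (1920, 1080)
        window_size: tuple
        device_type: str
        user_id: str
        service_level: str         # used where requests are prioritized
        max_framerate: int

    def serialize(request: SelectionRequest) -> str:
        # One possible wire format for the user device -> server transmission.
        return json.dumps(asdict(request))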

[00248] Referring again to Fig. 1-I, Fig. 1-I also includes a selected image receiving module 5930 and a user selection presenting module 5940, which will be discussed in more detail herein, with respect to the dataflow of this embodiment.

[00249] Referring now to Fig. 1-T (graphically represented as "down" and "to the right" of Fig. 1-I), in an embodiment, a server 4000 may include a selected image reception module 4510. In an embodiment, selected image reception module 4510 of server 4000 may receive the selected target from the user device 5900. The selected target data may take various formats, e.g., it may be image data, it may be metadata that identifies the selected target, it may be some other designation, e.g., an ID number, a tracking number, or a piece of information, like a license plate or a social security number. The selected target data may be an address or a physical description, or any other instantiation of data that can be used to identify something.

[00250] Referring again to Fig. 1-T, in an embodiment, server 4000 may include selected image pre-processing module 4520. Selected image pre-processing module 4520 may perform one or more tasks of pre-processing the image, some of which have been previously described with respect to other embodiments. In an embodiment, server 4000 also may include pixel information transmission to ISA module 4330 configured to transmit the image request data to the image sensor array 3200, as has been previously described.

[00251] Referring now to Figs. 1-AD and 1-AN, array local processing module 3700 may be connected to an image sensor array 3200 through a higher-bandwidth communication link 3505, e.g., a USB or PCI port. In an embodiment, array local processing module 3700 may include a request reception module 3710. Request reception module 3710 may receive the request for an image from the server 4000, as previously described. Request reception module 3710 may transmit the data to a pixel selection module 3720, which may receive the pixels captured from image sensor array 3200, and select the ones that are to be kept. That is, in an embodiment, through use of the (sometimes consolidated) user requests and the captured image, pixel selection module 3720 may select the pixels that have been specifically requested by the user, and mark those pixels for transmission back to the server.

[00252] After the pixels to be kept are identified, the other pixels that are not to be kept are removed, e.g., decimated at unused pixel decimation module 3730. In an embodiment, these pixels are simply discarded, e.g., not stored in a long-term memory; that is, they are removed to a digital trash 3717. In another embodiment, some or all of these pixels are stored in a local memory, e.g., local memory 3715. From here, these pixels may be transmitted to various locations at off-peak times, may be kept for image processing by the array local processing module 3700, or may be subject to other manipulations or processing separate from the user requests, as described in previous embodiments. In an embodiment, unused pixel decimation module 3730 may be used to transmit a lower-resolution version of more of the image (e.g., an entire scene, or more of the field of view surrounding the target) to the server 4000, so that the server 4000 may accurately determine which images are required to fulfill the request of the user.
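
As a rough sketch of this keep/decimate/downsample behavior, expressed in Python with invented names and an arbitrary sampling stride:

    def split_pixels(frame, keep_region):
        """Crop the pixels requested by the user (pixel selection module 3720
        keeps these); everything outside may be discarded or stored locally."""
        x, y, w, h = keep_region
        return [row[x:x + w] for row in frame[y:y + h]]

    def low_res_overview(frame, stride=8):
        # Optional lower-resolution version of more of the scene, so the
        # server can accurately refine which full-resolution pixels are
        # actually required to fulfill the request.
        return [row[::stride] for row in frame[::stride]]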

[00253] Referring again to Fig. 1-AN, the selected pixels then may be transmitted to the server 4000 using selected pixel transmission module 3740. Selected pixel transmission module 3740 may include any transmission equipment necessary, e.g., cellular radio, wireless adapter, and the like, depending on the format of communication. In an embodiment, only those pixels which have been requested are transmitted to the server via lower-bandwidth communication 3710. Similarly to lower-bandwidth communication 3715, the lower-bandwidth communication 3710 does not refer to a specific amount of bandwidth, just that the amount of bandwidth is relatively lower than higher-bandwidth communication 3505.

[00254] It is noted that more pixels than what are specifically requested by the user may be transmitted, in certain embodiments. For example, the array local processing module 3700 may send pixels that border the user's requested area, but are outside the user's requested area. In an embodiment, as will be discussed herein, those pixels may be sent at a different resolution or using a different kind of compression. In another embodiment, the additional pixels may merely be sent in the same manner as the requested pixels. In still another embodiment, server 4000 may expand the user requested areas, so that array local processing module 3700 may send only the requested pixels, but the requested pixels cover more area than what the user originally requested. These additional pixels may be transmitted and "cached" by the server or local device, which may be used to decrease latency times, in a process that will be discussed more herein.
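
The expansion-and-cache idea can be expressed as a margin grown around the user's request and clamped to the scene; the margin size below is an arbitrary illustration:

    def expand_request(region, scene_w, scene_h, margin=128):
        """Grow a requested (x, y, w, h) region so that bordering pixels are
        fetched and cached, reducing latency on subsequent pans."""
        x, y, w, h = region
        x2, y2 = max(0, x - margin), max(0, y - margin)
        w2 = min(scene_w - x2, w + 2 * margin)
        h2 = min(scene_h - y2, h + 2 * margin)
        return (x2, y2, w2, h2)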

[00255] Referring now again to Fig. 1-T, in an embodiment, server 4000 may include received image post-processing module 4550. Received image post-processing module 4550 may receive the image data from the array local processing module 3700 (e.g., in the arrow coming "north" from Fig. 1-AN via Fig. 1-AD). The image may include the pixels that were requested from the image sensor array 3200.

[00256] In an embodiment, server 4000 also may include advertisement insertion module 4560. Advertisement insertion module 4560 may insert an advertisement into the received image. The advertisement may be based on one or more of the contents of the image, a characteristic of a user or the user device, or a setting of the advertisement server component 7700 (see, e.g., Fig. 1-AC, as will be discussed in more detail herein). The advertisement insertion module 4560 may place the advertisement into the image using any known image combination techniques, or, in another embodiment, the advertisement image may be in a separate layer, overlay, or any other data structure. In an embodiment, advertisement insertion module 4560 may include context-based advertisement insertion module 4562, which may be configured to add advertisements that are based on the context of the image. For example, if the image is a live street view of a department store, the context of the image may show advertisements related to products sold by that department store, e.g., clothing, cosmetics, or power tools.
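
Treating the advertisement as a bitmap composited onto the received pixels, one hypothetical form of the insertion step is the following; a layered or overlay embodiment would instead transmit the ad as separate data:

    def insert_advertisement(image, ad, position):
        """Paste an ad bitmap into an image at (x, y).

        Both arguments are 2-D lists of pixel values; this stands in for
        whatever image combination technique module 4560 actually uses.
        Assumes the ad fits within the image bounds.
        """
        x, y = position
        for r, ad_row in enumerate(ad):
            image[y + r][x:x + len(ad_row)] = ad_row
        return image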

[00257] Referring again to Fig. 1-T, server 4000 may include a received image with advertisement transmission to user device module 4570 configured to transmit the image with the advertisement to user device 5900. Received image with advertisement transmission to user device module 4570 may include components necessary to communicate with user device 5900 and may, in some embodiments, share components with one or more other modules of server 4000, e.g., a network interface, or a wireless antenna.

[00258] Referring again to Fig. 1-I, user device 5900 may include a selected image receiving module 5930, which may receive the pixels that were sent by the server 4000, and user selection presenting module 5940, which may display the requested pixels to the user, including the advertisement, e.g., by showing them on a screen of the device. In an embodiment, the display of the image may be carried out through the exemplary interface, which allows a cycle of user requests and new images to be shown as the user navigates through what is seen on the MUVIA, e.g., as shown in Fig. 1-I.

[00259] Referring now to Fig. 1-AC, Fig. 1-AC shows an advertisement server component 7700 configured to deliver advertisements to the server 4000 for insertion into the images prior to delivery to the user. In an embodiment, advertisement server component 7700 may be integrated with server 4000. In another embodiment, advertisement server component 7700 may be separate from server 4000 and may communicate with server 4000. In yet another embodiment, rather than interacting with server 4000, advertisement server component 7700 may interact directly with the user device 5900, and insert the advertisement into the image after the image has been received, or, in another embodiment, cause the user device to display the advertisement concurrently with the image (e.g., overlapping or adjacent to the image). In such embodiments, some of the described modules of server 4000 may be incorporated into user device 5900, but the functionality of those modules would operate as previously described.

[00260] Referring again to Fig. 1-AC, in an embodiment, advertisement server component 7700 may include a user data collection module 7705. User data collection module 7705 may collect data from user device 5900, and use that data to drive placement of advertisements (e.g., based on a user's browser history, e.g., visits to sports sites, and the like).

[00261] Referring again to Fig. 1-AC, in an embodiment, advertisement server component 7700 may include advertisement database 7715, which includes advertisements that are ready to be inserted into images. In an embodiment, these advertisements may be created on the fly.

[00262] Referring again to Fig. 1-AC, in an embodiment, advertisement server component 7700 may include an advertisement request reception module 7710 which receives a request to add an advertisement into the image (the receipt of the request is not shown to ease understanding of the drawings). In an embodiment, advertisement server component 7700 may include advertisement selection module 7720, which may include an image analysis module 7722 configured to analyze the image to determine the best context-based advertisement to place into the image. In an embodiment, that decision may be made by the server 4000, or partly at the server 4000 and partly at the advertisement server component 7700 (e.g., the advertisement server component may have a set of advertisements from which a particular one may be chosen). In an embodiment, various third parties may compensate the operators of advertisement server component 7700, server 4000, or any other component of the system, in order to receive preferential treatment.
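
Stripped to its essentials, the context matching performed by image analysis module 7722 might amount to scoring tag overlap between the analyzed image and each stored advertisement; the tag scheme here is invented for illustration:

    def select_advertisement(image_tags, ad_database):
        """Pick the ad whose tags best overlap the tags found in the image.

        image_tags: a set of tags from some image-analysis step, e.g.
            {"department store", "clothing"}.
        ad_database: (ad_id, tag_set) pairs, e.g. drawn from database 7715.
        """
        best_id, best_score = None, -1
        for ad_id, ad_tags in ad_database:
            score = len(image_tags & ad_tags)
            if score > best_score:
                best_id, best_score = ad_id, score
        return best_id

Third-party compensation for preferential treatment, as mentioned above, could enter this selection as a weighting term in the score.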

[00263] Referring again to Fig. 1-AC, in an embodiment, advertisement server component 7700 may include a selected advertisement transmission module 7730, which may transmit the selected advertisement (or a set of selected advertisements) to the server 4000. In an embodiment, selected advertisement transmission module 7730 may send the complete image with the advertisement overlaid, e.g., in an implementation in which the advertisement server component 7700 also handles the placement of the advertisement. In an embodiment in which advertisement server component 7700 is integrated with server 4000, this module may be an internal transmission module, as may all such transmission/reception modules.

Exemplary Environment 200

[00264] Referring now to Fig. 2A, Fig. 2A illustrates an example environment 200 in which methods, systems, circuitry, articles of manufacture, and computer program products and architecture, in accordance with various embodiments, may be implemented by at least one server device 230. Image device 220 may include a number of individual sensors that capture data. Although commonly referred to throughout this application as "image data," this is merely shorthand for data that can be collected by the sensors. Other data, including video data, audio data, electromagnetic spectrum data (e.g., infrared, ultraviolet, radio, or microwave data), thermal data, and the like, may be collected by the sensors.

[00265] Referring again to Fig. 2A, in an embodiment, image device 220 may operate in an environment 200. Specifically, in an embodiment, image device 220 may capture a scene 215. The scene 215 may be captured by a number of sensors 243. Sensors 243 may be grouped in an array, which in this context means they may be grouped in any pattern, on any plane, but have a fixed position relative to one another. Sensors 243 may capture the image in parts, which may be stitched back together by processor 222. There may be overlap in the images captured by sensors 243 of scene 215, which may be removed.
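
Because sensors 243 hold fixed positions relative to one another, stitching can be as simple as pasting each sensor's tile into a shared scene buffer at a pre-calibrated offset, with later writes resolving the overlap. A minimal sketch, with all names and the calibration assumed:

    def stitch(tiles, offsets, scene_w, scene_h):
        """Assemble per-sensor tiles into one scene buffer.

        tiles:   2-D pixel lists, one per sensor
        offsets: (x, y) scene positions, fixed per sensor by calibration
        Overlapping pixels are simply overwritten, removing duplicates.
        """
        scene = [[0] * scene_w for _ in range(scene_h)]
        for tile, (ox, oy) in zip(tiles, offsets):
            for r, row in enumerate(tile):
                scene[oy + r][ox:ox + len(row)] = row
        return scene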

[00266] Upon capture of the scene in image device 220, in processes and systems that will be described in more detail herein, the requested pixels are selected. Specifically, pixels that have been identified by a remote user, by a server, by the local device, by another device, by a program written by an outside user with an API, by a component or other hardware or software in communication with the image device, and the like, are transmitted to a remote location via a communications network 240. The pixels that are to be transmitted are illustrated in Fig. 2A as selected portion 255; however, this is a simplified depiction meant for illustrative purposes only.

[00267] Referring again to Fig. 2A, in an embodiment, server device 230 may be any device or group of devices that is connected to a communication network. Although in some examples, server device 230 is distant from image device 220, that is not required. Server device 230 may be "remote" from image device 220, meaning that they are separate components; "remote" does not necessarily imply a specific distance. The communications network may be a local transmission component, e.g., a PCI bus. Server device 230 may include a request handling module 232 that handles requests for images from user devices, e.g., user device 250A and user device 250B. Request handling module 232 also may handle other remote computers and/or users that want to take active control of the image device, e.g., through an API, or through more direct control.

[00268] Server device 230 also may include an image device management module 234, which may perform some of the processing to determine which of the captured pixels of image device 220 are kept. For example, image device management module 234 may do some pattern recognition, e.g., to recognize objects of interest in the scene, e.g., a particular football player, as shown in the example of Fig. 2A. In other embodiments, this processing may be handled at the image device 220 or at the user device 250. In an embodiment, server device 230 may limit a size of the selected portion based on a screen resolution of the requesting user device.

[00269] Server device 230 then may transmit the requested portions to the user devices, e.g., user device 250A and user device 250B. In another embodiment, the user device or devices may directly communicate with image device 220, cutting out server device 230 from the system.

[00270] In an embodiment, user devices 250A and 250B are shown; however, user devices may be any electronic device or combination of devices, which may be located together or spread across multiple devices and/or locations. Image device 220 may be a server device, or may be a user-level device, e.g., including, but not limited to, a cellular phone, a network phone, a smartphone, a tablet, a music player, a walkie-talkie, a radio, an augmented reality device (e.g., augmented reality glasses and/or headphones), wearable electronics, e.g., watches, belts, earphones, or "smart" clothing, headphones, audio/visual equipment, media player, television, projection screen, flat screen, monitor, clock, appliance (e.g., microwave, convection oven, stove, refrigerator, freezer), a navigation system (e.g., a Global Positioning System ("GPS") system), a medical alert device, a remote control, a peripheral, an electronic safe, an electronic lock, an electronic security system, a video camera, a personal video recorder, a personal audio recorder, and the like. Device 220 may include a device interface 243 which may allow the device 220 to output data to the client in sensory (e.g., visual or any other sense) form, and/or allow the device 220 to receive data from the client, e.g., through touch, typing, or moving a pointing device (e.g., a mouse). User device 250 may include a viewfinder or a viewport that allows a user to "look" through the lens of image device 220, regardless of whether the user device 250 is spatially close to the image device 220.

[00271] Referring again to Fig. 2A, in various embodiments, the communication network 240 may include one or more of a local area network (LAN), a wide area network (WAN), a metropolitan area network (MAN), a wireless local area network (WLAN), a personal area network (PAN), a Worldwide Interoperability for Microwave Access (WiMAX) network, a public switched telephone network (PSTN), a general packet radio service (GPRS) network, a cellular network, and so forth. The communication networks 240 may be wired, wireless, or a combination of wired and wireless networks. It is noted that "communication network" as it is used in this application refers to one or more communication networks, which may or may not interact with each other.

[00272] Referring now to Fig. 2B, Fig. 2B shows a more detailed version of server device 230, according to an embodiment. The server device 230 may include a device memory 245. In an embodiment, device memory 245 may include memory, random access memory ("RAM"), read-only memory ("ROM"), flash memory, hard drives, disk-based media, disc-based media, magnetic storage, optical storage, volatile memory, nonvolatile memory, and any combination thereof. In an embodiment, device memory 245 may be separated from the device, e.g., available on a different device on a network, or over the air. For example, in a networked system, there may be more than one server device 230 whose device memories 245 may be located at a central server that may be a few feet away or located across an ocean. In an embodiment, device memory 245 may include one or more of one or more mass storage devices, read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), cache memory such as random access memory (RAM), flash memory, static random access memory (SRAM), dynamic random access memory (DRAM), and/or other types of memory devices. In an embodiment, memory 245 may be located at a single network site. In an embodiment, memory 245 may be located at multiple network sites, including sites that are distant from each other. In an embodiment, device memory 245 may include one or more of cached images 245A and previously retained image data 245B, as will be discussed in more detail further herein.

[00273] Referring again to Fig. 2B, in an embodiment, server device 230 may include an optional viewport 247, which may be used to view images received by server device 230. This optional viewport 247 may be physical (e.g., glass) or electronic (e.g., an LCD screen), or may be at a distance from server device 230.

[00274] Referring again to Fig. 2B, Fig. 2B shows a more detailed description of server device 230. In an embodiment, device 220 may include a processor 222. Processor 222 may include one or more microprocessors, Central Processing Units ("CPUs"), Graphics Processing Units ("GPUs"), Physics Processing Units, Digital Signal Processors, Network Processors, Floating Point Processors, and the like. In an embodiment, processor 222 may be a server. In an embodiment, processor 222 may be a distributed-core processor. Although processor 222 is illustrated as a single processor that is part of a single device 220, processor 222 may be multiple processors distributed over one or many devices 220, which may or may not be configured to operate together.

[00275] Processor 222 is illustrated as being configured to execute computer readable instructions in order to execute one or more operations described above, and as illustrated in Fig. 10, Figs. 11A-11G, Figs. 12A-12E, Figs. 13A-13C, and Figs. 14A-14E. In an embodiment, processor 222 is designed to be configured to operate as processing module 250, which may include one or more of a request for particular image data that is part of a scene acquiring module 252, a request for particular image data transmitting to an image sensor array module 254 configured to transmit the request to the image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data, a particular image data from the image sensor array exclusive receiving module 256 configured to receive only the particular image data from the image sensor array, and a received particular image data transmitting to at least one requestor module 258 configured to transmit the received particular image data to at least one requestor.
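
Read together, modules 252 through 258 describe one pass through a four-step pipeline. The schematic sketch below assumes hypothetical requestor and sensor-array objects with the named methods; none of these names come from the specification:

    def handle_request(requestor, sensor_array):
        # Module 252: acquire a request for particular image data of a scene.
        request = requestor.get_request()

        # Module 254: transmit the request to the image sensor array, which
        # captures a scene larger than the requested particular image data.
        sensor_array.send_request(request)

        # Module 256: receive only the particular image data back.
        particular_image_data = sensor_array.receive_selected_pixels()

        # Module 258: transmit the received data to at least one requestor.
        requestor.send(particular_image_data)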

Exemplary Environment 300A

[00276] Referring now to Fig. 3A, Fig. 3A shows an exemplary embodiment of an image device, e.g., image device 220A operating in an environment 300A. In an embodiment, image device 220A may include an array 310 of image sensors 312 as shown in Fig. 3A. The array of image sensors in this image is shown in a rectangular grid; however, this is merely exemplary to show that image sensors 312 may be arranged in any format. In an embodiment, each image sensor 312 may capture a portion of scene 315, which portions are then processed by processor 350. Although processor 350 is shown as local to image device 220A, it may be remote to image device 220A, with a sufficiently high-bandwidth connection to receive all of the data from the array of image sensors (e.g., multiple USB 3.0 lines). In an embodiment, the selected portions from the scene (e.g., the portions shown in the shaded box, e.g., selected portion 315), may be transmitted to a remote device 330, which may be a user device or a server device, as previously described. In an embodiment, the pixels that are not transmitted to remote device 330 may be stored in a local memory 340 or discarded.

Exemplary Environment 300B

[00277] Referring now to Fig. 3B, Fig. 3B shows an exemplary embodiment of an image device, e.g., image device 320B operating in an environment 300B. In an embodiment, image device 320B may include an image sensor array 320B, e.g., an array of image sensors, which, in this example, are arranged around a polygon to increase the field of view that can be captured, that is, they can capture scene 315B, illustrated in Fig. 3B as a natural landmark that can be viewed in a virtual tourism setting. Processor 322 receives the scene 315B and selects the pixels from the scene 315B that have been requested by a user, e.g., requested portions 317B. Requested portions 317B may include an overlapping area 324B that is only transmitted once. In an embodiment, the requested portions 317B may be transmitted to a remote location via communications network 240.

Exemplary Environment 300C

[00278] Referring now to Fig. 3C, Fig. 3C shows an exemplary embodiment of an image device, e.g., image device 320C operating in an environment 300C. In an embodiment, image device 320C may capture a scene, a part of which, e.g., scene portion 315C, is shown, as previously described in other embodiments (e.g., some parts of image device 320C are omitted for simplicity of drawing). In an embodiment, e.g., scene portion 315C may show a street-level view of a busy road, e.g., for a virtual tourism or virtual reality simulator. In an embodiment, different portions of the scene portion 315C may be transmitted at different resolutions or at different times. For example, in an embodiment, a central part of the scene portion 315C, e.g., portion 316, which may correspond to what a user's eyes would see, is transmitted at a first resolution, or "full" resolution relative to what the user's device can handle. In an embodiment, an outer border outside portion 316, e.g., portion 314, may be transmitted at a second resolution, which may be lower, e.g., lower than the first resolution. In another embodiment, a further outside portion, e.g., portion 312, may be discarded, transmitted at a still lower rate, or transmitted asynchronously.
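
The three nested regions just described behave like a foveated transmission plan: a full-resolution window where the user is looking, a reduced-resolution ring around it, and an outer remainder that is dropped or deferred. A minimal sketch, with all sizes and strides invented for illustration:

    def transmission_plan(scene_w, scene_h, gaze,
                          inner=(512, 512), outer=(1536, 1536)):
        """Plan per-region resolutions around the user's gaze point.

        Returns (region, stride) pairs: stride 1 is full resolution (cf.
        portion 316); stride 4 is the lower-resolution ring (cf. portion
        314). Pixels outside both regions (cf. portion 312) are discarded
        or sent asynchronously.
        """
        def centered(w, h):
            x = max(0, min(gaze[0] - w // 2, scene_w - w))
            y = max(0, min(gaze[1] - h // 2, scene_h - h))
            return (x, y, w, h)

        return [(centered(*inner), 1), (centered(*outer), 4)]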

Exemplary Environment 400A

[00279] Referring now to Fig. 4A, Fig. 4A shows an exemplary embodiment of a server device, e.g., server device 430A. In an embodiment, an image device, e.g., image device 420A may capture a scene 415. Scene 415 may be stored in local memory 440. The portions of scene 415 that are requested by the server device 430A may be transmitted (e.g., through requested image transfer 465) to requested pixel reception module 432 of server device 430A. In an embodiment, the requested pixels transmitted to requested pixel reception module 432 may correspond to images that were requested by various users and/or devices (not shown) in communication with server device 430A.

[00280] Referring again to Fig. 4A, in an embodiment, pixels not transmitted from local memory 440 of image device 420A may be stored in untransmitted pixel temporary storage 440B. These untransmitted pixels may be stored and transmitted to the server device 430A at a later time, e.g., an off-peak time for requests for images of scene 415. For example, in an embodiment, the pixels stored in untransmitted pixel temporary storage 440B may be transmitted to the unrequested pixel reception module 434 of server device 430A at night, or when other users are disconnected from the system, or when the available bandwidth to transfer pixels between image device 420A and server device 430A reaches a certain threshold value.
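
The deferred transfer can be gated by a simple bandwidth or time-of-day predicate; the threshold, window, and function name below are assumptions made for illustration:

    import datetime

    def should_send_unrequested(available_mbps, min_idle_mbps=50.0,
                                off_peak_hours=(1, 5)):
        """Decide whether to flush untransmitted pixel temporary storage 440B
        to unrequested pixel reception module 434.

        Sends when measured spare bandwidth crosses a threshold, or during
        an off-peak window (here, 01:00 to 05:00 local time).
        """
        hour = datetime.datetime.now().hour
        in_off_peak = off_peak_hours[0] <= hour < off_peak_hours[1]
        return available_mbps >= min_idle_mbps or in_off_peak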

[00281] In an embodiment, server device 430A may analyze the pixels received by unrequested pixel reception module 434, for example, to provide a repository of static images from the scene 415 that do not need to be transmitted from the image device 420A each time certain portions of scene 415 are requested.

Exemplary Environment 400B

[00282] Referring now to Fig. 4B, Fig. 4B shows an exemplary embodiment of a server device, e.g., server device 430B. In an embodiment, an image device, e.g., image device 420B may capture a scene 415B. Scene 415B may be stored in local memory 440B. In an embodiment, image device 420B may capture the same scene 415B multiple times. In an embodiment, scene 415B may include an unchanged area 416A, which is a portion of the image that has not changed since the last time the scene 415B was captured by the image device 420B. In an embodiment, scene 415B also may include a changed area 416B, which may be a portion of the image that has changed since the last time the scene 415B was captured by the image device 420B. Although changed area 416B is illustrated as polygonal and contiguous in Fig. 4B, this is merely for illustrative purposes, and changed area 416B may be, in some embodiments, nonpolygonal and/or noncontiguous.

[00283] In an embodiment, image device 420B, upon capturing scene 415B, may use an image previously stored in local memory 440B, e.g., previous image 441B, compare it to the current image, e.g., current image 442B, and determine which areas of the scene 415B have changed. The changed areas may be transmitted to server device 430B, e.g., to changed area reception module 432B. This may occur through a changed area transmission 465, as indicated in Fig. 4B.

[00284] Referring again to Fig. 4B, in an embodiment, server device 430B receives the changed area at changed area reception module 432B. Server device 430B also may include an unchanged area addition module 434B, which adds the unchanged areas that were previously stored in a memory of server device 430B (not shown) from a previous transmission from image device 420B. In an embodiment, server device 430B also may include a complete image transmission module 436B configured to transmit the completed image to a user device, e.g., user device 450B, that requested the image.
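
Server-side, the unchanged-area addition can be pictured as pasting the received changed tiles over the last complete image the server holds. A sketch of that recomposition, with the tile representation assumed:

    def reassemble(previous_full_image, changed_tiles, tile=64):
        """Sketch of unchanged area addition module 434B: start from the
        last complete image and overwrite only the changed tiles.

        changed_tiles: maps (tile_row, tile_col) -> 2-D pixel block, as
        produced by the image device's previous/current comparison.
        """
        image = [row[:] for row in previous_full_image]  # unchanged areas
        for (tr, tc), block in changed_tiles.items():
            for r, row in enumerate(block):
                y = tr * tile + r
                image[y][tc * tile:tc * tile + len(row)] = row
        return image

The complete image transmission module 436B would then forward the result to the requesting user device 450B.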

Exemplary Environment 500A

[00285] Referring now to Fig. 5A, Fig. 5A shows an exemplary embodiment of a server device, e.g., server device 530A. In an embodiment, an image device 520A may capture a scene 515 through use of an image sensor array 540, as previously described. The image may be temporarily stored in a local memory 540 (as pictured), or may be partially or wholly stored in a local memory before transmission to a server device 530A. In an embodiment, server device 530A may include an image data reception module 532A. Image data reception module 532A may receive the image from image device 520A. In an embodiment, server device 530A may include data addition module 534A, which may add additional data to the received image data. In an embodiment, the additional data may be visible or invisible, e.g., pixel data or metadata, for example. In an embodiment, the additional data may be advertising data. In an embodiment, the additional data may be context-dependent upon the image data, for example, if the image data is of a football player, the additional data may be statistics about that player, or an advertisement for an online shop that sells that player's jersey.

[00286] In an embodiment, the additional data may be stored in a memory of server device 530A (not shown). In another embodiment, the additional data may be retrieved from an advertising server or another data server. In an embodiment, the additional data may be tailored to one or more characteristics of the user or the user device, e.g., the user may have a setting that labels each player displayed on the screen with that player's last name. Referring again to Fig. 5A, in an embodiment, server device 530A may include a modified data transmission module 536A, which may receive the modified data from data addition module 534A, and transmit the modified data to a user device, e.g., a user device that requested the image data, e.g., user device 550A.


Exemplary Environment 500B

[00288] Referring now to Fig. 5B, Fig. 5B shows an exemplary embodiment of a server device, e.g., server device 530B. In an embodiment, multiple user devices, e.g., user device 502A, user device 502B, and user device 502C, each may send a request for image data from a scene, e.g., scene 515B. Each user device may send a request to a server device, e.g., server device 530B. Server device 530B may consolidate the requests, which may be for various resolutions, shapes, sizes, and other features, into a single combined request 570. Overlapping portions of the request, e.g., as shown in overlapping area 572, may be combined.

[00289] In an embodiment, server device 530B transmits the combined request 570 to the image device 520B. In an embodiment, image device 520B uses the combined request 570 to designate selected pixels 574, which then may be transmitted back to the server device 530B, where the process of combining the requests may be reversed, and each user device 502A, 502B, and 502C may receive the requested image. This process will be discussed in more detail further herein.
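
The consolidation and its later reversal can be expressed directly as set operations on pixel coordinates, with overlapping area 572 naturally transmitted once. This sketch trades efficiency for clarity, and every name in it is illustrative:

    def combine_requests(requests):
        """Union per-user (x, y, w, h) regions into one deduplicated
        pixel set (cf. combined request 570)."""
        combined = set()
        for (x, y, w, h) in requests.values():
            combined |= {(px, py) for px in range(x, x + w)
                                  for py in range(y, y + h)}
        return combined  # overlapping pixels appear only once

    def split_response(pixels, requests):
        """Reverse the combination: give each user only what they asked for.

        pixels: maps (px, py) -> value, as returned by image device 520B.
        """
        out = {}
        for user, (x, y, w, h) in requests.items():
            out[user] = {(px, py): v for (px, py), v in pixels.items()
                         if x <= px < x + w and y <= py < y + h}
        return out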

Exemplary Embodiments of the Various Modules of Portions of Processor 250

[00290] Figs. 6-9 illustrate exemplary embodiments of the various modules that form portions of processor 250. In an embodiment, the modules represent hardware, either that is hard-coded, e.g., as in an application-specific integrated circuit ("ASIC") or that is physically reconfigured through gate activation described by computer instructions, e.g., as in a central processing unit.

[00291] Referring now to Fig. 6, Fig. 6 illustrates an exemplary implementation of the request for particular image data that is part of a scene acquiring module 252. As illustrated in Fig. 6, the request for particular image data that is part of a scene acquiring module may include one or more sub-logic modules in various alternative implementations and embodiments. For example, as shown in Fig. 6, e.g., Fig. 6A, in an embodiment, module 252 may include one or more of request for particular image data that is part of a scene and includes one or more images acquiring module 602 and request for particular image data that is part of a scene receiving module 604. In an embodiment, module 604 may include request for particular image data that is part of a scene receiving from a client device module 606. In an embodiment, module 606 may include one or more of request for particular image data that is part of a scene receiving from a client device configured to display at least a portion of the scene module 608 and request for particular image data that is part of a scene receiving from a client device configured to receive a selection of a particular image module 612. In an embodiment, module 608 may include request for particular image data that is part of a scene receiving from a client device configured to display at least a portion of the scene in a viewfinder module 610. In an embodiment, module 612 may include one or more of request for particular image data that is part of a scene receiving from a client device configured to receive a scene-based selection of a particular image module 614 and request for particular image data that is part of a scene receiving from one or more various devices configured to receive a scene-based selection of a particular image module 616.

[00292] Referring again to Fig. 6, e.g., Fig. 6B, as described above, in an embodiment, module 252 may include one or more of request for particular image data that is part of a scene that is the image data collected by the array of more than one image sensor acquiring module 618 and request for particular image data that is part of a scene that is a representation of the image data collected by the array of more than one image sensor acquiring module 620. In an embodiment, module 620 may include one or more of request for particular image data that is part of a scene that is a sampling of the image data collected by the array of more than one image sensor acquiring module 622, request for particular image data that is part of a scene that is a subset of the image data collected by the array of more than one image sensor acquiring module 624, and request for particular image data that is part of a scene that is a low-resolution version of the image data collected by the array of more than one image sensor acquiring module 626.

[00293] Referring again to Fig. 6, e.g., Fig. 6C, in an embodiment, module 252 may include one or more of request for particular image data that is part of a scene that is a football game acquiring module 628, request for particular image data that is part of a scene that is an area street view acquiring module 630, request for particular image data that is part of a scene that is a tourist destination acquiring module 632, and request for particular image data that is part of a scene that is inside a home acquiring module 634.

[00294] Referring again to Fig. 6, e.g., Fig. 6D, in an embodiment, module 252 may include request for particular image data that is an image that is a portion of the scene acquiring module 636. In an embodiment, module 636 may include one or more of request for particular image data that is an image that is a particular football player and a scene that is a football field acquiring module 638 and request for particular image data that is an image that is a vehicle license plate and a scene that is a highway bridge acquiring module 640.

[00295] Referring again to Fig. 6, e.g., Fig. 6E, in an embodiment, module 252 may include one or more of request for particular image object located in the scene acquiring module 642 and particular image data of the scene that contains the particular image object determining module 644. In an embodiment, module 642 may include one or more of request for particular person located in the scene acquiring module 646, request for a basketball located in the scene acquiring module 648, request for a motor vehicle located in the scene acquiring module 650, and request for a human object representation located in the scene acquiring module 652.

[00296] Referring again to Fig. 6, e.g., Fig. 6F, in an embodiment, module 252 may include one or more of first request for first particular image data from a first requestor receiving module 662, second request for first particular image data from a different second requestor receiving module 664, first received request for first particular image data and second received request for second particular image data combining module 666, first request for first particular image data and second request for second particular image data receiving module 670, and received first request and received second request combining module 672. In an embodiment, module 666 may include first received request for first particular image data and second received request for second particular image data combining into the request for particular image data module 668. In an embodiment, module 672 may include received first request and received second request common pixel deduplicating module 674.

[00297] Referring again to Fig. 6, e.g., Fig. 6G, in an embodiment, module 252 may include one or more of request for particular video data that is part of a scene acquiring module 676, request for particular audio data that is part of a scene acquiring module 678, request for particular image data that is part of a scene receiving from a user device with an audio interface module 680, and request for particular image data that is part of a scene receiving from a microphone-equipped user device with an audio interface module 682.

[00298] Referring now to Fig. 7, Fig. 7 illustrates an exemplary implementation of request for particular image data transmitting to an image sensor array module 254. As illustrated in Fig. 7, the request for particular image data transmitting to an image sensor array module 254 may include one or more sub-logic modules in various alternative implementations and embodiments. For example, as shown in Fig. 7, e.g., Fig. 7A, in an embodiment, module 254 may include one or more of request for particular image data transmitting to an image sensor array module configured to transmit the request to the image sensor array that includes more than one image sensor and that is configured to capture an image that is larger than the requested particular image data 702, request for particular image data transmitting to an image sensor array module configured to transmit the request to the image sensor array that includes two angled image sensors and that is configured to capture the scene that is larger than the requested particular image data 704, request for particular image data transmitting to an image sensor array module configured to transmit the request to the image sensor array that includes more than one image sensor arranged in a grid and that is configured to capture the scene that is larger than the requested particular image data 706, request for particular image data transmitting to an image sensor array module configured to transmit the request to the image sensor array that includes more than one image sensor arranged in a line and that is configured to capture the scene that is larger than the requested particular image data 708, and request for particular image data transmitting to an image sensor array module configured to transmit the request to the image sensor array that includes more than one nonlinearly arranged stationary image sensor and that is configured to capture the scene that is larger than the requested particular image data 710.

[00299] Referring again to Fig. 7, e.g., Fig. 7B, in an embodiment, module 254 may include one or more of request for particular image data transmitting to an image sensor array that includes an array of static image sensors module 712, request for particular image data transmitting to an image sensor array that includes an array of image sensors mounted on a movable platform module 716, request for particular image data transmitting to an image sensor array module configured to transmit the request to the image sensor array that includes more than one image sensor and that is configured to capture the scene that represents more data than the requested particular image data 718, and request for particular image data transmitting to an image sensor array module configured to transmit the request to the image sensor array that includes more than one image sensor and that is configured to capture the scene that represents a greater field of view than the requested particular image data 724. In an embodiment, module 712 may include request for particular image data transmitting to an image sensor array that includes an array of static image sensors that have fixed focal length and fixed field of view module 714. In an embodiment, module 718 may include one or more of request for particular image data transmitting to an image sensor array module configured to transmit the request to the image sensor array that includes more than one image sensor and that is configured to capture the scene that represents ten times as much data as the requested particular image data 720 and request for particular image data transmitting to an image sensor array module configured to transmit the request to the image sensor array that includes more than one image sensor and that is configured to capture the scene that represents more than one hundred times as much data as the requested particular image data 722.

[00300] Referring again to Fig. 7, e.g., Fig. 7C, in an embodiment, module 254 may include one or more of request for particular image data modifying module 726 and modified request for particular image data transmitting to an image sensor array module 728. In an embodiment, module 726 may include designated image data removing from request for particular image data module 730. In an embodiment, module 730 may include designated image data removing from request for particular image data based on previously stored image data module 732. In an embodiment, module 732 may include one or more of designated image data removing from request for particular image data based on previously stored image data retrieved from the image sensor array module 734 and designated image data removing from request for particular image data based on previously stored image data that is an earlier-in-time version of the designated image data module 736. In an embodiment, module 736 may include designated image data removing from request for particular image data based on previously stored image data that is an earlier-in-time version of the designated image data that is a static object module 738.

[00301] Referring again to Fig. 7, e.g., Fig. 7D, in an embodiment, module 254 may include module 726 and module 728, as previously discussed. In an embodiment, module 726 may include one or more of designated image data removing from request for particular image data based on pixel data interpolation/extrapolation module 740, portion of the request for particular image data that was previously stored in memory identifying module 744, and identified portion of the request for the particular image data removing module 746. In an embodiment, module 740 may include designated image data corresponding to one or more static image objects removing from request for particular image data based on pixel data interpolation/extrapolation module 742. In an embodiment, module 744 may include one or more of portion of the request for the particular image data that was previously captured by the image sensor array identifying module 748 and portion of the request for the particular image data that includes at least one static image object that was previously captured by the image sensor array identifying module 750. In an embodiment, module 750 may include portion of the request for the particular image data that includes at least one static image object of a rock outcropping that was previously captured by the image sensor array identifying module 752.

[00302] Referring again to Fig. 7, e.g., Fig. 7E, in an embodiment, module 254 may include one or more of size of request for particular image data determining module 754 and determined-size request for particular image data transmitting to the image sensor array module 756. In an embodiment, module 754 may include one or more of size of request for particular image determining at least partially based on user device property module 758, size of request for particular image determining at least partially based on user device access level module 762, size of request for particular image determining at least partially based on available bandwidth module 764, size of request for particular image determining at least partially based on device usage time module 766, and size of request for particular image determining at least partially based on device available bandwidth module 768. In an embodiment, module 758 may include size of request for particular image determining at least partially based on user device resolution module 760.

[00303] Referring now to Fig. 8, Fig. 8 illustrates an exemplary implementation of particular image data from the image sensor array exclusive receiving module 256. As illustrated in Fig. 8A, the particular image data from the image sensor array exclusive receiving module 256 may include one or more sub-logic modules in various alternative implementations and embodiments. For example, as shown in Fig. 8, e.g., Fig. 8A, in an embodiment, module 256 may include one or more of particular image data from the image sensor array in which other image data is discarded receiving module 802, particular image data from the image sensor array in which other image data is stored at the image sensor array receiving module 804, and particular image data from the image sensor array exclusive near-real-time receiving module 806.

[00304] Referring again to Fig. 8, e.g., Fig. 8B, in an embodiment, module 256 may include one or more of particular image data from the image sensor array exclusive near-real-time receiving module 808 and data from the scene other than the particular image data retrieving at a later time module 810. In an embodiment, module 810 may include one or more of data from the scene other than the particular image data retrieving at a time of available bandwidth module 812, data from the scene other than the particular image data retrieving at an off-peak usage time of the image sensor array module 814, data from the scene other than the particular image data retrieving at a time when no particular image data requests are present at the image sensor array module 816, and data from the scene other than the particular image data retrieving at a time of available image sensor array capacity module 818.

[00305] Referring again to Fig. 8, e.g., Fig. 8C, in an embodiment, module 256 may include one or more of particular image data that includes audio data from the image sensor array exclusive receiving module 820 and particular image data that was determined to contain a particular requested image object from the image sensor array exclusive receiving module 822. In an embodiment, module 822 may include particular image data that was determined to contain a particular requested image object by the image sensor array exclusive receiving module 824.

[00306] Referring now to Fig. 9, Fig. 9 illustrates an exemplary implementation of received particular image data transmitting to at least one requestor module 258. As illustrated in Fig. 9A, the received particular image data transmitting to at least one requestor module 258 may include one or more sub-logic modules in various alternative implementations and embodiments. For example, as shown in Fig. 9, e.g., Fig. 9A, in an embodiment, module 258 may include one or more of received particular image data transmitting to at least one user device requestor module 902, separation of the received particular data into set of one or more requested images executing module 906, and received particular image data transmitting to at least one user device that requested image data that is part of the received particular image data module 912. In an embodiment, module 902 may include received particular image data transmitting to at least one user device that requested at least a portion of the received particular data requestor module 904. In an embodiment, module 906 may include separation of the received particular data into a first requested image and a second requested image executing module 910.

[00307] Referring again to Fig. 9, e.g., Fig. 9B, in an embodiment, module 258 may include one or more of first portion of received particular image data transmitting to the first requestor module 914, second portion of received particular image data transmitting to a second requestor module 916, and received particular image data unaltered transmitting to at least one requestor module 926. In an embodiment, module 914 may include first portion of received particular image data transmitting to the first requestor that requested the first portion module 918. In an embodiment, module 918 may include portion of received particular image data that includes a particular football player transmitting to a television device that requested the football player from a football game module 920. In an embodiment, module 916 may include second portion of received particular image data transmitting to the second requestor that requested the second portion module 922. In an embodiment, module 922 may include portion that contains a view of a motor vehicle transmitting to the second requestor that is a tablet device that requested the view of the motor vehicle module 924.

[00308] Referring again to Fig. 9, e.g., Fig. 9C, in an embodiment, module 258 may include one or more of supplemental data addition to at least a portion of the received particular image data to generate transmission image data facilitating module 928 and generated transmission image data transmitting to at least one requestor module 930. In an embodiment, module 928 may include one or more of advertisement data addition to at least a portion of the received particular image data to generate transmission image data facilitating module 932 and related visual data addition to at least a portion of the received particular image data to generate transmission image data facilitating module 938. In an embodiment, module 932 may include context-based advertisement data addition to at least a portion of the received particular image data to generate transmission image data facilitating module 934. In an embodiment, module 934 may include animal rights donation fund advertisement data addition to at least a portion of the received particular image data that includes a jungle tiger at an oasis to generate transmission image data facilitating module 936. In an embodiment, module 938 may include related fantasy football statistical data addition to at least a portion of the received particular image data of a quarterback to generate transmission image data facilitating module 940.

[00309] Referring again to Fig. 9, e.g., Fig. 9D, in an embodiment, module 258 may include one or more of portion of received particular image data modification to generate transmission image data facilitating module 942 and generated transmission image data transmitting to at least one requestor module 944. In an embodiment, module 942 may include one or more of portion of received particular image data image manipulation modification to generate transmission image data facilitating module 946 and portion of received particular image data redaction to generate transmission image data facilitating module 952. In an embodiment, module 946 may include one or more of portion of received particular image data contrast balancing modification to generate transmission image data facilitating module 948 and portion of received particular image data color balancing modification to generate transmission image data facilitating module 950. In an embodiment, module 952 may include portion of received particular image data redaction to generate transmission image data based on a security clearance level of the requestor facilitating module 954. In an embodiment, module 954 may include portion of received satellite image data that includes a tank redaction to generate transmission image data based on a security clearance level of the requestor facilitating module 956.

[00310] Referring again to Fig. 9, e.g., Fig. 9D, in an embodiment, module 258 may include one or more of lower-resolution version of received particular image data transmitting to at least one requestor module 958 and full-resolution version of received particular image data transmitting to at least one requestor module 960.

[00311] In some implementations described herein, logic and similar implementations may include software or other control structures. Electronic circuitry, for example, may have one or more paths of electrical current constructed and arranged to implement various functions as described herein. In some implementations, one or more media may be configured to bear a device-detectable implementation when such media hold or transmit device-detectable instructions operable to perform as described herein. In some variants, for example, implementations may include an update or modification of existing software or firmware, or of gate arrays or programmable hardware, such as by performing a reception of or a transmission of one or more instructions in relation to one or more operations described herein. Alternatively or additionally, in some variants, an implementation may include special-purpose hardware, software, firmware components, and/or general-purpose components executing or otherwise invoking special-purpose components. Specifications or other implementations may be transmitted by one or more instances of tangible transmission media as described herein, optionally by packet transmission or otherwise by passing through distributed media at various times.

[00312] Following are a series of flowcharts depicting implementations. For ease of understanding, the flowcharts are organized such that the initial flowcharts present implementations via an example implementation and thereafter the following flowcharts present alternate implementations and/or expansions of the initial flowchart(s) as either sub-component operations or additional component operations building on one or more earlier-presented flowcharts. Those having skill in the art will appreciate that the style of presentation utilized herein (e.g., beginning with a presentation of a flowchart(s) presenting an example implementation and thereafter providing additions to and/or further details in subsequent flowcharts) generally allows for a rapid and easy understanding of the various process implementations. In addition, those skilled in the art will further appreciate that the style of presentation used herein also lends itself well to modular and/or object-oriented program design paradigms.

Exemplary Operational Implementation of Processor 250 and Exemplary Variants

[00313] Further, in Fig. 10 and in the figures to follow thereafter, various operations may be depicted in a box-within-a-box manner. Such depictions may indicate that an operation in an internal box may comprise an optional example embodiment of the operational step illustrated in one or more external boxes. However, it should be understood that internal box operations may be viewed as independent operations separate from any associated external boxes and may be performed in any sequence with respect to all other illustrated operations, or may be performed concurrently. Still further, these operations illustrated in Fig. 10 as well as the other operations to be described herein may be performed by at least one of a machine, an article of manufacture, or a composition of matter.

[00314] Those having skill in the art will recognize that the state of the art has progressed to the point where there is little distinction left between hardware, software, and/or firmware implementations of aspects of systems; the use of hardware, software, and/or firmware is generally (but not always, in that in certain contexts the choice between hardware and software can become significant) a design choice representing cost vs. efficiency tradeoffs. Those having skill in the art will appreciate that there are various vehicles by which processes and/or systems and/or other technologies described herein can be effected (e.g., hardware, software, and/or firmware), and that the preferred vehicle will vary with the context in which the processes and/or systems and/or other technologies are deployed. For example, if an implementer determines that speed and accuracy are paramount, the implementer may opt for a mainly hardware and/or firmware vehicle; alternatively, if flexibility is paramount, the implementer may opt for a mainly software implementation; or, yet again alternatively, the implementer may opt for some combination of hardware, software, and/or firmware. Hence, there are several possible vehicles by which the processes and/or devices and/or other technologies described herein may be effected, none of which is inherently superior to the other in that any vehicle to be utilized is a choice dependent upon the context in which the vehicle will be deployed and the specific concerns (e.g., speed, flexibility, or predictability) of the implementer, any of which may vary. Those skilled in the art will recognize that optical aspects of implementations will typically employ optically-oriented hardware, software, and/or firmware.

[00315] Throughout this application, examples and lists are given, with parentheses, the abbreviation "e.g.," or both. Unless explicitly otherwise stated, these examples and lists are merely exemplary and are non-exhaustive. In most cases, it would be prohibitive to list every example and every combination. Thus, smaller, illustrative lists and examples are used, with focus on imparting understanding of the claim terms rather than limiting the scope of such terms.

[00316] Referring now to Fig. 10, Fig. 10 shows operation 1000, e.g., an example operation of message processing device 230 operating in an environment 200. In an embodiment, operation 1000 may include operation 1002 depicting acquiring a request for particular image data that is part of a scene. For example, Fig. 6, e.g., Fig. 6B, shows request for particular image data that is part of a scene acquiring module 252 acquiring (e.g., receiving, e.g., from a device that requested an image, that is any device or set of devices capable of displaying, storing, analyzing, or operating upon an image, e.g., television, computer, laptop, smartphone, etc., or from an entity that requested an image, e.g., a person, an automated monitoring system, an artificial intelligence, or an intelligence amplification (e.g., a computer designed to watch for persons appearing on video or still shots), or otherwise obtaining (e.g., acquiring includes receiving, retrieving, creating, generating, generating a portion of, receiving a location of, receiving access instructions for, receiving a password for, etc.)) a request (e.g., data, in any format that indicates a computationally-based request for data, e.g., image data, from any source, whether properly-formed or not, and which may come from a communications network or port, or an input/output port, or from a human or other entity, or any device) for particular image data (e.g., a set of image data, e.g., graphical representations of things, e.g., that are composed of pixels or other electronic data, or that are captured by an image sensor, e.g., a CCD or a CMOS sensor, or other data, such as audio data and other data on the electromagnetic spectrum, e.g., infrared data, microwave data, etc.) that is part of a scene (e.g., a particular area, and/or data (including graphical data, audio data, and factual/derived data) that makes up the particular area, which may in some embodiments be all of the data, pixel data or otherwise, that is captured by the image sensor array or portions of the image sensor array).

[00317] capturing (e.g., collecting data that includes visual data, e.g., pixel data, sound data, electromagnetic data, nonvisible spectrum data, and the like) that includes one or more images (e.g., graphical representations of things, e.g., that are composed of pixels or other electronic data, or that are captured by an image sensor, e.g., a CCD or a CMOS sensor), through use of an array (e.g., any grouping configured to work together in unison, regardless of arrangement, symmetry, or appearance) of more than one image sensor (e.g., a device, component, or collection of components configured to collect light, sound, or other electromagnetic spectrum data, and/or to convert the collected data into digital data, or perform at least a portion of the foregoing actions).

[00318] Referring again to Fig. 10, operation 1000 may include operation 1004 depicting transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data. For example, Fig. 2, e.g., Fig. 2B, shows request for particular image data transmitting to an image sensor array module 254 selecting (e.g., whether actively or passively, choosing, flagging, designating, denoting, signifying, marking for, taking some action with regard to, changing a setting in a database, creating a pointer to, storing in a particular memory or part/address of a memory, etc.) a particular portion (e.g., some subset of the entire scene that includes some pixel data, whether pre- or post-processing, which may or may not include data from multiple of the array of more than one image sensor) of the scene (e.g., the data, e.g., image data or otherwise (e.g., sound, electromagnetic), captured by the array of more than one image sensor, combined or stitched together, at any stage of processing, pre or post) that includes at least one image (e.g., a portion of pixel or other data that is related temporally or spatially (e.g., contiguous or partly contiguous)), wherein the selected particular portion is smaller (e.g., some objectively measurable feature has a lower value, e.g., size, resolution, color, color depth, pixel data granularity, number of colors, hue, saturation, alpha value, shading) than the scene (e.g., the data, e.g., image data or otherwise (e.g., sound, electromagnetic), captured by the array of more than one image sensor, combined or stitched together, at any stage of processing, pre or post).
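As a rough, non-limiting sketch of selecting a particular portion that is smaller than the stitched scene (the coordinates, dimensions, and NumPy representation are illustrative assumptions only):

```python
import numpy as np

def select_particular_portion(scene: np.ndarray, top: int, left: int,
                              height: int, width: int) -> np.ndarray:
    """Select the requested particular portion of the stitched scene;
    only these pixels need be retained and transmitted."""
    return scene[top:top + height, left:left + width]

# A scene stitched from the image sensor array, far larger than any request.
scene = np.zeros((4000, 8000, 3), dtype=np.uint8)
particular = select_particular_portion(scene, top=1000, left=3000,
                                       height=1080, width=1920)
```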

[00319] Referring again to Fig. 10, operation 1000 may include operation 1006 depicting receiving only the particular image data from the image sensor array. For example, Fig. 2, e.g., Fig. 2B, shows particular image data from the image sensor array exclusive receiving module 256 transmitting only (e.g., not transmitting the parts of the scene that are not part of the selected particular portion) the selected particular portion (e.g., the designated pixel data) from the scene (e.g., the data, e.g., image data or otherwise (e.g., sound, electromagnetic), captured by the array of more than one image sensor, combined or stitched together, at any stage of processing, pre or post) to a remote location (e.g., a device or other component that is separate from the image device, "remote" here not necessarily implying or excluding any particular distance, e.g., the remote device may be a server device, some combination of cloud devices, an individual user device, or some combination of devices).

[00320] Referring again to Fig. 10, operation 1000 may include operation 1008 depicting transmitting the received particular image data to at least one requestor. For example, Fig. 2, e.g., Fig. 2B, shows received particular image data transmitting to at least one requestor module 258 de-emphasizing (e.g., whether actively or passively, taking some action to separate pixels from the scene that are not part of the selected particular portion, including deleting, marking for deletion, storing in a separate location or memory address, flagging, moving, or, in an embodiment, simply not saving the pixels in an area in which they can be readily retained).

[00321] Figs. 11A-11G depict various implementations of operation 1002, depicting acquiring a request for particular image data that is part of a scene according to embodiments. Referring now to Fig. 11A, operation 1002 may include operation 1102 depicting acquiring the request for particular image data of the scene that includes one or more images. For example, Fig. 6, e.g., Fig. 6A, shows request for particular image data that is part of a scene and includes one or more images acquiring module 602 acquiring (e.g., receiving, e.g., from a device that requested an image, that is any device or set of devices capable of displaying, storing, analyzing, or operating upon an image, e.g., television, computer, laptop, smartphone, etc., or from an entity that requested an image, e.g., a person, an automated monitoring system, an artificial intelligence, or an intelligence amplification (e.g., a computer designed to watch for persons appearing on video or still shots)) a request (e.g., data, in any format, that requests an image) for particular image data of the scene that includes one or more images (e.g., the scene, e.g., a street corner, includes one or more images, e.g., images of a wristwatch worn by a person crossing the street corner, images of the building on the street corner, etc.).

[00322] Referring again to Fig. 11A, operation 1002 may include operation 1104 depicting receiving the request for particular image data of the scene. For example, Fig. 6, e.g., Fig. 6A, shows request for particular image data that is part of a scene receiving module 604 receiving (e.g., from a device, e.g., a user's personal laptop device) the request for particular image data (e.g., a particular player from a game) of the scene (e.g., the portions of the game that are captured by the image sensor array).

[00323] Referring again to Fig. 11A, operation 1104 may include operation 1106 depicting receiving the request for particular image data of the scene from a user device. For example, Fig. 6, e.g., Fig. 6A, shows request for particular image data that is part of a scene receiving from a client device module 606 receiving (e.g., receiving a call from an API that is accessing a remote server that sends commands to the image sensor array) the request for particular image data (e.g., a still shot of an area outside a building where any movement has been detected, e.g., a security camera shot) of the scene from a user device (e.g., the API that was downloaded by an independent user is running on that user's device).

[00324] Referring again to Fig. 11A, operation 1106 may include operation 1108 depicting receiving the request for particular image data of the scene from a user device that is configured to display at least a portion of the scene. For example, Fig. 6, e.g., Fig. 6A, shows request for particular image data that is part of a scene receiving from a client device configured to display at least a portion of the scene module 608 receiving the request for particular image data (e.g., image data of a particular animal) of the scene (e.g., image data that includes the sounds and video from an animal oasis) from a user device (e.g., a smart television) that is configured to display at least a portion of the scene (e.g., the data captured by an image sensor array of the animal oasis).

[00325] Referring again to Fig. 11A, operation 1108 may include operation 1110 depicting receiving the request for particular image data of the scene from a user device that is configured to display at least a portion of the scene in a viewfinder. For example, Fig. 6, e.g., Fig. 6A, shows request for particular image data that is part of a scene receiving from a client device configured to display at least a portion of the scene in a viewfinder module 610 receiving the request for particular image data (e.g., images of St. Peter's Basilica in Rome, Italy) of the scene (e.g., image data captured by the image sensor array of the Vatican) from a user device (e.g., a smartphone device) that is configured to display at least a portion of the scene (e.g., the Basilica, to be displayed on the screen as part of a virtual tourism app running on the smartphone) in a viewfinder (e.g., a screen or set of screens, whether real or virtual, that can display and/or process image data). It is noted that a viewfinder may be remote from where the image is captured.

[00326] Referring again to Fig. 11A, operation 1106 may include operation 1112 depicting receiving the request for particular image data of the scene from the user device that is configured to receive a selection of a particular image. For example, Fig. 6, e.g., Fig. 6A, shows request for particular image data that is part of a scene receiving from a client device configured to receive a selection of a particular image module 612 receiving the request for particular image data of the scene from the user device that is configured to receive a selection of a particular image (e.g., the user device, e.g., a computer device, receives an audible command from a user regarding which portion of the scene the user wants to see (e.g., which may involve showing a "demo" version of the scene, e.g., a lower-resolution older version of the scene, for example), and the device receives this selection and then sends the request for the particular image data to the server device).

[00327] Referring again to Fig. 11A, operation 1112 may include operation 1114 depicting receiving the request for particular image data of the scene from the user device that is configured to receive the selection of the particular image, said selection based on a view of the scene. For example, Fig. 6, e.g., Fig. 6A, shows request for particular image data that is part of a scene receiving from a client device configured to receive a scene-based selection of a particular image module 614 receiving the request for particular image data of the scene from the user device that is configured to receive the selection of the particular image, said selection based on a view of the scene.

[00328] Referring again to Fig. 11A, operation 1112 may include operation 1116 depicting receiving the request for particular image data of the scene from the user device that is configured to receive the selection of the particular image, wherein the user device is one or more of a smartphone, television, computer screen, tablet, camera, appliance, and augmented reality device. For example, Fig. 6, e.g., Fig. 6A, shows request for particular image data that is part of a scene receiving from one or more various devices configured to receive a scene-based selection of a particular image module 616 receiving the request for particular image data of the scene from the user device that is configured to receive the selection of the particular image, wherein the user device is one or more of a smartphone, television, computer screen, tablet, camera, appliance, and augmented reality device.

[00329] Referring now to Fig. 11B, operation 1002 may include operation 1118 depicting acquiring the request for particular image data of the scene, wherein the scene is the image data collected by the array of more than one image sensor. For example, Fig. 6, e.g., Fig. 6B, shows request for particular image data that is part of a scene that is the image data collected by the array of more than one image sensor acquiring module 618 acquiring the request for particular image data of the scene (e.g., a live street view of a corner in New York City near Madison Square Garden), wherein the scene is the image data (e.g., video and audio data) collected by the array of more than one image sensor (e.g., a set of twenty-five ten-megapixel CMOS sensors arranged at an angle to provide a full view).

[00330] Referring again to Fig. 11B, operation 1002 may include operation 1120 depicting acquiring the request for particular image data of the scene, wherein the scene is a representation of the image data collected by the array of more than one image sensor. For example, Fig. 6, e.g., Fig. 6B, shows request for particular image data that is part of a scene that is a representation of the image data collected by the array of more than one image sensor acquiring module 620 acquiring the request for particular image data (e.g., an image of a particular street vendor) of the scene (e.g., a city street in Alexandria, VA), wherein the scene is a representation (e.g., metadata, e.g., data about the image data, e.g., a sampling, a subset, a description, a retrieval location) of the image data (e.g., the pixel data) collected by the array of more than one image sensor (e.g., one thousand CMOS sensors of two megapixels each, mounted on a UAV).

[00331] Referring again to Fig. 11B, operation 1120 may include operation 1122 depicting acquiring the request for particular image data of the scene, wherein the scene is a sampling of the image data collected by the array of more than one image sensor. For example, Fig. 6, e.g., Fig. 6B, shows request for particular image data that is part of a scene that is a sampling of the image data collected by the array of more than one image sensor acquiring module 622 acquiring the request for particular image data of the scene, wherein the scene is a sampling (e.g., a subset, selected randomly or through a pattern) of the image data (e.g., an image of a checkout line at a discount store) collected (e.g., gathered, read, stored) by the array of more than one image sensor (e.g., an array of two thirty-megapixel sensors angled towards each other).

[00332] Referring again to Fig. 11B, operation 1120 may include operation 1124 depicting acquiring the request for particular image data of the scene, wherein the scene is a subset of the image data collected by the array of more than one image sensor. For example, Fig. 6, e.g., Fig. 6B, shows request for particular image data that is part of a scene that is a subset of the image data collected by the array of more than one image sensor acquiring module 624 acquiring the request for particular image data (e.g., a particular object inside of a house, e.g., a refrigerator) of the scene (e.g., an interior of a house), wherein the scene is a subset of the image data (e.g., a half of, or a sampling of the whole, or a selected area of, or only the contrast data, etc.) collected by the array of more than one image sensor (e.g., a 10x10 grid of three-megapixel image sensors).

[00333] Referring again to Fig. 11B, operation 1120 may include operation 1126 depicting acquiring the request for particular image data of the scene, wherein the scene is a low-resolution version of the image data collected by the array of more than one image sensor. For example, Fig. 6, e.g., Fig. 6B, shows request for particular image data that is part of a scene that is a low-resolution version of the image data collected by the array of more than one image sensor acquiring module 626 acquiring the request for particular image data (e.g., an image of a particular car crossing a bridge) of the scene (e.g., a highway bridge), wherein the scene is a low-resolution (e.g., "low" here meaning "less than a possible resolution given the equipment that captured the image") version of the image data collected by the array of more than one image sensor.

[00334] Referring now to Fig. 11C, operation 1002 may include operation 1128 depicting acquiring the request for particular image data of a scene that is a football game. For example, Fig. 6, e.g., Fig. 6C, shows request for particular image data that is part of a scene that is a football game acquiring module 628 acquiring the request for particular image data of a scene that is a football game.

[00335] Referring again to Fig. 11C, operation 1002 may include operation 1130 depicting acquiring the request for particular image data of a scene that is a street view of an area. For example, Fig. 6, e.g., Fig. 6C, shows request for particular image data that is part of a scene that is an area street view acquiring module 630 acquiring the request for particular image data that is a street view (e.g., a live or short-delayed view) of an area (e.g., a street corner, a garden oasis, a mountaintop, an airport, etc.).

[00336] Referring again to Fig. 11C, operation 1002 may include operation 1132 depicting acquiring the request for particular image data of a scene that is a tourist destination. For example, Fig. 6, e.g., Fig. 6C, shows request for particular image data that is part of a scene that is a tourist destination acquiring module 632 acquiring the request for particular image data of a scene that is a tourist destination (e.g., the great pyramids of Giza).

[00337] Referring again to Fig. 11C, operation 1002 may include operation 1134 depicting acquiring the request for particular image data of a scene that is an inside of a home. For example, Fig. 6, e.g., Fig. 6C, shows request for particular image data that is part of a scene that is inside a home acquiring module 634 acquiring the request for particular image data of a scene that is inside of a home (e.g., inside a kitchen).

[00338] Referring now to Fig. 11D, operation 1002 may include operation 1136 depicting acquiring the request for particular image data of the scene, wherein the particular image data is an image that is a portion of the scene. For example, Fig. 6, e.g., Fig. 6D, shows request for particular image data that is an image that is a portion of the scene acquiring module 636 acquiring the request for particular image data (e.g., an image of a tiger in a wildlife preserve), wherein the particular image data (e.g., an image of a tiger) is a portion of the scene (e.g., image data of the wildlife preserve).

[00339] Referring again to Fig. 11D, operation 1136 may include operation 1138 depicting acquiring the request for particular image data of the scene, wherein the particular image data includes image data of a particular football player and the scene is a football field. For example, Fig. 6, e.g., Fig. 6D, shows request for particular image data that is an image that is a particular football player and a scene that is a football field acquiring module 638 acquiring the request for particular image data of the scene, wherein the particular image data includes image data of a particular football player and the scene is a football field.

[00340] Referring again to Fig. 11D, operation 1136 may include operation 1140 depicting acquiring the request for particular image data of the scene, wherein the particular image data includes image data of a license plate of a vehicle, and the scene is an image representation of a highway bridge. For example, Fig. 6, e.g., Fig. 6D, shows request for particular image data that is an image that is a vehicle license plate and a scene that is a highway bridge acquiring module 640 acquiring the request for particular image data of the scene, wherein the particular image data includes image data of a license plate of a vehicle, and the scene is a highway bridge (e.g., an image of the highway bridge).

[00341] Referring now to Fig. 11E, operation 1002 may include operation 1142 depicting acquiring a request for a particular image object located in the scene. For example, Fig. 6, e.g., Fig. 6E, shows request for particular image object located in the scene acquiring module 642 acquiring a request for a particular image object (e.g., a particular type of bird) located in the scene (e.g., a bird sanctuary).

[00342] Referring again to Fig. 11E, operation 1002 may include operation 1144, which may appear in conjunction with operation 1142, operation 1144 depicting determining the particular image data of the scene that contains the particular image object. For example, Fig. 6, e.g., Fig. 6E, shows particular image data of the scene that contains the particular image object determining module 644 determining the particular image data (e.g., a 1920x1080 image that contains the particular type of bird) of the scene (e.g., the image of the bird sanctuary) that contains the particular image object (e.g., the particular type of bird).

[00343] Referring again to Fig. 11E, operation 1142 may include operation 1146 depicting acquiring a request for a particular person located in the scene. For example, Fig. 6, e.g., Fig. 6E, shows request for particular person located in the scene acquiring module 646 acquiring a request for a particular person (e.g., a person dressed a certain way, or loitering outside of a warehouse, or a particular celebrity or athlete, or a business tracking a specific worker) located in the scene (e.g., image data).

[00344] Referring again to Fig. 11E, operation 1142 may include operation 1148 depicting acquiring a request for a basketball located in the scene that is a basketball arena. For example, Fig. 6, e.g., Fig. 6E, shows request for a basketball located in the scene acquiring module 648 acquiring a request for a basketball (e.g., the image data corresponding to a basketball) located in the scene that is a basketball arena.

[00345] Referring again to Fig. 11E, operation 1142 may include operation 1150 depicting acquiring a request for a motor vehicle located in the scene. For example, Fig. 6, e.g., Fig. 6E, shows request for a motor vehicle located in the scene acquiring module 650 acquiring a request for a motor vehicle located in the scene.

[00346] Referring again to Fig. 11E, operation 1142 may include operation 1152 depicting acquiring a request for any human object representations located in the scene. For example, Fig. 6, e.g., Fig. 6E, shows request for a human object representation located in the scene acquiring module 652 acquiring a request for any human object representations (e.g., when any image data corresponding to a human walks by, e.g., for a security camera application, or an application that takes an action when a person approaches, e.g., an automated terminal) located in the scene.

[00347] Referring again to Fig. 11E, operation 1142 may include operation 1153 depicting determining the particular image data of the scene that contains the particular image object through application of automated pattern recognition to scene image data. For example, Fig. 6, e.g., Fig. 6E, shows particular image data of the scene that contains the particular image object determining through automated pattern recognition application to scene data module 653 determining the particular image data of the scene (e.g., a tennis match) that contains the particular image object (e.g., a tennis player) through application of automated pattern recognition (e.g., recognizing human images through machine recognition, e.g., shape-based classification, head-shoulder detection, motion-based detection, and component-based detection) to scene image data.
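A toy sketch of such automated pattern recognition is given below; exhaustive template matching by sum of squared differences stands in for the shape-based, head-shoulder, motion-based, and component-based detectors named above, which are assumed rather than implemented here:

```python
import numpy as np

def find_image_object(scene_gray: np.ndarray, template: np.ndarray):
    """Locate the particular image object in scene image data by
    exhaustive template matching (sum of squared differences).
    Returns the (row, col) of the best-matching position."""
    th, tw = template.shape
    best_score, best_pos = None, (0, 0)
    for r in range(scene_gray.shape[0] - th + 1):
        for c in range(scene_gray.shape[1] - tw + 1):
            patch = scene_gray[r:r + th, c:c + tw].astype(float)
            score = np.sum((patch - template.astype(float)) ** 2)
            if best_score is None or score < best_score:
                best_score, best_pos = score, (r, c)
    return best_pos

scene = np.random.randint(0, 256, (120, 160)).astype(np.uint8)
template = scene[40:60, 70:90].copy()
print(find_image_object(scene, template))  # expected: (40, 70)
```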

[00348] Referring again to Fig. 11E, operation 1144 may include operation 1154 depicting determining the particular image data of the scene that contains the particular image object through identification of the particular image object in previous scene data that is image data of the scene from at least one previous moment in time. For example, Fig. 6, e.g., Fig. 6E, shows particular image data of the scene that contains the particular image object determining through object identification in previous scene data module 654 determining the particular image data of the scene that contains the particular image object through identification of the particular image object in previous scene data that is image data of the scene from at least one previous moment in time.

[00349] Referring again to Fig. 11E, operation 1144 may include operation 1156 depicting determining the particular image data of the scene that contains the particular image object through identification of the particular image object in cached scene data. For example, Fig. 6, e.g., Fig. 6E, shows particular image data of the scene that contains the particular image object determining through object identification in cached previous scene data module 656.

[00350] Referring again to Fig. 11E, operation 1156 may include operation 1158 depicting determining the particular image data of the scene that contains the particular image object through identification of the particular image object in cached scene data that was previously transmitted from the image sensor array that includes more than one image sensor. For example, Fig. 6, e.g., Fig. 6E, shows particular image data of the scene that contains the particular image object determining through object identification in cached previous scene data previously transmitted from the image sensor array module 658 determining the particular image data of the scene that contains the particular image object (e.g., a particular landmark, or animal at a watering hole) through identification of the particular image object (e.g., a lion at a watering hole) in cached scene data that was previously transmitted from the image sensor array (e.g., a set of twenty-five image sensors) that includes more than one image sensor (e.g., a three-megapixel CMOS sensor).

[00351] Referring again to Fig. 11E, operation 1156 may include operation 1160 depicting determining the particular image data of the scene that contains the particular image object through identification of the particular image object in cached scene data that was previously transmitted from the image sensor array that includes more than one image sensor at a time when bandwidth was available for a connection to the image sensor array that includes more than one image sensor. For example, Fig. 6, e.g., Fig. 6E, shows particular image data of the scene that contains the particular image object determining through object identification in cached previous scene data previously transmitted from the image sensor array at a particular time module 660 determining the particular image data of the scene that contains the particular image object (e.g., a specific item in a shopping cart that doesn't match a cash-register-generated list of what was purchased by the person wheeling the cart) through identification of the particular image object (e.g., the specific item, e.g., a toaster oven) in cached scene data (e.g., data that is stored in the server that was from a previous point in time, whether one-millionth of a second previously or years previously, although in the example the cached scene data is from a previous frame, e.g., less than one second prior) that was previously transmitted from the image sensor array that includes more than one image sensor at a time when bandwidth was available for connection to the image sensor array that includes more than one image sensor.
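One way such cached scene data might be managed is sketched below; the class name, age-based staleness rule, and region keying are assumptions for illustration, not the disclosed mechanism:

```python
import time

class SceneCache:
    """Cache of scene data previously transmitted from the image sensor
    array (e.g., while bandwidth was available), consulted before
    requesting the region again."""

    def __init__(self, max_age_seconds: float):
        self.max_age = max_age_seconds
        self._entries = {}  # region tuple -> (timestamp, image_data)

    def put(self, region, image_data):
        self._entries[region] = (time.time(), image_data)

    def get(self, region):
        entry = self._entries.get(region)
        if entry is None:
            return None
        timestamp, image_data = entry
        if time.time() - timestamp > self.max_age:
            return None  # stale; re-request from the sensor array
        return image_data

cache = SceneCache(max_age_seconds=1.0)
cache.put((0, 0, 1080, 1920), b"...pixel data...")
hit = cache.get((0, 0, 1080, 1920))  # served from cache if still fresh
```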

[00352] Referring now to Fig. 11F, operation 1002 may include operation 1162 depicting receiving a first request for first particular image data from the scene from a first requestor. For example, Fig. 6, e.g., Fig. 6F, shows first request for first particular image data from a first requestor receiving module 662 receiving a first request (e.g., a request for a 1920x1080 "HD" view) for first particular image data (e.g., a first animal, e.g., a tiger, at a watering hole scene) from the scene (e.g., a watering hole) from a first requestor (e.g., a family watching the watering hole from an internet-connected television).

[00353] Referring again to Fig. 11F, operation 1002 may include operation 1164, which may appear in conjunction with operation 1162, operation 1164 depicting receiving a second request for second particular image data from the scene from a second requestor that is different than the first requestor. For example, Fig. 6, e.g., Fig. 6F, shows second request for second particular image data from a different second requestor receiving module 664 receiving a second request (e.g., a 640x480 view for a smartphone) for second particular image data (e.g., a second animal, e.g., a pelican) from the scene (e.g., a watering hole) from a second requestor (e.g., a person watching a stream of the watering hole on their smartphone).

[00354] Referring again to Fig. 11F, operation 1002 may include operation 1166 depicting combining a received first request for first particular image data from the scene and a received second request for second particular image data from the scene. For example, Fig. 6, e.g., Fig. 6F, shows first received request for first particular image data and second received request for second particular image data combining module 666 combining a received first request for first particular image data (e.g., a request to watch a running back of a football team) from the scene and a received second request for second particular image data (e.g., a request to watch a quarterback of the same football team) from the scene (e.g., a football game).

[00355] Referring again to Fig. 11F, operation 1166 may include operation 1168 depicting combining the received first request for first particular image data from the scene and the received second request for second particular image data from the scene into the request for particular image data that consolidates overlapping requested image data. For example, Fig. 6, e.g., Fig. 6F, shows first received request for first particular image data and second received request for second particular image data combining into the request for particular image data module 668 combining the received first request for first particular image data (e.g., request from device 502A, as shown in Fig. 5B) from the scene and the received second request for second particular image data (e.g., the request from device 502B, as shown in Fig. 5B) from the scene into the request for particular image data that consolidates overlapping requested image data (e.g., the selected pixels 574, as shown in Fig. 5B).
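A small sketch of consolidating overlapping requested image data follows; representing requests as (top, left, height, width) rectangles and deduplicating at pixel granularity is an illustrative assumption (a real implementation would more likely merge rectangles than enumerate pixels):

```python
def consolidate_requests(requests):
    """Combine rectangular requests into one set of pixel coordinates so
    that pixels requested by multiple requestors are fetched only once.
    Each request is a (top, left, height, width) tuple."""
    wanted = set()
    for top, left, height, width in requests:
        for r in range(top, top + height):
            for c in range(left, left + width):
                wanted.add((r, c))
    return wanted

# Two overlapping viewer requests; the 6-pixel overlap is counted once.
combined = consolidate_requests([(0, 0, 4, 6), (2, 3, 4, 6)])
print(len(combined))  # 42 rather than 48
```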

[00356] Referring again to Fig. 11F, operation 1166 may include operation 1170 depicting receiving a first request for first particular image data from the scene and a second request for second particular image data from the scene. For example, Fig. 6, e.g., Fig. 6F, shows first request for first particular image data and second request for second particular image data receiving module 670 receiving a first request for first particular image data (e.g., image data of a particular street corner from a street view) from the scene (e.g., a live street view of DoG street in Alexandria, VA) and a second request for second particular image data (e.g., image data of the opposite corner of the live street view) from the scene (e.g., the live street view of DoG street in Alexandria, VA).

[00357] Referring again to Fig. 11F, operation 1002 may include operation 1172, which may appear in conjunction with operation 1170, operation 1172 depicting combining the received first request and the received second request into the request for particular image data. For example, Fig. 6, e.g., Fig. 6F, shows received first request and received second request combining module 672 combining the received first request (e.g., a 1920x1080 request for a virtual tourism view of the Sphinx) and the received second request (e.g., a 410x210 request for a virtual tourism view of an overlapping, but different part of the Sphinx) into the request for particular image data (e.g., the request that will be sent to the image sensor array that regards which pixels will be kept).

[00358] Referring again to Fig. 11F, operation 1172 may include operation 1174 depicting removing common pixel data between the received first request and the received second request. For example, Fig. 6, e.g., Fig. 6F, shows received first request and received second request common pixel deduplicating module 674 removing (e.g., deleting, marking, flagging, erasing, storing in a different format, storing in a different place, coding/compressing using a different algorithm, changing but not necessarily destroying, destroying, allowing to be written over by new data, etc.) common pixel data (e.g., pixel data that was part of more than one request) between the received first request (e.g., a request to view the left fielder of the Washington Nationals from a baseball game) and the received second request (e.g., a request to view the right fielder of the Washington Nationals from a baseball game).

[00359] Referring now to Fig. 11G, operation 1002 may include operation 1176 depicting acquiring the request for particular image data that is part of the scene, wherein the particular image data includes video data. For example, Fig. 6, e.g., Fig. 6G, shows request for particular video data that is part of a scene acquiring module 676 acquiring the request for particular image data that is part of the scene, wherein the particular image data includes video data (e.g., streaming data, e.g., as in a live street view of a corner near the Verizon Center in Washington, DC).

[00360] Referring again to Fig. 11G, operation 1002 may include operation 1178 depicting acquiring the request for particular image data that is part of the scene, wherein the particular image data includes audio data. For example, Fig. 6, e.g., Fig. 6G, shows request for particular audio data that is part of a scene acquiring module 678 acquiring the request for particular image data that is part of the scene, wherein the particular image data includes audio data (e.g., data of the sounds at an oasis, or of people in the image that are speaking).

[00361] Referring again to Fig. 11G, operation 1002 may include operation 1180 depicting acquiring the request for particular image data that is part of the scene from a user device that receives the request for particular image data through an audio interface. For example, Fig. 6, e.g., Fig. 6G, shows request for particular image data that is part of a scene receiving from a user device with an audio interface module 680 acquiring the request for particular image data (e.g., to watch a particular person on the field at a football game, e.g., the quarterback) that is part of the scene (e.g., the scene of a football stadium during a game) from a user device (e.g., an internet-connected television) that receives the request for particular image data through an audio interface (e.g., the person speaks to an interface built into the television to instruct the television regarding which player to follow).

[00362] Referring again to Fig. 11G, operation 1002 may include operation 1182 depicting acquiring the request for particular image data that is part of the scene from a user device that has a microphone that receives a spoken request for particular image data from the user. For example, Fig. 6, e.g., Fig. 6G, shows request for particular image data that is part of a scene receiving from a microphone-equipped user device with an audio interface module 682 acquiring the request for particular image data (e.g., an image of a cheetah at a jungle oasis) that is part of the scene (e.g., a jungle oasis) from a user device that has a microphone (e.g., a smartphone device) that receives a spoken request (e.g., "zoom in on the cheetah") for particular image data from the user (e.g., the person operating the smartphone device that wants to zoom in on the cheetah).
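Purely as an illustrative sketch of turning such a spoken request into a particular-image-data request (the phrase-to-region table below is hypothetical, and speech-to-text is assumed to be handled upstream):

```python
# Hypothetical mapping from recognized object names to scene regions,
# each region given as (top, left, height, width).
OBJECT_REGIONS = {
    "cheetah": (400, 1200, 480, 640),
    "quarterback": (200, 900, 360, 480),
}

def request_from_utterance(utterance: str):
    """Turn a spoken request such as 'zoom in on the cheetah' into a
    request for the particular image data covering that object."""
    for name, region in OBJECT_REGIONS.items():
        if name in utterance.lower():
            return {"object": name, "region": region}
    return None  # no recognized object in the utterance

print(request_from_utterance("Zoom in on the cheetah"))
```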

[00363] Figs. 12A-12E depict various implementations of operation 1004, depicting transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data, according to embodiments. Referring now to Fig. 12A, operation 1004 may include operation 1202 depicting transmitting the request for the particular image data of the scene to the image sensor array that includes more than one image sensor and that is configured to capture an image that is larger than the requested image data. For example, Fig. 7, e.g., Fig. 7A, shows request for particular image data transmitting to an image sensor array that includes more than one image sensor and to capture a larger image module 702 transmitting the request for the particular image data of the scene to the image sensor array (e.g., an array of twelve sensors of ten megapixels each) that includes more than one image sensor and that is configured to capture an image that is larger than the requested image data (e.g., the requested image data is 1920x1080 (e.g., roughly 2 million pixels), and the captured area is roughly 120,000,000 pixels, minus overlap).

[00364] Referring again to Fig. 12A, operation 1004 may include operation 1204 depicting transmitting the request for the particular image data of the scene to the image sensor array that includes two image sensors arranged side by side and angled toward each other and that is configured to capture an image that is larger than the requested image data. For example, Fig. 7, e.g., Fig. 7A, shows request for particular image data transmitting to an image sensor array module configured to transmit the request to the image sensor array that includes two angled image sensors and that is configured to capture the scene that is larger than the requested particular image data 704 transmitting the request for the particular image data of the scene (e.g., a chemistry lab) to the image sensor array that includes two image sensors arranged side by side and angled toward each other and that is configured to capture an image that is larger than the requested image data (e.g., the requested image is a zoomed-out view of the lab that can be expressed in 1.7 million pixels, but the cameras capture 10.5 million pixels).

[00365] Referring again to Fig. 12A, operation 1004 may include operation 1206 depicting transmitting the request for the particular image data of the scene to the image sensor array that includes the array of image sensors arranged in a grid and that is configured to capture an image that is larger than the requested image data. For example, Fig. 7, e.g., Fig. 7A, shows request for particular image data transmitting to an image sensor array module configured to transmit the request to the image sensor array that includes more than one image sensor arranged in a grid and that is configured to capture the scene that is larger than the requested particular image data 706 transmitting the request for the particular image data of the scene to the image sensor array that includes the array of image sensors arranged in a grid and that is configured to capture an image that is larger than the requested image data (e.g., the image data requested is of a smaller area (e.g., the area around a football player) than the image (e.g., the entire football field)).

[00366] Referring again to Fig. 12A, operation 1004 may include operation 1208 depicting transmitting the request for the particular image data of the scene to the image sensor array that includes the array of image sensors arranged in a line such that a field of view is greater than 120 degrees and that is configured to capture an image that is larger than the requested image data. For example, Fig. 7, e.g., Fig. 7A, shows request for particular image data transmitting to an image sensor array module configured to transmit the request to the image sensor array that includes more than one image sensor arranged in a line and that is configured to capture the scene that is larger than the requested particular image data 708 transmitting the request for the particular image data of the scene to the image sensor array that includes the array of image sensors arranged in a line such that a field of view is greater than 120 degrees and that is configured to capture an image (e.g., an image of a highway) that is larger than the requested image data (e.g., an image of one or more of the cars on the highway).

[00367] Referring again to Fig. 12A, operation 1004 may include operation 1210 depicting transmitting the request for the particular image data of the scene to the image sensor array that includes a grouping of nonlinearly, nonsequentially arranged stationary image sensors and that is configured to capture an image that is larger than the requested image data. For example, Fig. 7, e.g., Fig. 7A, shows request for particular image data transmitting to an image sensor array module configured to transmit the request to the image sensor array that includes more than one nonlinearly arranged stationary image sensor and that is configured to capture the scene that is larger than the requested particular image data 710 transmitting the request for the particular image data of the scene to the image sensor array that includes a grouping of nonlinearly, nonsequentially arranged stationary image sensors (e.g., five-megapixel CCD sensors) and that is configured to capture an image that is larger than the requested image data.

[00368] Referring now to Fig. 12B, operation 1004 may include operation 1212 depicting transmitting the request for the particular image data of the scene to the image sensor array that includes the array of static image sensors and that is configured to capture an image that is larger than the requested image data. For example, Fig. 7, e.g., Fig. 7B, shows request for particular image data transmitting to an image sensor array that includes an array of static image sensors module 712 transmitting the request for the particular image data of the scene to the image sensor array that includes the array of static image sensors and that is configured to capture an image that is larger than the requested image data.

[00369] Referring again to Fig. 12B, operation 1212 may include operation 1214 depicting transmitting the request for the particular image data of the scene to the image sensor array that includes the array of image sensors that have a fixed focal length and a fixed field of view and that is configured to capture an image that is larger than the requested image data. For example, Fig. 7, e.g., Fig. 7B, shows request for particular image data transmitting to an image sensor array that includes an array of static image sensors that have fixed focal length and fixed field of view module 714 transmitting the request for the particular image data (e.g., an image of a black bear) of the scene (e.g., a mountain watering hole) to the image sensor array that includes the array of image sensors (e.g., twenty-five megapixel CMOS sensors) that have a fixed focal length and a fixed field of view and that is configured to capture an image that is larger than the requested image data (e.g., the image requested is ultra high resolution but represents a smaller area than what is captured in the scene).

[00370] Referring again to Fig. 12B, operation 1004 may include operation 1216 depicting transmitting the request for the particular image data of the scene to the image sensor array that includes the array of image sensors mounted on a movable platform and that is configured to capture the scene that is larger than the requested image data. For example, Fig. 7, e.g., Fig. 7B, shows request for particular image data transmitting to an image sensor array that includes an array of image sensors mounted on a movable platform module 716 transmitting the request for the particular image data of the scene to the image sensor array that includes the array of image sensors mounted on a movable platform (e.g., a movable dish, or a UAV) and that is configured to capture the scene (e.g., the scene is a wide angle view of a city) that is larger than the requested image data (e.g., one building or street corner of the city).

[00371] Referring again to Fig. 12B, operation 1004 may include operation 1218 depicting transmitting the request for the particular image data of the scene to the image sensor array that includes more than one image sensor and that is configured to capture the scene that represents more image data than the requested particular image data. For example, Fig. 7, e.g., Fig. 7B, shows request for particular image data transmitting to an image sensor array module configured to transmit the request to the image sensor array that includes more than one image sensor and that is configured to capture the scene that represents more data than the requested particular image data 718 transmitting the request for the particular image data of the scene to the image sensor array that includes more than one image sensor and that is configured to capture the scene that represents more image data than the requested particular image data.

[00372] Referring again to Fig. 12B, operation 1218 may include operation 1220 depicting transmitting the request for the particular image data of the scene to the image sensor array that includes more than one image sensor and that is configured to capture the scene that represents more than ten times as much image data as the requested particular image data. For example, Fig. 7, e.g., Fig. 7B, shows request for particular image data transmitting to an image sensor array module configured to transmit the request to the image sensor array that includes more than one image sensor and that is configured to capture the scene that represents ten times as much data as the requested particular image data 720 transmitting the request for the particular image data of the scene to the image sensor array that includes more than one image sensor and that is configured to capture the scene that represents more than ten times (e.g., twenty million pixels) as much image data as the requested particular image data (e.g., 1.8 million pixels).

[00373] Referring again to Fig. 12B, operation 1218 may include operation 1222 depicting transmitting the request for the particular image data of the scene to the image sensor array that includes more than one image sensor and that is configured to capture the scene that represents more than one hundred times as much image data as the requested particular image data. For example, Fig. 7, e.g., Fig. 7B, shows request for particular image data transmitting to an image sensor array module configured to transmit the request to the image sensor array that includes more than one image sensor and that is configured to capture the scene that represents more than one hundred times as much data as the requested particular image data 722 transmitting the request for the particular image data (e.g., a 1920x1080 image of a red truck crossing a bridge) of the scene (e.g., a highway bridge) to the image sensor array (e.g., a set of one hundred sensors) that includes more than one image sensor (e.g., twenty sensors each of two megapixel, four megapixel, six megapixel, eight megapixel, and ten megapixel) and that is configured to capture the scene that represents more than one hundred times (e.g., 600 million pixels vs. the requested two million pixels) as much image data as the requested particular image data.

[00374] Referring again to Fig. 12B, operation 1004 may include operation 1224 depicting transmitting the request for the particular image data to the image sensor array that is configured to capture the scene that represents a greater field of view than the requested particular image data. For example, Fig. 7, e.g., Fig. 7B, shows request for particular image data transmitting to an image sensor array module configured to transmit the request to the image sensor array that includes more than one image sensor and that is configured to capture the scene that represents a greater field of view than the requested particular image data 724 transmitting the request for the particular image data (e.g., an image of a bakery shop on a corner) to the image sensor array that is configured to capture the scene (e.g., a live street view of a busy street corner) that represents a greater field of view (e.g., the entire corner) than the requested image data (e.g., just the bakery).

[00375] Referring now to Fig. 12C, operation 1004 may include operation 1226 depicting modifying the request for the particular image data. For example, Fig. 7, e.g., Fig. 7C, shows request for particular image data modifying module 726 modifying (e.g., altering, changing, adding to, subtracting from, deleting, supplementing, changing the form of, changing an attribute of, etc.) the request for the particular image data (e.g., a request for an image of a baseball player).

[00376] Referring again to Fig. 12C, operation 1004 may include operation 1228, which may appear in conjunction with operation 1226, operation 1228 depicting transmitting the modified request for the particular image data to the image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data. For example, Fig. 7, e.g., Fig. 7C, shows modified request for particular image data transmitting to an image sensor array module 728 transmitting the modified request (e.g., the request increases the area around the baseball player that was requested) for the particular image data to the image sensor array that includes more than one image sensor and that is configured to capture the scene (e.g., a baseball game at a baseball stadium) that is larger than the requested particular image data.

[00377] Referring again to Fig. 12C, operation 1226 may include operation 1230 depicting removing designated image data from the request for the particular image data. For example, Fig. 7, e.g., Fig. 7C, shows designated image data removing from request for particular image data module 730 removing designated image data (e.g., image data of a static object that has already been captured and stored in memory, e.g., a building from a live street view, or a car that has not moved since the last request) from the request for the particular image data (e.g., a request to see a part of the live street view).

[00378] Referring again to Fig. 12C, operation 1230 may include operation 1232 depicting removing designated image data from the request for the particular image data based on previously stored image data. For example, Fig. 7, e.g., Fig. 7C, shows designated image data removing from request for particular image data based on previously stored image data module 732 removing designated image data (e.g., image data of a static object that has already been captured and stored in memory, e.g., a building from a live street view, or a car that has not moved since the last request) from the request for the particular image data (e.g., a request to see a part of the live street view) based on previously stored image data (e.g., the most recently requested image has the car in it already, and so it will not be checked again for another sixty frames of captured image data, for example).
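A compact sketch of how static image data might be identified and dropped from a request, using tile-wise frame differencing against previously stored image data (the tile size, threshold, and NumPy representation are illustrative assumptions):

```python
import numpy as np

def changed_tiles(previous: np.ndarray, current: np.ndarray,
                  tile: int = 32, threshold: float = 8.0):
    """Yield (top, left) for tiles whose mean absolute difference from the
    previously stored frame exceeds the threshold; static tiles (e.g., a
    building, or a car that has not moved) are omitted from the request."""
    rows, cols = previous.shape[:2]
    for top in range(0, rows, tile):
        for left in range(0, cols, tile):
            prev_t = previous[top:top + tile, left:left + tile].astype(float)
            curr_t = current[top:top + tile, left:left + tile].astype(float)
            if np.mean(np.abs(curr_t - prev_t)) > threshold:
                yield (top, left)

prev = np.zeros((128, 128), dtype=np.uint8)
curr = prev.copy()
curr[0:32, 0:32] = 200                   # only one tile has changed
print(list(changed_tiles(prev, curr)))   # [(0, 0)]
```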

[00379] Referring again to Fig. 12C, operation 1232 may include operation 1234 depicting removing designated image data from the request for the particular image data based on previously stored image data retrieved from the image sensor array. For example, Fig. 7, e.g., Fig. 7C, shows designated image data removing from request for particular image data based on previously stored image data retrieved from the image sensor array module 734 removing designated image data from the request for the particular image data based on previously stored image data retrieved from the image sensor array (e.g., the image sensor array previously sent an older version of the data that included a static object, e.g., a part of a bridge when the scene is a highway bridge, and so, from a request for the scene that includes part of the bridge, the portion of the bridge that is static is removed).

[00380] Referring again to Fig. 12C, operation 1232 may include operation 1236 depicting removing designated image data from the request for the particular image data based on previously stored image data that is an earlier-in-time version of the designated image data. For example, Fig. 7, e.g., Fig. 7C, shows designated image data removing from request for particular image data based on previously stored image data that is an earlier-in-time version of the designated image data module 736 removing designated image data (e.g., portions of a stadium) from the request for the particular image data (e.g., a request to view a player inside a stadium for a game) based on previously stored image data (e.g., image data of the stadium) that is an earlier-in-time version of the designated image data (e.g., the image data of the stadium from one hour previous, or from one frame previous).

[00381] Referring again to Fig. 12C, operation 1236 may include operation 1238 depicting removing image data of one or more static objects from the request for the particular image data that is a view of a street, based on previously stored image data of the one or more static objects from an earlier-in-time version captured by the image sensor array. For example, Fig. 7, e.g., Fig. 7C, shows designated image data removing from request for particular image data based on previously stored image data that is an earlier-in-time version of the designated image data that is a static object module 738 removing image data of one or more static objects from the request for the particular image data that is a view of a street, based on previously stored image data of the one or more static objects from an earlier-in-time version captured by the image sensor array.

[00382] Referring now to Fig. 12D, operation 1226 may include operation 1240 depicting removing portions of the request for the particular image data through pixel interpolation or extrapolation of portions of the request for the particular image data. For example, Fig. 7, e.g., Fig. 7D, shows designated image data removing from request for particular image data based on pixel data interpolation/extrapolation module 740 removing portions of the request for the particular image data (e.g., portions of a uniform building) through pixel interpolation (e.g., filling in the middle of the building based on extrapolation of a known pattern of the building) of portions of the request for the particular image data (e.g., a request for a live street view that includes a building).
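
As an illustrative sketch only, the interpolation described above might reconstruct omitted interior pixel columns of a uniform region from its two requested edges; this assumes the omitted region varies smoothly, and the function name is hypothetical:

    import numpy as np

    def fill_region_by_interpolation(left_edge: np.ndarray,
                                     right_edge: np.ndarray,
                                     width: int) -> np.ndarray:
        """Linearly interpolate omitted interior columns between two known edges.

        left_edge, right_edge: (height, channels) pixel columns that were requested.
        width: number of interior columns removed from the request.
        """
        weights = np.linspace(0.0, 1.0, width + 2)[1:-1]  # interior weights only
        w = weights[:, None, None]                        # shape (width, 1, 1)
        interior = (1.0 - w) * left_edge[None] + w * right_edge[None]
        return interior.astype(left_edge.dtype)           # (width, height, channels)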

[00383] Referring again to Fig. 12D, operation 1240 may include operation 1242 depicting removing one or more static objects from the request for the particular image data through pixel interpolation of the portions of the request for the particular image data. For example, Fig. 7, e.g., Fig. 7D, shows designated image data corresponding to one or more static image objects removing from request for particular image data based on pixel data interpolation/extrapolation module 742 removing one or more static objects (e.g., a brick of a pyramid) through pixel interpolation (e.g., filling in the middle of the pyramid based on extrapolation of a known pattern of the pyramid) of portions of the request for the particular image data (e.g., a request for a live street view that includes a building).

[00384] Referring again to Fig. 12D, operation 1226 may include operation 1244 depicting identifying at least one portion of the request for the particular image data that is already stored in a memory. For example, Fig. 7, e.g., Fig. 7D, shows portion of the request for particular image data that was previously stored in memory identifying module 744 identifying at least one portion of the request for the particular image data (e.g., a request for a virtual tourism exhibit of which a part has been cached in memory from a previous access) that is already stored in a memory (e.g., a memory of the server device, e.g., memory 245).

[00385] Referring again to Fig. 12D, operation 1226 may include operation 1246, which may appear in conjunction with operation 1244, operation 1246 depicting removing the identified portion of the request for the particular image data. For example, Fig. 7, e.g., Fig. 7D, shows identified portion of the request for the particular image data removing module 746 removing the identified portion of the request for the particular image data (e.g., removing the part of the request that requests the image data that is already stored in a memory of the server).

[00386] Referring again to Fig. 12D, operation 1226 may include operation 1248 depicting identifying at least one portion of the request for particular image data that was previously captured by the image sensor array. For example, Fig. 7, e.g., Fig. 7D, shows portion of the request for the particular image data that was previously captured by the image sensor array identifying module 748 identifying at least one portion of the request for particular image data that was previously captured by the image sensor array (e.g., an array of twenty-five two megapixel CMOS sensors).

[00387] Referring again to Fig. 12D, operation 1226 may include operation 1250 depicting identifying one or more static objects of the request for particular image data that was previously captured by the image sensor array. For example, Fig. 7, e.g., Fig. 7D, shows portion of the request for the particular image data that includes at least one static image object that was previously captured by the image sensor array identifying module 750 identifying one or more static objects (e.g., buildings, roads, trees, etc.) of the request for particular image data (e.g., image data of a part of a rural town) that was previously captured by the image sensor array.

[00388] Referring again to Fig. 12D, operation 1250 may include operation 1252 depicting identifying one or more natural rock outcroppings of the request for particular image data that was previously captured by the image sensor array. For example, Fig. 7, e.g., Fig. 7D, shows portion of the request for the particular image data that includes at least one static image object of a rock outcropping that was previously captured by the image sensor array identifying module 752 identifying one or more natural rock outcroppings of the request for particular image data that was previously captured by the image sensor array (e.g., an array of twenty-five two megapixel CMOS sensors).

[00389] Referring now to Fig. 12E, operation 1004 may include operation 1254 depicting determining a size of the request for the particular image data. For example, Fig. 7, e.g., Fig. 7E, shows size of request for particular image data determining module 754 determining a size (e.g., a number of pixels, or a transmission speed, or a number of frames per second) of the request for the particular image data (e.g., data of a lion at a jungle oasis).

[00390] Referring again to Fig. 12E, operation 1004 may include operation 1256, which may appear in conjunction with operation 1254, operation 1256 depicting transmitting the request for the particular image data for which the size has been determined to the image sensor array that is configured to capture the scene. For example, Fig. 7, e.g., Fig. 7E, shows determined-size request for particular image data transmitting to the image sensor array module 756 transmitting the request for the particular image data for which the size has been determined to the image sensor array that is configured to capture the scene (e.g., a scene of an interior of a home).

[00391] Referring again to Fig. 12E, operation 1254 may include operation 1258 depicting determining the size of the request for the particular image data at least partially based on a property of a user device that requested at least a portion of the particular image data. For example, Fig. 7, e.g., Fig. 7E, shows size of request for particular image determining at least partially based on user device property module 758 determining the size (e.g., the horizontal and vertical resolutions, e.g., 1920x1080) of the request for the particular image data at least partially based on a property of a user device that requested at least a portion of the particular image data.
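
A minimal sketch of sizing a request against a device property follows; the clamp_request_to_device name and the 1920x1080 default are illustrative assumptions:

    def clamp_request_to_device(req_w: int, req_h: int,
                                dev_w: int = 1920, dev_h: int = 1080) -> tuple:
        """Never request more pixels than the device can display (illustrative)."""
        scale = min(1.0, dev_w / req_w, dev_h / req_h)
        return (int(req_w * scale), int(req_h * scale))

    # e.g., clamp_request_to_device(3840, 2160) returns (1920, 1080)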

[00392] Referring again to Fig. 12E, operation 1258 may include operation 1260 depicting determining the size of the request for the particular image data at least partially based on a resolution of the user device that requested the particular image data. For example, Fig. 7, e.g., Fig. 7E, shows size of request for particular image determining at least partially based on user device resolution module 760 determining the size of the request for the particular image data at least partially based on a resolution of the user device that requested the particular image data.

[00393] Referring again to Fig. 12E, operation 1254 may include operation 1262 depicting determining the size of the request for the particular image data at least partially based on a device access level of a user device that requested at least a portion of the particular image data. For example, Fig. 7, e.g., Fig. 7E, shows size of request for particular image determining at least partially based on user device access level module 762 determining the size of the request for the particular image data at least partially based on a device access level of a user device that requested at least a portion of the particular image data (e.g., whether the user has paid for the service, or what level of service the user has subscribed to, or whether other "superusers" are present that demand higher bandwidth and receive priority in receiving images).

[00394] Referring again to Fig. 12E, operation 1254 may include operation 1264 depicting determining the size of the request for the particular image data at least partially based on an available amount of bandwidth for communication with the image sensor array. For example, Fig. 7, e.g., Fig. 7E, shows size of request for particular image determining at least partially based on available bandwidth module 764 determining the size of the request for the particular image data at least partially based on an available amount of bandwidth for communication with the image sensor array (e.g., a set of twenty-five image sensors lined on each face of a twenty-five sided polygonal structure).
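
The bandwidth-based sizing could be sketched, under assumed parameters, as a per-frame byte budget; the function and its overhead factor are hypothetical:

    def max_frame_bytes(bandwidth_bps: float, fps: float,
                        overhead: float = 0.1) -> int:
        """Largest per-frame payload the link can sustain (illustrative sketch)."""
        usable = bandwidth_bps * (1.0 - overhead)  # keep some protocol headroom
        return int(usable / 8 / fps)               # bits -> bytes, spread over frames

    # e.g., max_frame_bytes(100e6, 30) returns 375000 (~375 KB per frame)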

[00395] Referring again to Fig. 12E, operation 1254 may include operation 1266 depicting determining the size of the request for the particular image data at least partially based on an amount of time that a user device that requested at least a portion of the particular image data has requested image data. For example, Fig. 7, e.g., Fig. 7E, shows size of request for particular image determining at least partially based on device usage time module 766 determining the size of the request for the particular image data at least partially based on an amount of time that a user device that requested at least a portion of the particular image data has requested image data (e.g., devices that have waited longer may get preference; or, once a device has been sent a requested image, that device may move to the back of a queue for image data requests).
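
The queue behavior mentioned in the example, where a device that has just been served rotates to the back, is sketched below; RequestQueue is an assumed name, not a disclosed component:

    from collections import deque

    class RequestQueue:
        """Hypothetical fairness queue: a served device moves to the back."""
        def __init__(self):
            self.waiting = deque()

        def enqueue(self, device_id: str) -> None:
            if device_id not in self.waiting:
                self.waiting.append(device_id)

        def serve_next(self) -> str:
            device = self.waiting.popleft()  # longest-waiting device first
            self.waiting.append(device)      # then rotate it to the back
            return device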

[00396] Referring again to Fig. 12E, operation 1254 may include operation 1268 depicting determining the size of the request for the particular image data at least partially based on an available amount of bandwidth for communication with a user device that requested at least a portion of the particular image data. For example, Fig. 7, e.g., Fig. 7E, shows size of request for particular image determining at least partially based on device available bandwidth module 768 determining the size of the request for the particular image data at least partially based on an available amount of bandwidth for communication with a user device that requested at least a portion of the particular image data (e.g., based on a connection between the user device and the server, e.g., if the bandwidth to the user device is a limiting factor, that may be taken into account and used in setting the size of the request for the particular image data).

[00397] Figs. 13A-13C depict various implementations of operation 1006, depicting receiving only the particular image data from the image sensor array, according to embodiments. Referring now to Fig. 13A, operation 1006 may include operation 1302 depicting receiving only the particular image data from the image sensor array, wherein data from the scene other than the particular image data is discarded. For example, Fig. 8, e.g., Fig. 8A, shows particular image data from the image sensor array in which other image data is discarded receiving module 802 receiving only the particular image data from the image sensor array, wherein data from the scene other than the particular image data is discarded (e.g., the data may be stored, at least temporarily, but is not stored in a place where overwriting will be prevented, as in a persistent memory).
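
The discard semantics described above, where unrequested data may be held briefly but is not protected from overwriting, resemble a fixed-size ring buffer; the following is one hypothetical realization, not the disclosed mechanism:

    class FrameRingBuffer:
        """Unrequested scene data is kept only until its slot is overwritten."""
        def __init__(self, slots: int):
            self.slots = slots
            self.frames = [None] * slots
            self.head = 0

        def store(self, frame_bytes: bytes) -> None:
            self.frames[self.head] = frame_bytes   # silently overwrites old data
            self.head = (self.head + 1) % self.slots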

[00398] Referring again to Fig. 13A, operation 1006 may include operation 1304 depicting receiving only the particular image data from the image sensor array, wherein data from the scene other than the particular image data is stored at the image sensor array. For example, Fig. 8, e.g., Fig. 8A, shows particular image data from the image sensor array in which other image data is stored at the image sensor array receiving module 804 receiving only the particular image data (e.g., an image of a polar bear and a penguin) from the image sensor array (e.g., twenty-five CMOS sensors), wherein data from the scene (e.g., an Antarctic ice floe) other than the particular image data is stored at the image sensor array (e.g., a grouping of twenty-five CMOS sensors).

[00399] Referring again to Fig. 13A, operation 1006 may include operation 1306 depicting receiving the particular image data from the image sensor array in near-real time. For example, Fig. 8, e.g., Fig. 8A, shows particular image data from the image sensor array exclusive near-real-time receiving module 806 receiving the particular image data from the image sensor array in near-real time (e.g., not necessarily as something is happening, but near enough to give an appearance of real-time).

[00400] Referring now to Fig. 13B, operation 1006 may include operation 1308 depicting receiving the particular image data from the image sensor array in near-real time. For example, Fig. 8, e.g., Fig. 8B, shows particular image data from the image sensor array exclusive near-real-time receiving module 808 receiving the particular image data (e.g., an image of a person walking across a street captured in a live street view setting) from the image sensor array (e.g., two hundred ten-megapixel sensors) in near-real time.

[00401] Referring again to Fig. 13B, operation 1006 may include operation 1310, which may appear in conjunction with operation 1308, operation 1310 depicting retrieving data from the scene other than the particular image data at a later time. For example, Fig. 8, e.g., Fig. 8B, shows data from the scene other than the particular image data retrieving at a later time module 810 retrieving data from the scene (e.g., a scene of a mountain pass) other than the particular image data (e.g., the data that was not requested, e.g., data that no user requested but that was captured by the image sensor array and not transmitted to the remote server) at a later time (e.g., at an off-peak time when more bandwidth is available, e.g., fewer users are using the system).

[00402] Referring again to Fig. 13B, operation 1310 may include operation 1312 depicting retrieving data from the scene other than the particular image data at a time at which bandwidth is available to the image sensor array. For example, Fig. 8, e.g., Fig. 8B, shows data from the scene other than the particular image data retrieving at a time of available bandwidth module 812 retrieving data from the scene other than the particular image data (e.g., data that was not requested) at a time at which bandwidth is available to the image sensor array (e.g., the image sensor array is not using all of its allotted bandwidth to handle requests for portions of the scene, and has available bandwidth to transmit data that can be retrieved that is other than the requested particular image data).
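
As a hypothetical sketch of such bandwidth-gated retrieval, deferred (non-requested) scene data might be pulled only while link utilization stays below a threshold; the backlog entries and their fractional "cost" field are assumptions of the sketch:

    def retrieve_backlog(link_utilization: float, backlog: list,
                         threshold: float = 0.5) -> list:
        """Fetch deferred, non-requested scene data while the link is idle (sketch)."""
        retrieved = []
        while backlog and link_utilization < threshold:
            chunk = backlog.pop(0)             # oldest deferred chunk first
            retrieved.append(chunk)
            link_utilization += chunk["cost"]  # assumed fractional link cost per chunk
        return retrieved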

[00403] Referring again to Fig. 13B, operation 1310 may include operation 1314 depicting retrieving data from the scene other than the particular image data at a time that represents off-peak usage for the image sensor array. For example, Fig. 8, e.g., Fig. 8B, shows data from the scene other than the particular image data retrieving at an off-peak usage time of the image sensor array module 814 retrieving data from the scene other than the particular image data (e.g., data that was not requested) at a time that represents off-peak usage (e.g., the image sensor array may be capturing a city street, so off-peak usage would be at night; or the image sensor array may be a security camera, so off-peak usage may be the middle of the day; or off-peak usage may be flexible based on previous time period analysis, e.g., it could also mean any time the image sensor array is not using all of its allotted bandwidth to handle requests for portions of the scene and has available bandwidth to transmit data that can be retrieved that is other than the requested particular image data) for the image sensor array.

[00404] Referring again to Fig. 13B, operation 1310 may include operation 1316 depicting retrieving data from the scene other than the particular image data at a time when no particular image data is requested. For example, Fig. 8, e.g., Fig. 8B, shows data from the scene other than the particular image data retrieving at a time when no particular image data requests are present at the image sensor array module 816 retrieving data from the scene other than the particular image data at a time when no particular image data is requested.

[00405] Referring again to Fig. 13B, operation 1310 may include operation 1318 depicting retrieving data from the scene other than the particular image data at a time at which fewer users are requesting particular image data than for which the image sensor array has capacity. For example, Fig. 8, e.g., Fig. 8B, shows data from the scene other than the particular image data retrieving at a time of available image sensor array capacity module 818 retrieving data from the scene other than the particular image data at a time at which fewer users are requesting particular image data than for which the image sensor array has capacity.

[00406] Referring now to Fig. 13C, operation 1006 may include operation 1320 depicting receiving only the particular image data that includes audio data from the sensor array. For example, Fig. 8, e.g., Fig. 8C, shows particular image data that includes audio data from the image sensor array exclusive receiving module 820 receiving only the particular image data (e.g., image data from a watering hole) that includes audio data (e.g., sound data, e.g., as picked up by a microphone) from the sensor array (e.g., the image sensor array may include one or more microphones or other sound-collecting devices, either separately from or linked to image capturing sensors).

[00407] Referring again to Fig. 13C, operation 1006 may include operation 1322 depicting receiving only the particular image data that was determined to contain a particular requested image object from the image sensor array. For example, Fig. 8, e.g., Fig. 8C, shows particular image data that was determined to contain a particular requested image object from the image sensor array exclusive receiving module 822 receiving only the particular image data that was determined to contain a particular requested image object (e.g., a particular football player from a football game that is the scene) from the image sensor array.

[00408] Referring again to Fig. 13C, operation 1322 may include operation 1324 depicting receiving only the particular image data that was determined to contain a particular requested image object by the image sensor array. For example, Fig. 8, e.g., Fig. 8C, shows particular image data that was determined to contain a particular requested image object by the image sensor array exclusive receiving module 824 receiving only the particular image data that was determined to contain a particular requested image object (e.g., a lion at a watering hole) by the image sensor array (e.g., the image sensor array performs the pattern recognition and identifies the particular image data, which may only have been identified as "the image data that contains the lion," and only that particular image data is transmitted and thus received by the server).

[00409] Figs. 14A-14E depict various implementations of operation 1008, depicting transmitting the received particular image data to at least one requestor, according to embodiments. Referring now to Fig. 14A, operation 1008 may include operation 1402 depicting transmitting the received particular image data to a user device. For example, Fig. 9, e.g., Fig. 9A, shows received particular image data transmitting to at least one user device requestor module 902 transmitting the received particular image data (e.g., an image of a quarterback at a National Football League game) to a user device (e.g., a television connected to the internet).

[00410] Referring again to Fig. 14A, operation 1402 may include operation 1404 depicting transmitting at least a portion of the received particular image data to a user device that requested the particular image data. For example, Fig. 9, e.g., Fig. 9A, shows received particular image data transmitting to at least one user device that requested at least a portion of the received particular data requestor module 904 transmitting at least a portion of the received particular image data (e.g., a portion that corresponds to a particular request received from a device, e.g., a request for a particular segment of the scene that shows a lion at a watering hole) to a user device (e.g., a computer device with a CPU and monitor) that requested the particular image data (e.g., the computer device requested the portion of the scene at which the lion is visible).

[00411] Referring again to Fig. 14A, operation 1008 may include operation 1406 depicting separating the received particular image data into a set of one or more requested images. For example, Fig. 9, e.g., Fig. 9A, shows separation of the received particular data into set of one or more requested images executing module 906 separating the received particular image data into a set of one or more requested images (e.g., if there were five requests for portions of the scene data, and some of the requests overlapped, the image data may be duplicated and packaged such that each requesting device receives the pixels that were requested).
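
Splitting one received pixel block into per-requestor images, duplicating any overlapping areas, might look like the following sketch; rectangles are (left, top, width, height) in scene coordinates and are assumed to lie within the received block:

    import numpy as np

    def split_for_requestors(block: np.ndarray, block_origin: tuple,
                             requests: dict) -> dict:
        """Crop one received pixel block into one image per request (sketch).

        block: (height, width, channels) pixels received from the array.
        block_origin: (left, top) of the block in scene coordinates.
        requests: requestor id -> (left, top, width, height); may overlap.
        """
        bx, by = block_origin
        out = {}
        for requestor, (x, y, w, h) in requests.items():
            # Overlapping requests each receive their own copy of shared pixels.
            out[requestor] = block[y - by:y - by + h, x - bx:x - bx + w].copy()
        return out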

[00412] Referring again to Fig. 14A, operation 1008 may include operation 1408, which may appear in conjunction with operation 1406, operation 1408 depicting transmitting at least one image of the set of one or more requested images to a particular requestor that requested the at least one image. For example, Fig. 9, e.g., Fig. 9A, shows at least one image of the set of one or more requested images transmitting to a particular requestor that requested the one or more images transmitting module 908 transmitting at least one image of the set of one or more requested images to a particular requestor (e.g., a person operating a "virtual camera" that lets the person "see" the scene through the lens of a camera, even though the camera is spatially separated from the image sensor array, possibly by a large distance, because the image is transmitted to the camera).

[00413] Referring again to Fig. 14A, operation 1406 may include operation 1410 depicting separating the received particular image data into a first requested image and a second requested image. For example, Fig. 9, e.g., Fig. 9A, shows separation of the received particular data into a first requested image and a second requested image executing module 910 separating the received particular image data (e.g., image data from a jungle oasis) into a first requested image (e.g., an image of a lion) and a second requested image (e.g., an image of a hippopotamus).

[00414] Referring again to Fig. 14A, operation 1008 may include operation 1412 depicting transmitting the received particular image data to a user device that requested an image that is part of the received particular image data. For example, Fig. 9, e.g., Fig. 9A, shows received particular image data transmitting to at least one user device that requested image data that is part of the received particular image data module 912 transmitting the received particular image data to a user device that requested an image that is part of the received particular image data.

[00415] Referring now to Fig. 14B, operation 1008 may include operation 1414 depicting transmitting a first portion of the received particular image data to a first requestor. For example, Fig. 9, e.g., Fig. 9B, shows first portion of received particular image data transmitting to a first requestor module 914 transmitting a first portion (e.g., a part of an animal oasis that contains a zebra) of the received particular image data (e.g., image data from the oasis that contains a zebra) to a first requestor (e.g., a device that requested the video feed that is the portion of the oasis that contains the zebra, e.g., a television device).

[00416] Referring again to Fig. 14B, operation 1008 may include operation 1416, which may appear in conjunction with operation 1414, operation 1416 depicting transmitting a second portion of the received particular image data to a second requestor. For example, Fig. 9, e.g., Fig. 9B, shows second portion of received particular image data transmitting to a second requestor module 916 transmitting a second portion (e.g., a portion of the oasis that contains birds) of the received particular image data (e.g., image data from the oasis) to a second requestor (e.g., a device that requested the image that is the portion of the oasis that contains birds, e.g., a tablet device).

[00417] Referring again to Fig. 14B, operation 1414 may include operation 1418 depicting transmitting the first portion of the received particular image data to the first requestor that requested the first portion of the received particular image data. For example, Fig. 9, e.g., Fig. 9B, shows first portion of received particular image data transmitting to a first requestor that requested the first portion module 918 transmitting the first portion of the received particular image data (e.g., a portion that contains a particular landmark in a virtual tourism setting) to the first requestor that requested the first portion of the received particular image data.

[00418] Referring again to Fig. 14B, operation 1418 may include operation 1420 depicting transmitting an image that contains a particular football player to a television device that is configured to display a football game and that requested the image that contains the particular football player. For example, Fig. 9, e.g., Fig. 9B, shows portion of received particular image data that includes a particular football player transmitting to a television device that requested the football player from a football game module 920 transmitting an image that contains a particular football player to a television device that is configured to display a football game and that requested the image that contains the particular football player.

[00419] Referring again to Fig. 14B, operation 1416 may include operation 1422 depicting transmitting the second portion of the received particular image data to the second requestor that requested the second portion of the received particular image data. For example, Fig. 9, e.g., Fig. 9B, shows second portion of received particular image data transmitting to the second requestor that requested the second portion module 922 transmitting the second portion of the received particular image data (e.g., the portion of the received particular image data that includes the lion) to the second requestor that requested the second portion of the received particular image data (e.g., a person watching the feed on their television).

[00420] Referring again to Fig. 14B, operation 1422 may include operation 1424 depicting transmitting an image that contains a view of a motor vehicle to a tablet device that requested a street view image of a particular corner of a city. For example, Fig. 9, e.g., Fig. 9B, shows portion that contains a view of a motor vehicle transmitting to the second requestor that is a tablet device that requested the view of the motor vehicle module 924 transmitting an image that contains a view of a motor vehicle (e.g., a Honda Accord) to a tablet device that requested a street view image of a particular corner of a city (e.g., Alexandria, VA).

[00421] Referring again to Fig. 14B, operation 1008 may include operation 1426 depicting transmitting at least a portion of the received particular image data without alteration to at least one requestor. For example, Fig. 9, e.g., Fig. 9B, shows received particular image data unaltered transmitting to at least one requestor module 926 transmitting at least a portion of the received particular image data (e.g., an image of animals at an oasis) without alteration (e.g., without altering how the image appears to human eyes, e.g., there may be data manipulation that is not visible) to at least one requestor (e.g., the device that requested the image, e.g., a mobile device).

[00422] Referring now to Fig. 14C, operation 1008 may include operation 1428 depicting adding supplemental data to at least a portion of the received particular image data to generate transmission image data. For example, Fig. 9, e.g., Fig. 9C, shows supplemental data addition to at least a portion of the received particular image data to generate transmission image data facilitating module 928 adding supplemental data (e.g., context data, or advertisement data, or processing assistance data, or data regarding how to display or cache the image, whether visible in the image or embedded therein, or otherwise associated with the image) to at least a portion of the received particular image data (e.g., images from an animal watering hole) to generate transmission image data (e.g., image data that will be transmitted to the requestor, e.g., a user of a desktop computer).
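
One hypothetical container for such transmission image data simply prepends the supplemental data to the image payload; the 4-byte length header and JSON encoding are illustrative choices, not part of the disclosure:

    import json

    def build_transmission_image(image_bytes: bytes, supplemental: dict) -> bytes:
        """Bundle supplemental data (context, ads, caching hints) with the image.

        Illustrative container: 4-byte header length, JSON header, raw image bytes.
        """
        header = json.dumps(supplemental).encode("utf-8")
        return len(header).to_bytes(4, "big") + header + image_bytes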

[00423] Referring again to Fig. 14C, operation 1008 may include operation 1430, which may appear in conjunction with operation 1428, operation 1430 depicting transmitting the generated transmission image data to at least one requestor. For example, Fig. 9, e.g., Fig. 9C, shows generated transmission image data transmitting to at least one requestor module 930 transmitting the generated transmission image data (e.g., image data of a football player at a football game with statistical data of that football player overlaid in the image) to at least one requestor (e.g., a person watching the game on their mobile tablet device).

[00424] Referring again to Fig. 14C, operation 1428 may include operation 1432 depicting adding advertisement data to at least a portion of the received particular image data to generate transmission image data. For example, Fig. 9, e.g., Fig. 9C, shows advertisement data addition to at least a portion of the received particular image data to generate transmission image data facilitating module 932 adding advertisement data (e.g., data for an advertisement for buying tickets to the next soccer game and an advertisement for buying a soccer jersey of the player that is pictured) to at least a portion of the received particular image data (e.g., images of a soccer game and/or players in the soccer game) to generate transmission image data.

[00425] Referring again to Fig. 14C, operation 1432 may include operation 1434 depicting adding context-based advertisement data that is at least partially based on the received particular image data to at least the portion of the received particular image data. For example, Fig. 9, e.g., Fig. 9C, shows context-based advertisement data addition to at least a portion of the received particular image data to generate transmission image data facilitating module 934 adding context-based advertisement data (e.g., an ad for travel services to a place that is being viewed in a virtual tourism setting, e.g., the Great Pyramids) that is at least partially based on the received particular image data (e.g., visual image data from the Great Pyramids) to at least the portion of the received particular image data.

[00426] Referring again to Fig. 14C, operation 1434 may include operation 1436 depicting adding animal rights donation fund advertisement data that is at least partially based on the received particular image data of a tiger at a jungle oasis, to the received particular image data of the tiger at the jungle oasis. For example, Fig. 9, e.g., Fig. 9C, shows animal rights donation fund advertisement data addition to at least a portion of the received particular image data that includes a jungle tiger at an oasis to generate transmission image data facilitating module 936 adding animal rights donation fund advertisement data that is at least partially based on the received particular image data of a tiger at a jungle oasis, to the received particular image data of the tiger at the jungle oasis.

[00427] Referring again to Fig. 14C, operation 1428 may include operation 1438 depicting adding related visual data related to the received particular image data to at least a portion of the received particular image data to generate transmission image data. For example, Fig. 9, e.g., Fig. 9C, shows related visual data addition to at least a portion of the received particular image data to generate transmission image data facilitating module 938 adding related visual data (e.g., the name of an animal being shown, or a make and model year of a car being shown, or, if a product is shown in the frame, the name of the website that has it for the cheapest price right now) related to the received particular image data (e.g., an animal, a car, or a product) to at least a portion of the received particular image data to generate transmission image data (e.g., data to be transmitted to the receiving device).

[00428] Referring again to Fig. 14C, operation 1438 may include operation 1440 depicting adding fantasy football statistical data to the received particular image data of a football quarterback that appears in the received particular image data of the quarterback in a football game to generate the transmission image data. For example, Fig. 9, e.g., Fig. 9C, shows related fantasy football statistical data addition to at least a portion of the received particular image data of a quarterback data to generate transmission image data facilitating module 940 adding fantasy football statistical data (e.g., passes completed, receptions, rushing yards gained, receiving yards gained, total points scored, player name, etc.) to the received particular image data of a football quarterback that appears in the received particular image data of the quarterback in a football game to generate the transmission image data (e.g., image data that is to be transmitted to the requesting device, e.g., a television).

[00429] Referring now to Fig. 14D, operation 1008 may include operation 1442 depicting modifying data of a portion of the received particular image data to generate transmission image data. For example, Fig. 9, e.g., Fig. 9D, shows portion of received particular image data modification to generate transmission image data facilitating module 942 modifying data (e.g., adding to, subtracting from, or changing the data of a characteristic of, e.g., alpha data, color data, saturation data, either on individual bytes or on the image as a whole) of a portion of the received particular image data (e.g., the image data sent from the camera array) to generate transmission image data (e.g., data to be transmitted to the device that requested the data, e.g., a smartphone device).

[00430] Referring again to Fig. 14D, operation 1008 may include operation 1444 depicting transmitting the generated transmission image data to at least one requestor. For example, Fig. 9, e.g., Fig. 9D, shows generated transmission image data transmitting to at least one requestor module 944 transmitting the generated transmission image data (e.g., the image data that was generated by the server device to transmit to the requesting device) to at least one requestor (e.g., the requesting device, e.g., a laptop computer of a family at home running a virtual tourism program in a web page).

[00431] Referring again to Fig. 14D, operation 1442 may include operation 1446 depicting performing image manipulation modifications of the received particular image data to generate transmission image data. For example, Fig. 9, e.g., Fig. 9D, shows portion of received particular image data image manipulation modification to generate transmission image data facilitating module 946 performing image manipulation modifications (e.g., editing a feature of a captured image) of the received particular image data (e.g., a live street view of an area with a lot of shading from tall skyscrapers) to generate transmission image data (e.g., the image data to be transmitted to the device that requested the data, e.g., a camera device).

[00432] Referring again to Fig. 14D, operation 1446 may include operation 1448 depicting performing contrast balancing on the received particular image data to generate transmission image data. For example, Fig. 9, e.g., Fig. 9D, shows portion of received particular image data contrast balancing modification to generate transmission image data facilitating module 948 performing contrast balancing on the received particular image data (e.g., a live street view of an area with a lot of shading from tall skyscrapers) to generate transmission image data (e.g., the image data to be transmitted to the device that requested the data, e.g., a camera device).

[00433] Referring again to Fig. 14D, operation 1446 may include operation 1450 depicting performing color modification balancing on the received particular image data to generate transmission image data. For example, Fig. 9, e.g., Fig. 9D, shows portion of received particular image data color modification balancing to generate transmission image data facilitating module 950 performing color modification balancing on the received particular image data (e.g., an image of a lion at an animal watering hole) to generate transmission image data (e.g., the image data that will be transmitted to the device).

[00434] Referring again to Fig. 14D, operation 1442 may include operation 1452 depicting redacting at least a portion of the received particular image data to generate the transmission image data. For example, Fig. 9, e.g., Fig. 9D, shows portion of received particular image data redaction to generate transmission image data facilitating module 952 redacting (e.g., deleting, smoothing over, blurring, performing any sort of image manipulation operation upon, or any other operation designed to obscure any feature of the data associated with the redacted object, including, but not limited to, removing or overwriting that data entirely) at least a portion of the received particular image data to generate the transmission image data (e.g., the image data that will be transmitted to the device or designated for transmission to the device).

[00435] Referring again to Fig. 14D, operation 1452 may include operation 1454 depicting redacting at least a portion of the received particular image data based on a security clearance level of the requestor. For example, Fig. 9, e.g., Fig. 9D, shows portion of received particular image data redaction to generate transmission image data based on a security clearance level of the requestor facilitating module 954 redacting (e.g., deleting, smoothing over, blurring, performing any sort of image manipulation operation upon, or any other operation designed to obscure any feature of the data associated with the redacted object, including, but not limited to, removing or overwriting that data entirely) at least a portion of the received particular image data (e.g., the faces of people, or the license plates of cars) based on a security clearance level of the requestor (e.g., a device that requested the image may have a security clearance based on what that device is allowed to view, and if the security clearance level is below a certain threshold, data like license plates and people's faces may be redacted).
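
A clearance-gated redaction pass might be sketched as below; the clearance levels, region labels, and the choice to overwrite pixels entirely are all assumptions of the sketch rather than disclosed behavior:

    import numpy as np

    CLEARANCE_REQUIRED = {"face": 2, "license_plate": 2, "military_asset": 5}

    def redact_for_clearance(image: np.ndarray, regions: list,
                             clearance: int) -> np.ndarray:
        """Overwrite regions the requestor is not cleared to view (sketch).

        regions: list of (label, left, top, width, height) detections.
        """
        out = image.copy()
        for label, x, y, w, h in regions:
            if clearance < CLEARANCE_REQUIRED.get(label, 0):
                out[y:y + h, x:x + w] = 0  # simplest redaction: overwrite entirely
        return out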

[00436] Referring again to Fig. 14D, operation 1454 may include operation 1456 depicting redacting a tank from the received particular image data that includes a satellite image that includes a military base, based on an insufficient security clearance level of a device that requested the particular image data. For example, Fig. 9, e.g., Fig. 9D, shows portion of received satellite image data that includes a tank redaction to generate transmission image data based on a security clearance level of the requestor facilitating module 956 redacting (e.g., deleting, smoothing over, blurring, performing any sort of image manipulation operation upon, or any other operation designed to obscure any feature of data associated with the tank, including, but not limited to, removing or overwriting the data associated with the tank entirely) a tank from the received particular image data that includes a satellite image (e.g., the image sensor array that captured the image is at least partially mounted on a satellite) that includes a military base, based on an insufficient security clearance level (e.g., some data indicates that the device does not have a security level sufficient to approve seeing the tank) of a device that requested the particular image data.

[00437] Referring now to Fig. 14E, operation 1008 may include operation 1458 depicting transmitting a lower-resolution version of the received particular image data to the at least one requestor. For example, Fig. 9, e.g., Fig. 9E, shows lower-resolution version of received particular image data transmitting to at least one requestor module 958 transmitting a lower-resolution version (e.g., a version of the image data that is at a lower resolution than what the device that requested the particular image data is capable of displaying) of the received particular image data (e.g., an image of a baseball player at a baseball game) to the at least one requestor (e.g., the device that requested the data).

[00438] Referring again to Fig. 14E, operation 1008 may include operation 1460, which may appear in conjunction with operation 1458, operation 1460 depicting transmitting a full-resolution version of the received particular image data to the at least one requestor. For example, Fig. 9, e.g., Fig. 9E, shows full-resolution version of received particular image data transmitting to at least one requestor module 960 transmitting a full-resolution version (e.g., a version that is at a resolution of the device that requested the image) of the received particular image data (e.g., an image of an animal at an animal oasis) to the at least one requestor.
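
Progressive delivery of a lower-resolution version followed by the full-resolution version might be sketched as follows; the 2x2 block-mean downscaler is an illustrative stand-in for a real scaler, and the send callable is an assumed transport hook:

    import numpy as np

    def downscale_half(image: np.ndarray) -> np.ndarray:
        """Average 2x2 pixel blocks to halve each dimension (illustrative)."""
        h, w = image.shape[0] // 2 * 2, image.shape[1] // 2 * 2
        img = image[:h, :w].astype(np.float32)
        small = (img[0::2, 0::2] + img[0::2, 1::2] +
                 img[1::2, 0::2] + img[1::2, 1::2]) / 4.0
        return small.astype(image.dtype)

    def transmit_progressive(image: np.ndarray, send) -> None:
        send(downscale_half(image))  # lower-resolution version first
        send(image)                  # full-resolution version afterward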

[00439] All of the above U.S. patents, U.S. patent application publications, U.S. patent applications, foreign patents, foreign patent applications and non-patent publications referred to in this specification and/or listed in any Application Data Sheet, are incorporated herein by reference, to the extent not inconsistent herewith.

[00440] The foregoing detailed description has set forth various embodiments of the devices and/or processes via the use of block diagrams, flowcharts, and/or examples. Insofar as such block diagrams, flowcharts, and/or examples contain one or more functions and/or operations, it will be understood by those within the art that each function and/or operation within such block diagrams, flowcharts, or examples can be implemented, individually and/or collectively, by a wide range of hardware, software (e.g., a high-level computer program serving as a hardware specification), firmware, or virtually any combination thereof, limited to patentable subject matter under 35 U.S.C. 101. In an embodiment, several portions of the subject matter described herein may be implemented via Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs), digital signal processors (DSPs), or other integrated formats. However, those skilled in the art will recognize that some aspects of the embodiments disclosed herein, in whole or in part, can be equivalently implemented in integrated circuits, as one or more computer programs running on one or more computers (e.g., as one or more programs running on one or more computer systems), as one or more programs running on one or more processors (e.g., as one or more programs running on one or more microprocessors), as firmware, or as virtually any combination thereof, limited to patentable subject matter under 35 U.S.C. 101, and that designing the circuitry and/or writing the code for the software (e.g., a high-level computer program serving as a hardware specification) and/or firmware would be well within the skill of one of skill in the art in light of this disclosure. In addition, those skilled in the art will appreciate that the mechanisms of the subject matter described herein are capable of being distributed as a program product in a variety of forms, and that an illustrative embodiment of the subject matter described herein applies regardless of the particular type of signal bearing medium used to actually carry out the distribution. Examples of a signal bearing medium include, but are not limited to, the following: a recordable type medium such as a floppy disk, a hard disk drive, a Compact Disc (CD), a Digital Video Disk (DVD), a digital tape, a computer memory, etc.; and a transmission type medium such as a digital and/or an analog communication medium (e.g., a fiber optic cable, a waveguide, a wired communications link, a wireless communication link (e.g., transmitter, receiver, transmission logic, reception logic, etc.), etc.).

[00441] While particular aspects of the present subject matter described herein have been shown and described, it will be apparent to those skilled in the art that, based upon the teachings herein, changes and modifications may be made without departing from the subject matter described herein and its broader aspects and, therefore, the appended claims are to encompass within their scope all such changes and modifications as are within the true spirit and scope of the subject matter described herein. It will be understood by those within the art that, in general, terms used herein, and especially in the appended claims (e.g., bodies of the appended claims) are generally intended as "open" terms (e.g., the term "including" should be interpreted as "including but not limited to," the term "having" should be interpreted as "having at least," the term "includes" should be interpreted as "includes but is not limited to," etc.).

[00442] It will be further understood by those within the art that if a specific number of an introduced claim recitation is intended, such an intent will be explicitly recited in the claim, and in the absence of such recitation no such intent is present. For example, as an aid to understanding, the following appended claims may contain usage of the introductory phrases "at least one" and "one or more" to introduce claim recitations. However, the use of such phrases should not be construed to imply that the introduction of a claim recitation by the indefinite articles "a" or "an" limits any particular claim containing such introduced claim recitation to claims containing only one such recitation, even when the same claim includes the introductory phrases "one or more" or "at least one" and indefinite articles such as "a" or "an" (e.g., "a" and/or "an" should typically be interpreted to mean "at least one" or "one or more"); the same holds true for the use of definite articles used to introduce claim recitations. In addition, even if a specific number of an introduced claim recitation is explicitly recited, those skilled in the art will recognize that such recitation should typically be interpreted to mean at least the recited number (e.g., the bare recitation of "two recitations," without other modifiers, typically means at least two recitations, or two or more recitations).

[00443] Furthermore, in those instances where a convention analogous to "at least one of A, B, and C, etc." is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B, and C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.). In those instances where a convention analogous to "at least one of A, B, or C, etc." is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B, or C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.). It will be further understood by those within the art that typically a disjunctive word and/or phrase presenting two or more alternative terms, whether in the description, claims, or drawings, should be understood to contemplate the possibilities of including one of the terms, either of the terms, or both terms unless context dictates otherwise. For example, the phrase "A or B" will be typically understood to include the possibilities of "A" or "B" or "A and B."

[00444] With respect to the appended claims, those skilled in the art will appreciate that recited operations therein may generally be performed in any order. Also, although various operational flows are presented in a sequence(s), it should be understood that the various operations may be performed in other orders than those which are illustrated, or may be performed concurrently. Examples of such alternate orderings may include overlapping, interleaved, interrupted, reordered, incremental, preparatory, supplemental, simultaneous, reverse, or other variant orderings, unless context dictates otherwise. Furthermore, terms like "responsive to," "related to," or other past-tense adjectives are generally not intended to exclude such variants, unless context dictates otherwise.

[00445] This application may make reference to one or more trademarks, e.g., a word, letter, symbol, or device adopted by one manufacturer or merchant and used to identify and/or distinguish his or her product from those of others. Trademark names used herein are set forth in such language that makes clear their identity, that distinguishes them from common descriptive nouns, that have fixed and definite meanings, or, in many if not all cases, are accompanied by other specific identification using terms not covered by trademark. In addition, trademark names used herein have meanings that are well-known and defined in the literature, or do not refer to products or compounds for which knowledge of one or more trade secrets is required in order to divine their meaning. All trademarks referenced in this application are the property of their respective owners, and the appearance of one or more trademarks in this application does not diminish or otherwise adversely affect the validity of the one or more trademarks. All trademarks, registered or unregistered, that appear in this application are assumed to include a proper trademark symbol, e.g., the circle R or bracketed capitalization (e.g., [trademark name]), even when such trademark symbol does not explicitly appear next to the trademark. To the extent a trademark is used in a descriptive manner to refer to a product or process, that trademark should be interpreted to represent the corresponding product or process as of the date of the filing of this patent application.

[00446] Throughout this application, the terms "in an embodiment," "in one embodiment," "in several embodiments," "in at least one embodiment," "in various embodiments," and the like, may be used. Each of these terms, and all such similar terms, should be construed as "in at least one embodiment, and possibly but not necessarily all embodiments," unless explicitly stated otherwise. Specifically, unless explicitly stated otherwise, the intent of phrases like these is to provide non-exclusive and non-limiting examples of implementations of the invention. The mere statement that one, some, or many embodiments include one or more things or have one or more features does not imply that all embodiments include one or more things or have one or more features, but also does not imply that such embodiments must exist. It is a mere indicator of an example and should not be interpreted otherwise, unless explicitly stated as such.

[00447] Those skilled in the art will appreciate that the foregoing specific exemplary processes and/or devices and/or technologies are representative of more general processes and/or devices and/or technologies taught elsewhere herein, such as in the claims filed herewith and/or elsewhere in the present application.

STRUCTURED DISCLOSURE DIRECTED TOWARD ONE OF SKILL IN THE ART [START]

Those skilled in the art will appreciate the indentations and cross-referencing of the following as instructive. In general, each immediately following numbered paragraph in this "Structured Disclosure Directed Toward One Of Skill In The Art Section" should be treated as preceded by the word "Clause," as can be seen by the Clauses that cross-reference against them, and each cross-referencing paragraph is to be understood as permissively incorporating its parent clause(s) by reference.

As used herein, and in particular in the following thing/operation disclosures, the word "comprising" can generally be interpreted as "including but not limited to":

1. A computationally-implemented thing/operation disclosure, comprising:

acquiring a request for particular image data that is part of a scene;

transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data;

receiving only the particular image data from the image sensor array; and transmitting the received particular image data to at least one requestor.

2. The computationally-implemented thing/operation disclosure of clause 1, wherein said acquiring a request for particular image data that is part of a scene comprises:

acquiring the request for particular image data of the scene that includes one or more images.

3. The computationally-implemented thing/operation disclosure of clause 1, wherein said acquiring a request for particular image data that is part of a scene comprises:

receiving the request for particular image data of the scene.

4. The computationally-implemented thing/operation disclosure of clause 3, wherein said receiving the request for particular image data of the scene comprises:

receiving the request for particular image data of the scene from a user device.

5. The computationally-implemented thing/operation disclosure of clause 4, wherein said receiving the request for particular image data of the scene from a user thing/operation disclosure comprises:

receiving the request for particular image data of the scene from a user device that is configured to display at least a portion of the scene.

6. The computationally-implemented thing/operation disclosure of clause 5, wherein said receiving the request for particular image data of the scene from a user thing/operation disclosure that is configured to display at least a portion of the scene comprises:

receiving the request for particular image data of the scene from a user device that is configured to display at least a portion of the scene in a viewfinder.

7. The computationally-implemented thing/operation disclosure of clause 4, wherein said receiving the request for particular image data of the scene from a user thing/operation disclosure comprises:

receiving the request for particular image data of the scene from the user device that is configured to receive a selection of a particular image.

8. The computationally-implemented thing/operation disclosure of clause 7, wherein said receiving the request for particular image data of the scene from the user thing/operation disclosure that is configured to receive a selection of a particular image comprises:

receiving the request for particular image data of the scene from the user device that is configured to receive the selection of the particular image, said selection based on a view of the scene.

9. The computationally-implemented thing/operation disclosure of clause 7, wherein said receiving the request for particular image data of the scene from the user thing/operation disclosure that is configured to receive a selection of a particular image comprises:

receiving the request for particular image data of the scene from the user device that is configured to receive the selection of the particular image, wherein the user device is one or more of a smartphone, television, computer screen, tablet, camera, appliance, and augmented reality device.

10. The computationally- implemented thing/operation disclosure of clause 1,

wherein said acquiring a request for particular image data that is part of a scene comprises:

acquiring the request for particular image data of the scene, wherein the scene is the image data collected by the array of more than one image sensor.

11. The computationally-implemented thing/operation disclosure of clause 1, wherein said acquiring a request for particular image data that is part of a scene comprises:

acquiring the request for particular image data of the scene, wherein the scene is a representation of the image data collected by the array of more than one image sensor.

12. The computationally-implemented thing/operation disclosure of clause 11, wherein said acquiring the request for particular image data of the scene, wherein the scene is a representation of the image data collected by the array of more than one image sensor comprises:

acquiring the request for particular image data of the scene, wherein the scene is a sampling of the image data collected by the array of more than one image sensor.

13. The computationally-implemented thing/operation disclosure of clause 11, wherein said acquiring the request for particular image data of the scene, wherein the scene is a representation of the image data collected by the array of more than one image sensor comprises:

acquiring the request for particular image data of the scene, wherein the scene is a subset of the image data collected by the array of more than one image sensor.

14. The computationally-implemented thing/operation disclosure of clause 11, wherein said acquiring the request for particular image data of the scene, wherein the scene is a representation of the image data collected by the array of more than one image sensor comprises:

acquiring the request for particular image data of the scene, wherein the scene is a low-resolution version of the image data collected by the array of more than one image sensor.
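By way of a hedged illustration of clauses 11-14, the browsable "scene" might be served as a sampling or low-resolution version of the stitched capture from the sensor array. The decimation factor and the array shape below are assumptions chosen only for the example.

```python
import numpy as np

def low_res_scene(full_capture: np.ndarray, factor: int = 8) -> np.ndarray:
    """Keep every `factor`-th pixel in each dimension: a crude sampling
    that can stand in for the full capture when a user browses the scene."""
    return full_capture[::factor, ::factor]

# A stitched multi-sensor capture, here simply zero-filled for the demo.
full = np.zeros((4800, 6400, 3), dtype=np.uint8)
preview = low_res_scene(full)
print(preview.shape)  # (600, 800, 3): 1/64th of the pixel count
```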

15. The computationally-implemented thing/operation disclosure of clause 1, wherein said acquiring a request for particular image data that is part of a scene

comprises:

acquiring the request for particular image data of a scene that is a football game.

16. The computationally-implemented thing/operation disclosure of clause 1, wherein said acquiring a request for particular image data that is part of a scene comprises:

acquiring the request for particular image data of a scene that is a street view of an area.

17. The computationally- implemented thing/operation disclosure of clause 1, wherein said acquiring a request for particular image data that is part of a scene comprises:

acquiring the request for particular image data of a scene that is a tourist destination.

18. The computationally- implemented thing/operation disclosure of clause 1, wherein said acquiring a request for particular image data that is part of a scene comprises:

acquiring the request for particular image data of a scene that is an inside of a home.

19. The computationally- implemented thing/operation disclosure of clause 1, wherein said acquiring a request for particular image data that is part of a scene comprises:

acquiring the request for particular image data of the scene, wherein the particular image data is an image that is a portion of the scene.

20. The computationally-implemented thing/operation disclosure of clause 19, wherein said acquiring the request for particular image data of the scene, wherein the particular image data is an image that is a portion of the scene comprises:

acquiring the request for particular image data of the scene, wherein the particular image data includes image data of a particular football player and the scene is a football field.

21. The computationally- implemented thing/operation disclosure of clause 19,

wherein said acquiring the request for particular image data of the scene, wherein the particular image data is an image that is a portion of the scene comprises:

acquiring the request for particular image data of the scene, wherein the particular image data includes image data of a license plate of a vehicle, and the scene is an image representation of a highway bridge.

22. The computationally- implemented thing/operation disclosure of clause 1, wherein said acquiring a request for particular image data that is part of a scene comprises:

acquiring a request for a particular image object located in the scene; and determining the particular image data of the scene that contains the particular image object.

23. The computationally- implemented thing/operation disclosure of clause 22, wherein said acquiring a request for a particular image object located in the scene comprises:

acquiring a request for a particular person located in the scene.

24. The computationally-implemented thing/operation disclosure of clause 22, wherein said acquiring a request for a particular image object located in the scene comprises:

acquiring a request for a basketball located in the scene that is a basketball arena.

25. The computationally- implemented thing/operation disclosure of clause 22, wherein said acquiring a request for a particular image object located in the scene comprises:

acquiring a request for a motor vehicle located in the scene.

26. The computationally-implemented thing/operation disclosure of clause 22, wherein said acquiring a request for a particular image object located in the scene comprises:

acquiring a request for any human object representations located in the scene.

27. The computationally- implemented thing/operation disclosure of clause 22,

wherein said determining the particular image data of the scene that contains the particular image object comprises:

determining the particular image data of the scene that contains the particular image object through application of automated pattern recognition to scene image data.
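One plausible realization of the "automated pattern recognition" of clause 27 is template matching; the sketch below uses OpenCV purely as an assumed example, since the disclosure names no particular library or algorithm.

```python
import cv2
import numpy as np

def locate_object(scene: np.ndarray, template: np.ndarray, thresh: float = 0.8):
    """Return (x, y, w, h) of the best template match in the scene, or
    None when the match score falls below `thresh`."""
    result = cv2.matchTemplate(scene, template, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    if max_val < thresh:
        return None
    h, w = template.shape[:2]
    return (max_loc[0], max_loc[1], w, h)

# Synthetic demo: plant a bright object on a noisy background, find it again.
rng = np.random.default_rng(0)
scene = rng.integers(0, 50, (200, 200)).astype(np.uint8)
scene[50:70, 80:100] = 255
template = scene[45:75, 75:105].copy()
print(locate_object(scene, template))  # (75, 45, 30, 30)
```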

28. The computationally-implemented thing/operation disclosure of clause 22, wherein said determining the particular image data of the scene that contains the particular image object comprises:

determining the particular image data of the scene that contains the particular image object through identification of the particular image object in previous scene data that is image data of the scene from at least one previous moment in time.

29. The computationally-implemented thing/operation disclosure of clause 22, wherein said determining the particular image data of the scene that contains the particular image object comprises:

determining the particular image data of the scene that contains the particular image object through identification of the particular image object in cached scene data.

30. The computationally-implemented thing/operation disclosure of clause 29, wherein said determining the particular image data of the scene that contains the particular image object through identification of the particular image object in cached scene data comprises:

determining the particular image data of the scene that contains the particular image object through identification of the particular image object in cached scene data that was previously transmitted from the image sensor array that includes more than one image sensor.

31. The computationally- implemented thing/operation disclosure of clause 29, wherein said determining the particular image data of the scene that contains the particular image object through identification of the particular image object in cached scene data comprises:

determining the particular image data of the scene that contains the particular image object through identification of the particular image object in cached scene data that was previously transmitted from the image sensor array that includes

more than one image sensor at a time when bandwidth was available for a connection to the image sensor array that includes more than one image sensor.

32. The computationally-implemented thing/operation disclosure of clause 1, wherein said acquiring a request for particular image data that is part of a scene comprises:

receiving a first request for first particular image data from the scene from a first requestor; and

receiving a second request for second particular image data from the scene from a second requestor that is different than the first requestor.

33. The computationally- implemented thing/operation disclosure of clause 1, wherein said acquiring a request for particular image data that is part of a scene comprises:

combining a received first request for first particular image data from the scene and a received second request for second particular image data from the scene.

34. The computationally-implemented thing/operation disclosure of clause 33, wherein said combining a received first request for first particular image data from the scene and a received second request for second particular image data from the scene comprises:

combining the received first request for first particular image data from the scene and the received second request for second particular image data from the scene into the request for particular image data that consolidates overlapping requested image data.

35. The computationally- implemented thing/operation disclosure of clause 1, wherein said acquiring a request for particular image data that is part of a scene comprises:

receiving a first request for first particular image data from the scene and a second request for second particular image data from the scene; and

combining the received first request and the received second request into the request for particular image data.

36. The computationally-implemented thing/operation disclosure of clause 35, wherein said combining the received first request and the received second request into the request for particular image data comprises:

removing common pixel data between the received first request and the received second request.
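Clauses 33-36 describe consolidating overlapping requests so that shared pixels are fetched only once. A minimal sketch, assuming each request is a half-open rectangle (x0, y0, x1, y1) in scene coordinates:

```python
def union_pixels(a, b):
    """Pixel count of two requested rectangles with the common
    (overlapping) pixels counted only once: |A| + |B| - |A intersect B|."""
    ax0, ay0, ax1, ay1 = a
    bx0, by0, bx1, by1 = b
    overlap_w = max(0, min(ax1, bx1) - max(ax0, bx0))
    overlap_h = max(0, min(ay1, by1) - max(ay0, by0))
    area_a = (ax1 - ax0) * (ay1 - ay0)
    area_b = (bx1 - bx0) * (by1 - by0)
    return area_a + area_b - overlap_w * overlap_h

# Two requestors whose regions share a 100x100 block:
print(union_pixels((0, 0, 200, 200), (100, 100, 300, 300)))  # 70000, not 80000
```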

37. The computationally- implemented thing/operation disclosure of clause 1, wherein said acquiring a request for particular image data that is part of a scene comprises:

acquiring the request for particular image data that is part of the scene, wherein the particular image data includes video data.

38. The computationally- implemented thing/operation disclosure of clause 1, wherein said acquiring a request for particular image data that is part of a scene comprises:

acquiring the request for particular image data that is part of the scene, wherein the particular image data includes audio data.

39. The computationally- implemented thing/operation disclosure of clause 1, wherein said acquiring a request for particular image data that is part of a scene comprises:

acquiring the request for particular image data that is part of the scene from a user device that receives the request for particular image data through an audio interface.

40. The computationally-implemented thing/operation disclosure of clause 39, wherein said acquiring the request for particular image data that is part of the scene from a user thing/operation disclosure that receives the request for particular image data through an audio interface comprises:

acquiring the request for particular image data that is part of the scene from a user device that has a microphone that receives a spoken request for particular image data from the user.

41. The computationally- implemented thing/operation disclosure of clause 1, wherein said transmitting the request for the particular image data to an image sensor

array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data comprises:

transmitting the request for the particular image data of the scene to the image sensor array that includes more than one image sensor and that is configured to capture an image that is larger than the requested image data.

42. The computationally- implemented thing/operation disclosure of clause 1, wherein said transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data comprises:

transmitting the request for the particular image data of the scene to the image sensor array that includes two image sensors arranged side by side and angled toward each other and that is configured to capture an image that is larger than the requested image data.

43. The computationally- implemented thing/operation disclosure of clause 1, wherein said transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data comprises:

transmitting the request for the particular image data of the scene to the image sensor array that includes the array of image sensors arranged in a grid and that is configured to capture an image that is larger than the requested image data.

44. The computationally- implemented thing/operation disclosure of clause 1, wherein said transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data comprises:

transmitting the request for the particular image data of the scene to the image sensor array that includes the array of image sensors arranged in a line such that a field of view is greater than 120 degrees and that is configured to capture an image that is larger than the requested image data.

45. The computationally-implemented thing/operation disclosure of clause 1, wherein said transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data comprises:

transmitting the request for the particular image data of the scene to the image sensor array that includes a grouping of nonlinearly, nonsequentially arranged stationary image sensors and that is configured to capture an image that is larger than the requested image data.

46. The computationally- implemented thing/operation disclosure of clause 1, wherein said transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data comprises:

transmitting the request for the particular image data of the scene to the image sensor array that includes the array of static image sensors and that is configured to capture an image that is larger than the requested image data.

47. The computationally- implemented thing/operation disclosure of clause 46, wherein said transmitting the request for the particular image data of the scene to the image sensor array that includes the array of static image sensors and that is configured to capture an image that is larger than the requested image data comprises:

transmitting the request for the particular image data of the scene to the image sensor array that includes the array of image sensors that have a fixed focal length and a fixed field of view and that is configured to capture an image that is larger than the requested image data.

48. The computationally- implemented thing/operation disclosure of clause 1, wherein said transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data comprises:

transmitting the request for the particular image data of the scene to the image sensor array that includes the array of image sensors mounted on a movable platform and that is configured to capture the scene that is larger than the requested image data.

49. The computationally- implemented thing/operation disclosure of clause 1, wherein said transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data comprises:

transmitting the request for the particular image data of the scene to the image sensor array that includes more than one image sensor and that is configured to capture the scene that represents more image data than the requested particular image data.

50. The computationally-implemented thing/operation disclosure of clause 49, wherein said transmitting the request for the particular image data of the scene to the image sensor array that includes more than one image sensor and that is configured to capture the scene that represents more image data than the requested particular image data comprises:

transmitting the request for the particular image data of the scene to the image sensor array that includes more than one image sensor and that is configured to capture the scene that represents more than ten times as much image data as the requested particular image data.

51. The computationally- implemented thing/operation disclosure of clause 49, wherein said transmitting the request for the particular image data of the scene to the image sensor array that includes more than one image sensor and that is configured to capture the scene that represents more image data than the requested particular image data comprises:

transmitting the request for the particular image data of the scene to the image sensor array that includes more than one image sensor and that is configured to capture the scene that represents more than one hundred times as much image

data as the requested particular image data.

52. The computationally-implemented thing/operation disclosure of clause 1, wherein said transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data comprises:

transmitting the request for the particular image data to the image sensor array that is configured to capture the scene that represents a greater field of view than the requested particular image data.

53. The computationally- implemented thing/operation disclosure of clause 1, wherein said transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data comprises:

modifying the request for the particular image data; and

transmitting the modified request for the particular image data to the image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data.

54. The computationally-implemented thing/operation disclosure of clause 53, wherein said modifying the request for the particular image data comprises:

removing designated image data from the request for the particular image data.

55. The computationally- implemented thing/operation disclosure of clause 54, wherein said removing designated image data from the request for the particular image data comprises:

removing designated image data from the request for the particular image data based on previously stored image data.

56. The computationally-implemented thing/operation disclosure of clause 55, wherein said removing designated image data from the request for the particular image data based on previously stored image data comprises:

removing designated image data from the request for the particular image data based on previously stored image data retrieved from the image sensor array.

57. The computationally-implemented thing/operation disclosure of clause 55, wherein said removing designated image data from the request for the particular image data based on previously stored image data comprises:

removing designated image data from the request for the particular image data based on previously stored image data that is an earlier-in-time version of the designated image data.

58. The computationally-implemented thing/operation disclosure of clause 57, wherein said removing designated image data from the request for the particular image data based on previously stored image data that is an earlier-in-time version of the designated image data comprises:

removing image data of one or more static objects from the request for the particular image data that is a view of a street, based on previously stored image data of the one or more static objects from an earlier-in-time version captured by the image sensor array.

59. The computationally-implemented thing/operation disclosure of clause 58, wherein said removing image data of one or more static objects from the request for the particular image data that is a view of a street, based on previously stored image data of the one or more static objects from an earlier-in-time version captured by the image sensor array comprises:

removing image data of one or more buildings from the request for the particular image data that is the view of the street, based on previously stored image data of the one or more static objects from an earlier-in-time version captured by the image sensor array.
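Clauses 54-59 amount to pruning a request against previously stored imagery, so that unchanged static objects (buildings, rock outcroppings) need not be recaptured. A minimal sketch, assuming boolean coverage masks over the requested region:

```python
import numpy as np

def prune_request(request_mask: np.ndarray, stored_mask: np.ndarray) -> np.ndarray:
    """Keep only requested pixels NOT already covered by stored
    earlier-in-time image data."""
    return request_mask & ~stored_mask

request = np.ones((10, 10), dtype=bool)      # requestor wants the whole tile
stored = np.zeros((10, 10), dtype=bool)
stored[:5, :] = True                         # top half: unchanged static objects
print(prune_request(request, stored).sum())  # 50 pixels still need capture
```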

60. The computationally-implemented thing/operation disclosure of clause 53, wherein said modifying the request for the particular image data comprises:

removing portions of the request for the particular image data through pixel interpolation or extrapolation of portions of the request for the particular image data.

61. The computationally- implemented thing/operation disclosure of clause 60, wherein said removing portions of the request for the particular image data through pixel interpolation or extrapolation of portions of the request for the particular image data comprises:

removing one or more static objects from the request for the particular image data through pixel interpolation of the portions of the request for the particular image data.

62. The computationally-implemented thing/operation disclosure of clause 53, wherein said modifying the request for the particular image data comprises:

identifying at least one portion of the request for the particular image data that is already stored in a memory; and

removing the identified portion of the request for the particular image data.

63. The computationally-implemented thing/operation disclosure of clause 62, wherein said identifying at least one portion of the request for the particular image data that is already stored in a memory comprises:

identifying at least one portion of the request for particular image data that was previously captured by the image sensor array.

64. The computationally-implemented thing/operation disclosure of clause 62, wherein said identifying at least one portion of the request for the particular image data that is already stored in a memory comprises:

identifying one or more static objects of the request for particular image data that was previously captured by the image sensor array.

65. The computationally-implemented thing/operation disclosure of clause 64, wherein said identifying one or more static objects of the request for particular image data that was previously captured by the image sensor array comprises:

identifying one or more natural rock outcroppings of the request for particular image data that was previously captured by the image sensor array.

66. The computationally- implemented thing/operation disclosure of clause 1, wherein said transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data comprises:

determining a size of the request for the particular image data; and

transmitting the request for the particular image data for which the size has been determined to the image sensor array that is configured to capture the scene.

67. The computationally- implemented thing/operation disclosure of clause 66, wherein said determining a size of the request for the particular image data comprises: determining the size of the request for the particular image data at least partially based on a property of a user device that requested at least a portion of the particular image data.

68. The computationally-implemented thing/operation disclosure of clause 67, wherein said determining the size of the request for the particular image data at least partially based on a property of a user thing/operation disclosure that requested at least a portion of the particular image data comprises:

determining the size of the request for the particular image data at least partially based on a resolution of the user device that requested the particular image data.

69. The computationally-implemented thing/operation disclosure of clause 66, wherein said determining a size of the request for the particular image data comprises: determining the size of the request for the particular image data at least partially based on a device access level of a user device that requested at least a portion of the particular image data.

70. The computationally-implemented thing/operation disclosure of clause 66, wherein said determining a size of the request for the particular image data comprises: determining the size of the request for the particular image data at least partially based on an available amount of bandwidth for communication with the image sensor array.

71. The computationally- implemented thing/operation disclosure of clause 66, wherein said determining a size of the request for the particular image data comprises: determining the size of the request for the particular image data at least partially based on an amount of time that a user device that requested at least a portion of the particular image data has requested image data.

72. The computationally-implemented thing/operation disclosure of clause 66, wherein said determining a size of the request for the particular image data comprises: determining the size of the request for the particular image data at least partially based on an available amount of bandwidth for communication with a user device that requested at least a portion of the particular image data.
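Clauses 66-72 size the request from properties of the requesting device and of the available links. One illustrative way to combine those inputs, where the 24-bit-per-pixel figure and the constants are assumptions of the example only:

```python
def request_size_bytes(device_w: int, device_h: int,
                       bandwidth_bps: float, window_s: float) -> int:
    """Bytes to request: no more than the device can display, and no
    more than the link can deliver within the given time window."""
    display_cap = device_w * device_h * 3           # 24-bit RGB assumption
    bandwidth_cap = int(bandwidth_bps / 8 * window_s)
    return min(display_cap, bandwidth_cap)

# A 1080p tablet on a 20 Mbit/s link, refreshed once per second:
print(request_size_bytes(1920, 1080, 20e6, 1.0))    # 2500000: bandwidth-bound
```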

73. The computationally- implemented thing/operation disclosure of clause 1, wherein said receiving only the particular image data from the image sensor array comprises: receiving only the particular image data from the image sensor array, wherein data from the scene other than the particular image data is discarded.

74. The computationally- implemented thing/operation disclosure of clause 1, wherein said receiving only the particular image data from the image sensor array comprises: receiving only the particular image data from the image sensor array, wherein data from the scene other than the particular image data is stored at the image sensor array.

75. The computationally-implemented thing/operation disclosure of clause 1, wherein said receiving only the particular image data from the image sensor array comprises: receiving the particular image data from the image sensor array in near-real time.

76. The computationally- implemented thing/operation disclosure of clause 1, wherein said receiving only the particular image data from the image sensor array comprises: receiving the particular image data from the image sensor array in near-real time; and

retrieving data from the scene other than the particular image data at a later time.

77. The computationally- implemented thing/operation disclosure of clause 76, wherein said retrieving data from the scene other than the particular image data at a later time comprises:

retrieving data from the scene other than the particular image data at a time at which bandwidth is available to the image sensor array.

78. The computationally- implemented thing/operation disclosure of clause 76, wherein said retrieving data from the scene other than the particular image data at a later time comprises:

retrieving data from the scene other than the particular image data at a time that represents off-peak usage for the image sensor array.

79. The computationally-implemented thing/operation disclosure of clause 76, wherein said retrieving data from the scene other than the particular image data at a later time comprises:

retrieving data from the scene other than the particular image data at a time when no particular image data is requested.

80. The computationally-implemented thing/operation disclosure of clause 76, wherein said retrieving data from the scene other than the particular image data at a later time comprises:

retrieving data from the scene other than the particular image data at a time at which fewer users are requesting particular image data than the sensor array has capacity to serve.
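Clauses 75-80 pair a near-real-time path for the particular image data with deferred retrieval of the rest of the scene. A sketch of that split, where the queue and the is_off_peak predicate are illustrative stand-ins:

```python
from collections import deque

deferred = deque()  # scene regions not yet pulled from the array

def receive_particular(region, fetch_now, all_scene_regions):
    """Fetch the requested region immediately; queue the rest for later."""
    data = fetch_now(region)                             # near-real-time path
    deferred.extend(r for r in all_scene_regions if r != region)
    return data

def drain_when_idle(fetch_now, is_off_peak):
    """Retrieve deferred scene data only while the link is off-peak."""
    while deferred and is_off_peak():
        fetch_now(deferred.popleft())

# Demo with a stubbed transport: one tile now, the remainder off-peak.
tiles = ["tile-0", "tile-1", "tile-2"]
receive_particular("tile-1", lambda r: f"pixels:{r}", tiles)
drain_when_idle(lambda r: f"pixels:{r}", is_off_peak=lambda: True)
print(len(deferred))  # 0: the rest of the scene was retrieved later
```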

81. The computationally-implemented thing/operation disclosure of clause 1, wherein said receiving only the particular image data from the image sensor array comprises: receiving only the particular image data that includes audio data from the sensor array.

82. The computationally- implemented thing/operation disclosure of clause 1, wherein said receiving only the particular image data from the image sensor array comprises: receiving only the particular image data that was determined to contain a particular requested image object from the image sensor array.

83. The computationally- implemented thing/operation disclosure of clause 82, wherein said receiving only the particular image data that was determined to contain a particular requested image object from the image sensor array comprises:

receiving only the particular image data that was determined to contain a particular requested image object by the image sensor array.

84. The computationally- implemented thing/operation disclosure of clause 1, wherein said transmitting the received particular image data to at least one requestor comprises:

transmitting the received particular image data to a user device.

85. The computationally- implemented thing/operation disclosure of clause 84, wherein said transmitting the received particular image data to a user thing/operation disclosure comprises:

transmitting at least a portion of the received particular image data to a user device that requested the particular image data.

86. The computationally- implemented thing/operation disclosure of clause 1, wherein said transmitting the received particular image data to at least one requestor comprises:

separating the received particular image data into a set of one or more requested images; and

transmitting at least one image of the set of one or more requested images to a particular requestor that requested the at least one image.

87. The computationally-implemented thing/operation disclosure of clause 86, wherein said separating the received particular image data into a set of one or more requested images comprises:

separating the received particular image data into a first requested image and a second requested image.
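Clauses 86-87 separate one consolidated response into the individual images the requestors asked for. A sketch, assuming each requestor's region is recorded as (x0, y0, x1, y1) in the response's coordinates:

```python
import numpy as np

def separate(received: np.ndarray, regions: dict) -> dict:
    """Map each requestor id to its own crop of the received data."""
    return {rid: received[y0:y1, x0:x1]
            for rid, (x0, y0, x1, y1) in regions.items()}

response = np.zeros((400, 600, 3), dtype=np.uint8)   # consolidated response
crops = separate(response, {"tv": (0, 0, 300, 400),
                            "tablet": (250, 0, 600, 400)})
print(crops["tv"].shape, crops["tablet"].shape)  # (400, 300, 3) (400, 350, 3)
```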

88. The computationally- implemented thing/operation disclosure of clause 1, wherein said transmitting the received particular image data to at least one requestor comprises:

transmitting the received particular image data to a user thing/operation disclosure that requested an image that is part of the received particular image data.

89. The computationally- implemented thing/operation disclosure of clause 1, wherein said transmitting the received particular image data to at least one requestor comprises:

transmitting a first portion of the received particular image data to a first requestor; and

transmitting a second portion of the received particular image data to a second requestor.

90. The computationally-implemented thing/operation disclosure of clause 89, wherein said transmitting a first portion of the received particular image data to a first requestor comprises:

transmitting the first portion of the received particular image data to the first requestor that requested the first portion of the received particular image data.

91. The computationally- implemented thing/operation disclosure of clause 90, wherein said transmitting the first portion of the received particular image data to the first requestor that requested the first portion of the received particular image data comprises:

transmitting an image that contains a particular football player to a television device that is configured to display a football game and that requested the image that contains the particular football player.

92. The computationally-implemented thing/operation disclosure of clause 89, wherein said transmitting a second portion of the received particular image data to a second requestor comprises:

transmitting the second portion of the received particular image data to the second requestor that requested the second portion of the received particular image data.

93. The computationally- implemented thing/operation disclosure of clause 92, wherein said transmitting the second portion of the received particular image data to the second requestor that requested the second portion of the received particular image data comprises:

transmitting an image that contains a view of a motor vehicle to a tablet device that requested a street view image of a particular corner of a city.

94. The computationally- implemented thing/operation disclosure of clause 1, wherein said transmitting the received particular image data to at least one requestor comprises:

transmitting at least a portion of the received particular image data without alteration to at least one requestor.

95. The computationally- implemented thing/operation disclosure of clause 1, wherein said transmitting the received particular image data to at least one requestor comprises:

adding supplemental data to at least a portion of the received particular image data to generate transmission image data; and

transmitting the generated transmission image data to at least one requestor.

96. The computationally-implemented thing/operation disclosure of clause 95, wherein said adding supplemental data to at least a portion of the received particular image data to generate transmission image data comprises:

adding advertisement data to at least a portion of the received particular image data to generate transmission image data.

97. The computationally-implemented thing/operation disclosure of clause 96, wherein said adding advertisement data to at least a portion of the received particular image data to generate transmission image data comprises:

adding context-based advertisement data that is at least partially based on the received particular image data to at least the portion of the received particular image data.

98. The computationally-implemented thing/operation disclosure of clause 97, wherein said adding context-based advertisement data that is at least partially based on the received particular image data to at least the portion of the received particular image data comprises:

adding an animal rights donation fund advertisement data that is at least partially based on the received particular image data of a tiger at a jungle oasis, to the received particular image data of the tiger at the jungle oasis.

99. The computationally-implemented thing/operation disclosure of clause 95, wherein said adding supplemental data to at least a portion of the received particular image data to generate transmission image data comprises:

adding related visual data related to the received particular image data to at least a portion of the received particular image data to generate transmission image data.

100. The computationally-implemented thing/operation disclosure of clause 99, wherein said adding related visual data related to the received particular image data to at least a portion of the received particular image data to generate transmission image data comprises:

adding fantasy football statistical data to the received particular image data of a football quarterback that appears in the received particular image data of the quarterback in a football game to generate the transmission image data.

101. The computationally-implemented thing/operation disclosure of clause 1, wherein said transmitting the received particular image data to at least one requestor comprises:

modifying data of a portion of the received particular image data to generate transmission image data; and

transmitting the generated transmission image data to at least one requestor.

102. The computationally-implemented thing/operation disclosure of clause 101, wherein said modifying data of a portion of the received particular image data to generate transmission image data comprises:

performing image manipulation modifications of the received particular image data to generate transmission image data.

103. The computationally-implemented thing/operation disclosure of clause 102, wherein said performing image manipulation modifications of the received particular image data to generate transmission image data comprises:

performing contrast balancing on the received particular image data to generate transmission image data.

104. The computationally-implemented thing/operation disclosure of clause 102, wherein said performing image manipulation modifications of the received particular image data to generate transmission image data comprises:

performing color modification balancing on the received particular image data to generate transmission image data.
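The "contrast balancing" of clause 103 could be as simple as a linear min-max stretch; the disclosure specifies no algorithm, so the choice below is an assumption of the example:

```python
import numpy as np

def stretch_contrast(img: np.ndarray) -> np.ndarray:
    """Linearly rescale pixel values to span the full 0-255 range."""
    lo, hi = int(img.min()), int(img.max())
    if hi == lo:
        return img.copy()               # flat image: nothing to stretch
    scaled = (img.astype(np.float32) - lo) * (255.0 / (hi - lo))
    return scaled.astype(np.uint8)

dim = np.random.randint(100, 150, (480, 640)).astype(np.uint8)  # low contrast
balanced = stretch_contrast(dim)
print(balanced.min(), balanced.max())   # 0 255
```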

105. The computationally-implemented thing/operation disclosure of clause 101, wherein said modifying data of a portion of the received particular image data to generate transmission image data comprises:

redacting at least a portion of the received particular image data to generate the transmission image data.

106. The computationally-implemented thing/operation disclosure of clause 105, wherein said redacting at least a portion of the received particular image data to generate the transmission image data comprises:

redacting at least a portion of the received particular image data based on a security clearance level of the requestor.

107. The computationally-implemented thing/operation disclosure of clause 106, wherein said redacting at least a portion of the received particular image data based on a security clearance level of the requestor comprises:

redacting a tank from the received particular image data that includes a satellite image that includes a military base, based on an insufficient security clearance level of a device that requested the particular image data.
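Clearance-based redaction (clauses 105-107) can be pictured as masking a sensitive region before transmission when the requestor's level is insufficient. The level scheme and the rectangle representation are illustrative assumptions:

```python
import numpy as np

def redact_if_uncleared(img, sensitive_box, requestor_level, required_level):
    """Black out the sensitive region unless the requestor is cleared."""
    out = img.copy()
    if requestor_level < required_level:
        x0, y0, x1, y1 = sensitive_box
        out[y0:y1, x0:x1] = 0           # replace the region with black pixels
    return out

frame = np.full((100, 100, 3), 200, dtype=np.uint8)
safe = redact_if_uncleared(frame, (20, 20, 60, 60),
                           requestor_level=1, required_level=3)
print(safe[30, 30].tolist())  # [0, 0, 0]: the sensitive block was redacted
```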

108. The computationally- implemented thing/operation disclosure of clause 1, wherein said transmitting the received particular image data to at least one requestor comprises:

transmitting a lower-resolution version of the received particular image data to the at least one requestor; and

transmitting a full-resolution version of the received particular image data to the at least one requestor.
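Clause 108's two-stage delivery can be sketched as a generator that yields a coarse version before the full-resolution data; the generator framing and the decimation factor are assumptions of the example:

```python
import numpy as np

def progressive(img: np.ndarray, factor: int = 4):
    """Yield a lower-resolution version first, then the full data."""
    yield img[::factor, ::factor]       # quick coarse version
    yield img                           # full resolution follows

img = np.zeros((1024, 1024, 3), dtype=np.uint8)
for version in progressive(img):
    print(version.shape)                # (256, 256, 3) then (1024, 1024, 3)
```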

109. A computationally-implemented thing/operation disclosure, comprising

means for acquiring a request for particular image data that is part of a scene; means for transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data;

means for receiving only the particular image data from the image sensor array; and

means for transmitting the received particular image data to at least one requestor.

110. The computationally- implemented thing/operation disclosure of clause 109, wherein said means for acquiring a request for particular image data that is part of a scene comprises:

means for acquiring the request for particular image data of the scene that includes one or more images.

111. The computationally- implemented thing/operation disclosure of clause 109, wherein said means for acquiring a request for particular image data that is part of a

scene comprises:

means for receiving the request for particular image data of the scene.

112. The computationally-implemented thing/operation disclosure of clause 111, wherein said means for receiving the request for particular image data of the scene comprises:

means for receiving the request for particular image data of the scene from a user device.

113. The computationally-implemented thing/operation disclosure of clause 112, wherein said means for receiving the request for particular image data of the scene from a user thing/operation disclosure comprises:

means for receiving the request for particular image data of the scene from a user device that is configured to display at least a portion of the scene.

114. The computationally-implemented thing/operation disclosure of clause 113, wherein means for receiving the request for particular image data of the scene from a user thing/operation disclosure that is configured to display at least a portion of the scene comprises:

means for receiving the request for particular image data of the scene from a user device that is configured to display at least a portion of the scene in a viewfinder.

115. The computationally-implemented thing/operation disclosure of clause 112, wherein said means for receiving the request for particular image data of the scene from a user thing/operation disclosure comprises:

means for receiving the request for particular image data of the scene from the user device that is configured to receive a selection of a particular image.

116. The computationally-implemented thing/operation disclosure of clause 115, wherein said means for receiving the request for particular image data of the scene from the user thing/operation disclosure that is configured to receive a selection of a particular image comprises:

means for receiving the request for particular image data of the scene from the user device that is configured to receive the selection of the particular image, said

selection based on a view of the scene.

117. The computationally-implemented thing/operation disclosure of clause 115, wherein said means for receiving the request for particular image data of the scene from the user thing/operation disclosure that is configured to receive a selection of a particular image comprises:

means for receiving the request for particular image data of the scene from the user device that is configured to receive the selection of the particular image, wherein the user device is one or more of a smartphone, television, computer screen, tablet, camera, appliance, and augmented reality device.

118. The computationally-implemented thing/operation disclosure of clause 109, wherein said means for acquiring a request for particular image data that is part of a scene comprises:

means for acquiring the request for particular image data of the scene, wherein the scene is the image data collected by the array of more than one image sensor.

119. The computationally- implemented thing/operation disclosure of clause 109, wherein said means for acquiring a request for particular image data that is part of a scene comprises:

means for acquiring the request for particular image data of the scene, wherein the scene is a representation of the image data collected by the array of more than one image sensor.

120. The computationally- implemented thing/operation disclosure of clause 119, wherein said means for acquiring the request for particular image data of the scene, wherein the scene is a representation of the image data collected by the array of more than one image sensor comprises:

means for acquiring the request for particular image data of the scene, wherein the scene is a sampling of the image data collected by the array of more than one image sensor.

121. The computationally-implemented thing/operation disclosure of clause 119,

wherein said means for acquiring the request for particular image data of the scene, wherein the scene is a representation of the image data collected by the array of more than one image sensor comprises:

means for acquiring the request for particular image data of the scene, wherein the scene is a subset of the image data collected by the array of more than one image sensor.

122. The computationally- implemented thing/operation disclosure of clause 119, wherein said means for acquiring the request for particular image data of the scene, wherein the scene is a representation of the image data collected by the array of more than one image sensor comprises:

means for acquiring the request for particular image data of the scene, wherein the scene is a low-resolution version of the image data collected by the array of more than one image sensor.

123. The computationally-implemented thing/operation disclosure of clause 109, wherein said means for acquiring a request for particular image data that is part of a scene comprises:

means for acquiring the request for particular image data of a scene that is a football game.

124. The computationally- implemented thing/operation disclosure of clause 109, wherein said means for acquiring a request for particular image data that is part of a scene comprises:

means for acquiring the request for particular image data of a scene that is a street view of an area.

125. The computationally-implemented thing/operation disclosure of clause 109, wherein said means for acquiring a request for particular image data that is part of a scene comprises:

means for acquiring the request for particular image data of a scene that is a tourist destination.

126. The computationally-implemented thing/operation disclosure of clause 109, wherein said means for acquiring a request for particular image data that is part of a scene comprises:

means for acquiring the request for particular image data of a scene that is an inside of a home.

127. The computationally-implemented thing/operation disclosure of clause 109, wherein said means for acquiring a request for particular image data that is part of a scene comprises:

means for acquiring the request for particular image data of the scene, wherein the particular image data is an image that is a portion of the scene.

128. The computationally-implemented thing/operation disclosure of clause 127, wherein said means for acquiring the request for particular image data of the scene, wherein the particular image data is an image that is a portion of the scene comprises: means for acquiring the request for particular image data of the scene, wherein the particular image data includes image data of a particular football player and the scene is a football field.

129. The computationally-implemented thing/operation disclosure of clause 127, wherein said means for acquiring the request for particular image data of the scene, wherein the particular image data is an image that is a portion of the scene comprises: means for acquiring the request for particular image data of the scene, wherein the particular image data includes image data of a license plate of a vehicle, and the scene is an image representation of a highway bridge.

130. The computationally-implemented thing/operation disclosure of clause 109, wherein said means for acquiring a request for particular image data that is part of a scene comprises:

means for acquiring a request for a particular image object located in the scene; and

means for determining the particular image data of the scene that contains the particular image object.

131. The computationally-implemented thing/operation disclosure of clause 130,

wherein said means for acquiring a request for a particular image object located in the scene comprises:

means for acquiring a request for a particular person located in the scene.

132. The computationally-implemented thing/operation disclosure of clause 130, wherein said means for acquiring a request for a particular image object located in the scene comprises:

means for acquiring a request for a basketball located in the scene that is a basketball arena.

133. The computationally-implemented thing/operation disclosure of clause 130, wherein said means for acquiring a request for a particular image object located in the scene comprises:

means for acquiring a request for a motor vehicle located in the scene.

134. The computationally-implemented thing/operation disclosure of clause 130, wherein said means for acquiring a request for a particular image object located in the scene comprises:

means for acquiring a request for any human object representations located in the scene.

135. The computationally-implemented thing/operation disclosure of clause 130, wherein said means for determining the particular image data of the scene that contains the particular image object comprises:

means for determining the particular image data of the scene that contains the particular image object through application of automated pattern recognition to scene image data.

136. The computationally-implemented thing/operation disclosure of clause 130, wherein said means for determining the particular image data of the scene that contains the particular image object comprises:

means for determining the particular image data of the scene that contains the particular image object through identification of the particular image object in previous scene data that is image data of the scene from at least one previous

moment in time.

137. The computationally-implemented thing/operation disclosure of clause 130, wherein said means for determining the particular image data of the scene that contains the particular image object comprises:

means for determining the particular image data of the scene that contains the particular image object through identification of the particular image object in cached scene data.

138. The computationally-implemented thing/operation disclosure of clause 137, wherein said means for determining the particular image data of the scene that contains the particular image object through identification of the particular image object in cached scene data comprises:

means for determining the particular image data of the scene that contains the particular image object through identification of the particular image object in cached scene data that was previously transmitted from the image sensor array that includes more than one image sensor.

139. The computationally-implemented thing/operation disclosure of clause 137, wherein said means for determining the particular image data of the scene that contains the particular image object through identification of the particular image object in cached scene data comprises:

means for determining the particular image data of the scene that contains the particular image object through identification of the particular image object in cached scene data that was previously transmitted from the image sensor array that includes more than one image sensor at a time when bandwidth was available for a connection to the image sensor array that includes more than one image sensor.

140. The computationally- implemented thing/operation disclosure of clause 109, wherein said means for acquiring a request for particular image data that is part of a scene comprises:

means for receiving a first request for first particular image data from the scene from a first requestor; and

means for receiving a second request for second particular image data from the scene from a second requestor that is different than the first requestor.

141. The computationally-implemented thing/operation disclosure of clause 109, wherein said means for acquiring a request for particular image data that is part of a scene comprises:

means for combining a received first request for first particular image data from the scene and a received second request for second particular image data from the scene.

142. The computationally- implemented thing/operation disclosure of clause 141, wherein said means for combining a received first request for first particular image data from the scene and a received second request for second particular image data from the scene comprises:

means for combining the received first request for first particular image data from the scene and the received second request for second particular image data from the scene into the request for particular image data that consolidates overlapping requested image data.

143. The computationally-implemented thing/operation disclosure of clause 109, wherein said means for acquiring a request for particular image data that is part of a scene comprises:

means for receiving a first request for first particular image data from the scene and a second request for second particular image data from the scene; and means for combining the received first request and the received second request into the request for particular image data.

144. The computationally-implemented thing/operation disclosure of clause 143, wherein said means for combining the received first request and the received second request into the request for particular image data comprises:

means for removing common pixel data between the received first request and the received second request.

145. The computationally-implemented thing/operation disclosure of clause 109, wherein said means for acquiring a request for particular image data that is part of a scene comprises:

means for acquiring the request for particular image data that is part of the scene, wherein the particular image data includes video data.

146. The computationally- implemented thing/operation disclosure of clause 109, wherein said means for acquiring a request for particular image data that is part of a scene comprises:

means for acquiring the request for particular image data that is part of the scene, wherein the particular image data includes audio data.

147. The computationally-implemented system of clause 109, wherein said means for acquiring a request for particular image data that is part of a scene comprises:

means for acquiring the request for particular image data that is part of the scene from a user device that receives the request for particular image data through an audio interface.

148. The computationally-implemented system of clause 147, wherein said means for acquiring the request for particular image data that is part of the scene from a user device that receives the request for particular image data through an audio interface comprises:

means for acquiring the request for particular image data that is part of the scene from a user device that has a microphone that receives a spoken request for particular image data from the user.

149. The computationally-implemented system of clause 109, wherein said means for transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data comprises:

means for transmitting the request for the particular image data of the scene to the image sensor array that includes more than one image sensor and that is configured to capture an image that is larger than the requested image data.

150. The computationally-implemented system of clause 109, wherein said means for transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data comprises:

means for transmitting the request for the particular image data of the scene to the image sensor array that includes two image sensors arranged side by side and angled toward each other and that is configured to capture an image that is larger than the requested image data.

151. The computationally-implemented system of clause 109, wherein said means for transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data comprises:

means for transmitting the request for the particular image data of the scene to the image sensor array that includes the array of image sensors arranged in a grid and that is configured to capture an image that is larger than the requested image data.

152. The computationally-implemented system of clause 109, wherein said means for transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data comprises:

means for transmitting the request for the particular image data of the scene to the image sensor array that includes the array of image sensors arranged in a line such that a field of view is greater than 120 degrees and that is configured to capture an image that is larger than the requested image data.
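An illustrative back-of-the-envelope calculation, not drawn from the disclosure: when image sensors arranged in a line are fanned outward, the combined field of view grows roughly linearly with sensor count, which is how an array can exceed the 120-degree figure recited in clause 152. All angles below are hypothetical examples.

```python
# Illustrative arithmetic only: combined horizontal field of view of
# sensors arranged in a line, assuming each covers a fixed angle and
# adjacent views overlap by a known margin.
def combined_fov(sensor_count: int, per_sensor_fov: float, overlap: float) -> float:
    """Total field of view in degrees for sensors fanned out in a line."""
    if sensor_count < 1:
        return 0.0
    return per_sensor_fov + (sensor_count - 1) * (per_sensor_fov - overlap)

# Five 40-degree sensors overlapping by 10 degrees cover 160 degrees.
print(combined_fov(5, 40.0, 10.0))  # 160.0
```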

153. The computationally-implemented system of clause 109, wherein said means for transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data comprises:

means for transmitting the request for the particular image data of the scene to the image sensor array that includes a grouping of nonlinearly, nonsequentially arranged stationary image sensors and that is configured to capture an image that is larger than the requested image data.

154. The computationally-implemented system of clause 109, wherein said means for transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data comprises:

means for transmitting the request for the particular image data of the scene to the image sensor array that includes the array of static image sensors and that is configured to capture an image that is larger than the requested image data.

155. The computationally-implemented system of clause 154, wherein said means for transmitting the request for the particular image data of the scene to the image sensor array that includes the array of static image sensors and that is configured to capture an image that is larger than the requested image data comprises:

means for transmitting the request for the particular image data of the scene to the image sensor array that includes the array of image sensors that have a fixed focal length and a fixed field of view and that is configured to capture an image that is larger than the requested image data.

156. The computationally-implemented system of clause 109, wherein said means for transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data comprises:

means for transmitting the request for the particular image data of the scene to the image sensor array that includes the array of image sensors mounted on a movable platform and that is configured to capture the scene that is larger than the requested image data.

157. The computationally-implemented system of clause 109, wherein said means for transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data comprises:

means for transmitting the request for the particular image data of the scene to the image sensor array that includes more than one image sensor and that is configured to capture the scene that represents more image data than the requested particular image data.

158. The computationally-implemented system of clause 157, wherein said means for transmitting the request for the particular image data of the scene to the image sensor array that includes more than one image sensor and that is configured to capture the scene that represents more image data than the requested particular image data comprises:

means for transmitting the request for the particular image data of the scene to the image sensor array that includes more than one image sensor and that is configured to capture the scene that represents more than ten times as much image data as the requested particular image data.

159. The computationally-implemented system of clause 157, wherein said means for transmitting the request for the particular image data of the scene to the image sensor array that includes more than one image sensor and that is configured to capture the scene that represents more image data than the requested particular image data comprises:

means for transmitting the request for the particular image data of the scene to the image sensor array that includes more than one image sensor and that is configured to capture the scene that represents more than one hundred times as much image data as the requested particular image data.
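A worked example of the data ratios recited in clauses 158 and 159, using hypothetical resolutions: a stitched multi-sensor scene can easily represent two orders of magnitude more image data than the one view a requestor asks for.

```python
# Illustrative arithmetic: how much more data the full scene represents
# than a requested region. Both resolutions are hypothetical examples.
scene_px = 50_000 * 10_000   # a 500-megapixel stitched scene
request_px = 1920 * 1080     # one HD view requested by a user
ratio = scene_px / request_px
print(f"scene is {ratio:.0f}x the requested image data")  # ~241x
```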

160. The computationally-implemented system of clause 109, wherein said means for transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data comprises:

means for transmitting the request for the particular image data to the image sensor array that is configured to capture the scene that represents a greater field of view than the requested particular image data.

161. The computationally-implemented system of clause 109, wherein said means for transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data comprises:

means for modifying the request for the particular image data; and

means for transmitting the modified request for the particular image data to the image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data.

162. The computationally-implemented system of clause 161, wherein said means for modifying the request for the particular image data comprises:

means for removing designated image data from the request for the particular image data.

163. The computationally-implemented system of clause 162, wherein said means for removing designated image data from the request for the particular image data comprises:

means for removing designated image data from the request for the particular image data based on previously stored image data.

164. The computationally-implemented system of clause 163, wherein said means for removing designated image data from the request for the particular image data based on previously stored image data comprises:

means for removing designated image data from the request for the particular image data based on previously stored image data retrieved from the image sensor array.

165. The computationally-implemented system of clause 163, wherein said means for removing designated image data from the request for the particular image data based on previously stored image data comprises:

means for removing designated image data from the request for the particular image data based on previously stored image data that is an earlier-in-time version of the designated image data.

166. The computationally-implemented system of clause 165, wherein said means for removing designated image data from the request for the particular image data based on previously stored image data that is an earlier-in-time version of the designated image data comprises:

means for removing image data of one or more static objects from the request for the particular image data that is a view of a street, based on previously stored image data of the one or more static objects from an earlier-in-time version captured by the image sensor array.

167. The computationally-implemented system of clause 166, wherein said means for removing image data of one or more static objects from the request for the particular image data that is a view of a street, based on previously stored image data of the one or more static objects from an earlier-in-time version captured by the image sensor array comprises:

means for removing image data of one or more buildings from the request for the particular image data that is the view of the street, based on previously stored image data of the one or more static objects from an earlier-in-time version captured by the image sensor array.
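As illustration only, not the disclosed implementation: the following minimal sketch shows one way clauses 163 through 167 could trim a request using previously stored image data, assuming the request is expressed as tiles and that tiles covering static objects (such as buildings) can be served from an earlier-in-time cached copy. The cache layout and tile keys are hypothetical.

```python
# Minimal sketch of trimming a request against cached, earlier-in-time
# pixels: static tiles already stored need not be fetched again.
def trim_request(requested_tiles, cache, static_tiles):
    """Return only the tiles that must actually be fetched from the
    image sensor array; static tiles in the cache are reused."""
    to_fetch = []
    for tile in requested_tiles:
        if tile in static_tiles and tile in cache:
            continue  # reuse stored pixels for this static object
        to_fetch.append(tile)
    return to_fetch

cache = {("street", 3, 7): b"...pixels..."}   # earlier-in-time capture
static = {("street", 3, 7)}                   # a building footprint
wanted = [("street", 3, 7), ("street", 3, 8)]
print(trim_request(wanted, cache, static))    # [('street', 3, 8)]
```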

168. The computationally-implemented system of clause 161, wherein said means for modifying the request for the particular image data comprises:

means for removing portions of the request for the particular image data through pixel interpolation or extrapolation of portions of the request for the particular image data.

169. The computationally-implemented system of clause 168, wherein said means for removing portions of the request for the particular image data through pixel interpolation or extrapolation of portions of the request for the particular image data comprises:

means for removing one or more static objects from the request for the particular image data through pixel interpolation of the portions of the request for the particular image data.

170. The computationally-implemented system of clause 161, wherein said means for modifying the request for the particular image data comprises:

means for identifying at least one portion of the request for the particular image data that is already stored in a memory; and

means for removing the identified portion of the request for the particular image data.

171. The computationally-implemented system of clause 170, wherein said means for identifying at least one portion of the request for the particular image data that is already stored in a memory comprises:

means for identifying at least one portion of the request for particular image data that was previously captured by the image sensor array.

172. The computationally-implemented system of clause 170, wherein said means for identifying at least one portion of the request for the particular image data that is already stored in a memory comprises:

means for identifying one or more static objects of the request for particular image data that was previously captured by the image sensor array.

173. The computationally-implemented system of clause 172, wherein said means for identifying one or more static objects of the request for particular image data that was previously captured by the image sensor array comprises:

means for identifying one or more natural rock outcroppings of the request for particular image data that was previously captured by the image sensor array.

174. The computationally-implemented system of clause 109, wherein said means for transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data comprises:

means for determining a size of the request for the particular image data; and

means for transmitting the request for the particular image data for which the size has been determined to the image sensor array that is configured to capture the scene.

175. The computationally-implemented system of clause 174, wherein said means for determining a size of the request for the particular image data comprises:

means for determining the size of the request for the particular image data at least partially based on a property of a user device that requested at least a portion of the particular image data.

176. The computationally-implemented system of clause 175, wherein said means for determining the size of the request for the particular image data at least partially based on a property of a user device that requested at least a portion of the particular image data comprises:

means for determining the size of the request for the particular image data at least partially based on a resolution of the user device that requested the particular image data.

177. The computationally-implemented system of clause 174, wherein said means for determining a size of the request for the particular image data comprises:

means for determining the size of the request for the particular image data at least partially based on a device access level of a user device that requested at least a portion of the particular image data.

178. The computationally-implemented system of clause 174, wherein said means for determining a size of the request for the particular image data comprises:

means for determining the size of the request for the particular image data at least partially based on an available amount of bandwidth for communication with the image sensor array.

179. The computationally-implemented system of clause 174, wherein said means for determining a size of the request for the particular image data comprises:

means for determining the size of the request for the particular image data at least partially based on an amount of time that a user device that requested at least a portion of the particular image data has requested image data.

180. The computationally-implemented system of clause 174, wherein said means for determining a size of the request for the particular image data comprises:

means for determining the size of the request for the particular image data at least partially based on an available amount of bandwidth for communication with a user device that requested at least a portion of the particular image data.
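As a non-limiting sketch of the sizing logic recited in clauses 174 through 180: the requesting device's display resolution caps the useful pixel count, and the available bandwidth caps the bytes that can be carried per frame. The function name and all constants below are hypothetical.

```python
# Minimal sketch: size a request from properties of the requesting
# user device and the available communication bandwidth.
def request_size(device_w: int, device_h: int,
                 bandwidth_bps: float, fps: float,
                 bytes_per_px: int = 3) -> int:
    """Pixels to request: no more than the device can display, and no
    more than the link can carry at the desired frame rate."""
    display_cap = device_w * device_h
    link_cap = int(bandwidth_bps / 8 / fps / bytes_per_px)
    return min(display_cap, link_cap)

# A 1080p tablet on a 20 Mbit/s link at 30 fps is bandwidth-limited
# in this hypothetical: only 27777 raw pixels fit per frame.
print(request_size(1920, 1080, 20e6, 30))
```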

181. The computationally-implemented system of clause 109, wherein said means for receiving only the particular image data from the image sensor array comprises:

means for receiving only the particular image data from the image sensor array, wherein data from the scene other than the particular image data is discarded.

182. The computationally-implemented system of clause 109, wherein said means for receiving only the particular image data from the image sensor array comprises:

means for receiving only the particular image data from the image sensor array, wherein data from the scene other than the particular image data is stored at the image sensor array.

183. The computationally-implemented system of clause 109, wherein said means for receiving only the particular image data from the image sensor array comprises:

means for receiving the particular image data from the image sensor array in near-real time.

184. The computationally-implemented system of clause 109, wherein said means for receiving only the particular image data from the image sensor array comprises:

means for receiving the particular image data from the image sensor array in near-real time; and

means for retrieving data from the scene other than the particular image data at a later time.

185. The computationally-implemented system of clause 184, wherein said means for retrieving data from the scene other than the particular image data at a later time comprises:

means for retrieving data from the scene other than the particular image data at a time at which bandwidth is available to the image sensor array.

186. The computationally-implemented system of clause 184, wherein said means for retrieving data from the scene other than the particular image data at a later time comprises:

means for retrieving data from the scene other than the particular image data at a time that represents off-peak usage for the image sensor array.

187. The computationally-implemented system of clause 184, wherein said means for retrieving data from the scene other than the particular image data at a later time comprises:

means for retrieving data from the scene other than the particular image data at a time when no particular image data is requested.

188. The computationally-implemented system of clause 184, wherein said means for retrieving data from the scene other than the particular image data at a later time comprises:

means for retrieving data from the scene other than the particular image data at a time at which fewer users are requesting particular image data than the image sensor array has capacity to serve.
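For illustration only, under assumptions not found in the claims: one way to realize the near-real-time-plus-deferred pattern of clauses 183 through 188 is to queue the non-requested scene data and drain the queue only while spare capacity exists. The queue, callbacks, and capacity check below are hypothetical placeholders.

```python
# Minimal sketch: deliver the particular image data immediately and
# defer the remainder of the scene until off-peak capacity is free.
from collections import deque

deferred = deque()  # scene data nobody has requested yet

def retrieve_later(chunk):
    print(f"retrieving {len(chunk)} deferred bytes")

def on_capture(particular, remainder, send, capacity_free):
    send(particular)            # near-real-time path to the requestor
    deferred.append(remainder)  # hold back the rest of the scene
    # Off-peak path: drain only while the array has spare capacity.
    while deferred and capacity_free():
        retrieve_later(deferred.popleft())

on_capture(b"hd-view", b"rest-of-scene",
           send=lambda d: print(f"sent {d!r}"),
           capacity_free=lambda: True)
```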

189. The computationally-implemented system of clause 109, wherein said means for receiving only the particular image data from the image sensor array comprises:

means for receiving only the particular image data that includes audio data from the sensor array.

190. The computationally-implemented system of clause 109, wherein said means for receiving only the particular image data from the image sensor array comprises:

means for receiving only the particular image data that was determined to contain a particular requested image object from the image sensor array.

191. The computationally-implemented system of clause 190, wherein said means for receiving only the particular image data that was determined to contain a particular requested image object from the image sensor array comprises:

means for receiving only the particular image data that was determined to contain a particular requested image object by the image sensor array.

192. The computationally-implemented system of clause 109, wherein said means for transmitting the received particular image data to at least one requestor comprises:

means for transmitting the received particular image data to a user device.

193. The computationally-implemented system of clause 192, wherein said means for transmitting the received particular image data to a user device comprises:

means for transmitting at least a portion of the received particular image data to a user device that requested the particular image data.

194. The computationally-implemented system of clause 109, wherein said means for transmitting the received particular image data to at least one requestor comprises:

means for separating the received particular image data into a set of one or more requested images; and

means for transmitting at least one image of the set of one or more requested images to a particular requestor that requested the at least one image.

195. The computationally-implemented system of clause 194, wherein said means for separating the received particular image data into a set of one or more requested images comprises:

means for separating the received particular image data into a first requested image and a second requested image.
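An illustrative sketch of the separation and routing recited in clauses 194 and 195, not the disclosed implementation: the combined payload is assumed here to be a mapping from region identifiers to pixel buffers, which is a modeling choice introduced for this example only.

```python
# Minimal sketch: separate received particular image data into the set
# of requested images and route each to the requestor that asked for it.
def deliver(requestor, image):
    print(f"to {requestor}: {len(image)} bytes")

def separate_and_route(combined: dict, requests: dict):
    """combined: region-id -> pixels; requests: requestor -> region-id."""
    for requestor, region in requests.items():
        image = combined.get(region)
        if image is not None:
            deliver(requestor, image)

combined = {"r1": b"\x00" * 1024, "r2": b"\x00" * 2048}
separate_and_route(combined, {"first-user": "r1", "second-user": "r2"})
```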

196. The computationally-implemented system of clause 109, wherein said means for transmitting the received particular image data to at least one requestor comprises:

means for transmitting the received particular image data to a user device that requested an image that is part of the received particular image data.

197. The computationally-implemented system of clause 109, wherein said means for transmitting the received particular image data to at least one requestor comprises:

means for transmitting a first portion of the received particular image data to a first requestor; and

means for transmitting a second portion of the received particular image data to a second requestor.

198. The computationally-implemented system of clause 197, wherein said means for transmitting a first portion of the received particular image data to a first requestor comprises:

means for transmitting the first portion of the received particular image data to the first requestor that requested the first portion of the received particular image data.

199. The computationally-implemented system of clause 198, wherein said means for transmitting the first portion of the received particular image data to the first requestor that requested the first portion of the received particular image data comprises:

means for transmitting an image that contains a particular football player to a television device that is configured to display a football game and that requested the image that contains the particular football player.

200. The computationally-implemented system of clause 197, wherein said means for transmitting a second portion of the received particular image data to a second requestor comprises:

means for transmitting the second portion of the received particular image data to the second requestor that requested the second portion of the received particular image data.

201. The computationally-implemented system of clause 200, wherein said means for transmitting the second portion of the received particular image data to the second requestor that requested the second portion of the received particular image data comprises:

means for transmitting an image that contains a view of a motor vehicle to a tablet device that requested a street view image of a particular corner of a city.

202. The computationally-implemented system of clause 109, wherein said means for transmitting the received particular image data to at least one requestor comprises:

means for transmitting at least a portion of the received particular image data without alteration to at least one requestor.

203. The computationally-implemented system of clause 109, wherein said means for transmitting the received particular image data to at least one requestor comprises:

means for adding supplemental data to at least a portion of the received particular image data to generate transmission image data; and

means for transmitting the generated transmission image data to at least one requestor.

204. The computationally-implemented system of clause 203, wherein said means for adding supplemental data to at least a portion of the received particular image data to generate transmission image data comprises:

means for adding advertisement data to at least a portion of the received particular image data to generate transmission image data.

205. The computationally-implemented system of clause 204, wherein said means for adding advertisement data to at least a portion of the received particular image data to generate transmission image data comprises:

means for adding context-based advertisement data that is at least partially based on the received particular image data to at least the portion of the received particular image data.

206. The computationally-implemented system of clause 205, wherein said means for adding context-based advertisement data that is at least partially based on the received particular image data to at least the portion of the received particular image data comprises:

means for adding animal rights donation fund advertisement data that is at least partially based on the received particular image data of a tiger at a jungle oasis, to the received particular image data of the tiger at the jungle oasis.
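For illustration only: the context-based supplemental-data step of clauses 203 through 206 can be pictured as a lookup from content tags detected in the particular image data to an overlay. The tag-to-advertisement table and payload framing below are hypothetical.

```python
# Minimal sketch: add context-based supplemental data to the received
# particular image data to generate transmission image data.
ADS = {
    "tiger": "animal rights donation fund",
    "quarterback": "fantasy football statistics",
}

def add_supplemental(image: bytes, tags: list) -> dict:
    """Wrap the received pixels with any matching supplemental data."""
    extras = [ADS[t] for t in tags if t in ADS]
    return {"pixels": image, "supplemental": extras}

print(add_supplemental(b"...", ["tiger"]))
```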

207. The computationally-implemented system of clause 203, wherein said means for adding supplemental data to at least a portion of the received particular image data to generate transmission image data comprises:

means for adding related visual data related to the received particular image data to at least a portion of the received particular image data to generate transmission image data.

208. The computationally-implemented system of clause 207, wherein said means for adding related visual data related to the received particular image data to at least a portion of the received particular image data to generate transmission image data comprises:

means for adding fantasy football statistical data to the received particular image data of a football quarterback that appears in the received particular image data of the quarterback in a football game to generate the transmission image data.

209. The computationally-implemented system of clause 109, wherein said means for transmitting the received particular image data to at least one requestor comprises:

means for modifying data of a portion of the received particular image data to generate transmission image data; and

means for transmitting the generated transmission image data to at least one requestor.

210. The computationally-implemented system of clause 209, wherein said means for modifying data of a portion of the received particular image data to generate transmission image data comprises:

means for performing image manipulation modifications of the received particular image data to generate transmission image data.

211. The computationally-implemented system of clause 210, wherein said means for performing image manipulation modifications of the received particular image data to generate transmission image data comprises:

means for performing contrast balancing on the received particular image data to generate transmission image data.

212. The computationally-implemented system of clause 210, wherein said means for performing image manipulation modifications of the received particular image data to generate transmission image data comprises:

means for performing color modification balancing on the received particular image data to generate transmission image data.

213. The computationally-implemented system of clause 209, wherein said means for modifying data of a portion of the received particular image data to generate transmission image data comprises:

means for redacting at least a portion of the received particular image data to generate the transmission image data.

214. The computationally-implemented system of clause 213, wherein said means for redacting at least a portion of the received particular image data to generate the transmission image data comprises:

means for redacting at least a portion of the received particular image data based on a security clearance level of the requestor.

215. The computationally-implemented system of clause 214, wherein said means for redacting at least a portion of the received particular image data based on a security clearance level of the requestor comprises:

means for redacting a tank from the received particular image data that includes a satellite image that includes a military base, based on an insufficient security clearance level of a device that requested the particular image data.
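A minimal sketch, not the disclosed mechanism, of the clearance-based redaction in clauses 213 through 215: regions labeled above the requestor's security clearance are blacked out before the transmission image data is generated. The byte-span model of a region and the clearance levels are assumptions made for this example.

```python
# Minimal sketch: redact labeled spans of the received particular image
# data when the requestor's clearance is insufficient.
def redact(pixels: bytearray, regions, clearance: int) -> bytearray:
    """regions: list of (start, end, required_level) byte spans."""
    out = bytearray(pixels)
    for start, end, required in regions:
        if clearance < required:
            out[start:end] = b"\x00" * (end - start)  # black out the span
    return out

frame = bytearray(b"satellite-image-with-a-tank-in-it")
# The span covering "tank" requires level 3; the requestor has level 1.
print(redact(frame, [(23, 27, 3)], clearance=1))
```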

216. The computationally-implemented system of clause 109, wherein said means for transmitting the received particular image data to at least one requestor comprises:

means for transmitting a lower-resolution version of the received particular image data to the at least one requestor; and

means for transmitting a full-resolution version of the received particular image data to the at least one requestor.
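By way of illustration of the progressive delivery in clause 216, under assumptions not taken from the disclosure: a naive row-and-column decimation stands in below for whatever scaler a real system would use, and the send callback is a hypothetical transport.

```python
# Minimal sketch: transmit a lower-resolution version first, then the
# full-resolution version of the received particular image data.
def downsample(rows, factor=4):
    """Keep every factor-th pixel of every factor-th row."""
    return [row[::factor] for row in rows[::factor]]

def transmit_progressive(rows, send):
    send("preview", downsample(rows))  # fast, lower-resolution pass
    send("full", rows)                 # full-resolution follow-up

image = [[(r * 16 + c) % 256 for c in range(16)] for r in range(16)]
transmit_progressive(image, lambda tag, px: print(tag, len(px), "rows"))
```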

217. A computationally-implemented system, comprising

circuitry for acquiring a request for particular image data that is part of a scene; circuitry for transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data;

circuitry for receiving only the particular image data from the image sensor array; and

circuitry for transmitting the received particular image data to at least one requestor.

218. The computationally-implemented system of clause 217, wherein said circuitry for acquiring a request for particular image data that is part of a scene comprises:

circuitry for acquiring the request for particular image data of the scene that includes one or more images.

219. The computationally-implemented system of clause 217, wherein said circuitry for acquiring a request for particular image data that is part of a scene comprises:

circuitry for receiving the request for particular image data of the scene.

220. The computationally-implemented system of clause 219, wherein said circuitry for receiving the request for particular image data of the scene comprises:

circuitry for receiving the request for particular image data of the scene from a user device.

221. The computationally-implemented system of clause 220, wherein said circuitry for receiving the request for particular image data of the scene from a user device comprises:

circuitry for receiving the request for particular image data of the scene from a user device that is configured to display at least a portion of the scene.

222. The computationally-implemented system of clause 221, wherein said circuitry for receiving the request for particular image data of the scene from a user device that is configured to display at least a portion of the scene comprises:

circuitry for receiving the request for particular image data of the scene from a user device that is configured to display at least a portion of the scene in a viewfinder.

223. The computationally-implemented system of clause 220, wherein said circuitry for receiving the request for particular image data of the scene from a user device comprises:

circuitry for receiving the request for particular image data of the scene from the user device that is configured to receive a selection of a particular image.

224. The computationally-implemented system of clause 223, wherein said circuitry for receiving the request for particular image data of the scene from the user device that is configured to receive a selection of a particular image comprises:

circuitry for receiving the request for particular image data of the scene from the user device that is configured to receive the selection of the particular image, said selection based on a view of the scene.

225. The computationally-implemented system of clause 223, wherein said circuitry for receiving the request for particular image data of the scene from the user device that is configured to receive a selection of a particular image comprises:

circuitry for receiving the request for particular image data of the scene from the user device that is configured to receive the selection of the particular image, wherein the user device is one or more of a smartphone, television, computer screen, tablet, camera, appliance, and augmented reality device.

226. The computationally-implemented system of clause 217, wherein said circuitry for acquiring a request for particular image data that is part of a scene comprises:

circuitry for acquiring the request for particular image data of the scene, wherein the scene is the image data collected by the array of more than one image sensor.

227. The computationally-implemented system of clause 217, wherein said circuitry for acquiring a request for particular image data that is part of a scene comprises:

circuitry for acquiring the request for particular image data of the scene, wherein the scene is a representation of the image data collected by the array of more than one image sensor.

228. The computationally-implemented system of clause 227, wherein said circuitry for acquiring the request for particular image data of the scene, wherein the scene is a representation of the image data collected by the array of more than one image sensor comprises:

circuitry for acquiring the request for particular image data of the scene, wherein the scene is a sampling of the image data collected by the array of more than one image sensor.

229. The computationally-implemented system of clause 227, wherein said circuitry for acquiring the request for particular image data of the scene, wherein the scene is a representation of the image data collected by the array of more than one image sensor comprises:

circuitry for acquiring the request for particular image data of the scene, wherein the scene is a subset of the image data collected by the array of more than one image sensor.

230. The computationally-implemented system of clause 227, wherein said circuitry for acquiring the request for particular image data of the scene, wherein the scene is a representation of the image data collected by the array of more than one image sensor comprises:

circuitry for acquiring the request for particular image data of the scene, wherein the scene is a low-resolution version of the image data collected by the array of more than one image sensor.

231. The computationally-implemented system of clause 217, wherein said circuitry for acquiring a request for particular image data that is part of a scene comprises:

circuitry for acquiring the request for particular image data of a scene that is a football game.

232. The computationally-implemented system of clause 217, wherein said circuitry for acquiring a request for particular image data that is part of a scene comprises:

circuitry for acquiring the request for particular image data of a scene that is a street view of an area.

233. The computationally-implemented system of clause 217, wherein said circuitry for acquiring a request for particular image data that is part of a scene comprises:

circuitry for acquiring the request for particular image data of a scene that is a tourist destination.

234. The computationally-implemented system of clause 217, wherein said circuitry for acquiring a request for particular image data that is part of a scene comprises:

circuitry for acquiring the request for particular image data of a scene that is an inside of a home.

235. The computationally-implemented system of clause 217, wherein said circuitry for acquiring a request for particular image data that is part of a scene comprises:

circuitry for acquiring the request for particular image data of the scene, wherein the particular image data is an image that is a portion of the scene.

236. The computationally-implemented system of clause 235, wherein said circuitry for acquiring the request for particular image data of the scene, wherein the particular image data is an image that is a portion of the scene comprises:

circuitry for acquiring the request for particular image data of the scene, wherein the particular image data includes image data of a particular football player and the scene is a football field.

237. The computationally-implemented system of clause 235, wherein said circuitry for acquiring the request for particular image data of the scene, wherein the particular image data is an image that is a portion of the scene comprises:

circuitry for acquiring the request for particular image data of the scene, wherein the particular image data includes image data of a license plate of a vehicle, and the scene is an image representation of a highway bridge.

238. The computationally-implemented system of clause 217, wherein said circuitry for acquiring a request for particular image data that is part of a scene comprises:

circuitry for acquiring a request for a particular image object located in the scene; and

circuitry for determining the particular image data of the scene that contains the particular image object.

239. The computationally-implemented system of clause 238, wherein said circuitry for acquiring a request for a particular image object located in the scene comprises:

circuitry for acquiring a request for a particular person located in the scene.

240. The computationally-implemented system of clause 238, wherein said circuitry for acquiring a request for a particular image object located in the scene comprises:

circuitry for acquiring a request for a basketball located in the scene that is a basketball arena.

241. The computationally-implemented system of clause 238, wherein said circuitry for acquiring a request for a particular image object located in the scene comprises:

circuitry for acquiring a request for a motor vehicle located in the scene.

242. The computationally-implemented system of clause 238, wherein said circuitry for acquiring a request for a particular image object located in the scene comprises:

circuitry for acquiring a request for any human object representations located in the scene.

243. The computationally-implemented system of clause 238, wherein said circuitry for determining the particular image data of the scene that contains the particular image object comprises:

circuitry for determining the particular image data of the scene that contains the particular image object through application of automated pattern recognition to scene image data.
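Purely as illustration of the automated pattern recognition named in clause 243, and not the disclosed detector: a brute-force template match over a tiny grayscale scene locates the requested image object so that the surrounding particular image data can be selected. A real system would presumably use a trained detector; the exhaustive search below only makes the clause concrete.

```python
# Minimal sketch: locate a requested image object by exhaustive
# template matching (sum of absolute differences) over scene data.
def find_object(scene, template):
    """Return (row, col) of the best-matching placement of template."""
    th, tw = len(template), len(template[0])
    best, best_pos = float("inf"), None
    for r in range(len(scene) - th + 1):
        for c in range(len(scene[0]) - tw + 1):
            cost = sum(abs(scene[r + i][c + j] - template[i][j])
                       for i in range(th) for j in range(tw))
            if cost < best:
                best, best_pos = cost, (r, c)
    return best_pos

scene = [[0] * 8 for _ in range(8)]
scene[3][4] = scene[3][5] = scene[4][4] = scene[4][5] = 255
print(find_object(scene, [[255, 255], [255, 255]]))  # (3, 4)
```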

244. The computationally-implemented system of clause 238, wherein said circuitry for determining the particular image data of the scene that contains the particular image object comprises:

circuitry for determining the particular image data of the scene that contains the particular image object through identification of the particular image object in previous scene data that is image data of the scene from at least one previous moment in time.

245. The computationally-implemented system of clause 238, wherein said circuitry for determining the particular image data of the scene that contains the particular image object comprises:

circuitry for determining the particular image data of the scene that contains the particular image object through identification of the particular image object in cached scene data.

246. The computationally-implemented system of clause 245, wherein said circuitry for determining the particular image data of the scene that contains the particular image object through identification of the particular image object in cached scene data comprises:

circuitry for determining the particular image data of the scene that contains the particular image object through identification of the particular image object in cached scene data that was previously transmitted from the image sensor array that includes more than one image sensor.

247. The computationally-implemented system of clause 245, wherein said circuitry for determining the particular image data of the scene that contains the particular image object through identification of the particular image object in cached scene data comprises:

circuitry for determining the particular image data of the scene that contains the particular image object through identification of the particular image object in cached scene data that was previously transmitted from the image sensor array that includes more than one image sensor at a time when bandwidth was available for a connection to the image sensor array that includes more than one image sensor.

248. The computationally-implemented system of clause 217, wherein said circuitry for acquiring a request for particular image data that is part of a scene comprises:

circuitry for receiving a first request for first particular image data from the scene from a first requestor; and

circuitry for receiving a second request for second particular image data from the scene from a second requestor that is different than the first requestor.

249. The computationally-implemented system of clause 217, wherein said circuitry for acquiring a request for particular image data that is part of a scene comprises:

circuitry for combining a received first request for first particular image data from the scene and a received second request for second particular image data from the scene.

250. The computationally-implemented system of clause 249, wherein said circuitry for combining a received first request for first particular image data from the scene and a received second request for second particular image data from the scene comprises:

circuitry for combining the received first request for first particular image data from the scene and the received second request for second particular image data from the scene into the request for particular image data that consolidates overlapping requested image data.

251. The computationally-implemented system of clause 250, wherein said circuitry for acquiring a request for particular image data that is part of a scene comprises:

circuitry for receiving a first request for first particular image data from the scene and a second request for second particular image data from the scene; and

circuitry for combining the received first request and the received second request into the request for particular image data.

252. The computationally-implemented system of clause 251, wherein said circuitry for combining the received first request and the received second request into the request for particular image data comprises:

circuitry for removing common pixel data between the received first request and the received second request.

253. The computationally-implemented system of clause 217, wherein said circuitry for acquiring a request for particular image data that is part of a scene comprises:

circuitry for acquiring the request for particular image data that is part of the scene, wherein the particular image data includes video data.

254. The computationally-implemented system of clause 217, wherein said circuitry for acquiring a request for particular image data that is part of a scene comprises:

circuitry for acquiring the request for particular image data that is part of the scene, wherein the particular image data includes audio data.

255. The computationally-implemented system of clause 217, wherein said circuitry for acquiring a request for particular image data that is part of a scene comprises:

circuitry for acquiring the request for particular image data that is part of the scene from a user device that receives the request for particular image data through an audio interface.

256. The computationally-implemented system of clause 255, wherein said circuitry for acquiring the request for particular image data that is part of the scene from a user device that receives the request for particular image data through an audio interface comprises:

circuitry for acquiring the request for particular image data that is part of the scene from a user device that has a microphone that receives a spoken request for particular image data from the user.

257. The computationally-implemented system of clause 217, wherein said circuitry for transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data comprises:

circuitry for transmitting the request for the particular image data of the scene to the image sensor array that includes more than one image sensor and that is configured to capture an image that is larger than the requested image data.

258. The computationally-implemented system of clause 217, wherein said circuitry for transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data comprises:

circuitry for transmitting the request for the particular image data of the scene to the image sensor array that includes two image sensors arranged side by side and angled toward each other and that is configured to capture an image that is larger than the requested image data.

259. The computationally-implemented system of clause 217, wherein said circuitry for transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data comprises:

circuitry for transmitting the request for the particular image data of the scene to the image sensor array that includes the array of image sensors arranged in a grid and that is configured to capture an image that is larger than the requested image data.

260. The computationally-implemented system of clause 217, wherein said circuitry for transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data comprises:

circuitry for transmitting the request for the particular image data of the scene to the image sensor array that includes the array of image sensors arranged in a line such that a field of view is greater than 120 degrees and that is configured to capture an image that is larger than the requested image data.

261. The computationally-implemented system of clause 217, wherein said circuitry for transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data comprises:

circuitry for transmitting the request for the particular image data of the scene to the image sensor array that includes a grouping of nonlinearly, nonsequentially arranged stationary image sensors and that is configured to capture an image that is larger than the requested image data.

262. The computationally-implemented system of clause 217, wherein said circuitry for transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data comprises:

circuitry for transmitting the request for the particular image data of the scene to the image sensor array that includes the array of static image sensors and that is configured to capture an image that is larger than the requested image data.

263. The computationally-implemented system of clause 262, wherein said circuitry for transmitting the request for the particular image data of the scene to the image sensor array that includes the array of static image sensors and that is configured to capture an image that is larger than the requested image data comprises:

circuitry for transmitting the request for the particular image data of the scene to the image sensor array that includes the array of image sensors that have a fixed focal length and a fixed field of view and that is configured to capture an image that is larger than the requested image data.

264. The computationally-implemented system of clause 217, wherein said circuitry for transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data comprises:

circuitry for transmitting the request for the particular image data of the scene to the image sensor array that includes the array of image sensors mounted on a movable platform and that is configured to capture the scene that is larger than the requested image data.

265. The computationally-implemented system of clause 217, wherein said circuitry for transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data comprises:

circuitry for transmitting the request for the particular image data of the scene to the image sensor array that includes more than one image sensor and that is configured to capture the scene that represents more image data than the requested particular image data.

266. The computationally-implemented system of clause 265, wherein said circuitry for transmitting the request for the particular image data of the scene to the image sensor array that includes more than one image sensor and that is configured to capture the scene that represents more image data than the requested particular image data comprises:

circuitry for transmitting the request for the particular image data of the scene to the image sensor array that includes more than one image sensor and that is configured to capture the scene that represents more than ten times as much image data as the requested particular image data.

267. The computationally-implemented system of clause 265, wherein said circuitry for transmitting the request for the particular image data of the scene to the image sensor array that includes more than one image sensor and that is configured to capture the scene that represents more image data than the requested particular image data comprises:

circuitry for transmitting the request for the particular image data of the scene to the image sensor array that includes more than one image sensor and that is configured to capture the scene that represents more than one hundred times as much image data as the requested particular image data.

268. The computationally-implemented system of clause 217, wherein said circuitry for transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data comprises:

circuitry for transmitting the request for the particular image data to the image sensor array that is configured to capture the scene that represents a greater field of view than the requested particular image data.

269. The computationally-implemented system of clause 217, wherein said circuitry for transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data comprises:

circuitry for modifying the request for the particular image data; and

circuitry for transmitting the modified request for the particular image data to the image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data.

270. The computationally-implemented system of clause 269, wherein said circuitry for modifying the request for the particular image data comprises:

circuitry for removing designated image data from the request for the particular image data.

271. The computationally-implemented system of clause 270, wherein said circuitry for removing designated image data from the request for the particular image data comprises:

circuitry for removing designated image data from the request for the particular image data based on previously stored image data.

272. The computationally-implemented system of clause 271, wherein said circuitry for removing designated image data from the request for the particular image data based on previously stored image data comprises:

circuitry for removing designated image data from the request for the particular image data based on previously stored image data retrieved from the image sensor array.

273. The computationally-implemented system of clause 271, wherein said circuitry for removing designated image data from the request for the particular image data based on previously stored image data comprises:

circuitry for removing designated image data from the request for the particular image data based on previously stored image data that is an earlier-in-time version of the designated image data.

274. The computationally-implemented system of clause 273, wherein said circuitry for removing designated image data from the request for the particular image data based on previously stored image data that is an earlier-in-time version of the designated image data comprises:

circuitry for removing image data of one or more static objects from the request for the particular image data that is a view of a street, based on previously stored image data of the one or more static objects from an earlier-in-time version captured by the image sensor array.

275. The computationally-implemented system of claim 274, wherein said circuitry for removing image data of one or more static objects from the request for the particular image data that is a view of a street, based on previously stored image data of the one or more static objects from an earlier-in-time version captured by the image sensor array comprises:

circuitry for removing image data of one or more buildings from the request for the particular image data that is the view of the street, based on previously stored image data of the one or more static objects from an earlier-in-time version captured by the image sensor array.

276. The computationally-implemented system of claim 269, wherein said circuitry for modifying the request for the particular image data comprises:

circuitry for removing portions of the request for the particular image data through pixel interpolation or extrapolation of portions of the request for the particular image data.

277. The computationally-implemented system of claim 276, wherein said circuitry for removing portions of the request for the particular image data through pixel interpolation or extrapolation of portions of the request for the particular image data comprises:

circuitry for removing one or more static objects from the request for the particular image data through pixel interpolation of the portions of the request for the particular image data.

278. The computationally-implemented system of claim 269, wherein said circuitry for modifying the request for the particular image data comprises:

circuitry for identifying at least one portion of the request for the particular image data that is already stored in a memory; and

circuitry for removing the identified portion of the request for the particular image data.

279. The computationally-implemented system of claim 278, wherein said circuitry for identifying at least one portion of the request for the particular image data that is already stored in a memory comprises:

circuitry for identifying at least one portion of the request for particular image data that was previously captured by the image sensor array.

280. The computationally-implemented system of claim 278, wherein said circuitry for identifying at least one portion of the request for the particular image data that is already stored in a memory comprises:

circuitry for identifying one or more static objects of the request for particular image data that was previously captured by the image sensor array.

281. The computationally-implemented system of claim 280, wherein said circuitry for identifying one or more static objects of the request for particular image data that was previously captured by the image sensor array comprises:

circuitry for identifying one or more natural rock outcroppings of the request for particular image data that was previously captured by the image sensor array.

282. The computationally-implemented system of claim 217, wherein said circuitry for transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data comprises:

circuitry for determining a size of the request for the particular image data; and

circuitry for transmitting the request for the particular image data for which the size has been determined to the image sensor array that is configured to capture the scene.

283. The computationally-implemented system of claim 282, wherein said circuitry for determining a size of the request for the particular image data comprises:

circuitry for determining the size of the request for the particular image data at least partially based on a property of a user device that requested at least a portion of the particular image data.

284. The computationally-implemented system of claim 283, wherein said circuitry for determining the size of the request for the particular image data at least partially based on a property of a user device that requested at least a portion of the particular image data comprises:

circuitry for determining the size of the request for the particular image data at least partially based on a resolution of the user device that requested the particular image data.

285. The computationally-implemented system of claim 282, wherein said circuitry for determining a size of the request for the particular image data comprises:

circuitry for determining the size of the request for the particular image data at least partially based on a device access level of a user device that requested at least a portion of the particular image data.

286. The computationally-implemented system of claim 282, wherein said circuitry for determining a size of the request for the particular image data comprises:

circuitry for determining the size of the request for the particular image data at least partially based on an available amount of bandwidth for communication with the image sensor array.

287. The computationally-implemented system of claim 282, wherein said circuitry for determining a size of the request for the particular image data comprises:

circuitry for determining the size of the request for the particular image data at least partially based on an amount of time that a user device that requested at least a portion of the particular image data has requested image data.

288. The computationally-implemented system of claim 282, wherein said circuitry for determining a size of the request for the particular image data comprises:

circuitry for determining the size of the request for the particular image data at least partially based on an available amount of bandwidth for communication with a user device that requested at least a portion of the particular image data.

289. The computationally-implemented system of claim 217, wherein said circuitry for receiving only the particular image data from the image sensor array comprises:

circuitry for receiving only the particular image data from the image sensor array, wherein data from the scene other than the particular image data is discarded.

290. The computationally-implemented system of claim 217, wherein said circuitry for receiving only the particular image data from the image sensor array comprises:

circuitry for receiving only the particular image data from the image sensor array, wherein data from the scene other than the particular image data is stored at the image sensor array.

291. The computationally-implemented system of claim 217, wherein said circuitry for receiving only the particular image data from the image sensor array comprises:

circuitry for receiving the particular image data from the image sensor array in near-real time.

292. The computationally-implemented system of claim 217, wherein said circuitry for receiving only the particular image data from the image sensor array comprises:

circuitry for receiving the particular image data from the image sensor array in near-real time; and

circuitry for retrieving data from the scene other than the particular image data at a later time.

293. The computationally-implemented system of claim 292, wherein said circuitry for retrieving data from the scene other than the particular image data at a later time comprises:

circuitry for retrieving data from the scene other than the particular image data at a time at which bandwidth is available to the image sensor array.

294. The computationally-implemented system of claim 292, wherein said circuitry for retrieving data from the scene other than the particular image data at a later time comprises:

circuitry for retrieving data from the scene other than the particular image data at a time that represents off-peak usage for the image sensor array.

295. The computationally-implemented system of claim 292, wherein said circuitry for retrieving data from the scene other than the particular image data at a later time comprises:

circuitry for retrieving data from the scene other than the particular image data at a time when no particular image data is requested.

296. The computationally-implemented system of claim 292, wherein said circuitry for retrieving data from the scene other than the particular image data at a later time comprises:

circuitry for retrieving data from the scene other than the particular image data at a time at which fewer users are requesting particular image data than the image sensor array has capacity to serve.

297. The computationally-implemented system of claim 217, wherein said circuitry for receiving only the particular image data from the image sensor array comprises:

circuitry for receiving only the particular image data that includes audio data from the sensor array.

298. The computationally-implemented system of claim 217, wherein said circuitry for receiving only the particular image data from the image sensor array comprises:

circuitry for receiving only the particular image data that was determined to contain a particular requested image object from the image sensor array.

299. The computationally-implemented system of claim 298, wherein said circuitry for receiving only the particular image data that was determined to contain a particular requested image object from the image sensor array comprises:

circuitry for receiving only the particular image data that was determined to contain a particular requested image object by the image sensor array.

300. The computationally-implemented system of claim 217, wherein said circuitry for transmitting the received particular image data to at least one requestor comprises:

circuitry for transmitting the received particular image data to a user device.

301. The computationally-implemented system of claim 300, wherein said circuitry for transmitting the received particular image data to a user device comprises:

circuitry for transmitting at least a portion of the received particular image data to a user device that requested the particular image data.

302. The computationally-implemented system of claim 217, wherein said circuitry for transmitting the received particular image data to at least one requestor comprises:

circuitry for separating the received particular image data into a set of one or more requested images; and

circuitry for transmitting at least one image of the set of one or more requested images to a particular requestor that requested the at least one image.

303. The computationally-implemented system of claim 302, wherein said circuitry for separating the received particular image data into a set of one or more requested images comprises:

circuitry for separating the received particular image data into a first requested image and a second requested image.

304. The computationally-implemented system of claim 217, wherein said circuitry for transmitting the received particular image data to at least one requestor comprises:

circuitry for transmitting the received particular image data to a user device that requested an image that is part of the received particular image data.

305. The computationally-implemented system of claim 217, wherein said circuitry for transmitting the received particular image data to at least one requestor comprises:

circuitry for transmitting a first portion of the received particular image data to a first requestor; and

circuitry for transmitting a second portion of the received particular image data to a second requestor.

306. The computationally-implemented system of claim 305, wherein said circuitry for transmitting a first portion of the received particular image data to a first requestor comprises:

circuitry for transmitting the first portion of the received particular image data to the first requestor that requested the first portion of the received particular image data.

307. The computationally-implemented system of claim 306, wherein said circuitry for transmitting the first portion of the received particular image data to the first requestor that requested the first portion of the received particular image data comprises:

circuitry for transmitting an image that contains a particular football player to a television device that is configured to display a football game and that requested the image that contains the particular football player.

308. The computationally-implemented system of claim 305, wherein said circuitry for transmitting a second portion of the received particular image data to a second requestor comprises:

circuitry for transmitting the second portion of the received particular image data to the second requestor that requested the second portion of the received particular image data.

309. The computationally-implemented system of claim 308, wherein said circuitry for transmitting the second portion of the received particular image data to the second requestor that requested the second portion of the received particular image data comprises:

circuitry for transmitting an image that contains a view of a motor vehicle to a tablet device that requested a street view image of a particular corner of a city.

310. The computationally-implemented system of claim 217, wherein said circuitry for transmitting the received particular image data to at least one requestor comprises:

circuitry for transmitting at least a portion of the received particular image data without alteration to at least one requestor.

311. The computationally-implemented system of claim 217, wherein said circuitry for transmitting the received particular image data to at least one requestor comprises:

circuitry for adding supplemental data to at least a portion of the received particular image data to generate transmission image data; and

circuitry for transmitting the generated transmission image data to at least one requestor.

312. The computationally-implemented system of claim 311, wherein said circuitry for adding supplemental data to at least a portion of the received particular image data to generate transmission image data comprises:

circuitry for adding advertisement data to at least a portion of the received particular image data to generate transmission image data.

313. The computationally-implemented system of claim 312, wherein said circuitry for adding advertisement data to at least a portion of the received particular image data to generate transmission image data comprises:

circuitry for adding context-based advertisement data that is at least partially based on the received particular image data to at least the portion of the received particular image data.

314. The computationally-implemented system of claim 313, wherein said circuitry for adding context-based advertisement data that is at least partially based on the received particular image data to at least the portion of the received particular image data comprises:

circuitry for adding animal rights donation fund advertisement data that is at least partially based on the received particular image data of a tiger at a jungle oasis, to the received particular image data of the tiger at the jungle oasis.

315. The computationally-implemented system of claim 311, wherein said circuitry for adding supplemental data to at least a portion of the received particular image data to generate transmission image data comprises:

circuitry for adding related visual data related to the received particular image data to at least a portion of the received particular image data to generate transmission image data.

316. The computationally-implemented system of claim 315, wherein said circuitry for adding related visual data related to the received particular image data to at least a portion of the received particular image data to generate transmission image data comprises:

circuitry for adding fantasy football statistical data to the received particular image data of a football quarterback that appears in the received particular image data of the quarterback in a football game to generate the transmission image data.

317. The computationally-implemented system of claim 217, wherein said circuitry for transmitting the received particular image data to at least one requestor comprises:

circuitry for modifying data of a portion of the received particular image data to generate transmission image data; and

circuitry for transmitting the generated transmission image data to at least one requestor.

318. The computationally-implemented system of claim 317, wherein said circuitry for modifying data of a portion of the received particular image data to generate transmission image data comprises:

circuitry for performing image manipulation modifications of the received particular image data to generate transmission image data.

319. The computationally-implemented system of claim 318, wherein said circuitry for performing image manipulation modifications of the received particular image data to generate transmission image data comprises:

circuitry for performing contrast balancing on the received particular image data to generate transmission image data.

320. The computationally-implemented system of claim 318, wherein said circuitry for performing image manipulation modifications of the received particular image data to generate transmission image data comprises:

circuitry for performing color modification balancing on the received particular image data to generate transmission image data.

321. The computationally-implemented system of claim 317, wherein said circuitry for modifying data of a portion of the received particular image data to generate transmission image data comprises:

circuitry for redacting at least a portion of the received particular image data to generate the transmission image data.

322. The computationally-implemented system of claim 321, wherein said circuitry for redacting at least a portion of the received particular image data to generate the transmission image data comprises:

circuitry for redacting at least a portion of the received particular image data based on a security clearance level of the requestor.

323. The computationally-implemented system of claim 322, wherein said circuitry for redacting at least a portion of the received particular image data based on a security clearance level of the requestor comprises:

circuitry for redacting a tank from the received particular image data that includes a satellite image that includes a military base, based on an insufficient security clearance level of a device that requested the particular image data.

324. The computationally-implemented system of claim 217, wherein said circuitry for transmitting the received particular image data to at least one requestor comprises:

circuitry for transmitting a lower-resolution version of the received particular image data to the at least one requestor; and

circuitry for transmitting a full-resolution version of the received particular image data to the at least one requestor.

325. A computer program product, comprising:

a signal-bearing medium bearing:

one or more instructions for acquiring a request for particular image data that is part of a scene;

one or more instructions for transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data;

one or more instructions for receiving only the particular image data from the image sensor array; and

one or more instructions for transmitting the received particular image data to at least one requestor.

326. A system defined by a computational language, comprising:

one or more interchained physical machines ordered for acquiring a request for particular image data that is part of a scene;

one or more interchained physical machines ordered for transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data;

one or more interchained physical machines ordered for receiving only the particular image data from the image sensor array; and

one or more interchained physical machines ordered for transmitting the received particular image data to at least one requestor.

STRUCTURED DISCLOSURE DIRECTED TOWARD ONE OF SKILL IN THE ART [END]

This Roman Numeral Section, and the Corresponding Figures, Were Copied From a Pending United States Application into this PCT Application in View of Meeting Bar Date, So Each Figure Under this Roman Numeral Section Should Be Read "As If" Having Roman Numeral Denotation Sufficient to Distinguish from Other Figures Copied From Other Applications in View of Bar Date, Such Clerical Issues to Be Cured by Subsequent Amendment.

Devices, Methods and Systems for Visual Imaging Arrays

DETAILED DESCRIPTION

High-Level System Architecture

[00151] Fig. 1, including Figs. 1-A through 1-AN, shows partial views that, when assembled, form a complete view of an entire system, of which at least a portion will be described in more detail. An overview of the entire system of Fig. 1 is now described herein, with a more specific reference to at least one subsystem of Fig. 1 to be described later with respect to Figs. 2-14D.

[00152] Fig. 1 shows various implementations of the overall system. At a high level, Fig. 1 shows various implementations of a multiple user video imaging array (hereinafter interchangeably referred to as a "MUVIA"). It is noted that the designation "MUVIA" is merely shorthand and descriptive of an exemplary embodiment, and not a limiting term. Although "multiple user" appears in the name MUVIA, multiple users or even a single user are not required. Further, "video" is used in the designation "MUVIA," but MUVIA systems also may capture still images, multiple images, audio data, electromagnetic waves outside the visible spectrum, and other data as will be described herein. Further, "imaging array" may be used in the MUVIA designation, but the image sensor in MUVIA is not necessarily an array or even multiple sensors (although commonly implemented as larger groups of image sensors, single-sensor implementations are also contemplated), and "array" here does not necessarily imply any specific structure, but rather any grouping of one or more sensors.

[00153] Generally, although not necessarily required, a MUVIA system may include one or more of a user device (hereinafter interchangeably referred to as a "client device," in recognition that a user may not necessarily be a human, living, or organic), a server, and an image sensor array. A "server" in the context of this application may refer to any device, program, or module that is not directly connected to the image sensor array or to the client device, including any and all "cloud" storage, applications, and/or processing.

[00154] For example, in an embodiment, e.g., as shown in Fig. 1-A, Fig. 1-K, Fig. 1-U, Fig. 1-AE, and Fig. 1-AF, the system may include one or more of image sensor array 3200, array local storage and processing module 3300, server 4000, and user device 5200. Each of these portions will be discussed in more detail herein.

[00155] Referring now to Fig. 1-A, Fig. 1-A depicts user device 5200, which is a device that may be operated or controlled by a user of a MUVIA system. It is noted here that "user" is merely provided as a designation for ease of understanding, and does not imply control by a human or other organism, sentient or otherwise. In an embodiment, for example, in a security-type embodiment, the user device 5200 may be mostly or completely unmonitored, or may be monitored by an artificial intelligence, or by a combination of artificial intelligence, pseudo-artificial intelligence (e.g., intelligence amplification), and human intelligence.

[00156] User device 5200 may be, but is not limited to, a wearable device (e.g., glasses, goggles, headgear, a watch, clothing), an implant (e.g., a retinal-implant display), a computer of any kind (e.g., a laptop computer, desktop computer, mainframe, server, etc.), a tablet or other portable device, a phone or other similar device (e.g., smartphone, personal digital assistant), a personal electronic device (e.g., music player, CD player), a home appliance (e.g., a television, a refrigerator, or any other so-called "smart" device), a piece of office equipment (e.g., a copier, scanner, fax device, etc.), a camera or other camera-like device, a video game system, an entertainment/media center, or any other electrical equipment that has a functionality of presenting an image (whether visually, e.g., via a screen, or through other sensory-stimulating means).

[00157] User device 5200 may be capable of presenting an image, which, for purposes of clarity and conciseness, will be referred to as displaying an image, although the presentation may occur through forms of communication other than generating light waves in the visible light spectrum, and the image is not required to be presented at all times or even at all. For example, in an embodiment, user device 5200 may receive images from server 4000 (or directly from the image sensor array 3200, as will be discussed herein), and may store the images for later viewing, or for processing internally, or for any other reason.

[00158] Referring again to Fig. 1-A, in an embodiment, user device 5200 may include a user selection accepting module 5210. User selection accepting module 5210 may be configured to receive user input about what the user wants to see. For example, as shown in Fig. 1-A in the exemplary interface 5212, the user selection accepting module 5210 may show an image from image sensor array 3200, and the user may "pan" and "zoom" the image using any known interface, including, but not limited to, keyboard, mouse, touch, haptic, augmented reality interface, voice command, and nonverbal motion commands (e.g., as part of a video game system interface, e.g., the Microsoft Kinect). It is noted, and as will be discussed in more detail herein, that the camera itself is not "zooming" or "panning," because the camera does not move. What is happening is that different pixels that are captured by the image sensor array 3200 are kept by the image sensor array 3200 and transmitted to the server 4000.
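Purely as a non-limiting illustration of this pixel-selection idea, the following Python sketch (every name and dimension in it is a hypothetical assumption, not part of any described embodiment) maps a requested pan/zoom viewport onto a rectangle of already-captured sensor pixels:

    # Hypothetical sketch: a pan/zoom "view" is just a window of pixels
    # selected from the fixed, already-captured sensor grid; no camera moves.

    SCENE_W, SCENE_H = 36_000, 12_000  # assumed total array resolution (example only)

    def viewport_to_pixel_region(center_x, center_y, zoom, out_w, out_h):
        """Return the (left, top, right, bottom) pixel rectangle to keep.

        zoom > 1.0 narrows the window (zoomed in); zoom < 1.0 widens it.
        """
        win_w = int(out_w / zoom)
        win_h = int(out_h / zoom)
        left = max(0, min(SCENE_W - win_w, center_x - win_w // 2))
        top = max(0, min(SCENE_H - win_h, center_y - win_h // 2))
        return (left, top, left + win_w, top + win_h)

    # A user "pans" to (20_000, 6_000) and "zooms" 2x on a 1920x1080 screen:
    print(viewport_to_pixel_region(20_000, 6_000, 2.0, 1920, 1080))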

[00159] In an embodiment, the user selection accepting module may accept a selection of a particular thing, e.g., a building, an animal, or any other object whose representation is present on the screen. Moreover, a user may use a text box to "search" the image for a particular thing, and processing, done at the user device 5200 or at the server 4000, may determine the image and the zoom level for viewing that thing. The search for a particular thing may include a generic search, e.g., "search for people," or "search for penguins," or a more specific search, e.g., "search for the Space Needle," or "search for the White House." The search for a particular thing may take the form of any known contextual search, e.g., an address, a text string, or any other input.

[00160] In an embodiment, the "user selection" facilitated by the user selection accepting module 5210 may not involve a user at all. For example, in an embodiment, e.g., in a security embodiment, the user selection may be handled completely by machine, and may include "select any portion of the image with movement," or "select any portion of the image in which a person is recognized," or "select any portion of the image in which a particular person, e.g., a person on the FBI most wanted list, is recognized."

[00161] Referring again to Fig. 1-A, in an embodiment, user device 5200 may include a user selection transmitting module 5220. The user selection transmitting module 5220 may take the user selection from user selection accepting module 5210, and transmit the selection to the server 4000. The transmission may include some pre-processing, for example, the user device 5200 may determine the size and parameters of the image prior to sending the request to the server 4000, or that processing may be handled by the server 4000. Following the thick-line arrow leftward from user selection transmitting module 5220 through to Fig. 1-K, the transmission goes to server 4000, as will be discussed herein. It is noted that the transmission to the server 4000 may also include data about the user device, for example, the screen resolution, the window size, the type of device, an identity of the user, a level of service the user has paid for (in embodiments in which such services are prioritized by the camera/server), other capabilities of the device, e.g., framerate, and the like.
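As a non-limiting sketch only, a request of this kind might be serialized as follows; the field names and values are hypothetical assumptions chosen for illustration, not a defined wire format:

    # Hypothetical illustration of the kind of request a user device might
    # transmit to the server, bundling the selected region with device data
    # (screen resolution, device type, service level, framerate, and so on).
    import json

    request = {
        "region": {"left": 19520, "top": 5730, "right": 20480, "bottom": 6270},
        "device": {
            "type": "tablet",        # assumed device descriptor
            "screen_w": 1334,
            "screen_h": 750,
            "max_framerate": 30,
        },
        "user": {"id": "user-123", "service_level": "standard"},
    }

    payload = json.dumps(request)  # serialized for transmission to the server
    print(payload)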

[00162] Referring again to Fig. 1-A, Fig. 1-A also includes a selected image receiving module 5230 and a user selection presenting module 5240, which will be discussed in more detail herein, with respect to the dataflow of this embodiment.

[00163] Referring now to Fig. 1-K, Figs. 1-K and 1-U show an embodiment of a server 4000 that communicates with one or both of user device 5200 and array local storage and processing module 3300. Server 4000 may be a single computing device, or may be many computing devices, which may or may not be in proximity with each other.

[00164] Referring again to Fig. 1-K, server 4000 may include a user request reception module 4010. The user request reception module 4010 may receive the transmitted request from user selection transmitting module 5220. The user request reception module 4010 may then turn over processing to user request validation module 4020, which may perform, among other things, a check to make sure the user is not requesting more resolution than what their device can handle. For example, if the server has learned (e.g., through gathered information, or through information that was transmitted with the user request or in a same session as the user request) that the user is requesting a 1920x1080 resolution image, and the maximum resolution for the device is 1334x750, then the request will be modified so that no more than the maximum resolution that can be handled by the device is requested. In an embodiment, this may conserve the bandwidth required to transmit from the MUVIA to the server 4000 and/or the user device 5200.
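A minimal sketch of such a validation check, assuming a simple aspect-preserving scale-down (the function name and rounding behavior are illustrative assumptions, not the claimed validation logic):

    # Hypothetical sketch of the resolution check described above: if a
    # device asks for more pixels than its screen can display, the request
    # is scaled down so no more than the device maximum is fetched.

    def clamp_request(req_w, req_h, dev_w, dev_h):
        """Scale (req_w, req_h) down, preserving aspect, to fit (dev_w, dev_h)."""
        scale = min(dev_w / req_w, dev_h / req_h, 1.0)  # never scale up
        return int(req_w * scale), int(req_h * scale)

    # A 1920x1080 request from a 1334x750-pixel device is reduced:
    print(clamp_request(1920, 1080, 1334, 750))  # prints roughly (1333, 750)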

[00165] Referring again to Fig. 1-K, in an embodiment, server 4000 may include a user request latency management module 4030. User request latency management module 4030 may, in conjunction with user device 5200, attempt to reduce the latency from the time a specific image is requested by user device 5200 to the time the request is acted upon and data is transmitted to the user. The details for this latency management will be described in more detail herein, with varying techniques that may be carried out by any or all of the devices in the chain (e.g., user device, camera array, and server). As an example, in an embodiment, a lower-resolution version of the image, e.g., one that is stored locally or on the server, may be sent to the user immediately upon the request, and then that image is updated with the actual image taken by the camera. In an embodiment, user request latency management module 4030 also may handle static gap-filling, that is, if the image captured by the camera is unchanging, e.g., has not changed for a particular period of time, then a new image does not need to be captured, and an older image, which may be stored on server 4000, may be transmitted to the user device 5200. This process also will be discussed in more detail herein.
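One possible, purely illustrative way to sketch both behaviors, i.e., the immediate cached answer followed by a fresh frame, plus static gap-filling, is shown below; the cache layout, the staleness threshold, and all names are assumptions for illustration only:

    # Hypothetical sketch: answer a request immediately from a cached frame,
    # then follow up with freshly captured pixels; if the cached frame is
    # recent enough, treat the region as static and skip the re-capture.
    import time

    cache = {}  # region -> (timestamp, frame_bytes); contents assumed

    def serve_request(region, capture_fn, static_threshold_s=5.0):
        now = time.time()
        cached = cache.get(region)
        if cached is not None:
            ts, frame = cached
            yield frame  # 1) immediate, possibly stale or low-res answer
            if now - ts < static_threshold_s:
                return   # 2) region treated as static: no re-capture needed
        fresh = capture_fn(region)   # 3) fetch the actual pixels
        cache[region] = (now, fresh)
        yield fresh

    # First request captures; an immediate repeat is served from cache only:
    frames = list(serve_request((0, 0, 100, 100), lambda r: b"fresh-pixels"))
    frames += list(serve_request((0, 0, 100, 100), lambda r: b"fresh-pixels"))
    print(frames)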

[00166] Referring now to Fig. 1-U, which shows more of server 4000, in an embodiment, server 4000 may include a consolidated user request transmission module 4040, which may be configured to consolidate all the user requests, perform any necessary pre-processing on those requests, and send the request for particular sets of pixels to the array local storage and processing module 3300. The process for consolidating the user requests and performing pre-processing will be described in more detail herein with respect to some of the other exemplary embodiments. In this embodiment, however, the consolidated user request transmission module 4040 of server 4000 transmits the request (exiting leftward from Fig. 1-U and traveling downward to Fig. 1-AE) through a pathway identified in Fig. 1-AE as lower-bandwidth communication from remote server 3515. It is noted here that "lower-bandwidth communication" does not necessarily mean "low bandwidth" or imply any specific number for the bandwidth; it is simply lower than the relatively higher-bandwidth communication 3505 from the actual image sensor array to the array local storage and processing module 3300, which will be discussed in more detail herein.
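By way of illustration only, a crude consolidation step could reduce several requested rectangles to a single bounding region before transmission to the array; a real implementation might keep a list of regions instead (all identifiers here are hypothetical):

    # Hypothetical sketch of consolidating several users' pixel requests
    # into one region transmitted to the array. Rectangles are given as
    # (left, top, right, bottom).

    def consolidate(requests):
        """Bounding box covering every requested rectangle."""
        lefts, tops, rights, bottoms = zip(*requests)
        return (min(lefts), min(tops), max(rights), max(bottoms))

    user_requests = [(100, 50, 400, 250), (350, 200, 700, 500)]
    print(consolidate(user_requests))  # -> (100, 50, 700, 500)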

[00167] Referring again to Fig. 1-U, server 4000 also may include requested pixel reception module 4050, user request preparation module 4060, and user request transmission module 4070 (shown in Fig. 1-T), which will be discussed in more detail herein, with respect to the dataflow of this embodiment.

[00168] Referring now to Figs. 1-AE and 1-AF, Figs. 1-AE and 1-AF show an image sensor array ("ISA") 3200 and an array local storage and processing module 3300, each of which will now be described in more detail.

[00169] Image sensor array 3200 may include one or more image sensors that may, in an embodiment, be statically pointed at a particular object or scene. Image sensor array 3200 may be a single image sensor, or more commonly, may be a group of individual image sensors 3201 that are combined to create a larger field of view. For example, in an embodiment, ten megapixel sensors may be used for each individual image sensor 3201. With twelve of these sensors, the effective field of view, loss-less zoom, and so forth may be increased substantially. These numbers are for example only, and any number of sensors and/or megapixel image sensor capacities may be used.

[00170] The use of many individual sensors may create a very large number of pixels captured for each exposure of the image sensor array 3200. Thus, these pixels are transmitted via a higher bandwidth communication 3505 to the array local storage and processing module 3300. In an embodiment, array local storage and processing module 3300 is integrated into the image sensor array 3200. In another embodiment, the array local storage and processing module 3300 is separate from, but directly connected (e.g., via a USB 3.0 cable) to, the image sensor array 3200. It is noted that "higher bandwidth communication 3505" does not require a specific amount of bandwidth, but only that the bandwidth for this communication is relatively higher than the bandwidth communication from the array local processing module 3300 to the remote server, which may be, but is not required to be, located further away temporally.

[00171] It is noted that, because of the large number of pixels captured by image sensor array 3200, mechanical changes to the image sensor array are not generally required, although such mechanical changes are not excluded from these embodiments. For example, because the array has a very large field of view, with very high resolution, "pan" and "zoom" functions may be handled optically, rather than by mechanically changing the focal point of the lenses or by physically pointing the array at a different location. This may reduce the complexity required of the device, and also may improve the speed at which different views may be generated by the image sensor array 3200.

[00172] Referring again to Fig. 1-AE, the image sensor array 3200 may capture an image that is received by image capturing module 3305. Image capturing module 3305 may take the captured image and compare it to a consolidated user request, e.g., which is provided by a consolidated user request reception module 3310. Consolidated user request reception module 3310 may receive the communication from server 4000 regarding which pixels of the image have been requested. Through use of the consolidated user request and the captured image, pixel selection module 3320 may select the pixels that have been specifically requested by the user, and mark those pixels for transmission back to the server.

[00173] After the pixels to be kept are identified, the other pixels that are not to be kept are removed, e.g., decimated at unused pixel decimation module 3330. In an embodiment, these pixels are simply discarded, e.g., not stored in a long-term memory but instead removed to a digital trash 3317. In another embodiment, some or all of these pixels are stored in a local memory, e.g., local memory 3315. From here, these pixels may be transmitted to various locations at off-peak times, may be kept for image processing by the array local processing module 3300, or may be subject to other manipulations or processing separate from the user requests.
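A non-limiting sketch of this keep/decimate step, assuming NumPy is available and rectangles are given as (left, top, right, bottom) (all identifiers are hypothetical, and a real implementation would operate on actual sensor output rather than a zero-filled stand-in):

    # Hypothetical sketch of pixel selection and decimation: keep only the
    # rows/columns inside the consolidated request, and either discard the
    # rest or hold them in local memory for off-peak transmission.
    import numpy as np

    def select_pixels(frame, region, keep_rest=False):
        """frame: HxWx3 array; region: (left, top, right, bottom)."""
        left, top, right, bottom = region
        kept = frame[top:bottom, left:right].copy()  # pixels flagged for the server
        rest = None
        if keep_rest:
            rest = frame.copy()
            rest[top:bottom, left:right] = 0         # requested area zeroed out
        return kept, rest                            # rest is None when discarded

    frame = np.zeros((1080, 1920, 3), dtype=np.uint8)  # stand-in for a capture
    kept, rest = select_pixels(frame, (100, 50, 400, 250))
    print(kept.shape)  # (200, 300, 3)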

[00174] Referring again to Fig. 1-AE, the selected pixels then may be transmitted to the server 4000 using selected pixel transmission module 3340. Selected pixel transmission module 3340 may include any transmission equipment necessary, e.g., cellular radio, wireless adapter, and the like, depending on the format of communication. In an embodiment, only those pixels which have been requested are transmitted to the server via lower-bandwidth communication 3510. Similarly to lower-bandwidth communication 3515, the lower-bandwidth communication 3510 does not refer to a specific amount of bandwidth, just that the amount of bandwidth is relatively lower than higher-bandwidth communication 3505.

[00175] It is noted that more pixels than what are specifically requested by the user may be transmitted, in certain embodiments. For example, the array local processing module 3300 may send pixels that border the user's requested area, but are outside the user's requested area. In an embodiment, as will be discussed herein, those pixels may be sent at a different resolution or using a different kind of compression. In another embodiment, the additional pixels may merely be sent the same as the requested pixels. In still another embodiment, server 4000 may expand the user requested areas, so that array local processing module 3300 may send only the requested pixels, but the requested pixels cover more area than what the user originally requested. These additional pixels may be transmitted and "cached" by the server or local device, which may be used to decrease latency times, in a process that will be discussed more herein.
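Purely as an illustrative sketch of such expansion, a fixed margin might be added around each requested rectangle, clamped to the scene bounds (the margin size and every name here are assumptions, not part of any described embodiment):

    # Hypothetical sketch of the "expand and cache" idea: grow each requested
    # rectangle by a margin so pixels just outside the view arrive too and
    # can be cached to hide latency on small pans.

    def expand(region, margin, scene_w, scene_h):
        left, top, right, bottom = region
        return (max(0, left - margin), max(0, top - margin),
                min(scene_w, right + margin), min(scene_h, bottom + margin))

    print(expand((100, 50, 400, 250), margin=64, scene_w=1920, scene_h=1080))
    # -> (36, 0, 464, 314)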

[00176] Referring back to Fig. 1-U, the pixels transmitted from selected pixel transmission module 3340 of array local processing module 3300 may be received by server 4000, e.g., at requested pixel reception module 4050. Requested pixel reception module 4050 may receive the requested pixels and turn them over to user request preparation module 4060, which may "unpack" the requested pixels, e.g., determining which pixels go to which user, and at what resolutions, along with any post-processing, including image adjustment, adding in missing cached data, or adding additional data to the images (e.g., advertisements or other data). In an embodiment, server 4000 also may include a user request transmission module 4070, which may be configured to transmit the requested pixels back to the user device 5200.

[00177] Referring again to Fig. 1-A, user device 5200 may include a selected image receiving module 5230, which may receive the pixels that were sent by the server 4000, and user selection presenting module 5240, which may display the requested pixels to the user, e.g., by showing them on a screen of the device. In an embodiment, the display of the image may be carried out through the exemplary interface, which allows a cycle of user requests and new images to be shown as the user navigates through what is seen on the MUVIA, e.g., as shown in Fig. 1-A.

[00178] Figs. 1-B, 1-C, 1-M, 1-W, 1-AG, and 1-AH show another embodiment of the MUVIA system, in which multiple user devices 5510, 5520, and 5530 may request images captured by the same image sensor array 3200.

[00179] Referring now to Figs. 1-B and 1-C, user device 5510, user device 5520, and user device 5530 are shown. In an embodiment, user devices 5510, 5520, and 5530 may have some or all of the same components as user device 5200, but those components are not shown here for clarity and ease of understanding the drawing. For each of user devices 5510, 5520, and 5530, exemplary screen resolutions have been chosen. There is nothing specific about the numbers that have been chosen; they are merely illustrated for exemplary purposes, and any other numbers could have been chosen in their place.

[00180] For example, in an embodiment, referring to Fig. 1-B, user device 5510 may have a screen resolution of 1920x1080 (e.g., colloquially referred to as "HD quality"). User device 5510 may send an image request to the server 4000, and may also send data regarding the screen resolution of the device.

[00181] Referring now to Fig. 1-C, user device 5520 may have a screen resolution of 1334x750. User device 5520 may send another image request to the server 4000, and, in an embodiment, instead of sending data regarding the screen resolution of the device, may send data that identifies what kind of device it is (e.g., an Apple-branded smartphone). Server 4000 may use this data to determine the screen resolution for user device 5520 through an internal database, or through contacting an external source, e.g., a manufacturer of the device or a third party supplier of data about devices.
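A minimal, purely illustrative sketch of such a lookup, with a stubbed external source (the table contents, identifiers, and fallback value are hypothetical assumptions, not a real device database):

    # Hypothetical sketch of resolving a device's screen resolution when the
    # request carries only a device identifier: consult a local table first,
    # and fall back to an external data source (stubbed here) when unknown.

    KNOWN_DEVICES = {                      # assumed internal database
        "apple-smartphone-model-x": (1334, 750),
        "hd-television": (1920, 1080),
    }

    def lookup_resolution(device_id, external_lookup=None):
        if device_id in KNOWN_DEVICES:
            return KNOWN_DEVICES[device_id]
        if external_lookup is not None:
            return external_lookup(device_id)  # e.g., manufacturer or third party
        return (640, 480)                      # conservative default

    print(lookup_resolution("apple-smartphone-model-x"))  # -> (1334, 750)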

[00182] Referring again to Fig. 1-C, user device 5530 may have a screen resolution of 640x480, and may send the request by itself to the server 4000, without any additional data. In addition, server 4000 may receive independent requests from various users to change their current viewing area on the device.

[00183] Referring now to Fig. 1-M, server 4000 may include user request reception module 4110. User request reception module 4110 may receive requests from multiple user devices, e.g., user devices 5510, 5520, and 5530. Server 4000 also may include an independent user view change request reception module 4115, which, in an embodiment, may be a part of user request reception module 4110, and may be configured to receive requests from users that are already connected to the system, to change the view of what they are currently seeing.

[00184] Referring again to Fig. 1-M, server 4000 may include relevant pixel selection module 4120 configured to combine the user selections into a single area, as shown in Fig. 1-M. It is noted that, in an embodiment, the different user devices may request areas that overlap each other. In this case, there may be one or more overlapping areas, e.g., overlapping areas 4122. In an embodiment, the overlapping areas are only transmitted once, in order to save data/transmission costs and increase efficiency.
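As a deliberately crude, non-limiting illustration of why the overlap need only be sent once, the union of two overlapping requests contains fewer pixels than their sum (a practical system would use rectangle arithmetic rather than per-pixel sets; all names are hypothetical):

    # Hypothetical sketch: compute the union of requested rectangles (here,
    # crudely, as a set of pixel coordinates) so shared pixels are counted,
    # and therefore transmitted, only once.

    def union_pixels(requests):
        covered = set()
        for left, top, right, bottom in requests:
            for y in range(top, bottom):
                for x in range(left, right):
                    covered.add((x, y))
        return covered

    a, b = (0, 0, 4, 4), (2, 2, 6, 6)        # two overlapping 4x4 requests
    print(len(union_pixels([a, b])))          # 28, not 32: overlap counted once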

[00185] Referring now to Fig. 1-W, server 4000 may include selected pixel transmission to ISA module 4130. Module 4130 may take the relevant selected pixels, and transmit them to the array local processing module 3400 of image sensor array 3200. Selected pixel transmission to ISA module 4130 may include communication components, which may be shared with other transmission and/or reception modules.

[00186] Referring now to Fig. 1-AG, array local processing module 3400 may communicate with image sensor array 3200. Similarly to Figs. 1-AE and 1-AF, Figs. 1- AG and 1-AH show array local processing module 3400 and image sensor array 3200, respectively.

[00187] Image sensor array 3200 may include one or more image sensors that may, in an embodiment, be statically pointed at a particular object or scene. Image sensor array 3200 may be a single image sensor, or more commonly, may be a group of individual image sensors 3201 that are combined to create a larger field of view. For example, in an embodiment, ten megapixel sensors may be used for each individual image sensor 3201. With twelve of these sensors, the effective field of view, loss-less zoom, and so forth may be increased substantially. These numbers are for example only, and any number of sensors and/or megapixel image sensor capacities may be used.

[00188] The use of many individual sensors may create a very large number of pixels captured for each exposure of the image sensor array 3200. Thus, these pixels are transmitted via a higher bandwidth communication 3505 to the array local storage and processing module 3400. In an embodiment, array local storage and processing module 3400 is integrated into the image sensor array. In another embodiment, the array local storage and processing module 3400 is separate from, but directly connected (e.g., via a USB 3.0 cable) to, the image sensor array 3200. It is noted that "higher bandwidth communication 3505" does not require a specific amount of bandwidth, but only that the bandwidth for this communication is relatively higher than the bandwidth communication from the array local processing module 3400 to the remote server, which may be, but is not required to be, located further away temporally.

[00189] It is noted that, because of the large number of pixels captured by image sensor array 3200, mechanical changes to the image sensor array are not generally required, although such mechanical changes are not excluded from these embodiments. For example, because the array has a very large field of view, with very high resolution, "pan" and "zoom" functions may be handled optically, rather than by mechanically changing the focal point of the lenses or by physically pointing the array at a different location. This may reduce the complexity required of the device, and also may improve the speed at which different views may be generated by the image sensor array 3200.

[00190] Referring again to Fig. 1-AG, the image sensor array 3200 may capture an image and compare it to a consolidated user request, e.g., which is provided by a consolidated user request reception module 3410. Consolidated user request reception module 3410 may receive the communication from server 4000 regarding which pixels of the image have been requested. Through use of the consolidated user request and the captured image, pixel selection module 3420 may select the pixels that have been specifically requested by the user, and mark those pixels for transmission back to the server.

[00191] After the pixels to be kept are identified, the other pixels that are not to be kept are removed, e.g., decimated at unused pixel decimation module 3430. In an embodiment, these pixels are simply discarded, e.g., not stored in a long-term memory but instead removed to a digital trash 3417. In another embodiment, some or all of these pixels are stored in a local memory, e.g., local memory 3415. From here, these pixels may be transmitted to various locations at off-peak times, may be kept for image processing by the array local processing module 3400, or may be subject to other manipulations or processing separate from the user requests.

[00192] Referring again to Fig. 1-AG, array local processing module 3400 may include flagged selected pixel transmission module 3440, which takes the pixels identified as requested (e.g., "flagged") and transmits them back to the server 4000 for further processing. Similarly to as previously described, this transmission may utilize a lower-bandwidth channel, and module 3440 may include all necessary hardware to effect that lower-bandwidth transmission to server 4000.

[00193] Referring again to Fig. 1-W, the flagged selected pixel transmission module 3440 of array local processing module 3400 may transmit the flagged pixels to server 4000. Specifically, flagged selected pixel transmission module 3440 may transmit the pixels to flagged selected pixel reception from ISA module 4140 of server 4000, as shown in Fig. 1-W.

[00194] Referring again to Fig. 1-W, server 4000 also may include flagged selected pixel separation and duplication module 4150, which may, effectively, reverse the process of combining the pixels from the various selections, duplicating overlapping areas where necessary, and creating the requested images for each of the user devices that requested images. Flagged selected pixel separation and duplication module 4150 also may include the post-processing done to the image, including filling in cached versions of images, image adjustments based on the device preferences and/or the user preferences, and any other image post-processing.
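A non-limiting sketch of this separation step, assuming NumPy and (left, top, right, bottom) rectangles (all identifiers are hypothetical); overlapping pixels are simply copied into every view that covers them:

    # Hypothetical sketch of reversing the consolidation: each user's view
    # is cropped back out of the single transmitted region, duplicating any
    # overlapping pixels into every image that needs them.
    import numpy as np

    def separate(combined, combined_region, user_regions):
        cl, ct, _, _ = combined_region   # origin of the combined block
        images = []
        for left, top, right, bottom in user_regions:
            # offsets of the user's rectangle within the combined block
            images.append(combined[top - ct:bottom - ct, left - cl:right - cl].copy())
        return images

    combined = np.zeros((450, 600, 3), dtype=np.uint8)   # region (100,50)-(700,500)
    views = separate(combined, (100, 50, 700, 500),
                     [(100, 50, 400, 250), (350, 200, 700, 500)])
    print([v.shape for v in views])  # [(200, 300, 3), (300, 350, 3)]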

[00195] Referring now to Fig. 1-M (as data flows "northward" from Fig. 1-W from module 4150), server 4000 may include pixel transmission to user device module 4160, which may be configured to transmit the pixels that have been separated out and processed to the specific users that requested the image. Pixel transmission to user device module 4160 may handle the transmission of images to the user devices 5510, 5520, and 5530. In an embodiment, pixel transmission to user device module 4160 may have some or all components in common with user request reception module 4110.

[00196] Following the arrow of data flow to the right and upward from module 4160 of server 4000, the requested user images arrive at user device 5510, user device 5520, and user device 5530, as shown in Figs. 1-B and 1-C. The user devices 5510, 5520, and 5530 may present the received images as previously discussed and/or as further discussed herein.

[00197] Referring again to Fig. 1, Figs. 1-E, 1-O, 1-Y, 1-AH, and 1-AI depict a MUVIA implementation according to an embodiment. In an embodiment, referring now to Fig. 1-E, a user device 5600 may include a target selection reception module 5610. Target selection reception module 5610 may be a component that allows the user to select a "target" from the image, that is, a point of interest from the image. For example, in the shown example, the MUVIA array is pointed at a football stadium, e.g., CenturyLink Field. As an example, a user may select one of the football players visible on the field as a "target." This may be facilitated by a target presentation module, e.g., target presentation module 5612, which may present one or more images (e.g., which may be various versions of images from MUVIA, at different resolutions or not up-to-date) from which the user may select the target, e.g., the football player.

[00198] In an embodiment, target selection reception module 5610 may include an audible target selection module 5614 which may be configured to allow the user to select a target using audible commands, without requiring physical interaction with a device.

[00199] Referring again to Fig. 1, e.g., Fig. 1-E, in an embodiment, user device 5600 may include selected target transmission module 5620. Selected target transmission module 5620 may be configured to take the target selected by the user, and transmit the selected target to the server 4000.

[00200] Referring now to Fig. 1-O, Fig. 1-O (and Fig. 1-Y to the direct "south" of Fig. 1-O) shows an embodiment of server 4000. For example, in an embodiment, server 4000 may include a selected target reception module 4210. In an embodiment, selected target reception module 4210 of server 4000 may receive the selected target from the user device 5600. The selected target data may take various formats, e.g., it may be image data, it may be metadata that identifies the selected target, it may be some other designation, e.g., an ID number, a tracking number, or a piece of information, like a license plate or a social security number. The selected target data may be an address or a physical description, or any other instantiation of data that can be used to identify something.

[00201] Referring again to Fig. l-O, in an embodiment, server 4000 may include selected target identification module 4220, which may be configured to take the target data received by selected target reception module 4210 and determine an image that needs to be captured in order to obtain an image that contains the selected target (e.g., in the shown example, the football player). In an embodiment, selected target identification module 4220 may use images previously received (or, in an embodiment, current images) from the image sensor array 3200 to determine the parameters of an image that contains the selected target. For example, in an embodiment, lower-resolution images from the image sensor array 3200 may be transmitted to server 4000 for determining where the target is located within the image, and then specific requests for portions of the image may be transmitted to the image sensor array 3200, as will be discussed herein.

[00202] In an embodiment, server 4000 may perform processing on the selected target data, and/or on image data that is received, in order to create a request that is to be transmitted to the image sensor array 3200. For example, in the given example, the selected target data is a football player. The server 4000, that is, selected target identification module 4220 may perform image recognition on one or more images captured from the image sensor array to determine a "sector" of the entire scene that contains the selected target. In another embodiment, the selected target identification module 4220 may use other, external sources of data to determine where the target is. In yet another embodiment, the selected target data was selected by the user from the scene displayed by the image sensor array, so such processing may not be necessary.

[00203] Referring again to Fig. 1-O, in an embodiment, server 4000 may include pixel information selection module 4230, which may select the pixels needed to capture the target, and which may determine the size of the image that should be transmitted from the image sensor array. The size of the image may be determined based on a type of target that is selected, one or more parameters (set by the user, by the device, or by the server, which may or may not be based on the selected target), by the screen resolution of the device, or by any other algorithm. Pixel information selection module 4230 may determine the pixels to be captured in order to express the target, and may update based on changes in the target's status (e.g., if the target is moving, e.g., in the football example, once a play has started and the football player is moving in a certain direction).
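Purely as an illustrative sketch of such target-driven selection (all coordinates, the padding factor, and every name are hypothetical assumptions), a capture window can be sized from the requesting screen and re-centered as the tracked target moves:

    # Hypothetical sketch: given a tracked target position (e.g., a football
    # player located by image recognition), choose a capture window sized for
    # the requesting screen and re-center it as the target moves.

    def window_for_target(target_xy, screen, scene, pad=1.5):
        """target_xy: (x, y) scene coordinates; screen/scene: (w, h)."""
        win_w = int(screen[0] * pad)   # padding keeps the target in frame
        win_h = int(screen[1] * pad)   # briefly even while it moves
        x, y = target_xy
        left = max(0, min(scene[0] - win_w, x - win_w // 2))
        top = max(0, min(scene[1] - win_h, y - win_h // 2))
        return (left, top, left + win_w, top + win_h)

    track = [(9_000, 4_000), (9_200, 4_050)]   # assumed per-frame positions
    for pos in track:
        print(window_for_target(pos, screen=(1920, 1080), scene=(36_000, 12_000)))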

[00204] Referring now to Fig. 1-Y, Fig. 1-Y includes more of server 4000 according to an embodiment. In an embodiment, server 4000 may include pixel information transmission to ISA module 4240. Pixel information transmission to ISA module 4240 may transmit the selected pixels to the array local processing module 3500 associated with image sensor array 3200.

[00205] Referring now to Figs. 1-AH and 1-AI, Fig. 1-AH depicts an image sensor array 3200, which in this example is pointed at a football stadium, e.g., CenturyLink Field. Image sensor array 3200 may include one or more image sensors that may, in an embodiment, be statically pointed at a particular object or scene. Image sensor array 3200 may be a single image sensor, or more commonly, may be a group of individual image sensors 3201 that are combined to create a larger field of view. For example, in an embodiment, ten megapixel sensors may be used for each individual image sensor 3201. With twelve of these sensors, the effective field of view, loss-less zoom, and so forth may be increased substantially. These numbers are for example only, and any number of sensors and/or megapixel image sensor capacities may be used.

[00206] The use of many individual sensors may create a very large number of pixels captured for each exposure of the image sensor array 3200. Thus, these pixels are transmitted via a higher bandwidth communication 3505 to the array local storage and processing module 3500. In an embodiment, array local storage and processing module 3500 is integrated into the image sensor array. In another embodiment, the array local storage and processing module 3500 is separate from, but directly connected (e.g., via a USB 3.0 cable) to, the image sensor array 3200. It is noted that "higher bandwidth communication 3505" does not require a specific amount of bandwidth, but only that the bandwidth for this communication is relatively higher than the bandwidth of the communication from the array local processing module 3500 to the remote server, which may be, but is not required to be, located further away physically.

[00207] It is noted that, because of the large number of pixels captured by image sensor array 3200, mechanical changes to the image sensor array are not generally required, although such mechanical changes are not excluded from these embodiments. For example, because the array has a very large field of view, with very high resolution, "pan" and "zoom" functions may be handled electronically, by selecting different pixels from the captured image, rather than by mechanically changing the focal point of the lenses or by physically pointing the array at a different location. This may reduce the complexity required of the device, and also may improve the speed at which different views may be generated by the image sensor array 3200.

[00208] Referring again to Fig. 1-AE, the image sensor array 3200 may capture an image that is received by image capturing module 3305. Image capturing module 3305 may take the captured image and compare it to a consolidated user request, e.g., which is provided by a consolidated user request reception module 3510. Consolidated user request reception module 3510 may receive the communication from server 4000 regarding which pixels of the image have been requested. Through use of the consolidated user request and the captured image, pixel selection module 3320 may select the pixels that have been specifically requested by the user, and mark those pixels for transmission back to the server.

[00209] After the pixels to be kept are identified, the other pixels that are not to be kept are removed, e.g., decimated at unused pixel decimation module 3330. In an embodiment, these pixels are simply discarded, e.g., not stored in long-term memory, but rather routed to a digital trash 3317. In another embodiment, some or all of these pixels are stored in a local memory, e.g., local memory 3315. From there, these pixels may be transmitted to various locations at off-peak times, may be kept for image processing by the array local processing module 3500, or may be subject to other manipulations or processing separate from the user requests. In an embodiment, unused pixel decimation module 3330 may include or communicate with a lower resolution module 3314, which may, in some embodiments, be used to transmit a lower-resolution version of more of the image (e.g., an entire scene, or more of the field of view surrounding the target) to the server 4000, so that the server 4000 may accurately determine which images are required to capture the target selected by the user.
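
For exemplary purposes only, a minimal sketch, in Python, of the selection-and-decimation step described above follows; the array shapes, the request format, and the decision to keep a coarse copy are hypothetical and assume the captured frame is available as a NumPy array.

import numpy as np

def select_and_decimate(frame, request, keep_low_res=True, factor=16):
    """Keep only the requested window of one exposure; optionally retain a
    coarse low-resolution copy of the full frame (e.g., for server-side
    targeting), and drop the rest (the "digital trash")."""
    x0, y0, x1, y1 = request
    selected = frame[y0:y1, x0:x1].copy()   # pixels marked for transmission
    low_res = frame[::factor, ::factor].copy() if keep_low_res else None
    # Remaining full-resolution pixels are not retained here; an
    # implementation could instead spool them to local memory for
    # off-peak transmission.
    return selected, low_res

frame = np.zeros((2000, 3000, 3), dtype=np.uint8)   # one exposure of the array
selected, low_res = select_and_decimate(frame, (500, 300, 1460, 840))
print(selected.shape, low_res.shape)   # (540, 960, 3) (125, 188, 3)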

[00210] Referring again to Fig. 1-AE, the selected pixels then may be transmitted to the server 4000 using selected pixel transmission module 3340. Selected pixel transmission module 3340 may include any transmission equipment necessary, e.g., cellular radio, wireless adapter, and the like, depending on the format of communication. In an embodiment, only those pixels which have been requested are transmitted to the server via lower-bandwidth communication 3510. As with lower-bandwidth communication 3515, lower-bandwidth communication 3510 does not refer to a specific amount of bandwidth; it indicates only that the amount of bandwidth is relatively lower than that of higher-bandwidth communication 3505.

[00211] Referring now again to Fig. 1-Y, server 4000 may include a requested image reception from ISA module 4250. Requested image reception from ISA module 4250 may receive the image data from the array local processing module 3500 (e.g., in the arrow coming "north" from Fig. 1-AI). That image, as depicted in Fig. 1-Y, may include the target (e.g., the football player), as well as some surrounding area (e.g., the area of the field around the football player). The "surrounding area" and the specifics of what is included/transmitted from the array local processing module may be specified by the user (directly or indirectly, e.g., through a set of preferences), or may be determined by the server, e.g., in the pixel information selection module 4230 (shown in Fig. 1-O).

[00212] Referring again to Fig. 1-Y, server 4000 may also include a requested image transmission to user device module 4260. Requested image transmission to user device module 4260 may transmit the requested image to the user device 5600. Requested image transmission to user device module 4260 may include components necessary to communicate with user device 5600 and may, in some embodiments, share components with one or more other modules of server 4000, e.g., a network interface, or a wireless antenna.

[00213] Referring again to Fig. 1-Y, server 4000 may include a server cached image updating module 4270. Server cached image updating module 4270 may take the images received from the array local processing module 3500 (e.g., which may include the image to be sent to the user), and compare the received images with stored or "cached" images on the server, in order to determine if the cached images should be updated. This process may happen frequently or infrequently, depending on the embodiment, and may be continuously ongoing as long as there is a data connection, in some embodiments. In some embodiments, the frequency of the process may depend on the available bandwidth to the array local processing module 3500; that is, at off-peak times the frequency may be increased. In an embodiment, server cached image updating module 4270 compares an image received from the array local processing module 3500 with the cached version and, if the image has changed, replaces the cached version of the image with the newer image.
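
For exemplary purposes only, the following Python sketch illustrates one way such a cache-update decision, and a bandwidth-dependent update interval, might be expressed; the digest-based comparison and the numeric policy are hypothetical.

import hashlib

cache = {}  # region id -> (digest, image bytes)

def maybe_update_cache(region_id, image_bytes):
    """Replace the cached image only if the newly received image differs."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    cached = cache.get(region_id)
    if cached is None or cached[0] != digest:
        cache[region_id] = (digest, image_bytes)
        return True    # cache refreshed with the newer image
    return False       # image unchanged; keep the cached version

def update_interval_seconds(available_mbps, peak_mbps=100.0):
    """At off-peak times (more spare bandwidth), refresh more often."""
    spare = max(0.0, min(1.0, available_mbps / peak_mbps))
    return 60.0 - 55.0 * spare   # from 60 s (saturated) down to 5 s (idle)

print(maybe_update_cache("sector-7", b"pixels-v1"))   # True: first sighting
print(maybe_update_cache("sector-7", b"pixels-v1"))   # False: unchanged
print(update_interval_seconds(80.0))                  # 16.0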

[00214] Referring now again to Fig. 1-E, Fig. 1-E shows user device 5600. In an embodiment, user device 5600 includes image containing selected target receiving module 5630 that may be configured to receive the image from server 4000, e.g., from requested image transmission to user device module 4260 of server 4000 (e.g., depicted in Fig. 1-Y, with the data transmission indicated by a rightward-upward arrow passing through Fig. 1-Y and Fig. 1-O (to the north) before arriving at Fig. 1-E).

[00215] Referring again to Fig. 1-E, Fig. 1-E shows received image presentation module 5640, which may display the requested pixels that include the selected target to the user, e.g., by showing them on a screen of the device. In an embodiment, the display of the image may be carried out through an exemplary interface that allows the user to monitor the target, and which also may display information about the target (e.g., in an embodiment, as shown in the figures, the game statistics for the football player also may be shown), which allows a cycle of user requests and new images to be shown as the user navigates through what is seen on the MUVIA, e.g., as shown in Fig. 1-A.

[00216] Referring again to Fig. 1, Figs. 1-F, 1-P, 1-Z, and 1-AJ depict a MUVIA implementation according to an embodiment. This embodiment may be colloquially known as "live street view," in which one or more MUVIA systems allow a user to move through an area similarly to the well-known Google-branded Maps (or Google Street View) products, except with the cameras working in real time. For example, in an embodiment, referring now to Fig. 1-F, a user device 5700 may include a target selection reception module 5710. Target selection reception module 5710 may be a component that allows the user to select a "target" from the image, that is, a point of interest from the image. For example, in the shown example, the MUVIA may be focused on a city, and the target may be an address, a building, a car, or a person in the city. As an example, a user may select a street address as a "target." This may be facilitated by a target presentation module, e.g., image selection presentation module 5712, which may present one or more images (e.g., which may be various versions of images from MUVIA, at different resolutions or not up-to-date) from which the user may select the target, e.g., the street address. In an embodiment, image selection presentation module 5712 may use static images that may or may not be sourced by the MUVIA system, and, in another embodiment, image selection presentation module 5712 may use current or cached views from the MUVIA system.

[00217] In an embodiment, image selection presentation module 5712 may include an audible target selection module 5714 which may be configured to allow the user to select a target using audible commands, without requiring physical interaction with a device.

[00218] Referring again to Fig. 1, e.g., Fig. 1-F, in an embodiment, user device 5700 may include selected target transmission module 5720. Selected target transmission module 5720 may be configured to take the target selected by the user, and transmit the selected target to the server 4000.

[00219] Referring now to Fig. 1-P, Fig. 1-P depicts a server 4000 of the MUVIA system according to embodiments. In an embodiment, server 4000 may include a selected target reception module 4310. Selected target reception module 4310 may receive the selected target from the user device 5700. In an embodiment, server 4000 may provide all or most of the data that facilitates the selection of the target, that is, the images and the interface, which may be provided, e.g., through a web portal.

[00220] Referring again to Fig. 1-P, in an embodiment, server 4000 may include a selected image pre-processing module 4320. Selected image pre-processing module 4320 may perform one or more tasks of pre-processing the image, some of which are described herein for exemplary purposes. For example, in an embodiment, selected image pre-processing module 4320 may include a resolution determination module 4322 which may be configured to determine the resolution for the image in order to show the target (and here, resolution is merely a stand-in for any facet of the image, e.g., color depth, size, shadow, pixelation, filter, etc.). In an embodiment, selected image pre-processing module 4320 may include a cached pixel fill-in module 4324. Cached pixel fill-in module 4324 may be configured to manage which portions of the requested image are recovered from a cache, and which are updated, in order to improve performance. For example, if a view of a street is requested, certain features of the street (e.g., buildings, trees, etc.) may not need to be retrieved each time, but can be filled in with a cached version, or, in another embodiment, can be filled in by an earlier version. A check can be done to see if a red parked car is still in the same spot as it was an hour ago; if so, that part of the image may not need to be updated. Using lower resolution/prior images stored in a memory 4215, as well as other image processing techniques, cached pixel fill-in module 4324 determines which portions of the image do not need to be retrieved, thus reducing the bandwidth load on the connection between the array local processing module 3600 and the server 4000.
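
For exemplary purposes only, the following Python sketch shows a tile-level version of the fill-in decision (e.g., the parked-car check described above); the tile size, the threshold, and the use of coarse grayscale images are hypothetical.

import numpy as np

def tiles_to_fetch(low_res_now, low_res_cached, tile=8, threshold=12.0):
    """Compare coarse current and cached views tile by tile; request
    full-resolution pixels only for tiles that appear to have changed."""
    h, w = low_res_now.shape[:2]
    fetch = []
    for ty in range(0, h - tile + 1, tile):
        for tx in range(0, w - tile + 1, tile):
            a = low_res_now[ty:ty + tile, tx:tx + tile].astype(np.float32)
            b = low_res_cached[ty:ty + tile, tx:tx + tile].astype(np.float32)
            if np.mean(np.abs(a - b)) > threshold:
                fetch.append((tx, ty))   # e.g., the parked car moved
            # otherwise: fill this tile in from the cached version
    return fetch

now = np.zeros((64, 64), dtype=np.uint8)
cached = now.copy()
now[8:16, 8:16] = 200                    # one tile's worth of change
print(tiles_to_fetch(now, cached))       # [(8, 8)]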

[00221] Referring again to Fig. 1-P, in an embodiment, selected image pre-processing module 4320 of server 4000 may include a static object obtaining module 4326, which may operate similarly to cached pixel fill-in module 4324. For example, as in the example shown in Fig. 1-B, static object obtaining module 4326 may obtain prior versions of static objects, e.g., buildings, trees, fixtures, landmarks, etc., which may save bandwidth load on the connection between the array local processing module 3600 and the server 4000.

[00222] Referring again to Fig. 1-P, in an embodiment, pixel information transmission to ISA module 4330 may transmit the request for pixels (e.g., an image, after the pre-processing) to the array local processing module 3600 (e.g., as shown in Figs. 1-Z and 1-AJ, with the downward extending dataflow arrow).

[00223] Referring now to Figs. 1-Z and 1-AJ, in an embodiment, an array local processing module 3600, which may be connected by a higher bandwidth connection to an image sensor array 3200, may be present.

[00224] Image sensor array 3200 may include one or more image sensors that may, in an embodiment, be statically pointed at a particular object or scene. Image sensor array 3200 may be a single image sensor, or more commonly, may be a group of individual image sensors 3201 that are combined to create a larger field of view. For example, in an embodiment, ten-megapixel sensors may be used for each individual image sensor 3201. With twelve of these sensors, the effective field of view, loss-less zoom, and so forth may be increased substantially. These numbers are for example only, and any number of sensors and/or megapixel image sensor capacities may be used.

[00225] The use of many individual sensors may create a very large number of pixels captured for each exposure of the image sensor array 3200. Thus, these pixels are transmitted via a higher bandwidth communication 3605 to the array local storage and processing module 3600. In an embodiment, array local storage and processing module 3600 is integrated into the image sensor array. In another embodiment, the array local storage and processing module 3600 is separate from, but directly connected (e.g., via a USB 3.0 cable) to, the image sensor array 3200. It is noted that "higher bandwidth communication 3605" does not require a specific amount of bandwidth, but only that the bandwidth for this communication is relatively higher than the bandwidth of the communication from the array local processing module 3600 to the remote server, which may be, but is not required to be, located further away physically.

[00226] It is noted that, because of the large number of pixels captured by image sensor array 3200, mechanical changes to the image sensor array are not generally required, although such mechanical changes are not excluded from these embodiments. For example, because the array has a very large field of view, with very high resolution, "pan" and "zoom" functions may be handled electronically, by selecting different pixels from the captured image, rather than by mechanically changing the focal point of the lenses or by physically pointing the array at a different location. This may reduce the complexity required of the device, and also may improve the speed at which different views may be generated by the image sensor array 3200.

[00227] Referring again to Fig. 1-AJ and Fig. 1-Z, the image sensor array 3200 may capture an image that is received by image capturing module 3605. Image capturing module 3605 may take the captured image and compare it to a consolidated user request, e.g., which is provided by a consolidated user request reception module 3610. Consolidated user request reception module 3610 may receive the communication from server 4000 regarding which pixels of the image have been requested. Through use of the consolidated user request and the captured image, pixel selection module 3620 may select the pixels that have been specifically requested by the user, and mark those pixels for transmission back to the server.

[00228] After the pixels to be kept are identified, the other pixels that are not to be kept are removed, e.g., decimated at unused pixel decimation module 3630. In an embodiment, these pixels are simply discarded, e.g., not stored in long-term memory, but rather routed to a digital trash 3617. In another embodiment, some or all of these pixels are stored in a local memory, e.g., local memory 3615. From there, these pixels may be transmitted to various locations at off-peak times, may be kept for image processing by the array local processing module 3600, or may be subject to other manipulations or processing separate from the user requests. In an embodiment, unused pixel decimation module 3630 may include or communicate with a lower resolution module 3614, which may, in some embodiments, be used to transmit a lower-resolution version of more of the image (e.g., an entire scene, or more of the field of view surrounding the target) to the server 4000, so that the server 4000 may accurately determine which images are required to capture the target selected by the user.

[00229] Referring again to Fig. 1-AJ, the selected pixels then may be transmitted to the server 4000 using selected pixel transmission module 3640. Selected pixel transmission module 3640 may include any transmission equipment necessary, e.g., cellular radio, wireless adapter, and the like, depending on the format of communication. In an embodiment, only those pixels which have been requested are transmitted to the server via lower-bandwidth communication 3510. As with lower-bandwidth communication 3515, lower-bandwidth communication 3510 does not refer to a specific amount of bandwidth; it indicates only that the amount of bandwidth is relatively lower than that of higher-bandwidth communication 3505.

[00230] It is noted that more pixels than what are specifically requested by the user may be transmitted, in certain embodiments. For example, the array local processing module 3600 may send pixels that border the user's requested area, but are outside the user's requested area. In an embodiment, as will be discussed herein, those pixels may be sent at a different resolution or using a different kind of compression. In another embodiment, the additional pixels may merely be sent in the same manner as the requested pixels. In still another embodiment, server 4000 may expand the user requested areas, so that array local processing module 3600 may send only the requested pixels, but the requested pixels cover more area than what the user originally requested. These additional pixels may be transmitted and "cached" by the server or local device, which may be used to decrease latency times, in a process that will be discussed more herein.
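
For exemplary purposes only, a minimal Python sketch of the server-side expansion of a requested area is given below; the padding fraction is a hypothetical policy choice.

def expand_request(region, scene_res, pad_fraction=0.25):
    """Grow a requested (x0, y0, x1, y1) region by a fraction on each side,
    clamped to the scene; the extra border may be cached, not displayed."""
    x0, y0, x1, y1 = region
    pad_x = int((x1 - x0) * pad_fraction)
    pad_y = int((y1 - y0) * pad_fraction)
    return (max(0, x0 - pad_x), max(0, y0 - pad_y),
            min(scene_res[0], x1 + pad_x), min(scene_res[1], y1 + pad_y))

# A 1920x1080 request grows to 2880x1620; the border is cached so that a
# small pan can be served without a round trip to the sensor array.
print(expand_request((5000, 2000, 6920, 3080), (40000, 8000)))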

[00231] Referring now again to Fig. 1-P, in an embodiment, server 4000 may include image receiving from ISA module 4340. Image receiving from ISA module 4340 may receive the image data from the array local processing module 3600 (e.g., in the arrow coming "north" from Fig. 1-AJ via Fig. 1-Z). The image may include the pixels that were requested from the image sensor array 3200. In an embodiment, server 4000 also may include received image post-processing module 4350, which may, among other post-processing tasks, fill in objects and pixels into the image that were determined not to be needed by selected image pre-processing module 4320, as previously described. In an embodiment, server 4000 may include received image transmission to user device module 4360, which may be configured to transmit the requested image to the user device 5700. Received image transmission to user device module 4360 may include components necessary to communicate with user device 5700 and may, in some embodiments, share components with one or more other modules of server 4000, e.g., a network interface, or a wireless antenna.

[00232] Referring now again to Fig. 1-F, user device 5700 may include a server image reception module 5730. Server image reception module 5730 may receive an image sent by the server 4000. User device 5700 also may include user selection presenting module 5240, which may display the requested pixels to the user, e.g., by showing them on a screen of the device. In an embodiment, the display of the image may be carried out through the exemplary interface, which allows a cycle of user requests and new images to be shown as the user navigates through what is seen on the MUVIA, e.g., as shown in Fig. 1-F.

[00233] In an embodiment, as shown in Figs. 1-F and 1-G, server image reception module 5730 may include an audio stream reception module 5732 and a video stream reception module 5734. In an embodiment, as discussed throughout this application, the MUVIA system may capture still images, video, and also sound, as well as other electromagnetic waves and other signals and data. In an embodiment, the audio signals and the video signals may be handled together, or they may be handled separately, as separate streams. Although not every module in the instant diagram separately shows audio streams and video streams, it is noted here that all implementations of MUVIA contemplate both audio and video coverage, as well as still image and other data collection.

[00234] Referring now to Fig. 1-G, which shows another portion of user device 5700, in an embodiment, user device 5700 may include a display 5755 and a memory 5765, which may be used to facilitate presentation and/or storage of the received images.

[00235] Figs. 1-H, 1-R, 1-AA, and 1-AB show an embodiment of a MUVIA implementation. For example, referring now to Fig. 1-H, Fig. 1-H shows an embodiment of a user device 5800. For exemplary purposes, the user device 5800 may be an augmented reality device that shows a user looking down a "street" at which the user is not actually present, e.g., a "virtual tourism" experience in which the user may use their augmented reality device (e.g., goggles, e.g., an Oculus Rift-type headgear device), which may be a wearable computer. It is noted that this embodiment is not limited to wearable computers or augmented reality, but, as in all of the embodiments described in this disclosure, may be any device. The use of a wearable augmented/virtual reality device is merely for illustrative and exemplary purposes.

[00236] In an embodiment, user device 5800 may have a field of view 5810, as shown in Fig. 1-H. The field of view for the user 5810 may be illustrated in Fig. 1-H as follows. The most internal rectangle, shown by the dot hatching, represents the user's "field of view" as they look at their "virtual world." The second most internal rectangle, with the straight line hatching, represents the "nearest" objects to the user, that is, a range where the user is likely to "look" next, by turning their head or moving their eyes. In an embodiment, this area of the image may already be loaded on the device, e.g., through use of a particular codec, which will be discussed in more detail herein. The outermost rectangle, which is the image without hatching, represents further outside the user's viewpoint. This area, too, may already be loaded on the device. By loading areas where the user may eventually look, the system can reduce latency and make a user's motions, e.g., movement of head, eyes, and body, appear "natural" to the system.

[00237] Referring now to Figs. 1-AA and 1-AB, these figures show an array local processing module 3700 that is connected to an image sensor array 3200 (e.g., as shown in Fig. 1-AK, and "viewing" a city as shown in Fig. 1-AJ). The image sensor array 3200 may operate as previously described in this document. In an embodiment, array local processing module 3700 may include a captured image receiving module 3710, which may receive the entire scene captured by the image sensor array 3200, through the higher-bandwidth communication channel 3505. As described previously in this application, these pixels may be "cropped" or "decimated" into the relevant portion of the captured image, as described by one or more of the user device 5800, the server 4000, and the processing done at the array local processing module 3700. This process may occur as previously described. The relevant pixels may be handled by relevant portion of captured image receiving module 3720.

[00238] Referring now to Fig. 1-AB, in an embodiment, the relevant pixels for the image that are processed by relevant portion of captured image receiving module 3720 may be encoded using a particular codec at relevant portion encoding module 3730. In an embodiment, the codec may be configured to encode the innermost rectangle, e.g., the portion that represents the current user's field of view, e.g., portion 3716, at a higher resolution, or a different compression, or a combination of both. The codec may be further configured to encode the second rectangle, e.g., with the vertical line hatching, e.g., portion 3714, at a different resolution and/or a different (e.g., a higher) compression. Similarly, the outermost portion of the image, e.g., the clear portion 3712, may again be coded at still another resolution and/or a different compression. In an embodiment, the codec itself handles the algorithm for encoding the image, and as such, in an embodiment, the codec may include information about user device 5800.
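
For exemplary purposes only, the following Python sketch approximates the tiered treatment of portions 3716, 3714, and 3712 by downsampling instead of by varying compression; the nesting of the rectangles and the scale factors are hypothetical.

import numpy as np

def encode_tiers(image, fov, near):
    """fov and near are nested (x0, y0, x1, y1) rectangles within image."""
    def crop(r):
        x0, y0, x1, y1 = r
        return image[y0:y1, x0:x1]
    return {
        "fov":   crop(fov),              # cf. portion 3716: full quality
        "near":  crop(near)[::2, ::2],   # cf. portion 3714: 1/2 scale
        "outer": image[::4, ::4],        # cf. portion 3712: 1/4 scale
    }

image = np.zeros((1200, 1600), dtype=np.uint8)
tiers = encode_tiers(image, fov=(600, 450, 1000, 750), near=(400, 300, 1200, 900))
print({k: v.shape for k, v in tiers.items()})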

[00239] As shown in Fig. 1-AB, the encoded portion of the image, including portions 3716, 3714, and 3712, may be transmitted using encoded relevant portion transmitting module 3740. It is noted that "lower compression," "more compression," and "higher compression" are merely used as one example of the kind of processing done by the codec. For example, instead of lower compression, a different sampling algorithm or compacting algorithm may be used, or a lossier algorithm may be implemented for various parts of the encoded relevant portion.

[00240] Referring now to Fig. 1-R, Fig. 1-R depicts a server 4000 in a MUVIA system according to an embodiment. For example, as shown in Fig. 1-R, server 4000 may include, in addition to portions previously described, an encoded image receiving module 4410. Encoded image receiving module 4410 may receive the encoded image, encoded as previously described, from encoded relevant portion transmitting module 3740 of array local processing module 3700.

[00241] Referring again to Fig. 1-R, server 4000 may include an encoded image transmission controlling module 4420. Encoded image transmission controlling module 4420 may transmit portions of the image to the user device 5800. In an embodiment, at least partially depending on the bandwidth and the particulars of the user device, the server may send all of the encoded image to the user device and let the user device decode the portions as needed, or may decode the image and send portions piecemeal, or with a different encoding, depending on the needs of the user device, and the complexity that can be handled by the user device.

[00242] Referring again to Fig. 1-H, user device 5800 may include an encoded image transmission receiving module 5720, which may be configured to receive the image that is coded in a particular way, e.g., as will be disclosed in more detail herein. User device 5800 also may include an encoded image processing module 5830 that may handle the processing of the image, that is, encoding and decoding portions of the image, or other processing necessary to provide the image to the user.

[00243] Referring now to Fig. 1-AL, Fig. 1-AL shows an implementation of an Application Programming Interface (API) for the various MUVIA components. Specifically, image sensor array API 7800 may include, among other elements, a programming specification 7810, which may include, for example, libraries, classes, specifications, templates, or other coding elements that generally make up an API, and an access authentication module 7820 that governs API access to the various image sensor arrays. The API allows third party developers to access the workings of the image sensor array and the array local processing module 3700, so that the third party developers can write applications for the array local processing module 3700, as well as determine which data captured by the image sensor array 3200 (which often may be multiple gigabytes or more of data per second) should be kept, stored, or transmitted. In an embodiment, API access to certain functions may be limited. For example, a tiered system may allow a certain number of API calls to the MUVIA data per second, per minute, per hour, or per day. In an embodiment, a third party might pay fees or perform a registration that would allow more or less access to the MUVIA data. In an embodiment, the third party could host their application on a separate web site, and let that web site access the image sensor array 3200 and/or the array local processing module 3700 directly.
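
For exemplary purposes only, a minimal Python sketch of the tiered API rate limiting described above follows; the tier names, the per-minute limits, and the fixed-window scheme are hypothetical.

import time

TIER_LIMITS = {"free": 10, "registered": 100, "paid": 1000}   # calls/minute

class ApiGate:
    """A fixed one-minute-window rate limiter, one instance per API key."""
    def __init__(self, tier):
        self.limit = TIER_LIMITS[tier]
        self.window_start = time.monotonic()
        self.calls = 0

    def allow(self):
        now = time.monotonic()
        if now - self.window_start >= 60.0:   # start a new one-minute window
            self.window_start, self.calls = now, 0
        if self.calls < self.limit:
            self.calls += 1
            return True
        return False                          # over quota for this tier

gate = ApiGate("free")
print(sum(gate.allow() for _ in range(15)))   # 10: only ten calls admitted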

[00244] Referring again to Fig. 1, Figs. 1-I, 1-J, 1-S, 1-T, 1-AC, 1-AD, 1-AM, and 1-AN show, in an embodiment, a MUVIA implementation that allows insertion of advertising (or other context-sensitive material) into the images displayed to the user.

[00245] Referring again to Fig. 1-I, in an embodiment, user device 5900 may include a user selection accepting module 5910. User selection accepting module 5910 may be configured to receive user input about what the user wants to see. For example, as shown in Fig. 1-I, the user selection accepting module 5910 may show an image from image sensor array 3200, and the user may "pan" and "zoom" the image using any known interface, including, but not limited to, keyboard, mouse, touch, haptic, augmented reality interface, voice command, or nonverbal motion commands (e.g., as part of a video game system interface, e.g., the Microsoft Kinect). It is noted, as will be discussed in more detail herein, that the camera itself is not "zooming" or "panning," because the camera does not move. What is happening is that different pixels that are captured by the image sensor array 3200 are kept by the image sensor array 3200 and transmitted to the server 4000.

[00246] In an embodiment, the "user selection" facilitated by the user selection accepting module 5910 may not involve a user at all. For example, in an embodiment, e.g., in a security embodiment, the user selection may be handled completely by machine, and may include "select any portion of the image with movement," or "select any portion of the image in which a person is recognized," or "select any portion of the image in which a particular person, e.g., a person on the FBI most wanted list, is recognized." A sketch of the first of these machine-driven selections appears below.
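
For exemplary purposes only, the following Python sketch implements "select any portion of the image with movement" by simple frame differencing; the threshold is hypothetical, and person recognition would substitute a detector for the difference test.

import numpy as np

def moving_region(prev_frame, cur_frame, threshold=30):
    """Return the bounding box (x0, y0, x1, y1) of changed pixels, or None."""
    diff = np.abs(cur_frame.astype(np.int16) - prev_frame.astype(np.int16))
    ys, xs = np.nonzero(diff > threshold)
    if xs.size == 0:
        return None                      # nothing moved; select nothing
    return (int(xs.min()), int(ys.min()), int(xs.max()) + 1, int(ys.max()) + 1)

prev = np.zeros((100, 100), dtype=np.uint8)
cur = prev.copy()
cur[40:60, 20:30] = 255                  # something moved here
print(moving_region(prev, cur))          # (20, 40, 30, 60)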

[00247] Referring again to Fig. 1-I, in an embodiment, user device 5900 may include a user selection transmitting module 5920. The user selection transmitting module 5920 may take the user selection from user selection accepting module 5910, and transmit the selection to the server 4000. The transmission may include some pre-processing; for example, the user device 5900 may determine the size and parameters of the image prior to sending the request to the server 4000, or that processing may be handled by the server 4000. Following the thick-line arrow leftward from user selection transmitting module 5920 through to Fig. 1-K, the transmission goes to server 4000, as will be discussed herein. It is noted that the transmission to the server 4000 may also include data about the user device, for example, the screen resolution, the window size, the type of device, an identity of the user, a level of service the user has paid for (in embodiments in which such services are prioritized by the camera/server), and other capabilities of the device, e.g., framerate, and the like.

[00248] Referring again to Fig. 1-I, Fig. 1-I also includes a selected image receiving module 5930 and a user selection presenting module 5940, which will be discussed in more detail herein, with respect to the dataflow of this embodiment.

[00249] Referring now to Fig. 1-T (graphically represented as "down" and "to the right" of Fig. 1-I), in an embodiment, a server 4000 may include a selected image reception module 4510. In an embodiment, selected image reception module 4510 of server 4000 may receive the selected target from the user device 5900. The selected target data may take various formats, e.g., it may be image data, it may be metadata that identifies the selected target, or it may be some other designation, e.g., an ID number, a tracking number, or a piece of information, like a license plate or a social security number. The selected target data may be an address or a physical description, or any other instantiation of data that can be used to identify something.

[00250] Referring again to Fig. 1-T, in an embodiment, server 4000 may include selected image pre-processing module 4520. Selected image pre-processing module 4520 may perform one or more tasks of pre-processing the image, some of which have been previously described with respect to other embodiments. In an embodiment, server 4000 also may include pixel information transmission to ISA module 4330 configured to transmit the image request data to the image sensor array 3200, as has been previously described.

[00251] Referring now to Figs. 1-AD and 1-AN, array local processing module 3700 may be connected to an image sensor array 3200 through a higher-bandwidth communication link 3505, e.g., a USB or PCI port. In an embodiment, array local processing module 3700 may include a request reception module 3710. Request reception module 3710 may receive the request for an image from the server 4000, as previously described. Request reception module 3710 may transmit the data to a pixel selection module 3720, which may receive the pixels captured from image sensor array 3200, and select the ones that are to be kept. That is, in an embodiment, through use of the (sometimes consolidated) user requests and the captured image, pixel selection module 3720 may select the pixels that have been specifically requested by the user, and mark those pixels for transmission back to the server.

[00252] After the pixels to be kept are identified, the other pixels that are not to be kept are removed, e.g., decimated at unused pixel decimation module 3730. In an embodiment, these pixels are simply discarded, e.g., not stored in long-term memory, but rather routed to a digital trash 3717. In another embodiment, some or all of these pixels are stored in a local memory, e.g., local memory 3715. From there, these pixels may be transmitted to various locations at off-peak times, may be kept for image processing by the array local processing module 3700, or may be subject to other manipulations or processing separate from the user requests, as described in previous embodiments. In an embodiment, unused pixel decimation module 3730 may be used to transmit a lower-resolution version of more of the image (e.g., an entire scene, or more of the field of view surrounding the target) to the server 4000, so that the server 4000 may accurately determine which images are required to fulfill the request of the user.

[00253] Referring again to Fig. 1-AN, the selected pixels then may be transmitted to the server 4000 using selected pixel transmission module 3740. Selected pixel transmission module 3740 may include any transmission equipment necessary, e.g., cellular radio, wireless adapter, and the like, depending on the format of communication. In an embodiment, only those pixels which have been requested are transmitted to the server via lower-bandwidth communication 3710. As with lower-bandwidth communication 3715, lower-bandwidth communication 3710 does not refer to a specific amount of bandwidth; it indicates only that the amount of bandwidth is relatively lower than that of higher-bandwidth communication 3505.

[00254] It is noted that more pixels than what are specifically requested by the user may be transmitted, in certain embodiments. For example, the array local processing module 3700 may send pixels that border the user's requested area, but are outside the user's requested area. In an embodiment, as will be discussed herein, those pixels may be sent at a different resolution or using a different kind of compression. In another embodiment, the additional pixels may merely be sent in the same manner as the requested pixels. In still another embodiment, server 4000 may expand the user requested areas, so that array local processing module 3700 may send only the requested pixels, but the requested pixels cover more area than what the user originally requested. These additional pixels may be transmitted and "cached" by the server or local device, which may be used to decrease latency times, in a process that will be discussed more herein.

[00255] Referring now again to Fig. 1-T, in an embodiment, server 4000 may include received image post-processing module 4550. Received image post-processing module 4550 may receive the image data from the array local processing module 3700 (e.g., in the arrow coming "north" from Fig. 1-AN via Fig. 1-AD). The image may include the pixels that were requested from the image sensor array 3200.

[00256] In an embodiment, server 4000 also may include advertisement insertion module 4560. Advertisement insertion module 4560 may insert an advertisement into the received image. The advertisement may be based on one or more of the contents of the image, a characteristic of the user or the user device, or a setting of the advertisement server component 7700 (see, e.g., Fig. 1-AC, as will be discussed in more detail herein). The advertisement insertion module 4560 may place the advertisement into the image using any known image combination techniques, or, in another embodiment, the advertisement image may be in a separate layer, overlay, or any other data structure. In an embodiment, advertisement insertion module 4560 may include context-based advertisement insertion module 4562, which may be configured to add advertisements that are based on the context of the image. For example, if the image is a live street view of a department store, the context of the image may call for advertisements related to products sold by that department store, e.g., clothing, cosmetics, or power tools.
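
For exemplary purposes only, a minimal Python sketch of inserting an advertisement by alpha compositing over a corner of the received (grayscale) image follows; the placement, opacity, and the compositing approach itself are hypothetical, and an implementation could instead deliver the advertisement as a separate overlay layer.

import numpy as np

def insert_advertisement(image, ad, alpha=0.8, margin=10):
    """Blend `ad` into the bottom-right corner of a grayscale `image`."""
    h, w = ad.shape[:2]
    y0 = image.shape[0] - h - margin
    x0 = image.shape[1] - w - margin
    region = image[y0:y0 + h, x0:x0 + w].astype(np.float32)
    blended = alpha * ad.astype(np.float32) + (1.0 - alpha) * region
    out = image.copy()
    out[y0:y0 + h, x0:x0 + w] = blended.astype(np.uint8)
    return out

scene = np.zeros((480, 640), dtype=np.uint8)
ad = np.full((60, 120), 255, dtype=np.uint8)    # e.g., a department-store banner
print(insert_advertisement(scene, ad).max())    # 204: the blended banner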

[00257] Referring again to Fig. 1-T, server 4000 may include a received image with advertisement transmission to user device module 4570 configured to transmit the image, with the inserted advertisement, to the user device 5900. Received image with advertisement transmission to user device module 4570 may include components necessary to communicate with user device 5900 and may, in some embodiments, share components with one or more other modules of server 4000, e.g., a network interface, or a wireless antenna.

[00258] Referring again to Fig. 1-I, user device 5900 may include a selected image receiving module 5930, which may receive the pixels that were sent by the server 4000, and user selection presenting module 5940, which may display the requested pixels to the user, including the advertisement, e.g., by showing them on a screen of the device. In an embodiment, the display of the image may be carried out through the exemplary interface, which allows a cycle of user requests and new images to be shown as the user navigates through what is seen on the MUVIA, e.g., as shown in Fig. 1-I.

[00259] Referring now to Fig. 1-AC, Fig. 1-AC shows an advertisement server component 7700 configured to deliver advertisements to the server 4000 for insertion into the images prior to delivery to the user. In an embodiment, advertisement server component 7700 may be integrated with server 4000. In another embodiment, advertisement server component 7700 may be separate from server 4000 and may communicate with server 4000. In yet another embodiment, rather than interacting with server 4000, advertisement server component 7700 may interact directly with the user device 5900, and insert the advertisement into the image after the image has been received, or, in another embodiment, cause the user device to display the advertisement concurrently with the image (e.g., overlapping with or adjacent to the image). In such embodiments, some of the described modules of server 4000 may be incorporated into user device 5900, but the functionality of those modules would operate similarly to what was previously described.

[00260] Referring again to Fig. 1-AC, in an embodiment, advertisement server component 7700 may include a user data collection module 7705. User data collection module 7705 may collect data from user device 5900, and use that data to drive placement of advertisements (e.g., based on a user's browser history, e.g., visits to sports sites, and the like).

[00261] Referring again to Fig. 1-AC, in an embodiment, advertisement server component 7700 may include advertisement database 7715, which includes advertisements that are ready to be inserted into images. In an embodiment, these advertisements may be created on the fly.

[00262] Referring again to Fig. 1-AC, in an embodiment, advertisement server component 7700 may include an advertisement request reception module 7710 which receives a request to add an advertisement into the image (the receipt of the request is not shown, to ease understanding of the drawings). In an embodiment, advertisement server component 7700 may include advertisement selection module 7720, which may include an image analysis module 7722 configured to analyze the image to determine the best context-based advertisement to place into the image. In an embodiment, that decision may be made by the server 4000, or partly at the server 4000 and partly at the advertisement server component 7700 (e.g., the advertisement server component may have a set of advertisements from which a particular one may be chosen). In an embodiment, various third parties may compensate the operators of advertisement server component 7700, server 4000, or any other component of the system, in order to receive preferential treatment.

[00263] Referring again to Fig. 1-AC, in an embodiment, advertisement server component 7700 may include a selected advertisement transmission module 7730, which may transmit the selected advertisement (or a set of selected advertisements) to the server 4000. In an embodiment, selected advertisement transmission module 7730 may send the complete image with the advertisement overlaid, e.g., in an implementation in which the advertisement server component 7700 also handles the placement of the advertisement. In an embodiment in which advertisement server component 7700 is integrated with server 4000, this module may be an internal transmission module, as may all such transmission/reception modules.

Exemplary Environment 200

[00264] Referring now to Fig. 2A, Fig. 2A illustrates an example environment 200 in which methods, systems, circuitry, articles of manufacture, and computer program products and architecture, in accordance with various embodiments, may be implemented by at least one server device 230. Image device 220 may include a number of individual sensors that capture data. Although commonly referred to throughout this application as "image data," this is merely shorthand for data that can be collected by the sensors. Other data, including video data, audio data, electromagnetic spectrum data (e.g., infrared, ultraviolet, radio, microwave data), thermal data, and the like, may be collected by the sensors.

[00265] Referring again to Fig. 2A, in an embodiment, image device 220 may operate in an environment 200. Specifically, in an embodiment, image device 220 may capture a scene 215. The scene 215 may be captured by a number of sensors 243. Sensors 243 may be grouped in an array, which in this context means they may be grouped in any pattern, on any plane, but have a fixed position relative to one another. Sensors 243 may capture the image in parts, which may be stitched back together by processor 222. There may be overlap in the images captured by sensors 243 of scene 215, which may be removed.
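
For exemplary purposes only, the following Python sketch shows stitching of equally sized sensor tiles with a known, constant overlap cropped away; a fixed 2x2 grid and an eight-pixel overlap are hypothetical simplifications of what processor 222 might do.

import numpy as np

def stitch(tiles, overlap=8):
    """tiles[r][c] are equally sized images from sensors in a fixed grid."""
    rows = []
    for row in tiles:
        # Drop the overlapping left columns of every tile after the first.
        cropped = [row[0]] + [t[:, overlap:] for t in row[1:]]
        rows.append(np.hstack(cropped))
    # Drop the overlapping top rows of every row after the first.
    cropped_rows = [rows[0]] + [r[overlap:, :] for r in rows[1:]]
    return np.vstack(cropped_rows)

tile = np.ones((100, 100), dtype=np.uint8)
scene = stitch([[tile, tile], [tile, tile]])
print(scene.shape)   # (192, 192): 2x100 less 8 pixels of overlap each way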

[00266] Upon capture of the scene in image device 220, in processes and systems that will be described in more detail herein, the requested pixels are selected. Specifically, pixels that have been identified by a remote user, by a server, by the local device, by another device, by a program written by an outside user with an API, by a component or other hardware or software in communication with the image device, and the like, are transmitted to a remote location via a communications network 240. The pixels that are to be transmitted may be illustrated in Fig. 2A as selected portion 255; however, this is a simplified expression meant for illustrative purposes only.

[00267] Referring again to Fig. 2A, in an embodiment, server device 230 may be any device or group of devices that is connected to a communication network. Although in some examples server device 230 is distant from image device 220, that is not required. Server device 230 may be "remote" from image device 220, which may mean that they are separate components, but does not necessarily imply a specific distance. The communications network may be a local transmission component, e.g., a PCI bus. Server device 230 may include a request handling module 232 that handles requests for images from user devices, e.g., user devices 250A and 250B. Request handling module 232 also may handle other remote computers and/or users that want to take active control of the image device, e.g., through an API, or through more direct control.

[00268] Server device 230 also may include an image device management module 234, which may perform some of the processing to determine which of the captured pixels of image device 220 are kept. For example, image device management module 234 may do some pattern recognition, e.g., to recognize objects of interest in the scene, e.g., a particular football player, as shown in the example of Fig. 2A. In other embodiments, this processing may be handled at the image device 220 or at the user device 250. In an embodiment, server device 230 may limit the size of the selected portion based on the screen resolution of the requesting user device.

[00269] Server device 230 then may transmit the requested portions to the user devices, e.g., user device 250A and user device 250B. In another embodiment, the user device or devices may directly communicate with image device 220, cutting out server device 230 from the system.

[00270] In an embodiment, user devices 250A and 250B are shown; however, user devices may be any electronic device or combination of devices, which may be located together or spread across multiple devices and/or locations. Image device 220 may be a server device, or may be a user-level device, e.g., including, but not limited to, a cellular phone, a network phone, a smartphone, a tablet, a music player, a walkie-talkie, a radio, an augmented reality device (e.g., augmented reality glasses and/or headphones), wearable electronics, e.g., watches, belts, earphones, or "smart" clothing, earphones, headphones, audio/visual equipment, media player, television, projection screen, flat screen, monitor, clock, appliance (e.g., microwave, convection oven, stove, refrigerator, freezer), a navigation system (e.g., a Global Positioning System ("GPS") system), a medical alert device, a remote control, a peripheral, an electronic safe, an electronic lock, an electronic security system, a video camera, a personal video recorder, a personal audio recorder, and the like. Device 220 may include a device interface 243 which may allow the device 220 to output data to the client in sensory (e.g., visual or any other sense) form, and/or allow the device 220 to receive data from the client, e.g., through touch, typing, or moving a pointing device (e.g., a mouse). User device 250 may include a viewfinder or a viewport that allows a user to "look" through the lens of image device 220, regardless of whether the user device 250 is spatially close to the image device 220.

[00271] Referring again to Fig. 2A, in various embodiments, the communication network 240 may include one or more of a local area network (LAN), a wide area network (WAN), a metropolitan area network (MAN), a wireless local area network (WLAN), a personal area network (PAN), a Worldwide Interoperability for Microwave Access (WiMAX) network, a public switched telephone network (PSTN), a general packet radio service (GPRS) network, a cellular network, and so forth. The communication networks 240 may be wired, wireless, or a combination of wired and wireless networks. It is noted that "communication network" as it is used in this application refers to one or more communication networks, which may or may not interact with each other.

[00272] Referring now to Fig. 2B, Fig. 2B shows a more detailed version of server device 230, according to an embodiment. The server device 230 may include a device memory 245. In an embodiment, device memory 245 may include memory, random access memory ("RAM"), read-only memory ("ROM"), flash memory, hard drives, disk-based media, disc-based media, magnetic storage, optical storage, volatile memory, nonvolatile memory, and any combination thereof. In an embodiment, device memory 245 may be separated from the device, e.g., available on a different device on a network, or over the air. For example, in a networked system, there may be more than one server device 230 whose device memories 245 may be located at a central server that may be a few feet away or located across an ocean. In an embodiment, device memory 245 may include one or more mass storage devices, read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), cache memory such as random access memory (RAM), flash memory, static random access memory (SRAM), dynamic random access memory (DRAM), and/or other types of memory devices. In an embodiment, memory 245 may be located at a single network site. In an embodiment, memory 245 may be located at multiple network sites, including sites that are distant from each other. In an embodiment, device memory 245 may include one or more of cached images 245A and previously retained image data 245B, as will be discussed in more detail further herein.

[00273] Referring again to Fig. 2B, in an embodiment, server device 230 may include an optional viewport 247, which may be used to view images received by server device 230. This optional viewport 247 may be physical (e.g., glass) or electrical (e.g., an LCD screen), or may be at a distance from server device 230.

[00274] Referring again to Fig. 2B, in an embodiment, server device 230 may include a processor 222. Processor 222 may include one or more microprocessors, Central Processing Units ("CPU"), Graphics Processing Units ("GPU"), Physics Processing Units, Digital Signal Processors, Network Processors, Floating Point Processors, and the like. In an embodiment, processor 222 may be a server. In an embodiment, processor 222 may be a distributed-core processor. Although processor 222 is illustrated as a single processor that is part of a single device, processor 222 may be multiple processors distributed over one or many devices, which may or may not be configured to operate together.

[00275] Processor 222 is illustrated as being configured to execute computer readable instructions in order to perform one or more operations described above, and as illustrated in Fig. 10, Figs. 11A-11G, Figs. 12A-12E, Figs. 13A-13C, and Figs. 14A-14E. In an embodiment, processor 222 is designed to be configured to operate as processing module 250, which may include one or more of a request for particular image data that is part of a scene acquiring module 252, a request for particular image data transmitting to an image sensor array module 254 configured to transmit the request to the image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data, a particular image data from the image sensor array exclusive receiving module 256 configured to receive only the particular image data from the image sensor array, and a received particular image data transmitting to at least one requestor module 258 configured to transmit the received particular image data to at least one requestor.

Exemplary Environment 300A

[00276] Referring now to Fig. 3A, Fig. 3A shows an exemplary embodiment of an image device, e.g., image device 220A, operating in an environment 300A. In an embodiment, image device 220A may include an array 310 of image sensors 312 as shown in Fig. 3A. The array of image sensors in this figure is shown in a rectangular grid; however, this is merely exemplary, to show that image sensors 312 may be arranged in any format. In an embodiment, each image sensor 312 may capture a portion of scene 315, which portions are then processed by processor 350. Although processor 350 is shown as local to image device 220A, it may be remote to image device 220A, with a sufficiently high-bandwidth connection to receive all of the data from the array of image sensors (e.g., multiple USB 3.0 lines). In an embodiment, the selected portions from the scene (e.g., the portions shown in the shaded box, e.g., selected portion 315) may be transmitted to a remote device 330, which may be a user device or a server device, as previously described. In an embodiment, the pixels that are not transmitted to remote device 330 may be stored in a local memory 340 or discarded.

Exemplary Environment 300B

[00277] Referring now to Fig. 3B, Fig. 3B shows an exemplary embodiment of an image device, e.g., image device 320B, operating in an environment 300B. In an embodiment, image device 320B may include an image sensor array 320B, e.g., an array of image sensors, which, in this example, are arranged around a polygon to increase the field of view that can be captured, that is, they can capture scene 315B, illustrated in Fig. 3B as a natural landmark that can be viewed in a virtual tourism setting. Processor 322 receives the scene 315B and selects the pixels from the scene 315B that have been requested by a user, e.g., requested portions 317B. Requested portions 317B may include an overlapping area 324B that is only transmitted once. In an embodiment, the requested portions 317B may be transmitted to a remote location via communications network 240.

Exemplary Environment 300C

[00278] Referring now to Fig. 3C, Fig. 3C shows an exemplary embodiment of an image device, e.g., image device 320C, operating in an environment 300C. In an embodiment, image device 320C may capture a scene, of which a part, e.g., scene portion 315C, is shown, as previously described in other embodiments (e.g., some parts of image device 320C are omitted for simplicity of drawing). In an embodiment, scene portion 315C may show a street-level view of a busy road, e.g., for a virtual tourism or virtual reality simulator. In an embodiment, different portions of the scene portion 315C may be transmitted at different resolutions or at different times. For example, in an embodiment, a central part of the scene portion 315C, e.g., portion 316, which may correspond to what a user's eyes would see, is transmitted at a first resolution, or "full" resolution relative to what the user's device can handle. In an embodiment, an outer border outside portion 316, e.g., portion 314, may be transmitted at a second resolution, which may be lower, e.g., lower than the first resolution. In another embodiment, a further outside portion, e.g., portion 312, may be discarded, transmitted at a still lower rate, or transmitted asynchronously.

Exemplary Environment 400A

[00279] Referring now to Fig. 4A, Fig. 4A shows an exemplary embodiment of a server device, e.g., server device 430A. In an embodiment, an image device, e.g., image device 420A, may capture a scene 415. Scene 415 may be stored in local memory 440. The portions of scene 415 that are requested by the server device 430A may be transmitted (e.g., through requested image transfer 465) to requested pixel reception module 432 of server device 430A. In an embodiment, the requested pixels transmitted to requested pixel reception module 432 may correspond to images that were requested by various users and/or devices (not shown) in communication with server device 430A.

[00280] Referring again to Fig. 4A, in an embodiment, pixels not transmitted from local memory 440 of image device 420A may be stored in untransmitted pixel temporary storage 440B. These untransmitted pixels may be stored and transmitted to the server device 430A at a later time, e.g., an off-peak time for requests for images of scene 415. For example, in an embodiment, the pixels stored in untransmitted pixel temporary storage 440B may be transmitted to the unrequested pixel reception module 434 of server device 430A at night, or when other users are disconnected from the system, or when the available bandwidth to transfer pixels between image device 420A and server device 430A reaches a certain threshold value.
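
For exemplary purposes only, a minimal Python sketch of the off-peak transfer policy for untransmitted pixels follows; the queueing scheme and the bandwidth threshold are hypothetical.

import collections

class OffPeakSpooler:
    """Spool unrequested pixel blocks locally; flush them only when spare
    bandwidth is available (e.g., at night)."""
    def __init__(self, min_spare_mbps=50.0):
        self.queue = collections.deque()
        self.min_spare_mbps = min_spare_mbps

    def store(self, pixels):
        self.queue.append(pixels)    # cf. untransmitted pixel storage 440B

    def flush(self, spare_mbps, send):
        """Send queued blocks while bandwidth headroom remains."""
        sent = 0
        while self.queue and spare_mbps >= self.min_spare_mbps:
            send(self.queue.popleft())   # to unrequested pixel reception 434
            sent += 1
        return sent

spooler = OffPeakSpooler()
spooler.store(b"block-1")
spooler.store(b"block-2")
print(spooler.flush(spare_mbps=80.0, send=lambda b: None))   # 2
print(spooler.flush(spare_mbps=10.0, send=lambda b: None))   # 0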

[00281] In an embodiment, server device 430A may analyze the pixels received by unrequested pixel reception module 434, for example, to provide a repository of static images from the scene 415 that do not need to be transmitted from the image device 420A each time certain portions of scene 415 are requested.

Exemplary Environment 400B

[00282] Referring now to Fig. 4B, Fig. 4B shows an exemplary embodiment of a server device, e.g., server device 430B. In an embodiment, an image device, e.g., image device 420B, may capture a scene 415B. Scene 415B may be stored in local memory 440B. In an embodiment, image device 420B may capture the same scene 415B multiple times. In an embodiment, scene 415B may include an unchanged area 416A, which is a portion of the image that has not changed since the last time the scene 415B was captured by the image device 420B. In an embodiment, scene 415B also may include a changed area 416B, which may be a portion of the image that has changed since the last time the scene 415B was captured by the image device 420B. Although changed area 416B is illustrated as polygonal and contiguous in Fig. 4B, this is merely for illustrative purposes, and changed area 416B may be, in some embodiments, nonpolygonal and/or noncontiguous.

[00283] In an embodiment, image device 420B, upon capturing scene 415B, may use an image previously stored in local memory 440B to compare the previous image, e.g., previous image 441B, to the current image, e.g., current image 442B, and may determine which areas of the scene 415B have changed. The changed areas may be transmitted to server device 430B, e.g., to changed area reception module 432B. This may occur through a changed area transmission 465, as indicated in Fig. 4B.
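One way to realize the comparison of previous image 441B against current image 442B is a block-wise difference, sketched below under stated assumptions: images are numpy arrays, and the block size and tolerance are invented parameters rather than values the application specifies.

```python
# Hedged sketch: find block-aligned changed areas between two captures.
import numpy as np

def changed_blocks(previous: np.ndarray, current: np.ndarray,
                   block: int = 16, tolerance: int = 8):
    """Yield (row, col) of block-aligned regions whose pixels differ by
    more than `tolerance` on average -- the 'changed areas' to transmit."""
    diff = np.abs(current.astype(np.int16) - previous.astype(np.int16))
    for r in range(0, diff.shape[0] - block + 1, block):
        for c in range(0, diff.shape[1] - block + 1, block):
            if diff[r:r + block, c:c + block].mean() > tolerance:
                yield (r, c)
```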

[00284] Referring again to Fig. 4B, in an embodiment, server device 430B receives the changed area at changed area reception module 432B. Server device 430B also may include an unchanged area addition module 434B, which adds the unchanged areas that were previously stored in a memory of server device 430B (not shown) from a previous transmission from image device 420B. In an embodiment, server device 430B also may include a complete image transmission module 436B configured to transmit the completed image to a user device, e.g., user device 450B, that requested the image.
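The server-side counterpart, pairing cached unchanged areas with freshly received changed blocks to rebuild the complete image, might look like the following hedged sketch (apply_changed_blocks is a hypothetical name; the block layout matches the previous fragment).

```python
# Hedged sketch: reassemble a complete image from cached unchanged areas
# plus freshly received changed blocks (cf. modules 432B, 434B, 436B).
import numpy as np

def apply_changed_blocks(cached: np.ndarray, blocks, block: int = 16) -> np.ndarray:
    """`blocks` maps (row, col) -> pixel array for each changed region."""
    complete = cached.copy()           # unchanged areas come from the cache
    for (r, c), pixels in blocks.items():
        complete[r:r + block, c:c + block] = pixels
    return complete                    # ready to transmit to the user device
```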

Exemplary Environment 500A

[00285] Referring now to Fig. 5A, Fig. 5A shows an exemplary embodiment of a server device, e.g., server device 530A. In an embodiment, an image device 520A may capture a scene 515 through use of an image sensor array 540, as previously described. The image may be temporarily stored in a local memory 540 (as pictured), or may be partially or wholly stored in a local memory before transmission to a server device 530A. In an embodiment, server device 530A may include an image data reception module 532A. Image data reception module 532A may receive the image from image device 520A. In an embodiment, server device 530A may include data addition module 534A, which may add additional data to the received image data. In an embodiment, the additional data may be visible or invisible, e.g., pixel data or metadata, for example. In an embodiment, the additional data may be advertising data. In an embodiment, the additional data may be context-dependent upon the image data; for example, if the image data is of a football player, the additional data may be statistics about that player, or an advertisement for an online shop that sells that player's jersey.

[00286] In an embodiment, the additional data may be stored in a memory of server device 530A (not shown). In another embodiment, the additional data may be retrieved from an advertising server or another data server. In an embodiment, the additional data may be tailored to one or more characteristics of the user or the user device, e.g., the user may have a setting that labels each player displayed on the screen with that player's last name. Referring again to Fig. 5A, in an embodiment, server device 530A may include a modified data transmission module 536A, which may receive the modified data from data addition module 534A, and transmit the modified data to a user device, e.g., a user device that requested the image data, e.g., user device 550A.
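As a hedged illustration of the data addition step, the sketch below attaches context-dependent extras (labels, advertisements) to received image data before retransmission; the dictionary fields and the per-object lookup are assumptions made for the example, not the application's interfaces.

```python
# Hedged sketch: context-dependent data addition (cf. module 534A).
def add_context_data(image_data: dict, user_settings: dict) -> dict:
    """Attach labels and ads keyed to objects detected in the image data."""
    extras = []
    for obj in image_data.get("detected_objects", []):
        if obj["type"] == "football_player":
            if user_settings.get("label_players"):
                extras.append({"kind": "label", "text": obj["last_name"]})
            # Context-dependent advertising, e.g., that player's jersey.
            extras.append({"kind": "ad",
                           "text": f"Buy {obj['last_name']}'s jersey"})
    return {**image_data, "additional_data": extras}
```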

Exemplary Environment 500B

[00288] Referring now to Fig. 5B, Fig. 5B shows an exemplary embodiment of a server device, e.g., server device 530B. In an embodiment, multiple user devices, e.g., user device 502A, user device 502B, and user device 502C, each may send a request for image data from a scene, e.g., scene 515B. Each user device may send a request to a server device, e.g., server device 530B. Server device 530B may consolidate the requests, which may be for various resolutions, shapes, sizes, and other features, into a single combined request 570. Overlapping portions of the request, e.g., as shown in overlapping area 572, may be combined.

[00289] In an embodiment, server device 530B transmits the combined request 570 to the image device 520B. In an embodiment, image device 520B uses the combined request 570 to designate selected pixels 574, which then may be transmitted back to the server device 530B, where the process of combining the requests may be reversed, and each user device 502A, 502B, and 502C may receive the requested image. This process will be discussed in more detail further herein.
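A compact sketch of the consolidation of requests (and of deduplicating overlapping area 572) follows; it treats each request as an axis-aligned rectangle of pixels and uses invented names (combine_requests, per_user), so it is an illustration of the idea rather than the described circuitry.

```python
# Hedged sketch: combine several user requests into one deduplicated request.
def combine_requests(requests):
    """requests: list of (user_id, (y0, y1, x0, x1)). Returns the combined
    pixel set and a per-user mapping used later to reverse the combination."""
    combined = set()
    per_user = {}
    for user_id, (y0, y1, x0, x1) in requests:
        pixels = {(y, x) for y in range(y0, y1) for x in range(x0, x1)}
        per_user[user_id] = pixels
        combined |= pixels             # set union deduplicates overlaps
    return combined, per_user

combined, per_user = combine_requests([
    ("user_a", (0, 4, 0, 4)),
    ("user_b", (2, 6, 2, 6)),          # overlaps user_a; shared pixels sent once
])
```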

Exemplary Embodiments of the Various Modules of Portions of Processor 250

[00290] Figs. 6-9 illustrate exemplary embodiments of the various modules that form portions of processor 250. In an embodiment, the modules represent hardware, either that is hard-coded, e.g., as in an application-specific integrated circuit ("ASIC"), or that is physically reconfigured through gate activation described by computer instructions, e.g., as in a central processing unit.

[00291] Referring now to Fig. 6, Fig. 6 illustrates an exemplary implementation of the request for particular image data that is part of a scene acquiring module 252. As illustrated in Fig. 6, the request for particular image data that is part of a scene acquiring module may include one or more sub-logic modules in various alternative implementations and embodiments. For example, as shown in Fig. 6, e.g., Fig. 6A, in an embodiment, module 252 may include one or more of request for particular image data that is part of a scene and includes one or more images acquiring module 602 and request for particular image data that is part of a scene receiving module 604. In an embodiment, module 604 may include request for particular image data that is part of a scene receiving from a client device module 606. In an embodiment, module 606 may include one or more of request for particular image data that is part of a scene receiving from a client device configured to display at least a portion of the scene module 608 and request for particular image data that is part of a scene receiving from a client device configured to receive a selection of a particular image module 612. In an embodiment, module 608 may include request for particular image data that is part of a scene receiving from a client device configured to display at least a portion of the scene in a viewfinder module 610. In an embodiment, module 612 may include one or more of request for particular image data that is part of a scene receiving from a client device configured to receive a scene-based selection of a particular image module 614 and request for particular image data that is part of a scene receiving from one or more various devices configured to receive a scene-based selection of a particular image module 616.

[00292] Referring again to Fig. 6, e.g., Fig. 6B, as described above, in an embodiment, module 252 may include one or more of request for particular image data that is part of a scene that is the image data collected by the array of more than one image sensor acquiring module 618 and request for particular image data that is part of a scene that is a representation of the image data collected by the array of more than one image sensor acquiring module 620. In an embodiment, module 620 may include one or more of request for particular image data that is part of a scene that is a sampling of the image data collected by the array of more than one image sensor acquiring module 622, request for particular image data that is part of a scene that is a subset of the image data collected by the array of more than one image sensor acquiring module 624, and request for particular image data that is part of a scene that is a low-resolution version of the image data collected by the array of more than one image sensor acquiring module 626.

[00293] Referring again to Fig. 6, e.g., Fig. 6C, in an embodiment, module 252 may include one or more of request for particular image data that is part of a scene that is a football game acquiring module 628, request for particular image data that is part of a scene that is an area street view acquiring module 630, request for particular image data that is part of a scene that is a tourist destination acquiring module 632, and request for particular image data that is part of a scene that is inside a home acquiring module 634.

[00294] Referring again to Fig. 6, e.g., Fig. 6D, in an embodiment, module 252 may include request for particular image data that is an image that is a portion of the scene acquiring module 636. In an embodiment, module 636 may include one or more of request for particular image data that is an image that is a particular football player and a scene that is a football field acquiring module 638 and request for particular image data that is an image that is a vehicle license plate and a scene that is a highway bridge acquiring module 640.

[00295] Referring again to Fig. 6, e.g., Fig. 6E, in an embodiment, module 252 may include one or more of request for particular image object located in the scene acquiring module 642 and particular image data of the scene that contains the particular image object determining module 644. In an embodiment, module 642 may include one or more of request for particular person located in the scene acquiring module 646, request for a basketball located in the scene acquiring module 648, request for a motor vehicle located in the scene acquiring module 650, and request for a human object representation located in the scene acquiring module 652.

[00296] Referring again to Fig. 6, e.g., Fig. 6F, in an embodiment, module 252 may include one or more of first request for first particular image data from a first requestor receiving module 662, second request for first particular image data from a different second requestor receiving module 664, first received request for first particular image data and second received request for second particular image data combining module 666, first request for first particular image data and second request for second particular image data receiving module 670, and received first request and received second request combining module 672. In an embodiment, module 666 may include first received request for first particular image data and second received request for second particular image data combining into the request for particular image data module 668. In an embodiment, module 672 may include received first request and received second request common pixel deduplicating module 674.

[00297] Referring again to Fig. 6, e.g., Fig. 6G, in an embodiment, module 252 may include one or more of request for particular video data that is part of a scene acquiring module 676, request for particular audio data that is part of a scene acquiring module 678, request for particular image data that is part of a scene receiving from a user device with an audio interface module 680, and request for particular image data that is part of a scene receiving from a microphone-equipped user device with an audio interface module 682.

[00298] Referring now to Fig. 7, Fig. 7 illustrates an exemplary implementation of request for particular image data transmitting to an image sensor array module 254. As illustrated in Fig. 7, the request for particular image data transmitting to an image sensor array module 254 may include one or more sub-logic modules in various alternative implementations and embodiments. For example, as shown in Fig. 7, e.g., Fig. 7A, in an embodiment, module 254 may include one or more of request for particular image data transmitting to an image sensor array module configured to transmit the request to the image sensor array that includes more than one image sensor and that is configured to capture an image that is larger than the requested particular image data 702, request for particular image data transmitting to an image sensor array module configured to transmit the request to the image sensor array that includes two angled image sensors and that is configured to capture the scene that is larger than the requested particular image data 704, request for particular image data transmitting to an image sensor array module configured to transmit the request to the image sensor array that includes more than one image sensor arranged in a grid and that is configured to capture the scene that is larger than the requested particular image data 706, request for particular image data transmitting to an image sensor array module configured to transmit the request to the image sensor array that includes more than one image sensor arranged in a line and that is configured to capture the scene that is larger than the requested particular image data 708, and request for particular image data transmitting to an image sensor array module configured to transmit the request to the image sensor array that includes more than one nonlinearly arranged stationary image sensor and that is configured to capture the scene that is larger than the requested particular image data 710.

[00299] Referring again to Fig. 7, e.g., Fig. 7B, in an embodiment, module 254 may include one or more of request for particular image data transmitting to an image sensor array that includes an array of static image sensors module 712, request for particular image data transmitting to an image sensor array that includes an array of image sensors mounted on a movable platform module 716, request for particular image data transmitting to an image sensor array module configured to transmit the request to the image sensor array that includes more than one image sensor and that is configured to capture the scene that represents more data than the requested particular image data 718, and request for particular image data transmitting to an image sensor array module configured to transmit the request to the image sensor array that includes more than one image sensor and that is configured to capture the scene that represents a greater field of view than the requested particular image data 724. In an embodiment, module 712 may include request for particular image data transmitting to an image sensor array that includes an array of static image sensors that have fixed focal length and fixed field of view module 714. In an embodiment, module 718 may include one or more of request for particular image data transmitting to an image sensor array module configured to transmit the request to the image sensor array that includes more than one image sensor and that is configured to capture the scene that represents ten times as much data as the requested particular image data 720 and request for particular image data transmitting to an image sensor array module configured to transmit the request to the image sensor array that includes more than one image sensor and that is configured to capture the scene that represents more than one hundred times as much data as the requested particular image data 722.

[00300] Referring again to Fig. 7, e.g., Fig. 7C, in an embodiment, module 254 may include one or more of request for particular image data modifying module 726 and modified request for particular image data transmitting to an image sensor array module 728. In an embodiment, module 726 may include designated image data removing from request for particular image data module 730. In an embodiment, module 730 may include designated image data removing from request for particular image data based on previously stored image data module 732. In an embodiment, module 732 may include one or more of designated image data removing from request for particular image data based on previously stored image data retrieved from the image sensor array module 734 and designated image data removing from request for particular image data based on previously stored image data that is an earlier-in-time version of the designated image data module 736. In an embodiment, module 736 may include designated image data removing from request for particular image data based on previously stored image data that is an earlier-in-time version of the designated image data that is a static object module 738.

[00301] Referring again to Fig. 7, e.g., Fig. 7D, in an embodiment, module 254 may include module 726 and module 728, as previously discussed. In an embodiment, module 726 may include one or more of designated image data removing from request for particular image data based on pixel data interpolation/extrapolation module 740, portion of the request for particular image data that was previously stored in memory identifying module 744, and identified portion of the request for the particular image data removing module 746. In an embodiment, module 740 may include designated image data corresponding to one or more static image objects removing from request for particular image data based on pixel data interpolation/extrapolation module 742. In an embodiment, module 744 may include one or more of portion of the request for the particular image data that was previously captured by the image sensor array identifying module 748 and portion of the request for the particular image data that includes at least one static image object that was previously captured by the image sensor array identifying module 750. In an embodiment, module 750 may include portion of the request for the particular image data that includes at least one static image object of a rock outcropping that was previously captured by the image sensor array identifying module 752.
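Paragraphs [00300]-[00301] describe pruning a request against previously stored image data. A minimal sketch of that idea, with an invented cache structure and hypothetical names (prune_request, static_cache), might read:

```python
# Hedged sketch: remove regions already covered by stored static image data
# (e.g., a rock outcropping that does not change) before sending the request.
def prune_request(requested_regions: set, static_cache: dict):
    """requested_regions: set of region ids; static_cache: region id ->
    pixels captured earlier. Returns what to fetch and what to reuse."""
    to_fetch = {r for r in requested_regions if r not in static_cache}
    from_cache = {r: static_cache[r]
                  for r in requested_regions & static_cache.keys()}
    return to_fetch, from_cache
```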

[00302] Referring again to Fig. 7, e.g., Fig. 7E, in an embodiment, module 254 may include one or more of size of request for particular image data determining module 754 and determined-size request for particular image data transmitting to the image sensor array module 756. In an embodiment, module 754 may include one or more of size of request for particular image determining at least partially based on user device property module 758, size of request for particular image determining at least partially based on user device access level module 762, size of request for particular image determining at least partially based on available bandwidth module 764, size of request for particular image determining at least partially based on device usage time module 766, and size of request for particular image determining at least partially based on device available bandwidth module 768. In an embodiment, module 758 may include size of request for particular image determining at least partially based on user device resolution module 760.
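The size determination of modules 754-768 can be sketched as a simple policy function; every threshold and tier below is an assumption for illustration, not a value from the application.

```python
# Hedged sketch: size a request from device properties and conditions.
def request_size(device_resolution, access_level, bandwidth_mbps, usage_hours):
    """Return a pixel budget for the request (all thresholds invented)."""
    width, height = device_resolution
    pixels = width * height            # start from the device's resolution
    if access_level == "basic":
        pixels = min(pixels, 1280 * 720)   # cap for lower access tiers
    if bandwidth_mbps < 5:
        pixels //= 4                       # shrink under poor bandwidth
    if usage_hours > 8:
        pixels //= 2                       # throttle heavy daily usage
    return pixels
```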

[00303] Referring now to Fig. 8, Fig. 8 illustrates an exemplary implementation of particular image data from the image sensor array exclusive receiving module 256. As illustrated in Fig. 8A, the particular image data from the image sensor array exclusive receiving module 256 may include one or more sub-logic modules in various alternative implementations and embodiments. For example, as shown in Fig. 8, e.g., Fig. 8A, in an embodiment, module 256 may include one or more of particular image data from the image sensor array in which other image data is discarded receiving module 802, particular image data from the image sensor array in which other image data is stored at the image sensor array receiving module 804, and particular image data from the image sensor array exclusive near-real-time receiving module 806.

[00304] Referring again to Fig. 8, e.g., Fig. 8B, in an embodiment, module 256 may include one or more of particular image data from the image sensor array exclusive near-real-time receiving module 808 and data from the scene other than the particular image data retrieving at a later time module 810. In an embodiment, module 810 may include one or more of data from the scene other than the particular image data retrieving at a time of available bandwidth module 812, data from the scene other than the particular image data retrieving at an off-peak usage time of the image sensor array module 814, data from the scene other than the particular image data retrieving at a time when no particular image data requests are present at the image sensor array module 816, and data from the scene other than the particular image data retrieving at a time of available image sensor array capacity module 818.

[00305] Referring again to Fig. 8, e.g., Fig. 8C, in an embodiment, module 256 may include one or more of particular image data that includes audio data from the image sensor array exclusive receiving module 820 and particular image data that was determined to contain a particular requested image object from the image sensor array exclusive receiving module 822. In an embodiment, module 822 may include particular image data that was determined to contain a particular requested image object by the image sensor array exclusive receiving module 824.

[00306] Referring now to Fig. 9, Fig. 9 illustrates an exemplary implementation of received particular image data transmitting to at least one requestor module 258. As illustrated in Fig. 9A, the received particular image data transmitting to at least one requestor module 258 may include one or more sub-logic modules in various alternative implementations and embodiments. For example, as shown in Fig. 9, e.g., Fig. 9A, in an embodiment, module 258 may include one or more of received particular image data transmitting to at least one user device requestor module 902, separation of the received particular data into set of one or more requested images executing module 906, and received particular image data transmitting to at least one user device that requested image data that is part of the received particular image data module 912. In an embodiment, module 902 may include received particular image data transmitting to at least one user device that requested at least a portion of the received particular data requestor module 904. In an embodiment, module 906 may include separation of the received particular data into a first requested image and a second requested image executing module 910.

[00307] Referring again to Fig. 9, e.g., Fig. 9B, in an embodiment, module 258 may include one or more of first portion of received particular image data transmitting to the first requestor module 914, second portion of received particular image data transmitting to a second requestor module 916, and received particular image data unaltered transmitting to at least one requestor module 926. In an embodiment, module 914 may include first portion of received particular image data transmitting to the first requestor that requested the first portion module 918. In an embodiment, module 918 may include portion of received particular image data that includes a particular football player transmitting to a television device that requested the football player from a football game module 920. In an embodiment, module 916 may include second portion of received particular image data transmitting to the second requestor that requested the second portion module 922. In an embodiment, module 922 may include portion that contains a view of a motor vehicle transmitting to the second requestor that is a tablet device that requested the view of the motor vehicle module 924.

[00308] Referring again to Fig. 9, e.g., Fig. 9C, in an embodiment, module 258 may include one or more of supplemental data addition to at least a portion of the received particular image data to generate transmission image data facilitating module 928 and generated transmission image data transmitting to at least one requestor module 930. In an embodiment, module 928 may include one or more of advertisement data addition to at least a portion of the received particular image data to generate transmission image data facilitating module 932 and related visual data addition to at least a portion of the received particular image data to generate transmission image data facilitating module 938. In an embodiment, module 932 may include context-based advertisement data addition to at least a portion of the received particular image data to generate transmission image data facilitating module 934. In an embodiment, module 934 may include animal rights donation fund advertisement data addition to at least a portion of the received particular image data that includes a jungle tiger at an oasis to generate transmission image data facilitating module 936. In an embodiment, module 938 may include related fantasy football statistical data addition to at least a portion of the received particular image data of a quarterback to generate transmission image data facilitating module 940.

[00309] Referring again to Fig. 9, e.g., Fig. 9D, in an embodiment, module 258 may include one or more of portion of received particular image data modification to generate transmission image data facilitating module 942 and generated transmission image data transmitting to at least one requestor module 944. In an embodiment, module 942 may include one or more of portion of received particular image data image manipulation modification to generate transmission image data facilitating module 946 and portion of received particular image data redaction to generate transmission image data facilitating module 952. In an embodiment, module 946 may include one or more of portion of received particular image data contrast balancing modification to generate transmission image data facilitating module 948 and portion of received particular image data color modification balancing to generate transmission image data facilitating module 950. In an embodiment, module 952 may include portion of received particular image data redaction to generate transmission image data based on a security clearance level of the requestor facilitating module 954. In an embodiment, module 954 may include portion of received satellite image data that includes a tank redaction to generate transmission image data based on a security clearance level of the requestor facilitating module 956.
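Modules 952-956 describe redaction keyed to a requestor's security clearance. The sketch below is one plausible reading, with invented classification labels and region tags; it blanks any tagged region whose level exceeds the requestor's clearance before transmission.

```python
# Hedged sketch: clearance-based redaction of tagged image regions.
import numpy as np

LEVELS = {"public": 0, "secret": 1, "top_secret": 2}   # invented tiers

def redact(image: np.ndarray, tagged_regions, clearance: str) -> np.ndarray:
    """tagged_regions: list of ((y0, y1, x0, x1), label) pairs."""
    out = image.copy()
    for (y0, y1, x0, x1), label in tagged_regions:
        if LEVELS[label] > LEVELS[clearance]:
            out[y0:y1, x0:x1] = 0      # e.g., a tank in satellite image data
    return out
```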

[00310] Referring again to Fig. 9, e.g., Fig. 9D, in an embodiment, module 258 may include one or more of lower-resolution version of received particular image data transmitting to at least one requestor module 958 and full-resolution version of received particular image data transmitting to at least one requestor module 960.

[00311] In some implementations described herein, logic and similar implementations may include software or other control structures. Electronic circuitry, for example, may have one or more paths of electrical current constructed and arranged to implement various functions as described herein. In some implementations, one or more media may be configured to bear a device-detectable implementation when such media hold or transmit device-detectable instructions operable to perform as described herein. In some variants, for example, implementations may include an update or modification of existing software or firmware, or of gate arrays or programmable hardware, such as by performing a reception of or a transmission of one or more instructions in relation to one or more operations described herein. Alternatively or additionally, in some variants, an implementation may include special-purpose hardware, software, firmware components, and/or general-purpose components executing or otherwise invoking special-purpose components. Specifications or other implementations may be transmitted by one or more instances of tangible transmission media as described herein, optionally by packet transmission or otherwise by passing through distributed media at various times.

[00312] Following are a series of flowcharts depicting implementations. For ease of understanding, the flowcharts are organized such that the initial flowcharts present implementations via an example implementation and thereafter the following flowcharts present alternate implementations and/or expansions of the initial flowchart(s) as either sub-component operations or additional component operations building on one or more earlier-presented flowcharts. Those having skill in the art will appreciate that the style of presentation utilized herein (e.g., beginning with a presentation of a flowchart(s) presenting an example implementation and thereafter providing additions to and/or further details in subsequent flowcharts) generally allows for a rapid and easy understanding of the various process implementations. In addition, those skilled in the art will further appreciate that the style of presentation used herein also lends itself well to modular and/or object-oriented program design paradigms.

Exemplary Operational Implementation of Processor 250 and Exemplary Variants

[00313] Further, in Fig. 10 and in the figures to follow thereafter, various operations may be depicted in a box-within-a-box manner. Such depictions may indicate that an operation in an internal box may comprise an optional example embodiment of the operational step illustrated in one or more external boxes. However, it should be understood that internal box operations may be viewed as independent operations separate from any associated external boxes and may be performed in any sequence with respect to all other illustrated operations, or may be performed concurrently. Still further, these operations illustrated in Fig. 10 as well as the other operations to be described herein may be performed by at least one of a machine, an article of manufacture, or a composition of matter.

[00314] Those having skill in the art will recognize that the state of the art has progressed to the point where there is little distinction left between hardware, software, and/or firmware implementations of aspects of systems; the use of hardware, software, and/or firmware is generally (but not always, in that in certain contexts the choice between hardware and software can become significant) a design choice representing cost vs. efficiency tradeoffs. Those having skill in the art will appreciate that there are various vehicles by which processes and/or systems and/or other technologies described herein can be effected (e.g., hardware, software, and/or firmware), and that the preferred vehicle will vary with the context in which the processes and/or systems and/or other technologies are deployed. For example, if an implementer determines that speed and accuracy are paramount, the implementer may opt for a mainly hardware and/or firmware vehicle; alternatively, if flexibility is paramount, the implementer may opt for a mainly software implementation; or, yet again alternatively, the implementer may opt for some combination of hardware, software, and/or firmware. Hence, there are several possible vehicles by which the processes and/or devices and/or other technologies described herein may be effected, none of which is inherently superior to the other in that any vehicle to be utilized is a choice dependent upon the context in which the vehicle will be deployed and the specific concerns (e.g., speed, flexibility, or predictability) of the implementer, any of which may vary. Those skilled in the art will recognize that optical aspects of implementations will typically employ optically-oriented hardware, software, and/or firmware.

[00315] Throughout this application, examples and lists are given, with parentheses, the abbreviation "e.g.," or both. Unless explicitly otherwise stated, these examples and lists are merely exemplary and are non-exhaustive. In most cases, it would be prohibitive to list every example and every combination. Thus, smaller, illustrative lists and examples are used, with focus on imparting understanding of the claim terms rather than limiting the scope of such terms.

[00316] Referring now to Fig. 10, Fig. 10 shows operation 1000, e.g., an example operation of message processing device 230 operating in an environment 200. In an embodiment, operation 1000 may include operation 1002 depicting acquiring a request for particular image data that is part of a scene. For example, Fig. 6, e.g., Fig. 6B, shows request for particular image data that is part of a scene acquiring module 252 acquiring (e.g., receiving, e.g., from a device that requested an image, that is, any device or set of devices capable of displaying, storing, analyzing, or operating upon an image, e.g., television, computer, laptop, smartphone, etc., or from an entity that requested an image, e.g., a person, an automated monitoring system, an artificial intelligence, or an intelligence amplification, e.g., a computer designed to watch for persons appearing on video or still shots), or otherwise obtaining (e.g., acquiring includes receiving, retrieving, creating, generating, generating a portion of, receiving a location of, receiving access instructions for, receiving a password for, etc.) a request (e.g., data, in any format, that indicates a computationally-based request for data, e.g., image data, from any source, whether properly formed or not, and which may come from a communications network or port, or an input/output port, or from a human or other entity, or any device) for particular image data (e.g., a set of image data, e.g., graphical representations of things, e.g., that are composed of pixels or other electronic data, or that are captured by an image sensor, e.g., a CCD or a CMOS sensor, or other data, such as audio data and other data on the electromagnetic spectrum, e.g., infrared data, microwave data, etc.) that is part of a scene (e.g., a particular area, and/or data (including graphical data, audio data, and factual/derived data) that makes up the particular area, which may in some embodiments be all of the data, pixel data or otherwise, that is captured by the image sensor array or portions of the image sensor array).

[00317] capturing (e.g., collecting data that includes visual data, e.g., pixel data, sound data, electromagnetic data, nonvisible spectrum data, and the like) that includes one or more images (e.g., graphical representations of things, e.g., that are composed of pixels or other electronic data, or that are captured by an image sensor, e.g., a CCD or a CMOS sensor), through use of an array (e.g., any grouping configured to work together in unison, regardless of arrangement, symmetry, or appearance) of more than one image sensor (e.g., a device, component, or collection of components configured to collect light, sound, or other electromagnetic spectrum data, and/or to convert the collected data into digital data, or perform at least a portion of the foregoing actions).

Following Paragraphs [00318]-[00320] Reflect Changes Made Via SECOND PRELIMINARY AMENDMENT Filed on the Same Day as the Filing of 1114-003-007-000000.

[00318] Referring again to Fig. 10, operation 1000 may include operation 1004 depicting transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data. For example, Fig. 2, e.g., Fig. 2B, shows request for particular image data transmitting to an image sensor array module 254 transmitting the request (e.g., data, in any format, that indicates a computationally-based request for data, e.g., image data, from any source, whether properly formed or not, and which may come from a communications network or port, or an input/output port, or from a human or other entity, or any device) for particular image data (e.g., a set of image data, e.g., graphical representations of things, e.g., that are composed of pixels or other electronic data, or that are captured by an image sensor, e.g., a CCD or a CMOS sensor, or other data, such as audio data and other data on the electromagnetic spectrum, e.g., infrared data, microwave data, etc., and which may be some subset of the entire scene that includes some pixel data, whether pre- or post-processing, which may or may not include data from multiple sensors of the array of more than one image sensor) to an image sensor array that includes more than one image sensor (e.g., a device, component, or collection of components configured to collect light, sound, or other electromagnetic spectrum data, and/or to convert the collected data into digital data, or perform at least a portion of the foregoing actions) and that is configured to capture the scene (e.g., the data, e.g., image data or otherwise, e.g., sound or electromagnetic data, captured by the array of more than one image sensor, which may be or may be capable of being combined or stitched together at any stage of processing, pre or post) that is larger (e.g., some objectively measurable feature has a higher or greater value, e.g., size, resolution, color, color depth, pixel data granularity, number of colors, hue, saturation, alpha value, shading) than the requested particular image data.

[00319] Referring again to Fig. 10, operation 1000 may include operation 1006 depicting receiving only the particular image data from the image sensor array. For example, Fig. 2, e.g., Fig. 2B, shows particular image data from the image sensor array exclusive receiving module 256 receiving only (e.g., not transmitting the parts of the scene that are not part of the selected particular portion) the particular image data (e.g., the designated pixel data that was transmitted from the image sensor array) from the image sensor array (e.g., a set of one or more image sensors that are grouped together, whether spatially grouped or linked electronically or through a network, in any arrangement or configuration, whether contiguous or noncontiguous, and whether in a pattern or not, and which image sensors may or may not be uniform throughout the array).

[00320] Referring again to Fig. 10, operation 1000 may include operation 1008 depicting transmitting the received particular image data to at least one requestor. For example, Fig. 2, e.g., Fig. 2B, shows received particular image data transmitting to at least one requestor module 258 transmitting the received particular image data (e.g., at least partially, but not solely, the designated pixel data that was transmitted from the image sensor array, which data may be modified, added to, subtracted from, or changed, as will be discussed herein) to at least one requestor (the particular image data may be separated into requested data and sent to the requesting entity that requested the data, whether that requesting entity is a device, person, artificial intelligence, or part of a computer routine or system, or the like).
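Operations 1002-1008 together form a simple request pipeline. The fragment below is a hedged end-to-end sketch under stated assumptions: array_link and requestor are stand-in transport objects with send and receive methods, not interfaces defined by the application.

```python
# Hedged sketch of operation 1000: acquire (1002), transmit to the sensor
# array (1004), receive only the particular image data (1006), forward (1008).
def handle_request(requestor, region, array_link):
    request = {"requestor": id(requestor), "region": region}  # operation 1002
    array_link.send(request)                                   # operation 1004
    particular_image_data = array_link.receive()               # operation 1006:
    # the array returns only the requested pixels, not the whole scene
    requestor.send(particular_image_data)                      # operation 1008
    return particular_image_data
```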

THIS ENDS THE CHANGES MADE BY THE SECOND PRELIMINARY AMENDMENT IN THE 1114-003-007-000000 APPLICATION.

[00321] Figs. 11A-11G depict various implementations of operation 1002, depicting acquiring a request for particular image data that is part of a scene according to embodiments. Referring now to Fig. 11A, operation 1002 may include operation 1102 depicting acquiring the request for particular image data of the scene that includes one or more images. For example, Fig. 6, e.g., Fig. 6A, shows request for particular image data that is part of a scene and includes one or more images acquiring module 602 acquiring (e.g., receiving, e.g., from a device that requested an image, that is, any device or set of devices capable of displaying, storing, analyzing, or operating upon an image, e.g., television, computer, laptop, smartphone, etc., or from an entity that requested an image, e.g., a person, an automated monitoring system, an artificial intelligence, or an intelligence amplification, e.g., a computer designed to watch for persons appearing on video or still shots) a request (e.g., data, in any format, that requests an image) for particular image data of the scene that includes one or more images (e.g., the scene, e.g., a street corner, includes one or more images, e.g., images of a wristwatch worn by a person crossing the street corner, images of the building on the street corner, etc.).

[00322] Referring again to Fig. 11A, operation 1002 may include operation 1104 depicting receiving the request for particular image data of the scene. For example, Fig. 6, e.g., Fig. 6A, shows request for particular image data that is part of a scene receiving module 604 receiving (e.g., from a device, e.g., a user's personal laptop device) the request for particular image data (e.g., a particular player from a game) of the scene (e.g., the portions of the game that are captured by the image sensor array).

[00323] Referring again to Fig. 11A, operation 1104 may include operation 1106 depicting receiving the request for particular image data of the scene from a user device. For example, Fig. 6, e.g., Fig. 6A, shows request for particular image data that is part of a scene receiving from a client device module 606 receiving (e.g., receiving a call from an API that is accessing a remote server that sends commands to the image sensor array) the request for particular image data (e.g., a still shot of an area outside a building where any movement has been detected, e.g., a security camera shot) of the scene from a user device (e.g., the API that was downloaded by an independent user is running on that user's device).

[00324] Referring again to Fig. 11A, operation 1106 may include operation 1108 depicting receiving the request for particular image data of the scene from a user device that is configured to display at least a portion of the scene. For example, Fig. 6, e.g., Fig. 6A, shows request for particular image data that is part of a scene receiving from a client device configured to display at least a portion of the scene module 608 receiving the request for particular image data (e.g., image data of a particular animal) of the scene (e.g., image data that includes the sounds and video from an animal oasis) from a user device (e.g., a smart television) that is configured to display at least a portion of the scene (e.g., the data captured by an image sensor array of the animal oasis).

[00325] Referring again to Fig. 11A, operation 1108 may include operation 1110 depicting receiving the request for particular image data of the scene from a user device that is configured to display at least a portion of the scene in a viewfinder. For example, Fig. 6, e.g., Fig. 6A, shows request for particular image data that is part of a scene receiving from a client device configured to display at least a portion of the scene in a viewfinder module 610 receiving the request for particular image data (e.g., images of St. Peter's Basilica in Rome, Italy) of the scene (e.g., image data captured by the image sensor array of the Vatican) from a user device (e.g., a smartphone device) that is configured to display at least a portion of the scene (e.g., the Basilica, to be displayed on the screen as part of a virtual tourism app running on the smartphone) in a viewfinder (e.g., a screen or set of screens, whether real or virtual, that can display and/or process image data). It is noted that a viewfinder may be remote from where the image is captured.

[00326] Referring again to Fig. 11A, operation 1106 may include operation 1112 depicting receiving the request for particular image data of the scene from the user device that is configured to receive a selection of a particular image. For example, Fig. 6, e.g., Fig. 6A, shows request for particular image data that is part of a scene receiving from a client device configured to receive a selection of a particular image module 612 receiving the request for particular image data of the scene from the user device that is configured to receive a selection of a particular image (e.g., the user device, e.g., a computer device, receives an audible command from a user regarding which portion of the scene the user wants to see (e.g., which may involve showing a "demo" version of the scene, e.g., a lower-resolution older version of the scene, for example), and the device receives this selection and then sends the request for the particular image data to the server device).

[00327] Referring again to Fig. 11A, operation 1112 may include operation 1114 depicting receiving the request for particular image data of the scene from the user device that is configured to receive the selection of the particular image, said selection based on a view of the scene. For example, Fig. 6, e.g., Fig. 6A, shows request for particular image data that is part of a scene receiving from a client device configured to receive a scene-based selection of a particular image module 614 receiving the request for particular image data of the scene from the user device that is configured to receive the selection of the particular image, said selection based on a view of the scene.

[00328] Referring again to Fig. 11A, operation 1112 may include operation 1116 depicting receiving the request for particular image data of the scene from the user device that is configured to receive the selection of the particular image, wherein the user device is one or more of a smartphone, television, computer screen, tablet, camera, appliance, and augmented reality device. For example, Fig. 6, e.g., Fig. 6A, shows request for particular image data that is part of a scene receiving from one or more various devices configured to receive a scene-based selection of a particular image module 616 receiving the request for particular image data of the scene from the user device that is configured to receive the selection of the particular image, wherein the user device is one or more of a smartphone, television, computer screen, tablet, camera, appliance, and augmented reality device.

[00329] Referring now to Fig. 11B, operation 1002 may include operation 1118 depicting acquiring the request for particular image data of the scene, wherein the scene is the image data collected by the array of more than one image sensor. For example, Fig. 6, e.g., Fig. 6B, shows request for particular image data that is part of a scene that is the image data collected by the array of more than one image sensor acquiring module 618 acquiring the request for particular image data of the scene (e.g., a live street view of a corner in New York City near Madison Square Garden), wherein the scene is the image data (e.g., video and audio data) collected by the array of more than one image sensor (e.g., a set of twenty-five ten-megapixel CMOS sensors arranged at an angle to provide a full view).

[00330] Referring again to Fig. 11B, operation 1002 may include operation 1120 depicting acquiring the request for particular image data of the scene, wherein the scene is a representation of the image data collected by the array of more than one image sensor. For example, Fig. 6, e.g., Fig. 6B, shows request for particular image data that is part of a scene that is a representation of the image data collected by the array of more than one image sensor acquiring module 620 acquiring the request for particular image data (e.g., an image of a particular street vendor) of the scene (e.g., a city street in Alexandria, VA), wherein the scene is a representation (e.g., metadata, e.g., data about the image data, e.g., a sampling, a subset, a description, a retrieval location) of the image data (e.g., the pixel data) collected by the array of more than one image sensor (e.g., one thousand CMOS sensors of two megapixels each, mounted on a UAV).

[00331] Referring again to Fig. 11B, operation 1120 may include operation 1122 depicting acquiring the request for particular image data of the scene, wherein the scene is a sampling of the image data collected by the array of more than one image sensor. For example, Fig. 6, e.g., Fig. 6B, shows request for particular image data that is part of a scene that is a sampling of the image data collected by the array of more than one image sensor acquiring module 622 acquiring the request for particular image data of the scene, wherein the scene is a sampling (e.g., a subset, selected randomly or through a pattern) of the image data (e.g., an image of a checkout line at a discount store) collected (e.g., gathered, read, stored) by the array of more than one image sensor (e.g., an array of two thirty-megapixel sensors angled towards each other).

[00332] Referring again to Fig. 11B, operation 1120 may include operation 1124 depicting acquiring the request for particular image data of the scene, wherein the scene is a subset of the image data collected by the array of more than one image sensor. For example, Fig. 6, e.g., Fig. 6B, shows request for particular image data that is part of a scene that is a subset of the image data collected by the array of more than one image sensor acquiring module 624 acquiring the request for particular image data (e.g., a particular object inside of a house, e.g., a refrigerator) of the scene (e.g., an interior of a house), wherein the scene is a subset of the image data (e.g., a half of, or a sampling of the whole, or a selected area of, or only the contrast data, etc.) collected by the array of more than one image sensor (e.g., a 10x10 grid of three-megapixel image sensors).

[00333] Referring again to Fig. 11B, operation 1120 may include operation 1126 depicting acquiring the request for particular image data of the scene, wherein the scene is a low-resolution version of the image data collected by the array of more than one image sensor. For example, Fig. 6, e.g., Fig. 6B, shows request for particular image data that is part of a scene that is a low-resolution version of the image data collected by the array of more than one image sensor acquiring module 626 acquiring the request for particular image data (e.g., an image of a particular car crossing a bridge) of the scene (e.g., a highway bridge), wherein the scene is a low-resolution (e.g., "low" here meaning "less than a possible resolution given the equipment that captured the image") version of the image data collected by the array of more than one image sensor.

[00334] Referring now to Fig. 11C, operation 1002 may include operation 1128 depicting acquiring the request for particular image data of a scene that is a football game. For example, Fig. 6, e.g., Fig. 6C, shows request for particular image data that is part of a scene that is a football game acquiring module 628 acquiring the request for particular image data of a scene that is a football game.

[00335] Referring again to Fig. 11C, operation 1002 may include operation 1130 depicting acquiring the request for particular image data of a scene that is a street view of an area. For example, Fig. 6, e.g., Fig. 6C, shows request for particular image data that is part of a scene that is an area street view acquiring module 630 acquiring the request for particular image data that is a street view (e.g., a live or short-delayed view) of an area (e.g., a street corner, a garden oasis, a mountaintop, an airport, etc.).

[00336] Referring again to Fig. 11C, operation 1002 may include operation 1132 depicting acquiring the request for particular image data of a scene that is a tourist destination. For example, Fig. 6, e.g., Fig. 6C, shows request for particular image data that is part of a scene that is a tourist destination acquiring module 632 acquiring the request for particular image data of a scene that is a tourist destination (e.g., the great pyramids of Giza).

[00337] Referring again to Fig. 11C, operation 1002 may include operation 1134 depicting acquiring the request for particular image data of a scene that is an inside of a home. For example, Fig. 6, e.g., Fig. 6C, shows request for particular image data that is part of a scene that is inside a home acquiring module 634 acquiring the request for particular image data of a scene that is inside of a home (e.g., inside a kitchen).

[00338] Referring now to Fig. 11D, operation 1002 may include operation 1136 depicting acquiring the request for particular image data of the scene, wherein the particular image data is an image that is a portion of the scene. For example, Fig. 6, e.g., Fig. 6D, shows request for particular image data that is an image that is a portion of the scene acquiring module 636 acquiring the request for particular image data (e.g., an image of a tiger in a wildlife preserve), wherein the particular image data (e.g., an image of a tiger) is a portion of the scene (e.g., image data of the wildlife preserve).

[00339] Referring again to Fig. 11D, operation 1136 may include operation 1138 depicting acquiring the request for particular image data of the scene, wherein the particular image data includes image data of a particular football player and the scene is a football field. For example, Fig. 6, e.g., Fig. 6D, shows request for particular image data that is an image that is a particular football player and a scene that is a football field acquiring module 638 acquiring the request for particular image data of the scene, wherein the particular image data includes image data of a particular football player and the scene is a football field.

[00340] Referring again to Fig. 11D, operation 1136 may include operation 1140 depicting acquiring the request for particular image data of the scene, wherein the particular image data includes image data of a license plate of a vehicle, and the scene is an image representation of a highway bridge. For example, Fig. 6, e.g., Fig. 6D, shows request for particular image data that is an image that is a vehicle license plate and a scene that is a highway bridge acquiring module 640 acquiring the request for particular image data of the scene, wherein the particular image data includes image data of a license plate of a vehicle, and the scene is a highway bridge (e.g., an image of the highway bridge).

[00341] Referring now to Fig. 11E, operation 1002 may include operation 1142 depicting acquiring a request for a particular image object located in the scene. For example, Fig. 6, e.g., Fig. 6E, shows request for particular image object located in the scene acquiring module 642 acquiring a request for a particular image object (e.g., a particular type of bird) located in the scene (e.g., a bird sanctuary).

[00342] Referring again to Fig. 11E, operation 1002 may include operation 1144, which may appear in conjunction with operation 1142, operation 1144 depicting determining the particular image data of the scene that contains the particular image object. For example, Fig. 6, e.g., Fig. 6E, shows particular image data of the scene that contains the particular image object determining module 644 determining the particular image data (e.g., a 1920x1080 image that contains the particular type of bird) of the scene (e.g., the image of the bird sanctuary) that contains the particular image object (e.g., the particular type of bird).

[00343] Referring again to Fig. 11E, operation 1142 may include operation 1146 depicting acquiring a request for a particular person located in the scene. For example, Fig. 6, e.g., Fig. 6E, shows request for particular person located in the scene acquiring module 646 acquiring a request for a particular person (e.g., a person dressed in a certain way, or loitering outside of a warehouse, or a particular celebrity or athlete, or a specific worker tracked by a business) located in the scene (e.g., in the scene image data).

[00344] Referring again to Fig. 11E, operation 1142 may include operation 1148 depicting acquiring a request for a basketball located in the scene that is a basketball arena. For example, Fig. 6, e.g., Fig. 6E, shows request for a basketball located in the scene acquiring module 648 acquiring a request for a basketball (e.g., the image data corresponding to a basketball) located in the scene that is a basketball arena.

[00345] Referring again to Fig. 11E, operation 1142 may include operation 1150 depicting acquiring a request for a motor vehicle located in the scene. For example, Fig. 6, e.g., Fig. 6E, shows request for a motor vehicle located in the scene acquiring module 650 acquiring a request for a motor vehicle located in the scene.

[00346] Referring again to Fig. 11E, operation 1142 may include operation 1152 depicting acquiring a request for any human object representations located in the scene. For example, Fig. 6, e.g., Fig. 6E, shows request for a human object representation located in the scene acquiring module 652 acquiring a request for any human object representations (e.g., when any image data corresponding to a human walks by, e.g., for a security camera application, or an application that takes an action when a person approaches, e.g., an automated terminal) located in the scene.

[00347] Referring again to Fig. 11E, operation 1144 may include operation 1153 depicting determining the particular image data of the scene that contains the particular image object through application of automated pattern recognition to scene image data. For example, Fig. 6, e.g., Fig. 6E, shows particular image data of the scene that contains the particular image object determining through automated pattern recognition application to scene data module 653 determining the particular image data of the scene (e.g., a tennis match) that contains the particular image object (e.g., a tennis player) through application of automated pattern recognition (e.g., recognizing human images through machine recognition, e.g., shape-based classification, head-shoulder detection, motion-based detection, and component-based detection) to scene image data.
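For illustration only (an editorial sketch, not part of the original disclosure), the automated pattern recognition step might be realized with an off-the-shelf detector; here OpenCV's stock HOG-based person detector stands in for the shape-based and head-shoulder style detectors named above. The file name and confidence threshold are assumptions:

```python
# Sketch: detect person-shaped regions in a scene frame so that the
# containing particular image data can be determined.
import cv2

def find_person_regions(scene_bgr, min_confidence=0.5):
    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())
    # Each rect is (x, y, w, h); weights are detection confidences.
    rects, weights = hog.detectMultiScale(scene_bgr, winStride=(8, 8))
    return [tuple(r) for r, wgt in zip(rects, weights)
            if float(wgt) > min_confidence]

scene = cv2.imread("scene_frame.png")   # hypothetical captured scene frame
for box in find_person_regions(scene):
    print("candidate particular image data region:", box)
```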

[00348] Referring again to Fig. 11E, operation 1144 may include operation 1154 depicting determining the particular image data of the scene that contains the particular image object through identification of the particular image object in previous scene data that is image data of the scene from at least one previous moment in time. For example, Fig. 6, e.g., Fig. 6E, shows particular image data of the scene that contains the particular image object determining through object identification in previous scene data module 654 determining the particular image data of the scene that contains the particular image object through identification of the particular image object in previous scene data that is image data of the scene from at least one previous moment in time.

[00349] Referring again to Fig. 11E, operation 1144 may include operation 1156 depicting determining the particular image data of the scene that contains the particular image object through identification of the particular image object in cached scene data. For example, Fig. 6, e.g., Fig. 6E, shows particular image data of the scene that contains the particular image object determining through object identification in cached previous scene data module 656 determining the particular image data of the scene that contains the particular image object through identification of the particular image object in cached scene data.

[00350] Referring again to Fig. 11E, operation 1156 may include operation 1158 depicting determining the particular image data of the scene that contains the particular image object through identification of the particular image object in cached scene data that was previously transmitted from the image sensor array that includes more than one image sensor. For example, Fig. 6, e.g., Fig. 6E, shows particular image data of the scene that contains the particular image object determining through object identification in cached previous scene data previously transmitted from the image sensor array module 658 determining the particular image data of the scene that contains the particular image object (e.g., a particular landmark, or animal at a watering hole) through identification of the particular image object (e.g., a lion at a watering hole) in cached scene data that was previously transmitted from the image sensor array (e.g., a set of twenty-five image sensors) that includes more than one image sensor (e.g., a three megapixel CMOS sensor).

[00351] Referring again to Fig. 11E, operation 1156 may include operation 1160 depicting determining the particular image data of the scene that contains the particular image object through identification of the particular image object in cached scene data that was previously transmitted from the image sensor array that includes more than one image sensor at a time when bandwidth was available for a connection to the image sensor array that includes more than one image sensor. For example, Fig. 6, e.g., Fig. 6E, shows particular image data of the scene that contains the particular image object determining through object identification in cached previous scene data previously transmitted from the image sensor array at a particular time module 660 determining the particular image data of the scene that contains the particular image object (e.g., a specific item in a shopping cart that doesn't match a cash-register generated list of what was purchased by the person wheeling the cart) through identification of the particular image object (e.g., the specific item, e.g., a toaster oven) in cached scene data (e.g., data that is stored in the server that was from a previous point in time, whether one-millionth of a second previously or years previously, although in the example the cached scene data is from a previous frame, e.g., less than one second prior) that was previously transmitted from the image sensor array that includes more than one image sensor at a time when bandwidth was available for connection to the image sensor array that includes more than one image sensor.

[00352] Referring now to Fig. 11F, operation 1002 may include operation 1162 depicting receiving a first request for first particular image data from the scene from a first requestor. For example, Fig. 6, e.g., Fig. 6F, shows first request for first particular image data from a first requestor receiving module 662 receiving a first request (e.g., a request for a 1920x1080 "HD" view) for first particular image data (e.g., a first animal, e.g., a tiger, at a watering hole scene) from the scene (e.g., a watering hole) from a first requestor (e.g., a family watching the watering hole from an internet-connected television).

[00353] Referring again to Fig. 11F, operation 1002 may include operation 1164, which may appear in conjunction with operation 1162, operation 1164 depicting receiving a second request for second particular image data from the scene from a second requestor that is different than the first requestor. For example, Fig. 6, e.g., Fig. 6F, shows second request for second particular image data from a different second requestor receiving module 664 receiving a second request (e.g., a 640x480 view for a smartphone) for second particular image data (e.g., a second animal, e.g., a pelican) from the scene (e.g., a watering hole) from a second requestor (e.g., a person watching a stream of the watering hole on their smartphone).

[00354] Referring again to Fig. 11F, operation 1002 may include operation 1166 depicting combining a received first request for first particular image data from the scene and a received second request for second particular image data from the scene. For example, Fig. 6, e.g., Fig. 6F, shows first received request for first particular image data and second received request for second particular image data combining module 666 combining a received first request for first particular image data (e.g., a request to watch a running back of a football team) from the scene and a received second request for second particular image data (e.g., a request to watch a quarterback of the same football team) from the scene (e.g., a football game).

[00355] Referring again to Fig. 11F, operation 1166 may include operation 1168 depicting combining the received first request for first particular image data from the scene and the received second request for second particular image data from the scene into the request for particular image data that consolidates overlapping requested image data. For example, Fig. 6, e.g., Fig. 6F, shows first received request for first particular image data and second received request for second particular image data combining into the request for particular image data module 668 combining the received first request for first particular image data (e.g., request from device 502A, as shown in Fig. 5B) from the scene and the received second request for second particular image data (e.g., the request from device 502B, as shown in Fig. 5B) from the scene into the request for particular image data that consolidates overlapping requested image data (e.g., the selected pixels 574, as shown in Fig. 5B).
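For illustration only (an editorial sketch, not part of the original disclosure), the consolidation of overlapping requests in operation 1168 might be modeled on rectangular pixel regions, where the overlap between two requests is measured so it is fetched only once:

```python
# Sketch: consolidating two rectangular pixel requests so that overlapping
# pixels are only requested once. Rectangles are (x, y, w, h).

def overlap(a, b):
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    w = min(ax + aw, bx + bw) - max(ax, bx)
    h = min(ay + ah, by + bh) - max(ay, by)
    return max(w, 0) * max(h, 0)

def consolidated_pixel_count(a, b):
    # Pixels in either request, with the shared region counted once.
    return a[2] * a[3] + b[2] * b[3] - overlap(a, b)

req_a = (0, 0, 1920, 1080)       # e.g., the running back's region
req_b = (1600, 0, 1920, 1080)    # e.g., the quarterback's region
print(consolidated_pixel_count(req_a, req_b))
```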

[00356] Referring again to Fig. 11F, operation 1166 may include operation 1170 depicting receiving a first request for first particular image data from the scene and a second request for second particular image data from the scene. For example, Fig. 6, e.g., Fig. 6F, shows first request for first particular image data and second request for second particular image data receiving module 670 receiving a first request for first particular image data (e.g., image data of a particular street corner from a street view) from the scene (e.g., a live street view of DoG street in Alexandria, VA) and a second request for second particular image data (e.g., image data of the opposite corner of the live street view) from the scene (e.g., the live street view of DoG street in Alexandria, VA).

[00357] Referring again to Fig. 11F, operation 1002 may include operation 1172, which may appear in conjunction with operation 1170, operation 1172 depicting combining the received first request and the received second request into the request for particular image data. For example, Fig. 6, e.g., Fig. 6F, shows received first request and received second request combining module 672 combining the received first request (e.g., a 1920x1080 request for a virtual tourism view of the Sphinx) and the received second request (e.g., a 410x210 request for a virtual tourism view of an overlapping, but different part of the Sphinx) into the request for particular image data (e.g., the request that will be sent to the image sensor array regarding which pixels will be kept).

[00358] Referring again to Fig. 11F, operation 1172 may include operation 1174 depicting removing common pixel data between the received first request and the received second request. For example, Fig. 6, e.g., Fig. 6F, shows received first request and received second request common pixel deduplicating module 674 removing (e.g., deleting, marking, flagging, erasing, storing in a different format, storing in a different place, coding/compressing using a different algorithm, changing but not necessarily destroying, destroying, allowing to be written over by new data, etc.) common pixel data (e.g., pixel data that was part of more than one request) between the received first request (e.g., a request to view the left fielder of the Washington Nationals from a baseball game) and the received second request (e.g., a request to view the right fielder of the Washington Nationals from a baseball game).

[00359] Referring now to Fig. 11G, operation 1002 may include operation 1176 depicting acquiring the request for particular image data that is part of the scene, wherein the particular image data includes video data. For example, Fig. 6, e.g., Fig. 6G, shows request for particular video data that is part of a scene acquiring module 676 acquiring the request for particular image data that is part of the scene, wherein the particular image data includes video data (e.g., streaming data, e.g., as in a live street view of a corner near the Verizon Center in Washington, DC).

[00360] Referring again to Fig. 11G, operation 1002 may include operation 1178 depicting acquiring the request for particular image data that is part of the scene, wherein the particular image data includes audio data. For example, Fig. 6, e.g., Fig. 6G, shows request for particular audio data that is part of a scene acquiring module 678 acquiring the request for particular image data that is part of the scene, wherein the particular image data includes audio data (e.g., data of the sounds at an oasis, or of people in the image that are speaking).

[00361] Referring again to Fig. 11G, operation 1002 may include operation 1180 depicting acquiring the request for particular image data that is part of the scene from a user device that receives the request for particular image data through an audio interface. For example, Fig. 6, e.g., Fig. 6G, shows request for particular image data that is part of a scene receiving from a user device with an audio interface module 680 acquiring the request for particular image data (e.g., to watch a particular person on the field at a football game, e.g., the quarterback) that is part of the scene (e.g., the scene of a football stadium during a game) from a user device (e.g., an internet-connected television) that receives the request for particular image data through an audio interface (e.g., the person speaks to an interface built into the television to instruct the television regarding which player to follow).

[00362] Referring again to Fig. 11G, operation 1002 may include operation 1182 depicting acquiring the request for particular image data that is part of the scene from a user device that has a microphone that receives a spoken request for particular image data from the user. For example, Fig. 6, e.g., Fig. 6G, shows request for particular image data that is part of a scene receiving from a microphone-equipped user device with an audio interface module 682 acquiring the request for particular image data (e.g., an image of a cheetah at a jungle oasis) that is part of the scene (e.g., a jungle oasis) from a user device that has a microphone (e.g., a smartphone device) that receives a spoken request (e.g., "zoom in on the cheetah") for particular image data (e.g., an image of a cheetah at a jungle oasis) from the user (e.g., the person operating the smartphone device that wants to zoom in on the cheetah).

[00363] Figs. 12A-12E depict various implementations of operation 1004, depicting transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data, according to embodiments. Referring now to Fig. 12A, operation 1004 may include operation 1202 depicting transmitting the request for the particular image data of the scene to the image sensor array that includes more than one image sensor and that is configured to capture an image that is larger than the requested image data. For example, Fig. 7, e.g., Fig. 7A, shows request for particular image data transmitting to an image sensor array that includes more than one image sensor and to capture a larger image module 702 transmitting the request for the particular image data of the scene to the image sensor array (e.g., an array of twelve sensors of ten megapixels each) that includes more than one image sensor and that is configured to capture an image that is larger than the requested image data (e.g., the requested image data is 1920x1080 (e.g., roughly 2 million pixels), and the captured area is 120,000,000 pixels, minus overlap).
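For illustration only (an editorial sketch, not part of the original disclosure), the data reduction implied by the example above can be checked directly, assuming twelve ten-megapixel sensors and a single 1920x1080 request:

```python
# Sketch: only the requested window is transmitted, not the whole capture.
captured = 12 * 10_000_000        # pixels captured across the array
requested = 1920 * 1080           # pixels actually requested (~2.07 MP)
print(f"transmitting {requested / captured:.1%} of the captured pixels")
```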

[00364] Referring again to Fig. 12A, operation 1004 may include operation 1204 depicting transmitting the request for the particular image data of the scene to the image sensor array that includes two image sensors arranged side by side and angled toward each other and that is configured to capture an image that is larger than the requested image data. For example, Fig. 7, e.g., Fig. 7A, shows request for particular image data transmitting to an image sensor array module configured to transmit the request to the image sensor array that includes two angled image sensors and that is configured to capture the scene that is larger than the requested particular image data 704 transmitting the request for the particular image data of the scene (e.g., a chemistry lab) to the image sensor array that includes two image sensors arranged side by side and angled toward each other and that is configured to capture an image that is larger than the requested image data (e.g., the requested image is a zoomed-out view of the lab that can be expressed in 1.7 million pixels, but the cameras capture 10.5 million pixels).

[00365] Referring again to Fig. 12A, operation 1004 may include operation 1206 depicting transmitting the request for the particular image data of the scene to the image sensor array that includes the array of image sensors arranged in a grid and that is configured to capture an image that is larger than the requested image data. For example, Fig. 7, e.g., Fig. 7A, shows request for particular image data transmitting to an image sensor array module configured to transmit the request to the image sensor array that includes more than one image sensor arranged in a grid and that is configured to capture the scene that is larger than the requested particular image data 706 transmitting the request for the particular image data of the scene to the image sensor array that includes the array of image sensors arranged in a grid and that is configured to capture an image that is larger than the requested image data (e.g., the image data requested is of a smaller area (e.g., the area around a football player) than the image (e.g., the entire football field)).

[00366] Referring again to Fig. 12A, operation 1004 may include operation 1208 depicting transmitting the request for the particular image data of the scene to the image sensor array that includes the array of image sensors arranged in a line such that a field of view is greater than 120 degrees and that is configured to capture an image that is larger than the requested image data. For example, Fig. 7, e.g., Fig. 7A, shows request for particular image data transmitting to an image sensor array module configured to transmit the request to the image sensor array that includes more than one image sensor arranged in a line and that is configured to capture the scene that is larger than the requested particular image data 708 transmitting the request for the particular image data of the scene to the image sensor array that includes the array of image sensors arranged in a line such that a field of view is greater than 120 degrees and that is configured to capture an image (e.g., an image of a highway) that is larger than the requested image data (e.g., an image of one or more of the cars on the highway).
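For illustration only (an editorial sketch, not part of the original disclosure), the combined field of view of a line of sensors can be estimated with simple arithmetic; the per-sensor field of view and the overlap between adjacent sensors are assumed values, not figures from the text:

```python
# Sketch: rough combined horizontal field of view for sensors in a line,
# assuming each covers a fixed angle and adjacent sensors overlap slightly.
def combined_fov(n_sensors, per_sensor_fov_deg, overlap_deg):
    return n_sensors * per_sensor_fov_deg - (n_sensors - 1) * overlap_deg

# e.g., four sensors of 40 degrees each with 5 degrees of mutual overlap
print(combined_fov(4, 40, 5))   # 145 degrees, i.e., greater than 120
```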

[00367] Referring again to Fig. 12A, operation 1004 may include operation 1210 depicting transmitting the request for the particular image data of the scene to the image sensor array that includes a grouping of nonlinearly, nonsequentially arranged stationary image sensors and that is configured to capture an image that is larger than the requested image data. For example, Fig. 7, e.g., Fig. 7A, shows request for particular image data transmitting to an image sensor array module configured to transmit the request to the image sensor array that includes more than one nonlinearly arranged stationary image sensor and that is configured to capture the scene that is larger than the requested particular image data 710 transmitting the request for the particular image data of the scene to the image sensor array that includes a grouping of nonlinearly, nonsequentially arranged stationary image sensors (e.g., five-megapixel CCD sensors) and that is configured to capture an image that is larger than the requested image data.

[00368] Referring now to Fig. 12B, operation 1004 may include operation 1212 depicting transmitting the request for the particular image data of the scene to the image sensor array that includes the array of static image sensors and that is configured to capture an image that is larger than the requested image data. For example, Fig. 7, e.g., Fig. 7B, shows request for particular image data transmitting to an image sensor array that includes an array of static image sensors module 712 transmitting the request for the particular image data of the scene to the image sensor array that includes the array of static image sensors and that is configured to capture an image that is larger than the requested image data.

[00369] Referring again to Fig. 12B, operation 1212 may include operation 1214 depicting transmitting the request for the particular image data of the scene to the image sensor array that includes the array of image sensors that have a fixed focal length and a fixed field of view and that is configured to capture an image that is larger than the requested image data. For example, Fig. 7, e.g., Fig. 7B, shows request for particular image data transmitting to an image sensor array that includes an array of static image sensors that have fixed focal length and fixed field of view module 714 transmitting the request for the particular image data (e.g., an image of a black bear) of the scene (e.g., a mountain watering hole) to the image sensor array that includes the array of image sensors (e.g., twenty-five megapixel CMOS sensors) that have a fixed focal length and a fixed field of view and that is configured to capture an image that is larger than the requested image data (e.g., the image requested is ultra high resolution but represents a smaller area than what is captured in the scene).

[00370] Referring again to Fig. 12B, operation 1004 may include operation 1216 depicting transmitting the request for the particular image data of the scene to the image sensor array that includes the array of image sensors mounted on a movable platform and that is configured to capture the scene that is larger than the requested image data. For example, Fig. 7, e.g., Fig. 7B, shows request for particular image data transmitting to an image sensor array that includes an array of image sensors mounted on a movable platform module 716 transmitting the request for the particular image data of the scene to the image sensor array that includes the array of image sensors mounted on a movable platform (e.g., a movable dish, or a UAV) and that is configured to capture the scene (e.g., the scene is a wide angle view of a city) that is larger than the requested image data (e.g., one building or street corner of the city).

[00371] Referring again to Fig. 12B, operation 1004 may include operation 1218 depicting transmitting the request for the particular image data of the scene to the image sensor array that includes more than one image sensor and that is configured to capture the scene that represents more image data than the requested particular image data. For example, Fig. 7, e.g., Fig. 7B, shows request for particular image data transmitting to an image sensor array module configured to transmit the request to the image sensor array that includes more than one image sensor and that is configured to capture the scene that represents more data than the requested particular image data 718 transmitting the request for the particular image data of the scene to the image sensor array that includes more than one image sensor and that is configured to capture the scene that represents more image data than the requested particular image data.

[00372] Referring again to Fig. 12B, operation 1218 may include operation 1220 depicting transmitting the request for the particular image data of the scene to the image sensor array that includes more than one image sensor and that is configured to capture the scene that represents more than ten times as much image data as the requested particular image data. For example, Fig. 7, e.g., Fig. 7B, shows request for particular image data transmitting to an image sensor array module configured to transmit the request to the image sensor array that includes more than one image sensor and that is configured to capture the scene that represents ten times as much data as the requested particular image data 720 transmitting the request for the particular image data of the scene to the image sensor array that includes more than one image sensor and that is configured to capture the scene that represents more than ten times (e.g., twenty million pixels) as much image data as the requested particular image data (e.g., 1.8 million pixels).

[00373] Referring again to Fig. 12B, operation 1218 may include operation 1222 depicting transmitting the request for the particular image data of the scene to the image sensor array that includes more than one image sensor and that is configured to capture the scene that represents more than one hundred times as much image data as the requested particular image data. For example, Fig. 7, e.g., Fig. 7B, shows request for particular image data transmitting to an image sensor array module configured to transmit the request to the image sensor array that includes more than one image sensor and that is configured to capture the scene that represents more than one hundred times as much data as the requested particular image data 722 transmitting the request for the particular image data (e.g., a 1920x1080 image of a red truck crossing a bridge) of the scene (e.g., a highway bridge) to the image sensor array (e.g., a set of one hundred sensors) that includes more than one image sensor (e.g., twenty sensors each of two megapixel, four megapixel, six megapixel, eight megapixel, and ten megapixel) and that is configured to capture the scene that represents more than one hundred times (e.g., 600 million pixels vs. the requested two million pixels) as much image data as the requested particular image data.

[00374] Referring again to Fig. 12B, operation 1004 may include operation 1224 depicting transmitting the request for the particular image data to the image sensor array that is configured to capture the scene that represents a greater field of view than the requested particular image data. For example, Fig. 7, e.g., Fig. 7B, shows request for particular image data transmitting to an image sensor array module configured to transmit the request to the image sensor array that includes more than one image sensor and that is configured to capture the scene that represents a greater field of view than the requested particular image data 724 transmitting the request for the particular image data (e.g., an image of a bakery shop on a corner) to the image sensor array that is configured to capture the scene (e.g., a live street view of a busy street corner) that represents a greater field of view (e.g., the entire corner) than the requested image data (e.g., just the bakery).

[00375] Referring now to Fig. 12C, operation 1004 may include operation 1226 depicting modifying the request for the particular image data. For example, Fig. 7, e.g., Fig. 7C, shows request for particular image data modifying module 726 modifying (e.g., altering, changing, adding to, subtracting from, deleting, supplementing, changing the form of, changing an attribute of, etc.) the request for the particular image data (e.g., a request for an image of a baseball player).

[00376] Referring again to Fig. 12C, operation 1004 may include operation 1228, which may appear in conjunction with operation 1226, operation 1228 depicting transmitting the modified request for the particular image data to the image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data. For example, Fig. 7, e.g., Fig. 7C, shows modified request for particular image data transmitting to an image sensor array module 728 transmitting the modified request (e.g., the request increases the area around the baseball player that was requested) for the particular image data to the image sensor array that includes more than one image sensor and that is configured to capture the scene (e.g., a baseball game at a baseball stadium) that is larger than the requested particular image data.

[00377] Referring again to Fig. 12C, operation 1226 may include operation 1230 depicting removing designated image data from the request for the particular image data. For example, Fig. 7, e.g., Fig. 7C, shows designated image data removing from request for particular image data module 730 removing designated image data (e.g., image data of a static object that has already been captured and stored in memory, e.g., a building from a live street view, or a car that has not moved since the last request) from the request for the particular image data (e.g., a request to see a part of the live street view).

[00378] Referring again to Fig. 12C, operation 1230 may include operation 1232 depicting removing designated image data from the request for the particular image data based on previously stored image data. For example, Fig. 7, e.g., Fig. 7C, shows designated image data removing from request for particular image data based on previously stored image data module 732 removing designated image data (e.g., image data of a static object that has already been captured and stored in memory, e.g., a building from a live street view, or a car that has not moved since the last request) from the request for the particular image data (e.g., a request to see a part of the live street view) based on previously stored image data (e.g., the most previously requested image has the car in it already, and so it will not be checked again for another sixty frames of captured image data, for example).
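For illustration only (an editorial sketch, not part of the original disclosure), the culling of already-stored static regions might use a per-region countdown like the sixty-frame example above; the region keys and the exact recheck interval are assumptions:

```python
# Sketch: drop static regions from a request and only re-request them
# every RECHECK_FRAMES frames, serving them from stored image data otherwise.
RECHECK_FRAMES = 60
static_ttl = {}   # region key -> frames remaining before a recheck is due

def cull_static_regions(requested_regions, known_static_regions):
    kept = []
    for region in requested_regions:
        if region in known_static_regions:
            ttl = static_ttl.get(region, 0)
            if ttl > 0:
                static_ttl[region] = ttl - 1
                continue                         # serve from stored data
            static_ttl[region] = RECHECK_FRAMES  # due for a recheck now
        kept.append(region)
    return kept
```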

[00379] Referring again to Fig. 12C, operation 1232 may include operation 1234 depicting removing designated image data from the request for the particular image data based on previously stored image data retrieved from the image sensor array. For example, Fig. 7, e.g., Fig. 7C, shows designated image data removing from request for particular image data based on previously stored image data retrieved from the image sensor array module 734 removing designated image data from the request for the particular image data based on previously stored image data retrieved from the image sensor array (e.g., the image sensor array previously sent an older version of the data that included a static object, e.g., a part of a bridge when the scene is a highway bridge, and so the static part of the bridge is removed from the request for the scene).

[00380] Referring again to Fig. 12C, operation 1232 may include operation 1236 depicting removing designated image data from the request for the particular image data based on previously stored image data that is an earlier-in-time version of the designated image data. For example, Fig. 7, e.g., Fig. 7C, shows designated image data removing from request for particular image data based on previously stored image data that is an earlier-in-time version of the designated image data module 736 removing designated image data (e.g., portions of a stadium) from the request for the particular image data (e.g., a request to view a player inside a stadium for a game) based on previously stored image data (e.g., image data of the stadium) that is an earlier-in-time version of the designated image data (e.g., the image data of the stadium from one hour previous, or from one frame previous).

[00381] Referring again to Fig. 12C, operation 1236 may include operation 1238 depicting removing image data of one or more static objects from the request for the particular image data that is a view of a street, based on previously stored image data of the one or more static objects from an earlier-in-time version captured by the image sensor array. For example, Fig. 7, e.g., Fig. 7C, shows designated image data removing from request for particular image data based on previously stored image data that is an earlier-in-time version of the designated image data that is a static object module 738 removing image data of one or more static objects from the request for the particular image data that is a view of a street, based on previously stored image data of the one or more static objects from an earlier-in-time version captured by the image sensor array.

[00382] Referring now to Fig. 12D, operation 1226 may include operation 1240 depicting removing portions of the request for the particular image data through pixel interpolation or extrapolation of portions of the request for the particular image data. For example, Fig. 7, e.g., Fig. 7D, shows designated image data removing from request for particular image data based on pixel data interpolation/extrapolation module 740 removing portions of the request for the particular image data (e.g., portions of a uniform building) through pixel interpolation (e.g., filling in the middle of the building based on extrapolation of a known pattern of the building) of portions of the request for the particular image data (e.g., a request for a live street view that includes a building).
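For illustration only (an editorial sketch, not part of the original disclosure), one way the removed region might be reconstructed on the receiving side is by interpolating from the surrounding pixels; OpenCV's inpainting routine is used here as a stand-in for the pixel interpolation/extrapolation step described above:

```python
# Sketch: fill in a region that was stripped from the request by
# interpolating from its surroundings.
import cv2
import numpy as np

def fill_removed_region(image_bgr, region):
    """region is (x, y, w, h); pixels inside it were not transmitted."""
    x, y, w, h = region
    mask = np.zeros(image_bgr.shape[:2], dtype=np.uint8)
    mask[y:y + h, x:x + w] = 255                 # mark the missing pixels
    return cv2.inpaint(image_bgr, mask, 3, cv2.INPAINT_TELEA)
```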

[00383] Referring again to Fig. 12D, operation 1240 may include operation 1242 depicting removing one or more static objects from the request for the particular image data through pixel interpolation of the portions of the request for the particular image data. For example, Fig. 7, e.g., Fig. 7D, shows designated image data corresponding to one or more static image objects removing from request for particular image data based on pixel data interpolation/extrapolation module 742 removing one or more static objects (e.g., a brick of a pyramid) through pixel interpolation (e.g., filling in the middle of the pyramid based on extrapolation of a known pattern of the pyramid) of portions of the request for the particular image data (e.g., a request for a live street view that includes a building).

[00384] Referring again to Fig. 12D, operation 1226 may include operation 1244 depicting identifying at least one portion of the request for the particular image data that is already stored in a memory. For example, Fig. 7, e.g., Fig. 7D, shows portion of the request for particular image data that was previously stored in memory identifying module 744 identifying at least one portion of the request for the particular image data (e.g., a request for a virtual tourism exhibit of which a part has been cached in memory from a previous access) that is already stored in a memory (e.g., a memory of the server device, e.g., memory 245).
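For illustration only (an editorial sketch, not part of the original disclosure), identifying and stripping already-stored portions of a request might look like the following; the in-process dictionary cache keyed by region tuple is an assumption standing in for the server memory (e.g., memory 245):

```python
# Sketch: split a request into portions the cache can serve and portions
# that must still be fetched from the image sensor array.
image_cache = {}   # (x, y, w, h) -> cached pixel data

def strip_cached_portions(requested_regions):
    to_fetch, from_cache = [], []
    for region in requested_regions:
        (from_cache if region in image_cache else to_fetch).append(region)
    return to_fetch, from_cache   # fetch only what the cache cannot serve
```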

[00385] Referring again to Fig. 12D, operation 1226 may include operation 1246, which may appear in conjunction with operation 1244, operation 1246 depicting removing the identified portion of the request for the particular image data. For example, Fig. 7, e.g., Fig. 7D, shows identified portion of the request for the particular image data removing module 746 removing the identified portion of the request for the particular image data (e.g., removing the part of the request that requests the image data that is already stored in a memory of the server).

[00386] Referring again to Fig. 12D, operation 1226 may include operation 1248 depicting identifying at least one portion of the request for particular image data that was previously captured by the image sensor array. For example, Fig. 7, e.g., Fig. 7D, shows portion of the request for the particular image data that was previously captured by the image sensor array identifying module 748 identifying at least one portion of the request for particular image data that was previously captured by the image sensor array (e.g., an array of twenty-five two-megapixel CMOS sensors).

[00387] Referring again to Fig. 12D, operation 1226 may include operation 1250 depicting identifying one or more static objects of the request for particular image data that was previously captured by the image sensor array. For example, Fig. 7, e.g., Fig. 7D, shows portion of the request for the particular image data that includes at least one static image object that was previously captured by the image sensor array identifying module 750 identifying one or more static objects (e.g., buildings, roads, trees, etc.) of the request for particular image data (e.g., image data of a part of a rural town) that was previously captured by the image sensor array.

[00388] Referring again to Fig. 12D, operation 1250 may include operation 1252 depicting identifying one or more natural rock outcroppings of the request for particular image data that was previously captured by the image sensor array. For example, Fig. 7, e.g., Fig. 7D, shows portion of the request for the particular image data that includes at least one static image object of a rock outcropping that was previously captured by the image sensor array identifying module 752 identifying one or more natural rock outcroppings of the request for particular image data that was previously captured by the image sensor array (e.g., an array of twenty-five two-megapixel CMOS sensors).

[00389] Referring now to Fig. 12E, operation 1004 may include operation 1254 depicting determining a size of the request for the particular image data. For example, Fig. 7, e.g., Fig. 7E, shows size of request for particular image data determining module 754 determining a size (e.g., a number of pixels, or a transmission speed, or a number of frames per second) of the request for the particular image data (e.g., data of a lion at a jungle oasis).

[00390] Referring again to Fig. 12E, operation 1004 may include operation 1256, which may appear in conjunction with operation 1254, operation 1256 depicting transmitting the request for the particular image data for which the size has been determined to the image sensor array that is configured to capture the scene. For example, Fig. 7, e.g., Fig. 7E, shows determined-size request for particular image data transmitting to the image sensor array module 756 transmitting the request for the particular image data for which the size has been determined to the image sensor array that is configured to capture the scene (e.g., a scene of an interior of a home).

[00391] Referring again to Fig. 12E, operation 1254 may include operation 1258 depicting determining the size of the request for the particular image data at least partially based on a property of a user device that requested at least a portion of the particular image data. For example, Fig. 7, e.g., Fig. 7E, shows size of request for particular image determining at least partially based on user device property module 758 determining the size (e.g., the horizontal and vertical resolutions, e.g., 1920x1080) of the request for the particular image data at least partially based on a property of a user device that requested at least a portion of the particular image data.

[00392] Referring again to Fig. 12E, operation 1258 may include operation 1260 depicting determining the size of the request for the particular image data at least partially based on a resolution of the user device that requested the particular image data. For example, Fig. 7, e.g., Fig. 7E, shows size of request for particular image determining at least partially based on user device resolution module 760 determining the size of the request for the particular image data at least partially based on a resolution of the user device that requested the particular image data.

[00393] Referring again to Fig. 12E, operation 1254 may include operation 1262 depicting determining the size of the request for the particular image data at least partially based on a device access level of a user device that requested at least a portion of the particular image data. For example, Fig. 7, e.g., Fig. 7E, shows size of request for particular image determining at least partially based on user device access level module 762 determining the size of the request for the particular image data at least partially based on a device access level of a user device that requested at least a portion of the particular image data (e.g., whether the user has paid for the service, or what level of service the user has subscribed to, or whether other "superusers" are present that demand higher bandwidth and receive priority in receiving images).

[00394] Referring again to Fig. 12E, operation 1254 may include operation 1264 depicting determining the size of the request for the particular image data at least partially based on an available amount of bandwidth for communication with the image sensor array. For example, Fig. 7, e.g., Fig. 7E, shows size of request for particular image determining at least partially based on available bandwidth module 764 determining the size of the request for the particular image data at least partially based on an available amount of bandwidth for communication with the image sensor array (e.g., a set of twenty-five image sensors lined on each face of a twenty-five-sided polygonal structure).

[00395] Referring again to Fig. 12E, operation 1254 may include operation 1266 depicting determining the size of the request for the particular image data at least partially based on an amount of time that a user device that requested at least a portion of the particular image data has requested image data. For example, Fig. 7, e.g., Fig. 7E, shows size of request for particular image determining at least partially based on device usage time module 766 determining the size of the request for the particular image data at least partially based on an amount of time that a user device that requested at least a portion of the particular image data has requested image data (e.g., devices that have waited longer may get preference; or, once a device has been sent a requested image, that device may move to the back of a queue for image data requests).

[00396] Referring again to Fig. 12E, operation 1254 may include operation 1268 depicting determining the size of the request for the particular image data at least partially based on an available amount of bandwidth for communication with a user device that requested at least a portion of the particular image data. For example, Fig. 7, e.g., Fig. 7E, shows size of request for particular image determining at least partially based on device available bandwidth module 768 determining the size of the request for the particular image data at least partially based on an available amount of bandwidth for communication with a user device that requested at least a portion of the particular image data (e.g., based on a connection between the user device and the server, e.g., if the bandwidth to the user device is a limiting factor, that may be taken into account and used in setting the size of the request for the particular image data).
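For illustration only (an editorial sketch, not part of the original disclosure), the size-determination factors discussed above (device resolution, device access level, and available bandwidth) might be folded into a single requested pixel budget; the tier caps, frame rate, and bytes-per-pixel figure are all assumptions:

```python
# Sketch: choose a request size as the smallest of what the device can
# display, what its access level permits, and what the link can sustain.
TIER_MAX_PIXELS = {"free": 640 * 480, "standard": 1280 * 720,
                   "premium": 1920 * 1080}

def determine_request_pixels(device_w, device_h, access_tier,
                             available_bps, bytes_per_pixel=3, fps=30):
    by_device = device_w * device_h
    by_tier = TIER_MAX_PIXELS[access_tier]
    # Pixels per frame sustainable on the link for uncompressed video.
    by_bandwidth = available_bps // (8 * bytes_per_pixel * fps)
    return min(by_device, by_tier, by_bandwidth)

# e.g., a 1080p premium device on a 100 Mbit/s link
print(determine_request_pixels(1920, 1080, "premium", 100_000_000))
```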

[00397] Figs. 13A-13C depict various implementations of operation 1006, depicting receiving only the particular image data from the image sensor array, according to embodiments. Referring now to Fig. 13A, operation 1006 may include operation 1302 depicting receiving only the particular image data from the image sensor array, wherein data from the scene other than the particular image data is discarded. For example, Fig. 8, e.g., Fig. 8A, shows particular image data from the image sensor array in which other image data is discarded receiving module 802 receiving only the particular image data from the image sensor array, wherein data from the scene other than the particular image data is discarded (e.g., the data may be stored, at least temporarily, but is not stored in a place where overwriting will be prevented, as in a persistent memory).

[00398] Referring again to Fig. 13A, operation 1006 may include operation 1304 depicting receiving only the particular image data from the image sensor array, wherein data from the scene other than the particular image data is stored at the image sensor array. For example, Fig. 8, e.g., Fig. 8A, shows particular image data from the image sensor array in which other image data is stored at the image sensor array receiving module 804 receiving only the particular image data (e.g., an image of a polar bear and a penguin) from the image sensor array (e.g., twenty-five CMOS sensors), wherein data from the scene (e.g., an Antarctic ice floe) other than the particular image data is stored at the image sensor array (e.g., a grouping of twenty-five CMOS sensors).

[00399] Referring again to Fig. 13A, operation 1006 may include operation 1306 depicting receiving the particular image data from the image sensor array in near-real time. For example, Fig. 8, e.g., Fig. 8A, shows particular image data from the image sensor array exclusive near-real-time receiving module 806 receiving the particular image data from the image sensor array in near-real time (e.g., not necessarily as something is happening, but near enough to give an appearance of real-time).

[00400] Referring now to Fig. 13B, operation 1006 may include operation 1308 depicting receiving the particular image data from the image sensor array in near-real time. For example, Fig. 8, e.g., Fig. 8B, shows particular image data from the image sensor array exclusive near-real-time receiving module 808 receiving the particular image data (e.g., an image of a person walking across a street captured in a live street view setting) from the image sensor array (e.g., two hundred ten-megapixel sensors) in near-real time.

[00401] Referring again to Fig. 13B, operation 1006 may include operation 1310, which may appear in conjunction with operation 1308, operation 1310 depicting retrieving data from the scene other than the particular image data at a later time. For example, Fig. 8, e.g., Fig. 8B, shows data from the scene other than the particular image data retrieving at a later time module 810 retrieving data from the scene (e.g., a scene of a mountain pass) other than the particular image data (e.g., the data that was not requested, e.g., data that no user requested but that was captured by the image sensor array, but not transmitted to the remote server) at a later time (e.g., at an off-peak time when more bandwidth is available, e.g., fewer users are using the system).

[00402] Referring again to Fig. 13B, operation 1310 may include operation 1312 depicting retrieving data from the scene other than the particular image data at a time at which bandwidth is available to the image sensor array. For example, Fig. 8, e.g., Fig. 8B, shows data from the scene other than the particular image data retrieving at a time of available bandwidth module 812 retrieving data from the scene other than the particular image data (e.g., data that was not requested) at a time at which bandwidth is available to the image sensor array (e.g., the image sensor array is not using all of its allotted bandwidth to handle requests for portions of the scene, and has available bandwidth to transmit data that can be retrieved that is other than the requested particular image data).

[00403] Referring again to Fig. 13B, operation 1310 may include operation 1314 depicting retrieving data from the scene other than the particular image data at a time that represents off-peak usage for the image sensor array. For example, Fig. 8, e.g., Fig. 8B, shows data from the scene other than the particular image data retrieving at an off-peak usage time of the image sensor array module 814 retrieving data from the scene other than the particular image data (e.g., data that was not requested) at a time that represents off-peak usage (e.g., the image sensor array may be capturing a city street, so off-peak usage would be at night; or the image sensor array may be a security camera, so off-peak usage may be the middle of the day, or off-peak usage may be flexible based on previous time period analysis, e.g., could also mean any time the image sensor array is not using all of its allotted bandwidth to handle requests for portions of the scene, and has available bandwidth to transmit data that can be retrieved that is other than the requested particular image data) for the image sensor array.
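For illustration only (an editorial sketch, not part of the original disclosure), the deferred retrieval of non-requested scene data might be driven by a bandwidth-utilization check; the 80% threshold used to define "available bandwidth" and the callback shape are assumptions:

```python
# Sketch: hold captured-but-unrequested regions in a queue and retrieve
# them from the array only while the link is below its utilization cap.
from collections import deque

deferred = deque()   # regions captured at the array but not yet transmitted

def maybe_retrieve_deferred(current_bps, allotted_bps, fetch):
    """Call once per frame; `fetch` pulls one deferred region from the
    array and returns the bitrate that retrieval consumed."""
    while deferred and current_bps < 0.8 * allotted_bps:
        region = deferred.popleft()
        current_bps += fetch(region)
    return current_bps
```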

[00404] Referring again to Fig. 13B, operation 1310 may include operation 1316 depicting retrieving data from the scene other than the particular image data at a time when no particular image data is requested. For example, Fig. 8, e.g., Fig. 8B, shows data from the scene other than the particular image data retrieving at a time when no particular image data requests are present at the image sensor array module 816 retrieving data from the scene other than the particular image data at a time when no particular image data is requested.

[00405] Referring again to Fig. 13B, operation 1310 may include operation 1318 depicting retrieving data from the scene other than the particular image data at a time at which fewer users are requesting particular image data than for which the image sensor array has capacity. For example, Fig. 8, e.g., Fig. 8B, shows data from the scene other than the particular image data retrieving at a time of available image sensor array capacity module 818 retrieving data from the scene other than the particular image data at a time at which fewer users are requesting particular image data than for which the image sensor array has capacity.

[00406] Referring now to Fig. 13C, operation 1006 may include operation 1320 depicting receiving only the particular image data that includes audio data from the sensor array. For example, Fig. 8, e.g., Fig. 8C, shows particular image data that includes audio data from the image sensor array exclusive receiving module 820 receiving only the particular image data (e.g., image data from a watering hole) that includes audio data (e.g., sound data, e.g., as picked up by a microphone) from the sensor array (e.g., the image sensor array may include one or more microphones or other sound-collecting devices, either separately from or linked to image capturing sensors).

[00407] Referring again to Fig. 13C, operation 1006 may include operation 1322 depicting receiving only the particular image data that was determined to contain a particular requested image object from the image sensor array. For example, Fig. 8, e.g., Fig. 8C, shows particular image data that was determined to contain a particular requested image object from the image sensor array exclusive receiving module 822 receiving only the particular image data that was determined to contain a particular requested image object (e.g., a particular football player from a football game that is the scene) from the image sensor array.

[00408] Referring again to Fig. 13C, operation 1322 may include operation 1324 depicting receiving only the particular image data that was determined to contain a particular requested image object by the image sensor array. For example, Fig. 8, e.g., Fig. 8C, shows particular image data that was determined to contain a particular requested image object by the image sensor array exclusive receiving module 824 receiving only the particular image data that was determined to contain a particular requested image object (e.g., a lion at a watering hole) by the image sensor array (e.g., the image sensor array performs the pattern recognition and identifies the particular image data, which may only have been identified as "the image data that contains the lion," and only that particular image data is transmitted and thus received by the server).

[00409] Figs. 14A-14E depict various implementations of operation 1008, depicting transmitting the received particular image data to at least one requestor, according to embodiments. Referring now to Fig. 14A, operation 1008 may include operation 1402 depicting transmitting the received particular image data to a user device. For example, Fig. 9, e.g., Fig. 9A, shows received particular image data transmitting to at least one user device requestor module 902 transmitting the received particular image data (e.g., an image of a quarterback at a National Football League game) to a user device (e.g., a television connected to the internet).

[00410] Referring again to Fig. 14A, operation 1402 may include operation 1404 depicting transmitting at least a portion of the received particular image data to a user device that requested the particular image data. For example, Fig. 9, e.g., Fig. 9A, shows received particular image data transmitting to at least one user device that requested at least a portion of the received particular data requestor module 904 transmitting at least a portion of the received particular image data (e.g., a portion that corresponds to a particular request received from a device, e.g., a request for a particular segment of the scene that shows a lion at a watering hole) to a user device (e.g., a computer device with a CPU and monitor) that requested the particular image data (e.g., the computer device requested the portion of the scene at which the lion is visible).

[00411] Referring again to Fig. 14A, operation 1008 may include operation 1406 depicting separating the received particular image data into a set of one or more requested images. For example, Fig. 9, e.g., Fig. 9A, shows separation of the received particular data into set of one or more requested images executing module 906 separating the received particular image data into a set of one or more requested images (e.g., if there were five requests for portions of the scene data, and some of the requests overlapped, the image data may be duplicated and packaged such that each requesting device receives the pixels that were requested).
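For illustration only (an editorial sketch, not part of the original disclosure), separating the consolidated pixel data into per-requestor images, duplicating overlapping pixels so each requestor receives exactly what it asked for, might look like the following; it assumes each request lies within the consolidated buffer:

```python
# Sketch: crop each requestor's region out of the one consolidated buffer
# received from the image sensor array; overlapping pixels are duplicated.
import numpy as np

def separate(consolidated, origin, requests):
    """consolidated: HxWx3 array; origin: (x, y) of its top-left corner in
    scene coordinates; requests: requestor -> (x, y, w, h) in scene coords."""
    ox, oy = origin
    out = {}
    for requestor, (x, y, w, h) in requests.items():
        out[requestor] = consolidated[y - oy:y - oy + h,
                                      x - ox:x - ox + w].copy()
    return out
```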

[00412] Referring again to Fig. 14A, operation 1008 may include operation 1408, which may appear in conjunction with operation 1406, operation 1408 depicting transmitting at least one image of the set of one or more requested images to a particular requestor that requested the at least one image. For example, Fig. 9, e.g., Fig. 9A, shows at least one image of the set of one or more requested images transmitting to a particular requestor that requested the one or more images transmitting module 908 transmitting at least one image of the set of one or more requested images to a particular requestor (e.g., a person operating a "virtual camera" that lets the person "see" the scene through the lens of a camera, even though the camera is spatially separated from the image sensor array, possibly by a large distance, because the image is transmitted to the camera).

[00413] Referring again to Fig. 14A, operation 1406 may include operation 1410 depicting separating the received particular image data into a first requested image and a second requested image. For example, Fig. 9, e.g., Fig. 9A, shows separation of the received particular data into a first requested image and a second requested image executing module 910 separating the received particular image data (e.g., image data from a jungle oasis) into a first requested image (e.g., an image of a lion) and a second requested image (e.g., an image of a hippopotamus).

[00414] Referring again to Fig. 14A, operation 1008 may include operation 1412 depicting transmitting the received particular image data to a user device that requested an image that is part of the received particular image data. For example, Fig. 9, e.g., Fig. 9A, shows received particular image data transmitting to at least one user device that requested image data that is part of the received particular image data module 912 transmitting the received particular image data to a user device that requested an image that is part of the received particular image data.

[00415] Referring now to Fig. 14B, operation 1008 may include operation 1414 depicting transmitting a first portion of the received particular image data to a first requestor. For example, Fig. 9, e.g., Fig. 9B, shows first portion of received particular image data transmitting to a first requestor module 914 transmitting a first portion (e.g., a part of an animal oasis that contains a zebra) of the received particular image data (e.g., image data from the oasis that contains a zebra) to a first requestor (e.g., a device that requested the video feed that is the portion of the oasis that contains the zebra, e.g., a television device).

[00416] Referring again to Fig. 14B, operation 1008 may include operation 1416, which may appear in conjunction with operation 1414, operation 1416 depicting transmitting a second portion of the received particular image data to a second requestor. For example, Fig. 9, e.g., Fig. 9B, shows second portion of received particular image data transmitting to a second requestor module 916 transmitting a second portion (e.g., a portion of the oasis that contains birds) of the received particular image data (e.g., image data from the oasis) to a second requestor (e.g., a device that requested the image that is the portion of the oasis that contains birds, e.g., a tablet device).

[00417] Referring again to Fig. 14B, operation 1414 may include operation 1418 depicting transmitting the first portion of the received particular image data to the first requestor that requested the first portion of the received particular image data. For example, Fig. 9, e.g., Fig. 9B, shows first portion of received particular image data transmitting to a first requestor that requested the first portion module 918 transmitting the first portion of the received particular image data (e.g., a portion that contains a particular landmark in a virtual tourism setting) to the first requestor that requested the first portion of the received particular image data.

[00418] Referring again to Fig. 14B, operation 1418 may include operation 1420 depicting transmitting an image that contains a particular football player to a television device that is configured to display a football game and that requested the image that contains the particular football player. For example, Fig. 9, e.g., Fig. 9B, shows portion of received particular image data that includes a particular football player transmitting to a television device that requested the football player from a football game module 920 transmitting an image that contains a particular football player to a television device that is configured to display a football game and that requested the image that contains the particular football player.

[00419] Referring again to Fig. 14B, operation 1416 may include operation 1422 depicting transmitting the second portion of the received particular image data to the second requestor that requested the second portion of the received particular image data. For example, Fig. 9, e.g., Fig. 9B, shows second portion of received particular image data transmitting to the second requestor that requested the second portion module 922 transmitting the second portion of the received particular image data (e.g., the portion of the received particular image data that includes the lion) to the second requestor that requested the second portion of the received particular image data (e.g., a person watching the feed on their television).

[00420] Referring again to Fig. 14B, operation 1422 may include operation 1424 depicting transmitting an image that contains a view of a motor vehicle to a tablet device that requested a street view image of a particular corner of a city. For example, Fig. 9, e.g., Fig. 9B, shows portion that contains a view of a motor vehicle transmitting to the second requestor that is a tablet device that requested the view of the motor vehicle module 924 transmitting an image that contains a view of a motor vehicle (e.g., a Honda Accord) to a tablet device that requested a street view image of a particular corner of a city (e.g., Alexandria, VA).

[00421] Referring again to Fig. 14B, operation 1008 may include operation 1426 depicting transmitting at least a portion of the received particular image data without alteration to at least one requestor. For example, Fig. 9, e.g., Fig. 9B, shows received particular image data unaltered transmitting to at least one requestor module 926 transmitting at least a portion of the received particular image data (e.g., an image of animals at an oasis) without alteration (e.g., without altering how the image appears to human eyes, e.g., there may be data manipulation that is not visible) to at least one requestor (e.g., the device that requested the image, e.g., a mobile device).

[00422] Referring now to Fig. 14C, operation 1008 may include operation 1428 depicting adding supplemental data to at least a portion of the received particular image data to generate transmission image data. For example, Fig. 9, e.g., Fig. 9C, shows supplemental data addition to at least a portion of the received particular image data to generate transmission image data facilitating module 928 adding supplemental data (e.g., context data, or advertisement data, or processing assistance data, or data regarding how to display or cache the image, whether visible in the image or embedded therein, or otherwise associated with the image) to at least a portion of the received particular image data (e.g., images from an animal watering hole) to generate transmission image data (e.g., image data that will be transmitted to the requestor, e.g., a user of a desktop computer).
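
One possible packaging of operation 1428, sketched under the assumption that the supplemental data travels alongside the pixels rather than being burned into them; the framing scheme and the function name add_supplemental are illustrative only:

    import json

    def add_supplemental(image_bytes, supplemental):
        """Package image data with supplemental data (e.g., context data,
        advertisement data, or display/caching hints) into a single
        transmission record, using length-prefixed framing."""
        header = json.dumps(supplemental).encode("utf-8")
        # 4-byte big-endian header length, then the header, then the pixels.
        return len(header).to_bytes(4, "big") + header + image_bytes

    record = add_supplemental(
        b"fake-pixel-payload",
        {"ad": "buy tickets", "cache_ttl_s": 30, "overlay": "player stats"},
    )
    print(len(record))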

[00423] Referring again to Fig. 14C, operation 1008 may include operation 1430, which may appear in conjunction with operation 1428, operation 1430 depicting transmitting the generated transmission image data to at least one requestor. For example, Fig. 9, e.g., Fig. 9C, shows generated transmission image data transmitting to at least one requestor module 930 transmitting the generated transmission image data (e.g., image data of a football player at a football game with statistical data of that football player overlaid in the image) to at least one requestor (e.g., a person watching the game on their mobile tablet device).

[00424] Referring again to Fig. 14C, operation 1428 may include operation 1432 depicting adding advertisement data to at least a portion of the received particular image data to generate transmission image data. For example, Fig. 9, e.g., Fig. 9C, shows advertisement data addition to at least a portion of the received particular image data to generate transmission image data facilitating module 932 adding advertisement data (e.g., data for an advertisement for buying tickets to the next soccer game and an advertisement for buying a soccer jersey of the player that is pictured) to at least a portion of the received particular image data (e.g., images of a soccer game and/or players in the soccer game) to generate transmission image data.

[00425] Referring again to Fig. 14C, operation 1432 may include operation 1434 depicting adding context-based advertisement data that is at least partially based on the received particular image data to at least the portion of the received particular image data. For example, Fig. 9, e.g., Fig. 9C, shows context-based advertisement data addition to at least a portion of the received particular image data to generate transmission image data facilitating module 934 adding context-based advertisement data (e.g., an ad for travel services to a place that is being viewed in a virtual tourism setting, e.g., the Great Pyramids) that is at least partially based on the received particular image data (e.g., visual image data from the Great Pyramids) to at least the portion of the received particular image data.

[00426] Referring again to Fig. 14C, operation 1434 may include operation 1436 depicting adding animal rights donation fund advertisement data that is at least partially based on the received particular image data of a tiger at a jungle oasis, to the received particular image data of the tiger at the jungle oasis. For example, Fig. 9, e.g., Fig. 9C, shows animal rights donation fund advertisement data addition to at least a portion of the received particular image data that includes a jungle tiger at an oasis to generate transmission image data facilitating module 936 adding animal rights donation fund advertisement data that is at least partially based on the received particular image data of a tiger at a jungle oasis, to the received particular image data of the tiger at the jungle oasis.

[00427] Referring again to Fig. 14C, operation 1428 may include operation 1438 depicting adding related visual data related to the received particular image data to at least a portion of the received particular image data to generate transmission image data. For example, Fig. 9, e.g., Fig. 9C, shows related visual data addition to at least a portion of the received particular image data to generate transmission image data facilitating module 938 adding related visual data (e.g., the name of an animal being shown, or a make and model year of a car being shown, or, if a product is shown in the frame, the name of the website that has it for the cheapest price right now) related to the received particular image data (e.g., an animal, a car, or a product) to at least a portion of the received particular image data to generate transmission image data (e.g., data to be transmitted to the receiving device).

[00428] Referring again to Fig. 14C, operation 1438 may include operation 1440 depicting adding fantasy football statistical data to the received particular image data of a football quarterback that appears in the received particular image data of the quarterback in a football game to generate the transmission image data. For example, Fig. 9, e.g., Fig. 9C, shows related fantasy football statistical data addition to at least a portion of the received particular image data of a quarterback data to generate transmission image data facilitating module 940 adding fantasy football statistical data (e.g., passes completed, receptions, rushing yards gained, receiving yards gained, total points scored, player name, etc.) to the received particular image data of a football quarterback that appears in the received particular image data of the quarterback in a football game to generate the transmission image data (e.g., image data that is to be transmitted to the requesting device, e.g., a television).

[00429] Referring now to Fig. 14D, operation 1008 may include operation 1442 depicting modifying data of a portion of the received particular image data to generate transmission image data. For example, Fig. 9, e.g., Fig. 9D, shows portion of received particular image data modification to generate transmission image data facilitating module 942 modifying data (e.g., adding to, subtracting from, or changing a characteristic of the data, e.g., alpha data, color data, or saturation data, either on individual bytes or on the image as a whole) of a portion of the received particular image data (e.g., the image data sent from the camera array) to generate transmission image data (e.g., data to be transmitted to the device that requested the data, e.g., a smartphone device).

[00430] Referring again to Fig. 14D, operation 1008 may include operation 1444 depicting transmitting the generated transmission image data to at least one requestor. For example, Fig. 9, e.g., Fig. 9D, shows generated transmission image data transmitting to at least one requestor module 944 transmitting the generated transmission image data (e.g., the image data that was generated by the server device to transmit to the requesting device) to at least one requestor (e.g., the requesting device, e.g., a laptop computer of a family at home running a virtual tourism program in a web page).

[00431] Referring again to Fig. 14D, operation 1442 may include operation 1446 depicting performing image manipulation modifications of the received particular image data to generate transmission image data. For example, Fig. 9, e.g., Fig. 9D, shows portion of received particular image data image manipulation modification to generate transmission image data facilitating module 946 performing image manipulation modifications (e.g., editing a feature of a captured image) of the received particular image data (e.g., a live street view of an area with a lot of shading from tall skyscrapers) to generate transmission image data (e.g., the image data to be transmitted to the device that requested the data, e.g., a camera device).

[00432] Referring again to Fig. 14D, operation 1446 may include operation 1448 depicting performing contrast balancing on the received particular image data to generate transmission image data. For example, Fig. 9, e.g., Fig. 9D, shows portion of received particular image data contrast balancing modification to generate transmission image data facilitating module 948 performing contrast balancing on the received particular image data (e.g., a live street view of an area with a lot of shading from tall skyscrapers) to generate transmission image data (e.g., the image data to be transmitted to the device that requested the data, e.g., a camera device).
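
Contrast balancing as recited in operation 1448 could take many forms; a minimal sketch of one common form, a percentile stretch that remaps a heavily shaded street view onto the full intensity range, follows (the function name and percentile choices are illustrative):

    import numpy as np

    def stretch_contrast(image, low_pct=2, high_pct=98):
        """Map the low/high percentiles of a dim, low-contrast image onto
        the full 0-255 range, clipping the extremes."""
        lo, hi = np.percentile(image, [low_pct, high_pct])
        scaled = (image.astype(np.float32) - lo) / max(hi - lo, 1e-6)
        return np.clip(scaled * 255.0, 0, 255).astype(np.uint8)

    # A synthetic stand-in for a street view shaded by tall skyscrapers.
    shaded = np.random.randint(40, 90, size=(480, 640), dtype=np.uint8)
    balanced = stretch_contrast(shaded)
    print(balanced.min(), balanced.max())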

[00433] Referring again to Fig. 14D, operation 1446 may include operation 1450 depicting performing color modification balancing on the received particular image data to generate transmission image data. For example, Fig. 9, e.g., Fig. 9D, shows portion of received particular image data color modification balancing to generate transmission image data facilitating module 950 performing color modification balancing on the received particular image data (e.g., an image of a lion at an animal watering hole) to generate transmission image data (e.g., the image data that will be transmitted to the device).

[00434] Referring again to Fig. 14D, operation 1442 may include operation 1452 depicting redacting at least a portion of the received particular image data to generate the transmission image data. For example, Fig. 9, e.g., Fig. 9D, shows portion of received particular image data redaction to generate transmission image data facilitating module 952 redacting (e.g., deleting, smoothing over, blurring, performing any sort of image manipulation operation upon, or any other operation designed to obscure any feature of the data being redacted, including, but not limited to, removing or overwriting that data entirely) at least a portion of the received particular image data to generate the transmission image data (e.g., the image data that will be transmitted to the device or designated for transmission to the device).

[00435] Referring again to Fig. 14D, operation 1452 may include operation 1454 depicting redacting at least a portion of the received particular image data based on a security clearance level of the requestor. For example, Fig. 9, e.g., Fig. 9D, shows portion of received particular image data redaction to generate transmission image data based on a security clearance level of the requestor facilitating module 954 redacting (e.g., deleting, smoothing over, blurring, performing any sort of image manipulation operation upon, or any other operation designed to obscure any feature of the data being redacted, including, but not limited to, removing or overwriting that data entirely) at least a portion of the received particular image data (e.g., the faces of people, or the license plates of cars) based on a security clearance level of the requestor (e.g., a device that requested the image may have a security clearance based on what that device is allowed to view, and if the security clearance level is below a certain threshold, data like license plates and people's faces may be redacted).
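
A minimal sketch of clearance-based redaction (operations 1452 and 1454), assuming a hypothetical policy in which each sensitive region carries a required clearance level and is overwritten when the requestor's level falls below it:

    import numpy as np

    def redact(image, regions, requestor_clearance):
        """regions: list of (top, left, height, width, required_clearance).
        Regions the requestor is not cleared to see are blacked out so that
        no trace of the feature (a face, a license plate, a tank) survives."""
        out = image.copy()
        for top, left, height, width, required in regions:
            if requestor_clearance < required:
                out[top:top + height, left:left + width] = 0
        return out

    satellite = np.full((200, 200), 128, dtype=np.uint8)
    tank_region = [(50, 60, 30, 40, 5)]  # hypothetically requires level 5
    print(redact(satellite, tank_region, requestor_clearance=2).sum())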

[00436] Referring again to Fig. 14D, operation 1454 may include operation 1456 depicting redacting a tank from the received particular image data that includes a satellite image that includes a military base, based on an insufficient security clearance level of a device that requested the particular image data. For example, Fig. 9, e.g., Fig. 9D, shows portion of received satellite image data that includes a tank redaction to generate transmission image data based on a security clearance level of the requestor facilitating module 956 redacting (e.g., deleting, smoothing over, blurring, performing any sort of image manipulation operation upon, or any other operation designed to obscure any feature of data associated with the tank, including, but not limited to, removing or overwriting the data associated with the tank entirely) a tank from the received particular image data that includes a satellite image (e.g., the image sensor array that captured the image is at least partially mounted on a satellite) that includes a military base, based on an insufficient security clearance level (e.g., some data indicates that the device does not have a security level sufficient to approve seeing the tank) of a device that requested the particular image data.

[00437] Referring now to Fig. 14E, operation 1008 may include operation 1458 depicting transmitting a lower-resolution version of the received particular image data to the at least one requestor. For example, Fig. 9, e.g., Fig. 9E, shows lower-resolution version of received particular image data transmitting to at least one requestor module 958 transmitting a lower-resolution version (e.g., a version of the image data that is at a lower resolution than what the device that requested the particular image data is capable of displaying) of the received particular image data (e.g., an image of a baseball player at a baseball game) to the at least one requestor (e.g., the device that requested the data).

[00438] Referring again to Fig. 14E, operation 1008 may include operation 1460, which may appear in conjunction with operation 1458, operation 1460 depicting transmitting a full-resolution version of the received particular image data to the at least one requestor. For example, Fig. 9, e.g., Fig. 9F, shows full-resolution version of received particular image data transmitting to at least one requestor module 960 transmitting a full-resolution version (e.g., a version that is at a resolution of the device that requested the image) of the received particular image data (e.g., an image of an animal at an animal oasis) to the at least one requestor.
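
The lower-resolution/full-resolution pairing of operations 1458 and 1460 might be approximated by block-averaging, as in the following sketch; the factor-of-four preview and the function name downsample are illustrative assumptions:

    import numpy as np

    def downsample(image, factor):
        """Produce a lower-resolution version by block-averaging; the
        full-resolution original can be transmitted separately or later."""
        h, w = image.shape
        h, w = h - h % factor, w - w % factor  # trim to a multiple of factor
        blocks = image[:h, :w].reshape(h // factor, factor, w // factor, factor)
        return blocks.mean(axis=(1, 3)).astype(image.dtype)

    full = np.random.randint(0, 256, size=(480, 640), dtype=np.uint8)
    preview = downsample(full, 4)  # 120 x 160 preview sent first
    print(preview.shape)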

[00439] All of the above U.S. patents, U.S. patent application publications, U.S. patent applications, foreign patents, foreign patent applications and non-patent publications referred to in this specification and/or listed in any Application Data Sheet, are incorporated herein by reference, to the extent not inconsistent herewith.

[00440] The foregoing detailed description has set forth various embodiments of the devices and/or processes via the use of block diagrams, flowcharts, and/or examples. Insofar as such block diagrams, flowcharts, and/or examples contain one or more functions and/or operations, it will be understood by those within the art that each function and/or operation within such block diagrams, flowcharts, or examples can be implemented, individually and/or collectively, by a wide range of hardware, software (e.g., a high-level computer program serving as a hardware specification), firmware, or virtually any combination thereof, limited to patentable subject matter under 35 U.S.C. 101. In an embodiment, several portions of the subject matter described herein may be implemented via Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs), digital signal processors (DSPs), or other integrated formats. However, those skilled in the art will recognize that some aspects of the embodiments disclosed herein, in whole or in part, can be equivalently implemented in integrated circuits, as one or more computer programs running on one or more computers (e.g., as one or more programs running on one or more computer systems), as one or more programs running on one or more processors (e.g., as one or more programs running on one or more microprocessors), as firmware, or as virtually any combination thereof, limited to patentable subject matter under 35 U.S.C. 101, and that designing the circuitry and/or writing the code for the software (e.g., a high-level computer program serving as a hardware specification) and/or firmware would be well within the skill of one of skill in the art in light of this disclosure. In addition, those skilled in the art will appreciate that the mechanisms of the subject matter described herein are capable of being distributed as a program product in a variety of forms, and that an illustrative embodiment of the subject matter described herein applies regardless of the particular type of signal bearing medium used to actually carry out the distribution. Examples of a signal bearing medium include, but are not limited to, the following: a recordable type medium such as a floppy disk, a hard disk drive, a Compact Disc (CD), a Digital Video Disk (DVD), a digital tape, a computer memory, etc.; and a transmission type medium such as a digital and/or an analog communication medium (e.g., a fiber optic cable, a waveguide, a wired communications link, a wireless communication link (e.g., transmitter, receiver, transmission logic, reception logic, etc.), etc.).

[00441] While particular aspects of the present subject matter described herein have been shown and described, it will be apparent to those skilled in the art that, based upon the teachings herein, changes and modifications may be made without departing from the subject matter described herein and its broader aspects and, therefore, the appended claims are to encompass within their scope all such changes and modifications as are within the true spirit and scope of the subject matter described herein. It will be understood by those within the art that, in general, terms used herein, and especially in the appended claims (e.g., bodies of the appended claims) are generally intended as "open" terms (e.g., the term "including" should be interpreted as "including but not limited to," the term "having" should be interpreted as "having at least," the term "includes" should be interpreted as "includes but is not limited to," etc.).

[00442] It will be further understood by those within the art that if a specific number of an introduced claim recitation is intended, such an intent will be explicitly recited in the claim, and in the absence of such recitation no such intent is present. For example, as an aid to understanding, the following appended claims may contain usage of the introductory phrases "at least one" and "one or more" to introduce claim recitations. However, the use of such phrases should not be construed to imply that the introduction of a claim recitation by the indefinite articles "a" or "an" limits any particular claim containing such introduced claim recitation to claims containing only one such recitation, even when the same claim includes the introductory phrases "one or more" or "at least one" and indefinite articles such as "a" or "an" (e.g., "a" and/or "an" should typically be interpreted to mean "at least one" or "one or more"); the same holds true for the use of definite articles used to introduce claim recitations. In addition, even if a specific number of an introduced claim recitation is explicitly recited, those skilled in the art will recognize that such recitation should typically be interpreted to mean at least the recited number (e.g., the bare recitation of "two recitations," without other modifiers, typically means at least two recitations, or two or more recitations).

[00443] Furthermore, in those instances where a convention analogous to "at least one of A, B, and C, etc." is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B, and C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.). In those instances where a convention analogous to "at least one of A, B, or C, etc." is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B, or C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.). It will be further understood by those within the art that typically a disjunctive word and/or phrase presenting two or more alternative terms, whether in the description, claims, or drawings, should be understood to contemplate the possibilities of including one of the terms, either of the terms, or both terms unless context dictates otherwise. For example, the phrase "A or B" will be typically understood to include the possibilities of "A" or "B" or "A and B."

[00444] With respect to the appended claims, those skilled in the art will appreciate that recited operations therein may generally be performed in any order. Also, although various operational flows are presented in a sequence(s), it should be understood that the various operations may be performed in other orders than those which are illustrated, or may be performed concurrently. Examples of such alternate orderings may include overlapping, interleaved, interrupted, reordered, incremental, preparatory, supplemental, simultaneous, reverse, or other variant orderings, unless context dictates otherwise. Furthermore, terms like "responsive to," "related to," or other past-tense adjectives are generally not intended to exclude such variants, unless context dictates otherwise.

[00445] This application may make reference to one or more trademarks, e.g., a word, letter, symbol, or device adopted by one manufacturer or merchant and used to identify and/or distinguish his or her product from those of others. Trademark names used herein are set forth in such language that makes clear their identity, that distinguishes them from common descriptive nouns, that have fixed and definite meanings, or, in many if not all cases, are accompanied by other specific identification using terms not covered by trademark. In addition, trademark names used herein have meanings that are well-known and defined in the literature, or do not refer to products or compounds for which knowledge of one or more trade secrets is required in order to divine their meaning. All trademarks referenced in this application are the property of their respective owners, and the appearance of one or more trademarks in this application does not diminish or otherwise adversely affect the validity of the one or more trademarks. All trademarks, registered or unregistered, that appear in this application are assumed to include a proper trademark symbol, e.g., the circle R or bracketed capitalization (e.g., [trademark name]), even when such trademark symbol does not explicitly appear next to the trademark. To the extent a trademark is used in a descriptive manner to refer to a product or process, that trademark should be interpreted to represent the corresponding product or process as of the date of the filing of this patent application.

[00446] Throughout this application, the terms "in an embodiment," "in one embodiment," "in several embodiments," "in at least one embodiment," "in various embodiments," and the like, may be used. Each of these terms, and all such similar terms should be construed as "in at least one embodiment, and possibly but not necessarily all embodiments," unless explicitly stated otherwise. Specifically, unless explicitly stated otherwise, the intent of phrases like these is to provide non-exclusive and non-limiting examples of implementations of the invention. The mere statement that one, some, or many embodiments include one or more things or have one or more features, does not imply that all embodiments include one or more things or have one or more features, but also does not imply that such embodiments must exist. It is a mere indicator of an example and should not be interpreted otherwise, unless explicitly stated as such.

[00447] Those skilled in the art will appreciate that the foregoing specific exemplary processes and/or devices and/or technologies are representative of more general processes and/or devices and/or technologies taught elsewhere herein, such as in the claims filed herewith and/or elsewhere in the present application.

STRUCTURED DISCLOSURE DIRECTED TOWARD ONE OF SKILL IN THE ART [START]

Those skilled in the art will appreciate the indentations and cross-referencing of the following as instructive. In general, each immediately following numbered paragraph in this "Structured Disclosure Directed Toward One Of Skill In The Art Section" should be treated as preceded by the word "Clause," as can be seen by the Clauses that cross-reference against them, and each cross-referencing paragraph is to be understood as permissively incorporating its parent clause(s) by reference.

As used herein, and in particular in the following thing/operation disclosures, the word "comprising" can generally be interpreted as "including but not limited to":

327. A computationally-implemented thing/operation disclosure, comprising:

acquiring a request for particular image data that is part of a scene;

transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data;

receiving only the particular image data from the image sensor array; and transmitting the received particular image data to at least one requestor.

328. The computationally-implemented thing/operation disclosure of clause 1, wherein said acquiring a request for particular image data that is part of a scene comprises:

acquiring the request for particular image data of the scene that includes one or more images.

329. The computationally-implemented thing/operation disclosure of clause 1, wherein said acquiring a request for particular image data that is part of a scene comprises:

receiving the request for particular image data of the scene.

330. The computationally-implemented thing/operation disclosure of clause 3, wherein said receiving the request for particular image data of the scene comprises: receiving the request for particular image data of the scene from a user device.

331. The computationally-implemented thing/operation disclosure of clause 4, wherein said receiving the request for particular image data of the scene from a user device comprises:

receiving the request for particular image data of the scene from a user device that is configured to display at least a portion of the scene.

332. The computationally-implemented thing/operation disclosure of clause 5, wherein said receiving the request for particular image data of the scene from a user device that is configured to display at least a portion of the scene comprises:

receiving the request for particular image data of the scene from a user device that is configured to display at least a portion of the scene in a viewfinder.

333. The computationally-implemented thing/operation disclosure of clause 4, wherein said receiving the request for particular image data of the scene from a user device comprises:

receiving the request for particular image data of the scene from the user device that is configured to receive a selection of a particular image.

334. The computationally-implemented thing/operation disclosure of clause 7, wherein said receiving the request for particular image data of the scene from the user device that is configured to receive a selection of a particular image comprises:

receiving the request for particular image data of the scene from the user device that is configured to receive the selection of the particular image, said selection based on a view of the scene.

335. The computationally-implemented thing/operation disclosure of clause 7, wherein said receiving the request for particular image data of the scene from the user device that is configured to receive a selection of a particular image comprises:

receiving the request for particular image data of the scene from the user device that is configured to receive the selection of the particular image, wherein the user device is one or more of a smartphone, television, computer screen, tablet, camera, appliance, and augmented reality device.

336. The computationally-implemented thing/operation disclosure of clause 1, wherein said acquiring a request for particular image data that is part of a scene comprises:

acquiring the request for particular image data of the scene, wherein the scene is the image data collected by the array of more than one image sensor.

337. The computationally-implemented thing/operation disclosure of clause 1, wherein said acquiring a request for particular image data that is part of a scene comprises:

acquiring the request for particular image data of the scene, wherein the scene is a representation of the image data collected by the array of more than one image sensor.

338. The computationally-implemented thing/operation disclosure of clause 11, wherein said acquiring the request for particular image data of the scene, wherein the scene is a representation of the image data collected by the array of more than one image sensor comprises:

acquiring the request for particular image data of the scene, wherein the scene is a sampling of the image data collected by the array of more than one image sensor.

339. The computationally-implemented thing/operation disclosure of clause 11, wherein said acquiring the request for particular image data of the scene, wherein the scene is a representation of the image data collected by the array of more than one image sensor comprises:

acquiring the request for particular image data of the scene, wherein the scene is a subset of the image data collected by the array of more than one image sensor.

340. The computationally-implemented thing/operation disclosure of clause 11, wherein said acquiring the request for particular image data of the scene, wherein the scene is a representation of the image data collected by the array of more than one image sensor comprises:

acquiring the request for particular image data of the scene, wherein the scene is a low-resolution version of the image data collected by the array of more than one image sensor.

341. The computationally-implemented thing/operation disclosure of clause 1, wherein said acquiring a request for particular image data that is part of a scene comprises:

acquiring the request for particular image data of a scene that is a football game.

342. The computationally-implemented thing/operation disclosure of clause 1, wherein said acquiring a request for particular image data that is part of a scene comprises:

acquiring the request for particular image data of a scene that is a street view of an area.

343. The computationally-implemented thing/operation disclosure of clause 1, wherein said acquiring a request for particular image data that is part of a scene comprises:

acquiring the request for particular image data of a scene that is a tourist destination.

344. The computationally-implemented thing/operation disclosure of clause 1, wherein said acquiring a request for particular image data that is part of a scene comprises:

acquiring the request for particular image data of a scene that is an inside of a home.

345. The computationally-implemented thing/operation disclosure of clause 1, wherein said acquiring a request for particular image data that is part of a scene comprises:

acquiring the request for particular image data of the scene, wherein the particular image data is an image that is a portion of the scene.

346. The computationally-implemented thing/operation disclosure of clause 19, wherein said acquiring the request for particular image data of the scene, wherein the particular image data is an image that is a portion of the scene comprises:

acquiring the request for particular image data of the scene, wherein the particular image data includes image data of a particular football player and the scene is a football field.

347. The computationally-implemented thing/operation disclosure of clause 19, wherein said acquiring the request for particular image data of the scene, wherein the particular image data is an image that is a portion of the scene comprises:

acquiring the request for particular image data of the scene, wherein the particular image data includes image data of a license plate of a vehicle, and the scene is an image representation of a highway bridge.

348. The computationally-implemented thing/operation disclosure of clause 1, wherein said acquiring a request for particular image data that is part of a scene comprises:

acquiring a request for a particular image object located in the scene; and determining the particular image data of the scene that contains the particular image object.

349. The computationally-implemented thing/operation disclosure of clause 22, wherein said acquiring a request for a particular image object located in the scene comprises:

acquiring a request for a particular person located in the scene.

350. The computationally-implemented thing/operation disclosure of clause 22, wherein said acquiring a request for a particular image object located in the scene comprises:

acquiring a request for a basketball located in the scene that is a basketball arena.

351. The computationally-implemented thing/operation disclosure of clause 22, wherein said acquiring a request for a particular image object located in the scene comprises:

acquiring a request for a motor vehicle located in the scene.

352. The computationally-implemented thing/operation disclosure of clause 22, wherein said acquiring a request for a particular image object located in the scene comprises:

acquiring a request for any human object representations located in the scene.

353. The computationally-implemented thing/operation disclosure of clause 22, wherein said determining the particular image data of the scene that contains the particular image object comprises:

determining the particular image data of the scene that contains the particular image object through application of automated pattern recognition to scene image data.
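
Automated pattern recognition, as recited in clause 353, could be realized in many ways; purely as an illustration, the brute-force template match below locates a requested object by sum-of-squared-differences (a deployed system would more plausibly use a trained detector, and all names here are hypothetical):

    import numpy as np

    def best_match(scene, template):
        """Slide the template over the scene and return the top-left corner
        with the smallest sum of squared differences."""
        sh, sw = scene.shape
        th, tw = template.shape
        best_score, best_pos = None, None
        for top in range(sh - th + 1):
            for left in range(sw - tw + 1):
                window = scene[top:top + th, left:left + tw]
                score = np.sum((window.astype(np.int32) - template) ** 2)
                if best_score is None or score < best_score:
                    best_score, best_pos = score, (top, left)
        return best_pos

    scene = np.zeros((40, 40), dtype=np.uint8)
    scene[10:14, 20:24] = 200  # the requested object
    template = np.full((4, 4), 200, dtype=np.uint8)
    print(best_match(scene, template))  # (10, 20)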

354. The computationally-implemented thing/operation disclosure of clause 22, wherein said determining the particular image data of the scene that contains the particular image object comprises:

determining the particular image data of the scene that contains the particular image object through identification of the particular image object in previous scene data that is image data of the scene from at least one previous moment in time.

355. The computationally-implemented thing/operation disclosure of clause 22, wherein said determining the particular image data of the scene that contains the particular image object comprises:

determining the particular image data of the scene that contains the particular image object through identification of the particular image object in cached scene data.

356. The computationally-implemented thing/operation disclosure of clause 29, wherein said determining the particular image data of the scene that contains the particular image object through identification of the particular image object in cached scene data comprises:

determining the particular image data of the scene that contains the particular image object through identification of the particular image object in cached scene data that was previously transmitted from the image sensor array that includes more than one image sensor.

357. The computationally-implemented thing/operation disclosure of clause 29, wherein said determining the particular image data of the scene that contains the particular image object through identification of the particular image object in cached scene data comprises:

determining the particular image data of the scene that contains the particular image object through identification of the particular image object in cached scene data that was previously transmitted from the image sensor array that includes more than one image sensor at a time when bandwidth was available for a connection to the image sensor array that includes more than one image sensor.

358. The computationally-implemented thing/operation disclosure of clause 1, wherein said acquiring a request for particular image data that is part of a scene comprises:

receiving a first request for first particular image data from the scene from a first requestor; and

receiving a second request for second particular image data from the scene from a second requestor that is different than the first requestor.

359. The computationally-implemented thing/operation disclosure of clause 1, wherein said acquiring a request for particular image data that is part of a scene comprises:

combining a received first request for first particular image data from the scene and a received second request for second particular image data from the scene.

360. The computationally-implemented thing/operation disclosure of clause 33, wherein said combining a received first request for first particular image data from the scene and a received second request for second particular image data from the scene comprises:

combining the received first request for first particular image data from the scene and the received second request for second particular image data from the scene into the request for particular image data that consolidates overlapping requested image data.
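
Consolidating overlapping requested image data, as in clause 360, might look like the following sketch, which merges rectangle requests into one deduplicated pixel set so shared pixels are fetched from the array only once; a production implementation would more likely merge rectangles than enumerate pixels, and all names here are illustrative:

    def consolidate(requests):
        """requests: iterable of (top, left, height, width) rectangles.
        Returns the union of all requested pixel coordinates."""
        wanted = set()
        for top, left, height, width in requests:
            for row in range(top, top + height):
                for col in range(left, left + width):
                    wanted.add((row, col))
        return wanted

    first = (0, 0, 4, 4)   # first device's request
    second = (2, 2, 4, 4)  # second device's request, overlapping the first
    print(len(consolidate([first, second])))  # 28, not 32: 4 shared pixels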

361. The computationally-implemented thing/operation disclosure of clause 1, wherein said acquiring a request for particular image data that is part of a scene comprises:

receiving a first request for first particular image data from the scene and a second request for second particular image data from the scene; and

combining the received first request and the received second request into the request for particular image data.

362. The computationally-implemented thing/operation disclosure of clause 35, wherein said combining the received first request and the received second request into the request for particular image data comprises:

removing common pixel data between the received first request and the received second request.

363. The computationally-implemented thing/operation disclosure of clause 1, wherein said acquiring a request for particular image data that is part of a scene comprises:

acquiring the request for particular image data that is part of the scene, wherein the particular image data includes video data.

364. The computationally-implemented thing/operation disclosure of clause 1, wherein said acquiring a request for particular image data that is part of a scene comprises:

acquiring the request for particular image data that is part of the scene, wherein the particular image data includes audio data.

365. The computationally-implemented thing/operation disclosure of clause 1, wherein said acquiring a request for particular image data that is part of a scene comprises:

acquiring the request for particular image data that is part of the scene from a user device that receives the request for particular image data through an audio interface.

366. The computationally-implemented thing/operation disclosure of clause 39, wherein said acquiring the request for particular image data that is part of the scene from a user device that receives the request for particular image data through an audio interface comprises:

acquiring the request for particular image data that is part of the scene from a user device that has a microphone that receives a spoken request for particular image data from the user.

367. The computationally-implemented thing/operation disclosure of clause 1, wherein said transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data comprises:

transmitting the request for the particular image data of the scene to the image sensor array that includes more than one image sensor and that is configured to capture an image that is larger than the requested image data.

368. The computationally-implemented thing/operation disclosure of clause 1, wherein said transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data comprises:

transmitting the request for the particular image data of the scene to the image sensor array that includes two image sensors arranged side by side and angled toward each other and that is configured to capture an image that is larger than the requested image data.

369. The computationally-implemented thing/operation disclosure of clause 1, wherein said transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data comprises:

transmitting the request for the particular image data of the scene to the image sensor array that includes the array of image sensors arranged in a grid and that is configured to capture an image that is larger than the requested image data.

370. The computationally-implemented thing/operation disclosure of clause 1, wherein said transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data comprises:

transmitting the request for the particular image data of the scene to the image sensor array that includes the array of image sensors arranged in a line such that a field of view is greater than 120 degrees and that is configured to capture an image that is larger than the requested image data.

371. The computationally-implemented thing/operation disclosure of clause 1, wherein said transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data comprises:

transmitting the request for the particular image data of the scene to the image sensor array that includes a grouping of nonlinearly, nonsequentially arranged stationary image sensors and that is configured to capture an image that is larger than the requested image data.

372. The computationally-implemented thing/operation disclosure of clause 1, wherein said transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data comprises:

transmitting the request for the particular image data of the scene to the image sensor array that includes the array of static image sensors and that is configured to capture an image that is larger than the requested image data.

373. The computationally-implemented thing/operation disclosure of clause 46, wherein said transmitting the request for the particular image data of the scene to the image sensor array that includes the array of static image sensors and that is configured to capture an image that is larger than the requested image data comprises:

transmitting the request for the particular image data of the scene to the image sensor array that includes the array of image sensors that have a fixed focal length and a fixed field of view and that is configured to capture an image that is larger than the requested image data.

374. The computationally-implemented thing/operation disclosure of clause 1, wherein said transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data comprises:

transmitting the request for the particular image data of the scene to the image sensor array that includes the array of image sensors mounted on a movable platform and that is configured to capture the scene that is larger than the requested image data.

375. The computationally-implemented thing/operation disclosure of clause 1, wherein said transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data comprises:

transmitting the request for the particular image data of the scene to the image sensor array that includes more than one image sensor and that is configured to capture the scene that represents more image data than the requested particular image data.

376. The computationally-implemented thing/operation disclosure of clause 49, wherein said transmitting the request for the particular image data of the scene to the image sensor array that includes more than one image sensor and that is configured to capture the scene that represents more image data than the requested particular image data comprises:

transmitting the request for the particular image data of the scene to the image sensor array that includes more than one image sensor and that is configured to capture the scene that represents more than ten times as much image data as the requested particular image data.

377. The computationally-implemented thing/operation disclosure of clause 49, wherein said transmitting the request for the particular image data of the scene to the image sensor array that includes more than one image sensor and that is configured to capture the scene that represents more image data than the requested particular image data comprises:

transmitting the request for the particular image data of the scene to the image sensor array that includes more than one image sensor and that is configured to capture the scene that represents more than one hundred times as much image data as the requested particular image data.

378. The computationally-implemented thing/operation disclosure of clause 1, wherein said transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data comprises:

transmitting the request for the particular image data to the image sensor array that is configured to capture the scene that represents a greater field of view than the requested particular image data.

379. The computationally-implemented thing/operation disclosure of clause 1, wherein said transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data comprises:

modifying the request for the particular image data; and

transmitting the modified request for the particular image data to the image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data.

380. The computationally-implemented thing/operation disclosure of clause 53, wherein said modifying the request for the particular image data comprises:

removing designated image data from the request for the particular image data.

381. The computationally-implemented thing/operation disclosure of clause 54, wherein said removing designated image data from the request for the particular image data comprises:

removing designated image data from the request for the particular image data based on previously stored image data.

382. The computationally-implemented thing/operation disclosure of clause 55, wherein said removing designated image data from the request for the particular image data based on previously stored image data comprises:

removing designated image data from the request for the particular image data based on previously stored image data retrieved from the image sensor array.

383. The computationally-implemented thing/operation disclosure of clause 55, wherein said removing designated image data from the request for the particular image data based on previously stored image data comprises:

removing designated image data from the request for the particular image data based on previously stored image data that is an earlier-in-time version of the designated image data.

384. The computationally-implemented thing/operation disclosure of clause 57, wherein said removing designated image data from the request for the particular image data based on previously stored image data that is an earlier-in-time version of the designated image data comprises:

removing image data of one or more static objects from the request for the particular image data that is a view of a street, based on previously stored image data of the one or more static objects from an earlier-in-time version captured by the image sensor array.

385. The computationally-implemented thing/operation disclosure of clause 58, wherein said removing image data of one or more static objects from the request for the particular image data that is a view of a street, based on previously stored image data of the one or more static objects from an earlier-in-time version captured by the image sensor array comprises:

removing image data of one or more buildings from the request for the particular image data that is the view of the street, based on previously stored image data of the one or more static objects from an earlier-in-time version captured by the image sensor array.
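
The effect of clauses 384 and 385 on the receiving side might be sketched as follows, assuming the static objects removed from the request are later filled back in from the previously stored, earlier-in-time capture; the mask convention and the function name fill_from_cache are hypothetical:

    import numpy as np

    def fill_from_cache(received, received_mask, cached_image):
        """Static objects (e.g., buildings) were removed from the request;
        rebuild a displayable image by pasting the freshly received pixels
        over the previously stored capture."""
        out = cached_image.copy()
        out[received_mask] = received[received_mask]
        return out

    cached = np.full((6, 6), 50, dtype=np.uint8)  # earlier-in-time street view
    fresh = np.full((6, 6), 90, dtype=np.uint8)   # newly captured pixels
    mask = np.zeros((6, 6), dtype=bool)
    mask[2:4, 2:4] = True                         # only this region was requested
    rebuilt = fill_from_cache(fresh, mask, cached)
    print(rebuilt[3, 3], rebuilt[0, 0])           # 90 50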

386. The computationally-implemented thing/operation disclosure of clause 53, wherein said modifying the request for the particular image data comprises:

removing portions of the request for the particular image data through pixel interpolation or extrapolation of portions of the request for the particular image data.

387. The computationally-implemented thing/operation disclosure of clause 60, wherein said removing portions of the request for the particular image data through pixel interpolation or extrapolation of portions of the request for the particular image data comprises:

removing one or more static objects from the request for the particular image data through pixel interpolation of the portions of the request for the particular image data.
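
Pixel interpolation in the sense of clauses 386 and 387 could be as simple as the linear fill below, which synthesizes a removed rectangular strip from its bordering columns rather than requesting it; this is a crude sketch under stated assumptions, and all names are illustrative:

    import numpy as np

    def fill_by_interpolation(image, top, left, height, width):
        """Synthesize a removed rectangle by interpolating linearly between
        the columns that border it, sparing the bandwidth of requesting it."""
        out = image.astype(np.float32)
        left_col = out[top:top + height, left - 1]
        right_col = out[top:top + height, left + width]
        for i in range(width):
            t = (i + 1) / (width + 1)
            out[top:top + height, left + i] = (1 - t) * left_col + t * right_col
        return out.astype(image.dtype)

    ramp = np.tile(np.arange(16, dtype=np.uint8) * 16, (8, 1))
    print(fill_by_interpolation(ramp, 2, 4, 4, 6)[3, :12])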

388. The computationally-implemented thing/operation disclosure of clause 53, wherein said modifying the request for the particular image data comprises:

identifying at least one portion of the request for the particular image data that is already stored in a memory; and

removing the identified portion of the request for the particular image data.

389. The computationally-implemented thing/operation disclosure of clause 62, wherein said identifying at least one portion of the request for the particular image data that is already stored in a memory comprises:

identifying at least one portion of the request for particular image data that was previously captured by the image sensor array.

390. The computationally-implemented thing/operation disclosure of clause 62, wherein said identifying at least one portion of the request for the particular image data that is already stored in a memory comprises:

identifying one or more static objects of the request for particular image data that was previously captured by the image sensor array.

391. The computationally-implemented thing/operation disclosure of clause 64, wherein said identifying one or more static objects of the request for particular image data that was previously captured by the image sensor array comprises:

identifying one or more natural rock outcroppings of the request for particular image data that was previously captured by the image sensor array.

392. The computationally-implemented thing/operation disclosure of clause 1, wherein said transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data comprises:

determining a size of the request for the particular image data; and

transmitting the request for the particular image data for which the size has been determined to the image sensor array that is configured to capture the scene.

393. The computationally-implemented thing/operation disclosure of clause 66, wherein said determining a size of the request for the particular image data comprises: determining the size of the request for the particular image data at least partially based on a property of a user device that requested at least a portion of the particular image data.

394. The computationally-implemented thing/operation disclosure of clause 67, wherein said determining the size of the request for the particular image data at least partially based on a property of a user device that requested at least a portion of the particular image data comprises:

determining the size of the request for the particular image data at least partially based on a resolution of the user device that requested the particular image data.

395. The computationally-implemented method of clause 66, wherein said determining a size of the request for the particular image data comprises: determining the size of the request for the particular image data at least partially based on a device access level of a user device that requested at least a portion of the particular image data.

396. The computationally-implemented method of clause 66, wherein said determining a size of the request for the particular image data comprises: determining the size of the request for the particular image data at least partially based on an available amount of bandwidth for communication with the image sensor array.

397. The computationally-implemented method of clause 66, wherein said determining a size of the request for the particular image data comprises: determining the size of the request for the particular image data at least partially based on an amount of time that a user device that requested at least a portion of the particular image data has requested image data.

398. The computationally-implemented method of clause 66, wherein said determining a size of the request for the particular image data comprises: determining the size of the request for the particular image data at least partially based on an available amount of bandwidth for communication with a user device that requested at least a portion of the particular image data.

399. The computationally-implemented method of clause 1, wherein said receiving only the particular image data from the image sensor array comprises: receiving only the particular image data from the image sensor array, wherein data from the scene other than the particular image data is discarded.

400. The computationally-implemented method of clause 1, wherein said receiving only the particular image data from the image sensor array comprises: receiving only the particular image data from the image sensor array, wherein data from the scene other than the particular image data is stored at the image sensor array.

401. The computationally-implemented method of clause 1, wherein said receiving only the particular image data from the image sensor array comprises: receiving the particular image data from the image sensor array in near-real time.

402. The computationally-implemented method of clause 1, wherein said receiving only the particular image data from the image sensor array comprises: receiving the particular image data from the image sensor array in near-real time; and

retrieving data from the scene other than the particular image data at a later time.

403. The computationally-implemented method of clause 76, wherein said retrieving data from the scene other than the particular image data at a later time comprises:

retrieving data from the scene other than the particular image data at a time at which bandwidth is available to the image sensor array.

404. The computationally-implemented method of clause 76, wherein said retrieving data from the scene other than the particular image data at a later time comprises:

retrieving data from the scene other than the particular image data at a time that represents off-peak usage for the image sensor array.

405. The computationally-implemented method of clause 76, wherein said retrieving data from the scene other than the particular image data at a later time comprises:

retrieving data from the scene other than the particular image data at a time when no particular image data is requested.

406. The computationally-implemented method of clause 76, wherein said retrieving data from the scene other than the particular image data at a later time comprises:

retrieving data from the scene other than the particular image data at a time at which fewer users are requesting particular image data than the image sensor array has capacity for.
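By way of non-limiting illustration only, here is a minimal Python sketch of the deferred-retrieval behavior of clauses 401-406, in which the particular image data is delivered in near-real time and the rest of the scene is fetched later; the one-hour delay and all identifiers are hypothetical.

```python
# Illustrative sketch (hypothetical names): forward the particular image
# data now, queue the remainder of the scene for off-peak retrieval.
import heapq
import time

deferred = []  # min-heap of (not_before_timestamp, scene_chunk_id)

def receive(particular_chunk: str, remainder_chunks: list,
            off_peak_delay_s: float = 3600.0) -> str:
    """Return the requested data immediately; defer everything else."""
    for chunk in remainder_chunks:
        heapq.heappush(deferred, (time.time() + off_peak_delay_s, chunk))
    return particular_chunk

def drain_when_idle(current_users: int, array_capacity: int) -> list:
    """Retrieve deferred chunks only while fewer users are requesting
    data than the image sensor array has capacity for."""
    ready = []
    while deferred and deferred[0][0] <= time.time() and current_users < array_capacity:
        ready.append(heapq.heappop(deferred)[1])
    return ready
```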

407. The computationally-implemented method of clause 1, wherein said receiving only the particular image data from the image sensor array comprises: receiving only the particular image data that includes audio data from the sensor array.

408. The computationally-implemented method of clause 1, wherein said receiving only the particular image data from the image sensor array comprises: receiving only the particular image data that was determined to contain a particular requested image object from the image sensor array.

409. The computationally-implemented method of clause 82, wherein said receiving only the particular image data that was determined to contain a particular requested image object from the image sensor array comprises:

receiving only the particular image data that was determined to contain a particular requested image object by the image sensor array.

410. The computationally-implemented method of clause 1, wherein said transmitting the received particular image data to at least one requestor comprises:

transmitting the received particular image data to a user device.

411. The computationally-implemented method of clause 84, wherein said transmitting the received particular image data to a user device comprises:

transmitting at least a portion of the received particular image data to a user device that requested the particular image data.

412. The computationally-implemented method of clause 1, wherein said transmitting the received particular image data to at least one requestor comprises:

separating the received particular image data into a set of one or more requested images; and

transmitting at least one image of the set of one or more requested images to a particular requestor that requested the at least one image.

413. The computationally-implemented method of clause 86, wherein said separating the received particular image data into a set of one or more requested images comprises:

separating the received particular image data into a first requested image and a second requested image.
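By way of non-limiting illustration only, a minimal Python sketch of the separate-and-transmit operation of clauses 412-413 follows; the mapping-based representation and all names are hypothetical.

```python
# Illustrative sketch (hypothetical names): split the received particular
# image data into the individual images each requestor asked for.

def separate(received: dict, requests: dict) -> dict:
    """`received` maps image id -> pixel data; `requests` maps requestor
    id -> the image ids it requested. Returns requestor -> its images."""
    return {requestor: [received[i] for i in image_ids if i in received]
            for requestor, image_ids in requests.items()}

# A first and a second requested image go to their respective requestors.
print(separate({"img-a": b"...", "img-b": b"..."},
               {"television-1": ["img-a"], "tablet-7": ["img-b"]}))
```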

414. The computationally-implemented method of clause 1, wherein said transmitting the received particular image data to at least one requestor comprises:

transmitting the received particular image data to a user device that requested an image that is part of the received particular image data.

415. The computationally-implemented method of clause 1, wherein said transmitting the received particular image data to at least one requestor comprises:

transmitting a first portion of the received particular image data to a first requestor; and

transmitting a second portion of the received particular image data to a second requestor.

416. The computationally-implemented method of clause 89, wherein said transmitting a first portion of the received particular image data to a first requestor comprises:

transmitting the first portion of the received particular image data to the first requestor that requested the first portion of the received particular image data.

417. The computationally-implemented method of clause 90, wherein said transmitting the first portion of the received particular image data to the first requestor that requested the first portion of the received particular image data comprises:

transmitting an image that contains a particular football player to a television device that is configured to display a football game and that requested the image that contains the particular football player.

418. The computationally-implemented method of clause 89, wherein said transmitting a second portion of the received particular image data to a second requestor comprises:

transmitting the second portion of the received particular image data to the second requestor that requested the second portion of the received particular image data.

419. The computationally-implemented method of clause 92, wherein said transmitting the second portion of the received particular image data to the second requestor that requested the second portion of the received particular image data comprises:

transmitting an image that contains a view of a motor vehicle to a tablet device that requested a street view image of a particular corner of a city.

420. The computationally-implemented method of clause 1, wherein said transmitting the received particular image data to at least one requestor comprises:

transmitting at least a portion of the received particular image data without alteration to at least one requestor.

421. The computationally-implemented method of clause 1, wherein said transmitting the received particular image data to at least one requestor comprises:

adding supplemental data to at least a portion of the received particular image data to generate transmission image data; and

transmitting the generated transmission image data to at least one requestor.

422. The computationally-implemented method of clause 95, wherein said adding supplemental data to at least a portion of the received particular image data to generate transmission image data comprises:

adding advertisement data to at least a portion of the received particular image data to generate transmission image data.

423. The computationally-implemented method of clause 96, wherein said adding advertisement data to at least a portion of the received particular image data to generate transmission image data comprises:

adding context-based advertisement data that is at least partially based on the received particular image data to at least the portion of the received particular image data.

424. The computationally-implemented method of clause 97, wherein said adding context-based advertisement data that is at least partially based on the received particular image data to at least the portion of the received particular image data comprises:

adding animal rights donation fund advertisement data that is at least partially based on the received particular image data of a tiger at a jungle oasis, to the received particular image data of the tiger at the jungle oasis.
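By way of non-limiting illustration only, the following Python sketch gestures at the supplemental-data operation of clauses 421-424; real transmission image data would be composited rather than concatenated, and the catalog, subject label, and byte strings here are hypothetical.

```python
# Illustrative sketch (hypothetical names): add context-based
# advertisement data, chosen from the image's detected subject, to the
# received particular image data to generate transmission image data.

def add_supplemental(image: bytes, detected_subject: str,
                     ad_catalog: dict) -> bytes:
    """Append an advertisement keyed to the image content (a stand-in
    for compositing it into the frame)."""
    return image + ad_catalog.get(detected_subject, b"")

tiger_image = b"<tiger-at-jungle-oasis-pixels>"
payload = add_supplemental(tiger_image, "tiger",
                           {"tiger": b"<animal-rights-donation-ad>"})
```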

425. The computationally-implemented method of clause 95, wherein said adding supplemental data to at least a portion of the received particular image data to generate transmission image data comprises:

adding related visual data related to the received particular image data to at least a portion of the received particular image data to generate transmission image data.

426. The computationally-implemented method of clause 99, wherein said adding related visual data related to the received particular image data to at least a portion of the received particular image data to generate transmission image data comprises:

adding fantasy football statistical data to the received particular image data of a football quarterback that appears in the received particular image data of the quarterback in a football game to generate the transmission image data.

427. The computationally-implemented method of clause 1, wherein said transmitting the received particular image data to at least one requestor comprises:

modifying data of a portion of the received particular image data to generate transmission image data; and

transmitting the generated transmission image data to at least one requestor.

428. The computationally-implemented method of clause 101, wherein said modifying data of a portion of the received particular image data to generate transmission image data comprises:

performing image manipulation modifications of the received particular image data to generate transmission image data.

429. The computationally-implemented method of clause 102, wherein said performing image manipulation modifications of the received particular image data to generate transmission image data comprises:

performing contrast balancing on the received particular image data to generate transmission image data.

430. The computationally-implemented method of clause 102, wherein said performing image manipulation modifications of the received particular image data to generate transmission image data comprises:

performing color modification balancing on the received particular image data to generate transmission image data.

431. The computationally-implemented method of clause 101, wherein said modifying data of a portion of the received particular image data to generate transmission image data comprises:

redacting at least a portion of the received particular image data to generate the transmission image data.

432. The computationally-implemented method of clause 105, wherein said redacting at least a portion of the received particular image data to generate the transmission image data comprises:

redacting at least a portion of the received particular image data based on a security clearance level of the requestor.

433. The computationally-implemented method of clause 106, wherein said redacting at least a portion of the received particular image data based on a security clearance level of the requestor comprises:

redacting a tank from the received particular image data that includes a satellite image that includes a military base, based on an insufficient security clearance level of a device that requested the particular image data.
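By way of non-limiting illustration only, a minimal Python sketch of clearance-based redaction per clauses 431-433 follows; the object list, labels, and clearance levels are hypothetical.

```python
# Illustrative sketch (hypothetical names): drop any detected object whose
# required clearance exceeds the requestor's before transmission.

def redact(image_objects: list, requestor_clearance: int) -> list:
    """Return only the objects the requestor is cleared to receive."""
    return [obj for obj in image_objects
            if obj.get("required_clearance", 0) <= requestor_clearance]

scene_objects = [{"label": "base perimeter", "required_clearance": 1},
                 {"label": "tank", "required_clearance": 3}]
print(redact(scene_objects, requestor_clearance=1))  # the tank is redacted
```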

434. The computationally-implemented method of clause 1, wherein said transmitting the received particular image data to at least one requestor comprises:

transmitting a lower-resolution version of the received particular image data to the at least one requestor; and

transmitting a full-resolution version of the received particular image data to the at least one requestor.
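By way of non-limiting illustration only, a minimal Python sketch of the two-pass transmission of clause 434 follows; the nested-list pixel format and the downsampling factor are hypothetical.

```python
# Illustrative sketch (hypothetical names): send a lower-resolution
# version first, then the full-resolution version.

def downsample(pixels: list, factor: int = 4) -> list:
    """Keep every `factor`-th pixel in each dimension."""
    return [row[::factor] for row in pixels[::factor]]

def transmit_progressively(pixels: list, send) -> None:
    send(downsample(pixels))  # quick lower-resolution preview
    send(pixels)              # full-resolution follow-up

transmit_progressively([[0] * 16 for _ in range(16)], send=print)
```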

435. A computationally-implemented system, comprising

means for acquiring a request for particular image data that is part of a scene; means for transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data;

means for receiving only the particular image data from the image sensor array; and

means for transmitting the received particular image data to at least one requestor.

436. The computationally-implemented system of clause 109, wherein said means for acquiring a request for particular image data that is part of a scene comprises:

means for acquiring the request for particular image data of the scene that includes one or more images.

437. The computationally-implemented system of clause 109, wherein said means for acquiring a request for particular image data that is part of a scene comprises:

means for receiving the request for particular image data of the scene.

438. The computationally-implemented system of clause 111, wherein said means for receiving the request for particular image data of the scene comprises:

means for receiving the request for particular image data of the scene from a user device.

439. The computationally-implemented system of clause 112, wherein said means for receiving the request for particular image data of the scene from a user device comprises:

means for receiving the request for particular image data of the scene from a user device that is configured to display at least a portion of the scene.

440. The computationally-implemented system of clause 113, wherein said means for receiving the request for particular image data of the scene from a user device that is configured to display at least a portion of the scene comprises:

means for receiving the request for particular image data of the scene from a user device that is configured to display at least a portion of the scene in a viewfinder.

441. The computationally-implemented system of clause 112, wherein said means for receiving the request for particular image data of the scene from a user device comprises:

means for receiving the request for particular image data of the scene from the user device that is configured to receive a selection of a particular image.

442. The computationally-implemented system of clause 115, wherein said means for receiving the request for particular image data of the scene from the user device that is configured to receive a selection of a particular image comprises:

means for receiving the request for particular image data of the scene from the user device that is configured to receive the selection of the particular image, said selection based on a view of the scene.

443. The computationally-implemented system of clause 115, wherein said means for receiving the request for particular image data of the scene from the user device that is configured to receive a selection of a particular image comprises:

means for receiving the request for particular image data of the scene from the user device that is configured to receive the selection of the particular image, wherein the user device is one or more of a smartphone, television, computer screen, tablet, camera, appliance, and augmented reality device.

444. The computationally-implemented system of clause 109, wherein said means for acquiring a request for particular image data that is part of a scene comprises:

means for acquiring the request for particular image data of the scene, wherein the scene is the image data collected by the array of more than one image sensor.

445. The computationally-implemented system of clause 109, wherein said means for acquiring a request for particular image data that is part of a scene comprises:

means for acquiring the request for particular image data of the scene, wherein the scene is a representation of the image data collected by the array of more than one image sensor.

446. The computationally-implemented system of clause 119, wherein said means for acquiring the request for particular image data of the scene, wherein the scene is a representation of the image data collected by the array of more than one image sensor comprises:

means for acquiring the request for particular image data of the scene, wherein the scene is a sampling of the image data collected by the array of more than one image sensor.

447. The computationally-implemented system of clause 119, wherein said means for acquiring the request for particular image data of the scene, wherein the scene is a representation of the image data collected by the array of more than one image sensor comprises:

means for acquiring the request for particular image data of the scene, wherein the scene is a subset of the image data collected by the array of more than one image sensor.

448. The computationally-implemented system of clause 119, wherein said means for acquiring the request for particular image data of the scene, wherein the scene is a representation of the image data collected by the array of more than one image sensor comprises:

means for acquiring the request for particular image data of the scene, wherein the scene is a low-resolution version of the image data collected by the array of more than one image sensor.

449. The computationally-implemented system of clause 109, wherein said means for acquiring a request for particular image data that is part of a scene comprises:

means for acquiring the request for particular image data of a scene that is a football game.

450. The computationally-implemented system of clause 109, wherein said means for acquiring a request for particular image data that is part of a scene comprises:

means for acquiring the request for particular image data of a scene that is a street view of an area.

451. The computationally-implemented system of clause 109, wherein said means for acquiring a request for particular image data that is part of a scene comprises:

means for acquiring the request for particular image data of a scene that is a tourist destination.

452. The computationally-implemented system of clause 109, wherein said means for acquiring a request for particular image data that is part of a scene comprises:

means for acquiring the request for particular image data of a scene that is an inside of a home.

453. The computationally-implemented system of clause 109, wherein said means for acquiring a request for particular image data that is part of a scene comprises:

means for acquiring the request for particular image data of the scene, wherein the particular image data is an image that is a portion of the scene.

454. The computationally-implemented system of clause 127, wherein said means for acquiring the request for particular image data of the scene, wherein the particular image data is an image that is a portion of the scene comprises: means for acquiring the request for particular image data of the scene, wherein the particular image data includes image data of a particular football player and the scene is a football field.

455. The computationally-implemented system of clause 127, wherein said means for acquiring the request for particular image data of the scene, wherein the particular image data is an image that is a portion of the scene comprises: means for acquiring the request for particular image data of the scene, wherein the particular image data includes image data of a license plate of a vehicle, and the scene is an image representation of a highway bridge.

456. The computationally-implemented system of clause 109, wherein said means for acquiring a request for particular image data that is part of a scene comprises:

means for acquiring a request for a particular image object located in the scene; and

means for determining the particular image data of the scene that contains the particular image object.

457. The computationally-implemented system of clause 130, wherein said means for acquiring a request for a particular image object located in the scene comprises:

means for acquiring a request for a particular person located in the scene.

458. The computationally-implemented system of clause 130, wherein said means for acquiring a request for a particular image object located in the scene comprises:

means for acquiring a request for a basketball located in the scene that is a basketball arena.

459. The computationally-implemented system of clause 130, wherein said means for acquiring a request for a particular image object located in the scene comprises:

means for acquiring a request for a motor vehicle located in the scene.

460. The computationally-implemented system of clause 130, wherein said means for acquiring a request for a particular image object located in the scene comprises:

means for acquiring a request for any human object representations located in the scene.

461. The computationally-implemented system of clause 130, wherein said means for determining the particular image data of the scene that contains the particular image object comprises:

means for determining the particular image data of the scene that contains the particular image object through application of automated pattern recognition to scene image data.

462. The computationally-implemented system of clause 130, wherein said means for determining the particular image data of the scene that contains the particular image object comprises:

means for determining the particular image data of the scene that contains the particular image object through identification of the particular image object in previous scene data that is image data of the scene from at least one previous moment in time.

463. The computationally-implemented system of clause 130, wherein said means for determining the particular image data of the scene that contains the particular image object comprises:

means for determining the particular image data of the scene that contains the particular image object through identification of the particular image object in cached scene data.

464. The computationally-implemented system of clause 137, wherein said means for determining the particular image data of the scene that contains the particular image object through identification of the particular image object in cached scene data comprises:

means for determining the particular image data of the scene that contains the particular image object through identification of the particular image object in cached scene data that was previously transmitted from the image sensor array that includes more than one image sensor.

465. The computationally-implemented system of clause 137, wherein said means for determining the particular image data of the scene that contains the particular image object through identification of the particular image object in cached scene data comprises:

means for determining the particular image data of the scene that contains the particular image object through identification of the particular image object in cached scene data that was previously transmitted from the image sensor array that includes more than one image sensor at a time when bandwidth was available for a connection to the image sensor array that includes more than one image sensor.
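By way of non-limiting illustration only, a minimal Python sketch of object location using cached scene data per clauses 461-465 follows; the cache layout and the `detect` stand-in are hypothetical.

```python
# Illustrative sketch (hypothetical names): find the image data containing
# a requested object, consulting previously transmitted cached scene data
# before falling back to pattern recognition on fresh data.

def locate_object(object_id: str, cached_locations: dict, detect):
    """Return the object's bounding box (x, y, w, h)."""
    if object_id in cached_locations:
        return cached_locations[object_id]
    return detect(object_id)  # stand-in for automated pattern recognition

box = locate_object("basketball", {"basketball": (320, 180, 64, 64)},
                    detect=lambda _id: None)
```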

466. The computationally-implemented system of clause 109, wherein said means for acquiring a request for particular image data that is part of a scene comprises:

means for receiving a first request for first particular image data from the scene from a first requestor; and

means for receiving a second request for second particular image data from the scene from a second requestor that is different than the first requestor.

467. The computationally-implemented system of clause 109, wherein said means for acquiring a request for particular image data that is part of a scene comprises:

means for combining a received first request for first particular image data from the scene and a received second request for second particular image data from the scene.

468. The computationally-implemented system of clause 141, wherein said means for combining a received first request for first particular image data from the scene and a received second request for second particular image data from the scene comprises:

means for combining the received first request for first particular image data from the scene and the received second request for second particular image data from the scene into the request for particular image data that consolidates overlapping requested image data.

469. The computationally-implemented system of clause 142, wherein said means for acquiring a request for particular image data that is part of a scene comprises:

means for receiving a first request for first particular image data from the scene and a second request for second particular image data from the scene; and means for combining the received first request and the received second request into the request for particular image data.

470. The computationally-implemented system of clause 143, wherein said means for combining the received first request and the received second request into the request for particular image data comprises:

means for removing common pixel data between the received first request and the received second request.
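By way of non-limiting illustration only, a minimal Python sketch of request consolidation per clauses 467-470 follows; pixel-coordinate sets are used purely for clarity, where a practical consolidation would more likely operate on rectangles or tiles.

```python
# Illustrative sketch (hypothetical names): combine a first and a second
# request so common pixel data is requested from the array only once.

def combine(first: set, second: set) -> set:
    """Union of both requests; set semantics remove the common pixels."""
    return first | second

a = {(x, y) for x in range(0, 10) for y in range(0, 10)}   # 100 pixels
b = {(x, y) for x in range(5, 15) for y in range(0, 10)}   # 100 pixels
print(len(combine(a, b)))  # 150, not 200: the overlap is deduplicated
```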

471. The computationally-implemented system of clause 109, wherein said means for acquiring a request for particular image data that is part of a scene comprises:

means for acquiring the request for particular image data that is part of the scene, wherein the particular image data includes video data.

472. The computationally-implemented system of clause 109, wherein said means for acquiring a request for particular image data that is part of a scene comprises:

means for acquiring the request for particular image data that is part of the scene, wherein the particular image data includes audio data.

473. The computationally-implemented system of clause 109, wherein said means for acquiring a request for particular image data that is part of a scene comprises:

means for acquiring the request for particular image data that is part of the scene from a user device that receives the request for particular image data through an audio interface.

474. The computationally-implemented system of clause 147, wherein said means for acquiring the request for particular image data that is part of the scene from a user device that receives the request for particular image data through an audio interface comprises:

means for acquiring the request for particular image data that is part of the scene from a user device that has a microphone that receives a spoken request for particular image data from the user.

475. The computationally-implemented system of clause 109, wherein said means for transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data comprises:

means for transmitting the request for the particular image data of the scene to the image sensor array that includes more than one image sensor and that is configured to capture an image that is larger than the requested image data.

476. The computationally-implemented system of clause 109, wherein said means for transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data comprises:

means for transmitting the request for the particular image data of the scene to the image sensor array that includes two image sensors arranged side by side and angled toward each other and that is configured to capture an image that is larger than the requested image data.

477. The computationally-implemented system of clause 109, wherein said means for transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data comprises:

means for transmitting the request for the particular image data of the scene to the image sensor array that includes the array of image sensors arranged in a grid and that is configured to capture an image that is larger than the requested image data.

478. The computationally-implemented system of clause 109, wherein said means for transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data comprises:

means for transmitting the request for the particular image data of the scene to the image sensor array that includes the array of image sensors arranged in a line such that a field of view is greater than 120 degrees and that is configured to capture an image that is larger than the requested image data.

479. The computationally-implemented system of clause 109, wherein said means for transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data comprises:

means for transmitting the request for the particular image data of the scene to the image sensor array that includes a grouping of nonlinearly, nonsequentially arranged stationary image sensors and that is configured to capture an image that is larger than the requested image data.

480. The computationally-implemented system of clause 109, wherein said means for transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data comprises:

means for transmitting the request for the particular image data of the scene to the image sensor array that includes the array of static image sensors and that is configured to capture an image that is larger than the requested image data.

481. The computationally-implemented system of clause 154, wherein said means for transmitting the request for the particular image data of the scene to the image sensor array that includes the array of static image sensors and that is configured to capture an image that is larger than the requested image data comprises:

means for transmitting the request for the particular image data of the scene to the image sensor array that includes the array of image sensors that have a fixed focal length and a fixed field of view and that is configured to capture an image that is larger than the requested image data.

482. The computationally-implemented system of clause 109, wherein said means for transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data comprises:

means for transmitting the request for the particular image data of the scene to the image sensor array that includes the array of image sensors mounted on a movable platform and that is configured to capture the scene that is larger than the requested image data.

483. The computationally-implemented system of clause 109, wherein said means for transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data comprises:

means for transmitting the request for the particular image data of the scene to the image sensor array that includes more than one image sensor and that is configured to capture the scene that represents more image data than the requested particular image data.

484. The computationally-implemented system of clause 157, wherein said means for transmitting the request for the particular image data of the scene to the image sensor array that includes more than one image sensor and that is configured to capture the scene that represents more image data than the requested particular image data comprises:

means for transmitting the request for the particular image data of the scene to the image sensor array that includes more than one image sensor and that is configured to capture the scene that represents more than ten times as much image data as the requested particular image data.

485. The computationally-implemented system of clause 157, wherein said means for transmitting the request for the particular image data of the scene to the image sensor array that includes more than one image sensor and that is configured to capture the scene that represents more image data than the requested particular image data comprises:

means for transmitting the request for the particular image data of the scene to the image sensor array that includes more than one image sensor and that is configured to capture the scene that represents more than one hundred times as much image data as the requested particular image data.

486. The computationally-implemented system of clause 109, wherein said means for transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data comprises: means for transmitting the request for the particular image data to the image sensor array that is configured to capture the scene that represents a greater field of view than the requested particular image data.

487. The computationally-implemented system of clause 109, wherein said means for transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data comprises: means for modifying the request for the particular image data; and

means for transmitting the modified request for the particular image data to the image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data.

488. The computationally-implemented system of clause 161, wherein said means for modifying the request for the particular image data comprises: means for removing designated image data from the request for the particular image data.

489. The computationally-implemented system of clause 162, wherein said means for removing designated image data from the request for the particular image data comprises:

means for removing designated image data from the request for the particular image data based on previously stored image data.

490. The computationally-implemented system of clause 163, wherein said means for removing designated image data from the request for the particular image data based on previously stored image data comprises:

means for removing designated image data from the request for the particular image data based on previously stored image data retrieved from the image sensor array.

491. The computationally-implemented system of clause 163, wherein said means for removing designated image data from the request for the particular image data based on previously stored image data comprises:

means for removing designated image data from the request for the particular image data based on previously stored image data that is an earlier-in-time version of the designated image data.

492. The computationally-implemented system of clause 165, wherein said means for removing designated image data from the request for the particular image data based on previously stored image data that is an earlier-in-time version of the designated image data comprises:

means for removing image data of one or more static objects from the request for the particular image data that is a view of a street, based on previously stored image data of the one or more static objects from an earlier-in-time version captured by the image sensor array.

493. The computationally-implemented system of clause 166, wherein said means for removing image data of one or more static objects from the request for the particular image data that is a view of a street, based on previously stored image data of the one or more static objects from an earlier-in-time version captured by the image sensor array comprises:

means for removing image data of one or more buildings from the request for the particular image data that is the view of the street, based on previously stored image data of the one or more static objects from an earlier-in-time version captured by the image sensor array.

494. The computationally-implemented system of clause 161, wherein said means for modifying the request for the particular image data comprises: means for removing portions of the request for the particular image data through pixel interpolation or extrapolation of portions of the request for the particular image data.

495. The computationally-implemented system of clause 168, wherein said means for removing portions of the request for the particular image data through pixel interpolation or extrapolation of portions of the request for the particular image data comprises:

means for removing one or more static objects from the request for the particular image data through pixel interpolation of the portions of the request for the particular image data.

496. The computationally-implemented system of clause 161, wherein said means for modifying the request for the particular image data comprises: means for identifying at least one portion of the request for the particular image data that is already stored in a memory; and

means for removing the identified portion of the request for the particular image data.

497. The computationally-implemented system of clause 170, wherein said means for identifying at least one portion of the request for the particular image data that is already stored in a memory comprises:

means for identifying at least one portion of the request for particular image data that was previously captured by the image sensor array.

498. The computationally-implemented system of clause 170, wherein said means for identifying at least one portion of the request for the particular image data that is already stored in a memory comprises:

means for identifying one or more static objects of the request for particular image data that was previously captured by the image sensor array.

499. The computationally-implemented system of clause 172, wherein said means for identifying one or more static objects of the request for particular image data that was previously captured by the image sensor array comprises:

means for identifying one or more natural rock outcroppings of the request for particular image data that was previously captured by the image sensor array.

500. The computationally-implemented system of clause 109, wherein said means for transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data comprises:

means for determining a size of the request for the particular image data; and means for transmitting the request for the particular image data for which the size has been determined to the image sensor array that is configured to capture the scene.

501. The computationally-implemented system of clause 174, wherein said means for determining a size of the request for the particular image data comprises:

means for determining the size of the request for the particular image data at least partially based on a property of a user device that requested at least a portion of the particular image data.

502. The computationally-implemented system of clause 175, wherein said means for determining the size of the request for the particular image data at least partially based on a property of a user device that requested at least a portion of the particular image data comprises:

means for determining the size of the request for the particular image data at least partially based on a resolution of the user device that requested the particular image data.

503. The computationally-implemented system of clause 174, wherein said means for determining a size of the request for the particular image data comprises:

means for determining the size of the request for the particular image data at least partially based on a device access level of a user device that requested at least a portion of the particular image data.

504. The computationally-implemented system of clause 174, wherein said means for determining a size of the request for the particular image data comprises:

means for determining the size of the request for the particular image data at least partially based on an available amount of bandwidth for communication with the image sensor array.

505. The computationally-implemented system of clause 174, wherein said means for determining a size of the request for the particular image data comprises:

means for determining the size of the request for the particular image data at least partially based on an amount of time that a user device that requested at least a portion of the particular image data has requested image data.

506. The computationally-implemented system of clause 174, wherein said means for determining a size of the request for the particular image data comprises:

means for determining the size of the request for the particular image data at least partially based on an available amount of bandwidth for communication with a user device that requested at least a portion of the particular image data.

507. The computationally-implemented system of clause 109, wherein said means for receiving only the particular image data from the image sensor array comprises:

means for receiving only the particular image data from the image sensor array, wherein data from the scene other than the particular image data is discarded.

508. The computationally-implemented system of clause 109, wherein said means for receiving only the particular image data from the image sensor array comprises:

means for receiving only the particular image data from the image sensor array, wherein data from the scene other than the particular image data is stored at the image sensor array.

509. The computationally-implemented system of clause 109, wherein said means for receiving only the particular image data from the image sensor array comprises:

means for receiving the particular image data from the image sensor array in near-real time.

510. The computationally-implemented system of clause 109, wherein said means for receiving only the particular image data from the image sensor array comprises:

means for receiving the particular image data from the image sensor array in near-real time; and

means for retrieving data from the scene other than the particular image data at a later time.

511. The computationally-implemented system of clause 184, wherein said means for retrieving data from the scene other than the particular image data at a later time comprises:

means for retrieving data from the scene other than the particular image data at a time at which bandwidth is available to the image sensor array.

512. The computationally-implemented system of clause 184, wherein said means for retrieving data from the scene other than the particular image data at a later time comprises:

means for retrieving data from the scene other than the particular image data at a time that represents off-peak usage for the image sensor array.

513. The computationally-implemented system of clause 184, wherein said means for retrieving data from the scene other than the particular image data at a later time comprises:

means for retrieving data from the scene other than the particular image data at a time when no particular image data is requested.

514. The computationally-implemented system of clause 184, wherein said means for retrieving data from the scene other than the particular image data at a later time comprises:

means for retrieving data from the scene other than the particular image data at a time at which fewer users are requesting particular image data than the image sensor array has capacity for.

515. The computationally-implemented system of clause 109, wherein said means for receiving only the particular image data from the image sensor array comprises:

means for receiving only the particular image data that includes audio data from the sensor array.

516. The computationally-implemented system of clause 109, wherein said means for receiving only the particular image data from the image sensor array comprises:

means for receiving only the particular image data that was determined to contain a particular requested image object from the image sensor array.

517. The computationally-implemented system of clause 190, wherein said means for receiving only the particular image data that was determined to contain a particular requested image object from the image sensor array comprises:

means for receiving only the particular image data that was determined to contain a particular requested image object by the image sensor array.

518. The computationally-implemented system of clause 109, wherein said means for transmitting the received particular image data to at least one requestor comprises:

means for transmitting the received particular image data to a user device.

519. The computationally-implemented system of clause 192, wherein said means for transmitting the received particular image data to a user device comprises:

means for transmitting at least a portion of the received particular image data to a user device that requested the particular image data.

520. The computationally-implemented system of clause 109, wherein said means for transmitting the received particular image data to at least one requestor comprises:

means for separating the received particular image data into a set of one or more requested images; and

means for transmitting at least one image of the set of one or more requested images to a particular requestor that requested the at least one image.

521. The computationally-implemented system of clause 194, wherein said means for separating the received particular image data into a set of one or more requested images comprises:

means for separating the received particular image data into a first requested image and a second requested image.

522. The computationally-implemented system of clause 109, wherein said means for transmitting the received particular image data to at least one requestor comprises:

means for transmitting the received particular image data to a user device that requested an image that is part of the received particular image data.

523. The computationally-implemented system of clause 109, wherein said means for transmitting the received particular image data to at least one requestor comprises:

means for transmitting a first portion of the received particular image data to a first requestor; and

means for transmitting a second portion of the received particular image data to a second requestor.

524. The computationally-implemented system of clause 197, wherein said means for transmitting a first portion of the received particular image data to a first requestor comprises:

means for transmitting the first portion of the received particular image data to the first requestor that requested the first portion of the received particular image data.

525. The computationally-implemented system of clause 198, wherein said means for transmitting the first portion of the received particular image data to the first requestor that requested the first portion of the received particular image data comprises:

means for transmitting an image that contains a particular football player to a television device that is configured to display a football game and that requested the image that contains the particular football player.

526. The computationally-implemented system of clause 197, wherein said means for transmitting a second portion of the received particular image data to a second requestor comprises:

means for transmitting the second portion of the received particular image data to the second requestor that requested the second portion of the received particular image data.

527. The computationally-implemented system of clause 200, wherein said means for transmitting the second portion of the received particular image data to the second requestor that requested the second portion of the received particular image data comprises:

means for transmitting an image that contains a view of a motor vehicle to a tablet device that requested a street view image of a particular corner of a city.

528. The computationally-implemented system of clause 109, wherein said means for transmitting the received particular image data to at least one requestor comprises:

means for transmitting at least a portion of the received particular image data without alteration to at least one requestor.

529. The computationally-implemented system of clause 109, wherein said means for transmitting the received particular image data to at least one requestor comprises:

means for adding supplemental data to at least a portion of the received particular image data to generate transmission image data; and

means for transmitting the generated transmission image data to at least one requestor.

530. The computationally-implemented system of clause 203, wherein said means for adding supplemental data to at least a portion of the received particular image data to generate transmission image data comprises:

means for adding advertisement data to at least a portion of the received particular image data to generate transmission image data.

531. The computationally-implemented system of clause 204, wherein said means for adding advertisement data to at least a portion of the received particular image data to generate transmission image data comprises:

means for adding context-based advertisement data that is at least partially based on the received particular image data to at least the portion of the received particular image data.

532. The computationally-implemented system of clause 205, wherein said means for adding context-based advertisement data that is at least partially based on the received particular image data to at least the portion of the received particular image data comprises:

means for adding animal rights donation fund advertisement data that is at least partially based on the received particular image data of a tiger at a jungle oasis, to the received particular image data of the tiger at the jungle oasis.

533. The computationally-implemented system of clause 203, wherein said means for adding supplemental data to at least a portion of the received particular image data to generate transmission image data comprises:

means for adding related visual data related to the received particular image data to at least a portion of the received particular image data to generate transmission image data.

534. The computationally-implemented system of clause 207, wherein said means for adding related visual data related to the received particular image data to at least a portion of the received particular image data to generate transmission image data comprises:

means for adding fantasy football statistical data to the received particular image data of a football quarterback that appears in the received particular image data of the quarterback in a football game to generate the transmission image data.
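
By way of illustration only, clauses 529 through 534 can be read as describing a combiner that appends supplemental data (for example, advertisement data or related statistical data) to the received particular image data to produce "transmission image data." The following minimal Python sketch shows one way such a combiner could be structured; the names `TransmissionImage` and `add_supplemental` are hypothetical and form no part of the disclosure.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TransmissionImage:
    # Raw pixel payload received from the image sensor array.
    pixels: bytes
    # Supplemental items (e.g., advertisement text or statistics)
    # appended before transmission to the requestor.
    supplements: List[str] = field(default_factory=list)

def add_supplemental(image: bytes, supplement: str) -> TransmissionImage:
    """Attach supplemental data to received image data, yielding
    transmission image data in the sense of clause 529."""
    return TransmissionImage(pixels=image, supplements=[supplement])

# Example: a context-based supplement keyed off image content (clause 531).
received = b"\x00" * 16  # stand-in for received particular image data
tx = add_supplemental(received, "animal rights donation fund advertisement")
print(tx.supplements)
```

Keeping the supplement separate from the pixel payload leaves the received particular image data itself unaltered, consistent with clause 528's unaltered-transmission variant.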

535. The computationally-implemented thing/operation disclosure of clause 109, wherein said means for transmitting the received particular image data to at least one requestor comprises:

means for modifying data of a portion of the received particular image data to generate transmission image data; and

means for transmitting the generated transmission image data to at least one requestor.

536. The computationally-implemented thing/operation disclosure of clause 209, wherein said means for modifying data of a portion of the received particular image data to generate transmission image data comprises:

means for performing image manipulation modifications of the received particular image data to generate transmission image data.

537. The computationally-implemented thing/operation disclosure of clause 210, wherein said means for performing image manipulation modifications of the received particular image data to generate transmission image data comprises:

means for performing contrast balancing on the received particular image data to generate transmission image data.

538. The computationally-implemented thing/operation disclosure of clause 210, wherein said means for performing image manipulation modifications of the received particular image data to generate transmission image data comprises:

means for performing color modification balancing on the received particular image data to generate transmission image data.

539. The computationally-implemented thing/operation disclosure of clause 209, wherein said means for modifying data of a portion of the received particular image data to generate transmission image data comprises:

means for redacting at least a portion of the received particular image data to generate the transmission image data.

540. The computationally-implemented thing/operation disclosure of clause 213, wherein said means for redacting at least a portion of the received particular image data to generate the transmission image data comprises:

means for redacting at least a portion of the received particular image data based on a security clearance level of the requestor.

541. The computationally-implemented thing/operation disclosure of clause 214, wherein said means for redacting at least a portion of the received particular image data based on a security clearance level of the requestor comprises:

means for redacting a tank from the received particular image data that includes a satellite image that includes a military base, based on an insufficient security clearance level of a device that requested the particular image data.
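
By way of illustration only, the redaction clauses 539 through 541 can be read as a filter that blanks a sensitive region whenever the requestor's security clearance is below the level required for that region. A minimal Python sketch under that assumed model follows; the `redact` helper and its clearance parameters are hypothetical.

```python
from typing import List, Tuple

Region = Tuple[int, int, int, int]  # (top, left, height, width)

def redact(image: List[List[int]], region: Region,
           requestor_clearance: int, required_clearance: int) -> List[List[int]]:
    """Zero out a sensitive region (e.g., a tank on a military base)
    when the requestor's clearance is insufficient."""
    if requestor_clearance >= required_clearance:
        return image  # clearance sufficient; no redaction needed
    top, left, h, w = region
    out = [row[:] for row in image]  # copy so the original is untouched
    for r in range(top, top + h):
        for c in range(left, left + w):
            out[r][c] = 0  # blacked-out pixel
    return out

img = [[255] * 8 for _ in range(8)]
print(redact(img, (2, 2, 3, 3), requestor_clearance=1, required_clearance=3)[2])
```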

542. The computationally-implemented thing/operation disclosure of clause 109, wherein said means for transmitting the received particular image data to at least one requestor comprises:

means for transmitting a lower-resolution version of the received particular image data to the at least one requestor; and

means for transmitting a full-resolution version of the received particular image data to the at least one requestor.
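
Clause 542's two-step delivery (a lower-resolution version followed by a full-resolution version) can be illustrated with simple pixel decimation, as in the Python sketch below. The stride-based downsampling is an assumption made for illustration, not a technique stated in the disclosure.

```python
from typing import Iterator, List

def progressive_versions(image: List[List[int]],
                         stride: int = 2) -> Iterator[List[List[int]]]:
    """Yield a lower-resolution preview (every `stride`-th pixel in each
    dimension) first, then the full-resolution image."""
    yield [row[::stride] for row in image[::stride]]  # quick preview
    yield image                                        # full resolution

full = [[r * 10 + c for c in range(4)] for r in range(4)]
preview, final = progressive_versions(full)
print(preview)  # [[0, 2], [20, 22]]
```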

543. A computationally-implemented thing/operation disclosure, comprising

circuitry for acquiring a request for particular image data that is part of a scene;

circuitry for transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data;

circuitry for receiving only the particular image data from the image sensor array; and

circuitry for transmitting the received particular image data to at least one requestor.

544. The computationally-implemented thing/operation disclosure of clause 217, wherein said circuitry for acquiring a request for particular image data that is part of a scene comprises:

circuitry for acquiring the request for particular image data of the scene that includes one or more images.

545. The computationally-implemented thing/operation disclosure of clause 217, wherein said circuitry for acquiring a request for particular image data that is part of a scene comprises:

circuitry for receiving the request for particular image data of the scene.

546. The computationally-implemented thing/operation disclosure of clause 219, wherein said circuitry for receiving the request for particular image data of the scene comprises:

circuitry for receiving the request for particular image data of the scene from a user device.

547. The computationally-implemented thing/operation disclosure of clause 220, wherein said circuitry for receiving the request for particular image data of the scene from a user thing/operation disclosure comprises:

circuitry for receiving the request for particular image data of the scene from a user device that is configured to display at least a portion of the scene.

548. The computationally-implemented thing/operation disclosure of clause 221, wherein said circuitry for receiving the request for particular image data of the scene from a user thing/operation disclosure that is configured to display at least a portion of the scene comprises:

circuitry for receiving the request for particular image data of the scene from a user device that is configured to display at least a portion of the scene in a viewfinder.

549. The computationally-implemented thing/operation disclosure of clause 220, wherein said circuitry for receiving the request for particular image data of the scene from a user thing/operation disclosure comprises:

circuitry for receiving the request for particular image data of the scene from the user device that is configured to receive a selection of a particular image.

550. The computationally-implemented thing/operation disclosure of clause 223, wherein said circuitry for receiving the request for particular image data of the scene from the user thing/operation disclosure that is configured to receive a selection of a particular image comprises:

circuitry for receiving the request for particular image data of the scene from the user device that is configured to receive the selection of the particular image, said selection based on a view of the scene.

551. The computationally-implemented thing/operation disclosure of clause 223, wherein said circuitry for receiving the request for particular image data of the scene from the user thing/operation disclosure that is configured to receive a selection of a particular image comprises:

circuitry for receiving the request for particular image data of the scene from the user device that is configured to receive the selection of the particular image, wherein the user device is one or more of a smartphone, television, computer screen, tablet, camera, appliance, and augmented reality device.

552. The computationally-implemented thing/operation disclosure of clause 217, wherein said circuitry for acquiring a request for particular image data that is part of a scene comprises:

circuitry for acquiring the request for particular image data of the scene, wherein the scene is the image data collected by the array of more than one image sensor.

553. The computationally-implemented thing/operation disclosure of clause 217, wherein said circuitry for acquiring a request for particular image data that is part of a scene comprises:

circuitry for acquiring the request for particular image data of the scene, wherein the scene is a representation of the image data collected by the array of more than one image sensor.

554. The computationally-implemented thing/operation disclosure of clause 227, wherein said circuitry for acquiring the request for particular image data of the scene, wherein the scene is a representation of the image data collected by the array of more than one image sensor comprises:

circuitry for acquiring the request for particular image data of the scene, wherein the scene is a sampling of the image data collected by the array of more than one image sensor.

555. The computationally-implemented thing/operation disclosure of clause 227, wherein said circuitry for acquiring the request for particular image data of the scene, wherein the scene is a representation of the image data collected by the array of more than one image sensor comprises:

circuitry for acquiring the request for particular image data of the scene, wherein the scene is a subset of the image data collected by the array of more than one image sensor.

556. The computationally-implemented thing/operation disclosure of clause 227, wherein said circuitry for acquiring the request for particular image data of the scene, wherein the scene is a representation of the image data collected by the array of more than one image sensor comprises:

circuitry for acquiring the request for particular image data of the scene, wherein the scene is a low-resolution version of the image data collected by the array of more than one image sensor.

557. The computationally-implemented thing/operation disclosure of clause 217, wherein said circuitry for acquiring a request for particular image data that is part of a scene comprises:

circuitry for acquiring the request for particular image data of a scene that is a football game.

558. The computationally-implemented thing/operation disclosure of clause 217, wherein said circuitry for acquiring a request for particular image data that is part of a scene comprises:

circuitry for acquiring the request for particular image data of a scene that is a street view of an area.

559. The computationally-implemented thing/operation disclosure of clause 217, wherein said circuitry for acquiring a request for particular image data that is part of a scene comprises:

circuitry for acquiring the request for particular image data of a scene that is a tourist destination.

560. The computationally-implemented thing/operation disclosure of clause 217, wherein said circuitry for acquiring a request for particular image data that is part of a scene comprises:

circuitry for acquiring the request for particular image data of a scene that is an inside of a home.

561. The computationally-implemented thing/operation disclosure of clause 217, wherein said circuitry for acquiring a request for particular image data that is part of a scene comprises:

circuitry for acquiring the request for particular image data of the scene, wherein the particular image data is an image that is a portion of the scene.

562. The computationally-implemented thing/operation disclosure of clause 235, wherein said circuitry for acquiring the request for particular image data of the scene, wherein the particular image data is an image that is a portion of the scene comprises:

circuitry for acquiring the request for particular image data of the scene, wherein the particular image data includes image data of a particular football player and the scene is a football field.

563. The computationally-implemented thing/operation disclosure of clause 235, wherein said circuitry for acquiring the request for particular image data of the scene, wherein the particular image data is an image that is a portion of the scene comprises:

circuitry for acquiring the request for particular image data of the scene, wherein the particular image data includes image data of a license plate of a vehicle, and the scene is an image representation of a highway bridge.

564. The computationally-implemented thing/operation disclosure of clause 217, wherein said circuitry for acquiring a request for particular image data that is part of a scene comprises:

circuitry for acquiring a request for a particular image object located in the scene; and

circuitry for determining the particular image data of the scene that contains the particular image object.

565. The computationally-implemented thing/operation disclosure of clause 238, wherein said circuitry for acquiring a request for a particular image object located in the scene comprises:

circuitry for acquiring a request for a particular person located in the scene.

566. The computationally-implemented thing/operation disclosure of clause 238, wherein said circuitry for acquiring a request for a particular image object located in the scene comprises:

circuitry for acquiring a request for a basketball located in the scene that is a basketball arena.

567. The computationally-implemented thing/operation disclosure of clause 238, wherein said circuitry for acquiring a request for a particular image object located in the scene comprises:

circuitry for acquiring a request for a motor vehicle located in the scene.

568. The computationally-implemented thing/operation disclosure of clause 238, wherein said circuitry for acquiring a request for a particular image object located in the scene comprises:

circuitry for acquiring a request for any human object representations located in the scene.

569. The computationally-implemented thing/operation disclosure of clause 238, wherein said circuitry for determining the particular image data of the scene that contains the particular image object comprises:

circuitry for determining the particular image data of the scene that contains the particular image object through application of automated pattern recognition to scene image data.
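
Clause 569's automated pattern recognition over scene image data could take many forms; the Python sketch below uses naive exhaustive template matching purely for illustration. A practical detector would be far more robust, and `find_object` is a hypothetical name.

```python
from typing import List, Optional, Tuple

def find_object(scene: List[List[int]],
                template: List[List[int]]) -> Optional[Tuple[int, int]]:
    """Return the (row, col) where the template exactly matches the
    scene, or None if the requested image object is not present."""
    th, tw = len(template), len(template[0])
    sh, sw = len(scene), len(scene[0])
    for r in range(sh - th + 1):
        for c in range(sw - tw + 1):
            if all(scene[r + i][c + j] == template[i][j]
                   for i in range(th) for j in range(tw)):
                return (r, c)
    return None

scene = [[0, 0, 0], [0, 7, 8], [0, 9, 1]]
print(find_object(scene, [[7, 8], [9, 1]]))  # (1, 1)
```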

570. The computationally-implemented thing/operation disclosure of clause 238, wherein said circuitry for determining the particular image data of the scene that contains the particular image object comprises:

circuitry for determining the particular image data of the scene that contains the particular image object through identification of the particular image object in previous scene data that is image data of the scene from at least one previous moment in time.

571. The computationally-implemented thing/operation disclosure of clause 238, wherein said circuitry for determining the particular image data of the scene that contains the particular image object comprises:

circuitry for determining the particular image data of the scene that contains the particular image object through identification of the particular image object in cached scene data.

572. The computationally-implemented thing/operation disclosure of clause 245, wherein said circuitry for determining the particular image data of the scene that contains the particular image object through identification of the particular image object in cached scene data comprises:

circuitry for determining the particular image data of the scene that contains the particular image object through identification of the particular image object in cached scene data that was previously transmitted from the image sensor array that includes more than one image sensor.

573. The computationally-implemented thing/operation disclosure of clause 245, wherein said circuitry for determining the particular image data of the scene that contains the particular image object through identification of the particular image object in cached scene data comprises:

circuitry for determining the particular image data of the scene that contains the particular image object through identification of the particular image object in cached scene data that was previously transmitted from the image sensor array that includes more than one image sensor at a time when bandwidth was available for a connection to the image sensor array that includes more than one image sensor.
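
Clauses 571 through 573 describe locating the requested image object in cached scene data that was previously transmitted from the image sensor array, for example while bandwidth was available. Below is a minimal cache-first lookup in Python, assuming a caller-supplied transport function; all identifiers are hypothetical.

```python
from typing import Callable, Dict, Optional

# Scene regions previously transmitted from the image sensor array.
_scene_cache: Dict[str, bytes] = {}

def get_particular_image_data(region_id: str,
                              fetch_from_array: Callable[[str], bytes]) -> bytes:
    """Serve a region from cached scene data when possible; otherwise
    request it from the image sensor array and cache the result."""
    cached: Optional[bytes] = _scene_cache.get(region_id)
    if cached is not None:
        return cached  # answered without touching the sensor array
    data = fetch_from_array(region_id)  # caller-supplied transport
    _scene_cache[region_id] = data
    return data

print(get_particular_image_data("corner-5th-and-main", lambda rid: b"pixels"))
```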

574. The computationally-implemented thing/operation disclosure of clause 217, wherein said circuitry for acquiring a request for particular image data that is part of a scene comprises:

circuitry for receiving a first request for first particular image data from the scene from a first requestor; and

circuitry for receiving a second request for second particular image data from the scene from a second requestor that is different than the first requestor.

575. The computationally-implemented thing/operation disclosure of clause 217, wherein said circuitry for acquiring a request for particular image data that is part of a scene comprises:

circuitry for combining a received first request for first particular image data from the scene and a received second request for second particular image data from the scene.

576. The computationally-implemented thing/operation disclosure of clause 249, wherein said circuitry for combining a received first request for first particular image data from the scene and a received second request for second particular image data from the scene comprises:

circuitry for combining the received first request for first particular image data from the scene and the received second request for second particular image data from the scene into the request for particular image data that consolidates overlapping requested image data.

577. The computationally-implemented thing/operation disclosure of clause 217, wherein said circuitry for acquiring a request for particular image data that is part of a scene comprises:

circuitry for receiving a first request for first particular image data from the scene and a second request for second particular image data from the scene; and

circuitry for combining the received first request and the received second request into the request for particular image data.

578. The computationally-implemented thing/operation disclosure of clause 251, wherein said circuitry for combining the received first request and the received second request into the request for particular image data comprises:

circuitry for removing common pixel data between the received first request and the received second request.
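
Clause 578's removal of common pixel data between the two received requests amounts to deduplicating the overlap of the requested regions before the consolidated request is sent to the image sensor array. The sketch below models requests as rectangles and pixel-coordinate sets purely for illustration.

```python
from typing import Set, Tuple

Rect = Tuple[int, int, int, int]  # (top, left, height, width)

def rect_pixels(rect: Rect) -> Set[Tuple[int, int]]:
    top, left, h, w = rect
    return {(r, c) for r in range(top, top + h) for c in range(left, left + w)}

def combine_requests(first: Rect, second: Rect) -> Set[Tuple[int, int]]:
    """Union of two requested regions: pixels common to both requests
    appear once, so they are captured and transmitted only once."""
    return rect_pixels(first) | rect_pixels(second)

a, b = (0, 0, 2, 2), (1, 1, 2, 2)
combined = combine_requests(a, b)
print(len(rect_pixels(a)) + len(rect_pixels(b)) - len(combined),
      "overlapping pixel(s) removed")  # 1
```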

579. The computationally-implemented thing/operation disclosure of clause 217, wherein said circuitry for acquiring a request for particular image data that is part of a scene comprises:

circuitry for acquiring the request for particular image data that is part of the scene, wherein the particular image data includes video data.

580. The computationally-implemented thing/operation disclosure of clause 217, wherein said circuitry for acquiring a request for particular image data that is part of a scene comprises:

circuitry for acquiring the request for particular image data that is part of the scene, wherein the particular image data includes audio data.

581. The computationally-implemented thing/operation disclosure of clause 217, wherein said circuitry for acquiring a request for particular image data that is part of a scene comprises:

circuitry for acquiring the request for particular image data that is part of the scene from a user device that receives the request for particular image data through an audio interface.

582. The computationally-implemented thing/operation disclosure of clause 255, wherein said circuitry for acquiring the request for particular image data that is part of the scene from a user thing/operation disclosure that receives the request for particular image data through an audio interface comprises:

circuitry for acquiring the request for particular image data that is part of the scene from a user device that has a microphone that receives a spoken request for particular image data from the user.

583. The computationally-implemented thing/operation disclosure of clause 217, wherein said circuitry for transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data comprises:

circuitry for transmitting the request for the particular image data of the scene to the image sensor array that includes more than one image sensor and that is configured to capture an image that is larger than the requested image data.

584. The computationally-implemented thing/operation disclosure of clause 217, wherein said circuitry for transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data comprises:

circuitry for transmitting the request for the particular image data of the scene to the image sensor array that includes two image sensors arranged side by side and angled toward each other and that is configured to capture an image that is larger than the requested image data.

585. The computationally-implemented thing/operation disclosure of clause 217, wherein said circuitry for transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data comprises:

circuitry for transmitting the request for the particular image data of the scene to the image sensor array that includes the array of image sensors arranged in a grid and that is configured to capture an image that is larger than the requested image data.

586. The computationally-implemented thing/operation disclosure of clause 217, wherein said circuitry for transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data comprises:

circuitry for transmitting the request for the particular image data of the scene to the image sensor array that includes the array of image sensors arranged in a line such that a field of view is greater than 120 degrees and that is configured to capture an image that is larger than the requested image data.
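
For clause 586's sensors arranged in a line with a field of view greater than 120 degrees, the combined coverage can be approximated as each sensor contributing its own field of view minus the portion shared with its neighbor. The sketch below performs that arithmetic under an assumed uniform-overlap model.

```python
def combined_field_of_view(sensor_count: int, per_sensor_fov_deg: float,
                           overlap_deg: float) -> float:
    """Approximate horizontal field of view of image sensors arranged
    in a line: each sensor adds its FOV minus the overlap it shares
    with its neighbor (uniform-overlap assumption)."""
    return sensor_count * per_sensor_fov_deg - (sensor_count - 1) * overlap_deg

# Four 40-degree sensors with 5 degrees of neighbor overlap:
print(combined_field_of_view(4, 40.0, 5.0))  # 145.0 degrees (> 120)
```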

587. The computationally-implemented thing/operation disclosure of clause 217, wherein said circuitry for transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data comprises:

circuitry for transmitting the request for the particular image data of the scene to the image sensor array that includes a grouping of nonlinearly, nonsequentially arranged stationary image sensors and that is configured to capture an image that is larger than the requested image data.

588. The computationally-implemented thing/operation disclosure of clause 217, wherein said circuitry for transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data comprises:

circuitry for transmitting the request for the particular image data of the scene to the image sensor array that includes the array of static image sensors and that is configured to capture an image that is larger than the requested image data.

589. The computationally-implemented thing/operation disclosure of clause 262, wherein said circuitry for transmitting the request for the particular image data of the scene to the image sensor array that includes the array of static image sensors and that is configured to capture an image that is larger than the requested image data comprises:

circuitry for transmitting the request for the particular image data of the scene to the image sensor array that includes the array of image sensors that have a fixed focal length and a fixed field of view and that is configured to capture an image that is larger than the requested image data.

590. The computationally-implemented thing/operation disclosure of clause 217, wherein said circuitry for transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data comprises:

circuitry for transmitting the request for the particular image data of the scene to the image sensor array that includes the array of image sensors mounted on a movable platform and that is configured to capture the scene that is larger than the requested image data.

591. The computationally-implemented thing/operation disclosure of clause 217, wherein said circuitry for transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data comprises:

circuitry for transmitting the request for the particular image data of the scene to the image sensor array that includes more than one image sensor and that is configured to capture the scene that represents more image data than the requested particular image data.

592. The computationally-implemented thing/operation disclosure of clause 265, wherein said circuitry for transmitting the request for the particular image data of the scene to the image sensor array that includes more than one image sensor and that is configured to capture the scene that represents more image data than the requested particular image data comprises:

circuitry for transmitting the request for the particular image data of the scene to the image sensor array that includes more than one image sensor and that is configured to capture the scene that represents more than ten times as much image data as the requested particular image data.

593. The computationally-implemented thing/operation disclosure of clause 265, wherein said circuitry for transmitting the request for the particular image data of the scene to the image sensor array that includes more than one image sensor and that is configured to capture the scene that represents more image data than the requested particular image data comprises:

circuitry for transmitting the request for the particular image data of the scene to the image sensor array that includes more than one image sensor and that is configured to capture the scene that represents more than one hundred times as much image data as the requested particular image data.

594. The computationally-implemented thing/operation disclosure of clause 217, wherein said circuitry for transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data comprises:

circuitry for transmitting the request for the particular image data to the image sensor array that is configured to capture the scene that represents a greater field of view than the requested particular image data.

595. The computationally-implemented thing/operation disclosure of clause 217, wherein said circuitry for transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data comprises:

circuitry for modifying the request for the particular image data; and

circuitry for transmitting the modified request for the particular image data to the image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data.

596. The computationally-implemented thing/operation disclosure of clause 269, wherein said circuitry for modifying the request for the particular image data comprises:

circuitry for removing designated image data from the request for the particular image data.

597. The computationally-implemented thing/operation disclosure of clause 270, wherein said circuitry for removing designated image data from the request for the particular image data comprises:

circuitry for removing designated image data from the request for the particular image data based on previously stored image data.

598. The computationally-implemented thing/operation disclosure of clause 271, wherein said circuitry for removing designated image data from the request for the particular image data based on previously stored image data comprises:

circuitry for removing designated image data from the request for the particular image data based on previously stored image data retrieved from the image sensor array.

599. The computationally-implemented thing/operation disclosure of clause 271, wherein said circuitry for removing designated image data from the request for the particular image data based on previously stored image data comprises:

circuitry for removing designated image data from the request for the particular image data based on previously stored image data that is an earlier-in-time version of the designated image data.

600. The computationally-implemented thing/operation disclosure of clause 273, wherein said circuitry for removing designated image data from the request for the particular image data based on previously stored image data that is an earlier-in-time version of the designated image data comprises:

circuitry for removing image data of one or more static objects from the request for the particular image data that is a view of a street, based on previously stored image data of the one or more static objects from an earlier-in-time version captured by the image sensor array.

601. The computationally-implemented thing/operation disclosure of clause 274, wherein said circuitry for removing image data of one or more static objects from the request for the particular image data that is a view of a street, based on previously stored image data of the one or more static objects from an earlier-in-time version captured by the image sensor array comprises:

circuitry for removing image data of one or more buildings from the request for the particular image data that is the view of the street, based on previously stored image data of the one or more static objects from an earlier-in-time version captured by the image sensor array.

602. The computationally-implemented thing/operation disclosure of clause 269, wherein said circuitry for modifying the request for the particular image data comprises:

circuitry for removing portions of the request for the particular image data through pixel interpolation or extrapolation of portions of the request for the particular image data.

603. The computationally-implemented thing/operation disclosure of clause 276, wherein said circuitry for removing portions of the request for the particular image data through pixel interpolation or extrapolation of portions of the request for the particular image data comprises:

circuitry for removing one or more static objects from the request for the particular image data through pixel interpolation of the portions of the request for the particular image data.
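
Clause 603's pixel interpolation, used when static objects are removed from the request, can be illustrated by filling each removed pixel with the mean of its surviving neighbors. The Python sketch below shows one assumed interpolation scheme among many.

```python
from typing import List, Set, Tuple

def fill_by_interpolation(image: List[List[float]],
                          removed: Set[Tuple[int, int]]) -> List[List[float]]:
    """Replace removed pixels (e.g., a static object excluded from the
    request) with the mean of their in-bounds, non-removed neighbors."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for (r, c) in removed:
        neighbors = [image[r + dr][c + dc]
                     for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1))
                     if 0 <= r + dr < h and 0 <= c + dc < w
                     and (r + dr, c + dc) not in removed]
        if neighbors:
            out[r][c] = sum(neighbors) / len(neighbors)
    return out

img = [[1.0, 2.0, 3.0], [4.0, 9.9, 6.0], [7.0, 8.0, 9.0]]
print(fill_by_interpolation(img, {(1, 1)})[1][1])  # (2 + 4 + 6 + 8) / 4 = 5.0
```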

604. The computationally-implemented thing/operation disclosure of clause 269, wherein said circuitry for modifying the request for the particular image data comprises:

circuitry for identifying at least one portion of the request for the particular image data that is already stored in a memory; and

circuitry for removing the identified portion of the request for the particular image data.

605. The computationally-implemented thing/operation disclosure of clause 278, wherein said circuitry for identifying at least one portion of the request for the particular image data that is already stored in a memory comprises:

circuitry for identifying at least one portion of the request for particular image data that was previously captured by the image sensor array.

606. The computationally-implemented thing/operation disclosure of clause 278, wherein said circuitry for identifying at least one portion of the request for the particular image data that is already stored in a memory comprises:

circuitry for identifying one or more static objects of the request for particular image data that was previously captured by the image sensor array.

607. The computationally-implemented thing/operation disclosure of clause 280, wherein said circuitry for identifying one or more static objects of the request for particular image data that was previously captured by the image sensor array comprises:

circuitry for identifying one or more natural rock outcroppings of the request for particular image data that was previously captured by the image sensor array.

608. The computationally-implemented thing/operation disclosure of clause 217, wherein said circuitry for transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data comprises:

circuitry for determining a size of the request for the particular image data; and

circuitry for transmitting the request for the particular image data for which the size has been determined to the image sensor array that is configured to capture the scene.

609. The computationally-implemented thing/operation disclosure of clause 282, wherein said circuitry for determining a size of the request for the particular image data comprises:

circuitry for determining the size of the request for the particular image data at least partially based on a property of a user device that requested at least a portion of the particular image data.

610. The computationally-implemented thing/operation disclosure of clause 283, wherein said circuitry for determining the size of the request for the particular image data at least partially based on a property of a user thing/operation disclosure that requested at least a portion of the particular image data comprises:

circuitry for determining the size of the request for the particular image data at least partially based on a resolution of the user device that requested the particular image data.

611. The computationally-implemented thing/operation disclosure of clause 282, wherein said circuitry for determining a size of the request for the particular image data comprises:

circuitry for determining the size of the request for the particular image data at least partially based on a device access level of a user device that requested at least a portion of the particular image data.

612. The computationally-implemented thing/operation disclosure of clause 282, wherein said circuitry for determining a size of the request for the particular image data comprises:

circuitry for determining the size of the request for the particular image data at least partially based on an available amount of bandwidth for communication with the image sensor array.

613. The computationally-implemented thing/operation disclosure of clause 282, wherein said circuitry for determining a size of the request for the particular image data comprises:

circuitry for determining the size of the request for the particular image data at least partially based on an amount of time that a user device that requested at least a portion of the particular image data has requested image data.

614. The computationally-implemented thing/operation disclosure of clause 282, wherein said circuitry for determining a size of the request for the particular image data comprises:

circuitry for determining the size of the request for the particular image data at least partially based on an available amount of bandwidth for communication with a user device that requested at least a portion of the particular image data.
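
Clauses 609 through 614 size the request from properties of the requesting user device and from available bandwidth. Below is a minimal sketch of one way those inputs might be combined; the latency-budget parameter is an assumed policy knob, not something taken from the disclosure.

```python
def determine_request_size(device_width: int, device_height: int,
                           bandwidth_bytes_per_s: float,
                           latency_budget_s: float,
                           bytes_per_pixel: int = 3) -> int:
    """Cap the request at the smaller of what the requesting device can
    display and what the link can deliver within the latency budget."""
    display_cap = device_width * device_height * bytes_per_pixel
    link_cap = int(bandwidth_bytes_per_s * latency_budget_s)
    return min(display_cap, link_cap)

# A 1920x1080 device on a 10 MB/s link with a 0.5 s budget:
print(determine_request_size(1920, 1080, 10e6, 0.5))  # 5000000 bytes
```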

615. The computationally-implemented thing/operation disclosure of clause 217, wherein said circuitry for receiving only the particular image data from the image sensor array comprises:

circuitry for receiving only the particular image data from the image sensor array, wherein data from the scene other than the particular image data is discarded.

616. The computationally-implemented thing/operation disclosure of clause 217, wherein said circuitry for receiving only the particular image data from the image sensor array comprises:

circuitry for receiving only the particular image data from the image sensor array, wherein data from the scene other than the particular image data is stored at the image sensor array.

617. The computationally-implemented thing/operation disclosure of clause 217, wherein said circuitry for receiving only the particular image data from the image sensor array comprises:

circuitry for receiving the particular image data from the image sensor array in near-real time.

618. The computationally-implemented thing/operation disclosure of clause 217, wherein said circuitry for receiving only the particular image data from the image sensor array comprises:

circuitry for receiving the particular image data from the image sensor array in near-real time; and

circuitry for retrieving data from the scene other than the particular image data at a later time.

619. The computationally-implemented thing/operation disclosure of clause 292, wherein said circuitry for retrieving data from the scene other than the particular image data at a later time comprises:

circuitry for retrieving data from the scene other than the particular image data at a time at which bandwidth is available to the image sensor array.

620. The computationally-implemented thing/operation disclosure of clause 292, wherein said circuitry for retrieving data from the scene other than the particular image data at a later time comprises:

circuitry for retrieving data from the scene other than the particular image data at a time that represents off-peak usage for the image sensor array.

621. The computationally-implemented thing/operation disclosure of clause 292, wherein said circuitry for retrieving data from the scene other than the particular image data at a later time comprises:

circuitry for retrieving data from the scene other than the particular image data at a time when no particular image data is requested.

622. The computationally-implemented thing/operation disclosure of clause 292, wherein said circuitry for retrieving data from the scene other than the particular image data at a later time comprises:

circuitry for retrieving data from the scene other than the particular image data at a time at which fewer users are requesting particular image data than for which the image sensor array has capacity.
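
Clauses 617 through 622 deliver the requested data in near-real time while deferring the rest of the scene to later, off-peak retrieval. The sketch below queues unrequested regions and drains the queue only while an off-peak predicate holds; this scheduling policy is assumed for illustration.

```python
from collections import deque
from typing import Callable, Deque

# Scene regions that were captured but not requested; retrieved later
# rather than in near-real time.
_deferred: Deque[str] = deque()

def receive(requested_region: str, unrequested_region: str) -> str:
    """Deliver the requested region now; queue the rest of the scene."""
    _deferred.append(unrequested_region)
    return requested_region

def drain_when_off_peak(is_off_peak: Callable[[], bool],
                        fetch: Callable[[str], bytes]) -> int:
    """Retrieve deferred scene data only while the image sensor array
    is off-peak; returns how many regions were retrieved."""
    count = 0
    while _deferred and is_off_peak():
        fetch(_deferred.popleft())
        count += 1
    return count

receive("player-closeup", "empty-stands")
print(drain_when_off_peak(lambda: True, lambda region: b""))  # 1
```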

623. The computationally-implemented thing/operation disclosure of clause 217, wherein said circuitry for receiving only the particular image data from the image sensor array comprises:

circuitry for receiving only the particular image data that includes audio data from the sensor array.

624. The computationally-implemented thing/operation disclosure of clause 217, wherein said circuitry for receiving only the particular image data from the image sensor array comprises:

circuitry for receiving only the particular image data that was determined to contain a particular requested image object from the image sensor array.

625. The computationally-implemented thing/operation disclosure of clause 298, wherein said circuitry for receiving only the particular image data that was determined to contain a particular requested image object from the image sensor array comprises:

circuitry for receiving only the particular image data that was determined to contain a particular requested image object by the image sensor array.

626. The computationally-implemented thing/operation disclosure of clause 217, wherein said circuitry for transmitting the received particular image data to at least one requestor comprises:

circuitry for transmitting the received particular image data to a user device.

627. The computationally-implemented thing/operation disclosure of clause 300, wherein said circuitry for transmitting the received particular image data to a user thing/operation disclosure comprises:

circuitry for transmitting at least a portion of the received particular image data to a user device that requested the particular image data.

628. The computationally-implemented thing/operation disclosure of clause 217, wherein said circuitry for transmitting the received particular image data to at least one requestor comprises:

circuitry for separating the received particular image data into a set of one or more requested images; and

circuitry for transmitting at least one image of the set of one or more requested images to a particular requestor that requested the at least one image.

629. The computationally-implemented thing/operation disclosure of clause 302, wherein said circuitry for separating the received particular image data into a set of one or more requested images comprises:

circuitry for separating the received particular image data into a first requested image and a second requested image.
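
Clauses 628 and 629 separate the single block of received particular image data into the individually requested images and route each to its requestor. Below is a minimal Python sketch, under the assumption that each request corresponds to a rectangular crop of the received block.

```python
from typing import Dict, List, Tuple

Rect = Tuple[int, int, int, int]  # (top, left, height, width)

def separate(received: List[List[int]],
             requests: Dict[str, Rect]) -> Dict[str, List[List[int]]]:
    """Split the received particular image data into per-requestor
    images, keyed by requestor."""
    out = {}
    for requestor, (top, left, h, w) in requests.items():
        out[requestor] = [row[left:left + w] for row in received[top:top + h]]
    return out

block = [[r * 4 + c for c in range(4)] for r in range(4)]
parts = separate(block, {"first": (0, 0, 2, 2), "second": (2, 2, 2, 2)})
print(parts["second"])  # [[10, 11], [14, 15]]
```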

630. The computationally-implemented thing/operation disclosure of clause 217, wherein said circuitry for transmitting the received particular image data to at least one requestor comprises:

circuitry for transmitting the received particular image data to a user device that requested an image that is part of the received particular image data.

631. The computationally-implemented thing/operation disclosure of clause 217, wherein said circuitry for transmitting the received particular image data to at least one requestor comprises:

circuitry for transmitting a first portion of the received particular image data to a first requestor; and

circuitry for transmitting a second portion of the received particular image data to a second requestor.

632. The computationally-implemented thing/operation disclosure of clause 305, wherein said circuitry for transmitting a first portion of the received particular image data to a first requestor comprises:

circuitry for transmitting the first portion of the received particular image data to the first requestor that requested the first portion of the received particular image data.

633. The computationally-implemented thing/operation disclosure of clause 306, wherein said circuitry for transmitting the first portion of the received particular image data to the first requestor that requested the first portion of the received particular image data comprises:

circuitry for transmitting an image that contains a particular football player to a television device that is configured to display a football game and that requested the image that contains the particular football player.

634. The computationally-implemented thing/operation disclosure of clause 305, wherein said circuitry for transmitting a second portion of the received particular image data to a second requestor comprises:

circuitry for transmitting the second portion of the received particular image data to the second requestor that requested the second portion of the received particular image data.

635. The computationally-implemented thing/operation disclosure of clause 308, wherein said circuitry for transmitting the second portion of the received particular image data to the second requestor that requested the second portion of the received particular image data comprises:

circuitry for transmitting an image that contains a view of a motor vehicle to a tablet device that requested a street view image of a particular corner of a city.

636. The computationally-implemented thing/operation disclosure of clause 217, wherein said circuitry for transmitting the received particular image data to at least one requestor comprises:

circuitry for transmitting at least a portion of the received particular image data without alteration to at least one requestor.

637. The computationally-implemented thing/operation disclosure of clause 217, wherein said circuitry for transmitting the received particular image data to at least one requestor comprises:

circuitry for adding supplemental data to at least a portion of the received particular image data to generate transmission image data; and

circuitry for transmitting the generated transmission image data to at least one requestor.

638. The computationally-implemented thing/operation disclosure of clause 311, wherein said circuitry for adding supplemental data to at least a portion of the received particular image data to generate transmission image data comprises:

circuitry for adding advertisement data to at least a portion of the received particular image data to generate transmission image data.

639. The computationally-implemented thing/operation disclosure of clause 312, wherein said circuitry for adding advertisement data to at least a portion of the received particular image data to generate transmission image data comprises:

circuitry for adding context-based advertisement data that is at least partially based on the received particular image data to at least the portion of the received particular image data.

640. The computationally-implemented thing/operation disclosure of clause 313, wherein said circuitry for adding context-based advertisement data that is at least partially based on the received particular image data to at least the portion of the received particular image data comprises:

circuitry for adding an animal rights donation fund advertisement data that is at least partially based on the received particular image data of a tiger at a jungle oasis, to the received particular image data of the tiger at the jungle oasis.

641. The computationally-implemented thing/operation disclosure of clause 311, wherein said circuitry for adding supplemental data to at least a portion of the received particular image data to generate transmission image data comprises:

circuitry for adding related visual data related to the received particular image data to at least a portion of the received particular image data to generate transmission image data.

642. The computationally-implemented thing/operation disclosure of clause 315, wherein said circuitry for adding related visual data related to the received particular image data to at least a portion of the received particular image data to generate transmission image data comprises:

circuitry for adding fantasy football statistical data to the received particular image data of a football quarterback that appears in the received particular image data of the quarterback in a football game to generate the transmission image data.

643. The computationally-implemented thing/operation disclosure of clause 217, wherein said circuitry for transmitting the received particular image data to at least one requestor comprises:

circuitry for modifying data of a portion of the received particular image data to generate transmission image data; and

circuitry for transmitting the generated transmission image data to at least one requestor.

644. The computationally-implemented thing/operation disclosure of clause 317, wherein said circuitry for modifying data of a portion of the received particular image data to generate transmission image data comprises:

circuitry for performing image manipulation modifications of the received particular image data to generate transmission image data.

645. The computationally-implemented thing/operation disclosure of clause 318, wherein said circuitry for performing image manipulation modifications of the received particular image data to generate transmission image data comprises:

circuitry for performing contrast balancing on the received particular image data to generate transmission image data.

646. The computationally-implemented thing/operation disclosure of clause 318, wherein said circuitry for performing image manipulation modifications of the received particular image data to generate transmission image data comprises:

circuitry for performing color modification balancing on the received particular image data to generate transmission image data.

647. The computationally-implemented thing/operation disclosure of clause 317, wherein said circuitry for modifying data of a portion of the received particular image data to generate transmission image data comprises:

circuitry for redacting at least a portion of the received particular image data to generate the transmission image data.

648. The computationally-implemented thing/operation disclosure of clause 321, wherein said circuitry for redacting at least a portion of the received particular image data to generate the transmission image data comprises:

circuitry for redacting at least a portion of the received particular image data based on a security clearance level of the requestor.

649. The computationally-implemented thing/operation disclosure of clause 322, wherein said circuitry for redacting at least a portion of the received particular image data based on a security clearance level of the requestor comprises:

circuitry for redacting a tank from the received particular image data that includes a satellite image that includes a military base, based on an insufficient security clearance level of a device that requested the particular image data.

650. The computationally-implemented thing/operation disclosure of clause 217, wherein said circuitry for transmitting the received particular image data to at least one requestor comprises:

circuitry for transmitting a lower-resolution version of the received particular image data to the at least one requestor; and

circuitry for transmitting a full-resolution version of the received particular image data to the at least one requestor.

651. A thing/operation disclosure, comprising:

a signal-bearing medium bearing:

one or more instructions for acquiring a request for particular image data that is part of a scene;

one or more instructions for transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data;

one or more instructions for receiving only the particular image data from the image sensor array; and

one or more instructions for transmitting the received particular image data to at least one requestor.

652. A thing/operation disclosure defined by a computational language comprising:

one or more interchained physical machines ordered for acquiring a request for particular image data that is part of a scene;

one or more interchained physical machines ordered for transmitting the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data;

one or more interchained physical machines ordered for receiving only the particular image data from the image sensor array; and

one or more interchained physical machines ordered for transmitting the received particular image data to at least one requestor.

STRUCTURED DISCLOSURE DIRECTED TOWARD ONE OF SKILL IN THE ART [END]

START PRELIMINARY AMENDMENT 27 AUGUST 2015 - 1114-003-007-000000

STRUCTURED DISCLOSURE DIRECTED TOWARD ONE OF SKILL IN THE ART [START]

Those skilled in the art will appreciate the indentations and cross-referencing of the following as instructive. In general, each immediately following numbered paragraph in this "Structured Disclosure Directed Toward One Of Skill In The Art" section should be treated as preceded by the word "Clause," as can be seen by the Clauses that cross-reference against them, and each cross-referencing paragraph is to be understood as permissively incorporating its parent clause(s) by reference.

As used herein, and in particular in the following thing/operation disclosures, the word "comprising" can generally be interpreted as "including but not limited to":

327. (NEW) A thing/operation disclosure, comprising:

a request for particular image data that is part of a scene acquiring module;

a request for particular image data transmitting to an image sensor array module configured to transmit the request to the image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data;

a particular image data from the image sensor array exclusive receiving module; and

a received particular image data transmitting to at least one requestor module.
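
Read as an architecture, clause 327's four modules form a pipeline: acquire the request, forward it to the image sensor array, receive only the requested data, and deliver it to the requestor. The Python sketch below wires four hypothetical classes in that order; the class names merely paraphrase the module names and are illustrative only.

```python
class RequestAcquiringModule:
    def acquire(self) -> dict:
        # Stand-in for the "request ... acquiring module" of clause 327.
        return {"scene": "street-view", "region": (0, 0, 64, 64)}

class RequestTransmittingModule:
    def transmit_to_array(self, request: dict) -> None:
        print("forwarding to image sensor array:", request["region"])

class ExclusiveReceivingModule:
    def receive_only_particular(self, request: dict) -> bytes:
        # Only the requested region comes back, never the whole scene.
        return b"\x00" * 16

class RequestorTransmittingModule:
    def transmit_to_requestor(self, data: bytes) -> None:
        print("delivering", len(data), "bytes to requestor")

# Wire the four modules of clause 327 into a pipeline.
req = RequestAcquiringModule().acquire()
RequestTransmittingModule().transmit_to_array(req)
data = ExclusiveReceivingModule().receive_only_particular(req)
RequestorTransmittingModule().transmit_to_requestor(data)
```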

328. (NEW) The thing/operation disclosure of clause 327, wherein said request for particular image data that is part of a scene acquiring module comprises:

a request for particular image data that is part of a scene and includes one or more images acquiring module.

329. (NEW) The thing/operation disclosure of clause 327, wherein said request for particular image data that is part of a scene acquiring module comprises:

a request for particular image data that is part of a scene receiving module.

330. (NEW) The thing/operation disclosure of clause 329, wherein said request for particular image data that is part of a scene receiving module comprises:

a request for particular image data that is part of a scene receiving from a client device module.

331. (NEW) The thing/operation disclosure of clause 330, wherein said request for particular image data that is part of a scene receiving from a client thing/operation disclosure module comprises:

a request for particular image data that is part of a scene receiving from a client device module configured to receive the request for particular image data that is part of the scene from the client device that is configured to display at least a portion of the scene.

332. (NEW) The thing/operation disclosure of clause 331, wherein said request for particular image data that is part of a scene receiving from a client thing/operation disclosure module comprises:

a request for particular image data that is part of a scene receiving from a client device module configured to receive the request for particular image data that is part of the scene from the client device that is configured to display at least a portion of the scene in a viewfinder.

333. (NEW) The thing/operation disclosure of clause 330, wherein said request for particular image data that is part of a scene receiving from a client thing/operation disclosure module comprises:

a request for particular image data that is part of a scene receiving from a client device module configured to receive the request for particular image data from the client device that is configured to obtain a selection of a particular image.

334. (NEW) The thing/operation disclosure of clause 333, wherein said request for particular image data that is part of a scene receiving from a client thing/operation disclosure module configured to receive the request for particular image data from the client thing/operation disclosure that is configured to obtain a selection of a particular image comprises:

a request for particular image data that is part of a scene receiving from a client device module configured to receive the request for particular image data from the client device that is configured to obtain a selection of a particular image that is based on a view of the scene.

335. (NEW) The thing/operation disclosure of clause 333, wherein said request for particular image data that is part of a scene receiving from a client thing/operation disclosure module configured to receive the request for particular image data from the client thing/operation disclosure that is configured to obtain a selection of a particular image comprises:

a request for particular image data that is part of a scene receiving from one or more various devices configured to receive a scene-based selection of a particular image module, wherein the one or more various devices include one or more of a smartphone, television, computer screen, tablet, camera, appliance, and augmented reality device.

336. (NEW) The thing/operation disclosure of clause 327, wherein said request for particular image data that is part of a scene acquiring module comprises:

a request for particular image data that is part of a scene acquiring module, wherein the scene includes image data collected by the array of more than one image sensor.

337. (NEW) The thing/operation disclosure of clause 327, wherein said request for particular image data that is part of a scene acquiring module comprises:

a request for particular image data that is part of a particular scene acquiring module, wherein the particular scene is a representation of the image data collected by the array of more than one image sensor.

338. (NEW) The device of clause 337, wherein said request for particular image data that is part of a particular scene acquiring module comprises:

a request for particular image data that is part of a particular scene acquiring module, wherein the requested particular scene is a sampling of the image data collected by the array of more than one image sensor.

339. (NEW) The device of clause 337, wherein said request for particular image data that is part of a particular scene acquiring module comprises:

a request for particular image data that is part of a particular scene acquiring module, wherein the requested particular scene is a subset of the image data collected by the array of more than one image sensor.

340. (NEW) The device of clause 337, wherein said request for particular image data that is part of a particular scene acquiring module comprises:

a request for particular image data that is part of a particular scene acquiring module, wherein the requested particular scene is a low-resolution version of the image data collected by the array of more than one image sensor.

341. (NEW) The device of clause 327, wherein said request for particular image data that is part of a scene acquiring module comprises:

a request for particular image data that is part of a scene that is a football game acquiring module.

342. (NEW) The device of clause 327, wherein said request for particular image data that is part of a scene acquiring module comprises:

a request for particular image data that is part of a scene that is an area street view acquiring module.

343. (NEW) The device of clause 327, wherein said request for particular image data that is part of a scene acquiring module comprises:

a request for particular image data that is part of a scene that is a tourist destination acquiring module.

344. (NEW) The device of clause 327, wherein said request for particular image data that is part of a scene acquiring module comprises:

a request for particular image data that is part of a scene that is inside of a home acquiring module.

345. (NEW) The device of clause 327, wherein said request for particular image data that is part of a scene acquiring module comprises:

a request for particular image data acquiring module, wherein the particular image data is an image that is a portion of the scene.

346. (NEW) The device of clause 345, wherein said request for particular image data acquiring module, wherein the particular image data is an image that is a portion of the scene comprises:

a request for particular image data of the scene acquiring module, wherein the particular image data includes image data of a particular football player and the scene is a football field.

347. (NEW) The device of clause 345, wherein said request for particular image data acquiring module, wherein the particular image data is an image that is a portion of the scene comprises:

a request for particular image data of the scene acquiring module, wherein the particular image data includes image data of a license plate of a vehicle, and the scene is a highway bridge.

348. (NEW) The device of clause 327, wherein said request for particular image data that is part of a scene acquiring module comprises:

a request for particular image object located in the scene acquiring module; and

a particular image data of the scene that contains the particular image object determining module.

349. (NEW) The device of clause 348, wherein said request for particular image object located in the scene acquiring module comprises:

a request for particular person located in the scene acquiring module.

350. (NEW) The device of clause 348, wherein said request for particular image object located in the scene acquiring module comprises:

a request for a basketball object located in the scene acquiring module, wherein the scene is a basketball arena.

351. (NEW) The device of clause 348, wherein said request for particular image object located in the scene acquiring module comprises:

a request for particular motor vehicle located in the scene acquiring module.

352. (NEW) The device of clause 348, wherein said request for particular image object located in the scene acquiring module comprises:

a request for human object representations located in the scene acquiring module.

353. (NEW) The device of clause 348, wherein said particular image data of the scene that contains the particular image object determining module comprises:

a particular image data of the scene that contains the particular image object determining module configured to determine the particular image data of the scene that contains the particular image object through application of automated pattern recognition to scene image data.
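
By way of non-limiting illustration only, the following Python sketch shows one way the determining module of clause 353 might apply automated pattern recognition to scene image data; OpenCV template matching, the function name, and the 0.8 confidence threshold are assumptions for illustration, not part of the disclosure.

```python
# Illustrative sketch only: locating a requested image object in scene data
# via automated pattern recognition (clause 353). Template matching is an
# assumed stand-in; the disclosure does not mandate a specific method.
import cv2
import numpy as np

def find_object_region(scene: np.ndarray, template: np.ndarray,
                       threshold: float = 0.8):
    """Return (x, y, w, h) of the best match for the object, or None."""
    result = cv2.matchTemplate(scene, template, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    if max_val < threshold:
        return None  # the object was not confidently found in the scene
    h, w = template.shape[:2]
    return (max_loc[0], max_loc[1], w, h)
```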

354. (NEW) The device of clause 348, wherein said particular image data of the scene that contains the particular image object determining module comprises:

a particular image data of the scene that contains the particular image object determining module configured to determine the particular image data of the scene that contains the particular image object through identification of the particular image object in previous scene data that is image data of the scene from at least one previous moment in time.

355. (NEW) The device of clause 348, wherein said particular image data of the scene that contains the particular image object determining module comprises:

a particular image data of the scene that contains the particular image object determining module configured to determine the particular image data of the scene that contains the particular image object through identification of the particular image object in cached scene data.

356. (NEW) The device of clause 355, wherein said particular image data of the scene that contains the particular image object determining module configured to determine the particular image data of the scene that contains the particular image object through identification of the particular image object in cached scene data comprises:

a particular image data of the scene that contains the particular image object determining module configured to determine the particular image data of the scene that contains the particular image object through identification of the particular image object in cached scene data that was previously transmitted from the image sensor array that includes more than one image sensor.

357. (NEW) The device of clause 355, wherein said particular image data of the scene that contains the particular image object determining module configured to determine the particular image data of the scene that contains the particular image object through identification of the particular image object in cached scene data comprises:

a particular image data of the scene that contains the particular image object determining module configured to determine the particular image data of the scene that contains the particular image object through identification of the particular image object in cached scene data that was previously transmitted from the image sensor array that includes more than one image sensor at a time when bandwidth was available for a connection to the image sensor array that includes more than one image sensor.
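
By way of non-limiting illustration only, the following Python sketch shows cached scene data being consulted before the image sensor array is queried, in the manner of clauses 355-357; the class, the staleness window, and the callable names are illustrative assumptions.

```python
# Illustrative sketch only: consult cached scene data, previously
# transmitted from the image sensor array (e.g., when bandwidth was
# available), before requesting fresh data (clauses 355-357).
import time
from typing import Callable, Dict, Optional, Tuple

Region = Tuple[int, int, int, int]  # (x, y, w, h) in scene coordinates

class SceneCache:
    def __init__(self, max_age_seconds: float = 300.0):
        self._entries: Dict[Region, Tuple[float, bytes]] = {}
        self.max_age = max_age_seconds

    def put(self, region: Region, pixels: bytes) -> None:
        self._entries[region] = (time.time(), pixels)

    def get(self, region: Region) -> Optional[bytes]:
        entry = self._entries.get(region)
        if entry is None:
            return None
        stored_at, pixels = entry
        if time.time() - stored_at > self.max_age:
            return None  # stale: fall through to the image sensor array
        return pixels

def fetch_region(cache: SceneCache, region: Region,
                 request_from_array: Callable[[Region], bytes]) -> bytes:
    cached = cache.get(region)
    if cached is not None:
        return cached                        # satisfied from cached data
    pixels = request_from_array(region)      # only now use the array link
    cache.put(region, pixels)
    return pixels
```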

358. (NEW) The device of clause 327, wherein said request for particular image data that is part of a scene acquiring module comprises:

a first request for first particular image data from a first requestor receiving module; and

a second request for first particular image data from a different second requestor receiving module, wherein the second requestor is different than the first requestor.

359. (NEW) The device of clause 327, wherein said request for particular image data that is part of a scene acquiring module comprises:

a first received request and second received request combining module configured to combine a received first request for first particular image data from the scene and a received second request for second particular image data from the scene.

360. (NEW) The device of clause 359, wherein said first received request and second received request combining module configured to combine a received first request for first particular image data from the scene and a received second request for second particular image data from the scene comprises:

a first received request and second received request combining module configured to combine the first received request for first particular image data from the scene and the received second request for second particular image data from the scene into the request for particular image data that consolidates overlapping requested image data.

361. (NEW) The device of clause 327, wherein said request for particular image data that is part of a scene acquiring module comprises:

a first request for first particular image data and second request for second particular image data receiving module; and

a received first request and received second request combining module configured to combine the received first request and the received second request into the request for particular image data.

362. (NEW) The device of clause 361, wherein said received first request and received second request combining module configured to combine the received first request and the received second request into the request for particular image data comprises:

a received first request and received second request common pixel deduplicating module configured to remove common pixel data between the received first request and the received second request.
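
By way of non-limiting illustration only, the following Python sketch shows two rectangular requests being consolidated so that pixel data common to both is requested from the image sensor array only once, in the manner of clauses 359-362; the rectangle representation and helper names are illustrative assumptions.

```python
# Illustrative sketch only: consolidating a first and a second request so
# that pixel regions common to both are requested from the image sensor
# array only once (clauses 359-362). Rectangles are (x, y, w, h) tuples in
# scene coordinates; all names are assumptions for illustration.
from typing import List, Optional, Tuple

Rect = Tuple[int, int, int, int]

def intersect(a: Rect, b: Rect) -> Optional[Rect]:
    """Return the overlap of two rectangles, or None if disjoint."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    x1, y1 = max(ax, bx), max(ay, by)
    x2, y2 = min(ax + aw, bx + bw), min(ay + ah, by + bh)
    if x2 <= x1 or y2 <= y1:
        return None
    return (x1, y1, x2 - x1, y2 - y1)

def subtract(b: Rect, o: Rect) -> List[Rect]:
    """Decompose rectangle b minus contained overlap o into rectangles."""
    bx, by, bw, bh = b
    ox, oy, ow, oh = o
    parts: List[Rect] = []
    if oy > by:                                  # band above the overlap
        parts.append((bx, by, bw, oy - by))
    if oy + oh < by + bh:                        # band below the overlap
        parts.append((bx, oy + oh, bw, by + bh - (oy + oh)))
    if ox > bx:                                  # band left of the overlap
        parts.append((bx, oy, ox - bx, oh))
    if ox + ow < bx + bw:                        # band right of the overlap
        parts.append((ox + ow, oy, bx + bw - (ox + ow), oh))
    return parts

def combine_requests(first: Rect, second: Rect) -> List[Rect]:
    """Deduplicate common pixel data between the two received requests."""
    overlap = intersect(first, second)
    if overlap is None:
        return [first, second]       # disjoint: request both in full
    return [first] + subtract(second, overlap)
```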

363. (NEW) The device of clause 327, wherein said request for particular image data that is part of a scene acquiring module comprises:

a request for particular video data that is part of a scene acquiring module.

364. (NEW) The device of clause 327, wherein said request for particular image data that is part of a scene acquiring module comprises:

a request for particular audio data that is part of a scene acquiring module configured to acquire the request for particular image data that is part of the scene, wherein the particular image data includes audio data.

365. (NEW) The device of clause 327, wherein said request for particular image data that is part of a scene acquiring module comprises:

a request for particular image data that is part of a scene receiving from an audio-enabled user device module configured to acquire the request for particular image data that is part of the scene from a user device that receives the request for particular image data through an audio interface.

366. (NEW) The device of clause 365, wherein said request for particular image data that is part of a scene receiving from an audio-enabled user device module configured to acquire the request for particular image data that is part of the scene from a user device that receives the request for particular image data through an audio interface comprises:

a request for particular image data that is part of a scene receiving from an audio-enabled user device module configured to acquire the request for particular image data that is part of the scene from a user device that has a microphone that receives a spoken request for particular image data from the user.

367. (NEW) The device of clause 327, wherein said request for particular image data transmitting to an image sensor array module configured to transmit the request to the image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data comprises:

a request for particular image data transmitting to an image sensor array module configured to transmit the request for the particular image data of the scene to the image sensor array that includes more than one image sensor and that is configured to capture an image that is larger than the requested image data.

368. (NEW) The device of clause 327, wherein said request for particular image data transmitting to an image sensor array module configured to transmit the request to the image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data comprises:

a request for particular image data transmitting to an image sensor array module configured to transmit the request to the image sensor array that includes two angled image sensors and that is configured to capture the scene that is larger than the requested particular image data.

369. (NEW) The device of clause 327, wherein said request for particular image data transmitting to an image sensor array module configured to transmit the request to the image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data comprises:

a request for particular image data transmitting to an image sensor array module configured to transmit the request for the particular image data of the scene to the image sensor array that includes the array of image sensors arranged in a grid and that is configured to capture an image that is larger than the requested image data.

370. (NEW) The device of clause 327, wherein said request for particular image data transmitting to an image sensor array module configured to transmit the request to the image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data comprises:

a request for particular image data transmitting to an image sensor array module configured to transmit the request for the particular image data of the scene to the image sensor array that includes the array of image sensors arranged in a line such that a field of view is greater than 120 degrees and that is configured to capture an image that is larger than the requested image data.

371. (NEW) The device of clause 327, wherein said request for particular image data transmitting to an image sensor array module configured to transmit the request to the image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data comprises:

a request for particular image data transmitting to an image sensor array module configured to transmit the request for the particular image data of the scene to the image sensor array that includes the array of stationary image sensors and that is configured to capture an image that is larger than the requested image data.

372. (NEW) The device of clause 327, wherein said request for particular image data transmitting to an image sensor array module configured to transmit the request to the image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data comprises:

a request for particular image data transmitting to an image sensor array module configured to transmit the request for the particular image data of the scene to the image sensor array that includes the array of static image sensors and that is configured to capture an image that is larger than the requested image data.

373. (NEW) The device of clause 372, wherein said request for particular image data transmitting to an image sensor array module configured to transmit the request for the particular image data of the scene to the image sensor array that includes the array of static image sensors and that is configured to capture an image that is larger than the requested image data comprises:

a request for particular image data transmitting to an image sensor array module configured to transmit the request for the particular image data of the scene to the image sensor array that includes the array of image sensors that have a fixed focal length and a fixed field of view and that is configured to capture an image that is larger than the requested image data.

374. (NEW) The device of clause 327, wherein said request for particular image data transmitting to an image sensor array module configured to transmit the request to the image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data comprises:

a request for particular image data transmitting to an image sensor array module configured to transmit the request for the particular image data of the scene to the image sensor array that includes the array of image sensors mounted on a movable platform and that is configured to capture the scene that is larger than the requested image data.

375. (NEW) The device of clause 327, wherein said request for particular image data transmitting to an image sensor array module configured to transmit the request to the image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data comprises:

a request for particular image data transmitting to an image sensor array module configured to transmit the request for the particular image data of the scene to the image sensor array that includes more than one image sensor and that is configured to capture the scene that represents more image data than the requested particular image data.

376. (NEW) The device of clause 375, wherein said request for particular image data transmitting to an image sensor array module configured to transmit the request for the particular image data of the scene to the image sensor array that includes more than one image sensor and that is configured to capture the scene that represents more image data than the requested particular image data comprises:

a request for particular image data transmitting to an image sensor array module configured to transmit the request for the particular image data of the scene to the image sensor array that includes more than one image sensor and that is configured to capture the scene that represents more than ten times as much image data as the requested particular image data.

377. (NEW) The device of clause 375, wherein said request for particular image data transmitting to an image sensor array module configured to transmit the request for the particular image data of the scene to the image sensor array that includes more than one image sensor and that is configured to capture the scene that represents more image data than the requested particular image data comprises:

a request for particular image data transmitting to an image sensor array module configured to transmit the request for the particular image data of the scene to the image sensor array that includes more than one image sensor and that is configured to capture the scene that represents more than one hundred times as much image data as the requested particular image data.

378. (NEW) The device of clause 327, wherein said request for particular image data transmitting to an image sensor array module configured to transmit the request to the image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data comprises:

a request for particular image data transmitting to an image sensor array module configured to transmit the request for the particular image data to the image sensor array that is configured to capture the scene that represents a greater field of view than the requested particular image data.

379. (NEW) The device of clause 327, wherein said request for particular image data transmitting to an image sensor array module configured to transmit the request to the image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data comprises:

a request for particular image data modifying module; and

a modified request for particular image data transmitting to an image sensor array module configured to transmit the modified request for the particular image data to the image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data.

380. (NEW) The device of clause 379, wherein said request for particular image data modifying module comprises:

a designated image data removing from request for particular image data module configured to remove designated image data from the request for the particular image data.

381. (NEW) The device of clause 380, wherein said designated image data removing from request for particular image data module configured to remove designated image data from the request for the particular image data comprises:

a designated image data removing from request for particular image data module configured to remove designated image data from the request for the particular image data based on previously stored image data.

382. (NEW) The device of clause 381, wherein said designated image data removing from request for particular image data module configured to remove designated image data from the request for the particular image data based on previously stored image data comprises:

a designated image data removing from request for particular image data module configured to remove designated image data from the request for the particular image data based on previously stored image data retrieved from the image sensor array.

383. (NEW) The device of clause 381, wherein said designated image data removing from request for particular image data module configured to remove designated image data from the request for the particular image data based on previously stored image data comprises:

a designated image data removing from request for particular image data module configured to remove designated image data from the request for the particular image data based on previously stored image data that is an earlier-in-time version of the designated image data.

384. (NEW) The device of clause 383, wherein said designated image data removing from request for particular image data module configured to remove designated image data from the request for the particular image data based on previously stored image data that is an earlier-in-time version of the designated image data comprises:

a designated image data removing from request for particular image data module configured to remove image data of one or more static objects from the request for the particular image data that is a view of a street, based on previously stored image data of the one or more static objects from an earlier-in-time version captured by the image sensor array.

385. (NEW) The device of clause 384, wherein said designated image data removing from request for particular image data module configured to remove image data of one or more static objects from the request for the particular image data that is a view of a street, based on previously stored image data of the one or more static objects from an earlier-in-time version captured by the image sensor array comprises:

a designated image data removing from request for particular image data module configured to remove image data of one or more buildings from the request for the particular image data of the one or more static objects from the request for the particular image data that is the view of the street, based on previously stored image data of the one or more static objects from an earlier-in-time version captured by the image sensor array.
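
By way of non-limiting illustration only, the following Python sketch shows a request being modified by removing designated regions for which an earlier-in-time version is already stored, in the manner of clauses 380-385; the data structures and names are illustrative assumptions.

```python
# Illustrative sketch only: modifying the request by removing designated
# image data (e.g., buildings or other static objects in a street view)
# for which an earlier-in-time version is already stored (clauses 380-385).
from typing import Dict, List, Tuple

Region = Tuple[int, int, int, int]

def prune_static_regions(requested: List[Region],
                         stored: Dict[Region, bytes]):
    """Split the request into regions the array must send and regions
    that can be filled from previously stored image data."""
    to_transmit: List[Region] = []
    fill_from_storage: List[Region] = []
    for region in requested:
        if region in stored:
            fill_from_storage.append(region)  # reuse earlier-in-time pixels
        else:
            to_transmit.append(region)        # must come from the array
    return to_transmit, fill_from_storage
```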

386. (NEW) The device of clause 379, wherein said request for particular image data modifying module comprises:

an image data removing from request for particular image data module configured to remove portions of the request for the particular image data through pixel interpolation or extrapolation of portions of the request for the particular image data.

387. (NEW) The device of clause 386, wherein said image data removing from request for particular image data module configured to remove portions of the request for the particular image data through pixel interpolation or extrapolation of portions of the request for the particular image data comprises:

a static object removing from request for particular image data module configured to remove one or more static objects from the request for the particular image data through pixel interpolation of the portions of the request for the particular image data.

388. (NEW) The device of clause 379, wherein said request for particular image data modifying module comprises:

a portion of the request for particular image data that was previously stored in memory identifying module configured to identify at least one portion of the request for the particular image data that was previously stored in a memory; and

an identified portion of the request for the particular image data removing module configured to remove the identified portion of the request for the particular image data.

389. (NEW) The device of clause 388, wherein said portion of the request for particular image data that was previously stored in memory identifying module configured to identify at least one portion of the request for the particular image data that was previously stored in a memory comprises:

a portion of the request for particular image data that was previously stored in memory identifying module configured to identify at least one portion of the request for particular image data that was previously captured by the image sensor array.

390. (NEW) The device of clause 388, wherein said portion of the request for particular image data that was previously stored in memory identifying module configured to identify at least one portion of the request for the particular image data that was previously stored in a memory comprises:

a portion of the request for particular image data that was previously stored in memory identifying module configured to identify one or more static objects of the request for particular image data that was previously captured by the image sensor array.

391. (NEW) The device of clause 390, wherein said portion of the request for particular image data that was previously stored in memory identifying module configured to identify one or more static objects of the request for particular image data that was previously captured by the image sensor array comprises:

a portion of the request for particular image data that was previously stored in memory identifying module configured to identify one or more natural rock outcroppings of the request for particular image data that was previously captured by the image sensor array.

392. (NEW) The device of clause 327, wherein said request for particular image data transmitting to an image sensor array module configured to transmit the request to the image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data comprises:

a size of request for particular image data determining module; and

a determined-size request for particular image data transmitting to the image sensor array module configured to transmit the request for the particular image data for which the size has been determined to the image sensor array that is configured to capture the scene.

393. (NEW) The device of clause 392, wherein said size of request for particular image data determining module comprises:

a size of request for particular image data determining module configured to determine the size of the request for the particular image data at least partially based on a property of a user device that requested at least a portion of the particular image data.

394. (NEW) The device of clause 393, wherein said size of request for particular image data determining module configured to determine the size of the request for the particular image data at least partially based on a property of a user device that requested at least a portion of the particular image data comprises:

a size of request for particular image data determining module configured to determine the size of the request for the particular image data at least partially based on a resolution of the user device that requested the particular image data.

395. (NEW) The device of clause 392, wherein said size of request for particular image data determining module comprises:

a size of request for particular image data determining module configured to determine the size of the request for the particular image data at least partially based on a device access level of a user device that requested at least a portion of the particular image data.

396. (NEW) The device of clause 392, wherein said size of request for particular image data determining module comprises:

a size of request for particular image data determining module configured to determine the size of the request for the particular image data at least partially based on an available amount of bandwidth for communication with the image sensor array.

397. (NEW) The device of clause 392, wherein said size of request for particular image data determining module comprises:

a size of request for particular image data determining module configured to determine the size of the request for the particular image data at least partially based on an amount of time that a user device that requested at least a portion of the particular image data has requested image data.

398. (NEW) The device of clause 392, wherein said size of request for particular image data determining module comprises:

a size of request for particular image data determining module configured to determine the size of the request for the particular image data at least partially based on an available amount of bandwidth for communication with a user device that requested at least a portion of the particular image data.
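
By way of non-limiting illustration only, the following Python sketch shows a request size being determined from a device's resolution, its access level, and the available bandwidth, in the manner of clauses 393-398; the tiering, caps, and default values are invented solely for illustration.

```python
# Illustrative sketch only: determining the size of the array-bound request
# from properties of the requesting user device and the available bandwidth
# (clauses 393-398). The tiering, caps, and defaults are assumptions.
def determine_request_size(device_resolution_px: int,
                           device_access_level: int,
                           link_bandwidth_bps: float,
                           frame_interval_s: float = 1.0 / 30,
                           bytes_per_pixel: int = 3) -> int:
    # Never request more pixels than the device can actually display.
    display_cap = device_resolution_px * bytes_per_pixel
    # Keep each frame's payload within what the link carries per frame.
    bandwidth_cap = int(link_bandwidth_bps / 8 * frame_interval_s)
    # Assumed tiering: higher access levels may claim more of the link.
    fraction = {1: 0.25, 2: 0.5, 3: 1.0}.get(device_access_level, 0.25)
    return min(display_cap, int(bandwidth_cap * fraction))
```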

399. (NEW) The device of clause 327, wherein said particular image data from the image sensor array exclusive receiving module comprises:

a particular image data from the image sensor array exclusive receiving module configured to receive only the particular image data from the image sensor array, wherein data from the scene other than the particular image data is discarded.

400. (NEW) The device of clause 327, wherein said particular image data from the image sensor array exclusive receiving module comprises:

a particular image data from the image sensor array exclusive receiving module configured to receive only the particular image data from the image sensor array, wherein data from the scene other than the particular image data is stored at the image sensor array.
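
By way of non-limiting illustration only, the following Python sketch shows the array-side behavior of clauses 399-400, in which only the requested pixels leave the image sensor array and the remainder is either discarded or retained in array-local storage; NumPy slicing stands in for capture hardware, and the names are assumptions.

```python
# Illustrative sketch only, from the image sensor array's side: only the
# requested pixels are returned; the remainder of the scene is either
# discarded (clause 399) or retained in array-local storage (clause 400).
from typing import List, Tuple
import numpy as np

def service_request(full_scene: np.ndarray,
                    region: Tuple[int, int, int, int],
                    local_store: List[np.ndarray],
                    keep_remainder: bool = True) -> np.ndarray:
    x, y, w, h = region
    particular = full_scene[y:y + h, x:x + w].copy()  # the only data sent
    if keep_remainder:
        local_store.append(full_scene)  # remainder stays at the array
    # else: the remainder is simply dropped
    return particular
```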

401. (NEW) The device of clause 327, wherein said particular image data from the image sensor array exclusive receiving module comprises:

a particular image data from the image sensor array exclusive near-real-time receiving module configured to receive the particular image data from the image sensor array in near-real time.

402. (NEW) The device of clause 327, wherein said particular image data from the image sensor array exclusive receiving module comprises:

a particular image data from the image sensor array exclusive near-real-time receiving module configured to receive the particular image data from the image sensor array in near-real time; and

a data from the scene other than the particular image data retrieving at a later time module configured to retrieve data from the scene other than the particular image data at a later time.

403. (NEW) The device of clause 402, wherein said data from the scene other than the particular image data retrieving at a later time module configured to retrieve data from the scene other than the particular image data at a later time comprises:

a data from the scene other than the particular image data retrieving at a later time module configured to retrieve data from the scene other than the particular image data at a time at which bandwidth is available to the image sensor array.

404. (NEW) The device of clause 402, wherein said data from the scene other than the particular image data retrieving at a later time module configured to retrieve data from the scene other than the particular image data at a later time comprises:

a data from the scene other than the particular image data retrieving at a later time module configured to retrieve data from the scene other than the particular image data at a time that represents off-peak usage for the image sensor array.

405. (NEW) The device of clause 402, wherein said data from the scene other than the particular image data retrieving at a later time module configured to retrieve data from the scene other than the particular image data at a later time comprises:

a data from the scene other than the particular image data retrieving at a later time module configured to retrieve data from the scene other than the particular image data at a time when no particular image data is requested.

406. (NEW) The device of clause 402, wherein said data from the scene other than the particular image data retrieving at a later time module configured to retrieve data from the scene other than the particular image data at a later time comprises:

a data from the scene other than the particular image data retrieving at a later time module configured to retrieve data from the scene other than the particular image data at a time at which fewer users are requesting particular image data than for which the image sensor array has capacity.
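
By way of non-limiting illustration only, the following Python sketch shows the remainder of the scene being queued and retrieved only when demand on the image sensor array falls below its capacity, in the manner of clauses 402-406; the queue, the callables, and the polling interval are illustrative assumptions.

```python
# Illustrative sketch only: the particular image data is delivered in near-
# real time, while the remainder of the scene is queued and retrieved later,
# when demand on the image sensor array drops below capacity (clauses
# 402-406). The polling interval and callables are assumptions.
import queue
import time
from typing import Callable

deferred: "queue.Queue" = queue.Queue()  # scene regions not yet retrieved

def defer_remainder(region) -> None:
    deferred.put(region)

def off_peak_worker(current_demand: Callable[[], int],
                    capacity: Callable[[], int],
                    fetch: Callable[[object], None],
                    poll_seconds: float = 5.0) -> None:
    while True:
        if current_demand() < capacity():   # fewer users than capacity
            try:
                region = deferred.get_nowait()
            except queue.Empty:
                pass
            else:
                fetch(region)               # retrieve at the off-peak time
        time.sleep(poll_seconds)
```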

407. (NEW) The device of clause 327, wherein said particular image data from the image sensor array exclusive receiving module comprises:

a particular image data that includes audio data from the image sensor array exclusive receiving module configured to receive only the particular image data that includes audio data from the image sensor array.

408. (NEW) The device of clause 327, wherein said particular image data from the image sensor array exclusive receiving module comprises:

a particular image data that was determined to contain a particular requested image object from the image sensor array exclusive receiving module configured to receive only the particular image data that was determined to contain a particular requested image object from the image sensor array.

409. (NEW) The device of clause 408, wherein said particular image data that was determined to contain a particular requested image object from the image sensor array exclusive receiving module configured to receive only the particular image data that was determined to contain a particular requested image object from the image sensor array comprises:

a particular image data that was determined to contain a particular requested image object from the image sensor array exclusive receiving module configured to receive only the particular image data that was determined to contain a particular requested image object by the image sensor array.

410. (NEW) The device of clause 327, wherein said received particular image data transmitting to at least one requestor module comprises:

a received particular image data transmitting to at least one user device requestor module.

411. (NEW) The device of clause 410, wherein said received particular image data transmitting to at least one user device requestor module comprises:

a received particular image data transmitting to at least one user device requestor module configured to transmit at least a portion of the received particular image data to a user device that requested the particular image data.

412. (NEW) The device of clause 327, wherein said received particular image data transmitting to at least one requestor module comprises:

a separation of the received particular data into a set of one or more requested images executing module configured to separate the received particular image data into a set of one or more requested images; and

an at least one image of the set of one or more requested images transmitting to a particular requestor module configured to transmit at least one image of the set of one or more requested images to a particular requestor that requested the at least one image.

413. (NEW) The device of clause 412, wherein said separation of the received particular data into a set of one or more requested images executing module configured to separate the received particular image data into a set of one or more requested images comprises:

a separation of the received particular data into a first requested image and a second requested image executing module configured to separate the received particular image data into a first requested image and a second requested image.
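
By way of non-limiting illustration only, the following Python sketch shows received particular image data being separated into individual requested images, each transmitted only to the requestor that asked for it, in the manner of clauses 412-413; the coordinate convention and callback form are illustrative assumptions.

```python
# Illustrative sketch only: separating the single blob received from the
# image sensor array into the individual requested images and transmitting
# each one only to the requestor that asked for it (clauses 412-413).
# Coordinates are relative to the received data's own origin; the send
# callbacks are assumptions for illustration.
from typing import Callable, List, Tuple
import numpy as np

Region = Tuple[int, int, int, int]

def separate_and_dispatch(received: np.ndarray,
                          requests: List[Tuple[Callable, Region]]) -> None:
    for send, (x, y, w, h) in requests:
        image = received[y:y + h, x:x + w]   # one requested image
        send(image)                          # only to the party who asked
```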

414. (NEW) The device of clause 327, wherein said received particular image data transmitting to at least one requestor module comprises:

a received particular image data transmitting module configured to transmit the received particular image data to a user device that requested an image that is part of the received particular image data.

415. (NEW) The device of clause 327, wherein said received particular image data transmitting to at least one requestor module comprises:

a first portion of received particular image data transmitting to a first requestor module; and

a second portion of received particular image data transmitting to a second requestor module.

416. (NEW) The device of clause 415, wherein said first portion of received particular image data transmitting to a first requestor module comprises:

a first portion of received particular image data transmitting to a first requestor that requested the first portion module configured to transmit the first portion of the received particular image data to the first requestor that requested the first portion of the received particular image data.

417. (NEW) The device of clause 416, wherein said first portion of received particular image data transmitting to a first requestor that requested the first portion module configured to transmit the first portion of the received particular image data to the first requestor that requested the first portion of the received particular image data comprises:

a first portion of received particular image data transmitting to a first requestor module configured to transmit an image that contains a particular football player to a television device that is configured to display a football game and that requested the image that contains the particular football player.

418. (NEW) The device of clause 415, wherein said second portion of received particular image data transmitting to a second requestor module comprises:

a second portion of received particular image data transmitting to the second requestor that requested the second portion module configured to transmit the second portion of the received particular image data to the second requestor that requested the second portion of the received particular image data.

419. (NEW) The device of clause 418, wherein said second portion of received particular image data transmitting to the second requestor that requested the second portion module configured to transmit the second portion of the received particular image data to the second requestor that requested the second portion of the received particular image data comprises:

a second portion of received particular image data transmitting to the second requestor that requested the second portion module configured to transmit an image that contains a view of a motor vehicle to a tablet device that requested a street view image of a particular corner of a city.

420. (NEW) The device of clause 327, wherein said received particular image data transmitting to at least one requestor module comprises:

a received particular image data unaltered transmitting to at least one requestor module configured to transmit at least a portion of the received particular image data without alteration to at least one requestor.

421. (NEW) The device of clause 327, wherein said received particular image data transmitting to at least one requestor module comprises:

a supplemental data addition to at least a portion of the received particular image data facilitating module configured to add supplemental data to at least a portion of the received particular image data to generate transmission image data; and

a generated transmission image data transmitting to at least one requestor module configured to transmit the generated transmission image data to at least one requestor.

422. (NEW) The device of clause 421, wherein said supplemental data addition to at least a portion of the received particular image data facilitating module configured to add supplemental data to at least a portion of the received particular image data to generate transmission image data comprises:

an advertisement data addition to at least a portion of the received particular image data to generate transmission image data facilitating module.

423. (NEW) The device of clause 422, wherein said advertisement data addition to at least a portion of the received particular image data to generate transmission image data facilitating module comprises:

a context-based advertisement data addition to at least a portion of the received particular image data to generate transmission image data facilitating module configured to add context-based advertisement data that is at least partially based on the received particular image data to at least the portion of the received particular image data.

424. (NEW) The device of clause 423, wherein said context-based advertisement data addition to at least a portion of the received particular image data to generate transmission image data facilitating module configured to add context-based advertisement data that is at least partially based on the received particular image data to at least the portion of the received particular image data comprises:

a context-based advertisement data addition to at least a portion of the received particular image data to generate transmission image data facilitating module configured to add an animal rights donation fund advertisement data that is at least partially based on the received particular image data of a tiger at a jungle oasis, to the received particular image data of the tiger at the jungle oasis.

425. (NEW) The device of clause 421, wherein said supplemental data addition to at least a portion of the received particular image data facilitating module configured to add supplemental data to at least a portion of the received particular image data to generate transmission image data comprises:

a related visual data addition to at least a portion of the received particular image data to generate transmission image data facilitating module configured to add related visual data related to the received particular image data to at least a portion of the received particular image data to generate transmission image data.

426. (NEW) The device of clause 425, wherein said related visual data addition to at least a portion of the received particular image data to generate transmission image data facilitating module configured to add related visual data related to the received particular image data to at least a portion of the received particular image data to generate transmission image data comprises:

a related visual data addition to at least a portion of the received particular image data to generate transmission image data facilitating module configured to add fantasy football statistical data to the received particular image data of a football quarterback that appears in the received particular image data of the quarterback in a football game to generate the transmission image data.
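
By way of non-limiting illustration only, the following Python sketch shows supplemental data (e.g., fantasy statistics or context-based advertisement text) being composited onto the received particular image data to generate transmission image data, in the manner of clauses 421-426; the use of Pillow and the drawing parameters are illustrative assumptions.

```python
# Illustrative sketch only: supplemental data (e.g., fantasy football
# statistics or context-based advertisement text) is drawn onto a copy of
# the received particular image data to generate the transmission image
# data (clauses 421-426). Pillow is an assumed convenience library.
from PIL import Image, ImageDraw

def add_supplemental_data(received: Image.Image, text: str,
                          position=(10, 10)) -> Image.Image:
    transmission = received.copy()       # leave the received data intact
    draw = ImageDraw.Draw(transmission)
    draw.text(position, text, fill=(255, 255, 255))  # supplemental overlay
    return transmission                  # what is sent to the requestor
```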

427. (NEW) The device of clause 327, wherein said received particular image data transmitting to at least one requestor module comprises:

a portion of received particular image data modification to generate transmission image data facilitating module configured to modify data of a portion of the received particular image data to generate transmission image data; and

a generated transmission image data transmitting to at least one requestor module configured to transmit the generated transmission image data to at least one requestor.

428. (NEW) The device of clause 427, wherein said portion of received particular image data modification to generate transmission image data facilitating module configured to modify data of a portion of the received particular image data to generate transmission image data comprises:

a portion of received particular image data image manipulation modification to generate transmission image data facilitating module.

429. (NEW) The device of clause 428, wherein said portion of received particular image data image manipulation modification to generate transmission image data facilitating module comprises:

a portion of received particular image data contrast balancing modification to generate transmission image data facilitating module.

430. (NEW) The device of clause 428, wherein said portion of received particular image data image manipulation modification to generate transmission image data facilitating module comprises:

a portion of received particular image data color balancing modification to generate transmission image data facilitating module.

431. (NEW) The device of clause 427, wherein said portion of received particular image data modification to generate transmission image data facilitating module configured to modify data of a portion of the received particular image data to generate transmission image data comprises:

a received particular image data redaction to generate transmission image data facilitating module.

432. (NEW) The device of clause 431, wherein said received particular image data redaction to generate transmission image data facilitating module comprises:

a received particular image data redaction to generate transmission image data facilitating module configured to redact at least a portion of the received particular image data based on a security clearance level of the requestor.

433. (NEW) The device of clause 432, wherein said received particular image data redaction to generate transmission image data facilitating module configured to redact at least a portion of the received particular image data based on a security clearance level of the requestor comprises:

a received particular image data redaction to generate transmission image data facilitating module configured to redact a tank from the received particular image data that includes a satellite image that includes a military base, based on an insufficient security clearance level of a device that requested the particular image data.
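
By way of non-limiting illustration only, the following Python sketch shows regions tagged with a required clearance being redacted when the requestor's security clearance level is insufficient, in the manner of clauses 432-433; the tagging input is assumed to come from an upstream classifier not shown here.

```python
# Illustrative sketch only: redacting tagged regions (e.g., a tank on a
# military base in a satellite image) when the requestor's security
# clearance level is below the level the region requires (clauses 432-433).
# The tagged_regions input is assumed to come from an upstream classifier.
from typing import List, Tuple
import numpy as np

Region = Tuple[int, int, int, int]

def redact(image: np.ndarray,
           tagged_regions: List[Tuple[Region, int]],
           requestor_clearance: int) -> np.ndarray:
    out = image.copy()
    for (x, y, w, h), required_level in tagged_regions:
        if requestor_clearance < required_level:
            out[y:y + h, x:x + w] = 0   # black out the restricted region
    return out
```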

434. (NEW) The device of clause 327, wherein said received particular image data transmitting to at least one requestor module comprises:

a lower-resolution version of received particular image data transmitting to at least one requestor module configured to transmit a lower-resolution version of the received particular image data to the at least one requestor; and

a full-resolution version of received particular image data transmitting to at least one requestor module configured to transmit a full-resolution version of the received particular image data to the at least one requestor.
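
By way of non-limiting illustration only, the following Python sketch shows a lower-resolution version being transmitted ahead of the full-resolution data, in the manner of clause 434; stride-based subsampling is an assumed stand-in for a real resampling filter.

```python
# Illustrative sketch only: transmitting a lower-resolution version first,
# followed by the full-resolution version (clause 434). Stride-based
# subsampling stands in for a real resampling filter; names are assumptions.
from typing import Callable
import numpy as np

def transmit_progressive(image: np.ndarray,
                         send: Callable[[np.ndarray], None],
                         factor: int = 4) -> None:
    send(image[::factor, ::factor])   # quick lower-resolution preview
    send(image)                       # then the full-resolution version
```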

435. (NEW) A device, comprising:

one or more general purpose integrated circuits configured to receive instructions to configure as a request for particular image data that is part of a scene acquiring module at one or more first particular times;

one or more general purpose integrated circuits configured to receive instructions to configure as a request for particular image data transmitting to an image sensor array module configured to transmit the request to the image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data at one or more second particular times;

one or more general purpose integrated circuits configured to receive instructions to configure as a particular image data from the image sensor array exclusive receiving module at one or more third particular times; and

one or more general purpose integrated circuits configured to receive instructions to configure as a received particular image data transmitting to at least one requestor module at one or more fourth particular times.

436. (NEW) The device of clause 435, wherein said one or more second particular times occur prior to the one or more third particular times and the one or more fourth particular times and after the one or more first particular times.

437. (NEW) A device, comprising:

an integrated circuit configured to purpose itself as a request for particular image data that is part of a scene acquiring module at a first time;

the integrated circuit configured to purpose itself as a request for particular image data transmitting to an image sensor array module configured to transmit the request to the image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data at a second time;

the integrated circuit configured to purpose itself as a particular image data from the image sensor array exclusive receiving module at a third time; and

the integrated circuit configured to purpose itself as a received particular image data transmitting to at least one requestor module at a fourth time.

438. (NEW) A device, comprising:

one or more elements of programmable hardware programmed to function as a request for particular image data that is part of a scene acquiring module;

the one or more elements of programmable hardware programmed to function as a request for particular image data transmitting to an image sensor array module configured to transmit the request to the image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data;

the one or more elements of programmable hardware programmed to function as a particular image data from the image sensor array exclusive receiving module; and

the one or more elements of programmable hardware programmed to function as a received particular image data transmitting to at least one requestor module.

END PRELIMINARY AMENDMENT 27 AUGUST 2015 - 1114-003-007-000000.

STRUCTURED DISCLOSURE DIRECTED TOWARD ONE OF SKILL IN THE ART [END]

This Roman Numeral Section, and the Corresponding Figures, Were Copied from a Pending United States Application into this PCT Application in View of Meeting the Bar Date, So Each Figure Under this Roman Numeral Section Should Be Read as if Having Roman Numeral Denotation Sufficient to Distinguish It from Other Figures Copied from Other Applications in View of the Bar Date, Such Clerical Issues to Be Cured by Subsequent Amendment.

High-Level System Architecture

[0001] Fig. 1, including Figs. 1-A through 1-AN, shows partial views that, when assembled, form a complete view of an entire system, of which at least a portion will be described in more detail. An overview of the entire system of Fig. 1 is now described herein, with a more specific reference to at least one subsystem of Fig. 1 to be described later with respect to Figs. 2-14D.

[0002] Fig. 1 shows various implementations of the overall system. At a high level, Fig. 1 shows various implementations of a multiple user video imaging array (hereinafter interchangeably referred to as a "MUVIA"). It is noted that the designation "MUVIA" is merely shorthand and descriptive of an exemplary embodiment, and not a limiting term. Although "multiple user" appears in the name MUVIA, multiple users or even a single user are not required. Further, "video" is used in the designation "MUVIA," but MUVIA systems also may capture still images, multiple images, audio data, electromagnetic waves outside the visible spectrum, and other data as will be described herein. Further, "imaging array" may be used in the MUVIA designation, but the image sensor in MUVIA is not necessarily an array or even multiple sensors (although commonly implemented as larger groups of image sensors, single-sensor implementations are also contemplated), and "array" here does not necessarily imply any specific structure, but rather any grouping of one or more sensors.

[0003] Generally, although not necessarily required, a MUVIA system may include one or more of a user device (e.g., hereinafter interchangeably referred to as a "client device," in recognition that a user may not necessarily be a human, living, or organic), a server, and an image sensor array. A "server" in the context of this application may refer to any device, program, or module that is not directly connected to the image sensor array or to the client device, including any and all "cloud" storage, applications, and/or processing.

[0004] For example, in an embodiment, e.g., as shown in Fig. 1-A, Fig. 1-K, Fig. 1-U, Fig. 1-AE, and Fig. 1-AF, the system may include one or more of image sensor array 3200, array local storage and processing module 3300, server 4000, and user device 5200. Each of these portions will be discussed in more detail herein.

[0005] Referring now to Fig. 1-A, Fig. 1-A depicts user device 5200, which is a device that may be operated or controlled by a user of a MUVIA system. It is noted here that "user" is merely provided as a designation for ease of understanding, and does not imply control by a human or other organism, sentient or otherwise. In an embodiment, for example, in a security-type embodiment, the user device 5200 may be mostly or completely unmonitored, or may be monitored by an artificial intelligence, or by a combination of artificial intelligence, pseudo-artificial intelligence (e.g., intelligence amplification), and human intelligence.

[0006] User device 5200 may be, but is not limited to, a wearable device (e.g., glasses, goggles, headgear, a watch, clothing), an implant (e.g., a retinal-implant display), a computer of any kind (e.g., a laptop computer, desktop computer, mainframe, server, etc.), a tablet or other portable device, a phone or other similar device (e.g., smartphone, personal digital assistant), a personal electronic device (e.g., music player, CD player), a home appliance (e.g., a television, a refrigerator, or any other so-called "smart" device), a piece of office equipment (e.g., a copier, scanner, fax device, etc.), a camera or other camera-like device, a video game system, an entertainment/media center, or any other electrical equipment that has a functionality of presenting an image (whether visually, e.g., via a screen, or by other sensory means).

[0007] User device 5200 may be capable of presenting an image, which, for purposes of clarity and conciseness, will be referred to as displaying an image, although the presentation may occur through forms other than generating light waves in the visible spectrum, and the image is not required to be presented at all times or even at all. For example, in an embodiment, user device 5200 may receive images from server 4000 (or directly from the image sensor array 3200, as will be discussed herein), and may store the images for later viewing, or for processing internally, or for any other reason.

[0008] Referring again to Fig. 1-A, in an embodiment, user device 5200 may include a user selection accepting module 5210. User selection accepting module 5210 may be configured to receive user input about what the user wants to see. For example, as shown in Fig. 1-A in the exemplary interface 5212, the user selection accepting module 5210 may show an image from image sensor array 3200, and the user may "pan" and "zoom" the image using any known interface, including, but not limited to, keyboard, mouse, touch, haptic, augmented reality interface, voice command, nonverbal motion commands (e.g., as part of a video game system interface, e.g., the Microsoft Kinect). It is noted, and as will be discussed in more detail herein, the camera itself is not "zooming" or "panning," because the camera does not move. What is happening is that different pixels that are captured by the image sensor array 3200 are kept by the image sensor array 3200 and transmitted to the server 4000.
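
By way of a non-limiting illustration of this pixel-selection mechanism, the following minimal Python sketch shows how a pan/zoom interaction might be translated into a rectangular pixel request against the captured scene, with no physical camera motion. The function name and the simple clamping arithmetic are hypothetical assumptions, not the actual implementation.

    def viewport_to_pixel_request(scene_w, scene_h, center_x, center_y, zoom, out_w, out_h):
        # The requested region shrinks as zoom increases; the sensors never move.
        req_w = int(out_w / zoom)
        req_h = int(out_h / zoom)
        # Clamp the region so it stays inside the captured scene.
        left = max(0, min(scene_w - req_w, center_x - req_w // 2))
        top = max(0, min(scene_h - req_h, center_y - req_h // 2))
        return {"left": left, "top": top, "width": req_w, "height": req_h}

    # Example: "zoom in 4x" on a point within a hypothetical 120,000 x 40,000 pixel scene.
    request = viewport_to_pixel_request(120000, 40000, 53000, 21000, 4.0, 1920, 1080)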

[0009] In an embodiment, the user selection accepting module may accept a selection of a particular thing, e.g., a building, an animal, or any other object whose representation is present on the screen. Moreover, a user may use a text box to "search" the image for a particular thing, and processing, done at the user device 5200 or at the server 4000, may determine the image and the zoom level for viewing that thing. The search for a particular thing may include a generic search, e.g., "search for people," or "search for penguins," or a more specific search, e.g., "search for the Space Needle," or "search for the White House." The search for a particular thing may take on any known contextual search, e.g., an address, a text string, or any other input.

[0010] In an embodiment, the "user selection" facilitated by the user selection accepting module 5210 may not involve a user at all. For example, in an embodiment, e.g., in a security embodiment, the user selection may be handled completely by machine, and may include "select any portion of the image with movement," or "select any portion of the image in which a person is recognized," or "select any portion of the image in which a particular person, e.g., a person on the FBI most wanted list, is recognized."

[0011] Referring again to Fig. 1-A, in an embodiment, user device 5200 may include a user selection transmitting module 5220. The user selection transmitting module 5220 may take the user selection from user selection accepting module 5210, and transmit the selection to the server 4000. The transmission may include some pre-processing, for example, the user device 5200 may determine the size and parameters of the image prior to sending the request to the server 4000, or that processing may be handled by the server 4000. Following the thick-line arrow leftward from user selection transmitting module 5220 through to Fig. 1-K, the transmission goes to server 4000, as will be discussed herein. It is noted that the transmission to the server 4000 may also include data about the user device, for example, the screen resolution, the window size, the type of device, an identity of the user, a level of service the user has paid for (in embodiments in which such services are prioritized by the camera/server), other capabilities of the device, e.g., framerate, and the like.
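
A request of this kind might bundle the selection together with device metadata in a single payload. The following Python sketch is a hypothetical illustration; the field names are assumptions, not a defined MUVIA wire format.

    import json

    def build_selection_request(region, device_info):
        # The selection plus device metadata travel together, so the server
        # can validate and right-size the response (resolution, framerate, etc.).
        return json.dumps({
            "region": region,  # e.g., output of viewport_to_pixel_request()
            "screen_resolution": device_info.get("screen_resolution"),
            "device_type": device_info.get("device_type"),
            "user_id": device_info.get("user_id"),
            "service_level": device_info.get("service_level"),
            "max_framerate": device_info.get("max_framerate"),
        })

    payload = build_selection_request(
        {"left": 53000, "top": 21000, "width": 480, "height": 270},
        {"screen_resolution": (1920, 1080), "device_type": "tablet",
         "user_id": "u-42", "service_level": "basic", "max_framerate": 30},
    )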

[0012] Referring again to Fig. 1-A, Fig. 1-A also includes a selected image receiving module 5230 and a user selection presenting module 5240, which will be discussed in more detail herein, with respect to the dataflow of this embodiment.

[0013] Referring now to Fig. 1-K, Figs. 1-K and 1-U show an embodiment of a server 4000 that communicates with one or both of user device 5200 and array local storage and processing module 3300. Server 4000 may be a single computing device, or may be many computing devices, which may or may not be in proximity with each other.

[0014] Referring again to Fig. 1-K, server 4000 may include a user request reception module 4010. The user request reception module 4010 may receive the transmitted request from user selection transmitting module 5220. The user request reception module 4010 may then turn over processing to user request validation module 4020, which may perform, among other things, a check to make sure the user is not requesting more resolution than what their device can handle. For example, if the server has learned (e.g., through gathered information, or through information that was transmitted with the user request or in a same session as the user request) that the user is requesting a 1920x1080 resolution image, and the maximum resolution for the device is 1334x750, then the request will be modified so that no more than the maximum resolution that can be handled by the device is requested. In an embodiment, this may conserve the bandwidth required to transmit from the MUVIA to the server 4000 and/or the user device 5200.
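
A minimal Python sketch of such a validation check appears below; the function name and scaling policy are hypothetical, but the clamping behavior mirrors the example just given.

    def validate_requested_resolution(req_w, req_h, dev_w, dev_h):
        # Never request more pixels than the device can display; this conserves
        # bandwidth between the image sensor array, the server, and the device.
        scale = min(dev_w / req_w, dev_h / req_h, 1.0)
        return int(req_w * scale), int(req_h * scale)

    # A 1920x1080 request from a 1334x750 device is scaled down before forwarding.
    w, h = validate_requested_resolution(1920, 1080, 1334, 750)  # -> (1333, 750)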

[0015] Referring again to Fig. 1-K, in an embodiment, server 4000 may include a user request latency management module 4030. User request latency management module 4030 may, in conjunction with user device 5200, attempt to reduce the latency from the time a specific image is requested by user device 5200 to the time the request is acted upon and data is transmitted to the user. The details for this latency management will be described in more detail herein, with varying techniques that may be carried out by any or all of the devices in the chain (e.g., user device, camera array, and server). As an example, in an embodiment, a lower resolution version of the image, e.g., that is stored locally or on the server, may be sent to the user immediately upon the request, and then that image is updated with the actual image taken by the camera. In an embodiment, user request latency management module 4030 also may handle static gap-filling, that is, if the image captured by the camera is unchanging, e.g., has not changed for a particular period of time, then a new image does not need to be captured, and an older image, that may be stored on server 4000, may be transmitted to the user device 5200. This process also will be discussed in more detail herein.
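
The following Python sketch illustrates, under assumed helper objects (cache, camera_fetch, has_changed), how a low-resolution cached preview and static gap-filling might be combined; it is an illustrative sketch, not the patented method itself.

    def region_key(region):
        return (region["left"], region["top"], region["width"], region["height"])

    def serve_request(region, cache, camera_fetch, has_changed):
        # 1. Immediately return a cached, lower-resolution version if one exists,
        #    so the user sees something while the full request is in flight.
        preview = cache.get(region_key(region))
        # 2. Static gap-filling: if the scene has not changed, the cached image
        #    is already current and no new capture needs to be transmitted.
        if preview is not None and not has_changed(region):
            return preview, None
        # 3. Otherwise fetch the fresh pixels and update the cache.
        fresh = camera_fetch(region)
        cache[region_key(region)] = fresh
        return preview, fresh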

[0016] Referring now to Fig. 1-U, which shows more of server 4000, in an embodiment, server 4000 may include a consolidated user request transmission module 4040, which may be configured to consolidate all the user requests, perform any necessary pre-processing on those requests, and send the request for particular sets of pixels to the array local storage and processing module 3300. The process for consolidating the user requests and performing pre-processing will be described in more detail herein with respect to some of the other exemplary embodiments. In this embodiment, however, consolidated user request transmission module 4040 transmits the request (exiting leftward from Fig. 1-U and traveling downward to Fig. 1-AE, through a pathway identified in Fig. 1-AE as lower-bandwidth communication from remote server 3515). It is noted here that "lower bandwidth communication" does not necessarily mean "low bandwidth" or imply any specific number about the bandwidth; it is simply lower than the relatively higher-bandwidth communication 3505 from the actual image sensor array 3200 to the array local storage and processing module 3300, which will be discussed in more detail herein.

[0017] Referring again to Fig. 1-U, server 4000 also may include requested pixel reception module 4050, user request preparation module 4060, and user request transmission module 4070 (shown in Fig. 1-T), which will be discussed in more detail herein, with respect to the dataflow of this embodiment.

[0018] Referring now to Figs. 1-AE and 1-AF, Figs. 1-AE and 1-AF show an image sensor array ("ISA") 3200 and an array local storage and processing module 3300, each of which will now be described in more detail.

[0019] Image sensor array 3200 may include one or more image sensors that may, in an embodiment, be statically pointed at a particular object or scene. Image sensor array 3200 may be a single image sensor, or more commonly, may be a group of individual image sensors 3201 that are combined to create a larger field of view. For example, in an embodiment, ten megapixel sensors may be used for each individual image sensor 3201. With twelve of these sensors, the effective field of view, loss-less zoom, and so forth may be increased substantially. These numbers are for example only, and any number of sensors and/or megapixel image sensor capacities may be used.

[0020] The use of many individual sensors may create a very large number of pixels captured for each exposure of the image sensor array 3200. Thus, these pixels are transmitted via a higher bandwidth communication 3505 to the array local storage and processing module 3300. In an embodiment, array local storage and processing module 3300 is integrated into the image sensor array 3200. In another embodiment, the array local storage and processing module 3300 is separate from, but directly connected (e.g., via a USB 3.0 cable) to, the image sensor array 3200. It is noted that "higher bandwidth communication 3505" does not require a specific amount of bandwidth, but only that the bandwidth for this communication is relatively higher than the bandwidth of the communication from the array local processing module 3300 to the remote server, which may be, but is not required to be, located further away temporally.

[0021] It is noted that, because of the large number of pixels captured by image sensor array 3200, mechanical changes to the image sensor array are not generally required, although such mechanical changes are not excluded from these embodiments. For example, because the array has a very large field of view, with very high resolution, "pan" and "zoom" functions may be handled optically, rather than by mechanically changing the focal point of the lenses or by physically pointing the array at a different location. This may reduce the complexity required of the device, and also may improve the speed at which different views may be generated by the image sensor array 3200.

[0022] Referring again to Fig. 1-AE, the image sensor array 3200 may capture an image that is received by image capturing module 3305. Image capturing module 3305 may take the captured image and compare it to a consolidated user request, e.g., which is provided by a consolidated user request reception module 3310. Consolidated user request reception module 3310 may receive the communication from server 4000 regarding which pixels of the image have been requested. Through use of the consolidated user request and the captured image, pixel selection module 3320 may select the pixels that have been specifically requested by the user, and mark those pixels for transmission back to the server.

[0023] After the pixels to be kept are identified, the other pixels that are not to be kept are removed, e.g., decimated at unused pixel decimation module 3330. In an embodiment, these pixels are simply discarded, e.g., not stored in a long-term memory, but rather removed to a digital trash 3317. In another embodiment, some or all of these pixels are stored in a local memory, e.g., local memory 3315. From here, these pixels may be transmitted to various locations at off-peak times, may be kept for image processing by the array local processing module 3300, or may be subject to other manipulations or processing separate from the user requests.
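
As a hypothetical illustration of this decimation step, the following Python sketch (using NumPy array slicing) copies out only the requested rectangles and lets the remainder of the exposure be discarded or stored locally. Names and data layout are illustrative assumptions.

    import numpy as np

    def decimate_unused_pixels(frame, kept_regions):
        # frame: H x W (or H x W x channels) NumPy array from one exposure.
        # Only the requested rectangles are copied out for transmission; every
        # other pixel is simply dropped (the "digital trash" path) or could be
        # written to local memory instead.
        kept = []
        for r in kept_regions:
            t, l, w, h = r["top"], r["left"], r["width"], r["height"]
            kept.append(frame[t:t + h, l:l + w].copy())
        return kept  # the original frame can now be discarded or stored locally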

[0024] Referring again to Fig. 1-AE, the selected pixels then may be transmitted to the server 4000 using selected pixel transmission module 3340. Selected pixel transmission module 3340 may include any transmission equipment necessary, e.g., cellular radio, wireless adapter, and the like, depending on the format of communication. In an embodiment, only those pixels which have been requested are transmitted to the server via lower-bandwidth communication 3510. Similarly to lower-bandwidth communication 3515, the lower-bandwidth communication 3510 does not refer to a specific amount of bandwidth, just that the amount of bandwidth is relatively lower than higher-bandwidth communication 3505.

[0025] It is noted that more pixels than what are specifically requested by the user may be transmitted, in certain embodiments. For example, the array local processing module 3300 may send pixels that border the user's requested area, but are outside the user's requested area. In an embodiment, as will be discussed herein, those pixels may be sent at a different resolution or using a different kind of compression. In another embodiment, the additional pixels may merely be sent the same as the requested pixels. In still another embodiment, server 4000 may expand the user requested areas, so that array local processing module 3300 may send only the requested pixels, but the requested pixels cover more area than what the user originally requested. These additional pixels may be transmitted and "cached" by the server or local device, which may be used to decrease latency times, in a process that will be discussed more herein.

[0026] Referring back to Fig. 1-U, the pixels transmitted from selected pixel transmission module 3340 of array local processing module 3300 may be received by server 4000, e.g., at requested pixel reception module 4050. Requested pixel reception module 4050 may receive the requested pixels and turn them over to user request preparation module 4060, which may "unpack" the requested pixels, e.g., determining which pixels go to which user, and at what resolutions, along with any post-processing, including image adjustment, adding in missing cached data, or adding additional data to the images (e.g., advertisements or other data). In an embodiment, server 4000 also may include a user request transmission module 4070, which may be configured to transmit the requested pixels back to the user device 5200.

[0027] Referring again to Fig. 1-A, user device 5200 may include a selected image receiving module 5230, which may receive the pixels that were sent by the server 4000, and user selection presenting module 5240, which may display the requested pixels to the user, e.g., by showing them on a screen of the device. In an embodiment, the display of the image may be carried out through the exemplary interface, which allows a cycle of user requests and new images to be shown as the user navigates through what is seen on the MUVIA, e.g., as shown in Fig. 1-A.

[0028] Figs. 1-B, 1-C, 1-M, 1-W, 1-AG, and 1-AH show another embodiment of the MUVIA system, in which multiple user devices 5510, 5520, and 5530 may request images captured by the same image sensor array 3200.

[0029] Referring now to Figs. 1-B and 1-C, user device 5510, user device 5520, and user device 5530 are shown. In an embodiment, user devices 5510, 5520, and 5530 may have some or all of the same components as user device 5200, but those components are not shown here for clarity and ease of understanding the drawing. For each of user devices 5510, 5520, and 5530, exemplary screen resolutions have been chosen. There is nothing specific about the numbers that have been chosen; they are merely illustrative, and any other numbers could have been chosen in their place.

[0030] For example, in an embodiment, referring to Fig. 1-B, user device 5510 may have a screen resolution of 1920x1080 (e.g., colloquially referred to as "HD quality"). User device 5510 may send an image request to the server 4000, and may also send data regarding the screen resolution of the device.

[0031] Referring now to Fig. 1-C, user device 5520 may have a screen resolution of 1334x750. User device 5520 may send another image request to the server 4000, and, in an embodiment, instead of sending data regarding the screen resolution of the device, may send data that identifies what kind of device it is (e.g., an Apple-branded smartphone). Server 4000 may use this data to determine the screen resolution for user device 5520 through an internal database, or through contacting an external source, e.g., a manufacturer of the device or a third party supplier of data about devices.
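
A hypothetical sketch of this lookup logic follows; the table contents and fallback behavior are illustrative assumptions, not actual device data.

    KNOWN_DEVICE_RESOLUTIONS = {
        "example-phone-model": (1334, 750),
        "example-tablet-model": (2048, 1536),
    }

    def query_external_device_database(model):
        # Placeholder for contacting a manufacturer or third-party data supplier.
        return (640, 480)  # conservative default when nothing better is known

    def resolve_screen_resolution(request):
        # Prefer an explicit resolution in the request; otherwise look the device
        # type up in an internal table, falling back to an external data source.
        if "screen_resolution" in request:
            return tuple(request["screen_resolution"])
        model = request.get("device_type")
        if model in KNOWN_DEVICE_RESOLUTIONS:
            return KNOWN_DEVICE_RESOLUTIONS[model]
        return query_external_device_database(model)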

[0032] Referring again to Fig. 1-C, user device 5530 may have a screen resolution of 640x480, and may send the request by itself to the server 4000, without any additional data. In addition, server 4000 may receive independent requests from various users to change their current viewing area on the device.

[0033] Referring now to Fig. 1-M, server 4000 may include user request reception module 4110. User request reception module 4110 may receive requests from multiple user devices, e.g., user devices 5510, 5520, and 5530. Server 4000 also may include an independent user view change request reception module 4115, which, in an embodiment, may be a part of user request reception module 4110, and may be configured to receive requests from users that are already connected to the system, to change the view of what they are currently seeing.

[0034] Referring again to Fig. 1-M, server 4000 may include relevant pixel selection module 4120 configured to combine the user selections into a single area, as shown in Fig. 1-M. It is noted that, in an embodiment, the different user devices may request areas that overlap each other. In this case, there may be one or more overlapping areas, e.g., overlapping areas 4122. In an embodiment, the overlapping areas are only transmitted once, in order to save data/transmission costs and increase efficiency.
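
As a simplified illustration, this consolidation could be reduced to computing one bounding rectangle over all user selections, so that overlapping pixels are requested and transmitted only once. A real module might instead transmit a tighter union of rectangles; the Python sketch below shows only the simplest form, using the rectangle format assumed in the earlier sketches.

    def consolidate_requests(regions):
        # Combine all user selections into a single bounding rectangle so that
        # overlapping areas are captured and transmitted only once.
        left = min(r["left"] for r in regions)
        top = min(r["top"] for r in regions)
        right = max(r["left"] + r["width"] for r in regions)
        bottom = max(r["top"] + r["height"] for r in regions)
        return {"left": left, "top": top, "width": right - left, "height": bottom - top}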

[0035] Referring now to Fig. 1-W, server 4000 may include selected pixel transmission to ISA module 4130. Module 4130 may take the relevant selected pixels, and transmit them to the array local processing module 3400 of image sensor array 3200. Selected pixel transmission to ISA module 4130 may include communication components, which may be shared with other transmission and/or reception modules.

[0036] Referring now to Fig. 1-AG, array local processing module 3400 may communicate with image sensor array 3200. Similarly to Figs. 1-AE and 1-AF, Figs. 1-AG and 1-AH show array local processing module 3400 and image sensor array 3200, respectively.

[0037] Image sensor array 3200 may include one or more image sensors that may, in an embodiment, be statically pointed at a particular object or scene. Image sensor array 3200 may be a single image sensor, or more commonly, may be a group of individual image sensors 3201 that are combined to create a larger field of view. For example, in an embodiment, ten megapixel sensors may be used for each individual image sensor 3201. With twelve of these sensors, the effective field of view, loss-less zoom, and so forth may be increased substantially. These numbers are for example only, and any number of sensors and/or megapixel image sensor capacities may be used.

[0038] The use of many individual sensors may create a very large number of pixels captured for each exposure of the image sensor array 3200. Thus, these pixels are transmitted via a higher bandwidth communication 3505 to the array local storage and processing module 3400. In an embodiment, array local storage and processing module 3400 is integrated into the image sensor array. In another embodiment, the array local storage and processing module 3400 is separate from, but directly connected (e.g., via a USB 3.0 cable) to, the image sensor array 3200. It is noted that "higher bandwidth communication 3505" does not require a specific amount of bandwidth, but only that the bandwidth for this communication is relatively higher than the bandwidth of the communication from the array local processing module 3400 to the remote server, which may be, but is not required to be, located further away temporally.

[0039] It is noted that, because of the large number of pixels captured by image sensor array 3200, mechanical changes to the image sensor array are not generally required, although such mechanical changes are not excluded from these embodiments. For example, because the array has a very large field of view, with very high resolution, "pan" and "zoom" functions may be handled optically, rather than by mechanically changing the focal point of the lenses or by physically pointing the array at a different location. This may reduce the complexity required of the device, and also may improve the speed at which different views may be generated by the image sensor array 3200.

[0040] Referring again to Fig. 1-AG, the image sensor array 3200 may capture an image and compare it to a consolidated user request, e.g., which is provided by a consolidated user request reception module 3410. Consolidated user request reception module 3410 may receive the communication from server 4000 regarding which pixels of the image have been requested. Through use of the consolidated user request and the captured image, pixel selection module 3420 may select the pixels that have been specifically requested by the user, and mark those pixels for transmission back to the server.

[0041] After the pixels to be kept are identified, the other pixels that are not to be kept are removed, e.g., decimated at unused pixel decimation module 3430. In an embodiment, these pixels are simply discarded, e.g., not stored in a long-term memory, but rather removed to a digital trash 3417. In another embodiment, some or all of these pixels are stored in a local memory, e.g., local memory 3415. From here, these pixels may be transmitted to various locations at off-peak times, may be kept for image processing by the array local processing module 3400, or may be subject to other manipulations or processing separate from the user requests.

[0042] Referring again to Fig. 1-AG, array local processing module 3400 may include flagged selected pixel transmission module 3440, which takes the pixels identified as requested (e.g., "flagged") and transmits them back to the server 4000 for further processing. Similarly to as previously described, this transmission may utilize a lower-bandwidth channel, and module 3440 may include all necessary hardware to effect that lower-bandwidth transmission to server 4000.

[0043] Referring again to Fig. 1-W, the flagged selected pixel transmission module 3440 of array local processing module 3400 may transmit the flagged pixels to server 4000. Specifically, flagged selected pixel transmission module 3440 may transmit the pixels to flagged selected pixel reception from ISA module 4140 of server 4000, as shown in Fig. 1-W.

[0044] Referring again to Fig. 1-W, server 4000 also may include flagged selected pixel separation and duplication module 4150, which may, effectively, reverse the process of combining the pixels from the various selections, duplicating overlapping areas where necessary, and creating the requested images for each of the user devices that requested images. Flagged selected pixel separation and duplication module 4150 also may perform the post-processing done to the image, including filling in cached versions of images, image adjustments based on the device preferences and/or the user preferences, and any other image post-processing.
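
A hypothetical Python sketch of this separation step, assuming the consolidated pixels arrive as a NumPy array positioned by the consolidated rectangle from the earlier sketch, follows.

    def separate_for_users(consolidated_pixels, consolidated_region, user_requests):
        # Reverse the consolidation: crop each user's rectangle back out of the
        # single transmitted block, duplicating any overlapping areas as needed.
        per_user = {}
        for user_id, r in user_requests.items():
            row = r["top"] - consolidated_region["top"]
            col = r["left"] - consolidated_region["left"]
            per_user[user_id] = consolidated_pixels[row:row + r["height"],
                                                    col:col + r["width"]].copy()
        return per_user  # post-processing (scaling, cached fill-in) would follow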

[0045] Referring now to Fig. 1-M (as data flows "northward" from Fig. 1-W from module 4150), server 4000 may include pixel transmission to user device module 4160, which may be configured to transmit the pixels that have been separated out and processed to the specific users that requested the image. Pixel transmission to user device module 4160 may handle the transmission of images to the user devices 5510, 5520, and 5530. In an embodiment, pixel transmission to user device module 4160 may have some or all components in common with user request reception module 4110.

[0046] Following the arrow of data flow to the right and upward from module 4160 of server 4000, the requested user images arrive at user device 5510, user device 5520, and user device 5530, as shown in Figs. 1-B and 1-C. The user devices 5510, 5520, and 5530 may present the received images as previously discussed and/or as further discussed herein.

[0047] Referring again to Fig. 1, Figs. 1-E, 1-O, 1-Y, 1-AH, and 1-AI depict a MUVIA implementation according to an embodiment. In an embodiment, referring now to Fig. 1-E, a user device 5600 may include a target selection reception module 5610. Target selection reception module 5610 may be a component that allows the user to select a "target" from the image, that is, a point of interest from the image. For example, in the shown example, the MUVIA array is pointed at a football stadium, e.g., CenturyLink Field. As an example, a user may select one of the football players visible on the field as a "target." This may be facilitated by a target presentation module, e.g., target presentation module 5612, which may present one or more images (e.g., which may be various versions of images from MUVIA, at different resolutions or not up-to-date) from which the user may select the target, e.g., the football player.

[0048] In an embodiment, target selection reception module 5610 may include an audible target selection module 5614 which may be configured to allow the user to select a target using audible commands, without requiring physical interaction with a device.

[0049] Referring again to Fig. 1, e.g., Fig. 1-E, in an embodiment, user device 5600 may include selected target transmission module 5620. Selected target transmission module 5620 may be configured to take the target selected by the user, and transmit the selected target to the server 4000.

[0050] Referring now to Fig. 1-O, Fig. 1-O (and Fig. 1-Y to the direct "south" of Fig. 1-O) shows an embodiment of server 4000. For example, in an embodiment, server 4000 may include a selected target reception module 4210. In an embodiment, selected target reception module 4210 of server 4000 may receive the selected target from the user device 5600. The selected target data may take various formats, e.g., it may be image data, it may be metadata that identifies the selected target, it may be some other designation, e.g., an ID number, a tracking number, or a piece of information, like a license plate or a social security number. The selected target data may be an address or a physical description, or any other instantiation of data that can be used to identify something.

[0051] Referring again to Fig. 1-O, in an embodiment, server 4000 may include selected target identification module 4220, which may be configured to take the target data received by selected target reception module 4210 and determine an image that needs to be captured in order to obtain an image that contains the selected target (e.g., in the shown example, the football player). In an embodiment, selected target identification module 4220 may use images previously received (or, in an embodiment, current images) from the image sensor array 3200 to determine the parameters of an image that contains the selected target. For example, in an embodiment, lower-resolution images from the image sensor array 3200 may be transmitted to server 4000 for determining where the target is located within the image, and then specific requests for portions of the image may be transmitted to the image sensor array 3200, as will be discussed herein.

[0052] In an embodiment, server 4000 may perform processing on the selected target data, and/or on image data that is received, in order to create a request that is to be transmitted to the image sensor array 3200. In the given example, the selected target data is a football player. The server 4000, that is, selected target identification module 4220, may perform image recognition on one or more images captured from the image sensor array to determine a "sector" of the entire scene that contains the selected target. In another embodiment, the selected target identification module 4220 may use other, external sources of data to determine where the target is. In yet another embodiment, the selected target data was selected by the user from the scene displayed by the image sensor array, so such processing may not be necessary.

[0053] Referring again to Fig. 1-O, in an embodiment, server 4000 may include pixel information selection module 4230, which may select the pixels needed to capture the target, and which may determine the size of the image that should be transmitted from the image sensor array. The size of the image may be determined based on a type of target that is selected, one or more parameters (set by the user, by the device, or by the server, which may or may not be based on the selected target), by the screen resolution of the device, or by any other algorithm. Pixel information selection module 4230 may determine the pixels to be captured in order to express the target, and may update based on changes in the target's status (e.g., if the target is moving, e.g., in the football example, once a play has started and the football player is moving in a certain direction).
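
The following Python sketch illustrates one way such a selection might be computed from a tracked target position; the window sizes per target type are invented purely for illustration.

    def pixels_for_target(target_x, target_y, target_kind, scene_w, scene_h):
        # The size of the requested window can depend on the kind of target
        # (a person needs less area than a building) and on device parameters.
        window = {"person": (400, 600), "vehicle": (800, 500), "building": (2000, 1500)}
        w, h = window.get(target_kind, (800, 600))
        left = max(0, min(scene_w - w, target_x - w // 2))
        top = max(0, min(scene_h - h, target_y - h // 2))
        return {"left": left, "top": top, "width": w, "height": h}

    # As the tracked football player moves, the request is simply recomputed:
    request = pixels_for_target(61200, 18400, "person", 120000, 40000)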

[0054] Referring now to Fig. 1-Y, Fig. 1-Y includes more of server 4000 according to an embodiment. In an embodiment, server 4000 may include pixel information transmission to ISA module 4240. Pixel information transmission to ISA module 4240 may transmit the selected pixels to the array local processing module 3500 associated with image sensor array 3200.

[0055] Referring now to Figs. 1-AH and 1-AI, Fig. 1-AH depicts an image sensor array 3200, which in this example is pointed at a football stadium, e.g., CenturyLink Field. Image sensor array 3200 may include one or more image sensors that may, in an embodiment, be statically pointed at a particular object or scene. Image sensor array 3200 may be a single image sensor, or more commonly, may be a group of individual image sensors 3201 that are combined to create a larger field of view. For example, in an embodiment, ten megapixel sensors may be used for each individual image sensor 3201. With twelve of these sensors, the effective field of view, loss-less zoom, and so forth may be increased substantially. These numbers are for example only, and any number of sensors and/or megapixel image sensor capacities may be used.

[0056] The use of many individual sensors may create a very large number of pixels captured for each exposure of the image sensor array 3200. Thus, these pixels are transmitted via a higher bandwidth communication 3505 to the array local storage and processing module 3500. In an embodiment, array local storage and processing module 3500 is integrated into the image sensor array. In another embodiment, the array local storage and processing module 3500 is separate from, but directly connected (e.g., via a USB 3.0 cable) to, the image sensor array 3200. It is noted that "higher bandwidth communication 3505" does not require a specific amount of bandwidth, but only that the bandwidth for this communication is relatively higher than the bandwidth of the communication from the array local processing module 3500 to the remote server, which may be, but is not required to be, located further away temporally.

[0057] It is noted that, because of the large number of pixels captured by image sensor array 3200, mechanical changes to the image sensor array are not generally required, although such mechanical changes are not excluded from these embodiments. For example, because the array has a very large field of view, with very high resolution, "pan" and "zoom" functions may be handled optically, rather than by mechanically changing the focal point of the lenses or by physically pointing the array at a different location. This may reduce the complexity required of the device, and also may improve the speed at which different views may be generated by the image sensor array 3200.

[0058] Referring again to Fig. 1-AE, the image sensor array 3200 may capture an image that is received by image capturing module 3305. Image capturing module 3305 may take the captured image and compare it to a consolidated user request, e.g., which is provided by a consolidated user request reception module 3510. Consolidated user request reception module 3510 may receive the communication from server 4000 regarding which pixels of the image have been requested. Through use of the consolidated user request and the captured image, pixel selection module 3320 may select the pixels that have been specifically requested by the user, and mark those pixels for transmission back to the server.

[0059] After the pixels to be kept are identified, the other pixels that are not to be kept are removed, e.g., decimated at unused pixel decimation module 3330. In an embodiment, these pixels are simply discarded, e.g., not stored in a long-term memory, but rather removed to a digital trash 3317. In another embodiment, some or all of these pixels are stored in a local memory, e.g., local memory 3315. From here, these pixels may be transmitted to various locations at off-peak times, may be kept for image processing by the array local processing module 3300, or may be subject to other manipulations or processing separate from the user requests. In an embodiment, unused pixel decimation module 3330 may include or communicate with a lower resolution module 3314, which may, in some embodiments, be used to transmit a lower-resolution version of more of the image (e.g., an entire scene, or more of the field of view surrounding the target) to the server 4000, so that the server 4000 may accurately determine which images are required to capture the target selected by the user.

[0060] Referring again to Fig. 1-AE, the selected pixels then may be transmitted to the server 4000 using selected pixel transmission module 3340. Selected pixel transmission module 3340 may include any transmission equipment necessary, e.g., cellular radio, wireless adapter, and the like, depending on the format of communication. In an embodiment, only those pixels which have been requested are transmitted to the server via lower-bandwidth communication 3510. Similarly to lower-bandwidth communication 3515, the lower-bandwidth communication 3510 does not refer to a specific amount of bandwidth, just that the amount of bandwidth is relatively lower than higher-bandwidth communication 3505.

[0061] Referring now again to Fig. 1-Y, server 4000 may include a requested image reception from ISA module 4250. Requested image reception from ISA module 4250 may receive the image data from the array local processing module 3500 (e.g., in the arrow coming "north" from Fig. 1-AI). That image, as depicted in Fig. 1-Y, may include the target (e.g., the football player), as well as some surrounding area (e.g., the area of the field around the football player). The "surrounding area" and the specifics of what is included/transmitted from the array local processing module may be specified by the user (directly or indirectly, e.g., through a set of preferences), or may be determined by the server, e.g., in the pixel information selection module 4230 (shown in Fig. 1-O).

[0062] Referring again to Fig. 1-Y, server 4000 may also include a requested image transmission to user device module 4260. Requested image transmission to user device module 4260 may transmit the requested image to the user device 5600. Requested image transmission to user device module 4260 may include components necessary to communicate with user device 5600 and may, in some embodiments, share components with one or more other modules of server 4000, e.g., a network interface, or a wireless antenna.

[0063] Referring again to Fig. 1-Y, server 4000 may include a server cached image updating module 4270. Server cached image updating module 4270 may take the images received from the array local processing module 3500 (e.g., which may include the image to be sent to the user), and compare the received images with stored or "cached" images on the server, in order to determine if the cached images should be updated. This process may happen frequently or infrequently, depending on the embodiment, and may be continuously ongoing as long as there is a data connection, in some embodiments. In some embodiments, the frequency of the process may depend on the available bandwidth to the array local processing module 3500, that is, at off-peak times, the frequency may be increased. In an embodiment, server cached image updating module 4270 compares an image received from the array local processing module 3500, and, if the image has changed, replaces the cached version of the image with the newer image.
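
A minimal sketch of such a comparison, using content digests rather than full pixel-by-pixel comparison, follows; the cache layout is an illustrative assumption.

    import hashlib

    def update_cached_image(cache, key, received_bytes):
        # Replace the cached version only when the newly received image differs,
        # here detected by comparing digests rather than full pixel buffers.
        new_digest = hashlib.sha256(received_bytes).hexdigest()
        old = cache.get(key)
        if old is None or old["digest"] != new_digest:
            cache[key] = {"digest": new_digest, "image": received_bytes}
            return True   # cache was updated
        return False      # image unchanged; keep the existing cached copy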

[0064] Referring now again to Fig. 1-E, Fig. 1-E shows user device 5600. In an embodiment, user device 5600 includes image containing selected target receiving module 5630 that may be configured to receive the image from server 4000, e.g., from requested image transmission to user device module 4260 of server 4000 (e.g., depicted in Fig. 1-Y, with the data transmission indicated by a rightward-upward arrow passing through Fig. 1-Y and Fig. 1-O (to the north) before arriving at Fig. 1-E).

[0065] Referring again to Fig. 1-E, Fig. 1-E shows received image presentation module 5640, which may display the requested pixels that include the selected target to the user, e.g., by showing them on a screen of the device. In an embodiment, the display of the image may be carried out through an exemplary interface that allows the user to monitor the target and that also may display information about the target (e.g., in an embodiment, as shown in the figures, the game statistics for the football player). This interface allows a cycle of user requests and new images to be shown as the user navigates through what is seen on the MUVIA, e.g., as shown in Fig. 1-A.

[0066] Referring again to Fig. 1, Figs. 1-F, 1-P, 1-Z, and 1-AJ depict a MUVIA implementation according to an embodiment. This embodiment may be colloquially known as "live street view," in which one or more MUVIA systems allow a user to move through an area similarly to the well-known Google-branded Maps (or Google Street View), except with the cameras working in real time. For example, in an embodiment, referring now to Fig. 1-F, a user device 5700 may include a target selection reception module 5710. Target selection reception module 5710 may be a component that allows the user to select a "target" from the image, that is, a point of interest from the image. For example, in the shown example, the MUVIA may be focused on a city, and the target may be an address, a building, a car, or a person in the city. As an example, a user may select a street address as a "target." This may be facilitated by a target presentation module, e.g., image selection presentation module 5712, which may present one or more images (e.g., which may be various versions of images from MUVIA, at different resolutions or not up-to-date) from which the user may select the target, e.g., the street address. In an embodiment, image selection presentation module 5712 may use static images that may or may not be sourced by the MUVIA system, and, in another embodiment, image selection presentation module 5712 may use current or cached views from the MUVIA system.

[0067] In an embodiment, image selection presentation module 5712 may include an audible target selection module 5714 which may be configured to allow the user to select a target using audible commands, without requiring physical interaction with a device.

[0068] Referring again to Fig. 1, e.g., Fig. 1-F, in an embodiment, user device 5700 may include selected target transmission module 5720. Selected target transmission module 5720 may be configured to take the target selected by the user, and transmit the selected target to the server 4000.

[0069] Referring now to Fig. 1-P, Fig. 1-P depicts a server 4000 of the MUVIA system according to embodiments. In an embodiment, server 4000 may include a selected target reception module 4310. Selected target reception module 4310 may receive the selected target from the user device 5700. In an embodiment, server 4000 may provide all or most of the data that facilitates the selection of the target, that is, the images and the interface, which may be provided, e.g., through a web portal.

[0070] Referring again to Fig. 1-P, in an embodiment, server 4000 may include a selected image pre-processing module 4320. Selected image pre-processing module 4320 may perform one or more tasks of pre-processing the image, some of which are described herein for exemplary purposes. For example, in an embodiment, selected image pre-processing module 4320 may include a resolution determination module 4322 which may be configured to determine the resolution for the image in order to show the target (and here, resolution is merely a stand-in for any facet of the image, e.g., color depth, size, shadow, pixelation, filter, etc.). In an embodiment, selected image pre-processing module 4320 may include a cached pixel fill-in module 4324. Cached pixel fill-in module 4324 may be configured to manage which portions of the requested image are recovered from a cache, and which are updated, in order to improve performance. For example, if a view of a street is requested, certain features of the street (e.g., buildings, trees, etc.) may not need to be retrieved each time, but can be filled in with a cached version, or, in another embodiment, can be filled in by an earlier version. A check can be done to see if a red parked car is still in the same spot as it was an hour ago; if so, that part of the image may not need to be updated. Using lower resolution/prior images stored in a memory 4215, as well as other image processing techniques, cached pixel fill-in module 4324 determines which portions of the image do not need to be retrieved, thus reducing the bandwidth load on the connection between the array local processing module 3600 and the server 4000.
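
One hypothetical way to realize this fill-in decision is tile-based: split the requested view into tiles and fetch only the tiles that have changed. The Python sketch below assumes a caller-supplied change test and is illustrative only.

    def plan_tile_refresh(requested_tiles, cache, changed):
        # Serve unchanged tiles from cache and fetch only the tiles that need
        # fresh pixels, reducing the bandwidth load between the array local
        # processing module and the server.
        from_cache, to_fetch = [], []
        for tile in requested_tiles:
            if tile in cache and not changed(tile):
                from_cache.append(tile)   # e.g., the parked red car, buildings
            else:
                to_fetch.append(tile)     # e.g., moving pedestrians, vehicles
        return from_cache, to_fetch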

[0071] Referring again to Fig. 1-P, in an embodiment, selected image pre-processing module 4320 of server 4000 may include a static object obtaining module 4326, which may operate similarly to cached pixel fill-in module 4324. For example, as in the example shown in Fig. 1-B, static object obtaining module 4326 may obtain prior versions of static objects, e.g., buildings, trees, fixtures, landmarks, etc., which may save bandwidth load on the connection between the array local processing module 3600 and the server 4000.

[0072] Referring again to Fig. 1-P, in an embodiment, pixel information transmission to ISA module 4330 may transmit the request for pixels (e.g., an image, after the pre-processing) to the array local processing module 3600 (e.g., as shown in Figs. 1-Z and 1-AJ, with the downward extending dataflow arrow).

[0073] Referring now to Figs. 1-Z and 1-AJ, in an embodiment, an array local processing module 3600, that may be connected by a higher bandwidth connection to an image sensor array 3200, may be present.

[0074] Image sensor array 3200 may include one or more image sensors that may, in an embodiment, be statically pointed at a particular object or scene. Image sensor array 3200 may be a single image sensor, or more commonly, may be a group of individual image sensors 3201 that are combined to create a larger field of view. For example, in an embodiment, ten megapixel sensors may be used for each individual image sensor 3201. With twelve of these sensors, the effective field of view, loss-less zoom, and so forth may be increased substantially. These numbers are for example only, and any number of sensors and/or megapixel image sensor capacities may be used.

[0075] The use of many individual sensors may create a very large number of pixels captured for each exposure of the image sensor array 3200. Thus, these pixels are transmitted via a higher bandwidth communication 3605 to the array local storage and processing module 3600. In an embodiment, array local storage and processing module 3600 is integrated into the image sensor array. In another embodiment, the array local storage and processing module 3600 is separate from, but directly connected (e.g., via a USB 3.0 cable) to, the image sensor array 3200. It is noted that "higher bandwidth communication 3605" does not require a specific amount of bandwidth, but only that the bandwidth for this communication is relatively higher than the bandwidth of the communication from the array local processing module 3600 to the remote server, which may be, but is not required to be, located further away temporally.

[0076] It is noted that, because of the large number of pixels captured by image sensor array 3200, mechanical changes to the image sensor array are not generally required, although such mechanical changes are not excluded from these embodiments. For example, because the array has a very large field of view, with very high resolution, "pan" and "zoom" functions may be handled optically, rather than by mechanically changing the focal point of the lenses or by physically pointing the array at a different location. This may reduce the complexity required of the device, and also may improve the speed at which different views may be generated by the image sensor array 3200.

[0077] Referring again to Fig. 1-AJ and Fig. 1-Z, the image sensor array 3200 may capture an image that is received by image capturing module 3605. Image capturing module 3605 may take the captured image and compare it to a consolidated user request, e.g., which is provided by a consolidated user request reception module 3610. Consolidated user request reception module 3610 may receive the communication from server 4000 regarding which pixels of the image have been requested. Through use of the consolidated user request and the captured image, pixel selection module 3620 may select the pixels that have been specifically requested by the user, and mark those pixels for transmission back to the server.

[0078] After the pixels to be kept are identified, the other pixels that are not to be kept are removed, e.g., decimated at unused pixel decimation module 3630. In an embodiment, these pixels are simply discarded, e.g., not stored in a long-term memory, but rather removed to a digital trash 3617. In another embodiment, some or all of these pixels are stored in a local memory, e.g., local memory 3615. From here, these pixels may be transmitted to various locations at off-peak times, may be kept for image processing by the array local processing module 3600, or may be subject to other manipulations or processing separate from the user requests. In an embodiment, unused pixel decimation module 3630 may include or communicate with a lower resolution module 3614, which may, in some embodiments, be used to transmit a lower-resolution version of more of the image (e.g., an entire scene, or more of the field of view surrounding the target) to the server 4000, so that the server 4000 may accurately determine which images are required to capture the target selected by the user.

[0079] Referring again to Fig. 1-AJ, the selected pixels then may be transmitted to the server 4000 using selected pixel transmission module 3640. Selected pixel transmission module 3640 may include any transmission equipment necessary, e.g., cellular radio, wireless adapter, and the like, depending on the format of communication. In an embodiment, only those pixels which have been requested are transmitted to the server via lower-bandwidth communication 3510. Similarly to lower-bandwidth communication 3515, the lower-bandwidth communication 3510 does not refer to a specific amount of bandwidth, just that the amount of bandwidth is relatively lower than higher-bandwidth communication 3605.

[0080] It is noted that more pixels than what are specifically requested by the user may be transmitted, in certain embodiments. For example, the array local processing module 3600 may send pixels that border the user's requested area, but are outside the user's requested area. In an embodiment, as will be discussed herein, those pixels may be sent at a different resolution or using a different kind of compression. In another embodiment, the additional pixels may merely be sent the same as the requested pixels. In still another embodiment, server 4000 may expand the user requested areas, so that array local processing module 3600 may send only the requested pixels, but the requested pixels cover more area than what the user originally requested. These additional pixels may be transmitted and "cached" by the server or local device, which may be used to decrease latency times, in a process that will be discussed more herein.

[0081] Referring now again to Fig. 1-P, in an embodiment, server 4000 may include image receiving from ISA module 4340. Image receiving from ISA module 4340 may receive the image data from the array local processing module 3600 (e.g., in the arrow coming "north" from Fig. 1-AJ via Fig. 1-Z). The image may include the pixels that were requested from the image sensor array 3200. In an embodiment, server 4000 also may include received image post-processing module 4350, which may, among other post-processing tasks, fill in objects and pixels into the image that were determined not to be needed by selected image pre-processing module 4320, as previously described. In an embodiment, server 4000 may include received image transmission to user device module 4360, which may be configured to transmit the requested image to the user device 5700. Received image transmission to user device module 4360 may include components necessary to communicate with user device 5700 and may, in some embodiments, share components with one or more other modules of server 4000, e.g., a network interface, or a wireless antenna.

[0082] Referring now again to Fig. 1-F, user device 5700 may include a server image reception module 5730. Server image reception module 5730 may receive an image sent by the server 4000, and a user selection presenting module 5240 may display the requested pixels to the user, e.g., by showing them on a screen of the device. In an embodiment, the display of the image may be carried out through the exemplary interface, which allows a cycle of user requests and new images to be shown as the user navigates through what is seen on the MUVIA, e.g., as shown in Fig. 1-F.

[0083] In an embodiment, as shown in Figs. 1-F and 1-G, server image reception module 5730 may include an audio stream reception module 5732 and a video stream reception module 5734. In an embodiment, as discussed throughout this application, the MUVIA system may capture still images, video, and also sound, as well as other electromagnetic waves and other signals and data. In an embodiment, the audio signals and the video signals may be handled together, or they may be handled separately, as separate streams. Although not every module in the instant diagram separately shows audio streams and video streams, it is noted here that all implementations of MUVIA contemplate both audio and video coverage, as well as still image and other data collection.

[0084] Referring now to Fig. 1-G, which shows another portion of user device 5700, user device 5700 may include a display 5755 and a memory 5765, which may be used to facilitate presentation and/or storage of the received images.

[0085] Figs. 1-H, 1-R, 1-AA, and 1-AB show an embodiment of a MUVIA implementation. For example, referring now to Fig. 1-H, Fig. 1-H shows an embodiment of a user device 5800. For exemplary purposes, the user device 5800 may be an augmented reality device that shows a user looking down a "street" at which the user is not actually present, e.g., a "virtual tourism" experience in which the user may use their augmented reality device (e.g., goggles, e.g., an Oculus Rift-type headgear device), which may be a wearable computer. It is noted that this embodiment is not limited to wearable computers or augmented reality, but, as in all of the embodiments described in this disclosure, may be any device. The use of a wearable augmented/virtual reality device is merely for illustrative and exemplary purposes.

[0086] In an embodiment, user device 5800 may have a field of view 5810, as shown in Fig. 1-H. The field of view for the user 5810 may be illustrated in Fig. 1-H as follows. The most internal rectangle, shown by the dot hatching, represents the user's "field of view" as they look at their "virtual world." The second most internal rectangle, with the straight line hatching, represents the "nearest" objects to the user, that is, a range where the user is likely to "look" next, by turning their head or moving their eyes. In an embodiment, this area of the image may already be loaded on the device, e.g., through use of a particular codec, which will be discussed in more detail herein. The outermost rectangle, which is the image without hatching, represents further outside the user's viewpoint. This area, too, may already be loaded on the device. By loading areas where the user may eventually look, the system can reduce latency and make a user's motions, e.g., movement of head, eyes, and body, appear "natural" to the system.

[0087] Referring now to Figs. 1-AA and 1-AB, these figures show an array local processing module 3700 that is connected to an image sensor array 3200 (e.g., as shown in Fig. 1-AK, and "viewing" a city as shown in Fig. 1-AJ). The image sensor array 3200 may operate as previously described in this document. In an embodiment, array local processing module 3700 may include a captured image receiving module 3710, which may receive the entire scene captured by the image sensor array 3200, through the higher-bandwidth communication channel 3505. As described previously in this application, these pixels may be "cropped" or "decimated" into the relevant portion of the captured image, as described by one or more of the user device 5800, the server 4000, and the processing done at the array local processing module 3700. This process may occur as previously described. The relevant pixels may be handled by relevant portion of captured image receiving module 3720.

[0088] Referring now to Fig. 1-AB, in an embodiment, the relevant pixels for the image that are processed by relevant portion of captured image receiving module 3720 may be encoded using a particular codec at relevant portion encoding module 3730. In an embodiment, the codec may be configured to encode the innermost rectangle, e.g., the portion that represents the current user's field of view, e.g., portion 3716, at a higher resolution, or a different compression, or a combination of both. The codec may be further configured to encode the second rectangle, e.g., with the vertical line hatching, e.g., portion 3714, at a different resolution and/or a different (e.g., a higher) compression. Similarly, the outermost portion of the image, e.g., the clear portion 3712, may again be coded at still another resolution and/or a different compression. In an embodiment, the codec itself handles the algorithm for encoding the image, and as such, in an embodiment, the codec may include information about user device 5800.
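
A hypothetical Python sketch of such zone-based encoding follows; the quality values are invented, encode() stands in for the codec, and overlap between the zones is ignored for brevity.

    def crop(image, r):
        # image: H x W (or H x W x channels) array; r: rectangle dict.
        return image[r["top"]:r["top"] + r["height"], r["left"]:r["left"] + r["width"]]

    def encode_by_zone(image, gaze_rect, near_rect, encode):
        # encode(region_pixels, quality) stands in for the codec; the innermost
        # zone gets the highest quality, the periphery the strongest compression.
        zones = [
            (gaze_rect, 95),   # current field of view (portion 3716)
            (near_rect, 70),   # where the user is likely to look next (3714)
            (None, 40),        # the remaining periphery (3712)
        ]
        encoded = []
        for rect, quality in zones:
            region = image if rect is None else crop(image, rect)
            encoded.append(encode(region, quality))
        return encoded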

[0089] As shown in Fig. 1-AB, the encoded portion of the image, including portions 3716, 3714, and 3712, may be transmitted using encoded relevant portion transmitting module 3740. It is noted that "lower compression," "more compression," and "higher compression," are merely used as one example for the kind of processing done by the codec. For example, instead of lower compression, a different sampling algorithm or compacting algorithm may be used, or a lossier algorithm may be implemented for various parts of the encoded relevant portion.

[0090] Referring now to Fig. 1-R, Fig. 1-R depicts a server 4000 in a MUVIA system according to an embodiment. For example, as shown in Fig. 1-R, server 4000 may include, in addition to portions previously described, an encoded image receiving module 4410. Encoded image receiving module 4410 may receive the encoded image, encoded as previously described, from encoded relevant portion transmitting module 3740 of array local processing module 3700.

[0091] Referring again to Fig. 1-R, server 4000 may include an encoded image transmission controlling module 4420. Encoded image transmission controlling module 4420 may transmit portions of the image to the user device 5800. In an embodiment, at least partially depending on the bandwidth and the particulars of the user device, the server may send all of the encoded image to the user device and let the user device decode the portions as needed, or may decode the image and send portions piecemeal, or with a different encoding, depending on the needs of the user device and the complexity that can be handled by the user device.
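
By way of a non-limiting illustration, the following Python sketch shows one possible decision rule of the kind encoded image transmission controlling module 4420 might apply when choosing between the two behaviors described above; the threshold, unit conversion, and function names are invented for illustration and are not taken from the disclosure.

    def choose_transmission_strategy(device_can_decode: bool,
                                     bandwidth_mbps: float,
                                     encoded_size_mb: float) -> str:
        # If the device can decode the codec itself and the link can carry the
        # whole encoded image quickly, forward it untouched; otherwise decode
        # at the server and send only the pieces the device needs.
        transfer_seconds = encoded_size_mb * 8.0 / bandwidth_mbps  # megabytes -> megabits
        if device_can_decode and transfer_seconds < 1.0:
            return "forward_encoded_whole"
        return "decode_and_send_piecemeal"

    print(choose_transmission_strategy(True, 100.0, 8.0))   # forward_encoded_whole
    print(choose_transmission_strategy(False, 5.0, 8.0))    # decode_and_send_piecemeal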

[0092] Referring again to Fig. 1-H, user device 5800 may include an encoded image transmission receiving module 5720, which may be configured to receive the image that is coded in a particular way, e.g., as will be disclosed in more detail herein. As also shown in Fig. 1-H, user device 5800 may include an encoded image processing module 5830 that may handle the processing of the image, that is, encoding and decoding portions of the image, or other processing necessary to provide the image to the user.

[0093] Referring now to Fig. 1-AL, Fig. 1-AL shows an implementation of an Application Programming Interface (API) for the various MUVIA components. Specifically, image sensor array API 7800 may include, among other elements, a programming specification 7810, that may include, for example, libraries, classes, specifications, templates, or other coding elements that generally make up an API, and an access authentication module 7820 that governs API access to the various image sensor arrays. The API allows third party developers to access the workings of the image sensor array and the array local processing module 3700, so that the third party developers can write applications for the array local processing module 3700, as well as determine which data captured by the image sensor array 3200 (which often may be multiple gigabytes or more of data per second) should be kept or stored or transmitted. In an embodiment, API access to certain functions may be limited. For example, a tiered system may allow a certain number of API calls to the MUVIA data per second, per minute, per hour, or per day. In an embodiment, a third party might pay fees or perform a registration that would allow more or less access to the MUVIA data. In an embodiment, the third party could host their application on a separate web site, and let that web site access the image sensor array 3200 and/or the array local processing module 3700 directly.
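
By way of a non-limiting illustration, the following Python sketch shows one way the tiered limitation of API calls described above might be realized; the tier names, quotas, and rolling one-hour window are hypothetical choices, not part of the disclosure.

    import time

    class TieredApiLimiter:
        # Allow a fixed number of MUVIA API calls per rolling hour, per tier.
        QUOTAS = {"free": 10, "registered": 100, "paid": 1000}  # calls per hour

        def __init__(self):
            self._calls = {}  # caller_id -> list of call timestamps

        def allow(self, caller_id: str, tier: str, now: float = None) -> bool:
            now = time.time() if now is None else now
            window_start = now - 3600.0
            # Keep only the calls that fall inside the current rolling window.
            history = [t for t in self._calls.get(caller_id, []) if t > window_start]
            if len(history) >= self.QUOTAS[tier]:
                self._calls[caller_id] = history
                return False  # quota exhausted for this hour
            history.append(now)
            self._calls[caller_id] = history
            return True

    limiter = TieredApiLimiter()
    print(limiter.allow("dev-123", "free"))  # True until 10 calls accrue in the hour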

[0094] Referring again to Fig. 1, Figs. 1-I, 1-J, 1-S, 1-T, 1-AC, 1-AD, 1-AM, and 1-AN, in an embodiment, show a MUVIA implementation that allows insertion of advertising (or other context-sensitive material) into the images displayed to the user.

[0095] Referring again to Fig. 1-I, in an embodiment, user device 5900 may include a user selection accepting module 5910. User selection accepting module 5910 may be configured to receive user input about what the user wants to see. For example, as shown in Fig. 1-I, the user selection accepting module 5910 may show an image from image sensor array 3200, and the user may "pan" and "zoom" the image using any known interface, including, but not limited to, keyboard, mouse, touch, haptic, augmented reality interface, voice command, and nonverbal motion commands (e.g., as part of a video game system interface, e.g., the Microsoft Kinect). It is noted, and as will be discussed in more detail herein, that the camera itself is not "zooming" or "panning," because the camera does not move. What is happening is that different pixels that are captured by the image sensor array 3200 are kept by the image sensor array 3200 and transmitted to the server 4000.

[0096] In an embodiment, the "user selection" facilitated by the user selection accepting module 5910 may not involve a user at all. For example, in an embodiment, e.g., in a security embodiment, the user selection may be handled completely by machine, and may include "select any portion of the image with movement," or "select any portion of the image in which a person is recognized," or "select any portion of the image in which a particular person, e.g., a person on the FBI most wanted list" is recognized.

[0097] Referring again to Fig. 1-I, in an embodiment, user device 5900 may include a user selection transmitting module 5920. The user selection transmitting module 5920 may take the user selection from user selection accepting module 5910, and transmit the selection to the server 4000. The transmission may include some pre-processing, for example, the user device 5900 may determine the size and parameters of the image prior to sending the request to the server 4000, or that processing may be handled by the server 4000. Following the thick-line arrow leftward from user selection transmitting module 5920 through to Fig. 1-K, the transmission goes to server 4000, as will be discussed herein. It is noted that the transmission to the server 4000 may also include data about the user device, for example, the screen resolution, the window size, the type of device, an identity of the user, a level of service the user has paid for (in embodiments in which such services are prioritized by the camera/server), other capabilities of the device, e.g., framerate, and the like.
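
By way of a non-limiting illustration, the following Python sketch shows one way a selection could be bundled with the kinds of device data paragraph [0097] lists; all field names are hypothetical and no particular wire format is implied by the disclosure.

    import json

    def build_selection_request(x, y, width, height, device):
        # Bundle a pan/zoom selection with device characteristics so the
        # server can size and prioritize the response appropriately.
        return json.dumps({
            "selection": {"x": x, "y": y, "width": width, "height": height},
            "device": {
                "screen_resolution": device.get("screen_resolution"),
                "window_size": device.get("window_size"),
                "device_type": device.get("device_type"),
                "user_id": device.get("user_id"),
                "service_level": device.get("service_level"),
                "framerate": device.get("framerate"),
            },
        })

    payload = build_selection_request(
        1200, 300, 640, 480,
        {"screen_resolution": "1920x1080", "window_size": "1280x720",
         "device_type": "smart_tv", "user_id": "u-42",
         "service_level": "premium", "framerate": 30})
    print(payload)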

[0098] Referring again to Fig. 1-I, Fig. 1-I also includes a selected image receiving module 5930 and a user selection presenting module 5940, which will be discussed in more detail herein, with respect to the dataflow of this embodiment.

[0099] Referring now to Fig. 1-T (graphically represented as "down" and "to the right" of Fig. 1-I), in an embodiment, a server 4000 may include a selected image reception module 4510. In an embodiment, selected image reception module 4510 of server 4000 may receive the selected target from the user device 5900. The selected target data may take various formats, e.g., it may be image data, it may be metadata that identifies the selected target, or it may be some other designation, e.g., an ID number, a tracking number, or a piece of information, like a license plate or a social security number. The selected target data may be an address or a physical description, or any other instantiation of data that can be used to identify something.

[00100] Referring again to Fig. 1-T, in an embodiment, server 4000 may include selected image pre-processing module 4520. Selected image pre-processing module 4520 may perform one or more tasks of pre-processing the image, some of which have been previously described with respect to other embodiments. In an embodiment, server 4000 also may include pixel information transmission to ISA module 4330 configured to transmit the image request data to the image sensor array 3200, as has been previously described.

[00101] Referring now to Figs. 1-AD and 1-AN, array local processing module 3700 may be connected to an image sensor array 3200 through a higher-bandwidth communication link 3505, e.g., a USB or PCI port. In an embodiment, array local processing module 3700 may include a request reception module 3710. Request reception module 3710 may receive the request for an image from the server 4000, as previously described. Request reception module 3710 may transmit the data to a pixel selection module 3720, which may receive the pixels captured from image sensor array 3200, and select the ones that are to be kept. That is, in an embodiment, through use of the (sometimes consolidated) user requests and the captured image, pixel selection module 3720 may select the pixels that have been specifically requested by the user, and mark those pixels for transmission back to the server.

[00102] After the pixels to be kept are identified, the other pixels that are not to be kept are removed, e.g., decimated at unused pixel decimation module 3730. In an embodiment, these pixels are simply discarded, e.g., not stored in a long-term memory, but rather removed to a digital trash 3717. In another embodiment, some or all of these pixels are stored in a local memory, e.g., local memory 3715. From here, these pixels may be transmitted to various locations at off-peak times, may be kept for image processing by the array local processing module 3700, or may be subject to other manipulations or processing separate from the user requests, as described in previous embodiments. In an embodiment, unused pixel decimation module 3730 may be used to transmit a lower-resolution version of more of the image (e.g., an entire scene, or more of the field of view surrounding the target) to the server 4000, so that the server 4000 may accurately determine which images are required to fulfill the request of the user.
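
By way of a non-limiting illustration, the following Python sketch shows one way the behavior of unused pixel decimation module 3730 might be approximated: the requested rectangle is kept, a coarse overview of the whole scene is produced, and the remainder is either retained or dropped. The function name, decimation factor, and return layout are hypothetical, and the NumPy library is assumed.

    import numpy as np

    def decimate_unused(scene: np.ndarray, keep, keep_local: bool = True):
        # keep is a (row0, row1, col0, col1) rectangle of requested pixels.
        r0, r1, c0, c1 = keep
        requested = scene[r0:r1, c0:c1].copy()
        overview = scene[::8, ::8].copy()  # low-resolution version of the full scene
        if keep_local:
            leftover = scene  # would be written to local storage (cf. local memory 3715)
        else:
            leftover = None   # discarded, i.e., never committed to long-term memory
        return requested, overview, leftover

    scene = np.arange(100 * 100, dtype=np.uint16).reshape(100, 100)
    req, ov, rest = decimate_unused(scene, keep=(10, 50, 20, 60), keep_local=False)
    print(req.shape, ov.shape, rest)  # (40, 40) (13, 13) None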

[00103] Referring again to Fig. 1-AN, the selected pixels then may be transmitted to the server 4000 using selected pixel transmission module 3740. Selected pixel transmission module 3740 may include any transmission equipment necessary, e.g., cellular radio, wireless adapter, and the like, depending on the format of communication. In an embodiment, only those pixels which have been requested are transmitted to the server via lower-bandwidth communication 3710. As with lower-bandwidth communication 3715, the term "lower-bandwidth communication" does not refer to a specific amount of bandwidth; it indicates only that the amount of bandwidth is relatively lower than that of higher-bandwidth communication 3505.

[00104] It is noted that more pixels than what are specifically requested by the user may be transmitted, in certain embodiments. For example, the array local processing module 3700 may send pixels that border the user's requested area, but are outside the user's requested area. In an embodiment, as will be discussed herein, those pixels may be sent at a different resolution or using a different kind of compression. In another embodiment, the additional pixels may merely be sent the same as the requested pixels. In still another embodiment, server 4000 may expand the user requested areas, so that array local processing module 3700 may send only the requested pixels, but the requested pixels cover more area than what the user originally requested. These additional pixels may be transmitted and "cached" by the server or local device, which may be used to decrease latency times, in a process that will be discussed more herein.

[00105] Referring now again to Fig. 1-T, in an embodiment, server 4000 may include received image post-processing module 4550. Received image post-processing module 4550 may receive the image data from the array local processing module 3700 (e.g., in the arrow coming "north" from Fig. 1-AN via Fig. 1-AD). The image may include the pixels that were requested from the image sensor array 3200.

[00106] In an embodiment, server 4000 also may include advertisement insertion module 4560. Advertisement insertion module 4560 may insert an advertisement into the received image. The advertisement may be based on one or more of the contents of the image, a characteristic of a user or the user device, or a setting of the advertisement server component 7700 (see, e.g., Fig. 1-AC, as will be discussed in more detail herein). The advertisement insertion module 4560 may place the advertisement into the image using any known image combination techniques, or, in another embodiment, the advertisement image may be in a separate layer, overlay, or any other data structure. In an embodiment, advertisement insertion module 4560 may include context-based advertisement insertion module 4562, which may be configured to add advertisements that are based on the context of the image. For example, if the image is a live street view of a department store, the context of the image may be used to show advertisements related to products sold by that department store, e.g., clothing, cosmetics, or power tools.
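
By way of a non-limiting illustration, the following Python sketch shows a pixel-compositing approach of the kind advertisement insertion module 4560 might use; the context mapping, function names, and placement coordinates are hypothetical, and the NumPy library is assumed. As the paragraph notes, an implementation might instead deliver the advertisement as a separate layer or overlay.

    import numpy as np

    # Hypothetical mapping from detected image context to an advertisement id;
    # paragraph [00106] gives the department-store example.
    CONTEXT_ADS = {"department_store": "ad_cosmetics", "stadium": "ad_jersey"}

    def insert_advertisement(image: np.ndarray, ad: np.ndarray, top: int, left: int) -> np.ndarray:
        # Composite an ad block into the received image at a fixed position.
        out = image.copy()
        h, w = ad.shape[:2]
        out[top:top + h, left:left + w] = ad
        return out

    frame = np.zeros((240, 320, 3), dtype=np.uint8)
    banner = np.full((30, 120, 3), 255, dtype=np.uint8)  # stand-in ad creative
    composited = insert_advertisement(frame, banner, top=200, left=100)
    print(CONTEXT_ADS["department_store"], composited.shape)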

[00107] Referring again to Fig. 1-T, server 4000 may include a received image with advertisement transmission to user device module 4570 configured to transmit the image to user device 5900. Received image with advertisement transmission to user device module 4570 may include components necessary to communicate with user device 5900 and may, in some embodiments, share components with one or more other modules of server 4000, e.g., a network interface, or a wireless antenna.

[00108] Referring again to Fig. 1-I, user device 5900 may include a selected image receiving module 5930, which may receive the pixels that were sent by the server 4000, and user selection presenting module 5940, which may display the requested pixels to the user, including the advertisement, e.g., by showing them on a screen of the device. In an embodiment, the display of the image may be carried out through the exemplary interface, which allows a cycle of user requests and new images to be shown as the user navigates through what is seen on the MUVIA, e.g., as shown in Fig. 1-I.

[00109] Referring now to Fig. 1-AC, Fig. 1-AC shows an advertisement server component 7700 configured to deliver advertisements to the server 4000 for insertion into the images prior to delivery to the user. In an embodiment, advertisement server component 7700 may be integrated with server 4000. In another embodiment, advertisement server component 7700 may be separate from server 4000 and may communicate with server 4000. In yet another embodiment, rather than interacting with server 4000, advertisement server component 7700 may interact directly with the user device 5900, and insert the advertisement into the image after the image has been received, or, in another embodiment, cause the user device to display the advertisement concurrently with the image (e.g., overlapping or adjacent to the image). In such embodiments, some of the described modules of server 4000 may be incorporated into user device 5900, but the functionality of those modules would operate similarly to that previously described.

[00110] Referring again to Fig. 1-AC, in an embodiment, advertisement server component 7700 may include a user data collection module 7705. User data collection module 7705 may collect data from user device 5900, and use that data to drive placement of advertisements (e.g., based on a user's browser history, e.g., to sports sites, and the like).

[00111] Referring again to Fig. 1-AC, in an embodiment, advertisement server component 7700 may include advertisement database 7715 which includes advertisements that are ready to be inserted into images. In an embodiment, these advertisements may be created on the fly.

[00112] Referring again to Fig. 1-AC, in an embodiment, advertisement server component 7700 may include an advertisement request reception module 7710, which receives a request to add an advertisement into the image (the receipt of the request is not shown to ease understanding of the drawings). In an embodiment, advertisement server component 7700 may include advertisement selection module 7720, which may include an image analysis module 7722 configured to analyze the image to determine the best context-based advertisement to place into the image. In an embodiment, that decision may be made by the server 4000, or partly at the server 4000 and partly at the advertisement server component 7700 (e.g., the advertisement server component may have a set of advertisements from which a particular one may be chosen). In an embodiment, various third parties may compensate the operators of advertisement server component 7700, server 4000, or any other component of the system, in order to receive preferential treatment.

[00113] Referring again to Fig. 1-AC, in an embodiment, advertisement server component 7700 may include a selected advertisement transmission module 7730, which may transmit the selected advertisement (or a set of selected advertisements) to the server 4000. In an embodiment, selected advertisement transmission module 7730 may send the complete image with the advertisement overlaid, e.g., in an implementation in which the advertisement server component 7700 also handles the placement of the advertisement. In an embodiment in which advertisement server component 7700 is integrated with server 4000, this module may be an internal transmission module, as may all such transmission/reception modules.

Exemplary Environment 200

[00114] Referring now to Fig. 2A, Fig. 2A illustrates an example environment 200 in which methods, systems, circuitry, articles of manufacture, and computer program products and architecture, in accordance with various embodiments, may be implemented by at least one requestor device 250. Image device 220 may include a number of individual sensors that capture data. Although commonly referred to throughout this application as "image data," this is merely shorthand for data that can be collected by the sensors. Other data, including video data, audio data, electromagnetic spectrum data (e.g., infrared, ultraviolet, radio, microwave data), thermal data, and the like, may be collected by the sensors.

[00115] Referring again to Fig. 2A, in an embodiment, image device 220 may operate in an environment 200. Specifically, in an embodiment, image device 220 may capture a scene 215. The scene 215 may be captured by a number of sensors 243. Sensors 243 may be grouped in an array, which in this context means they may be grouped in any pattern, on any plane, but have a fixed position relative to one another. Sensors 243 may capture the image in parts, which may be stitched back together by processor 222. There may be overlap in the images captured by sensors 243 of scene 215, which may be removed.
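
By way of a non-limiting illustration, the following Python sketch shows one crude way the stitching and overlap removal of paragraph [00115] might be approximated for a regular grid of equally sized sensor tiles; the function name, grid layout, and overlap handling are hypothetical, the NumPy library is assumed, and a real implementation would register and blend the seams rather than merely crop them.

    import numpy as np

    def stitch_grid(tiles, overlap: int) -> np.ndarray:
        # tiles is a list of rows, each a list of HxW arrays; adjacent tiles
        # share `overlap` pixels, cropped from the right/bottom edge of every
        # tile except the last in each row/column.
        rows = []
        for i, row in enumerate(tiles):
            cropped = []
            for j, tile in enumerate(row):
                t = tile
                if j < len(row) - 1:
                    t = t[:, :-overlap]   # drop duplicated columns
                if i < len(tiles) - 1:
                    t = t[:-overlap, :]   # drop duplicated rows
                cropped.append(t)
            rows.append(np.hstack(cropped))
        return np.vstack(rows)

    tile = np.ones((64, 64), dtype=np.uint8)
    scene = stitch_grid([[tile, tile], [tile, tile]], overlap=8)
    print(scene.shape)  # (120, 120)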

[00116] Upon capture of the scene in image device 220, in processes and systems that will be described in more detail herein, the requested pixels are selected. Specifically, pixels that have been identified by a remote user, by a server, by the local device, by another device, by a program written by an outside user with an API, by a component or other hardware or software in communication with the image device, and the like, are transmitted to a remote location via a communications network 240. The pixels that are to be transmitted may be illustrated in Fig. 2A as selected portion 255; however, this is a simplified expression meant for illustrative purposes only.

[00117] Referring again to Fig. 2A, in an embodiment, server device 230 may be any device or group of devices that is connected to a communication network. Although in some examples server device 230 is distant from image device 220, that is not required. Server device 230 may be "remote" from image device 220 in the sense that they are separate components; the term does not necessarily imply a specific distance. The communications network may be a local transmission component, e.g., a PCI bus. Server device 230 may include a request handling module 232 that handles requests for images from user devices, e.g., requestor devices 250 and 250B. Request handling module 232 also may handle other remote computers and/or users that want to take active control of the image device, e.g., through an API, or through more direct control.

[00118] Server device 230 also may include an image device management module 234, which may perform some of the processing to determine which of the captured pixels of image device 220 are kept. For example, image device management module 234 may do some pattern recognition, e.g., to recognize objects of interest in the scene, e.g., a particular football player, as shown in the example of Fig. 2A. In other embodiments, this processing may be handled at the image device 220 or at the requestor device 250. In an embodiment, server device 230 may limit a size of the selected portion based on a screen resolution of the requesting user device.

[00119] Server device 230 then may transmit the requested portions to the requestor devices, e.g., requestor device 250 and requestor device 250B. In another embodiment, the user device or devices may directly communicate with image device 220, cutting out server device 230 from the system.

[00120] In an embodiment, requestor device 250 and requestor device 250B are shown; however, user devices may be any electronic device or combination of devices, which may be located together or spread across multiple devices and/or locations. Requestor device 250 may be a server device, or may be a user-level device, e.g., including, but not limited to, a cellular phone, a network phone, a smartphone, a tablet, a music player, a walkie-talkie, a radio, an augmented reality device (e.g., augmented reality glasses and/or headphones), wearable electronics, e.g., watches, belts, or "smart" clothing, earphones, headphones, audio/visual equipment, media player, television, projection screen, flat screen, monitor, clock, appliance (e.g., microwave, convection oven, stove, refrigerator, freezer), a navigation system (e.g., a Global Positioning System ("GPS") system), a medical alert device, a remote control, a peripheral, an electronic safe, an electronic lock, an electronic security system, a video camera, a personal video recorder, a personal audio recorder, and the like. Requestor device 250 may include a device interface 243 which may allow the requestor device 250 to receive input and to output data to the client in sensory (e.g., visual or any other sense) form, and/or allow the requestor device 250 to receive data from the client, e.g., through touch, typing, or moving a pointing device (e.g., a mouse). Requestor device 250 may include a viewfinder or a viewport that allows a user to "look" through the lens of image device 220, either optically or digitally, regardless of whether the user device 250 is spatially close to the image device 220 or whether they are directly connected (e.g., requestor device 250 may be connected to image device 220 solely through server device 230).

[00121] Referring again to Fig. 2A, in various embodiments, the communication network 240 may include one or more of a local area network (LAN), a wide area network (WAN), a metropolitan area network (MAN), a wireless local area network (WLAN), a personal area network (PAN), a Worldwide Interoperability for Microwave Access (WiMAX) network, a public switched telephone network (PSTN), a general packet radio service (GPRS) network, a cellular network, and so forth. The communication network 240 may be wired, wireless, or a combination of wired and wireless networks. It is noted that "communication network" as it is used in this application refers to one or more communication networks, which may or may not interact with each other.

[00122] Referring now to Fig. 2B, Fig. 2B shows a more detailed version of requestor device 250, according to an embodiment. The requestor device 250 may include a device memory 245. In an embodiment, device memory 245 may include memory, random access memory ("RAM"), read only memory ("ROM"), flash memory, hard drives, disk-based media, disc-based media, magnetic storage, optical storage, volatile memory, nonvolatile memory, and any combination thereof. In an embodiment, device memory 245 may be separated from the device, e.g., available on a different device on a network, or over the air. For example, in a networked system, there may be more than one requestor device 250 whose device memories 245 may be located at a central server that may be a few feet away or located across an ocean. In an embodiment, device memory 245 may include one or more of one or more mass storage devices, read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), cache memory such as random access memory (RAM), flash memory, static random access memory (SRAM), dynamic random access memory (DRAM), and/or other types of memory devices. In an embodiment, memory 245 may be located at a single network site. In an embodiment, memory 245 may be located at multiple network sites, including sites that are distant from each other. In an embodiment, device memory 245 may include one or more of cached images 245A and previously retained image data 245B, as will be discussed in more detail further herein.

[00123] Referring again to Fig. 2B, in an embodiment, requestor device 250 may include an optional viewport 247, which may be used to view images captured by image device 220. This optional viewport 247 may be physical (e.g., glass) or electronic (e.g., an LCD screen), or may be at a distance from one or both of server device 230 and image device 220.

[00124] Referring again to Fig. 2B, Fig. 2B shows a more detailed description of requestor device 250. In an embodiment, requestor device 250 may include a processor 222. Processor 222 may include one or more microprocessors, Central Processing Units ("CPU"), Graphics Processing Units ("GPU"), Physics Processing Units, Digital Signal Processors, Network Processors, Floating Point Processors, and the like. In an embodiment, processor 222 may be a server. In an embodiment, processor 222 may be a distributed-core processor. Although processor 222 is illustrated as a single processor that is part of a single requestor device 250, processor 222 may be multiple processors distributed over one or many requestor devices 250, which may or may not be configured to operate together.

[00125] Processor 222 is illustrated as being configured to execute computer readable instructions in order to execute one or more operations described above, and as illustrated in Fig. 10, Figs. 11A-11F, Figs. 12A-12G, Figs. 13A-13C, and Figs. 14A-14C. In an embodiment, processor 222 is designed to be configured to operate as processing module 251, which may include one or more of an input of a request for particular image data accepting module 252, wherein the particular image data is part of a scene that is larger than a particular image associated with the particular image data, an inputted request for the particular image data transmitting module 254 configured to transmit the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene and to retain a subset of the scene that includes the request for the particular image data of the scene, a particular image data from the image sensor array exclusive receiving module 256 configured to receive only the particular image data from the image sensor array, wherein the particular image represents a subset of the scene and wherein a size characteristic of the particular image data is at least partially based on a feature of a requestor, and a received particular image data presenting module 258 configured to present the received particular image data to the requestor.

Exemplary Environment 300A

[00126] Referring now to Fig. 3A, Fig. 3A shows an exemplary embodiment of an image device, e.g., image device 220A operating in an environment 300A. In an embodiment, image device 220A may include an array 310 of image sensors 312 as shown in Fig. 3A. The array of image sensors in this image is shown in a rectangular grid; however, this is merely exemplary to show that image sensors 312 may be arranged in any format. In an embodiment, each image sensor 312 may capture a portion of scene 315, which portions are then processed by processor 350. Although processor 350 is shown as local to image device 220A, it may be remote to image device 220A, with a sufficiently high-bandwidth connection to receive all of the data from the array of image sensors (e.g., multiple USB 3.0 lines). In an embodiment, the selected portions from the scene (e.g., the portions shown in the shaded box, e.g., selected portion 315) may be transmitted to a remote device 330, which may be a user device or a server device, as previously described. In an embodiment, the pixels that are not transmitted to remote device 330 may be stored in a local memory 340 or discarded.

Exemplary Environment 300B

[00127] Referring now to Fig. 3B, Fig. 3B shows an exemplary embodiment of an image device, e.g., image device 320B operating in an environment 300B. In an embodiment, image device 320B may include an image sensor array 320B, e.g., an array of image sensors, which, in this example, are arranged around a polygon to increase the field of view that can be captured, that is, they can capture scene 315B, illustrated in Fig. 3B as a natural landmark that can be viewed in a virtual tourism setting. Processor 322 receives the scene 315B and selects the pixels from the scene 315B that have been requested by a user, e.g., requested portions 317B. Requested portions 317B may include an overlapping area 324B that is only transmitted once. In an embodiment, the requested portions 317B may be transmitted to a remote location via communications network 240.

Exemplary Environment 300C

[00128] Referring now to Fig. 3C, Fig. 3C shows an exemplary embodiment of an image device, e.g., image device 320C operating in an environment 300C. In an embodiment, image device 320C may capture a scene, a part of which, e.g., scene portion 315C, is shown, as previously described in other embodiments (e.g., some parts of image device 320C are omitted for simplicity of drawing). In an embodiment, scene portion 315C may show a street-level view of a busy road, e.g., for a virtual tourism or virtual reality simulator. In an embodiment, different portions of the scene portion 315C may be transmitted at different resolutions or at different times. For example, in an embodiment, a central part of the scene portion 315C, e.g., portion 316, which may correspond to what a user's eyes would see, is transmitted at a first resolution, or "full" resolution relative to what the user's device can handle. In an embodiment, an outer border outside portion 316, e.g., portion 314, may be transmitted at a second resolution, which may be lower, e.g., lower than the first resolution. In another embodiment, a further outside portion, e.g., portion 312, may be discarded, transmitted at a still lower rate, or transmitted asynchronously.

Exemplary Environment 400A

[00129] Referring now to Fig. 4A, Fig. 4A shows an exemplary embodiment of a server device, e.g., server device 430A. In an embodiment, an image device, e.g., image device 420A may capture a scene 415. Scene 415 may be stored in local memory 440. The portions of scene 415 that are requested by the server device 430A may be transmitted (e.g., through requested image transfer 465) to requested pixel reception module 432 of server device 430A. In an embodiment, the requested pixels transmitted to requested pixel reception module 432 may correspond to images that were requested by various users and/or devices (not shown) in communication with server device 430A.

[00130] Referring again to Fig. 4A, in an embodiment, pixels not transmitted from local memory 440 of image device 420A may be stored in untransmitted pixel temporary storage 440B. These untransmitted pixels may be stored and transmitted to the server device 430A at a later time, e.g., an off-peak time for requests for images of scene 415. For example, in an embodiment, the pixels stored in untransmitted pixel temporary storage 440B may be transmitted to the unrequested pixel reception module 434 of server device 430A at night, or when other users are disconnected from the system, or when the available bandwidth to transfer pixels between image device 420A and server device 430A reaches a certain threshold value.
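
By way of a non-limiting illustration, the following Python sketch shows one possible gating condition for shipping the pixels held in untransmitted pixel temporary storage 440B; paragraph [00130] names the general conditions (nighttime, no connected users, a bandwidth threshold), while the specific hours, counts, and threshold here are invented for illustration.

    import datetime

    def should_upload_leftovers(now: datetime.datetime,
                                active_users: int,
                                free_bandwidth_mbps: float) -> bool:
        night = now.hour >= 23 or now.hour < 6   # off-peak time window (hypothetical)
        idle = active_users == 0                 # no users connected to the system
        spare = free_bandwidth_mbps >= 100.0     # bandwidth threshold reached (hypothetical)
        return night or idle or spare

    print(should_upload_leftovers(datetime.datetime(2015, 11, 18, 2, 30), 3, 20.0))  # True (night)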

[00131] In an embodiment, server device 430A may analyze the pixels received by unrequested pixel reception module 434, for example, to provide a repository of static images from the scene 415 that do not need to be transmitted from the image device 420A each time certain portions of scene 415 are requested.

Exemplary Environment 400B

[00132] Referring now to Fig. 4B, Fig. 4B shows an exemplary embodiment of a server device, e.g., server device 430B. In an embodiment, an image device, e.g., image device 420B may capture a scene 415B. Scene 415B may be stored in local memory 440B. In an embodiment, image device 420B may capture the same scene 415B multiple times. In an embodiment, scene 415B may include an unchanged area 416A, which is a portion of the image that has not changed since the last time the scene 415B was captured by the image device 420B. In an embodiment, scene 415B also may include a changed area 416B, which may be a portion of the image that has changed since the last time the scene 415B was captured by the image device 420B. Although changed area 416B is illustrated as polygonal and contiguous in Fig. 4B, this is merely for illustrative purposes, and changed area 416B may be, in some embodiments, nonpolygonal and/or noncontiguous.

[00133] In an embodiment, image device 420B, upon capturing scene 415B, may use an image previously stored in local memory 440B to compare the previous image, e.g., previous image 441B, to the current image, e.g., current image 442B, and may determine which areas of the scene 415B have changed. The changed areas may be transmitted to server device 430B, e.g., to changed area reception module 432B. This may occur through a changed area transmission 465, as indicated in Fig. 4B.
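
By way of a non-limiting illustration, the following Python sketch shows one simple way the comparison of previous image 441B to current image 442B might identify a changed area to transmit; the function name, difference threshold, and single-bounding-box simplification are hypothetical (the disclosure notes the changed area may be noncontiguous, so a real system might send a mask or several boxes), and the NumPy library is assumed.

    import numpy as np

    def changed_bounding_box(previous: np.ndarray, current: np.ndarray, threshold: int = 10):
        # Returns (row0, row1, col0, col1) of pixels that changed, or None if
        # nothing changed; only these pixels would be sent (cf. changed area 416B).
        diff = np.abs(current.astype(np.int32) - previous.astype(np.int32)) > threshold
        if not diff.any():
            return None
        rows = np.any(diff, axis=1)
        cols = np.any(diff, axis=0)
        r0, r1 = np.argmax(rows), len(rows) - np.argmax(rows[::-1])
        c0, c1 = np.argmax(cols), len(cols) - np.argmax(cols[::-1])
        return int(r0), int(r1), int(c0), int(c1)

    prev = np.zeros((100, 100), dtype=np.uint8)
    curr = prev.copy()
    curr[40:60, 70:90] = 200  # simulate motion in one corner of the scene
    print(changed_bounding_box(prev, curr))  # (40, 60, 70, 90)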

[00134] Referring again to Fig. 4B, in an embodiment, requestor device 450B receives the changed area at changed area reception module 432B. Requestor device 450B also may include an unchanged area addition module 434B, which adds the unchanged areas that were previously stored in a memory of server device 430B or requestor device 450B (not shown) from a previous transmission from image device 420B.

Exemplary Environment 500A

[00135] Referring now to Fig. 5A, Fig. 5A shows an exemplary embodiment of a server device, e.g., server device 530A. In an embodiment, an image device 520A may capture a scene 515 through use of an image sensor array 540, as previously described. The image may be temporarily stored in a local memory 540 (as pictured), or may be partially or wholly stored in a local memory before transmission to a server device 530A. In an embodiment, server device 530A may include an image data reception module 532A. Image data reception module 532A may receive the image from image device 520A. In an embodiment, server device 530A may include data addition module 534A, which may add additional data to the received image data. In an embodiment, the additional data may be visible or invisible, e.g., pixel data or metadata, for example. In an embodiment, the additional data may be advertising data. In an embodiment, the additional data may be context-dependent upon the image data, for example, if the image data is of a football player, the additional data may be statistics about that player, or an advertisement for an online shop that sells that player's jersey.

[00136] In an embodiment, the additional data may be stored in a memory of server device 530A (not shown). In another embodiment, the additional data may be retrieved from an advertising server or another data server. In an embodiment, the additional data may be tailored to one or more characteristics of the user or the user device, e.g., the user may have a setting that labels each player displayed on the screen with that player's last name. Referring again to Fig. 5A, in an embodiment, server device 530A may include a modified data transmission module 536A, which may receive the modified data from data addition module 534A, and transmit the modified data to a user device, e.g., a user device that requested the image data, e.g., user device 550A.


Exemplary Environment 500B

[00138] Referring now to Fig. 5B, Fig. 5B shows an exemplary embodiment of a server device, e.g., server device 530B. In an embodiment, multiple user devices, e.g., user device 502A, user device 502B, and user device 502C, each may send a request for image data from a scene, e.g., scene 515B. Each user device may send a request to a server device, e.g., server device 530B. Server device 530B may consolidate the requests, which may be for various resolutions, shapes, sizes, and other features, into a single combined request 570. Overlapping portions of the request, e.g., as shown in overlapping area 572, may be combined.
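
By way of a non-limiting illustration, the following Python sketch shows one way multiple rectangle requests might be consolidated into a single combined request and later carved back apart, as described for combined request 570; merging into one bounding rectangle is a hypothetical simplification (a finer-grained implementation might maintain a set of disjoint rectangles), and the NumPy library is assumed.

    import numpy as np

    def combine_requests(requests):
        # Each request is (row0, row1, col0, col1); overlapping areas
        # (cf. overlapping area 572) are then fetched only once.
        r0 = min(r[0] for r in requests)
        r1 = max(r[1] for r in requests)
        c0 = min(r[2] for r in requests)
        c1 = max(r[3] for r in requests)
        return (r0, r1, c0, c1)

    def split_responses(combined_pixels, combined_box, requests):
        # Reverse the combination: carve each user's rectangle back out.
        R0, _, C0, _ = combined_box
        return [combined_pixels[r0 - R0:r1 - R0, c0 - C0:c1 - C0]
                for (r0, r1, c0, c1) in requests]

    reqs = [(0, 50, 0, 80), (30, 90, 60, 120), (10, 40, 100, 140)]
    box = combine_requests(reqs)  # (0, 90, 0, 140)
    pixels = np.zeros((box[1] - box[0], box[3] - box[2]), dtype=np.uint8)
    for view in split_responses(pixels, box, reqs):
        print(view.shape)  # each user's originally requested shape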

[00139] In an embodiment, server device 530B transmits the combined request 570 to the image device 520B. In an embodiment, image device 520B uses the combined request 570 to designate selected pixels 574, which then may be transmitted back to the server device 530B, where the process of combining the requests may be reversed, and each user device 502A, 502B, and 502C may receive the requested image. This process will be discussed in more detail further herein.

Exemplary Environment 500C

[00140] Referring now to Fig. 5C, Fig. 5C shows an exemplary embodiment of a requestor device, e.g., requestor device 550C. In an embodiment, requestor device 550C receives a particular image 580C, e.g., from a server device (not pictured) or an image sensor array (not pictured). The requestor device 550C has previously requested the received particular image 580C, which is larger than the field of view 581C, e.g., the area that the user can currently view. The particular image 580C also includes image data that is the anticipated next field of view 582C. The anticipated next field of view 582C may be a portion of the image that the requestor device 550C may anticipate will be requested next. For example, in an embodiment in which requestor device 550C is a virtual reality helmet, the user's head may be turning in that direction. In another example, some characteristic of the image, e.g., its relation to the current field of view, may cause the requestor device 550C to select that portion of the scene as the anticipated next field of view 582C. For example, if the scene is a football game, and the user's designated favorite player has just walked onto the field, the requestor device 550C may detect that occurrence and flag those pixels as the anticipated next field of view 582C. Although the field of view 581C and the anticipated next field of view 582C are shown as contiguous in Fig. 5C, this is not necessary or required.

[00141] Referring again to Fig. 5C, Fig. 5C shows that, in an embodiment, the received particular image 580C also may include a border field of view 583C. The border field of view 583C may be the parts of the image that border the field of view 581C, and thus they may be cached for quick retrieval. In an embodiment, the requestor device 550C is configured to present the portions of the received particular image that are in the field of view 581C, and to cache one or more portions of the border field of view 583C and/or the anticipated next field of view 582C. In this manner, the requestor device 550C may supply requested particular images to the requestor without waiting on transmissions from a remote server or an image sensor array. In an embodiment, one or more of the anticipated next field of view 582C and the border field of view 583C are stored at a lower resolution and/or displayed and/or received at a lower resolution until higher-resolution images can be obtained from a remote server or from the image sensor array.
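
By way of a non-limiting illustration, the following Python sketch shows one way the caching behavior of paragraph [00141] might be approximated: a requested view is served locally when it falls entirely inside the cached rectangle, and otherwise falls back to a remote fetch. The class, callback, and containment test are hypothetical simplifications.

    class FieldOfViewCache:
        def __init__(self, cached_box, fetch_remote):
            self.cached_box = cached_box        # (row0, row1, col0, col1)
            self.fetch_remote = fetch_remote    # callable invoked on cache misses

        def get_view(self, view_box):
            r0, r1, c0, c1 = view_box
            R0, R1, C0, C1 = self.cached_box
            if R0 <= r0 and r1 <= R1 and C0 <= c0 and c1 <= C1:
                return "served from cache"      # no round trip to server or array
            return self.fetch_remote(view_box)  # miss: request from the server

    cache = FieldOfViewCache((0, 1000, 0, 2000),
                             fetch_remote=lambda box: f"fetched {box} remotely")
    print(cache.get_view((100, 500, 300, 900)))   # hit: inside the cached rectangle
    print(cache.get_view((900, 1200, 0, 500)))    # miss: extends past the cache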

Exemplary Environment 500D

[00142] Referring now to Fig. 5D, Fig. 5D shows a requestor device 550D according to various embodiments. In an embodiment, requestor device 550D receives a request for a particular image from a requestor (not shown). The initial request for the particular image 592 of the scene portion 515D is shown in Fig. 5D. The requestor device 550D then may expand the request to include one or more of the expanded request 594 and the further expanded request 596, which may border the initially-requested particular image 592. In an embodiment, the request for the particular image 592 may be at a first resolution, the expanded request 594 may be at a second resolution, which may be less than the first, and the further expanded request 596, if present, may be at a third resolution, which may be less than or equal to the second resolution. It is noted that these resolutions may mirror the resolutions depicted in Fig. 3C, but this is merely for illustrative/exemplary purposes and is not required.
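
By way of a non-limiting illustration, the following Python sketch shows one way the three nested requests of Fig. 5D might be built; the margins, clamping behavior, and resolution labels are hypothetical choices, not part of the disclosure.

    def expand_request(box, margin, scene_height, scene_width):
        # Grow a requested rectangle by `margin` pixels on every side,
        # clamped to the scene bounds.
        r0, r1, c0, c1 = box
        return (max(0, r0 - margin), min(scene_height, r1 + margin),
                max(0, c0 - margin), min(scene_width, c1 + margin))

    def build_tiered_request(box, scene_height, scene_width):
        expanded = expand_request(box, 100, scene_height, scene_width)
        further = expand_request(box, 250, scene_height, scene_width)
        return [
            {"box": box, "resolution": "first (full)"},         # cf. initial request 592
            {"box": expanded, "resolution": "second (lower)"},  # cf. expanded request 594
            {"box": further, "resolution": "third (lowest)"},   # cf. further expanded 596
        ]

    for tier in build_tiered_request((400, 700, 800, 1300), 2000, 4000):
        print(tier)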

Exemplary Embodiments of the Various Modules of Portions of Processor 251

[00143] Figs. 6-9 illustrate exemplary embodiments of the various modules that form portions of processing module 251. In an embodiment, the modules represent hardware, either hardware that is hard-coded, e.g., as in an application-specific integrated circuit ("ASIC"), or hardware that is physically reconfigured through gate activation described by computer instructions, e.g., as in a central processing unit.

[00144] Referring now to Fig. 6, Fig. 6 illustrates an exemplary implementation of the input of a request for particular image data accepting module 252. As illustrated in Fig. 6, the input of a request for particular image data accepting module may include one or more sub-logic modules in various alternative implementations and embodiments. For example, as shown in Fig. 6, e.g., Fig. 6A, in an embodiment, module 252 may include one or more of input of a request for particular image data accepting module, wherein the particular image data is part of a scene that is larger than a particular image associated with the particular image data 602, input of a request for particular image data accepting module configured to accept input from an automated component for the request for particular image data 604, and input of a request from an image object tracking algorithm for particular image data accepting module, wherein the particular image data is part of a scene that is larger than a particular image associated with the particular image data and the requested particular image data includes a tracked image object present in the scene 606. In an embodiment, module 606 may include one or more of image object tracking algorithm for particular image data accepting through a requestor device interface module, wherein the particular image data is part of a scene that is larger than a particular image associated with the particular image data and the requested particular image data includes a tracked image object present in the scene 608 and image object tracking algorithm for particular image data accepting through an interface of an Internet-enabled television device interface module, wherein the particular image data is part of a scene that is larger than a particular image associated with the particular image data and the requested particular image data includes a tracked image object that is a football player present in the scene that is a football field 610.

[00145] Referring again to Fig. 6, e.g., Fig. 6B, as described above, in an embodiment, module 252 may include input of a request for particular image data accepting through a requestor device interface module, wherein the particular image data is part of a scene that is larger than a particular image associated with the particular image data 612. In an embodiment, module 612 may include input of a request for particular image data accepting through a requestor device interface module, wherein the requestor device interface is configured to display at least a portion of the scene 614. In an embodiment, module 614 may include one or more of input of a request for particular image data accepting through a requestor device interface module, wherein the requestor device interface is configured to display at least a portion of the scene in a viewfinder 616, input of a request for particular image data that is at least partially based on a view of the scene accepting through a requestor device interface module, wherein the requestor device interface is configured to display at least a portion of the scene 618, and input of a request for particular image data accepting through a specific requestor device interface module 620.

[00146] Referring again to Fig. 6, e.g., Fig. 6C, in an embodiment, module 252 may include one or more of input of the request for particular image data that is part of the scene that contains more pixels than the particular image associated with the particular image data accepting module 622, input of the request for particular image data that is part of the scene that is a larger spatial area than the particular image associated with the particular image data accepting module 624, input of the request for particular image data that is part of the scene that contains more data than the particular image associated with the particular image data accepting module 626, and input of a request for particular image data accepting module, wherein the particular image data is part of a scene that is a representation of the image data collected by the array of more than one image sensor 628. In an embodiment, module 628 may include one or more of input of a request for particular image data accepting module, wherein the particular image data is part of a scene that is a sampling of the image data collected by the array of more than one image sensor 630, input of a request for particular image data accepting module, wherein the particular image data is part of a scene that is a subset of the image data collected by the array of more than one image sensor 632, and input of a request for particular image data accepting module, wherein the particular image data is part of a scene that is a lower resolution expression of the image data collected by the array of more than one image sensor 634.

[00147] Referring again to Fig. 6, e.g., Fig. 6D, in an embodiment, module 252 may include one or more of input of a request for particular image data accepting module, wherein the particular image data is part of a scene that is an animal oasis 636, input of a request for particular image data accepting module, wherein the particular image data is part of a scene that is a street view of an area 638, input of a request for particular image data accepting module, wherein the particular image data is part of a scene that is a tourist destination available for virtual tourism 640, input of a request for particular image data that is a portion of the scene accepting module, wherein the particular image data is part of a scene that is an interior of a commercial retail property 642, and input of a request for particular image data that is a portion of the scene accepting module 644. In an embodiment, module 644 may include one or more of input of a request for particular image data that includes a particular football player that is a portion of the scene that is a football field accepting module 646 and input of a request for particular image data that includes a license plate of a vehicle that is a portion of the scene that is a representation of a highway bridge accepting module 648.

[00148] Referring again to Fig. 6, e.g., Fig. 6E, in an embodiment, module 252 may include one or more of input of a request for particular image video data accepting module, wherein the particular image video data is part of the scene that is larger than at least the particular image associated with the particular image video data 650, input of a request for particular image audio data accepting module, wherein the particular image audio data is part of the scene that is larger than at least the particular image associated with the particular image audio data 652, and input of a request for particular image data accepting through an audio interface module, wherein the particular image data is part of the scene that is larger than the particular image associated with the particular image data 654. In an embodiment, module 654 may include input of a request for particular image data accepting through a microphone audio interface module, wherein the particular image data is part of the scene that is larger than the particular image associated with the particular image data 656.

[00149] Referring again to Fig. 6, e.g., Fig. 6F, in an embodiment, module 252 may include input of a request for particular image data accepting from the requestor module 658. In an embodiment, module 658 may include one or more of input of a request for particular image data accepting from the requestor module, wherein the requestor is a client operating a device 660 and input of a request for particular image data accepting from a requestor device module, wherein the requestor is a device 664. In an embodiment, module 660 may include input of a request for particular image data accepting from the requestor module, wherein the requestor is a user operating a smart television with a remote control 662. In an embodiment, module 664 may include one or more of input of a request for particular image data accepting from a requestor device component module, wherein the requestor is a component of a device 666, input of a request for particular image data accepting from a requestor device component module, wherein the requestor is a device configured to carry out a request subroutine 668, and input of a request for particular image data accepting at the requestor device module, wherein the requestor is the requestor device that is executing a separate subroutine 670.

[00150] Referring now to Fig. 7, Fig. 7 illustrates an exemplary implementation of inputted request for the particular image data transmitting module 254. As illustrated in Fig. 7, the inputted request for the particular image data transmitting module 254 may include one or more sub-logic modules in various alternative implementations and embodiments. For example, as shown in Fig. 7, e.g., Fig. 7A, in an embodiment, module 254 may include one or more of request for particular image data transmitting to an image sensor array module configured to transmit the request to the image sensor array that includes multiple connected image sensors and that is configured to capture the scene and retain the subset of the scene that includes the request for the particular image of the scene 702, request for particular image data transmitting to the image sensor array that includes two inline image sensors angled toward each other module configured to transmit the request to the image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data 704, request for particular image data transmitting to the image sensor array that includes a pattern of image sensors arranged in a grid module configured to transmit the request to the image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data 706, and request for particular image data transmitting to the image sensor array that includes a pattern of image sensors arranged in a line module configured to transmit the request to the image sensor array that has a field of view greater than one hundred twenty degrees and that is configured to capture the scene that is larger than the requested particular image data 708.

[00151] Referring again to Fig. 7, e.g., Fig. 7B, in an embodiment, module 254 may include request for particular image data transmitting to the image sensor array module configured to transmit the request to the image sensor array that includes more than one image sensor and that is configured to capture the scene that represents more image data than the requested particular image data 710. In an embodiment, module 710 may include one or more of request for particular image data transmitting to the image sensor array module configured to transmit the request to the image sensor array that includes more than one image sensor and that is configured to capture the scene that represents ten times more image data than the requested particular image data 712 and request for particular image data transmitting to the image sensor array module configured to transmit the request to the image sensor array that includes more than one image sensor and that is configured to capture the scene that represents more than one hundred times more image data than the requested particular image data 714.

[00152] Referring again to Fig. 7, e.g., Fig. 7C, in an embodiment, module 254 may include request for particular image data transmitting to a remote server deployed to relay the request to the image sensor array module configured to transmit the request to the remote server that is configured to package the request for particular image data and relay the request for particular image data to the image sensor array 716. In an embodiment, module 716 may include one or more of request for particular image data transmitting to a remote server deployed to relay the request to the image sensor array module configured to transmit the request to the remote server that is configured to package the request for particular image data and relay the request for particular image data along with one or more other image data requests to the image sensor array 718 and request for particular image data transmitting to a remote server deployed to relay the request to the image sensor array module configured to transmit the request to the remote server that is configured to package multiple requests that include the request for particular image data and relay the package of multiple requests to the image sensor array 720. In an embodiment, module 720 may include request for particular image data transmitting to a remote server deployed to relay the request to the image sensor array module configured to transmit the request to the remote server that is configured to package multiple requests that include the request for particular image data and relay the package of multiple requests to the image sensor array 722. In an embodiment, module 722 may include request for particular image data transmitting to a remote server deployed to relay the request to the image sensor array module configured to transmit the request to the remote server that is configured to combine multiple requests that include the request for particular image data and transmit the combined multiple requests as a single combined request for image data to the image sensor array 724.

[00153] Referring again to Fig. 7, e.g., Fig. 7D, in an embodiment, module 254 may include one or more of request for particular image data modifying into updated request for particular image data module 726 and request for updated particular image data transmitting to the image sensor array module 728. In an embodiment, module 726 may include request for particular image data modifying into updated request for particular image data that identifies a portion of the image data as update-targeted module 730. In an embodiment, module 730 may include request for particular image data modifying into updated request for particular image data that identifies a portion of the image data as update-targeted module wherein the request for particular image data is a request for an image of an eagle that circles an animal oasis and the updated request for particular image data identifies a twenty foot spatial radius around the eagle as the portion of the image data that is update-targeted 732. In an embodiment, module 732 may include request for particular image data modifying into updated request for particular image data that identifies a portion of the image data as update-targeted module wherein the request for particular image data is a request for an image of an eagle that circles an animal oasis and the updated request for particular image data identifies a twenty foot spatial radius around the eagle as the portion of the image data that is update-targeted based on an algorithm that determined that portion of the image data as likely to have changed since a previous reception of image data 734.

[00154] Referring again to Fig. 7, e.g., Fig. 7E, in an embodiment, module 254 may include one or more of module 726 and module 728, as previously described. In an embodiment, module 726 may include request for particular image data modifying into updated request for particular image data based on one or more previously received images module 736. In an embodiment, module 736 may include one or more of particular image data request to previous image data that contains one or more previously received images determined to be similar to the previous image data comparing module 738 and particular image data request modifying based on compared previous image data module 740. In an embodiment, module 738 may include particular image data request to previous image data that contains one or more previously received images determined to be similar to the previous image data comparing to identify an update-targeted portion of the particular image data module 742. In an embodiment, module 742 may include one or more of first previously received image data with second previously received image data and request for particular image data delta determining module 744 and particular image data request portion that corresponds to determined delta identifying module 746.

[00155] Referring again to Fig. 7, e.g., Fig. 7F, in an embodiment, module 254 may include one or more of expanded request for particular image data generating module 748 and expanded request for particular image data transmitting to the image sensor array module configured to transmit the request to the image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data 750. In an embodiment, module 748 may include expanded request for particular image data that includes the request for particular image data and border image data that borders the particular image data generating module 752. In an embodiment, module 752 may include one or more of expanded request for particular image data that includes the request for particular image data and border image data that borders the particular image data on all four sides generating module 754, projected next side image data that is image data corresponding to an image that borders the particular image of the particular image data determining module 756, and expanded request for particular image data that includes the request for particular image data and next side image data generating module 758. In an embodiment, module 756 may include projected next side image data that is image data corresponding to an image that borders the particular image of the particular image data determining at least partially based on a detected motion of the device associated with the requestor module 760. In an embodiment, module 760 may include projected next side image data that is image data corresponding to an image that borders the particular image of the particular image data determining at least partially based on a detected head turn of the requestor that wears the device associated with the requestor module 762.
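One non-limiting way to realize the projection of modules 756 through 762 is to map detected head motion of the requestor to the border most likely to be needed next; in this sketch the angular-rate inputs and the dead-zone value are assumptions, not taken from the text:

    def predict_next_side(yaw_rate_deg_s, pitch_rate_deg_s, dead_zone=5.0):
        """From detected head motion of a requestor wearing a device, guess
        which border of the current particular image will be needed next so
        the next side image data can be requested ahead of time."""
        if abs(yaw_rate_deg_s) >= abs(pitch_rate_deg_s):
            if yaw_rate_deg_s > dead_zone:
                return "right"
            if yaw_rate_deg_s < -dead_zone:
                return "left"
        else:
            if pitch_rate_deg_s > dead_zone:
                return "up"
            if pitch_rate_deg_s < -dead_zone:
                return "down"
        return None  # head is effectively still; no prefetch needed

    print(predict_next_side(yaw_rate_deg_s=30.0, pitch_rate_deg_s=2.0))  # right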

[00156] Referring again to Fig. 7, e.g., Fig. 7G, in an embodiment, module 254 may include module 748, module 750, and module 752, as previously described. In an embodiment, module 752 may include expanded request for particular image data that includes the request for particular image data, first border image data that borders the particular image data, and second border image data that borders the first border image data generating module 764. In an embodiment, module 764 may include expanded request for particular image data that includes the request for particular image data, first border image data that borders the particular image data, and second border image data that borders the first border image data generating module configured to generate the expanded request for the particular image data that includes the request for the particular image data at a first resolution, the request for the first border image data at a second resolution less than the first resolution, and the request for the second border image data at a third resolution less than or equal to the second resolution 766.
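
Module 766's three-resolution structure can be sketched as follows; the ring width and the particular scale factors (1.0, 0.5, 0.25) are illustrative choices that merely satisfy the stated ordering (second resolution less than the first, third less than or equal to the second), and clamping to the scene bounds is omitted:

    def expanded_request(roi, ring=128):
        """Wrap the requested region (left, top, right, bottom) in two border
        rings: center at full resolution, first border lower, second border
        lower still or equal to the first border's resolution."""
        l, t, r, b = roi
        first = (l - ring, t - ring, r + ring, b + ring)
        second = (l - 2 * ring, t - 2 * ring, r + 2 * ring, b + 2 * ring)
        return [
            {"region": roi,    "scale": 1.0},   # particular image data, first resolution
            {"region": first,  "scale": 0.5},   # first border image data, second resolution
            {"region": second, "scale": 0.25},  # second border image data, third resolution
        ]

    for part in expanded_request((1000, 1000, 1640, 1480)):
        print(part)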

[00157] Referring now to Fig. 8, Fig. 8 illustrates an exemplary implementation of particular image data from the image sensor array exclusive receiving module 256. As illustrated in Fig. 8A, the particular image data from the image sensor array exclusive receiving module 256 may include one or more sub-logic modules in various alternative implementations and embodiments. For example, as shown in Fig. 8, e.g., Fig. 8A, in an embodiment, module 256 may include one or more of particular image data from the image sensor array exclusive receiving module configured to receive only the particular image data that represents fewer pixels than the scene from the image sensor array 802, particular image data from the image sensor array exclusive receiving module configured to receive only the particular image data that represents a smaller geographic area than the scene from the image sensor array 804, particular image data from the image sensor array exclusive receiving from a remote server module, wherein the remote server received the portion of the scene that included the request for the particular image data and a second request for second particular image data that is at least partially different than the particular image data 806, particular image data from the image sensor array exclusive receiving from a remote server module, wherein the image sensor array discarded portions of the scene other than the particular image data 808, particular image data from the image sensor array exclusive receiving from a remote server module, wherein portions of the scene other than the particular image data are stored at the image sensor array 810, and particular image data from the image sensor array exclusive receiving module configured to receive only the particular image data from a remote server deployed to communicate with the image sensor array, wherein a first portion of the scene data other than the particular image data is stored at the image sensor array and a second portion of the scene data other than the particular image data is stored at the remote server 812.

[00158] Referring again to Fig. 8, e.g., Fig. 8B, in an embodiment, module 256 may include particular image data from the image sensor array exclusive receiving module configured to receive only the particular image data from the image sensor array, wherein the particular image represents a subset of the scene and wherein a size characteristic of the particular image data is at least partially based on a feature of a requestor device that is associated with the requestor 814. Module 814 may include one or more of particular image data from the image sensor array exclusive receiving module configured to receive only the particular image data from the image sensor array, wherein the size characteristic of the particular image data is at least partially based on a feature of a requestor device that is deployed to store data about the requestor 816, particular image data from the image sensor array exclusive receiving module configured to receive only the particular image data from the image sensor array, wherein the size characteristic of the particular image data is at least partially based on a bandwidth available to the requestor device 818, and particular image data from the image sensor array exclusive receiving module configured to receive only the particular image data from the image sensor array, wherein the size characteristic of the particular image data is at least partially based on a bandwidth between the requestor device and a remote server that communicates with the image sensor array 820.

[00159] Referring again to Fig. 8, e.g., Fig. 8C, in an embodiment, module 256 may include one or more of particular image data from the image sensor array exclusive receiving module configured to receive only the particular image data from the image sensor array, wherein the particular image represents a subset of the scene and wherein a size characteristic of the particular image data is at least partially based on a screen size of a requestor device that is associated with the requestor 822 and particular image data from the image sensor array exclusive receiving module configured to receive only the particular image data from the image sensor array, wherein the particular image represents a subset of the scene and wherein a size characteristic of the particular image data is at least partially based on a maximum resolution of a requestor device that is associated with the requestor 824.
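
A non-limiting sketch combining the size-characteristic constraints of modules 814 through 824 (device feature, available bandwidth, screen size, maximum resolution) follows; the bits-per-pixel and frame-rate figures are assumptions of this sketch, not taken from the text:

    def size_characteristic(scene_w, scene_h, screen_w, screen_h,
                            bandwidth_bps, bits_per_pixel=12, fps=30):
        """Clamp the delivered image size to what the requestor device can show
        (screen size / maximum resolution) and to what the link to the device
        can carry (available bandwidth)."""
        # Screen limit: never send more pixels than the device can display.
        w, h = min(scene_w, screen_w), min(scene_h, screen_h)
        # Bandwidth limit: shrink proportionally until the stream fits the link.
        max_pixels = bandwidth_bps / (bits_per_pixel * fps)
        if w * h > max_pixels:
            scale = (max_pixels / (w * h)) ** 0.5
            w, h = int(w * scale), int(h * scale)
        return w, h

    # A very large scene, an 800x600 requestor device, and a 4 Mbit/s link.
    print(size_characteristic(100_000, 100_000, 800, 600, 4_000_000))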

[00160] Referring now to Fig. 9, Fig. 9 illustrates an exemplary implementation of received particular image data presenting module 258. As illustrated in Fig. 9A, the received particular image data presenting module 258 may include one or more sub-logic modules in various alternative implementations and embodiments. For example, as shown in Fig. 9, e.g., Fig. 9A, in an embodiment, module 258 may include one or more of received particular image data presenting on a device viewfinder module configured to present the received particular image data to the requestor on a viewfinder of a device associated with the requestor 902, received particular image data presenting module configured to present the received particular image data to the requestor that is a spectator of a baseball game on a requestor device that is an internet-enabled television 906, received particular image data presenting module configured to present the received particular image data to the requestor that is a naturalist that observes an animal watering hole from a smartwatch touchscreen 908, received particular image data modifying into modified particular image data module 910, and modified particular image data presenting module 912. In an embodiment, module 902 may include received particular image data presenting on a particular device viewfinder module, wherein the particular device is one or more of a cellular telephone device, a tablet device, a smartphone device, a laptop computer, a desktop computer, a television, and a wearable computer 904. In an embodiment, module 910 may include received particular image data that includes only changed portions of the scene modifying into modified particular image data module 914. In an embodiment, module 914 may include received particular image data that includes only changed portions of the scene modifying into modified particular image data through addition of unchanged portions of existent image data module 916.
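Modules 914 and 916 describe rebuilding an image from changed portions plus cached unchanged portions; the following is a minimal sketch, assuming rectangular patches and a NumPy image representation (both assumptions of the sketch):

    import numpy as np

    def apply_changed_portions(cached, patches):
        """The received particular image data holds only the changed portions
        of the scene; unchanged portions are filled in from existent (cached)
        image data to rebuild the full modified image."""
        out = cached.copy()
        for (x, y), tile in patches:
            h, w = tile.shape[:2]
            out[y:y + h, x:x + w] = tile
        return out

    cached = np.zeros((128, 128), dtype=np.uint8)
    patches = [((32, 32), np.full((16, 16), 255, dtype=np.uint8))]
    print(apply_changed_portions(cached, patches).sum())  # only the patch is nonzero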

[00161] Referring again to Fig. 9, e.g., Fig. 9B, in an embodiment, module 258 may include portion of received particular image data presenting module configured to present a portion of the received particular image data to the requestor 918. In an embodiment, module 918 may include one or more of first portion of the received particular image data presenting module 920 and second portion of the received particular image data storing module 922. In an embodiment, module 922 may include one or more of second portion of the received particular image data that is adjacent to the first portion of the received particular image data and is configured to be used as cached image data storing module 924 and second portion of the received particular image data that is adjacent to the first portion of the received particular image data and is received at a lower resolution than the first portion of the received particular image data storing module 926.
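
The present-and-cache split of modules 918 through 926 might be sketched as follows; the dictionary layout of the received data and the tile labels are assumptions of this sketch:

    def present(portion):
        # Stand-in for display on the requestor device's screen or viewfinder.
        print("presenting", portion)

    def handle_received(received, cache):
        """Present the first portion of the received particular image data;
        store the adjacent second portion, which may have been received at a
        lower resolution, as cached image data for quick panning."""
        present(received["first"])
        for origin, tile in received["second"].items():
            cache[origin] = tile  # kept for quick presentation if the view pans

    cache = {}
    handle_received({"first": "tile(0,0)@1x",
                     "second": {(1, 0): "tile(1,0)@0.5x"}}, cache)
    print(cache)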

[00162] Referring again to Fig. 9, e.g., Fig. 9C, in an embodiment, module 258 may include one or more of received particular image data transmitting to a component module configured to transmit the received particular image data to a component deployed to analyze the received particular image 928, received particular image data transmitting to a component module configured to transmit the received particular image data to a component deployed to store the received particular image 930, received particular image data presenting module configured to present the received particular image data to a client requestor 932, and received particular image data presenting module configured to present the received particular image data to a device component requestor 934.

[00163] In some implementations described herein, logic and similar implementations may include software or other control structures. Electronic circuitry, for example, may have one or more paths of electrical current constructed and arranged to implement various functions as described herein. In some implementations, one or more media may be configured to bear a device-detectable implementation when such media hold or transmit device-detectable instructions operable to perform as described herein. In some variants, for example, implementations may include an update or modification of existing software or firmware, or of gate arrays or programmable hardware, such as by performing a reception of or a transmission of one or more instructions in relation to one or more operations described herein. Alternatively or additionally, in some variants, an implementation may include special-purpose hardware, software, firmware components, and/or general-purpose components executing or otherwise invoking special-purpose components. Specifications or other implementations may be transmitted by one or more instances of tangible transmission media as described herein, optionally by packet transmission or otherwise by passing through distributed media at various times.

[00164] Following are a series of flowcharts depicting implementations. For ease of understanding, the flowcharts are organized such that the initial flowcharts present implementations via an example implementation and thereafter the following flowcharts present alternate implementations and/or expansions of the initial flowchart(s) as either sub-component operations or additional component operations building on one or more earlier-presented flowcharts. Those having skill in the art will appreciate that the style of presentation utilized herein (e.g., beginning with a presentation of a flowchart(s) presenting an example implementation and thereafter providing additions to and/or further details in subsequent flowcharts) generally allows for a rapid and easy understanding of the various process implementations. In addition, those skilled in the art will further appreciate that the style of presentation used herein also lends itself well to modular and/or object-oriented program design paradigms.

Exemplary Operational Implementation of Processor 250 and Exemplary Variants

[00165] Further, in Fig. 10 and in the figures to follow thereafter, various operations may be depicted in a box-within-a-box manner. Such depictions may indicate that an operation in an internal box may comprise an optional example embodiment of the operational step illustrated in one or more external boxes. However, it should be understood that internal box operations may be viewed as independent operations separate from any associated external boxes and may be performed in any sequence with respect to all other illustrated operations, or may be performed concurrently. Still further, these operations illustrated in Fig. 10 as well as the other operations to be described herein may be performed by at least one of a machine, an article of manufacture, or a composition of matter.

[00166] Those having skill in the art will recognize that the state of the art has progressed to the point where there is little distinction left between hardware, software, and/or firmware implementations of aspects of systems; the use of hardware, software, and/or firmware is generally (but not always, in that in certain contexts the choice between hardware and software can become significant) a design choice representing cost vs. efficiency tradeoffs. Those having skill in the art will appreciate that there are various vehicles by which processes and/or systems and/or other technologies described herein can be effected (e.g., hardware, software, and/or firmware), and that the preferred vehicle will vary with the context in which the processes and/or systems and/or other technologies are deployed. For example, if an implementer determines that speed and accuracy are paramount, the implementer may opt for a mainly hardware and/or firmware vehicle; alternatively, if flexibility is paramount, the implementer may opt for a mainly software implementation; or, yet again alternatively, the implementer may opt for some combination of hardware, software, and/or firmware. Hence, there are several possible vehicles by which the processes and/or devices and/or other technologies described herein may be effected, none of which is inherently superior to the other in that any vehicle to be utilized is a choice dependent upon the context in which the vehicle will be deployed and the specific concerns (e.g., speed, flexibility, or predictability) of the implementer, any of which may vary. Those skilled in the art will recognize that optical aspects of implementations will typically employ optically-oriented hardware, software, and/or firmware.

[00167] Throughout this application, examples and lists are given, with parentheses, the abbreviation "e.g.," or both. Unless explicitly otherwise stated, these examples and lists are merely exemplary and are non-exhaustive. In most cases, it would be prohibitive to list every example and every combination. Thus, smaller, illustrative lists and examples are used, with focus on imparting understanding of the claim terms rather than limiting the scope of such terms.

[00168] Referring now to Fig. 10, Fig. 10 shows operation 1000, e.g., an example operation of message processing device 230 operating in an environment 200. In an embodiment, operation 1000 may include operation 1002 depicting accepting input of a request for a particular image of a scene that is larger than the particular image. For example, Fig. 6, e.g., Fig. 6B, shows input of a request for particular image data accepting module 252 accepting (e.g., receiving, retrieving, facilitating the reception of, interacting with an input/output interface) input (e.g., the input could take many forms, e.g., a person interacting with an input/output device, e.g., a keyboard, mouse, touchscreen, haptic interface, virtual reality interface, augmented reality interface, audio interface, body-motion interface, or similar, or in the form of one device sending a request to another device, for example a monitoring device sending a request for a particular image, or in the form of an internal communication in a device (e.g., a subroutine of a device inputs the request to a different portion of the device (which may use the same CPU and/or other components)), or any other form) of a request (e.g., a command, suggestion, or description, which may be narrow or specific, e.g., "these pixels are the pixels that are requested," or "request all pixels that contain image objects of herring birds in them," or "request all pixels that indicate an image object has moved since the last image was captured") for a particular image (e.g., an image, or image data (which may be used substantially interchangeably throughout, but noted that here "image" and "image data" may include still pixel data, video data, audio data, metadata regarding any of the previous data, or other processing/cataloging data associated with the digital capture of external stimuli in the universe), whether in the visible spectrum or not (e.g., also including infrared, ultraviolet, and all other waves on the electromagnetic spectrum)) of a scene (e.g., in this context the scene refers to the data captured by the image sensor array, which as described in more detail herein, may be substantially reduced or modified before it reaches a destination, of which the particular image is part) that is larger than the particular image.
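
The three request forms quoted above (explicit pixels, image-object queries, and changed-pixel requests) could be carried in a single structure; the following Python sketch and all of its field names are hypothetical, offered only to make the distinction concrete:

    from dataclasses import dataclass
    from typing import Optional, Tuple

    @dataclass
    class ImageRequest:
        """Hypothetical carrier for a request; normally exactly one field is set."""
        pixels: Optional[Tuple[int, int, int, int]] = None  # "these pixels are the pixels that are requested"
        object_query: Optional[str] = None                  # "all pixels that contain image objects of herring birds"
        changed_only: bool = False                          # "pixels that indicate an image object has moved"

    # Three requests, one per form named in the text.
    explicit = ImageRequest(pixels=(100, 100, 740, 580))
    by_object = ImageRequest(object_query="herring birds")
    by_change = ImageRequest(changed_only=True)
    print(explicit, by_object, by_change, sep="\n")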

[00169] Referring again to Fig. 10, operation 1000 may include operation 1004 depicting transmitting the request for the particular image to an image sensor array that includes more than one image sensor and that is configured to capture the scene and retain a subset of the scene that includes the request for the particular image of the scene. For example, Fig. 2, e.g., Fig. 2B, shows inputted request for the particular image data transmitting module 254. In an embodiment, transmitting may describe, e.g., sending, or facilitating sending, to the destination, or to an intermediary that may act autonomously. It is noted that in several embodiments of the system, the request for the particular image is not transmitted directly to the image sensor array, but rather to a remote server that communicates with the image sensor array. The transmitting device may not know the actual location or other data about the image sensor array, e.g., the remote server may be configured to act as an intermediary; however, this is also considered "transmitting" to the image sensor array for the purposes of one or more embodiments listed herein. The "request for the particular image" may be a request (e.g., a command, suggestion, or description, which may be narrow or specific, e.g., "these pixels are the pixels that are requested," or "request all pixels that contain image objects of herring birds in them," or "request all pixels that indicate an image object has moved since the last image was captured") for a particular image (e.g., an image, or image data (which may be used substantially interchangeably throughout, but noted that here "image" and "image data" may include still pixel data, video data, audio data, metadata regarding any of the previous data, or other processing/cataloging data associated with the digital capture of external stimuli in the universe), whether in the visible spectrum or not (e.g., also including infrared, ultraviolet, and all other waves on the electromagnetic spectrum)). In an embodiment, the "image sensor array" may be, e.g., a set of one or more image sensors that are grouped together, whether spatially grouped or linked electronically or through a network, in any arrangement or configuration, whether contiguous or noncontiguous, and whether in a pattern or not, and which image sensors may or may not be uniform throughout the array. In an embodiment, the scene, e.g., in this context the scene refers to the data captured by the image sensor array, which as described in more detail herein, may be substantially reduced or modified before it reaches a destination, of which the particular image is part.

[00170] Referring again to Fig. 10, operation 1000 may include operation 1006 depicting receiving only the particular image from the image sensor array, wherein the particular image represents a subset of the scene, and wherein a size characteristic of the particular image is at least partially based on a property of a requestor. For example, Fig. 2, e.g., Fig. 2B, shows particular image data from the image sensor array exclusive receiving module 256 receiving only (e.g., the ultimate destination of the data receives only the particular image, although more data may be received at intermediaries, e.g., the remote server), from the image sensor array (e.g., a set of one or more image sensors that are grouped together, whether spatially grouped or linked electronically or through a network, in any arrangement or configuration, whether contiguous or noncontiguous, and whether in a pattern or not, and which image sensors may or may not be uniform throughout the array), wherein the particular image represents a subset (e.g., in this context "subset" merely means that the particular image is some part, possibly all, of the scene) of the scene (e.g., the data captured by the image sensor array, which as described in more detail herein, may be substantially reduced or modified before it reaches a destination, of which the particular image is part), and wherein a size characteristic (e.g., a data size, or a real-world correspondent size (e.g., spatial distance)) of the particular image is at least partially based on a property of a requestor (e.g., a requestor is the entity that made the request, e.g., via a device, and the property of the requestor may include properties of the device; for example, if the requestor made the request on a cellular telephone device with a maximum resolution of 800x600 pixels, then that property of the requestor would limit the size characteristic of the particular image to a resolution of 800x600).

[00171] Referring again to Fig. 10, operation 1000 may include operation 1008 depicting presenting the received particular image to the requestor. For example, Fig. 2, e.g., Fig. 2B, shows received particular image data presenting module 258 presenting (e.g., transmitting, storing, displaying, or taking some other action in accordance with the configuration/wishes of the requestor, e.g., if the requestor intends to store the particular image, then "presenting" is "transmitting" or "copying," but if the requestor intends to view the particular image, then "presenting" may mean "displaying") the received particular image to the requestor.

[00172] Figs. 11A-11F depict various implementations of operation 1002, depicting accepting input of a request for a particular image of a scene that is larger than the particular image according to embodiments. Referring now to Fig. 11A, operation 1002 may include operation 1102 depicting receiving input of the request for the particular image of the scene that is larger than the particular image. For example, Fig. 6, e.g., Fig. 6A shows input of a request for particular image data accepting module accepting input (e.g., receiving a vocal order spoken into a microphone) of a request for a particular image (e.g., "show me the corner of 59th and Vine in New York") of a scene that is larger than the particular image (e.g., the scene may be the entire area of New York captured by the image sensor array, which is larger than the street corner (e.g., depending on the array, it may be blocks, or square miles, or even larger, limited only by the array)). In an embodiment, other image sensor arrays may combine into a larger image sensor array, or pass off control from one to the other so that it appears they are a single image sensor array, which is also included here in one or more embodiments.

[00173] Referring again to Fig. 11A, operation 1002 may include operation 1104 depicting receiving input from an automated component, of the request for the particular image of the scene that is larger than the particular image. For example, Fig. 6, e.g., Fig. 6A, shows input of a request for particular image data accepting module configured to accept input from an automated component for the request for particular image data 604 receiving input from an automated component (e.g., an algorithm (e.g., an algorithm that runs on a hardware component without further human interaction required, e.g., the algorithm is programmed to execute instructions, and a human may cause the algorithm/component to execute; in an embodiment, the human may need to take no further action)) of the request for the particular image (e.g., an image of any motion that was detected in front of a warehouse at night) of the scene (e.g., the front of the warehouse) that is larger than the particular image (e.g., an image of where motion was detected). For example, in an embodiment, a separate motion sensor may detect motion in front of the warehouse, and may send a request for the particular image in the form of "transmit the image where motion was detected." In another embodiment, the scene data may be used to detect motion, e.g., if pixels have changed in the scene, and the request may be generated internally, e.g., "collect the portion of the image where motion was detected as the particular image."
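The warehouse example, in which the scene data itself is used to detect motion and the request is generated internally, might be sketched as follows; the threshold value and the frame representation are assumptions of this non-limiting sketch:

    import numpy as np

    def motion_request(prev_frame, cur_frame, threshold=25):
        """An automated component compares scene frames and, where pixels
        changed, emits a request for the particular image covering the motion."""
        moved = np.argwhere(np.abs(cur_frame.astype(np.int16) - prev_frame.astype(np.int16)) > threshold)
        if moved.size == 0:
            return None  # nothing moved; no request is generated
        (y0, x0), (y1, x1) = moved.min(axis=0), moved.max(axis=0)
        return {"request": "particular image where motion was detected",
                "region": (int(x0), int(y0), int(x1) + 1, int(y1) + 1)}

    prev = np.zeros((100, 100), dtype=np.uint8)
    cur = prev.copy()
    cur[40:60, 70:90] = 255  # something moved in front of the warehouse
    print(motion_request(prev, cur))  # region (70, 40, 90, 60)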

[00174] Referring again to Fig. 11A, operation 1002 may include operation 1106 depicting receiving input from an image object tracking component, of the request for the particular image that contains a tracked image object present in the scene that is larger than the particular image. For example, Fig. 6, e.g., Fig. 6A, shows input of a request from an image object tracking algorithm for particular image data accepting module 606 receiving input from an image object tracking component (e.g., a component designed to track an image object as it moves through the scene, e.g., a flying bird (the particular image) through an animal oasis (the scene), or a moving car (the particular image) crossing a highway (the scene), or a specific person (the particular image) walking down a street corner (e.g., the scene)). These examples are merely exemplary and not exhaustive. The request is for the particular image that contains a tracked image object (e.g., the specific person) present in the scene (e.g., the street corner). In an embodiment, the tracked image object may be tracked through automated image analysis (e.g., image object detection) or through human/automation/artificial intelligence/intelligence amplification intervention (e.g., a human selecting the place where the person is), or some combination thereof.

[00175] Referring again to Fig. 11A, operation 1106 may include operation 1108 depicting receiving input of the request for the particular image from the image object tracking component, of the requestor device that is associated with the requestor, wherein the particular image contains the tracked image object present in the scene that is larger than the particular image. For example, Fig. 6, e.g., Fig. 6A, shows image object tracking algorithm for particular image data accepting through a requestor device interface module 608 receiving input of the request for the particular image from the image object tracking component, of the requestor device that is associated with the requestor, wherein the particular image contains the tracked image object (e.g., tracking a person walking through a stadium environment) present in the scene (e.g., the stadium environment, e.g., a concourse of a stadium) that is larger than the particular image.

[00176] Referring again to Fig. 11A, operation 1106 may include operation 1110 depicting receiving input from a player tracking component, that is part of an Internet-enabled television configured to track football players, of the request for the particular image that is an image of a football game that contains a tracked image object that is a particular football player present in the scene that is larger than the particular image of the football game. For example, Fig. 6, e.g., Fig. 6A, shows image object tracking algorithm for particular image data accepting through an interface of an Internet-enabled television device interface module 610 receiving input from a player tracking component, that is part of an Internet-enabled television configured to track football players, of the request for the particular image that is an image of a football game that contains a tracked image object that is a particular football player present in the scene that is larger than the particular image of the football game.

[00177] Referring now to Fig. 11B, operation 1002 may include operation 1112 depicting accepting input of the request for the particular image of the scene that is larger than the particular image through an interface of a requestor device associated with the requestor. For example, Fig. 6, e.g., Fig. 6B, shows input of a request for particular image data accepting through a requestor device interface module 612 accepting input of the request for the particular image (e.g., an image of panda bears) of the scene (e.g., an animal oasis/watering hole) that is larger than the particular image through an interface (e.g., a touchscreen) of a requestor device (e.g., a smartphone) associated with (e.g., operated by) the requestor (e.g., a person who wants to watch the pandas at the watering hole).

[00178] Referring again to Fig. 11B, operation 1112 may include operation 1114 depicting receiving input of the request for the particular image of the scene that is larger than the particular image from the requestor through the interface of the requestor device that is configured to display at least a portion of the scene. For example, Fig. 6, e.g., Fig. 6B, shows input of a request for particular image data accepting through a requestor device interface module 614 receiving input of the request for the particular image (e.g., a specific area of a downward-looking view of a city block in a "live street view" context) of the scene (e.g., a downward-looking view of a city block) that is larger than the particular image (e.g., a specific area of the city block, e.g., a hot dog stand area) from the requestor (e.g., a person viewing on their computer) through the interface of the requestor device that is configured to display at least a portion of the scene (e.g., the computer shows a low-resolution, condensed version of the scene, that is, the city block, and the user clicks on the area that contains the hot dog stand, and that input is received and the particular image of the hot dog stand is selected).
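The hot dog stand example, in which a click on a condensed version of the scene becomes the request for the particular image, reduces to a coordinate scaling; the following sketch is illustrative only, and every size in it is hypothetical:

    def click_to_scene_roi(click_x, click_y, overview_size, scene_size, roi_w, roi_h):
        """Scale a click on the low-resolution overview up to scene pixel
        coordinates and center a requested region of the given size on it."""
        sx = scene_size[0] / overview_size[0]
        sy = scene_size[1] / overview_size[1]
        cx, cy = click_x * sx, click_y * sy  # click position in scene pixels
        left = int(max(0, min(cx - roi_w / 2, scene_size[0] - roi_w)))
        top = int(max(0, min(cy - roi_h / 2, scene_size[1] - roi_h)))
        return (left, top, left + roi_w, top + roi_h)

    # A click at (412, 233) on a 1024x576 overview of a 50000x28000-pixel scene,
    # requesting a 1920x1080 particular image around the hot dog stand.
    print(click_to_scene_roi(412, 233, (1024, 576), (50000, 28000), 1920, 1080))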

[00179] Referring again to Fig. 11B, operation 1114 may include operation 1116 depicting receiving input of the request for the particular image of the scene that is larger than the particular image from the requestor through the interface of the requestor device that is configured to display at least the portion of the scene in a viewfinder. For example, Fig. 6, e.g., Fig. 6B, shows input of a request for particular image data accepting through a requestor device interface module 616 receiving input of the request for the particular image of the scene that is larger than the particular image from the requestor through the interface of the requestor device that is configured to display at least the portion of the scene in a viewfinder (e.g., either optically or digitally, a condensed version of the scene is shown in a viewfinder, and a remote (or local) requestor can select the portion of the scene to be accepted as input of the request for the particular image, for example, the user can select a particular animal at a watering hole, or a football player at a football game).

[00180] Referring again to Fig. 11B, operation 1114 may include operation 1118 depicting receiving input of the request for the particular image of the scene that is larger than the particular image from the requestor through the interface of the requestor device that is configured to display at least the portion of the scene, said request at least partially based on a view of the scene. For example, Fig. 6, e.g., Fig. 6B, shows input of a request for particular image data that is at least partially based on a view of the scene accepting through a requestor device interface module 618 receiving input of the request for the particular image of the scene that is larger than the particular image from the requestor through the interface of the requestor device that is configured to display at least the portion of the scene, said request at least partially based on a view of the scene (e.g., a view of the scene is given to the requestor so that the requestor can select a portion of the scene as the particular image).

[00181] Referring again to Fig. 11B, operation 1114 may include operation 1120 depicting receiving the request for particular image data of the scene from the requestor device that is configured to receive the selection of the particular image, wherein the requestor device is one or more of a smartphone, television, computer screen, tablet, camera, appliance, and augmented reality device. For example, Fig. 6, e.g., Fig. 6B, shows input of a request for particular image data accepting through a specific requestor device interface module 620 receiving the request for particular image data of the scene from the requestor device that is configured to receive the selection of the particular image (e.g., a view of the interior of the house that can be viewed remotely), wherein the requestor device is one or more of a smartphone, television, computer screen, tablet, camera, appliance, and augmented reality device.

[00182] Referring now to Fig. 11C, operation 1002 may include operation 1122 depicting accepting input of the request for the particular image of the scene that contains more pixels than the particular image. For example, Fig. 6, e.g., Fig. 6C, shows input of the request for particular image data that is part of the scene that contains more pixels than the particular image associated with the particular image data accepting module 622 accepting input of the request (e.g., a requestor pushes a button on a remote control to control the smart television that is showing images) for the particular image (e.g., a particular player on a person's fantasy football team) of the scene (e.g., a football stadium) that contains more pixels than the particular image.

[00183] Referring again to Fig. 11C, operation 1002 may include operation 1124 depicting accepting input of the request for the particular image of the scene that captures a larger spatial area than the particular image. For example, Fig. 6, e.g., Fig. 6C, shows input of the request for particular image data that is part of the scene that is a larger spatial area than the particular image associated with the particular image data accepting module 624 accepting input (e.g., a touchscreen input in which the requestor touches a representation of the scene at a particular portion) of the request for the particular image (e.g., the image data at the location touched by the requestor) of the scene (e.g., a street-level view of downtown Washington DC) that captures a larger spatial area (e.g., the depiction of the scene is a larger geographic area than the depiction of the particular image) than the particular image (e.g., an image of a particular intersection in downtown Washington DC).

[00184] Referring again to Fig. 11C, operation 1002 may include operation 1126 depicting accepting input of the request for the particular image of the scene that includes more data than the particular image. For example, Fig. 6, e.g., Fig. 6C, shows input of the request for particular image data that is part of the scene that contains more data than the particular image associated with the particular image data accepting module 626 accepting input of the request for the particular image of the scene that includes more data than the particular image (e.g., an image of a lion at the watering hole). It is noted that, in an embodiment, the scene represents the total data captured by the image sensor array, much of which may be discarded at the image sensor array.

[00185] Referring again to Fig. 11C, operation 1002 may include operation 1128 depicting accepting input of the request for particular image data of the scene, wherein the scene is a representation of the image data collected by the array of more than one image sensor. For example, Fig. 6, e.g., Fig. 6C, shows input of a request for particular image data accepting module, wherein the particular image data is part of a scene that is a representation of the image data collected by the array of more than one image sensor 628 accepting input of the request for particular image data of the scene (e.g., a virtual tourism display of the Great Pyramids), wherein the scene is a representation of the image data collected by the array of more than one image sensor.

[00186] Referring again to Fig. 11C, operation 1128 may include operation 1130 depicting accepting input of the request for particular image data of the scene, wherein the scene is a sampling of the image data collected by the array of more than one image sensor. For example, Fig. 6, e.g., Fig. 6C, shows input of a request for particular image data accepting module 630 accepting input of the request for particular image data of the scene, wherein the scene is a sampling (e.g., every other pixel, or every tenth pixel, or any subset of the entirety of the data collected by the image sensor array; for example, if the image sensor array also collects audio data, then a sampling in this context could mean only the pixel data, or only the audio data, or some combination thereof) of the image data collected by the array of more than one image sensor.
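The pixel samplings named here (every other pixel, every tenth pixel) correspond to simple strided slices; a short sketch using NumPy, with the array contents standing in for real sensor data:

    import numpy as np

    captured = np.arange(100 * 100).reshape(100, 100)  # stand-in for full sensor data
    every_other = captured[::2, ::2]    # one quarter of the pixels
    every_tenth = captured[::10, ::10]  # one percent of the pixels
    print(captured.shape, every_other.shape, every_tenth.shape)  # (100, 100) (50, 50) (10, 10)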

[00187] Referring again to Fig. 11C, operation 1128 may include operation 1132 depicting accepting input of the request for particular image data of the scene, wherein the scene is a subset of the image data collected by the array of more than one image sensor. For example, Fig. 6, e.g., Fig. 6C, shows input of a request for particular image data accepting module 632 accepting input of the request for particular image data of the scene, wherein the scene is a subset (e.g., any set of data that is less than or equal to the total amount of data captured by the image sensor array, e.g., at a moment in time) of the image data (e.g., which, as stated above, may include visible and invisible spectrum data, audio data, video data, and metadata) collected by the array of more than one image sensor.

[00188] Referring again to Fig. 11C, operation 1128 may include operation 1134 depicting accepting input of the request for particular image data of the scene, wherein the scene is a low-resolution version of the image data collected by the array of more than one image sensor. For example, Fig. 6, e.g., Fig. 6C, shows input of a request for particular image data accepting module 634 accepting input of the request for particular image data of the scene, wherein the scene is a low-resolution version of the image data collected by the array of more than one image sensor.

[00189] Referring now to Fig. 11D, operation 1002 may include operation 1136 depicting accepting input of the request for particular image data of a scene that is a football game. For example, Fig. 6, e.g., Fig. 6D, shows input of a request for particular image data accepting module 636 accepting input of the request for particular image data (e.g., a particular football player) of a scene that is a football game.

[00190] Referring again to Fig. 11D, operation 1002 may include operation 1138 depicting accepting input of the request for particular image data of a scene that is a street view of an area. For example, Fig. 6, e.g., Fig. 6D, shows input of a request for particular image data accepting module 638 accepting input of the request for particular image data (e.g., an image of a storefront where a targeted person is walking in) of a scene that is a street view of an area (e.g., a "live" street view of a busy intersection in Washington, DC).

[00191] Referring again to Fig. 11D, operation 1002 may include operation 1140 depicting acquiring the request for particular image data of a scene that is a tourist destination. For example, Fig. 6, e.g., Fig. 6D, shows input of a request for particular image data accepting module 640 acquiring the request for particular image data (e.g., a view from a top of the Eiffel Tower) of a scene that is a tourist destination (e.g., the Eiffel Tower).

[00192] Referring again to Fig. 11D, operation 1002 may include operation 1142 depicting acquiring the request for particular image data of a scene that is an inside of a home. For example, Fig. 6, e.g., Fig. 6D, shows input of a request for particular image data accepting module 642 acquiring the request for particular image data (e.g., inside the kitchen) of a scene that is an inside of a home.

[00193] Referring again to Fig. 11D, operation 1002 may include operation 1144 depicting accepting input of the request for particular image data of the scene, wherein the particular image data is an image that is a portion of the scene. For example, Fig. 6, e.g., Fig. 6D, shows input of a request for particular image data that is a portion of the scene accepting module 644 accepting input (e.g., touchscreen input) of the request for particular image data (e.g., audio and video data of a lion at the watering hole) of the scene (e.g., an animal watering hole), wherein the particular image data is an image (a term which in this context also can include video and audio) that is a portion of the scene (e.g., the animal watering hole).

[00194] Referring again to Fig. 11D, operation 1144 may include operation 1146 depicting accepting input of the request for particular image data of the scene, wherein the particular image data includes image data of a particular football player and the scene is a football field. For example, Fig. 6, e.g., Fig. 6D, shows input of a request for particular image data that includes a particular football player that is a portion of the scene that is a football field accepting module 646 accepting input of the request for particular image data of the scene, wherein the particular image data includes image data of a particular football player and the scene is a football field.

[00195] Referring again to Fig. 11D, operation 1144 may include operation 1148 depicting accepting input of the request for particular image data of the scene, wherein the particular image data includes image data of a license plate of a vehicle, and the scene is an image representation of a highway bridge. For example, Fig. 6, e.g., Fig. 6D, shows input of a request for particular image data that includes a license plate of a vehicle that is a portion of the scene that is a representation of a highway bridge accepting module 648 accepting input of the request for particular image data of the scene, wherein the particular image data includes image data of a license plate of a vehicle, and the scene is an image representation of a highway bridge.

[00196] Referring now to Fig. 11E, operation 1002 may include operation 1150 depicting accepting input of the request for particular image data that is part of the scene, wherein the particular image data includes video data. For example, Fig. 6, e.g., Fig. 6E, shows input of a request for particular image video data accepting module 650 accepting input (e.g., through a human-body interaction, e.g., similar to the Microsoft Kinect) of the request for particular image data (e.g., data of a rhinoceros on the African steppe) that is part of the scene, wherein the particular image data includes video data.

[00197] Referring again to Fig. 11E, operation 1002 may include operation 1152 depicting accepting input of the request for particular image data that is part of the scene, wherein the particular image data includes audio data. For example, Fig. 6, e.g., Fig. 6E, shows input of a request for particular image audio data accepting module 652 accepting input (e.g., through interaction with an augmented reality construct that is being projected to the requestor and overlaid with reality) of the request for particular image data (e.g., virtual tourism data of the inside of the Smithsonian Museum of Art) that is part of the scene, wherein the particular image data includes audio data.

[00198] Referring again to Fig. 11E, operation 1002 may include operation 1154 depicting accepting input of the request for particular image data that is part of the scene from a requestor device that receives the request for particular image data through an audio interface. For example, Fig. 6, e.g., Fig. 6E, shows input of a request for particular image data accepting through an audio interface module 654 accepting input of the request for particular image data that is part of the scene from a requestor device (e.g., a tablet device) that receives the request for particular image data through an audio interface (e.g., an interactive virtual companion, e.g., Microsoft's "Cortana" or Apple's "Siri").

[00199] Referring again to Fig. 11E, operation 1002 may include operation 1156 depicting accepting input of the request for particular image data that is part of the scene from the requestor device that has a microphone that receives a spoken request for particular image data from the requestor. For example, Fig. 6, e.g., Fig. 6E, shows input of a request for particular image data accepting through a microphone audio interface module 656 accepting input of the request for particular image data that is part of the scene from the requestor device (e.g., a cellular smartphone) that has a microphone that receives a spoken request for particular image data from the requestor (e.g., the person interacting with the cell phone device with a microphone).

[00200] Referring now to Fig. 11F, operation 1002 may include operation 1158 depicting accepting input of the request for the particular image of the scene that is larger than the particular image, from the requestor. For example, Fig. 6, e.g., Fig. 6F, shows input of a request for particular image data accepting from the requestor module 658 accepting input of the request (e.g., a request to transmit an image) for the particular image (e.g., an image of the object that is moving in the scene) of the scene (e.g., the area around a front door to a secured compound) that is larger than the particular image, from the requestor (e.g., a device configured to store an image from a security camera of any object that moves within fifty feet of a door to a secured compound).

[00201] Referring again to Fig. 11F, operation 1158 may include operation 1160 depicting accepting input of the request for the particular image of the scene that is larger than the particular image, from the requestor that is a client operating a device. For example, Fig. 6, e.g., Fig. 6F, shows input of a request for particular image data accepting from the requestor module 660 accepting input of the request for the particular image of the scene that is larger than the particular image, from the requestor that is a client (e.g., a person) operating a device (e.g., a laptop computer with a keyboard and a mouse).

[00202] Referring again to Fig. 11F, operation 1160 may include operation 1162 depicting accepting input of the request for the particular image of the scene that is larger than the particular image, from the requestor that is a person operating a smart television with a remote control. For example, Fig. 6, e.g., Fig. 6F, shows input of a request for particular image data accepting from the requestor module 662 accepting input of the request for the particular image (e.g., a specific swimmer in an Olympic race) of the scene (e.g., the inside of a pool at an Olympics) that is larger than the particular image, from the requestor that is a person operating a smart television with a remote control.

[00203] Referring again to Fig. 11F, operation 1158 may include operation 1164 depicting accepting input of the request for the particular image of the scene that is larger than the particular image, from the requestor that is a device. For example, Fig. 6, e.g., Fig. 6F, shows input of a request for particular image data accepting from a requestor device module 664 accepting input of the request for the particular image (e.g., pictures of license plates of cars that pass a toll bridge) of the scene (e.g., a point on a toll bridge where vehicles can pass) that is larger than the particular image, from the requestor that is a device (e.g., the requestor is a separate device that records and tracks license plate numbers of vehicles that pass and the tolls that are paid, so that when a toll is not paid, the requestor device can initiate automation to receive the particular image that will show the license plate).

[00204] Referring again to Fig. 11F, operation 1164 may include operation 1166 depicting accepting input of the request for the particular image of the scene that is larger than the particular image, from the requestor that is a component of a device. For example, Fig. 6, e.g., Fig. 6F, shows input of a request for particular image data accepting from a requestor device component module 666 accepting input of the request for the particular image of the scene that is larger than the particular image, from the requestor that is a component (e.g., an algorithm, a subroutine, a program, a chip, a module, or any combination thereof) of a device.

[00205] Referring again to Fig. 11F, operation 1164 may include operation 1168 depicting accepting input of the request for the particular image of the scene that is larger than the particular image, from the requestor that is a device that is executing a subroutine. For example, Fig. 6, e.g., Fig. 6F, shows input of a request for particular image data accepting from a requestor device component module 668 accepting input (e.g., an electronic transmission from device to device) of the request for the particular image (e.g., an image of a warehouse door) of the scene (e.g., the entire warehouse) that is larger than the particular image, from the requestor that is a device that is executing a subroutine (e.g., a device that is executing a subroutine to check all the entry points of a warehouse at given intervals, without human intervention or direction).

[00206] Referring again to Fig. 11F, operation 1168 may include operation 1170 depicting accepting, at the device, of the input of the request for the particular image of the scene that is larger than the particular image, from the requestor that is the device that is executing a separate subroutine. For example, Fig. 6, e.g., Fig. 6F, shows input of a request for particular image data accepting at the requestor device module 670 accepting, at the device (e.g., a command computer in charge of security at a warehouse), of the input of the request for the particular image (e.g., a front door of the warehouse) of the scene that is larger than the particular image, from the requestor that is the device that is executing a separate subroutine (e.g., the same device that accepts the request for the image of the warehouse door also runs the subroutine that requests the image of the warehouse door at particular intervals, in a separate subroutine, which may share some of the programming logic and/or hardware as the acceptance of the request for the image).

[00207] Figs. 12A-12G depict various implementations of operation 1004, depicting transmitting the request for the particular image to an image sensor array that includes more than one image sensor and that is configured to capture the scene and retain a subset of the scene that includes the request for the particular image of the scene, according to embodiments. Referring now to Fig. 12A, operation 1004 may include operation 1202 depicting transmitting the request for the particular image data of the scene to the image sensor array that includes multiple connected image sensors and that is configured to capture the scene and retain the subset of the scene that includes the request for the particular image of the scene. For example, Fig. 7, e.g., Fig. 7A, shows request for particular image data transmitting to an image sensor array module 702 configured to transmit the request to the image sensor array that includes multiple connected image sensors and that is configured to capture the scene and retain the subset of the scene that includes the request for the particular image of the scene, said module 702 transmitting the request for the particular image data of the scene to the image sensor array that includes multiple connected image sensors (e.g., the image sensors send data to a common processor) and that is configured to capture the scene and retain the subset of the scene that includes the request for the particular image (e.g., an image of a living room of a home) of the scene.

[00208] Referring again to Fig. 12A, operation 1004 may include operation 1204 depicting transmitting the request for the particular image data of the scene to the image sensor array that includes two image sensors arranged side by side and angled toward each other and that is configured to capture the scene that is larger than the requested image data. For example, Fig. 7, e.g., Fig. 7A, shows request for particular image data transmitting to an image sensor array that includes two inline image sensors angled toward each other module 704 configured to transmit the request to the image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data, said module 704 transmitting the request for the particular image data (e.g., an image of the drummer at a music concert) of the scene (e.g., the stage of a big music concert) to the image sensor array that includes two image sensors (e.g., CMOS sensors that are ten megapixels each) arranged side by side and angled toward each other, and that is configured to capture the scene (e.g., the stage of a music concert) that is larger than the requested image data.

[00209] Referring again to Fig. 12A, operation 1004 may include operation 1206 depicting transmitting the request for the particular image data of the scene to the image sensor array that includes the array of image sensors arranged in a grid and that is configured to capture the scene that is larger than the requested image data. For example, Fig. 7, e.g., Fig. 7A, shows request for particular image data transmitting to the image sensor array that includes a pattern of image sensors arranged in a grid module 706 configured to transmit the request to the image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data, said module 706 transmitting the request for the particular image data of the scene to the image sensor array that includes the array of image sensors arranged in a grid (e.g., a 10x10 grid of three-megapixel image sensors) and that is configured to capture the scene that is larger than the requested image data.

[00210] Referring again to Fig. 12A, operation 1004 may include operation 1208 depicting transmitting the request for the particular image data of the scene to the image sensor array that includes the array of image sensors arranged in a line such that a field of view is greater than 120 degrees and that is configured to capture the scene that is larger than the requested image data. For example, Fig. 7, e.g., Fig. 7A, shows request for particular image data transmitting to the image sensor array that includes a pattern of image sensors arranged in a line module 708 configured to transmit the request to the image sensor array that has a field of view greater than one hundred twenty degrees and that is configured to capture the scene that is larger than the requested particular image data, said module 708 transmitting the request for the particular image data of the scene to the image sensor array that includes the array of image sensors arranged in a line such that a field of view is greater than 120 degrees and that is configured to capture the scene (e.g., a soccer field where a youth soccer game is occurring) that is larger than the requested image data (e.g., an image of a particular parent's child at a youth soccer game).

[00211] Referring now to Fig. 12B, operation 1004 may include operation 1210 depicting transmitting the request for the particular image to the image sensor array that includes more than one image sensor and that is configured to capture the scene that represents more image data than the requested particular image data. For example, Fig. 7, e.g., Fig. 7B, shows request for particular image data transmitting to the image sensor array module 710 configured to transmit the request to the image sensor array that includes more than one image sensor and that is configured to capture the scene that represents more image data than the requested particular image data, said module 710 transmitting the request for the particular image to the image sensor array that includes more than one image sensor and

that is configured to capture the scene that represents more image data (e.g., the scene represents a larger area and more data, even if the requestor is viewing a smaller part of it) than the requested particular image data (e.g., an image of a cubicle in an office that is part of an office employee productivity monitoring system).

[00212] Referring again to Fig. 12B, operation 1210 may include operation 1212 depicting transmitting the request for the particular image to the image sensor array that includes more than one image sensor and that is configured to capture the scene that represents more than ten times as much image data as the requested particular image data. For example, Fig. 7, e.g., Fig. 7B, shows request for particular image data transmitting to the image sensor array module 712 configured to transmit the request to the image sensor array that includes more than one image sensor and that is configured to capture the scene that represents more than ten times as much image data as the requested particular image data, said module 712 transmitting the request for the particular image (e.g., a satellite with a mounted version of the image sensor array is moved to a militarily important target and the particular image is of potential enemy combatants that are detected in the scene) to the image sensor array that includes more than one image sensor and that is configured to capture the scene (e.g., high-resolution satellite data of the militarily important and targeted area) that represents more than ten times as much image data as the requested particular image data.

[00213] Referring again to Fig. 12B, operation 1210 may include operation 1214 depicting transmitting the request for the particular image to the image sensor array that includes more than one image sensor and that is configured to capture the scene that represents more than one hundred times as much image data as the requested particular image data. For example, Fig. 7, e.g., Fig. 7B, shows request for particular image data transmitting to the image sensor array module 714 configured to transmit the request to the image sensor array that includes more than one image sensor and that is configured to capture the scene that represents more than one hundred times more image data than the requested particular image data, said module 714 transmitting the request for the particular image (e.g., a drone/UAV with a mounted version of the image sensor array is moved to a militarily important target and the particular image is of potential enemy combatants that are detected in the scene) to the image sensor array that includes more than one image sensor and that is configured to capture the scene (e.g., high-resolution drone/UAV data of the militarily

important and targeted area) that represents more than one hundred times as much image data as the requested particular image data.

[00214] Referring now to Fig. 12C, operation 1004 may include operation 1216 depicting transmitting the request for the particular image to a remote server that is configured to relay the request for the particular image to the image sensor array that includes more than one image sensor. For example, Fig. 7, e.g., Fig. 7C, shows request for particular image data transmitting to a remote server configured to relay the request to the image sensor array module 716 configured to transmit the request to the remote server that is configured to package the request for particular image data and relay the request for particular image data to the image sensor array, said module 716 transmitting the request for the particular image to a remote server that is configured to relay the request for the particular image (e.g., an image of a lion at a watering hole) to the image sensor array that includes more than one image sensor.

[00215] Referring again to Fig. 12C, operation 1216 may include operation 1218 depicting transmitting the request for the particular image to the remote server that is configured to relay the request for the particular image to the image sensor array that includes more than one image sensor along with one or more other requests for other particular images. For example, Fig. 7, e.g., Fig. 7C, shows request for particular image data transmitting to a remote server configured to relay the request to the image sensor array module 718 configured to transmit the request to the remote server that is configured to package the request for particular image data and relay the request for particular image data along with one or more other image data requests to the image sensor array, said module 718 transmitting the request for the particular image (e.g., an image of a running back football player during a football game) to the remote server (e.g., a piece of hardware that may be spatially distant, or not, from the image sensor array, but which has insufficient bandwidth to collect 100% of the data from the image sensor array as it is collected) that is configured to relay the request for the particular image to the image sensor array that includes more than one image sensor along with one or more requests for other particular images (e.g., another request might be for the quarterback football player, or for the defensive end football player, etc.).

[00216] Referring again to Fig. 12C, operation 1216 may include operation 1220 depicting transmitting the request for the particular image to the remote server that is configured to package multiple requests that include the request for the particular image and to transmit the package of multiple requests to the image sensor array. For example, Fig. 7, e.g., Fig. 7C, shows request for particular image data transmitting to a remote server configured to relay the request to the image sensor array module 720 configured to transmit the request to the remote server that is configured to package multiple requests that include the request for particular image data and relay the package of multiple requests to the image sensor array, said module 720 transmitting the request for the particular image to the remote server that is configured to package multiple requests that include the request for the particular image and to transmit the package of multiple requests to the image sensor array.

[00217] Referring again to Fig. 12C, operation 1216 may include operation 1222 depicting transmitting the request for the particular image to the remote server that is configured to combine multiple requests that include the request for the particular image and to transmit the package of multiple requests with redundant data requests removed to the image sensor array. For example, Fig. 7, e.g., Fig. 7C, shows request for particular image data transmitting to a remote server deployed to relay the request to the image sensor array module 722 configured to transmit the request to the remote server that is configured to package multiple requests that include the request for particular image data and relay the package of multiple requests to the image sensor array, said module 722 transmitting the request for the particular image to the remote server that is configured to combine multiple requests that include the request for the particular image and to transmit the package of multiple requests with redundant data requests removed to the image sensor array, as shown, e.g., in Fig. 3B.

[00218] Referring again to Fig. 12C, operation 1222 may include operation 1224 depicting transmitting the request for the particular image to the remote server that is configured to combine multiple requests from multiple requestors that include the request for the particular image, and to transmit the multiple requests as a single combined request for image data to the image sensor array. For example, Fig. 7, e.g., Fig. 7C, shows request for particular image data transmitting to a remote server deployed to relay the request to the image sensor array module 724 configured to transmit the request to the remote server that

is configured to combine multiple requests that include the request for particular image data and transmit the combined multiple requests as a single combined request for image data to the image sensor array, said module 724 transmitting the request for the particular image to the remote server that is configured to combine multiple requests from multiple requestors that include the request for the particular image, and to transmit the multiple requests as a single combined request for image data to the image sensor array, as shown in Figs. 3B and 5B.
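
By way of non-limiting illustration only, the following sketch shows one way a remote server might combine overlapping region-of-interest requests from multiple requestors into a single deduplicated request before relaying it to the image sensor array. The Region rectangle representation, the Python language, and the merge-until-fixpoint strategy are illustrative assumptions, not features required by the embodiments described above.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Region:
        """An axis-aligned pixel rectangle within the scene (illustrative only)."""
        left: int
        top: int
        right: int
        bottom: int

        def overlaps(self, other: "Region") -> bool:
            return (self.left < other.right and other.left < self.right and
                    self.top < other.bottom and other.top < self.bottom)

        def union(self, other: "Region") -> "Region":
            return Region(min(self.left, other.left), min(self.top, other.top),
                          max(self.right, other.right), max(self.bottom, other.bottom))

    def combine_requests(regions: list[Region]) -> list[Region]:
        """Merge overlapping regions until no overlaps remain, so redundant
        pixels are requested from the image sensor array only once.
        Note: the union is a bounding box and may cover some extra pixels;
        a production design might track tile sets instead."""
        merged = list(regions)
        changed = True
        while changed:
            changed = False
            result: list[Region] = []
            for region in merged:
                for i, existing in enumerate(result):
                    if region.overlaps(existing):
                        result[i] = existing.union(region)
                        changed = True
                        break
                else:
                    result.append(region)
            merged = result
        return merged

Under this sketch, two spectators requesting the quarterback and the running back in overlapping regions would generate one combined request covering both, rather than two requests with a duplicated overlap.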

[00219] Referring now to Fig. 12D, operation 1004 may include operation 1226 depicting modifying the request for the particular image into an updated request for particular image data. For example, Fig. 7, e.g., Fig. 7D, shows request for particular image data modifying into updated request for particular image data module 726 modifying the request for the particular image into a request for updated particular image data (e.g., here "updated request" means that the original request from the requestor has been modified, e.g., added to, altered, subtracted from, or appended to, whether the actual request itself is changed or simply more data is added).

[00220] Referring again to Fig. 12D, operation 1004 may include operation 1228, which may appear in conjunction with operation 1226, operation 1228 depicting transmitting the updated request for particular image data to the image sensor array that includes more than one image sensor and that is configured to capture the scene and retain the subset of the scene that includes the request for the particular image of the scene. For example, Fig. 7, e.g., Fig. 7D, shows request for updated particular image data transmitting to the image sensor array module 728 transmitting the updated request for particular image data (e.g., for image data of the eagle at the animal watering hole) to the image sensor array that includes more than one image sensor and that is configured to capture the scene (e.g., the animal watering hole) and retain the subset of the scene that includes the request for the particular image of the scene.

[00221] Referring again to Fig. 12D, operation 1226 may include operation 1230 depicting modifying the request for the particular image into an updated request for particular image data that identifies a portion of the image as targeted for updating. For example, Fig. 7, e.g., Fig. 7D, shows request for particular image data modifying into updated request for

particular image data that identifies a portion of the image data as update-targeted module 730 modifying the request for the particular image (e.g., a request for an image of a bird circling an oasis) into an updated request for particular image data that identifies a portion of the image as targeted for updating (e.g., using the area around the bird, or calculating the bird's flight path, to modify the request for the particular image to include more data around the bird, so that more of the bird may be captured and cached locally, or displayed as needed).

[00222] Referring again to Fig. 12D, operation 1230 may include operation 1232 depicting modifying the request for the particular image, which is a request for an image of a football player in a football game, into an updated request for particular image data that identifies the portion of the image as targeted for updating as the portion of the image that includes pixels that represent three spatial feet in all directions from the football player in the football game. For example, Fig. 7, e.g., Fig. 7D, shows request for particular image data modifying into updated request for particular image data that identifies a portion of the image data as update-targeted module 732 modifying the request for the particular image, which is a request for an image of a football player in a football game, into an updated request for particular image data that identifies the portion of the image as targeted for updating as the portion of the image that includes pixels that represent three spatial feet in all directions from the football player in the football game.

[00223] Referring again to Fig. 12D, operation 1230 may include operation 1234 depicting modifying the request for the particular image, which is a request for an image of a football player in a football game, into an updated request for particular image data that identifies the portion of the image as targeted for updating as the portion of the image that includes pixels that represent three spatial feet in all directions from the football player in the football game based on an algorithm that identified that portion of the image as most likely to have changed since a previous image. For example, Fig. 7, e.g., Fig. 7D, shows request for particular image data modifying into updated request for particular image data that identifies a portion of the image data as update-targeted based on an applied algorithm module 734 modifying the request for the particular image, which is a request for an image of a football player in a football game, into an updated request for particular image data that identifies the portion of the image as targeted for updating as the portion of the image that

includes pixels that represent three spatial feet in all directions from the football player in the football game based on an algorithm that identified that portion of the image as most likely to have changed since a previous image.
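
As a purely illustrative sketch of the update-targeting described above, the following shows how a request box around a tracked subject (e.g., the football player) might be widened by a fixed spatial margin, such as three feet in all directions, converted to pixels. The pixels_per_foot scale factor, the tuple box representation, and the clamping to scene bounds are assumptions introduced for the example.

    def expand_by_margin(box, margin_feet, pixels_per_foot, scene_w, scene_h):
        """Widen a (left, top, right, bottom) pixel box by margin_feet in all
        directions, clamped to the scene bounds (illustrative sketch)."""
        margin_px = int(round(margin_feet * pixels_per_foot))
        left, top, right, bottom = box
        return (max(0, left - margin_px),
                max(0, top - margin_px),
                min(scene_w, right + margin_px),
                min(scene_h, bottom + margin_px))

    # e.g., widen a hypothetical request around a tracked player by three feet
    updated_box = expand_by_margin((400, 300, 480, 420), 3, 24.0, 10000, 10000)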

[00224] Referring now to Fig. 12E, operation 1226 may include operation 1236 depicting modifying the request for the particular image into an updated request for particular image data that identifies the portion of the image as targeted for updating based on one or more previously received images. For example, Fig. 7, e.g., Fig. 7E, shows request for particular image data modifying into updated request for particular image data based on one or more previously received images module 736 modifying the request for the particular image into an updated request for particular image data that identifies the portion of the image as targeted for updating based on one or more previously received images (e.g., when the request for the particular image arrives and is a request for a particular street, the request is modified to include only those parts of the street that might have changed, e.g., a building has not changed, but the area around the hot dog stand might have; in another embodiment, this could be extended to cars that have been parked in the same spot for more than a week, etc., in order to reduce the amount of data that must be requested by eliminating image data that is already stored locally and is not likely to have changed).

[00225] Referring again to Fig. 12E, operation 1236 may include operation 1238 depicting comparing one or more previously received images that are determined to be similar to the particular image. For example, Fig. 7, e.g., Fig. 7E, shows particular image data request to previous image data that contains one or more previously received images determined to be similar to the previous image data comparing module 738 comparing one or more previously received images (e.g., images of the same street corner as what is currently being requested) that are determined (e.g., through analysis of the image and/or the image properties, e.g., size, resolution, geolocation, color distribution, hue, saturation, etc.) to be similar to the particular image (e.g., the image of the street corner).

[00226] Referring again to Fig. 12E, operation 1236 may include operation 1240, which may appear in conjunction with operation 1238, operation 1240 depicting modifying the request for the particular image into the updated request for particular image data that identifies the portion of the image as targeted for updating based on the compared one or

more previously received images. For example, Fig. 7, e.g., Fig. 7E, shows particular image data request modifying based on compared previous image data module 740 modifying the request for the particular image into the updated request for the particular image data that identifies the portion of the image as targeted for updating (e.g., the portion that is likely to have changed) based on the compared one or more previously received images (e.g., if a portion of the image has not changed in the last, e.g., five, received images, then it can be decided that the portion does not need updating; in other embodiments, the threshold may be as few as two unchanged images, or as many as one thousand, or any countable number depending on implementation; also, in some embodiments, the threshold may depend on conditions, e.g., if the bandwidth is lower, there may be a more aggressive determination of portions that are not likely to have changed).

[00227] Referring again to Fig. 12E, operation 1238 may include operation 1242 depicting comparing one or more previously received images that are determined to be similar to the particular image to identify a portion of the particular image as targeted for updating. For example, Fig. 7, e.g., Fig. 7E, shows particular image data request to previous image data that contains one or more previously received images determined to be similar to the previous image data comparing to identify an update-targeted portion of the particular image data module 742 comparing one or more previously received images that are determined to be similar to the particular image to identify a portion of the particular image as targeted for updating (e.g., a portion of the image, e.g., of the watering hole, where various animals have frequented, may need updating).

[00228] Referring again to Fig. 12E, operation 1242 may include operation 1244 depicting comparing a first previously received image with a second previously received image to determine a changed portion of the first previously received image that is different than a portion of the second previously received image. For example, Fig. 7, e.g., Fig. 7E, shows first previously received image data with second previously received image data and request for particular image data delta determining module 744 comparing a first previously received image with a second previously received image to determine a changed portion (e.g., on a particular image of a street corner, where a hot dog stand used to be before the owner moved on) of the first previously received image that is different than a portion of the second previously received image.

[00229] Referring again to Fig. 12E, operation 1242 may include operation 1246, which may appear in conjunction with operation 1244, operation 1246 depicting identifying the portion of the particular image that corresponds to the changed portion. For example, Fig. 7, e.g., Fig. 7E, shows particular image data request portion that corresponds to determined delta identifying module 746 identifying the portion of the particular image (e.g., a portion of the street view image where the hot dog vendor has left) that corresponds to the changed portion (e.g., the portion identified in previous images as where the hot dog vendor is moving).
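
One possible mechanism for the comparisons described in operations 1238 through 1246 is a tile-by-tile difference of two previously received frames; tiles whose difference exceeds a threshold become the update-targeted portions. The sketch below assumes NumPy, equally sized grayscale frames, and an arbitrary 64-pixel tile size and threshold; none of these choices is mandated by the embodiments.

    import numpy as np

    def changed_tiles(prev: np.ndarray, curr: np.ndarray,
                      tile: int = 64, threshold: float = 8.0):
        """Compare two equally sized grayscale frames tile by tile and return
        the (row, col) indices of tiles whose mean absolute difference exceeds
        the threshold; those tiles are candidates for update-targeted requests."""
        diff = np.abs(prev.astype(np.int16) - curr.astype(np.int16))
        tiles = []
        for r in range(0, diff.shape[0], tile):
            for c in range(0, diff.shape[1], tile):
                if diff[r:r + tile, c:c + tile].mean() > threshold:
                    tiles.append((r // tile, c // tile))
        return tiles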

[00230] Referring now to Fig. 12F, operation 1004 may include operation 1248 depicting generating an expanded request for the particular image. For example, Fig. 7, e.g., Fig. 7E, shows expanded request for particular image data generating module 748 generating an expanded (e.g., a request for more) request for the particular image.

[00231] Referring again to Fig. 12F, operation 1004 may include operation 1250, which may appear in conjunction with operation 1248, operation 1250 depicting transmitting the generated expanded request for the particular image to the image sensor array that includes more than one image sensor and that is configured to capture the scene and retain the subset of the scene that includes the request for the particular image of the scene. For example, Fig. 7, e.g., Fig. 7F, shows expanded request for particular image data transmitting to the image sensor array module 750 configured to transmit the request to the image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data, said module 750 transmitting the generated expanded request for the particular image to the image sensor array that includes more than one image sensor and that is configured to capture the scene and retain the subset of the scene that includes the request for the particular image of the scene.

[00232] Referring again to Fig. 12F, operation 1248 may include operation 1252 depicting generating an expanded request for the particular image that includes the request for the particular image and a request for image data that borders the particular image. For example, Fig. 7, e.g., Fig. 7F, shows expanded request for particular image data that includes the request for particular image data and border image data that borders the particular image data generating module 752 generating an expanded request for the

particular image (e.g., an image of a hallway in a museum as part of a virtual tourism exhibit) and a request for image data that borders the particular image (e.g., the image data that is spatially located in the scene near the request for the particular image in the scene).

[00233] Referring again to Fig. 12F, operation 1252 may include operation 1254 depicting generating the expanded request for the particular image that includes the request for the particular image and the request for image data that borders the particular image on all four sides of the particular image. For example, Fig. 7, e.g., Fig. 7F, shows expanded request for particular image data that includes the request for particular image data and border image data that borders the particular image data on all four sides generating module 754 generating the expanded request for the particular image that includes the request for the particular image (e.g., a request for a particular view in an augmented reality setting of a forest path) and the request for image data that borders (e.g., is spatially adjacent to, in the scene) the particular image on all four sides of the particular image (e.g., the request for the particular view in the augmented reality setting of a forest path).
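
As a minimal sketch of an expanded request with border data on all four sides, the following builds the core request plus four clamped border strips. The strip layout (side strips plus full-width top and bottom strips) and the tuple representation are illustrative assumptions, not a prescribed structure.

    def request_with_borders(box, border_px, scene_w, scene_h):
        """Return the requested pixel box plus four border boxes that surround
        it on all four sides, each clamped to the scene bounds."""
        l, t, r, b = box
        borders = [
            (max(0, l - border_px), t, l, b),                            # left strip
            (r, t, min(scene_w, r + border_px), b),                      # right strip
            (max(0, l - border_px), max(0, t - border_px),
             min(scene_w, r + border_px), t),                            # top strip
            (max(0, l - border_px), b,
             min(scene_w, r + border_px), min(scene_h, b + border_px)),  # bottom strip
        ]
        # drop any strip that collapsed to zero area at a scene edge
        return box, [s for s in borders if s[0] < s[2] and s[1] < s[3]]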

[00234] Referring again to Fig. 12F, operation 1252 may include operation 1256 depicting determining a projected next side image that is an image that borders the particular image. For example, Fig. 7, e.g., Fig. 7F, shows projected next side image data that is image data corresponding to an image that borders the particular image of the particular image data determining module 756 determining a projected next side image (e.g., an image that is determined, e.g., projected to be the next particular image requested) that is an image that borders the particular image (e.g., is spatially located next to the particular image in the scene). An example of this is shown as anticipated next field of view 582C in Fig. 5C.

[00235] Referring again to Fig. 12F, operation 1252 may include operation 1258, which may appear in conjunction with operation 1256, operation 1258 depicting generating the expanded request for the particular image that includes the request for the particular image and a request for the projected next side image. For example, Fig. 7, e.g., Fig. 7F, shows expanded request for particular image data that includes the request for particular image data and next side image data generating module 758 generating the expanded request for the particular image that includes the request for the particular image and a request for the projected next side image (e.g., an image that is determined, e.g., projected to be the next

particular image requested) that is an image that borders the particular image (e.g., is spatially located next to the particular image in the scene). An example of this is shown as anticipated next field of view 582C in Fig. 5C.

[00236] Referring again to Fig. 12F, operation 1256 may include operation 1260 depicting determining a projected next side image based on a direction in which a device associated with the requestor is moving. For example, Fig. 7, e.g., Fig. 7F, shows projected next side image data that is image data corresponding to an image that borders the particular image of the particular image data determining at least partially based on a detected motion of the device associated with the requestor module 760 determining a projected next side image based on a direction in which a device (e.g., a virtual reality helmet) associated with (e.g., being worn by) the requestor is moving (e.g., if the helmet is moving to the left, then the next requested image will be the next view from the left).

[00237] Referring again to Fig. 12F, operation 1260 may include operation 1262 depicting determining the projected next side image based on the direction that a head of the requestor is turning while the requestor is wearing a virtual reality device. For example, Fig. 7, e.g., Fig. 7F, shows projected next side image data that is image data corresponding to an image that borders the particular image of the particular image data determining at least partially based on a detected head turn of the requestor that wears the device associated with the requestor module 762 determining the projected next side image based on the direction that a head of the requestor is turning while the requestor is wearing a virtual reality device (e.g., a headset).
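
A projected next side image of the kind described in operations 1260 and 1262 could, under one set of assumptions, be derived from the angular velocity of a head-worn device. The sketch below assumes angular rates in radians per second and a px_per_radian factor mapping head rotation to scene pixels; both are hypothetical parameters introduced for illustration.

    def project_next_view(box, yaw_rate, pitch_rate, px_per_radian, lookahead_s):
        """Shift the current view box in the direction the requestor's head is
        turning, yielding the field of view projected lookahead_s seconds from
        now; the shifted box can be pre-requested as the next side image."""
        dx = int(yaw_rate * px_per_radian * lookahead_s)
        dy = int(pitch_rate * px_per_radian * lookahead_s)
        l, t, r, b = box
        return (l + dx, t + dy, r + dx, b + dy)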

[00238] Referring now to Fig. 12G, operation 1252 may include operation 1264 depicting generating an expanded request for the particular image that includes the request for the particular image, a request for border image data that borders the particular image on each side, and a request for secondary border image data that borders the border image data. For example, Fig. 7, e.g., Fig. 7F, shows expanded request for particular image data that includes the request for particular image data, first border image data that borders the particular image data, and second border image data that borders the first border image data generating module 764 generating an expanded request for the particular image that includes the request for the particular image (e.g., initial request 592 from Fig. 5D), a

request for border image data that borders the particular image on each side (e.g., expanded request 594 from Fig. 5D), and a request for secondary border image data that borders the border image data (e.g., a further expanded request 596 from Fig. 5D).

[00239] Referring again to Fig. 12G, operation 1264 may include operation 1266 depicting generating the expanded request for the particular image that includes the request for the particular image at a first resolution, the request for the border image data at a second resolution less than the first resolution, and the request for the secondary border image data at a third resolution less than or equal to the second resolution. For example, Fig. 7, e.g., Fig. 7G, shows expanded request for particular image data that includes the request for particular image data, first border image data that borders the particular image data, and second border image data that borders the first border image data generating module 766 configured to generate the expanded request for the particular image data that includes the request for the particular image data at a first resolution, the request for the first border image data at a second resolution less than the first resolution, and the request for the second border image data at a third resolution less than or equal to the second resolution, said module 766 generating the expanded request for the particular image that includes the request for the particular image at a first resolution (e.g., the initial request 592 from Fig. 5D), the request for the border image data at a second resolution less than the first resolution (e.g., the expanded request 594 from Fig. 5D), and the request for the secondary border image data (e.g., the further expanded request 596) at a third resolution less than or equal to the second resolution.
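
The tiered-resolution request of operation 1266 might be expressed as in the following sketch, which grows the core box into two concentric rings and attaches a decreasing scale factor to each. The specific factors of 1.0, 0.5, and 0.25 are illustrative; the embodiments require only that each successive tier be at a resolution no greater than the one before it.

    def tiered_request(core_box, border_px, scene_w, scene_h):
        """Build a three-tier request: the particular image at full resolution,
        a first border ring at lower resolution, and a second border ring at a
        resolution lower still (scale factors are illustrative)."""
        def grow(box, px):
            l, t, r, b = box
            return (max(0, l - px), max(0, t - px),
                    min(scene_w, r + px), min(scene_h, b + px))
        first_ring = grow(core_box, border_px)
        second_ring = grow(first_ring, border_px)
        return [
            {"region": core_box, "scale": 1.0},      # requested image, full resolution
            {"region": first_ring, "scale": 0.5},    # border data, lower resolution
            {"region": second_ring, "scale": 0.25},  # secondary border, lower still
        ]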

[00240] Figs. 13A-13C depict various implementations of operation 1006, depicting receiving only the particular image from the image sensor array, wherein the particular image represents a subset of the scene, and wherein a size characteristic of the particular image is at least partially based on a property of a requestor, according to embodiments. Referring now to Fig. 13A, operation 1006 may include operation 1302 depicting receiving only the particular image from the image sensor array, wherein the particular image represents fewer pixels than the scene. For example, Fig. 8, e.g., Fig. 8A, shows particular image data from the image sensor array exclusive receiving module 802 configured to receive only the particular image data that represents fewer pixels than the scene from the image sensor array, said module 802 receiving only (e.g., at a particular time, or at a

particular resolution or rate, no other data is received, although this does not exclude other data being received at other times or at other resolutions or in other formats) the particular image from the image sensor array, wherein the particular image (e.g., an image of a lion at a watering hole) represents fewer pixels than the scene (e.g., the watering hole).

[00241] Referring again to Fig. 13A, operation 1006 may include operation 1304 depicting receiving only the particular image from the image sensor array, wherein the particular image represents a smaller geographic area than a geographic area represented by the scene. For example, Fig. 8, e.g., Fig. 8A, shows particular image data from the image sensor array exclusive receiving module 804 configured to receive only the particular image data that represents a smaller geographic area than the scene from the image sensor array, said module 804 receiving only the particular image (e.g., a corner view of Old Faithful the geyser) from the image sensor array, wherein the particular image represents a smaller geographic area (e.g., an image that includes an area twenty feet by twenty feet) than a geographic area represented by the scene (e.g., data that includes an area five thousand feet by five thousand feet).

[00242] Referring again to Fig. 13A, operation 1006 may include operation 1306 depicting receiving only the particular image from a remote server that received a portion of the scene, wherein the remote server received the portion of the scene that included the request for the particular image and a second request for a second particular image that is at least partially different than the first particular image. For example, Fig. 8, e.g., Fig. 8A, shows particular image data from the image sensor array exclusive receiving from a remote server module 806 receiving only the particular image from a remote server (e.g., a server, e.g., server 230, as described in various embodiments, that may act as a communications intermediary between the image sensor array and a device of the requestor, and which may, in some embodiments, perform processing on the request for the particular image, or on the particular image) that received a portion of the scene (e.g., the remote server may have received a portion of the scene that is the same as the particular image, or it may have received a portion of the scene that includes the particular image and other images that were requested at the same time as the particular image), wherein the remote server received the portion of the scene that included the request for the particular image (e.g., an image of a lion at the watering hole) and a second request for a second particular image

(e.g., an image of an eagle at the watering hole) that is at least partially different than the first particular image.

[00243] Referring again to Fig. 13A, operation 1006 may include operation 1308 depicting receiving only the particular image from the image sensor array, wherein the particular image represents the subset of the scene, and wherein the image sensor array discarded portions of the scene other than the particular image. For example, Fig. 8, e.g., Fig. 8A, shows particular image data from the image sensor array exclusive receiving module 808 receiving only the particular image from the image sensor array, wherein the particular image represents the subset of the scene, said module 808 receiving only the particular image from the image sensor array, wherein the particular image represents the subset of the scene, and wherein the image sensor array discarded (e.g., the data may not be stored, or, in an embodiment, may be stored, at least temporarily, but is not stored in a place where overwriting will be prevented, as in a persistent memory) portions of the scene other than the particular image (e.g., pixels that were not part of the request for the particular image, or part of another request, may be discarded, that is, no steps may be taken to prevent their overwriting/deletion).

[00244] Referring again to Fig. 13A, operation 1006 may include operation 1310 depicting receiving only the particular image data from the image sensor array, wherein data from the scene other than the particular image data is stored at the image sensor array. For example, Fig. 8, e.g., Fig. 8A, shows particular image data from the image sensor array exclusive receiving module 810 receiving only the particular image data (e.g., an image of Old Faithful near the eastern corner) from the image sensor array, wherein data from the scene other than the particular image data (e.g., data that was not part of the request for the particular image) is stored at the image sensor array (e.g., in an embodiment, the image sensor array may have large storage to keep the data that was not requested, because storage costs may be cheap relative to bandwidth costs; thus that data is kept locally at the image sensor array where storage is inexpensive, and not transmitted to the requestor, either directly or via the remote server).

[00245] Referring again to Fig. 13A, operation 1006 may include operation 1312 depicting receiving only the particular image data from a remote server configured to communicate

with the image sensor array, wherein a first portion of the scene data other than the particular image is stored at the image sensor array, and a second portion of the scene data other than the particular image is stored at the remote server. For example, Fig. 8, e.g., Fig. 8A, shows particular image data from the image sensor array exclusive receiving from a remote server module 812 configured to receive only the particular image data from a remote server deployed to communicate with the image sensor array, wherein a first portion of the scene data other than the particular image data is stored at the image sensor array and a second portion of the scene data other than the particular image data is stored at the remote server, said module 812 receiving only the particular image data from a remote server configured to communicate with the image sensor array, wherein a first portion of the scene data other than the particular image is stored at the image sensor array, and a second portion of the scene data other than the particular image is stored at the remote server. For example, in an embodiment, some of the data gathered by the image sensor array but not requested by the requestor may be deemed to be "useful" by the remote server, e.g., for caching purposes, analysis purposes, or other purposes. Thus, a second portion of the scene data, e.g., non-requested data, may be transmitted to the remote server and stored there, and a first portion of the scene data, e.g., non-requested data, may be stored at the image sensor array. In an embodiment, the transmission to the remote server may take a lower priority than the transmission to the requestor (a transmission which may include the remote server), or may be transmitted at a different time than the transmission to the requestor, or transmitted with different specifications (e.g., different compression, different codec, or different resolution).

[00246] Referring now to Fig. 13B, operation 1006 may include operation 1314 depicting receiving only the particular image from the image sensor array, wherein the particular image represents the subset of the scene, and wherein the size characteristic of the particular image is at least partially based on a property of a requestor device that is associated with the requestor. For example, Fig. 8, e.g., Fig. 8B, shows particular image data from the image sensor array exclusive receiving module 814 configured to receive only the particular image data from the image sensor array, wherein the particular image represents a subset of the scene and wherein a size characteristic of the particular image data is at least partially based on a feature of a requestor device that is associated with the requestor, said module 814 receiving only the particular image from the image sensor array,

wherein the particular image represents the subset of the scene, and wherein the size characteristic of the particular image is at least partially based on a property (e.g., an available bandwidth to the requestor device) of the requestor device (e.g., a laptop computer device) that is associated with the requestor (e.g., that is operated by the requestor).

[00247] Referring again to Fig. 13B, operation 1314 may include operation 1316 depicting receiving only the particular image from the image sensor array, wherein the particular image represents the subset of the scene, and wherein the size characteristic of the particular image is at least partially based on a property of the requestor device that is configured to store data about the requestor. For example, Fig. 8, e.g., Fig. 8B, shows particular image data from the image sensor array exclusive receiving module 816 configured to receive only the particular image data from the image sensor array, wherein the size characteristic of the particular image data is at least partially based on a feature of a requestor device that is deployed to store data about the requestor, said module 816 receiving only the particular image from the image sensor array, wherein the particular image represents the subset of the scene, and wherein the size characteristic of the particular image is at least partially based on a property of the requestor device (e.g., a cellular smartphone device that is linked to a 4G LTE network) that is configured to store data about the requestor.

[00248] Referring again to Fig. 13B, operation 1314 may include operation 1318 depicting receiving only the particular image from the image sensor array, wherein the particular image represents a subset of the scene, and wherein a data size of particular image data of the particular image is at least partially based on an available bandwidth of the connection between the image sensor array and the device associated with the requestor. For example, Fig. 8, e.g., Fig. 8B, shows particular image data from the image sensor array exclusive receiving module 818 configured to receive only the particular image data from the image sensor array, wherein the size characteristic of the particular image data is at least partially based on a bandwidth available to the requestor device, said module 818 receiving only the particular image from the image sensor array, wherein the particular image (e.g., an image of a lion at a watering hole) represents a subset of the scene (e.g., an image of the watering hole), and wherein a data size (e.g., measured in an electronic measure, e.g., bytes) of

particular image data of the particular image is at least partially based on an available bandwidth (e.g., a speed, reliability, or any other factor involving the transmission of data between two devices) of the connection between the image sensor array (e.g., which may use the remote server as an intermediary) and the device associated with the requestor (e.g., a tablet device, e.g., an Apple iPad).

[00249] Referring again to Fig. 13B, operation 1314 may include operation 1320 depicting receiving only the particular image from the image sensor array, wherein the data size of particular image data of the particular image is at least partially based on an available bandwidth of a connection between the device associated with the requestor and a remote server configured to communicate with the image sensor array. For example, Fig. 8, e.g., Fig. 8B, shows particular image data from the image sensor array exclusive receiving module 820 configured to receive only the particular image data from the image sensor array, wherein the size characteristic of the particular image data is at least partially based on a bandwidth between the requestor device and a remote server that communicates with the image sensor array, said module 820 receiving only the particular image from the image sensor array, wherein the particular image (e.g., an image of a car crossing a highway bridge) represents a subset of the scene (e.g., an image of the bridge), and wherein a data size (e.g., measured in an electronic measure, e.g., bytes) of particular image data of the particular image is at least partially based on an available bandwidth (e.g., a speed, reliability, or any other factor involving the transmission of data between two devices) of the connection between the device associated with the requestor (e.g., a tablet device) and a remote server configured to communicate with the image sensor array.
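
One way to make the data size of the particular image track available bandwidth, as in operations 1318 and 1320, is to pick a linear down-scale factor so the transfer fits a per-frame time budget. The budget_s parameter and the bytes-per-pixel accounting below are assumptions made for the sketch, not requirements of the embodiments.

    def pick_scale_for_bandwidth(width, height, bytes_per_pixel,
                                 bandwidth_bytes_per_s, budget_s=0.1):
        """Choose a linear down-scale factor so the particular image data fits
        within a per-frame transfer budget on the measured link."""
        budget_bytes = bandwidth_bytes_per_s * budget_s
        full_bytes = width * height * bytes_per_pixel
        if full_bytes <= budget_bytes:
            return 1.0
        # pixel count scales with the square of the linear scale factor
        return (budget_bytes / full_bytes) ** 0.5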

[00250] Referring now to Fig. 13C, operation 1006 may include operation 1322 depicting receiving only the particular image from the image sensor array, wherein the particular image represents a subset of the scene, and wherein a data size of particular image data of the particular image is at least partially based on a screen size of a requestor device associated with the requestor. For example, Fig. 8, e.g., Fig. 8C, shows particular image data from the image sensor array exclusive receiving module 822 configured to receive only the particular image data from the image sensor array, wherein the particular image represents a subset of the scene and wherein a size characteristic of the particular image data is at least partially based on a screen size of a requestor device that is associated with

the requestor, said module 822 receiving only the particular image from the image sensor array, wherein the particular image (e.g., an image of a car on a highway bridge) represents a subset of the scene (e.g., the highway bridge), and wherein a data size (e.g., a size in pixels, megabytes, or any other measure, compressed or uncompressed, with allowances made for types of transmission and parallelization) of particular image data of the particular image is at least partially based on a screen size (e.g., how large the screen is) of the requestor device (e.g., a television) associated with the requestor (e.g., the person who requested the image of the car). For example, it is conventionally assumed that, for a given screen size and distance of eye from the screen, there is a resolution past which the human eye cannot resolve any additional data. Accordingly, in an embodiment, the screen size of the device, coupled with an estimate of the average viewing distance, may serve to bound the resolution of the image that is transmitted, to avoid transmitting more data than can be used.

[00251] Referring again to Fig. 13C, operation 1006 may include operation 1324 depicting receiving only the particular image from the image sensor array, wherein the particular image represents a subset of the scene, and wherein a data size of particular image data of the particular image is at least partially based on a maximum resolution of the requestor device associated with the requestor. For example, Fig. 8, e.g., Fig. 8C, shows particular image data from the image sensor array exclusive receiving module 824 configured to receive only the particular image data from the image sensor array, wherein the particular image represents a subset of the scene and wherein a size characteristic of the particular image data is at least partially based on a maximum resolution of a requestor device that is associated with the requestor, said module 824 receiving only the particular image from the image sensor array, wherein the particular image represents a subset of the scene, and wherein a data size of particular image data of the particular image is at least partially based on a maximum resolution of the requestor device associated with the requestor. For example, a higher-resolution version of the image than what the requestor device can display will not be transmitted, because that would be a waste of data that the device cannot use, in certain embodiments (e.g., assuming no further analysis is done on the data). Thus, the maximum resolution of the requestor device can set the size of the transmission by bounding the resolution of the received particular image. For example, in an embodiment, if a smartphone device with a resolution of 800x600 and a computer with a screen

resolution of 2560x1900 each request the same image, the smartphone device will get a much smaller, downgraded version of the image than the computer, because the smartphone would have to downgrade the larger image to display it anyway.
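
That bounding behavior can be sketched directly: scale the requested dimensions down, preserving aspect ratio, so they never exceed what the requestor device can display. The function and the example devices below are hypothetical illustrations of the embodiment described above.

    def clamp_to_device(req_w, req_h, dev_w, dev_h):
        """Never request more pixels than the device can display: scale the
        requested dimensions down, preserving aspect ratio, to fit dev_w x dev_h."""
        scale = min(1.0, dev_w / req_w, dev_h / req_h)
        return int(req_w * scale), int(req_h * scale)

    # e.g., an 800x600 phone and a 2560x1900 desktop requesting the same 4000x3000 view
    phone_size = clamp_to_device(4000, 3000, 800, 600)       # -> (800, 600)
    desktop_size = clamp_to_device(4000, 3000, 2560, 1900)   # -> (2533, 1900)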

[00252] Figs. 14A-14C depict various implementations of operation 1008, depicting presenting the received particular image to the requestor, according to embodiments.

Referring now to Fig. 14A, operation 1008 may include operation 1402 depicting presenting the received particular image on a viewfinder of a device associated with the requestor. For example, Fig. 9, e.g., Fig. 9A, shows received particular image data presenting on a device viewfinder module 902 presenting the received particular image on a viewfinder (e.g., a portion of a device capable of showing images, whether optically or digitally, in a proximate area or far away) of a device associated with the requestor (e.g., a smartphone device).

[00253] Referring again to Fig. 14A, operation 1402 may include operation 1404 depicting displaying the received particular image on the viewfinder of the device associated with the requestor, wherein the device associated with the requestor is one or more of a cellular telephone device, a tablet device, a smartphone device, a laptop computer, a desktop computer, a television, and a wearable computer. For example, Fig. 9, e.g., Fig. 9A, shows received particular image data presenting on a particular device viewfinder module 904 displaying the received particular image on the viewfinder of the device associated with the requestor, wherein the device associated with the requestor is one or more of a cellular telephone device, a tablet device, a smartphone device, a laptop computer, a desktop computer, a television, and a wearable computer.

[00254] Referring again to Fig. 14A, operation 1008 may include operation 1406 depicting presenting the received particular image that is an image of a football player in a football game to the requestor that is a spectator of the game that watches the football game on an Internet-enabled television. For example, Fig. 9, e.g., Fig. 9A, shows received particular image data presenting module 906 presenting the received particular image that is an image of a football player in a football game to the requestor that is a spectator of the game that watches the football game on an Internet-enabled television (e.g., the requestor selects a

particular football player that the requestor wants to focus in on, e.g., the quarterback for the Washington DC team, using the remote or giving an oral command to the television).

[00255] Referring again to Fig. 14A, operation 1008 may include operation 1408 depicting presenting the received particular image that is an image of an eagle at an animal watering hole to the requestor that is a naturalist that monitors the animal watering hole from a screen on their smart watch. For example, Fig. 9, e.g., Fig. 9A, shows received particular image data presenting module 908 presenting the received particular image that is an image of an eagle at an animal watering hole to the requestor that is a naturalist that monitors the animal watering hole from a screen on their smart watch.

[00256] Referring again to Fig. 14A, operation 1008 may include operation 1410 depicting modifying the received particular image into a modified particular image. For example, Fig. 9, e.g., Fig. 9A, shows received particular image data modifying into modified particular image data module 910 modifying the received particular image into a modified particular image.

[00257] Referring again to Fig. 14A, operation 1008 may include operation 1412, which may appear in conjunction with operation 1410, operation 1412 depicting presenting the modified received particular image to the requestor. For example, Fig. 9, e.g., Fig. 9A, shows modified particular image data presenting module 912 presenting the modified received particular image to the requestor.

[00258] Referring again to Fig. 14A, operation 1410 may include operation 1414 depicting modifying the received particular image into a modified particular image, wherein the received particular image includes only portions of the scene that have changed since the last time the particular image was displayed. For example, Fig. 9, e.g., Fig. 9A, shows received particular image data that includes only changed portions of the scene modifying into modified particular image data module 914 modifying the received particular image into a modified particular image, wherein the received particular image includes only portions of the scene that have changed since the last time the particular image was displayed (e.g., where persons have moved, for example, on a soccer field, the entire field does not need to be retransmitted each time).

[00259] Referring again to Fig. 14A, operation 1414 may include operation 1416 depicting modifying the received particular image into a modified particular image, wherein one or more portions of the received particular image that have not changed are updated with existing image data. For example, Fig. 9, e.g., Fig. 9A, shows received particular image

data that includes only changed portions of the scene modifying into modified particular image data through addition of unchanged portions of existent image data module 916 modifying the received particular image into a modified particular image, wherein one or more portions of the received particular image that have not changed (e.g., static portions of the image, e.g., rocks, streets, buildings) are updated with existing image data (e.g., image data taken from a previously captured image of the same spot, e.g., in a live street view setting, the roads and buildings do not change, so that data can be added at the local device rather than transmitted from the image sensor array, in an embodiment).
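
A delta update of the sort described in operations 1414 and 1416 might be applied as in the sketch below, which overwrites only the changed tiles of a locally cached frame with newly received tile data and keeps the locally stored pixels everywhere else. NumPy arrays, the tile size, and the dict keyed by tile indices are assumptions made for the example.

    import numpy as np

    def apply_delta(cached: np.ndarray, changed: dict, tile: int = 64) -> np.ndarray:
        """Overwrite only the changed tiles of a cached frame with newly
        received tile data; unchanged regions keep the cached pixels."""
        frame = cached.copy()
        for (row, col), data in changed.items():
            r, c = row * tile, col * tile
            frame[r:r + data.shape[0], c:c + data.shape[1]] = data
        return frame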

[00260] Referring now to Fig. 14B, operation 1008 may include operation 1418 depicting presenting a portion of the received particular image to the requestor. For example, Fig. 9, e.g., Fig. 9B, shows portion of received particular image data presenting module 918 presenting (e.g., displaying, e.g., displaying as part of an augmented reality device in which the person can take a "virtual tour" of the Egyptian pyramids of Giza) a portion of the received particular image to the requestor.

[00261] Referring again to Fig. 14B, operation 1418 may include operation 1420 depicting presenting a first portion of the received particular image to the requestor. For example, Fig. 9, e.g., Fig. 9B, shows first portion of the received particular image data presenting module 920 presenting a first portion (e.g., a portion that represents a person's field of view in the area they are currently looking at, e.g., for a virtual tourism application or for an augmented reality/virtual reality game) of the received particular image (e.g., an image of the interior of one of the Egyptian pyramids) to the requestor (e.g., a person wearing a virtual tourism helmet that displays virtual reality type images).

[00262] Referring again to Fig. 14B, operation 1418 may include operation 1422, which may appear in conjunction with operation 1420, operation 1422 depicting storing a second portion of the received particular image. For example, Fig. 9, e.g., Fig. 9B, shows second portion of the received particular image data storing module 922 storing a second portion of the received particular image (e.g., a portion that is just outside the field of view of the person who is wearing the virtual tourism helmet, but which might come into view if the person swings their head; so as to allow a seamless transition, this second portion that is outside the field of view is stored and cached for quick deployment if necessary).

[00263] Referring again to Fig. 14B, operation 1422 may include operation 1424 depicting storing a second portion of the received particular image that is adjacent to the first portion of the received particular image and is configured to be used as cached image data when the requestor requests an image corresponding to the second portion of the received particular image. For example, Fig. 9, e.g., Fig. 9B, shows second portion of the received particular image data that is adjacent to the first portion of the received particular image data and is configured to be used as cached image data storing module 924 storing a second portion of the received particular image that is adjacent to the first portion of the received particular image and is configured to be used as cached image data when the requestor requests an image corresponding to the second portion of the received particular image (e.g., a portion that is just outside the field of view of the person who is wearing the virtual tourism helmet, but which might come into view if the person swings their head; so as to allow a seamless transition, this second portion that is outside the field of view is stored and cached for quick deployment if necessary).

[00264] Referring again to Fig. 14B, operation 1422 may include operation 1426 depicting storing a second portion of the received particular image, wherein the second portion of the received particular image is received at a lower resolution than the first portion of the received particular image. For example, Fig. 9, e.g., Fig. 9B, shows second portion of the received particular image data that is adjacent to the first portion of the received particular image data and is received at a lower resolution than the first portion of the received particular image data storing module 926 storing a second portion of the received particular image (e.g., a portion that is determined by an algorithm to be likely to be the next requested image from the user, e.g., because it is next up in the user's field of view, or because it relates to something the user just looked at, for example), wherein the second portion of the received particular image is received at a lower resolution than the first portion of the received particular image.
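
The caching behavior described in operations 1422 through 1426 could be organized as in the following sketch, where a displayed portion is kept alongside lower-resolution neighbor portions keyed by direction, so that a head turn can be serviced from cache while the full-resolution request is in flight. The class structure and the direction keys are illustrative assumptions, not a prescribed design.

    class ViewCache:
        """Hold the displayed portion plus lower-resolution adjacent portions
        for anticipated views (structure and policy are illustrative)."""

        def __init__(self):
            self.current = None   # full-resolution portion being displayed
            self.adjacent = {}    # direction (e.g., "left") -> lower-res image

        def store(self, image, adjacent_by_direction):
            self.current = image
            self.adjacent = dict(adjacent_by_direction)

        def on_view_change(self, direction):
            # Return the cached low-resolution neighbor immediately, if any;
            # the caller then requests the full-resolution replacement.
            return self.adjacent.get(direction)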

[00265] Referring now to Fig. 14C, operation 1008 may include operation 1428 depicting transmitting the received particular image to a component configured to analyze the received particular image. For example, Fig. 9, e.g., Fig. 9C, shows received particular image data transmitting to a component module configured to transmit the received particular image data to a component deployed to analyze the received particular image 928 transmitting the received particular image (e.g., a set of images of a particularly busy on-ramp to an interstate) to a component (e.g., a traffic analysis computer that uses the captured images at various points on the roads to determine traffic patterns and bottlenecks) configured to analyze (e.g., determine how many cars are in the image, where the cars are going, how fast the cars are moving, etc.) the particular image (e.g., a set of images of cars on a busy on-ramp to the interstate).

[00266] Referring again to Fig. 14C, operation 1008 may include operation 1430 depicting transmitting the received particular image to a component configured to store the received particular image. For example, Fig. 9, e.g., Fig. 9C, shows received particular image data transmitting to a component module configured to transmit the received particular image data to a component deployed to store the received particular image 930 transmitting the received particular image (e.g., an image of an interior of a person's refrigerator) to a component (e.g., a home monitoring system that stores images of appliance interiors, among other things, to facilitate diet tracking, food ordering, etc., as part of a smart home system) configured to store the received particular image (e.g., the image of the interior of the person's refrigerator).
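
As a non-limiting illustrative aid for operations 1428 and 1430, the following Python sketch routes a received particular image either to an analysis component or to a storage component. The registry and the component names (traffic-analyzer, home-monitor) are invented for illustration and are not part of the disclosure.

    # Non-limiting sketch: route the received particular image either to an
    # analysis component (operation 1428) or to a storage component
    # (operation 1430). The registry and component names are assumptions.

    def analyze_image(image):
        print("analyzing", len(image), "bytes (e.g., counting cars)")

    def store_image(image):
        print("storing", len(image), "bytes (e.g., refrigerator interior)")

    COMPONENTS = {
        "traffic-analyzer": analyze_image,
        "home-monitor": store_image,
    }

    def transmit_to_component(image, component_id):
        COMPONENTS[component_id](image)

    transmit_to_component(b"\x00" * 2048, "traffic-analyzer")
    transmit_to_component(b"\x00" * 512, "home-monitor")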

[00267] Referring again to Fig. 14C, operation 1008 may include operation 1432 depicting presenting the received particular image to the requestor, wherein the requestor is a client. For example, Fig. 9, e.g., Fig. 9C, shows received particular image data presenting module configured to present the received particular image data to a client requestor 932 presenting the received particular image (e.g., the image of a lion drinking at the watering hole) to the requestor (e.g., a person watching the watering hole on their television, who has used their smart remote to select a box around the lion to indicate that they want to watch the lion), wherein the requestor is a client (e.g., the person watching their television).
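
As a non-limiting illustrative aid for the client-selection example above, the following Python sketch maps a box the viewer draws on a scaled-down display (e.g., with a smart remote) back to scene coordinates, so that only the selected region is requested. The function name and the example dimensions are invented for illustration.

    # Non-limiting sketch: convert a screen-space selection box into scene
    # coordinates so only the selected region (e.g., the lion) is requested.
    # Dimensions are illustrative assumptions.

    def screen_box_to_scene(box, screen_w, screen_h, scene_w, scene_h):
        x, y, w, h = box
        sx, sy = scene_w / screen_w, scene_h / screen_h
        return (int(x * sx), int(y * sy), int(w * sx), int(h * sy))

    # a 100x80 box drawn at (400, 300) on a 1920x1080 screen, mapped into a
    # 10000x5000-pixel scene:
    print(screen_box_to_scene((400, 300, 100, 80), 1920, 1080, 10000, 5000))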

[00268] Referring again to Fig. 14C, operation 1008 may include operation 1434 depicting presenting the received particular image to the requestor, wherein the requestor is a component of a device. For example, Fig. 9, e.g., Fig. 9C, shows received particular image data presenting module configured to present the received particular image data to a device component requestor 934 presenting (e.g., storing in a memory of) the received particular image (e.g., a security image of a person walking outside of a building) to the requestor (e.g., a subroutine that instructs the device to capture the faces of all persons leaving a specific building that is owned by the Federal Bureau of Investigation), wherein the requestor is a component (e.g., a subroutine, whether part of the requestor device or separate from the requestor device) of a device.

[00269] All of the above U.S. patents, U.S. patent application publications, U.S. patent applications, foreign patents, foreign patent applications and non-patent publications referred to in this specification and/or listed in any Application Data Sheet, are incorporated herein by reference, to the extent not inconsistent herewith.

[00270] The foregoing detailed description has set forth various embodiments of the devices and/or processes via the use of block diagrams, flowcharts, and/or examples. Insofar as such block diagrams, flowcharts, and/or examples contain one or more functions and/or operations, it will be understood by those within the art that each function and/or operation within such block diagrams, flowcharts, or examples can be implemented, individually and/or collectively, by a wide range of hardware, software (e.g., a high-level computer program serving as a hardware specification), firmware, or virtually any combination thereof, limited to patentable subject matter under 35 U.S.C. 101. In an embodiment, several portions of the subject matter described herein may be implemented via Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs), digital signal processors (DSPs), or other integrated formats. However, those skilled in the art will recognize that some aspects of the embodiments disclosed herein, in whole or in part, can be equivalently implemented in integrated circuits, as one or more computer programs running on one or more computers (e.g., as one or more programs running on one or more computer systems), as one or more programs running on one or more processors (e.g., as one or more programs running on one or more microprocessors), as firmware, or as virtually any combination thereof, limited to patentable subject matter under 35 U.S.C. 101, and that designing the circuitry and/or writing the code for the software (e.g., a high-level computer program serving as a hardware specification) and/or firmware would be well within the skill of one of skill in the art in light of this disclosure. In addition, those skilled in the art will appreciate that the mechanisms of the subject matter described herein are capable of being distributed as a program product in a variety of forms, and that an illustrative embodiment of the subject matter described herein applies regardless of the particular type of signal bearing medium used to actually carry out the distribution. Examples of a signal bearing medium include, but are not limited to, the following: a recordable type medium such as a floppy disk, a hard disk drive, a Compact Disc (CD), a Digital Video Disk (DVD), a digital tape, a computer memory, etc.; and a transmission type medium such as a digital and/or an analog communication medium (e.g., a fiber optic cable, a waveguide, a wired communications link, a wireless communication link (e.g., transmitter, receiver, transmission logic, reception logic, etc.), etc.).

[00271] While particular aspects of the present subject matter described herein have been shown and described, it will be apparent to those skilled in the art that, based upon the teachings herein, changes and modifications may be made without departing from the subject matter described herein and its broader aspects and, therefore, the appended claims are to encompass within their scope all such changes and modifications as are within the true spirit and scope of the subject matter described herein. It will be understood by those within the art that, in general, terms used herein, and especially in the appended claims (e.g., bodies of the appended claims) are generally intended as "open" terms (e.g., the term "including" should be interpreted as "including but not limited to," the term "having" should be interpreted as "having at least," the term "includes" should be interpreted as "includes but is not limited to," etc.).

[00272] It will be further understood by those within the art that if a specific number of an introduced claim recitation is intended, such an intent will be explicitly recited in the claim, and in the absence of such recitation no such intent is present. For example, as an aid to understanding, the following appended claims may contain usage of the introductory phrases "at least one" and "one or more" to introduce claim recitations. However, the use of such phrases should not be construed to imply that the introduction of a claim recitation by the indefinite articles "a" or "an" limits any particular claim containing such introduced claim recitation to claims containing only one such recitation, even when the same claim includes the introductory phrases "one or more" or "at least one" and indefinite articles such as "a" or "an" (e.g., "a" and/or "an" should typically be interpreted to mean "at least one" or "one or more"); the same holds true for the use of definite articles used to introduce claim recitations. In addition, even if a specific number of an introduced claim recitation is explicitly recited, those skilled in the art will recognize that such recitation should typically be interpreted to mean at least the recited number (e.g., the bare recitation of "two recitations," without other modifiers, typically means at least two recitations, or two or more recitations).

[00273] Furthermore, in those instances where a convention analogous to "at least one of A, B, and C, etc." is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B, and C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.). In those instances where a convention analogous to "at least one of A, B, or C, etc." is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B, or C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.). It will be further understood by those within the art that typically a disjunctive word and/or phrase presenting two or more alternative terms, whether in the description, claims, or drawings, should be understood to contemplate the possibilities of including one of the terms, either of the terms, or both terms unless context dictates otherwise. For example, the phrase "A or B" will be typically understood to include the possibilities of "A" or "B" or "A and B."

[00274] With respect to the appended claims, those skilled in the art will appreciate that recited operations therein may generally be performed in any order. Also, although various operational flows are presented in a sequence(s), it should be understood that the various operations may be performed in other orders than those which are illustrated, or may be performed concurrently. Examples of such alternate orderings may include overlapping, interleaved, interrupted, reordered, incremental, preparatory, supplemental, simultaneous, reverse, or other variant orderings, unless context dictates otherwise. Furthermore, terms like "responsive to," "related to," or other past-tense adjectives are generally not intended to exclude such variants, unless context dictates otherwise.

[00275] This application may make reference to one or more trademarks, e.g., a word, letter, symbol, or device adopted by one manufacturer or merchant and used to identify and/or distinguish his or her product from those of others. Trademark names used herein are set forth in such language that makes clear their identity, that distinguishes them from common descriptive nouns, that have fixed and definite meanings, or, in many if not all cases, are accompanied by other specific identification using terms not covered by trademark. In addition, trademark names used herein have meanings that are well-known and defined in the literature, or do not refer to products or compounds for which knowledge of one or more trade secrets is required in order to divine their meaning. All trademarks referenced in this application are the property of their respective owners, and the appearance of one or more trademarks in this application does not diminish or otherwise adversely affect the validity of the one or more trademarks. All trademarks, registered or unregistered, that appear in this application are assumed to include a proper trademark symbol, e.g., the circle R or bracketed capitalization (e.g., [trademark name]), even when such trademark symbol does not explicitly appear next to the trademark. To the extent a trademark is used in a descriptive manner to refer to a product or process, that trademark should be interpreted to represent the corresponding product or process as of the date of the filing of this patent application.

[00276] Throughout this application, the terms "in an embodiment," "in one embodiment," "in several embodiments," "in at least one embodiment," "in various embodiments," and the like, may be used. Each of these terms, and all such similar terms, should be construed as "in at least one embodiment, and possibly but not necessarily all embodiments," unless explicitly stated otherwise. Specifically, unless explicitly stated otherwise, the intent of phrases like these is to provide non-exclusive and non-limiting examples of implementations of the invention. The mere statement that one, some, or many embodiments include one or more things or have one or more features, does not imply that all embodiments include one or more things or have one or more features, but also does not imply that such embodiments must exist. It is a mere indicator of an example and should not be interpreted otherwise, unless explicitly stated as such.

[00277] Those skilled in the art will appreciate that the foregoing specific exemplary processes and/or devices and/or technologies are representative of more general processes and/or devices and/or technologies taught elsewhere herein, such as in the claims filed herewith and/or elsewhere in the present application.

[00278] STRUCTURED DISCLOSURE DIRECTED TOWARD ONE OF SKILL IN THE ART [START] Those skilled in the art will appreciate the indentations and cross-referencing of the following as instructive. In general, each immediately following numbered paragraph in this "Structured Disclosure Directed Toward One Of Skill In The Art Section" should be treated as preceded by the word "Clause," as can be seen by the Clauses that cross-reference against them, and each cross-referencing paragraph is to be understood as permissively incorporating its parent clause(s) by reference.

[00279] As used herein, and in particular in the following thing/operation disclosures, the word "comprising" can generally be interpreted as "including but not limited to":

1. A computationally-implemented thing/operation, comprising:

accepting input of a request for a particular image of a scene that is larger than the particular image;

transmitting the request for the particular image to an image sensor array that includes more than one image sensor and that is configured to capture the scene and retain a subset of the scene that includes the request for the particular image of the scene;

receiving only the particular image from the image sensor array, wherein the particular image represents a subset of the scene, and wherein a size characteristic of the particular image is at least partially based on a property of a requestor; and

presenting the received particular image to the requestor.

2. The computationally-implemented thing/operation of clause 1, wherein said accepting input of a request for a particular image of a scene that is larger than the particular image comprises:

receiving input of the request for the particular image of the scene that is larger than the particular image.

3. The computationally-implemented thing/operation of clause 1, wherein said accepting input of a request for a particular image of a scene that is larger than the particular image comprises:

receiving input from an automated component, of the request for the particular image of the scene that is larger than the particular image.

4. The computationally-implemented thing/operation of clause 3, wherein said receiving input from an automated component, of the request for the particular image of the scene that is larger than the particular image comprises:

receiving input from an image object tracking component, of the request for the particular image that contains a tracked image object present in the scene that is larger than the particular image.

5. The computationally-implemented thing/operation of clause 4, wherein said receiving input from an automated component, of the request for the particular image of the scene that is larger than the particular image comprises:

receiving input of the request for the particular image from the image object tracking component, of the requestor device that is associated with the requestor, wherein the particular image contains the tracked image object present in the scene that is larger than the particular image.

6. The computationally-implemented thing/operation of clause 4, wherein said receiving input from an image object tracking component, of the request for the particular image that contains a tracked image object present in the scene that is larger than the particular image comprises:

receiving input from a player tracking component, that is part of an Internet-enabled television configured to track football players, of the request for the particular image that is an image of a football game that contains a tracked image object that is a particular football player present in the scene that is larger than the particular image of the football game.
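
As a non-limiting illustrative aid for clauses 4-6, the following Python sketch shows how an object-tracking component might automatically form a request for a window centered on a tracked player's last known position. The function name and the window size are invented for illustration.

    # Non-limiting sketch: a tracking component issues the request
    # automatically, asking for a window around the tracked object's
    # last known scene position. The window size is an assumption.

    def tracking_request(track_x, track_y, window=400):
        half = window // 2
        return {"region": (track_x - half, track_y - half, window, window)}

    # e.g., a football player last seen at scene position (5200, 1800):
    print(tracking_request(5200, 1800))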

7. The computationally-implemented thing/operation of clause 1, wherein said accepting input of a request for a particular image of a scene that is larger than the particular image comprises:

accepting input of the request for the particular image of the scene that is larger than the particular image through an interface of a requestor device associated with the requestor.

8. The computationally-implemented thing/operation of clause 7, wherein said accepting input of the request for the particular image of the scene that is larger than the particular image through an interface of a requestor device associated with the requestor comprises:

receiving input of the request for the particular image of the scene that is larger than the particular image from the requestor through the interface of the requestor device that is configured to display at least a portion of the scene.

9. The computationally-implemented thing/operation of clause 8, wherein said receiving input of the request for the particular image of the scene that is larger than the particular image from the requestor through the interface of the requestor device that is configured to display at least a portion of the scene comprises:

receiving input of the request for the particular image of the scene that is larger than the particular image from the requestor through the interface of the requestor device that is configured to display at least the portion of the scene in a viewfinder.

10. The computationally-implemented thing/operation of clause 8, wherein said receiving input of the request for the particular image of the scene that is larger than the particular image from the requestor through the interface of the requestor device that is configured to display at least a portion of the scene comprises:

receiving input of the request for the particular image of the scene that is larger than the particular image from the requestor through the interface of the requestor device that is configured to display at least the portion of the scene, said request at least partially based on a view of the scene.

11. The computationally-implemented thing/operation of clause 8, wherein said receiving input of the request for the particular image of the scene that is larger than the particular image from the requestor through the interface of the requestor device that is configured to display at least a portion of the scene comprises:

receiving the request for particular image data of the scene from the requestor device that is configured to receive the selection of the particular image, wherein the requestor device is one or more of a smartphone, television, computer screen, tablet, camera, appliance, and augmented reality device.

12. The computationally-implemented thing/operation of clause 1, wherein said accepting input of a request for a particular image of a scene that is larger than the particular image comprises:

accepting input of the request for the particular image of the scene that contains more pixels than the particular image.

13. The computationally-implemented thing/operation of clause 1, wherein said accepting input of a request for a particular image of a scene that is larger than the particular image comprises:

accepting input of the request for the particular image of the scene that captures a larger spatial area than the particular image.

14. The computationally-implemented thing/operation of clause 1, wherein said accepting input of a request for a particular image of a scene that is larger than the particular image comprises:

accepting input of the request for the particular image of the scene that includes more data than the particular image.

15. The computationally-implemented thing/operation of clause 1, wherein said accepting input of a request for a particular image of a scene that is larger than the particular image comprises:

accepting input of the request for particular image data of the scene, wherein the scene is a representation of the image data collected by the array of more than one image sensor.

16. The computationally-implemented thing/operation of clause 15, wherein said accepting input of the request for particular image data of the scene, wherein the scene is a representation of the image data collected by the array of more than one image sensor comprises:

accepting input of the request for particular image data of the scene, wherein the scene is a sampling of the image data collected by the array of more than one image sensor.

17. The computationally-implemented thing/operation of clause 15, wherein said accepting input of the request for particular image data of the scene, wherein the scene is a representation of the image data collected by the array of more than one image sensor comprises:

accepting input of the request for particular image data of the scene, wherein the scene is a subset of the image data collected by the array of more than one image sensor.

18. The computationally-implemented thing/operation of clause 15, wherein said accepting input of the request for particular image data of the scene, wherein the scene is a representation of the image data collected by the array of more than one image sensor comprises:

accepting input of the request for particular image data of the scene, wherein the scene is a low-resolution version of the image data collected by the array of more than one image sensor.
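
As a non-limiting illustrative aid for clauses 15-18, the following Python sketch derives a low-resolution scene representation by keeping every k-th pixel of the image data collected by the sensor array. The toy 4x4 grid and the sampling factor are invented for illustration.

    # Non-limiting sketch: derive a low-resolution "scene" by sampling
    # every k-th pixel of the collected image data (nearest-neighbor).
    # Grid size and factor k are illustrative assumptions.

    def downsample(pixels, width, height, k):
        return [
            pixels[y * width + x]
            for y in range(0, height, k)
            for x in range(0, width, k)
        ]

    full = list(range(16))            # toy 4x4 "scene"
    overview = downsample(full, 4, 4, 2)
    print(overview)                   # [0, 2, 8, 10] -- a 2x2 sampled view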

19. The computationally-implemented thing/operation of clause 1, wherein said accepting input of a request for a particular image of a scene that is larger than the particular image comprises:

accepting input of the request for particular image data of a scene that is a football game.

20. The computationally-implemented thing/operation of clause 1, wherein said accepting input of a request for a particular image of a scene that is larger than the particular image comprises:

accepting input of the request for particular image data of a scene that is a street view of an area.

21. The computationally-implemented thing/operation of clause 1, wherein said accepting input of a request for a particular image of a scene that is larger than the particular image comprises:

acquiring the request for particular image data of a scene that is a tourist destination.

22. The computationally-implemented thing/operation of clause 1, wherein said accepting input of a request for a particular image of a scene that is larger than the particular image comprises:

acquiring the request for particular image data of a scene that is an inside of a home.

23. The computationally-implemented thing/operation of clause 1, wherein said accepting input of a request for a particular image of a scene that is larger than the particular image comprises:

accepting input of the request for particular image data of the scene, wherein the particular image data is an image that is a portion of the scene.

24. The computationally-implemented thing/operation of clause 23, wherein said accepting input of the request for particular image data of the scene, wherein the particular image data is an image that is a portion of the scene comprises:

accepting input of the request for particular image data of the scene, wherein the particular image data includes image data of a particular football player and the scene is a football field.

25. The computationally-implemented thing/operation of clause 23, wherein said accepting input of the request for particular image data of the scene, wherein the particular image data is an image that is a portion of the scene comprises:

accepting input of the request for particular image data of the scene, wherein the particular image data includes image data of a license plate of a vehicle, and the scene is an image representation of a highway bridge.

26. The computationally-implemented thing/operation of clause 1, wherein said accepting input of a request for a particular image of a scene that is larger than the particular image comprises:

accepting input of the request for particular image data that is part of the scene, wherein the particular image data includes video data.

27. The computationally-implemented thing/operation of clause 1, wherein said accepting input of a request for a particular image of a scene that is larger than the particular image comprises:

accepting input of the request for particular image data that is part of the scene, wherein the particular image data includes audio data.

28. The computationally-implemented thing/operation of clause 1, wherein said accepting input of a request for a particular image of a scene that is larger than the particular image comprises:

accepting input of the request for particular image data that is part of the scene from a requestor device that receives the request for particular image data through an audio interface.

29. The computationally-implemented thing/operation of clause 28, wherein said accepting input of the request for particular image data that is part of the scene from a requestor device that receives the request for particular image data through an audio interface comprises:

accepting input of the request for particular image data that is part of the scene from the requestor device that has a microphone that receives a spoken request for particular image data from the requestor.

30. The computationally-implemented thing/operation of clause 1, wherein said accepting input of a request for a particular image of a scene that is larger than the particular image comprises:

accepting input of the request for the particular image of the scene that is larger than the particular image, from the requestor.

31. The computationally-implemented thing/operation of clause 30, wherein said accepting input of the request for the particular image of the scene that is larger than the particular image, from the requestor comprises:

accepting input of the request for the particular image of the scene that is larger than the particular image, from the requestor that is a client operating a device.

32. The computationally-implemented thing/operation of clause 31, wherein said accepting input of the request for the particular image of the scene that is larger than the particular image, from the requestor that is a client operating a device comprises:

accepting input of the request for the particular image of the scene that is larger than the particular image, from the requestor that is a person operating a smart television with a remote control.

33. The computationally-implemented thing/operation of clause 30, wherein said accepting input of the request for the particular image of the scene that is larger than the particular image, from the requestor comprises:

accepting input of the request for the particular image of the scene that is larger than the particular image, from the requestor that is a device.

34. The computationally-implemented thing/operation of clause 33, wherein said accepting input of the request for the particular image of the scene that is larger than the particular image, from the requestor that is a device comprises:

accepting input of the request for the particular image of the scene that is larger than the particular image, from the requestor that is a component of a device.

35. The computationally-implemented thing/operation of clause 33, wherein said accepting input of the request for the particular image of the scene that is larger than the particular image, from the requestor that is a device comprises:

accepting input of the request for the particular image of the scene that is larger than the particular image, from the requestor that is a device that is executing a subroutine.

36. The computationally-implemented thing/operation of clause 33, wherein said accepting input of the request for the particular image of the scene that is larger than the particular image, from the requestor that is a device comprises:

accepting, at the device, input of the request for the particular image of the scene that is larger than the particular image, from the requestor that is the device that is executing a separate subroutine.

37. The computationally-implemented thing/operation of clause 1, wherein said transmitting the request for the particular image to an image sensor array that includes more than one image sensor and that is configured to capture the scene and retain a subset of the scene that includes the request for the particular image of the scene comprises:

transmitting the request for the particular image data of the scene to the image sensor array that includes multiple connected image sensors and that is configured to capture the scene and retain the subset of the scene that includes the request for the particular image of the scene.

38. The computationally-implemented thing/operation of clause 1, wherein said transmitting the request for the particular image to an image sensor array that includes more than one image sensor and that is configured to capture the scene and retain a subset of the scene that includes the request for the particular image of the scene comprises:

transmitting the request for the particular image data of the scene to the image sensor array that includes two image sensors arranged side by side and angled toward each other and that is configured to capture the scene that is larger than the requested image data.

39. The computationally-implemented thing/operation of clause 1, wherein said transmitting the request for the particular image to an image sensor array that includes more than one image sensor and that is configured to capture the scene and retain a subset of the scene that includes the request for the particular image of the scene comprises:

transmitting the request for the particular image data of the scene to the image sensor array that includes the array of image sensors arranged in a grid and that is configured to capture the scene that is larger than the requested image data.

40. The computationally-implemented thing/operation of clause 1, wherein said transmitting the request for the particular image to an image sensor array that includes more than one image sensor and that is configured to capture the scene and retain a subset of the scene that includes the request for the particular image of the scene comprises:

transmitting the request for the particular image data of the scene to the image sensor array that includes the array of image sensors arranged in a line such that a field of view is greater than 120 degrees and that is configured to capture the scene that is larger than the requested image data.
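
As a non-limiting illustrative aid for clause 40, the following Python sketch computes the combined horizontal field of view of image sensors arranged in a line, assuming each adjacent pair overlaps by a fixed angle; this simple additive model is an assumption made purely for illustration.

    # Non-limiting sketch: combined field of view for a line of sensors,
    # assuming a fixed pairwise overlap angle (illustrative assumption).

    def combined_fov(sensor_count, fov_per_sensor, overlap):
        return sensor_count * fov_per_sensor - (sensor_count - 1) * overlap

    # e.g., three 60-degree sensors with 10 degrees of pairwise overlap:
    print(combined_fov(3, 60.0, 10.0))  # 160.0 -- greater than 120 degrees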

41. The computationally-implemented thing/operation of clause 1, wherein said transmitting the request for the particular image to an image sensor array that includes more than one image sensor and that is configured to capture the scene and retain a subset of the scene that includes the request for the particular image of the scene comprises:

transmitting the request for the particular image to the image sensor array that includes more than one image sensor and that is configured to capture the scene that represents more image data than the requested particular image data.

42. The computationally-implemented thing/operation of clause 41, wherein said transmitting the request for the particular image to the image sensor array that includes more than one image sensor and that is configured to capture the scene that represents more image data than the requested particular image data comprises:

transmitting the request for the particular image to the image sensor array that includes more than one image sensor and that is configured to capture the scene that represents more than ten times as much image data as the requested particular image data.

43. The computationally-implemented thing/operation of clause 41, wherein said transmitting the request for the particular image to the image sensor array that includes more than one image sensor and that is configured to capture the scene that represents more image data than the requested particular image data comprises:

transmitting the request for the particular image to the image sensor array that includes more than one image sensor and that is configured to capture the scene that represents more than one hundred times as much image data as the requested particular image data.

44. The computationally-implemented thing/operation of clause 1, wherein said transmitting the request for the particular image to an image sensor array that includes more than one image sensor and that is configured to capture the scene and retain a subset of the scene that includes the request for the particular image of the scene comprises:

transmitting the request for the particular image to a remote server that is configured to relay the request for the particular image to the image sensor array that includes more than one image sensor.

45. The computationally-implemented thing/operation of clause 44, wherein said transmitting the request for the particular image to a remote server that is configured to relay the request for the particular image to the image sensor array that includes more than one image sensor comprises:

transmitting the request for the particular image to the remote server that is configured to relay the request for the particular image to the image sensor array that includes more than one image sensor along with one or more other requests for other particular images.

46. The computationally-implemented thing/operation of clause 44, wherein said transmitting the request for the particular image to a remote server that is configured to relay the request for the particular image to the image sensor array that includes more than one image sensor comprises:

transmitting the request for the particular image to the remote server that is configured to package multiple requests that include the request for the particular image and to transmit the package of multiple requests to the image sensor array.

47. The computationally-implemented thing/operation of clause 46, wherein said transmitting the request for the particular image to the remote server that is configured to package multiple requests that include the request for the particular image and to transmit the package of multiple requests to the image sensor array comprises:

transmitting the request for the particular image to the remote server that is configured to combine multiple requests that include the request for the particular image and to transmit the package of multiple requests with redundant data requests removed to the image sensor array.

48. The computationally-implemented thing/operation of clause 47, wherein said transmitting the request for the particular image to the remote server that is configured to combine multiple requests that include the request for the particular image and to transmit the package of multiple requests with redundant data requests removed to the image sensor array comprises:

transmitting the request for the particular image to the remote server that is configured to combine multiple requests from multiple requestors that include the request for the particular image, and to transmit the multiple requests as a single combined request for image data to the image sensor array.
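
As a non-limiting illustrative aid for clauses 44-48, the following Python sketch shows one way a relay server might combine tile-level requests from multiple requestors and remove redundant data requests so that each tile is asked of the image sensor array only once. The tile addressing scheme and all names are invented for illustration.

    # Non-limiting sketch: combine multiple requestors' tile requests into a
    # single deduplicated request, keeping a fan-out map so the results can
    # be distributed back to each requestor. Tile addressing is assumed.

    def combine_requests(requests):
        combined = set()
        fan_out = {}
        for requestor_id, tiles in requests:
            combined |= tiles               # union removes redundant tiles
            fan_out[requestor_id] = set(tiles)
        return combined, fan_out

    combined, fan_out = combine_requests([
        ("viewer-1", {(0, 0), (0, 1)}),
        ("viewer-2", {(0, 1), (1, 1)}),     # (0, 1) is a redundant request
    ])
    print(sorted(combined))  # [(0, 0), (0, 1), (1, 1)] -- each sent once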

49. The computationally-implemented thing/operation of clause 1, wherein said transmitting the request for the particular image to an image sensor array that includes more than one image sensor and that is configured to capture the scene and retain a subset of the scene that includes the request for the particular image of the scene comprises:

modifying the request for the particular image into an updated request for particular image data; and

transmitting the updated request for particular image data to the image sensor array that includes more than one image sensor and that is configured to capture the scene and retain the subset of the scene that includes the request for the particular image of the scene.

50. The computationally-implemented thing/operation of clause 49, wherein said modifying the request for the particular image into an updated request for particular image data comprises:

modifying the request for the particular image into an updated request for particular image data that identifies a portion of the image as targeted for updating.

51. The computationally-implemented thing/operation of clause 50, wherein said modifying the request for the particular image into an updated request for particular image data that identifies a portion of the image as targeted for updating comprises:

modifying the request for the particular image, which is a request for an image of a football player in a football game, into an updated request for particular image data that identifies the portion of the image as targeted for updating as the portion of the image that includes pixels that represent three spatial feet in all directions from the football player in the football game.

52. The computationally-implemented thing/operation of clause 51, wherein said modifying the request for the particular image, which is a request for an image of a football player in a football game, into an updated request for particular image data that identifies the portion of the image as targeted for updating as the portion of the image that includes pixels that represent three spatial feet in all directions from the football player in the football game comprises:

modifying the request for the particular image, which is a request for an image of a football player in a football game, into an updated request for particular image data that identifies the portion of the image as targeted for updating as the portion of the image that includes pixels that represent three spatial feet in all directions from the football player in the football game based on an algorithm that identified that portion of the image as most likely to have changed since a previous image.

53. The computationally-implemented thing/operation of clause 50, wherein said modifying the request for the particular image into an updated request for particular image data that identifies a portion of the image as targeted for updating comprises:

modifying the request for the particular image into an updated request for particular image data that identifies the portion of the image as targeted for updating based on one or more previously received images.

54. The computationally-implemented thing/operation of clause 53, wherein said modifying the request for the particular image into an updated request for particular image data that identifies the portion of the image as targeted for updating based on one or more previously received images comprises:

comparing one or more previously received images that are determined to be similar to the particular image; and

modifying the request for the particular image into the updated request for particular image data that identifies the portion of the image as targeted for updating based on the compared one or more previously received images.

55. The computationally-implemented thing/operation of clause 54, wherein said comparing one or more previously received images that are determined to be similar to the particular image comprises:

comparing one or more previously received images that are determined to be similar to the particular image to identify a portion of the particular image as targeted for updating.

56. The computationally-implemented thing/operation of clause 54, wherein said comparing one or more previously received images that are determined to be similar to the particular image comprises:

comparing a first previously received image with a second previously received image to determine a changed portion of the first previously received image that is different than a portion of the second previously received image; and

identifying the portion of the particular image that corresponds to the changed portion.
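
As a non-limiting illustrative aid for clauses 53-56, the following Python sketch compares two previously received frames to find the changed region, which could then be identified as the portion of the particular image targeted for updating. The row-major grayscale frames and the threshold are invented for illustration.

    # Non-limiting sketch: find the bounding box of pixels that differ
    # between two previously received frames; that box identifies the
    # portion targeted for updating. Threshold is an assumption.

    def changed_bounding_box(prev, curr, width, threshold=0):
        xs, ys = [], []
        for i, (a, b) in enumerate(zip(prev, curr)):
            if abs(a - b) > threshold:
                xs.append(i % width)
                ys.append(i // width)
        if not xs:
            return None  # nothing changed; no update needed
        return (min(xs), min(ys), max(xs), max(ys))

    prev = [0, 0, 0, 0, 0, 0, 0, 0, 0]      # 3x3 frame
    curr = [0, 0, 0, 0, 9, 0, 0, 0, 0]      # center pixel changed
    print(changed_bounding_box(prev, curr, 3))  # (1, 1, 1, 1)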

57. The computationally-implemented thing/operation of clause 1, wherein said transmitting the request for the particular image to an image sensor array that includes more than one image sensor and that is configured to capture the scene and retain a subset of the scene that includes the request for the particular image of the scene comprises:

generating an expanded request for the particular image; and

transmitting the generated expanded request for the particular image to the image sensor array that includes more than one image sensor and that is configured to capture the scene and retain the subset of the scene that includes the request for the particular image of the scene.

58. The computationally-implemented thing/operation of clause 57, wherein said generating an expanded request for the particular image comprises:

generating an expanded request for the particular image that includes the request for the particular image and a request for image data that borders the particular image.

59. The computationally-implemented thing/operation of clause 58, wherein said generating an expanded request for the particular image that includes the request for the particular image and a request for image data that borders the particular image comprises:

generating the expanded request for the particular image that includes the request for the particular image and the request for image data that borders the particular image on all four sides of the particular image.

60. The computationally-implemented thing/operation of clause 58, wherein said generating an expanded request for the particular image that includes the request for the particular image and a request for image data that borders the particular image comprises:

determining a projected next side image that is an image that borders the particular image; and

generating the expanded request for the particular image that includes the request for the particular image and a request for the projected next side image.

61. The computationally-implemented thing/operation of clause 60, wherein said determining a projected next side image that is an image that borders the particular image comprises:

determining a projected next side image based on a direction in which a device associated with the requestor is moving.

62. The computationally-implemented thing/operation of clause 61, wherein said determining a projected next side image based on a direction in which a device associated with the requestor is moving comprises:

determining the projected next side image based on the direction that a head of the requestor is turning while the requestor is wearing a virtual reality device.
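
As a non-limiting illustrative aid for clauses 60-62, the following Python sketch chooses a projected next side from the direction a requestor's virtual reality headset is turning; the yaw-rate threshold is an assumption made purely for illustration.

    # Non-limiting sketch: predict the "projected next side" from the
    # headset's yaw rate. The 15 deg/s threshold is an assumption.

    def projected_next_side(yaw_rate_deg_s, threshold=15.0):
        if yaw_rate_deg_s > threshold:
            return "right"
        if yaw_rate_deg_s < -threshold:
            return "left"
        return None  # no confident prediction; prefetch nothing extra

    print(projected_next_side(40.0))   # right
    print(projected_next_side(-3.0))   # None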

63. The computationally-implemented thing/operation of clause 57, wherein said generating an expanded request for the particular image comprises:

generating an expanded request for the particular image that includes the request for the particular image, a request for border image data that borders the particular image on each side, and a request for secondary border image data that borders the border image data.

64. The computationally-implemented thing/operation of clause 63, wherein said generating an expanded request for the particular image that includes the request for the particular image, a request for border image data that borders the particular image on each side, and a request for secondary border image data that borders the border image data comprises:

generating the expanded request for the particular image that includes the request for the particular image at a first resolution, the request for the border image data at a second resolution less than the first resolution, and the request for the secondary border image data at a third resolution less than or equal to the second resolution.
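
As a non-limiting illustrative aid for clauses 63-64, the following Python sketch expands a request into three tiers: the particular image at a first resolution, border image data at a lower second resolution, and secondary border image data at a third resolution less than or equal to the second. The ring widths and the specific resolution ladder are invented for illustration.

    # Non-limiting sketch: three-tier expanded request -- full resolution
    # for the region, half for the border ring, quarter for the secondary
    # ring. Ring width and scales are illustrative assumptions.

    def expanded_request(x, y, w, h, ring=32):
        def grow(n):
            return (x - n, y - n, w + 2 * n, h + 2 * n)
        return [
            {"region": (x, y, w, h), "scale": 1.0},     # first resolution
            {"region": grow(ring), "scale": 0.5},       # border image data
            {"region": grow(2 * ring), "scale": 0.25},  # secondary border
        ]

    for tier in expanded_request(100, 100, 640, 480):
        print(tier)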

65. The computationally-implemented thing/operation of clause 1, wherein said receiving only the particular image from the image sensor array, wherein the particular image represents a subset of the scene, and wherein a size characteristic of the particular image is at least partially based on a property of a requestor comprises:

receiving only the particular image from the image sensor array, wherein the particular image represents fewer pixels than the scene.

66. The computationally-implemented thing/operation of clause 1, wherein said receiving only the particular image from the image sensor array, wherein the particular image represents a subset of the scene, and wherein a size characteristic of the particular image is at least partially based on a property of a requestor comprises:

receiving only the particular image from the image sensor array, wherein the particular image represents a smaller geographic area than a geographic area represented by the scene.

67. The computationally-implemented thing/operation of clause 1, wherein said receiving only the particular image from the image sensor array, wherein the particular image represents a subset of the scene, and wherein a size characteristic of the particular image is at least partially based on a property of a requestor comprises:

receiving only the particular image from a remote server that received a portion of the scene, wherein the remote server received the portion of the scene that included the request for the particular image and a second request for a second particular image that is at least partially different than the first particular image.

68. The computationally-implemented thing/operation of clause 1, wherein said receiving only the particular image from the image sensor array, wherein the particular image represents a subset of the scene, and wherein a size characteristic of the particular image is at least partially based on a property of a requestor comprises:

receiving only the particular image from the image sensor array, wherein the particular image represents the subset of the scene, and wherein the image sensor array discarded portions of the scene other than the particular image.

69. The computationally-implemented thing/operation of clause 1, wherein said receiving only the particular image from the image sensor array, wherein the particular image represents a subset of the scene, and wherein a size characteristic of the particular image is at least partially based on a property of a requestor comprises:

receiving only the particular image data from the image sensor array, wherein data from the scene other than the particular image data is stored at the image sensor array.

70. The computationally-implemented thing/operation of clause 1, wherein said receiving only the particular image from the image sensor array, wherein the particular image represents a subset of the scene, and wherein a size characteristic of the particular image is at least partially based on a property of a requestor comprises:

receiving only the particular image data from a remote server configured to communicate with the image sensor array, wherein a first portion of the scene data other than the particular image is stored at the image sensor array, and a second portion of the scene data other than the particular image is stored at the remote server.

71. The computationally-implemented thing/operation of clause 1, wherein said receiving only the particular image from the image sensor array, wherein the particular image represents a subset of the scene, and wherein a size characteristic of the particular image is at least partially based on a property of a requestor comprises:

receiving only the particular image from the image sensor array, wherein the particular image represents the subset of the scene, and wherein the size characteristic of the particular image is at least partially based on a property of a requestor device that is associated with the requestor.

72. The computationally-implemented thing/operation of clause 71, wherein said receiving only the particular image from the image sensor array, wherein the particular image represents the subset of the scene, and wherein the size characteristic of the particular image is at least partially based on a property of a requestor device that is associated with the requestor comprises:

receiving only the particular image from the image sensor array, wherein the particular image represents the subset of the scene, and wherein the size characteristic of the particular image is at least partially based on a property of the requestor device that is configured to store data about the requestor.

73. The computationally-implemented thing/operation of clause 71, wherein said receiving only the particular image from the image sensor array, wherein the particular image represents the subset of the scene, and wherein the size characteristic of the particular image is at least partially based on a property of a requestor device that is associated with the requestor comprises:

receiving only the particular image from the image sensor array, wherein the particular image represents a subset of the scene, and wherein a data size of particular image data of the particular image is at least partially based on an available bandwidth of the connection between the image sensor array and the device associated with the requestor.

74. The computationally-implemented thing/operation of clause 71, wherein said receiving only the particular image from the image sensor array, wherein the particular image represents a subset of the scene, and wherein a data size of particular image data of the particular image is at least partially based on an available bandwidth of the connection between the image sensor array and the device associated with the requestor comprises:

receiving only the particular image from the image sensor array, wherein the data size of particular image data of the particular image is at least partially based on an available bandwidth of a connection between the device associated with the requestor and a remote server configured to communicate with the image sensor array.

75. The computationally-implemented thing/operation of clause 1, wherein said receiving only the particular image from the image sensor array, wherein the particular image represents a subset of the scene, and wherein a size characteristic of the particular image is at least partially based on a property of a requestor comprises:

receiving only the particular image from the image sensor array, wherein the particular image represents a subset of the scene, and wherein a data size of particular image data of the particular image is at least partially based on a screen size of a requestor device associated with the requestor.

76. The computationally-implemented thing/operation of clause 1, wherein said receiving only the particular image from the image sensor array, wherein the particular image represents a subset of the scene, and wherein a size characteristic of the particular image is at least partially based on a property of a requestor comprises:

receiving only the particular image from the image sensor array, wherein the particular image represents a subset of the scene, and wherein a data size of particular image data of the particular image is at least partially based on a maximum resolution of the requestor device associated with the requestor.
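
As a non-limiting illustrative aid for clauses 71-76, the following Python sketch sizes the returned image from properties of the requestor device: pixel dimensions are clamped to the screen, and total bytes are capped by available bandwidth. The bytes-per-pixel figure and the latency budget are invented for illustration.

    # Non-limiting sketch: clamp the returned image to the device's screen
    # and cap total bytes by available bandwidth. The 0.5 s budget and
    # 3 bytes/pixel are illustrative assumptions.

    def size_for_device(req_w, req_h, screen_w, screen_h,
                        bandwidth_bps, budget_s=0.5, bytes_per_pixel=3):
        w, h = min(req_w, screen_w), min(req_h, screen_h)
        max_bytes = int(bandwidth_bps / 8 * budget_s)
        while w * h * bytes_per_pixel > max_bytes:
            w, h = w // 2, h // 2       # step down the resolution ladder
        return w, h

    # e.g., a 4K request viewed on a 1080p phone over a 10 Mbit/s link:
    print(size_for_device(3840, 2160, 1920, 1080, 10_000_000))  # (480, 270)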

77. The computationally-implemented thing/operation of clause 1, wherein said presenting the received particular image to the requestor comprises:

presenting the received particular image on a viewfinder of a device associated with the requestor.

78. The computationally-implemented thing/operation of clause 77, wherein said presenting the received particular image on a viewfinder of a device associated with the requestor comprises:

displaying the received particular image on the viewfinder of the device associated with the requestor, wherein the device associated with the requestor is one or more of a cellular telephone device, a tablet device, a smartphone device, a laptop computer, a desktop computer, a television, and a wearable computer.

79. The computationally-implemented thing/operation of clause 1, wherein said presenting the received particular image to the requestor comprises:

presenting the received particular image that is an image of a football player in a football game to the requestor that is a spectator of the game that watches the football game on an Internet-enabled television.

80. The computationally-implemented thing/operation of clause 1, wherein said presenting the received particular image to the requestor comprises:

presenting the received particular image that is an image of an eagle at an animal watering hole to the requestor that is a naturalist that monitors the animal watering hole from a screen on their smart watch.

81. The computationally-implemented thing/operation of clause 1, wherein said presenting the received particular image to the requestor comprises:

modifying the received particular image into a modified particular image; and

presenting the modified received particular image to the requestor.

82. The computationally-implemented thing/operation of clause 81, wherein said modifying the received particular image into a modified particular image comprises:

modifying the received particular image into a modified particular image, wherein the received particular image includes only portions of the scene that have changed since the last time the particular image was displayed.

83. The computationally-implemented thing/operation of clause 82, wherein said modifying the received particular image into a modified particular image, wherein the received particular image includes only portions of the scene that have changed since the last time the particular image was displayed comprises:

modifying the received particular image into a modified particular image, wherein one or more portions of the received particular image that have not changed are updated with existing image data.
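
As a non-limiting illustrative aid for clauses 81-83, the following Python sketch composites a delta update, containing only the changed portions, onto a cached previous frame, so that unchanged portions are updated with existing image data. The sparse delta encoding is invented for illustration.

    # Non-limiting sketch: apply a sparse delta (only changed pixels) onto
    # the cached frame, reusing existing image data for the unchanged
    # remainder. The {index: value} encoding is an assumption.

    def apply_delta(cached_frame, delta):
        frame = list(cached_frame)
        for index, value in delta.items():
            frame[index] = value
        return frame

    cached = [1, 1, 1, 1]
    print(apply_delta(cached, {2: 9}))  # [1, 1, 9, 1] -- only index 2 sent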

84. The computationally-implemented thing/operation of clause 1, wherein said presenting the received particular image to the requestor comprises:

presenting a portion of the received particular image to the requestor.

85. The computationally-implemented thing/operation of clause 84, wherein said presenting a portion of the received particular image to the requestor comprises:

presenting a first portion of the received particular image to the requestor; and

storing a second portion of the received particular image.

86. The computationally-implemented thing/operation of clause 85, wherein said storing a second portion of the received particular image comprises:

storing a second portion of the received particular image that is adjacent to the first portion of the received particular image and is configured to be used as cached image data when the requestor requests an image corresponding to the second portion of the received particular image.

87. The computationally-implemented thing/operation of clause 85, wherein said storing a second portion of the received particular image comprises:

storing a second portion of the received particular image, wherein the second portion of the received particular image is received at a lower resolution than the first portion of the received particular image.

88. The computationally-implemented thing/operation of clause 1, wherein said presenting the received particular image to the requestor comprises:

transmitting the received particular image to a component configured to analyze the received particular image.

89. The computationally-implemented thing/operation of clause 1, wherein said presenting the received particular image to the requestor comprises:

transmitting the received particular image to a component configured to store the received particular image.

90. The computationally-implemented thing/operation of clause 1, wherein said presenting the received particular image to the requestor comprises:

presenting the received particular image to the requestor, wherein the requestor is a client.

91. The computationally-implemented thing/operation of clause 1, wherein said presenting the received particular image to the requestor comprises:

presenting the received particular image to the requestor, wherein the requestor is a component of a device.

92. A computationally-implemented thing/operation, comprising

means for accepting input of a request for a particular image of a scene that is larger than the particular image;

means for transmitting the request for the particular image to an image sensor array that includes more than one image sensor and that is configured to capture the scene and retain a subset of the scene that includes the request for the particular image of the scene;

means for receiving only the particular image from the image sensor array, wherein the particular image represents a subset of the scene, and wherein a size characteristic of the particular image is at least partially based on a property of a requestor; and

means for presenting the received particular image to the requestor.

93. The computationally-implemented thing/operation of clause 92, wherein said means for accepting input of a request for a particular image of a scene that is larger than the particular image comprises:

means for receiving input of the request for the particular image of the scene that is larger than the particular image.

94. The computationally-implemented thing/operation of clause 92, wherein said means for accepting input of a request for a particular image of a scene that is larger than the particular image comprises:

means for receiving input from an automated component, of the request for the particular image of the scene that is larger than the particular image.

95. The computationally-implemented thing/operation of clause 94, wherein said means for receiving input from an automated component, of the request for the particular image of the scene that is larger than the particular image comprises:

means for receiving input from an image object tracking component, of the request for the particular image that contains a tracked image object present in the scene that is larger than the particular image.

96. The computationally-implemented thing/operation of clause 95, wherein said means for receiving input from an image object tracking component, of the request for the particular image that contains a tracked image object present in the scene that is larger than the particular image comprises:

means for receiving input of the request for the particular image from the image object tracking component, of the requestor device that is associated with the requestor, wherein the particular image contains the tracked image object present in the scene that is larger than the particular image.

97. The computationally-implemented thing/operation of clause 95, wherein said means for receiving input from an image object tracking component, of the request for the particular image that contains a tracked image object present in the scene that is larger than the particular image comprises:

means for receiving input from a player tracking component, that is part of an Internet-enabled television configured to track football players, of the request for the particular image that is an image of a football game that contains a tracked image object that is a particular football player present in the scene that is larger than the particular image of the football game.
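In clauses 95 through 97 the request originates from an image object tracking component rather than a human. One plausible reading, sketched below in Python, pads the tracker's bounding box for the followed object (here, a football player) and turns it into a region request against the larger scene; the Track dataclass, its field names, and the margin value are editorial assumptions.

    from dataclasses import dataclass

    @dataclass
    class Track:
        # Tracker output in scene pixel coordinates: x, y, width, height.
        x: int
        y: int
        w: int
        h: int

    def request_from_track(track: Track, margin: int = 120) -> dict:
        # Pad the tracked object's box so the particular image keeps the
        # object in frame between tracker updates.
        return {
            "x": max(0, track.x - margin),
            "y": max(0, track.y - margin),
            "width": track.w + 2 * margin,
            "height": track.h + 2 * margin,
        }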

98. The computationally-implemented thing/operation of clause 92, wherein said means for accepting input of a request for a particular image of a scene that is larger than the particular image comprises:

means for accepting input of the request for the particular image of the scene that is larger than the particular image through an interface of a requestor device associated with the requestor.

99. The computationally-implemented thing/operation of clause 92, wherein said means for accepting input of a request for a particular image of a scene that is larger than the particular image comprises:

means for receiving input of the request for the particular image of the scene that is larger than the particular image from the requestor through the interface of the requestor device that is configured to display at least a portion of the scene.

100. The computationally-implemented thing/operation of clause 99, wherein said means for receiving input of the request for the particular image of the scene that is larger than the particular image from the requestor through the interface of the requestor device that is configured to display at least a portion of the scene comprises:

means for receiving input of the request for the particular image of the scene that is larger than the particular image from the requestor through the interface of the requestor device that is configured to display at least the portion of the scene in a viewfinder.

101. The computationally-implemented thing/operation of clause 100, wherein said means for receiving input of the request for the particular image of the scene that is larger than the particular image from the requestor through the interface of the requestor device that is configured to display at least the portion of the scene in a viewfinder comprises:

means for receiving input of the request for the particular image of the scene that is larger than the particular image from the requestor through the interface of the requestor device that is configured to display at least the portion of the scene, said request at least partially based on a view of the scene.
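A request that is "at least partially based on a view of the scene" can be derived directly from the viewfinder's pan and zoom state. The Python sketch below maps an assumed center-and-zoom viewfinder model onto a rectangular region of the scene; all names and the zoom convention are illustrative, not taken from the application.

    def view_to_request(cx: int, cy: int, zoom: float,
                        vf_w: int, vf_h: int) -> dict:
        # Higher zoom shows a smaller piece of the scene at the same
        # viewfinder size, so the requested region shrinks accordingly.
        req_w = int(vf_w / zoom)
        req_h = int(vf_h / zoom)
        return {"x": cx - req_w // 2, "y": cy - req_h // 2,
                "width": req_w, "height": req_h}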

102. The computationally-implemented thing/operation of clause 100, wherein said means for receiving input of the request for the particular image of the scene that is larger than the particular image from the requestor through the interface of the requestor device that is configured to display at least the portion of the scene in a viewfinder comprises:

means for receiving the request for particular image data of the scene from the requestor device that is configured to receive the selection of the particular image, wherein the requestor device is one or more of a smartphone, television, computer screen, tablet, camera, appliance, and augmented reality device.

103. The computationally-implemented thing/operation of clause 100, wherein said means for receiving input of the request for the particular image of the scene that is larger than the particular image from the requestor through the interface of the requestor device that is configured to display at least the portion of the scene in a viewfinder comprises:

means for accepting input of the request for the particular image of the scene that contains more pixels than the particular image.

104. The computationally-implemented thing/operation of clause 92, wherein said means for accepting input of a request for a particular image of a scene that is larger than the particular image comprises:

means for accepting input of the request for the particular image of the scene that captures a larger spatial area than the particular image.

105. The computationally-implemented thing/operation of clause 92, wherein said means for accepting input of a request for a particular image of a scene that is larger than the particular image comprises:

means for accepting input of the request for the particular image of the scene that includes more data than the particular image.

106. The computationally-implemented thing/operation of clause 92, wherein said means for accepting input of a request for a particular image of a scene that is larger than the particular image comprises:

means for accepting input of the request for particular image data of the scene, wherein the scene is a representation of the image data collected by the array of more than one image sensor.

107. The computationally-implemented thing/operation of clause 92, wherein said means for accepting input of a request for a particular image of a scene that is larger than the particular image comprises:

means for accepting input of the request for particular image data of the scene, wherein the scene is a sampling of the image data collected by the array of more than one image sensor.

108. The computationally-implemented thing/operation of clause 107, wherein said means for accepting input of the request for particular image data of the scene, wherein the scene is a sampling of the image data collected by the array of more than one image sensor comprises:

means for accepting input of the request for particular image data of the scene, wherein the scene is a subset of the image data collected by the array of more than one image sensor.

109. The computationally-implemented thing/operation of clause 107, wherein said means for accepting input of the request for particular image data of the scene, wherein the scene is a sampling of the image data collected by the array of more than one image sensor comprises:

means for accepting input of the request for particular image data of the scene, wherein the scene is a low-resolution version of the image data collected by the array of more than one image sensor.

110. The computationally-implemented thing/operation of clause 107, wherein said means for accepting input of the request for particular image data of the scene, wherein the scene is a sampling of the image data collected by the array of more than one image sensor comprises:

means for accepting input of the request for particular image data of a scene that is a football game.

111. The computationally-implemented thing/operation of clause 92, wherein said means for accepting input of a request for a particular image of a scene that is larger than the particular image comprises:

means for accepting input of the request for particular image data of a scene that is a street view of an area.

112. The computationally-implemented thing/operation of clause 92, wherein said means for accepting input of a request for a particular image of a scene that is larger than the particular image comprises:

means for acquiring the request for particular image data of a scene that is a tourist destination.

113. The computationally-implemented thing/operation of clause 92, wherein said means for accepting input of a request for a particular image of a scene that is larger than the particular image comprises:

means for acquiring the request for particular image data of a scene that is an inside of a home.

114. The computationally-implemented thing/operation of clause 92, wherein said means for accepting input of a request for a particular image of a scene that is larger than the particular image comprises:

means for accepting input of the request for particular image data of the scene, wherein the particular image data is an image that is a portion of the scene.

115. The computationally-implemented thing/operation of clause 92, wherein said means for accepting input of a request for a particular image of a scene that is larger than the particular image comprises:

means for accepting input of the request for particular image data of the scene, wherein the particular image data includes image data of a particular football player and the scene is a football field.

116. The computationally-implemented thing/operation of clause 115, wherein said means for accepting input of the request for particular image data of the scene, wherein the particular image data includes image data of a particular football player and the scene is a football field comprises:

means for accepting input of the request for particular image data of the scene, wherein the particular image data includes image data of a license plate of a vehicle, and the scene is an image representation of a highway bridge.

117. The computationally-implemented thing/operation of clause 92, wherein said means for accepting input of a request for a particular image of a scene that is larger than the particular image comprises:

means for accepting input of the request for particular image data that is part of the scene, wherein the particular image data includes video data.

118. The computationally-implemented thing/operation of clause 92, wherein said means for accepting input of a request for a particular image of a scene that is larger than the particular image comprises:

means for accepting input of the request for particular image data that is part of the scene, wherein the particular image data includes audio data.

119. The computationally-implemented thing/operation of clause 92, wherein said means for accepting input of a request for a particular image of a scene that is larger than the particular image comprises:

means for accepting input of the request for particular image data that is part of the scene from a requestor device that receives the request for particular image data through an audio interface.

120. The computationally-implemented thing/operation of clause 119, wherein said means for accepting input of the request for particular image data that is part of the scene from a requestor device that receives the request for particular image data through an audio interface comprises:

means for accepting input of the request for particular image data that is part of the scene from the requestor device that has a microphone that receives a spoken request for particular image data from the requestor.

121. The computationally-implemented thing/operation of clause 92, wherein said means for accepting input of a request for a particular image of a scene that is larger than the particular image comprises:

means for accepting input of the request for the particular image of the scene that is larger than the particular image, from the requestor.

122. The computationally-implemented thing/operation of clause 121, wherein said means for accepting input of the request for the particular image of the scene that is larger than the particular image, from the requestor comprises:

means for accepting input of the request for the particular image of the scene that is larger than the particular image, from the requestor that is a client operating a device.

123. The computationally-implemented thing/operation of clause 122, wherein said means for accepting input of the request for the particular image of the scene that is larger than the particular image, from the requestor that is a client operating a device comprises:

means for accepting input of the request for particular image data that is part of the scene from a requestor device that receives the request for particular image data through an audio interface.

124. The computationally-implemented thing/operation of clause 121, wherein said means for accepting input of the request for the particular image of the scene that is larger than the particular image, from the requestor comprises:

means for accepting input of the request for the particular image of the scene that is larger than the particular image, from the requestor that is a device.

125. The computationally-implemented thing/operation of clause 124, wherein said means for accepting input of the request for the particular image of the scene that is larger than the particular image, from the requestor that is a device comprises:

means for accepting input of the request for the particular image of the scene that is larger than the particular image, from the requestor that is a component of a device.

126. The computationally-implemented thing/operation of clause 124, wherein said means for accepting input of the request for the particular image of the scene that is larger than the particular image, from the requestor that is a device comprises:

means for accepting input of the request for the particular image of the scene that is larger than the particular image, from the requestor that is a device that is executing a subroutine.

127. The computationally-implemented thing/operation of clause 124, wherein said means for accepting input of the request for the particular image of the scene that is larger than the particular image, from the requestor that is a device comprises:

means for accepting, at the device, input of the request for the particular image of the scene that is larger than the particular image, from the requestor that is the device that is executing a separate subroutine.

128. The computationally-implemented thing/operation of clause 92, wherein said means for transmitting the request for the particular image to an image sensor array that includes more than one image sensor and that is configured to capture the scene and retain a subset of the scene that includes the request for the particular image of the scene comprises:

means for transmitting the request for the particular image data of the scene to the image sensor array that includes multiple connected image sensors and that is configured to capture the scene and retain the subset of the scene that includes the request for the particular image of the scene.

129. The computationally-implemented thing/operation of clause 92, wherein said means for transmitting the request for the particular image to an image sensor array that includes more than one image sensor and that is configured to capture the scene and retain a subset of the scene that includes the request for the particular image of the scene comprises:

means for transmitting the request for the particular image data of the scene to the image sensor array that includes two image sensors arranged side by side and angled toward each other and that is configured to capture the scene that is larger than the requested image data.

130. The computationally-implemented thing/operation of clause 92, wherein said means for transmitting the request for the particular image to an image sensor array that includes more than one image sensor and that is configured to capture the scene and retain a subset of the scene that includes the request for the particular image of the scene comprises:

means for transmitting the request for the particular image data of the scene to the image sensor array that includes the array of image sensors arranged in a grid and that is configured to capture the scene that is larger than the requested image data.

131. The computationally-implemented thing/operation of clause 92, wherein said means for transmitting the request for the particular image to an image sensor array that includes more than one image sensor and that is configured to capture the scene and retain a subset of the scene that includes the request for the particular image of the scene comprises:

means for transmitting the request for the particular image data of the scene to the image sensor array that includes the array of image sensors arranged in a line such that a field of view is greater than 120 degrees and that is configured to capture the scene that is larger than the requested image data.
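For the line arrangement of clause 131, the aggregate horizontal field of view grows with each added sensor, minus whatever overlap adjacent sensors share. The small Python sketch below works through that arithmetic under assumed per-sensor and overlap angles; none of the specific figures come from the application.

    def total_fov(sensor_fov_deg: float, n_sensors: int,
                  overlap_deg: float) -> float:
        # Each sensor past the first contributes its field of view minus
        # the overlap it shares with its neighbor.
        return sensor_fov_deg + (n_sensors - 1) * (sensor_fov_deg - overlap_deg)

    # Three 60-degree sensors overlapping by 10 degrees span 160 degrees,
    # comfortably past the 120-degree threshold recited above.
    assert total_fov(60, 3, 10) == 160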

132. The computationally-implemented thing/operation of clause 92, wherein said means for transmitting the request for the particular image to an image sensor array that includes more than one image sensor and that is configured to capture the scene and retain a subset of the scene that includes the request for the particular image of the scene comprises:

means for transmitting the request for the particular image to the image sensor array that includes more than one image sensor and that is configured to capture the scene that represents more image data than the requested particular image data.

133. The computationally-implemented thing/operation of clause 132, wherein said means for transmitting the request for the particular image to the image sensor array that includes more than one image sensor and that is configured to capture the scene that represents more image data than the requested particular image data comprises:

means for transmitting the request for the particular image to the image sensor array that includes more than one image sensor and that is configured to capture the scene that represents more than ten times as much image data as the requested particular image data.

134. The computationally-implemented thing/operation of clause 132, wherein said means for transmitting the request for the particular image to the image sensor array that includes more than one image sensor and that is configured to capture the scene that represents more image data than the requested particular image data comprises:

means for transmitting the request for the particular image to the image sensor array that includes more than one image sensor and that is configured to capture the scene that represents more than one hundred times as much image data as the requested particular image data.

135. The computationally-implemented thing/operation of clause 92, wherein said means for transmitting the request for the particular image to an image sensor array that includes more than one image sensor and that is configured to capture the scene and retain a subset of the scene that includes the request for the particular image of the scene comprises:

means for transmitting the request for the particular image to a remote server that is configured to relay the request for the particular image to the image sensor array that includes more than one image sensor.

136. The computationally-implemented thing/operation of clause 135, wherein said means for transmitting the request for the particular image to a remote server that is configured to relay the request for the particular image to the image sensor array that includes more than one image sensor comprises:

means for transmitting the request for the particular image to the remote server that is configured to relay the request for the particular image to the image sensor array that includes more than one image sensor along with one or more other requests for other particular images.

137. The computationally-implemented thing/operation of clause 135, wherein said means for transmitting the request for the particular image to a remote server that is configured to relay the request for the particular image to the image sensor array that includes more than one image sensor comprises:

means for transmitting the request for the particular image to the remote server that is configured to package multiple requests that include the request for the particular image and to transmit the package of multiple requests to the image sensor array.

138. The computationally-implemented thing/operation of clause 137, wherein said means for transmitting the request for the particular image to the remote server that is configured to package multiple requests that include the request for the particular image and to transmit the package of multiple requests to the image sensor array comprises:

means for transmitting the request for the particular image to the remote server that is configured to combine multiple requests that include the request for the particular image and to transmit the package of multiple requests with redundant data requests removed to the image sensor array.

139. The computationally-implemented thing/operation of clause 138, wherein said means for transmitting the request for the particular image to the remote server that is configured to combine multiple requests that include the request for the particular image and to transmit the package of multiple requests with redundant data requests removed to the image sensor array comprises:

means for transmitting the request for the particular image to the remote server that is configured to combine multiple requests from multiple requestors that include the request for the particular image, and to transmit the multiple requests as a single combined request for image data to the image sensor array.
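Clauses 136 through 139 describe a remote server that batches many requestors' requests and strips redundant data before forwarding a single combined request to the array. One simple policy, sketched below in Python, forwards the bounding union of all requested regions so overlapping pixels are asked for only once; the dictionary request format and the union policy itself are illustrative assumptions.

    def combine_requests(requests: list[dict]) -> dict:
        # Take the bounding union of every requested region; pixels that
        # appear in several requests are transmitted to the array once.
        left = min(r["x"] for r in requests)
        top = min(r["y"] for r in requests)
        right = max(r["x"] + r["width"] for r in requests)
        bottom = max(r["y"] + r["height"] for r in requests)
        return {"x": left, "y": top,
                "width": right - left, "height": bottom - top}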

140. The computationally-implemented thing/operation of clause 92, wherein said means for transmitting the request for the particular image to an image sensor array that includes more than one image sensor and that is configured to capture the scene and retain a subset of the scene that includes the request for the particular image of the scene comprises:

means for modifying the request for the particular image into an updated request for particular image data; and means for transmitting the updated request for particular image data to the image sensor array that includes more than one image sensor and that is configured to capture the scene and retain the subset of the scene that includes the request for the particular image of the scene.

141. The computationally-implemented thing/operation of clause 140, wherein said means for modifying the request for the particular image into an updated request for particular image data comprises:

means for modifying the request for the particular image into an updated request for particular image data that identifies a portion of the image as targeted for updating.

142. The computationally-implemented thing/operation of clause 141, wherein said means for modifying the request for the particular image into an updated request for particular image data that identifies a portion of the image as targeted for updating comprises:

means for modifying the request for the particular image, which is a request for an image of a football player in a football game, into an updated request for particular image data that identifies the portion of the image as targeted for updating as the portion of the image that includes pixels that represent three spatial feet in all directions from the football player in the football game.

143. The computationally-implemented thing/operation of clause 142, wherein said means for modifying the request for the particular image, which is a request for an image of a football player in a football game, into an updated request for particular image data that identifies the portion of the image as targeted for updating as the portion of the image that includes pixels that represent three spatial feet in all directions from the football player in the football game comprises:

means for modifying the request for the particular image, which is a request for an image of a football player in a football game, into an updated request for particular image data that identifies the portion of the image as targeted for updating as the portion of the image that includes pixels that represent three spatial feet in all directions from the football player in the football game based on an algorithm that identified that portion of the image as most likely to have changed since a previous image.
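Clauses 142 and 143 narrow the update to pixels within three spatial feet of the tracked player, the area an algorithm judges most likely to have changed since the previous image. The Python sketch below converts that distance to pixels under an assumed scene scale; the 40-pixels-per-foot figure and the request format are editorial assumptions.

    def targeted_update_request(player_x: int, player_y: int,
                                feet: float = 3.0,
                                px_per_foot: float = 40.0) -> dict:
        # Pixels within `feet` of the player in every direction form the
        # portion of the image identified as targeted for updating.
        r = int(feet * px_per_foot)
        return {"x": player_x - r, "y": player_y - r,
                "width": 2 * r, "height": 2 * r}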

144. The computationally-implemented thing/operation of clause 141, wherein said means for modifying the request for the particular image into an updated request for particular image data that identifies a portion of the image as targeted for updating comprises:

means for modifying the request for the particular image into an updated request for particular image data that identifies the portion of the image as targeted for updating based on one or more previously received images.

145. The computationally-implemented thing/operation of clause 144, wherein said means for modifying the request for the particular image into an updated request for particular image data that identifies the portion of the image as targeted for updating based on one or more previously received images comprises:

means for comparing one or more previously received images that are determined to be similar to the particular image; and means for modifying the request for the particular image into the updated request for particular image data that identifies the portion of the image as targeted for updating based on the compared one or more previously received images.

146. The computationally-implemented thing/operation of clause 145, wherein said means for comparing one or more previously received images that are determined to be similar to the particular image comprises:

means for comparing one or more previously received images that are determined to be similar to the particular image to identify a portion of the particular image as targeted for updating.

147. The computationally-implemented thing/operation of clause 145, wherein said means for comparing one or more previously received images that are determined to be similar to the particular image comprises:

means for comparing a first previously received image with a second previously received image to determine a changed portion of the first previously received image that is different than a portion of the second previously received image; and means for identifying the portion of the particular image that corresponds to the changed portion.
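Clause 147 compares a first and a second previously received image to find the changed portion. A routine frame-differencing pass, sketched below in Python with NumPy, returns the bounding box of pixels whose values moved by more than a threshold; the threshold value and the grayscale-frame assumption are illustrative.

    import numpy as np

    def changed_portion(first: np.ndarray, second: np.ndarray,
                        threshold: int = 12):
        # Boolean map of pixels that differ between the two previously
        # received (grayscale) frames by more than `threshold`.
        diff = np.abs(second.astype(int) - first.astype(int)) > threshold
        ys, xs = np.nonzero(diff)
        if xs.size == 0:
            return None  # no changed portion detected
        # Bounding box of the changed portion, identifying the matching
        # region of the particular image as targeted for updating.
        return (int(xs.min()), int(ys.min()),
                int(xs.max() - xs.min() + 1), int(ys.max() - ys.min() + 1))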

148. The computationally-implemented thing/operation of clause 92, wherein said means for transmitting the request for the particular image to an image sensor array that includes more than one image sensor and that is configured to capture the scene and retain a subset of the scene that includes the request for the particular image of the scene comprises:

means for generating an expanded request for the particular image; and means for transmitting the generated expanded request for the particular image to the image sensor array that includes more than one image sensor and that is configured to capture the scene and retain the subset of the scene that includes the request for the particular image of the scene.

149. The computationally-implemented thing/operation of clause 148, wherein said means for generating an expanded request for the particular image comprises:

means for generating an expanded request for the particular image that includes the request for the particular image and a request for image data that borders the particular image.

150. The computationally-implemented thing/operation of clause 149, wherein said means for generating an expanded request for the particular image that includes the request for the particular image and a request for image data that borders the particular image comprises:

means for generating the expanded request for the particular image that includes the request for the particular image and the request for image data that borders the particular image on all four sides of the particular image.

151. The computationally-implemented thing/operation of clause 149, wherein said means for generating an expanded request for the particular image that includes the request for the particular image and a request for image data that borders the particular image comprises:

means for determining a projected next side image that is an image that borders the particular image; and means for generating the expanded request for the particular image that includes the request for the particular image and a request for the projected next side image.

152. The computationally-implemented thing/operation of clause 151, wherein said means for determining a projected next side image that is an image that borders the particular image comprises:

means for determining a projected next side image based on a direction in which a device associated with the requestor is moving.

153. The computationally-implemented thing/operation of clause 152, wherein said means for determining a projected next side image based on a direction in which a device associated with the requestor is moving comprises:

means for determining the projected next side image based on the direction that a head of the requestor is turning while the requestor is wearing a virtual reality device.
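Clauses 151 through 153 prefetch the bordering image in the direction the device, or a virtual reality wearer's head, is moving. A minimal Python sketch of that projection follows; the four-way direction labels and the region dictionary are assumptions made for illustration.

    def projected_next_side(region: dict, direction: str) -> dict:
        # Shift the current region one full frame in the direction of
        # motion (e.g., the way the requestor's head is turning) so the
        # bordering image can be requested ahead of time.
        dx = {"left": -region["width"], "right": region["width"]}.get(direction, 0)
        dy = {"up": -region["height"], "down": region["height"]}.get(direction, 0)
        return {**region, "x": region["x"] + dx, "y": region["y"] + dy}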

154. The computationally-implemented thing/operation of clause 148, wherein said means for generating an expanded request for the particular image comprises:

means for generating an expanded request for the particular image that includes the request for the particular image, a request for border image data that borders the particular image on each side, and a request for secondary border image data that borders the border image data.

155. The computationally-implemented thing/operation of clause 154, wherein said means for generating an expanded request for the particular image that includes the request for the particular image, a request for border image data that borders the particular image on each side, and a request for secondary border image data that borders the border image data comprises:

means for generating the expanded request for the particular image that includes the request for the particular image at a first resolution, the request for the border image data at a second resolution less than the first resolution, and the request for the secondary border image data at a third resolution less than or equal to the second resolution.
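Clause 155 grades the expanded request: full resolution for the particular image, less for the border ring, and no more than that for the secondary ring. The Python sketch below builds such a three-tier request from a center region; the ring-growth rule and the resolution fractions are assumed for illustration only.

    def grow(region: dict, rings: int) -> dict:
        # Expand a region by `rings` multiples of its own size per side.
        return {"x": region["x"] - rings * region["width"],
                "y": region["y"] - rings * region["height"],
                "width": region["width"] * (1 + 2 * rings),
                "height": region["height"] * (1 + 2 * rings)}

    def expanded_request(center: dict) -> list[dict]:
        # First resolution > second resolution >= third resolution, as the
        # clause requires.
        return [{"region": center,          "resolution": 1.0},
                {"region": grow(center, 1), "resolution": 0.5},
                {"region": grow(center, 2), "resolution": 0.25}]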

156. The computationally-implemented thing/operation of clause 92, wherein said means for receiving only the particular image from the image sensor array, wherein the particular image represents a subset of the scene, and wherein a size characteristic of the particular image is at least partially based on a property of a requestor comprises:

means for receiving only the particular image from the image sensor array, wherein the particular image represents fewer pixels than the scene.

157. The computationally-implemented thing/operation of clause 92, wherein said means for receiving only the particular image from the image sensor array, wherein the particular image represents a subset of the scene, and wherein a size characteristic of the particular image is at least partially based on a property of a requestor comprises:

means for receiving only the particular image from the image sensor array, wherein the particular image represents a smaller geographic area than a geographic area represented by the scene.

158. The computationally-implemented thing/operation of clause 92, wherein said means for receiving only the particular image from the image sensor array, wherein the particular image represents a subset of the scene, and wherein a size characteristic of the particular image is at least partially based on a property of a requestor comprises:

means for receiving only the particular image from a remote server that received a portion of the scene, wherein the remote server received the portion of the scene that included the request for the particular image and a second request for a second particular image that is at least partially different than the first particular image.

159. The computationally-implemented thing/operation of clause 92, wherein said means for receiving only the particular image from the image sensor array, wherein the particular image represents a subset of the scene, and wherein a size characteristic of the particular image is at least partially based on a property of a requestor comprises:

means for receiving only the particular image from the image sensor array, wherein the particular image represents the subset of the scene, and wherein the image sensor array discarded portions of the scene other than the particular image.

160. The computationally-implemented thing/operation of clause 92, wherein said means for receiving only the particular image from the image sensor array, wherein the particular image represents a subset of the scene, and wherein a size characteristic of the particular image is at least partially based on a property of a requestor comprises:

means for receiving only the particular image data from the image sensor array, wherein data from the scene other than the particular image data is stored at the image sensor array.

161. The computationally-implemented thing/operation of clause 92, wherein said means for receiving only the particular image from the image sensor array, wherein the particular image represents a subset of the scene, and wherein a size characteristic of the particular image is at least partially based on a property of a requestor comprises:

means for receiving only the particular image data from a remote server configured to communicate with the image sensor array, wherein a first portion of the scene data other than the particular image is stored at the image sensor array, and a second portion of the scene data other than the particular image is stored at the remote server.

162. The computationally-implemented thing/operation of clause 92, wherein said means for receiving only the particular image from the image sensor array, wherein the particular image represents a subset of the scene, and wherein a size characteristic of the particular image is at least partially based on a property of a requestor comprises:

means for receiving only the particular image from the image sensor array, wherein the particular image represents the subset of the scene, and wherein the size characteristic of the particular image is at least partially based on a property of a requestor device that is associated with the requestor.

163. The computationally-implemented thing/operation of clause 162, wherein said means for receiving only the particular image from the image sensor array, wherein the particular image represents the subset of the scene, and wherein the size characteristic of the particular image is at least partially based on a property of a requestor device that is associated with the requestor comprises:

means for receiving only the particular image from the image sensor array, wherein the particular image represents the subset of the scene, and wherein the size characteristic of the particular image is at least partially based on a property of the requestor device that is configured to store data about the requestor.

164. The computationally-implemented thing/operation of clause 162, wherein said means for receiving only the particular image from the image sensor array, wherein the particular image represents the subset of the scene, and wherein the size characteristic of the particular image is at least partially based on a property of a requestor device that is associated with the requestor comprises:

means for receiving only the particular image from the image sensor array, wherein the particular image represents a subset of the scene, and wherein a data size of particular image data of the particular image is at least partially based on an available bandwidth of the connection between the image sensor array and the device associated with the requestor.

165. The computationally-implemented thing/operation of clause 162, wherein said means for receiving only the particular image from the image sensor array, wherein the particular image represents the subset of the scene, and wherein the size characteristic of the particular image is at least partially based on a property of a requestor device that is associated with the requestor comprises:

means for receiving only the particular image from the image sensor array, wherein the data size of particular image data of the particular image is at least partially based on an available bandwidth of a connection between the device associated with the requestor and a remote server configured to communicate with the image sensor array.
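Clauses 164 and 165 tie the data size of the particular image to the available bandwidth of the link, whether to the array directly or through an intermediating remote server. The Python sketch below picks the largest rung of an assumed resolution ladder whose per-frame pixel budget fits the measured bandwidth; the frame rate, compression figure, and ladder are illustrative assumptions.

    def pick_resolution(bandwidth_bps: float, fps: float = 30.0,
                        bits_per_pixel: float = 0.1) -> tuple:
        # `bits_per_pixel` is an assumed average after video compression.
        pixel_budget = bandwidth_bps / (fps * bits_per_pixel)
        ladder = [(3840, 2160), (1920, 1080), (1280, 720), (640, 360)]
        for w, h in ladder:
            if w * h <= pixel_budget:
                return (w, h)
        return ladder[-1]  # floor when the connection is very constrained

    # e.g., a 10 Mbit/s connection yields a budget of roughly 3.3 megapixels
    # per frame, so the 1920x1080 rung is selected.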

166. The computationally-implemented thing/operation of clause 92, wherein said means for receiving only the particular image from the image sensor array, wherein the particular image represents a subset of the scene, and wherein a size characteristic of the particular image is at least partially based on a property of a requestor comprises:

means for receiving only the particular image from the image sensor array, wherein the particular image represents a subset of the scene, and wherein a data size of particular image data of the particular image is at least partially based on a screen size of a requestor device associated with the requestor.

167. The computationally-implemented thing/operation of clause 92, wherein said means for receiving only the particular image from the image sensor array, wherein the particular image represents a subset of the scene, and wherein a size characteristic of the particular image is at least partially based on a property of a requestor comprises:

means for receiving only the particular image from the image sensor array, wherein the particular image represents a subset of the scene, and wherein a data size of particular image data of the particular image is at least partially based on a maximum resolution of the requestor device associated with the requestor.
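Clauses 166 and 167 instead bound the particular image by the requestor device itself: its screen size and its maximum displayable resolution. A minimal Python sketch of that clamp follows, with hypothetical parameter names; sending more pixels than the device can show would only waste bandwidth.

    def clamp_to_device(req_w: int, req_h: int,
                        max_w: int, max_h: int) -> tuple:
        # Scale the requested dimensions down (never up) so they fit the
        # device's maximum resolution while preserving aspect ratio.
        scale = min(max_w / req_w, max_h / req_h, 1.0)
        return (int(req_w * scale), int(req_h * scale))

    # e.g., a 4000x3000 request shown on a 1920x1080 laptop is delivered
    # at 1440x1080.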

168. The computationally-implemented thing/operation of clause 92, wherein said means for presenting the received particular image to the requestor comprises:

means for presenting the received particular image on a viewfinder of a device associated with the requestor.

169. The computationally-implemented thing/operation of clause 168, wherein said means for presenting the received particular image on a viewfinder of a device associated with the requestor comprises:

means for displaying the received particular image on the viewfinder of the device associated with the requestor, wherein the device associated with the requestor is one or more of a cellular telephone device, a tablet device, a smartphone device, a laptop computer, a desktop computer, a television, and a wearable computer.

170. The computationally-implemented thing/operation of clause 92, wherein said means for presenting the received particular image to the requestor comprises:

means for presenting the received particular image that is an image of a football player in a football game to the requestor that is a spectator of the game that watches the football game on an Internet-enabled television.

171. The computationally-implemented thing/operation of clause 92, wherein said means for presenting the received particular image to the requestor comprises:

means for presenting the received particular image that is an image of an eagle at an animal watering hole to the requestor that is a naturalist that monitors the animal watering hole from a screen on their smart watch.

172. The computationally-implemented thing/operation of clause 92, wherein said means for presenting the received particular image to the requestor comprises:

means for modifying the received particular image into a modified particular image; and means for presenting the modified received particular image to the requestor.

173. The computationally-implemented thing/operation of clause 172, wherein said means for modifying the received particular image into a modified particular image comprises:

means for modifying the received particular image into a modified particular image, wherein the received particular image includes only portions of the scene that have changed since the last time the particular image was displayed.

174. The computationally-implemented thing/operation of clause 173, wherein said means for modifying the received particular image into a modified particular image, wherein the received particular image includes only portions of the scene that have changed since the last time the particular image was displayed comprises:

means for modifying the received particular image into a modified particular image, wherein one or more portions of the received particular image that have not changed are updated with existing image data.

175. The computationally-implemented thing/operation of clause 92, wherein said means for presenting the received particular image to the requestor comprises:

means for presenting a portion of the received particular image to the requestor.

176. The computationally-implemented thing/operation of clause 175, wherein said means for presenting a portion of the received particular image to the requestor comprises:

means for presenting a first portion of the received particular image to the requestor; and

means for storing a second portion of the received particular image.

177. The computationally-implemented thing/operation of clause 176, wherein said means for storing a second portion of the received particular image comprises:

means for storing a second portion of the received particular image that is adjacent to the first portion of the received particular image and is configured to be used as cached image data when the requestor requests an image corresponding to the second portion of the received particular image.

178. The computationally-implemented thing/operation of clause 176, wherein said means for storing a second portion of the received particular image comprises:

means for storing a second portion of the received particular image, wherein the second portion of the received particular image is received at a lower resolution than the first portion of the received particular image.

179. The computationally-implemented thing/operation of clause 92, wherein said means for presenting the received particular image to the requestor comprises:

means for transmitting the received particular image to a component configured to analyze the received particular image.

180. The computationally-implemented thing/operation of clause 92, wherein said means for presenting the received particular image to the requestor comprises:

means for transmitting the received particular image to a component configured to store the received particular image.

181. The computationally-implemented thing/operation of clause 92, wherein said means for presenting the received particular image to the requestor comprises:

means for presenting the received particular image to the requestor, wherein the requestor is a client.

182. The computationally-implemented thing/operation of clause 92, wherein said means for presenting the received particular image to the requestor comprises:

means for presenting the received particular image to the requestor, wherein the requestor is a component of a device.

183. A computationally-implemented thing/operation, comprising

circuitry for accepting input of a request for a particular image of a scene that is larger than the particular image;

circuitry for transmitting the request for the particular image to an image sensor array that includes more than one image sensor and that is configured to capture the scene and retain a subset of the scene that includes the request for the particular image of the scene;

circuitry for receiving only the particular image from the image sensor array, wherein the particular image represents a subset of the scene, and wherein a size characteristic of the particular image is at least partially based on a property of a requestor; and

circuitry for presenting the received particular image to the requestor.

184. The computationally-implemented thing/operation of clause 183, wherein said circuitry for accepting input of a request for a particular image of a scene that is larger than the particular image comprises:

circuitry for receiving input of the request for the particular image of the scene that is larger than the particular image.

185. The computationally-implemented thing/operation of clause 183, wherein said circuitry for accepting input of a request for a particular image of a scene that is larger than the particular image comprises:

circuitry for receiving input from an automated component, of the request for the particular image of the scene that is larger than the particular image.

186. The computationally-implemented thing/operation of clause 185, wherein said circuitry for receiving input from an automated component, of the request for the particular image of the scene that is larger than the particular image comprises:

circuitry for receiving input from an image object tracking component, of the request for the particular image that contains a tracked image object present in the scene that is larger than the particular image.

187. The computationally-implemented thing/operation of clause 186, wherein said circuitry for receiving input from an image object tracking component, of the request for the particular image that contains a tracked image object present in the scene that is larger than the particular image comprises:

circuitry for receiving input of the request for the particular image from the image object tracking component, of the requestor device that is associated with the requestor, wherein the particular image contains the tracked image object present in the scene that is larger than the particular image.

188. The computationally-implemented thing/operation of clause 186, wherein said circuitry for receiving input from an image object tracking component, of the request for the particular image that contains a tracked image object present in the scene that is larger than the particular image comprises:

circuitry for receiving input from a player tracking component, that is part of an Internet-enabled television configured to track football players, of the request for the particular image that is an image of a football game that contains a tracked image object that is a particular football player present in the scene that is larger than the particular image of the football game.

189. The computationally-implemented thing/operation of clause 183, wherein said circuitry for accepting input of a request for a particular image of a scene that is larger than the particular image comprises:

circuitry for accepting input of the request for the particular image of the scene that is larger than the particular image through an interface of a requestor device associated with the requestor.

190. The computationally-implemented thing/operation of clause 183, wherein said circuitry for accepting input of a request for a particular image of a scene that is larger than the particular image comprises:

circuitry for receiving input of the request for the particular image of the scene that is larger than the particular image from the requestor through the interface of the requestor device that is configured to display at least a portion of the scene.

191. The computationally-implemented thing/operation of clause 190, wherein said circuitry for receiving input of the request for the particular image of the scene that is larger than the particular image from the requestor through the interface of the requestor device that is configured to display at least a portion of the scene comprises:

circuitry for receiving input of the request for the particular image of the scene that is larger than the particular image from the requestor through the interface of the requestor device that is configured to display at least the portion of the scene in a viewfinder.

192. The computationally-implemented thing/operation of clause 191, wherein said circuitry for receiving input of the request for the particular image of the scene that is larger than the particular image from the requestor through the interface of the requestor device that is configured to display at least the portion of the scene in a viewfinder comprises:

circuitry for receiving input of the request for the particular image of the scene that is larger than the particular image from the requestor through the interface of the requestor device that is configured to display at least the portion of the scene, said request at least partially based on a view of the scene.

193. The computationally-implemented thing/operation of clause 191, wherein said circuitry for receiving input of the request for the particular image of the scene that is larger than the particular image from the requestor through the interface of the requestor device that is configured to display at least the portion of the scene in a viewfinder comprises:

circuitry for receiving the request for particular image data of the scene from the requestor device that is configured to receive the selection of the particular image, wherein the requestor device is one or more of a smartphone, television, computer screen, tablet, camera, appliance, and augmented reality device.

194. The computationally-implemented thing/operation of clause 191, wherein said circuitry for receiving input of the request for the particular image of the scene that is larger than the particular image from the requestor through the interface of the requestor device that is configured to display at least the portion of the scene in a viewfinder comprises:

circuitry for accepting input of the request for the particular image of the scene that contains more pixels than the particular image.

195. The computationally-implemented thing/operation of clause 183, wherein said circuitry for accepting input of a request for a particular image of a scene that is larger than the particular image comprises:

circuitry for accepting input of the request for the particular image of the scene that captures a larger spatial area than the particular image.

196. The computationally-implemented thing/operation of clause 183, wherein said circuitry for accepting input of a request for a particular image of a scene that is larger than the particular image comprises:

circuitry for accepting input of the request for the particular image of the scene that includes more data than the particular image.

197. The computationally-implemented thing/operation of clause 183, wherein said circuitry for accepting input of a request for a particular image of a scene that is larger than the particular image comprises:

circuitry for accepting input of the request for particular image data of the scene, wherein the scene is a representation of the image data collected by the array of more than one image sensor.

198. The computationally-implemented thing/operation of clause 183, wherein said circuitry for accepting input of a request for a particular image of a scene that is larger than the particular image comprises:

circuitry for accepting input of the request for particular image data of the scene, wherein the scene is a sampling of the image data collected by the array of more than one image sensor.

199. The computationally-implemented thing/operation of clause 198, wherein said circuitry for accepting input of the request for particular image data of the scene, wherein the scene is a sampling of the image data collected by the array of more than one image sensor comprises:

circuitry for accepting input of the request for particular image data of the scene, wherein the scene is a subset of the image data collected by the array of more than one image sensor.

200. The computationally-implemented thing/operation of clause 198, wherein said circuitry for accepting input of the request for particular image data of the scene, wherein the scene is a sampling of the image data collected by the array of more than one image sensor comprises:

circuitry for accepting input of the request for particular image data of the scene, wherein the scene is a low-resolution version of the image data collected by the array of more than one image sensor.

201. The computationally-implemented thing/operation of clause 198, wherein said circuitry for accepting input of the request for particular image data of the scene, wherein the scene is a sampling of the image data collected by the array of more than one image sensor comprises:

circuitry for accepting input of the request for particular image data of a scene that is a football game.

202. The computationally-implemented thing/operation of clause 183, wherein said circuitry for accepting input of a request for a particular image of a scene that is larger than the particular image comprises:

circuitry for accepting input of the request for particular image data of a scene that is a street view of an area.

203. The computationally-implemented thing/operation of clause 183, wherein said circuitry for accepting input of a request for a particular image of a scene that is larger than the particular image comprises:

circuitry for acquiring the request for particular image data of a scene that is a tourist destination.

204. The computationally-implemented thing/operation of clause 183, wherein said circuitry for accepting input of a request for a particular image of a scene that is larger than the particular image comprises:

circuitry for acquiring the request for particular image data of a scene that is an inside of a home.

205. The computationally-implemented thing/operation of clause 183, wherein said circuitry for accepting input of a request for a particular image of a scene that is larger than the particular image comprises:

circuitry for accepting input of the request for particular image data of the scene, wherein the particular image data is an image that is a portion of the scene.

206. The computationally-implemented thing/operation of clause 183, wherein said circuitry for accepting input of a request for a particular image of a scene that is larger than the particular image comprises:

circuitry for accepting input of the request for particular image data of the scene, wherein the particular image data includes image data of a particular football player and the scene is a football field.

207. The computationally-implemented thing/operation of clause 206, wherein said circuitry for accepting input of the request for particular image data of the scene, wherein the particular image data includes image data of a particular football player and the scene is a football field comprises:

circuitry for accepting input of the request for particular image data of the scene, wherein the particular image data includes image data of a license plate of a vehicle, and the scene is an image representation of a highway bridge.

208. The computationally-implemented thing/operation of clause 183, wherein said circuitry for accepting input of a request for a particular image of a scene that is larger than the particular image comprises:

circuitry for accepting input of the request for particular image data that is part of the scene, wherein the particular image data includes video data.

209. The computationally-implemented thing/operation of clause 183, wherein said circuitry for accepting input of a request for a particular image of a scene that is larger than the particular image comprises:

circuitry for accepting input of the request for particular image data that is part of the scene, wherein the particular image data includes audio data.

210. The computationally-implemented thing/operation of clause 183, wherein said circuitry for accepting input of a request for a particular image of a scene that is larger than the particular image comprises:

circuitry for accepting input of the request for particular image data that is part of the scene from a requestor device that receives the request for particular image data through an audio interface.

211. The computationally-implemented thing/operation of clause 210, wherein said circuitry for accepting input of the request for particular image data that is part of the scene from a requestor device that receives the request for particular image data through an audio interface comprises:

circuitry for accepting input of the request for particular image data that is part of the scene from the requestor device that has a microphone that receives a spoken request for particular image data from the requestor.

212. The computationally-implemented thing/operation of clause 183, wherein said circuitry for accepting input of a request for a particular image of a scene that is larger than the particular image comprises:

circuitry for accepting input of the request for the particular image of the scene that is larger than the particular image, from the requestor.

213. The computationally-implemented thing/operation of clause 212, wherein said circuitry for accepting input of the request for the particular image of the scene that is larger than the particular image, from the requestor comprises:

circuitry for accepting input of the request for the particular image of the scene that is larger than the particular image, from the requestor that is a client operating a device.

214. The computationally-implemented thing/operation of clause 213, wherein said circuitry for accepting input of the request for the particular image of the scene that is larger than the particular image, from the requestor that is a client operating a device comprises:

circuitry for accepting input of the request for particular image data that is part of the scene from a requestor device that receives the request for particular image data through an audio interface.

215. The computationally-implemented thing/operation of clause 212, wherein said circuitry for accepting input of the request for the particular image of the scene that is larger than the particular image, from the requestor comprises:

circuitry for accepting input of the request for the particular image of the scene that is larger than the particular image, from the requestor that is a device.

216. The computationally-implemented thing/operation of clause 215, wherein said circuitry for accepting input of the request for the particular image of the scene that is larger than the particular image, from the requestor that is a device comprises:

circuitry for accepting input of the request for the particular image of the scene that is larger than the particular image, from the requestor that is a component of a device.

217. The computationally-implemented thing/operation of clause 215, wherein said circuitry for accepting input of the request for the particular image of the scene that is larger than the particular image, from the requestor that is a device comprises:

circuitry for accepting input of the request for the particular image of the scene that is larger than the particular image, from the requestor that is a device that is executing a subroutine.

218. The computationally-implemented thing/operation of clause 215, wherein said circuitry for accepting input of the request for the particular image of the scene that is larger than the particular image, from the requestor that is a device comprises:

circuitry for accepting, at the device, input of the request for the particular image of the scene that is larger than the particular image, from the requestor that is the device that is executing a separate subroutine.

219. The computationally-implemented thing/operation of clause 183, wherein said circuitry for transmitting the request for the particular image to an image sensor array that includes more than one image sensor and that is configured to capture the scene and retain a subset of the scene that includes the request for the particular image of the scene comprises:

circuitry for transmitting the request for the particular image data of the scene to the image sensor array that includes multiple connected image sensors and that is configured to capture the scene and retain the subset of the scene that includes the request for the particular image of the scene.

220. The computationally-implemented thing/operation of clause 183, wherein said circuitry for transmitting the request for the particular image to an image sensor array that includes more than one image sensor and that is configured to capture the scene and retain a subset of the scene that includes the request for the particular image of the scene comprises:

circuitry for transmitting the request for the particular image data of the scene to the image sensor array that includes two image sensors arranged side by side and angled toward each other and that is configured to capture the scene that is larger than the requested image data.

221. The computationally-implemented thing/operation of clause 183, wherein said circuitry for transmitting the request for the particular image to an image sensor array that includes more than one image sensor and that is configured to capture the scene and retain a subset of the scene that includes the request for the particular image of the scene comprises:

circuitry for transmitting the request for the particular image data of the scene to the image sensor array that includes the array of image sensors arranged in a grid and that is configured to capture the scene that is larger than the requested image data.

222. The computationally-implemented thing/operation of clause 183, wherein said circuitry for transmitting the request for the particular image to an image sensor array that includes more than one image sensor and that is configured to capture the scene and retain a subset of the scene that includes the request for the particular image of the scene comprises:

circuitry for transmitting the request for the particular image data of the scene to the image sensor array that includes the array of image sensors arranged in a line such that a field of view is greater than 120 degrees and that is configured to capture the scene that is larger than the requested image data.
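
For a non-limiting sense of the arithmetic behind clause 222, and not as part of the claims, the combined horizontal field of view of a line of sensors can be estimated as the sum of the per-sensor fields of view minus the pairwise overlaps; the sensor count, per-sensor field of view, and overlap below are hypothetical:

    # Illustrative sketch: combined horizontal field of view of image
    # sensors arranged in a line. The 120-degree figure in clause 222 is
    # exceeded once the per-sensor fields of view, less their overlaps,
    # sum past 120. All numbers are hypothetical.
    def combined_fov(sensor_count, fov_per_sensor_deg, overlap_deg):
        # Each adjacent pair of sensors shares `overlap_deg` of coverage.
        return sensor_count * fov_per_sensor_deg - (sensor_count - 1) * overlap_deg

    print(combined_fov(4, 40.0, 5.0))  # 145.0 degrees, i.e. greater than 120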

223. The computationally-implemented thing/operation of clause 183, wherein said circuitry for transmitting the request for the particular image to an image sensor array that includes more than one image sensor and that is configured to capture the scene and retain a subset of the scene that includes the request for the particular image of the scene comprises:

circuitry for transmitting the request for the particular image to the image sensor array that includes more than one image sensor and that is configured to capture the scene that represents more image data than the requested particular image data.

224. The computationally-implemented thing/operation of clause 223, wherein said circuitry for transmitting the request for the particular image to the image sensor array that includes more than one image sensor and that is configured to capture the scene that represents more image data than the requested particular image data comprises:

circuitry for transmitting the request for the particular image to the image sensor array that includes more than one image sensor and that is configured to capture the scene that represents more than ten times as much image data as the requested particular image data.

225. The computationally-implemented thing/operation of clause 223, wherein said circuitry for transmitting the request for the particular image to the image sensor array that includes more than one image sensor and that is configured to capture the scene that represents more image data than the requested particular image data comprises:

circuitry for transmitting the request for the particular image to the image sensor array that includes more than one image sensor and that is configured to capture the scene that represents more than one hundred times as much image data as the requested particular image data.

226. The computationally-implemented thing/operation of clause 183, wherein said circuitry for transmitting the request for the particular image to an image sensor array that includes more than one image sensor and that is configured to capture the scene and retain a subset of the scene that includes the request for the particular image of the scene comprises:

circuitry for transmitting the request for the particular image to a remote server that is configured to relay the request for the particular image to the image sensor array that includes more than one image sensor.

227. The computationally-implemented thing/operation of clause 226, wherein said circuitry for transmitting the request for the particular image to a remote server that is configured to relay the request for the particular image to the image sensor array that includes more than one image sensor comprises:

circuitry for transmitting the request for the particular image to the remote server that is configured to relay the request for the particular image to the image sensor array that includes more than one image sensor along with one or more other requests for other particular images.

228. The computationally-implemented thing/operation of clause 226, wherein said circuitry for transmitting the request for the particular image to a remote server that is configured to relay the request for the particular image to the image sensor array that includes more than one image sensor comprises:

circuitry for transmitting the request for the particular image to the remote server that is configured to package multiple requests that include the request for the particular image and to transmit the package of multiple requests to the image sensor array.

229. The computationally-implemented thing/operation of clause 228, wherein said circuitry for transmitting the request for the particular image to the remote server that is configured to package multiple requests that include the request for the particular image and to transmit the package of multiple requests to the image sensor array comprises:

circuitry for transmitting the request for the particular image to the remote server that is configured to combine multiple requests that include the request for the particular image and to transmit the package of multiple requests with redundant data requests removed to the image sensor array.

230. The computationally-implemented thing/operation of clause 229, wherein said circuitry for transmitting the request for the particular image to the remote server that is configured to combine multiple requests that include the request for the particular image and to transmit the package of multiple requests with redundant data requests removed to the image sensor array comprises:

circuitry for transmitting the request for the particular image to the remote server that is configured to combine multiple requests from multiple requestors that include the request for the particular image, and to transmit the multiple requests as a single combined request for image data to the image sensor array.
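
As a non-limiting illustrative sketch of the combining and deduplication recited in clauses 228 through 230, and not as part of the claims, a remote server might union the requested tile regions so that redundant data requests are removed before a single combined request is transmitted; all names here are hypothetical:

    # Illustrative sketch: combine several requestors' region requests
    # into one duplicate-free request. Regions are (left, top, right,
    # bottom) tile indices; names are hypothetical.
    def combine_requests(requests):
        tiles = set()
        for left, top, right, bottom in requests:
            for x in range(left, right + 1):
                for y in range(top, bottom + 1):
                    tiles.add((x, y))
        return sorted(tiles)  # one combined, duplicate-free tile list

    # Two overlapping viewer requests collapse to 11 unique tiles, not 15.
    combined = combine_requests([(0, 0, 2, 2), (1, 1, 3, 2)])
    print(len(combined))  # 11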

231. The computationally-implemented thing/operation of clause 183, wherein said circuitry for transmitting the request for the particular image to an image sensor array that includes more than one image sensor and that is configured to capture the scene and retain a subset of the scene that includes the request for the particular image of the scene comprises:

circuitry for modifying the request for the particular image into an updated request for particular image data; and

circuitry for transmitting the updated request for particular image data to the image sensor array that includes more than one image sensor and that is configured to capture the scene and retain the subset of the scene that includes the request for the particular image of the scene.

232. The computationally-implemented thing/operation of clause 231, wherein said circuitry for modifying the request for the particular image into an updated request for particular image data comprises:

circuitry for modifying the request for the particular image into an updated request for particular image data that identifies a portion of the image as targeted for updating.

233. The computationally-implemented thing/operation of clause 232, wherein said circuitry for modifying the request for the particular image into an updated request for particular image data that identifies a portion of the image as targeted for updating comprises:

circuitry for modifying the request for the particular image, which is a request for an image of a football player in a football game, into an updated request for particular image data that identifies the portion of the image as targeted for updating as the portion of the image that includes pixels that represent three spatial feet in all directions from the football player in the football game.

234. The computationally-implemented thing/operation of clause 233, wherein said circuitry for modifying the request for the particular image, which is a request for an image of a football player in a football game, into an updated request for particular image data that identifies the portion of the image as targeted for updating as the portion of the image that includes pixels that represent three spatial feet in all directions from the football player in the football game comprises:

circuitry for modifying the request for the particular image, which is a request for an image of a football player in a football game, into an updated request for particular image data that identifies the portion of the image as targeted for updating as the portion of the image that includes pixels that represent three spatial feet in all directions from the football player in the football game based on an algorithm that identified that portion of the image as most likely to have changed since a previous image.
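
As a non-limiting illustration of clauses 233 and 234, and not as part of the claims, the portion targeted for updating, i.e., the pixels within three spatial feet of the tracked player, might be computed from a hypothetical pixels-per-foot scale:

    # Illustrative sketch: pixel window covering three spatial feet in all
    # directions from a tracked player, clamped to the scene bounds. The
    # scale and coordinates are hypothetical.
    def update_window(player_x, player_y, pixels_per_foot, width, height, feet=3):
        margin = feet * pixels_per_foot
        left = max(0, player_x - margin)
        top = max(0, player_y - margin)
        right = min(width - 1, player_x + margin)
        bottom = min(height - 1, player_y + margin)
        return left, top, right, bottom

    # Player at (400, 250) with 20 px/ft: a 121x121-pixel update box.
    print(update_window(400, 250, 20, 1920, 1080))  # (340, 190, 460, 310)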

235. The computationally-implemented thing/operation of clause 232, wherein said circuitry for modifying the request for the particular image into an updated request for particular image data that identifies a portion of the image as targeted for updating comprises:

circuitry for modifying the request for the particular image into an updated request for particular image data that identifies the portion of the image as targeted for updating based on one or more previously received images.

236. The computationally-implemented thing/operation of clause 235, wherein said circuitry for modifying the request for the particular image into an updated request for particular image data that identifies the portion of the image as targeted for updating based on one or more previously received images comprises:

circuitry for comparing one or more previously received images that are determined to be similar to the particular image; and

circuitry for modifying the request for the particular image into the updated request for particular image data that identifies the portion of the image as targeted for updating based on the compared one or more previously received images.

237. The computationally-implemented thing/operation of clause 236, wherein said circuitry for comparing one or more previously received images that are determined to be similar to the particular image comprises:

circuitry for comparing one or more previously received images that are determined to be similar to the particular image to identify a portion of the particular image as targeted for updating.

238. The computationally-implemented thing/operation of clause 236, wherein said circuitry for comparing one or more previously received images that are determined to be similar to the particular image comprises:

circuitry for comparing a first previously received image with a second previously received image to determine a changed portion of the first previously received image that is different than a portion of the second previously received image; and

circuitry for identifying the portion of the particular image that corresponds to the changed portion.
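
A non-limiting sketch of the comparison recited in clauses 236 through 238 follows, not as part of the claims; the tile size and threshold are hypothetical. Two previously received frames are compared tile by tile, and tiles whose difference exceeds a threshold are identified as changed portions targeted for updating:

    # Illustrative sketch: flag tiles that changed between two previously
    # received frames (2-D lists of brightness values). Names are
    # hypothetical.
    def changed_tiles(first, second, tile, threshold):
        rows, cols = len(first), len(first[0])
        flagged = []
        for ty in range(0, rows, tile):
            for tx in range(0, cols, tile):
                diff = sum(
                    abs(first[y][x] - second[y][x])
                    for y in range(ty, min(ty + tile, rows))
                    for x in range(tx, min(tx + tile, cols))
                )
                if diff > threshold:
                    flagged.append((tx, ty))  # target this tile for updating
        return flagged

    a = [[0] * 4 for _ in range(4)]
    b = [[0] * 4 for _ in range(4)]
    b[3][3] = 255  # only the bottom-right tile differs
    print(changed_tiles(a, b, 2, 50))  # [(2, 2)]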

239. The computationally-implemented thing/operation of clause 183, wherein said circuitry for transmitting the request for the particular image to an image sensor array that includes more than one image sensor and that is configured to capture the scene and retain a subset of the scene that includes the request for the particular image of the scene comprises:

circuitry for generating an expanded request for the particular image; and

circuitry for transmitting the generated expanded request for the particular image to the image sensor array that includes more than one image sensor and that is configured to capture the scene and retain the subset of the scene that includes the request for the particular image of the scene.

240. The computationally-implemented thing/operation of clause 239, wherein said circuitry for generating an expanded request for the particular image comprises:

circuitry for generating an expanded request for the particular image that includes the request for the particular image and a request for image data that borders the particular image.

241. The computationally-implemented thing/operation of clause 240, wherein said circuitry for generating an expanded request for the particular image that includes the request for the particular image and a request for image data that borders the particular image comprises:

circuitry for generating the expanded request for the particular image that includes the request for the particular image and the request for image data that borders the particular image on all four sides of the particular image.

242. The computationally-implemented thing/operation of clause 240, wherein said circuitry for generating an expanded request for the particular image that includes the request for the particular image and a request for image data that borders the particular image comprises:

circuitry for determining a projected next side image that is an image that borders the particular image; and

circuitry for generating the expanded request for the particular image that includes the request for the particular image and a request for the projected next side image.

243. The computationally-implemented thing/operation of clause 242, wherein said circuitry for determining a projected next side image that is an image that borders the particular image comprises:

circuitry for determining a projected next side image based on a direction in which a device associated with the requestor is moving.

244. The computationally-implemented thing/operation of clause 243, wherein said circuitry for determining a projected next side image based on a direction in which a device associated with the requestor is moving comprises:

circuitry for determining the projected next side image based on the direction that a head of the requestor is turning while the requestor is wearing a virtual reality device.
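
As a non-limiting illustration of clauses 243 and 244, and not as part of the claims, the projected next side image might be chosen from the direction in which the requestor's head is turning; the rates and dead zone below are hypothetical:

    # Illustrative sketch: pick the bordering image to prefetch from the
    # direction a virtual reality headset is turning. Yaw and pitch rates
    # are in degrees per second; names are hypothetical.
    def projected_next_side(yaw_rate, pitch_rate, dead_zone=5.0):
        if abs(yaw_rate) >= abs(pitch_rate):
            if yaw_rate > dead_zone:
                return "right"
            if yaw_rate < -dead_zone:
                return "left"
        else:
            if pitch_rate > dead_zone:
                return "up"
            if pitch_rate < -dead_zone:
                return "down"
        return None  # head roughly still; no border image projected

    print(projected_next_side(30.0, 2.0))  # "right"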

245. The computationally-implemented thing/operation of clause 239, wherein said circuitry for generating an expanded request for the particular image comprises:

circuitry for generating an expanded request for the particular image that includes the request for the particular image, a request for border image data that borders the particular image on each side, and a request for secondary border image data that borders the border image data.

246. The computationally-implemented thing/operation of clause 245, wherein said circuitry for generating an expanded request for the particular image that includes the request for the particular image, a request for border image data that borders the particular image on each side, and a request for secondary border image data that borders the border image data comprises:

circuitry for generating the expanded request for the particular image that includes the request for the particular image at a first resolution, the request for the border image data at a second resolution less than the first resolution, and the request for the secondary border image data at a third resolution less than or equal to the second resolution.
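
A non-limiting sketch of the expanded request of clauses 245 and 246 follows, not as part of the claims; the regions and scales are hypothetical. The particular image is requested at a first resolution, border image data at a lower second resolution, and secondary border image data at a third resolution no greater than the second:

    # Illustrative sketch: build a three-ring expanded request. The region
    # tuple is (left, top, right, bottom) in scene pixels; names are
    # hypothetical.
    def expanded_request(region, border_px):
        left, top, right, bottom = region
        border = (left - border_px, top - border_px,
                  right + border_px, bottom + border_px)
        secondary = (left - 2 * border_px, top - 2 * border_px,
                     right + 2 * border_px, bottom + 2 * border_px)
        return [
            {"region": region, "scale": 1.0},      # particular image, first resolution
            {"region": border, "scale": 0.5},      # border data, second resolution
            {"region": secondary, "scale": 0.25},  # secondary border, third resolution
        ]

    print(expanded_request((100, 100, 200, 180), 40)[1])
    # {'region': (60, 60, 240, 220), 'scale': 0.5}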

247. The computationally-implemented thing/operation of clause 183, wherein said circuitry for receiving only the particular image from the image sensor array, wherein the particular image represents a subset of the scene, and wherein a size characteristic of the particular image is at least partially based on a property of a requestor comprises:

circuitry for receiving only the particular image from the image sensor array, wherein the particular image represents fewer pixels than the scene.

248. The computationally-implemented thing/operation of clause 183, wherein said circuitry for receiving only the particular image from the image sensor array, wherein the particular image represents a subset of the scene, and wherein a size characteristic of the particular image is at least partially based on a property of a requestor comprises:

circuitry for receiving only the particular image from the image sensor array, wherein the particular image represents a smaller geographic area than a geographic area represented by the scene.

249. The computationally-implemented thing/operation of clause 183, wherein said circuitry for receiving only the particular image from the image sensor array, wherein the particular image represents a subset of the scene, and wherein a size characteristic of the particular image is at least partially based on a property of a requestor comprises:

circuitry for receiving only the particular image from a remote server that received a portion of the scene, wherein the remote server received the portion of the scene that included the request for the particular image and a second request for a second particular image that is at least partially different than the first particular image.

250. The computationally-implemented thing/operation of clause 183, wherein said circuitry for receiving only the particular image from the image sensor array, wherein the particular image represents a subset of the scene, and wherein a size characteristic of the particular image is at least partially based on a property of a requestor comprises:

circuitry for receiving only the particular image from the image sensor array, wherein the particular image represents the subset of the scene, and wherein the image sensor array discarded portions of the scene other than the particular image.

251. The computationally-implemented thing/operation of clause 183, wherein said circuitry for receiving only the particular image from the image sensor array, wherein the particular image represents a subset of the scene, and wherein a size characteristic of the particular image is at least partially based on a property of a requestor comprises:

circuitry for receiving only the particular image data from the image sensor array, wherein data from the scene other than the particular image data is stored at the image sensor array.

252. The computationally-implemented thing/operation of clause 183, wherein said circuitry for receiving only the particular image from the image sensor array, wherein the particular image represents a subset of the scene, and wherein a size characteristic of the particular image is at least partially based on a property of a requestor comprises:

circuitry for receiving only the particular image data from a remote server configured to communicate with the image sensor array, wherein a first portion of the scene data other than the particular image is stored at the image sensor array, and a second portion of the scene data other than the particular image is stored at the remote server.

253. The computationally-implemented thing/operation of clause 183, wherein said circuitry for receiving only the particular image from the image sensor array, wherein the particular image represents a subset of the scene, and wherein a size characteristic of the particular image is at least partially based on a property of a requestor comprises:

circuitry for receiving only the particular image from the image sensor array, wherein the particular image represents the subset of the scene, and wherein the size characteristic of the particular image is at least partially based on a property of a requestor device that is associated with the requestor.

254. The computationally-implemented thing/operation of clause 253, wherein said circuitry for receiving only the particular image from the image sensor array, wherein the particular image represents the subset of the scene, and wherein the size characteristic of the particular image is at least partially based on a property of a requestor device that is associated with the requestor comprises:

circuitry for receiving only the particular image from the image sensor array, wherein the particular image represents the subset of the scene, and wherein the size characteristic of the particular image is at least partially based on a property of the requestor device that is configured to store data about the requestor.

255. The computationally-implemented thing/operation of clause 253, wherein said circuitry for receiving only the particular image from the image sensor array, wherein the particular image represents the subset of the scene, and wherein the size characteristic of the particular image is at least partially based on a property of a requestor device that is associated with the requestor comprises:

circuitry for receiving only the particular image from the image sensor array, wherein the particular image represents a subset of the scene, and wherein a data size of particular image data of the particular image is at least partially based on an available bandwidth of the connection between the image sensor array and the device associated with the requestor.

256. The computationally-implemented thing/operation of clause 253, wherein said circuitry for receiving only the particular image from the image sensor array, wherein the particular image represents the subset of the scene, and wherein the size characteristic of the particular image is at least partially based on a property of a requestor device that is associated with the requestor comprises:

circuitry for receiving only the particular image from the image sensor array, wherein the data size of particular image data of the particular image is at least partially based on an available bandwidth of a connection between the device associated with the requestor and a remote server configured to communicate with the image sensor array.

257. The computationally-implemented thing/operation of clause 183, wherein said circuitry for receiving only the particular image from the image sensor array, wherein the particular image represents a subset of the scene, and wherein a size characteristic of the particular image is at least partially based on a property of a requestor comprises:

circuitry for receiving only the particular image from the image sensor array, wherein the particular image represents a subset of the scene, and wherein a data size of particular image data of the particular image is at least partially based on a screen size of a requestor device associated with the requestor.

258. The computationally-implemented thing/operation of clause 183, wherein said circuitry for receiving only the particular image from the image sensor array, wherein the particular image represents a subset of the scene, and wherein a size characteristic of the particular image is at least partially based on a property of a requestor comprises:

circuitry for receiving only the particular image from the image sensor array, wherein the particular image represents a subset of the scene, and wherein a data size of particular image data of the particular image is at least partially based on a maximum resolution of the requestor device associated with the requestor.
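
As a non-limiting illustration of clauses 255 through 258, and not as part of the claims, the delivered data size might be capped by the requestor device's screen size, its maximum resolution, and the available bandwidth; every figure below is hypothetical:

    # Illustrative sketch: size the delivered image from requestor-device
    # properties. The bytes-per-pixel figure and all numbers are
    # hypothetical.
    def delivered_size(screen_w, screen_h, max_w, max_h,
                       bandwidth_bytes_per_s, frame_rate, bytes_per_pixel=3):
        # Never send more pixels than the device can show.
        w, h = min(screen_w, max_w), min(screen_h, max_h)
        budget = bandwidth_bytes_per_s / frame_rate  # bytes per frame
        while w * h * bytes_per_pixel > budget:
            w, h = w // 2, h // 2  # halve until the frame fits the link
        return w, h

    # A 1920x1080 device on a constrained link gets a 960x540 image.
    print(delivered_size(1920, 1080, 3840, 2160, 10_000_000, 2))  # (960, 540)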

259. The computationally-implemented thing/operation of clause 183, wherein said circuitry for presenting the received particular image to the requestor comprises:

circuitry for presenting the received particular image on a viewfinder of a device associated with the requestor.

260. The computationally-implemented thing/operation of clause 259, wherein said circuitry for presenting the received particular image on a viewfinder of a device associated with the requestor comprises:

circuitry for displaying the received particular image on the viewfinder of the device associated with the requestor, wherein the device associated with the requestor is one or more of a cellular telephone device, a tablet device, a smartphone device, a laptop computer, a desktop computer, a television, and a wearable computer.

261. The computationally-implemented thing/operation of clause 183, wherein said circuitry for presenting the received particular image to the requestor comprises:

circuitry for presenting the received particular image that is an image of a football player in a football game to the requestor that is a spectator of the game that watches the football game on an Internet-enabled television.

262. The computationally-implemented thing/operation of clause 183, wherein said circuitry for presenting the received particular image to the requestor comprises:

circuitry for presenting the received particular image that is an image of an eagle at an animal watering hole to the requestor that is a naturalist that monitors the animal watering hole from a screen on their smart watch.

263. The computationally-implemented thing/operation of clause 183, wherein said circuitry for presenting the received particular image to the requestor comprises:

circuitry for modifying the received particular image into a modified particular image; and

circuitry for presenting the modified received particular image to the requestor.

264. The computationally-implemented thing/operation of clause 263, wherein said circuitry for modifying the received particular image into a modified particular image comprises:

circuitry for modifying the received particular image into a modified particular image, wherein the received particular image includes only portions of the scene that have changed since the last time the particular image was displayed.

265. The computationally-implemented thing/operation of clause 264, wherein said circuitry for modifying the received particular image into a modified particular image, wherein the received particular image includes only portions of the scene that have changed since the last time the particular image was displayed comprises:

circuitry for modifying the received particular image into a modified particular image, wherein one or more portions of the received particular image that have not changed are updated with existing image data.
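
A non-limiting sketch of clauses 264 and 265 follows, not as part of the claims; names are hypothetical. The received particular image carries only the changed portions, and unchanged portions are updated from existing image data:

    # Illustrative sketch: merge received changed tiles into a cached
    # frame. Frames are dicts mapping tile coordinates to pixel blocks;
    # names are hypothetical.
    def apply_delta(cached_frame, changed_tiles):
        merged = dict(cached_frame)   # start from existing image data
        merged.update(changed_tiles)  # overwrite only what changed
        return merged

    cached = {(0, 0): "sky", (1, 0): "crowd", (0, 1): "field"}
    delta = {(0, 1): "field+player"}  # only this tile changed
    print(apply_delta(cached, delta)[(0, 1)])  # "field+player"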

266. The computationally-implemented thing/operation of clause 183, wherein said circuitry for presenting the received particular image to the requestor comprises:

circuitry for presenting a portion of the received particular image to the requestor.

267. The computationally-implemented thing/operation of clause 266, wherein said circuitry for presenting a portion of the received particular image to the requestor comprises:

circuitry for presenting a first portion of the received particular image to the requestor; and

circuitry for storing a second portion of the received particular image.

268. The computationally-implemented thing/operation of clause 267, wherein said circuitry for storing a second portion of the received particular image comprises:

circuitry for storing a second portion of the received particular image that is adjacent to the first portion of the received particular image and is configured to be used as cached image data when the requestor requests an image corresponding to the second portion of the received particular image.

269. The computationally-implemented thing/operation of clause 267, wherein said circuitry for storing a second portion of the received particular image comprises:

circuitry for storing a second portion of the received particular image, wherein the second portion of the received particular image is received at a lower resolution than the first portion of the received particular image.
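
As a non-limiting illustration of clauses 267 through 269, and not as part of the claims, the second portion might be stored as cached image data, possibly at lower resolution, and served when the requestor later requests the corresponding image; all names are hypothetical:

    # Illustrative sketch: cache an adjacent portion so a later pan can be
    # served immediately. Names are hypothetical.
    class PortionCache:
        def __init__(self):
            self._store = {}

        def keep(self, tile, pixels, scale):
            # e.g. the tile adjacent to the displayed one, at half scale
            self._store[tile] = (pixels, scale)

        def serve(self, tile):
            # Used as cached image data when the requestor pans to `tile`;
            # returns None on a miss, in which case a new request is sent.
            return self._store.get(tile)

    cache = PortionCache()
    cache.keep((1, 0), "low-res pixels", 0.5)
    print(cache.serve((1, 0)))  # ('low-res pixels', 0.5)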

270. The computationally-implemented thing/operation of clause 183, wherein said circuitry for presenting the received particular image to the requestor comprises:

circuitry for transmitting the received particular image to a component configured to analyze the received particular image.

271. The computationally-implemented thing/operation of clause 183, wherein said circuitry for presenting the received particular image to the requestor comprises:

circuitry for transmitting the received particular image to a component configured to store the received particular image.

272. The computationally-implemented thing/operation of clause 183, wherein said circuitry for presenting the received particular image to the requestor comprises:

circuitry for presenting the received particular image to the requestor, wherein the requestor is a client.

273. The computationally-implemented thing/operation of clause 183, wherein said circuitry for presenting the received particular image to the requestor comprises:

circuitry for presenting the received particular image to the requestor, wherein the requestor is a component of a device.

274. A thing/operation, comprising:

a signal-bearing medium bearing:

one or more instructions for accepting input of a request for a particular image of a scene that is larger than the particular image;

one or more instructions for transmitting the request for the particular image to an image sensor array that includes more than one image sensor and that is configured to capture the scene and retain a subset of the scene that includes the request for the particular image of the scene;

one or more instructions for receiving only the particular image from the image sensor array, wherein the particular image represents a subset of the scene, and wherein a size characteristic of the particular image is at least partially based on a property of a requestor; and

one or more instructions for presenting the received particular image to the requestor.

275. A thing/operation, comprising:

one or more interchained physical machines ordered for accepting input of a request for a particular image of a scene that is larger than the particular image;

one or more interchained physical machines ordered for transmitting the request for the particular image to an image sensor array that includes more than one image sensor and that is configured to capture the scene and retain a subset of the scene that includes the request for the particular image of the scene;

one or more interchained physical machines ordered for receiving only the particular image from the image sensor array, wherein the particular image represents a subset of the scene, and wherein a size characteristic of the particular image is at least partially based on a property of a requestor; and

one or more interchained physical machines ordered for presenting the received particular image to the requestor.

276. A thing/operation, comprising:

an input of a request for particular image data accepting module, wherein the particular image data is part of a scene that is larger than a particular image associated with the particular image data;

an inputted request for the particular image data transmitting module configured to transmit the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene and to retain a subset of the scene that includes the request for the particular image data of the scene;

a particular image data from the image sensor array exclusive receiving module configured to receive only the particular image data from the image sensor array, wherein the particular image represents a subset of the scene and wherein a size characteristic of the particular image data is at least partially based on a feature of a requestor; and

a received particular image to the requestor presenting module configured to present the received particular image to the requestor.

277. The thing/operation of clause 276, wherein said input of a request for particular image data accepting module, wherein the particular image data is part of a scene that is larger than a particular image associated with the particular image data comprises:

an input of a request for particular image data accepting module, wherein the particular image data is part of a scene that is larger than a particular image associated with the particular image data.

278. The thing/operation of clause 276, wherein said input of a request for particular image data accepting module, wherein the particular image data is part of a scene that is larger than a particular image associated with the particular image data comprises:

an input of a request for particular image data accepting module configured to accept input from an automated component for the request for particular image data.

279. The thing/operation of clause 278, wherein said input of a request for particular image data accepting module configured to accept input from an automated component for the request for particular image data comprises:

an image object tracking algorithm for particular image data accepting module, wherein the particular image data is part of a scene that is larger than a particular image associated with the particular image data and the requested particular image data includes a tracked image object present in the scene.

280. The thing/operation of clause 279, wherein said image object tracking algorithm for particular image data accepting module comprises:

an image object tracking algorithm for particular image data accepting through a requestor device interface module, wherein the particular image data is part of a scene that is larger than a particular image associated with the particular image data and the requested particular image data includes a tracked image object present in the scene.

281. The thing/operation of clause 279, wherein said image object tracking algorithm for particular image data accepting module comprises:

an image object tracking algorithm for particular image data accepting through an interface of an Internet-enabled television device interface module, wherein the particular image data is part of a scene that is larger than a particular image associated with the particular image data and the requested particular image data includes a tracked image object that is a football player present in the scene that is a football field.

282. The thing/operation of clause 276, wherein said input of a request for particular image data accepting module, wherein the particular image data is part of a scene that is larger than a particular image associated with the particular image data comprises:

an input of a request for particular image data accepting through a requestor device interface module, wherein the particular image data is part of a scene that is larger than a particular image associated with the particular image data.

283. The thing/operation of clause 282, wherein said input of a request for particular image data accepting through a requestor device interface module, wherein the particular image data is part of a scene that is larger than a particular image associated with the particular image data comprises:

an input of a request for particular image data accepting through a requestor-associated device interface module, wherein the requestor device interface is configured to display at least a portion of the scene.

284. The thing/operation of clause 283, wherein said input of a request for particular image data accepting through a requestor-associated device interface module, wherein the requestor device interface is configured to display at least a portion of the scene comprises:

an input of a request for particular image data accepting through the requestor-associated device interface module, wherein the requestor device interface is configured to display at least a portion of the scene in a viewfinder.

285. The thing/operation of clause 283, wherein said input of a request for particular image data accepting through a requestor-associated device interface module, wherein the requestor device interface is configured to display at least a portion of the scene comprises:

an input of a request for particular image data that is at least partially based on a view of the scene accepting through the requestor-associated device interface module, wherein the requestor-associated device interface is configured to display at least a portion of the scene.

286. The thing/operation of clause 283, wherein said input of a request for particular image data accepting through a requestor-associated device interface module, wherein the requestor device interface is configured to display at least a portion of the scene comprises:

an input of a request for particular image data accepting through a specific requestor-associated device interface module, wherein the specific requestor-associated device is one or more of a smartphone, television, computer screen, tablet, camera, appliance, and augmented reality device.

287. The thing/operation of clause 276, wherein said input of a request for particular image data accepting module, wherein the particular image data is part of a scene that is larger than a particular image associated with the particular image data comprises:

an input of the request for particular image data accepting module, wherein the particular image data is part of the scene that contains more pixels than the particular image associated with the particular image data.

288. The thing/operation of clause 276, wherein said input of a request for particular image data accepting module, wherein the particular image data is part of a scene that is larger than a particular image associated with the particular image data comprises:

an input of the request for particular image data accepting module, wherein the particular image data is part of the scene that captures a larger spatial area than the particular image.

289. The thing/operation of clause 276, wherein said input of a request for particular image data accepting module, wherein the particular image data is part of a scene that is larger than a particular image associated with the particular image data comprises:

an input of the request for particular image data accepting module, wherein the particular image data is part of the scene that includes more data than the particular image.

290. The thing/operation of clause 276, wherein said input of a request for particular image data accepting module, wherein the particular image data is part of a scene that is larger than a particular image associated with the particular image data comprises:

an input of the request for particular image data accepting module, wherein the scene is a representation of the image data collected by the array of more than one image sensor.

291. The thing/operation of clause 290, wherein said input of the request for particular image data accepting module, wherein the scene is a representation of the image data collected by the array of more than one image sensor comprises:

an input of a request for particular image data accepting module, wherein the particular image data is part of a scene that is a sampling of the image data collected by the array of more than one image sensor.

292. The thing/operation of clause 290, wherein said input of the request for particular image data accepting module, wherein the scene is a representation of the image data collected by the array of more than one image sensor comprises:

an input of the request for particular image data accepting module, wherein the scene is a subset of the image data collected by the array of more than one image sensor.

293. The thing/operation of clause 290, wherein said input of the request for particular image data accepting module, wherein the scene is a representation of the image data collected by the array of more than one image sensor comprises:

an input of a request for particular image data accepting module, wherein the particular image data is part of a scene that is a lower resolution expression of the image data collected by the array of more than one image sensor.

294. The thing/operation of clause 276, wherein said input of a request for particular image data accepting module, wherein the particular image data is part of a scene that is larger than a particular image associated with the particular image data comprises:

an input of a request for particular image data accepting module, wherein the particular image data is part of a scene that is an animal oasis.

295. The thing/operation of clause 276, wherein said input of a request for particular image data accepting module, wherein the particular image data is part of a scene that is larger than a particular image associated with the particular image data comprises:

an input of a request for particular image data accepting module, wherein the particular image data is part of a scene that is a street view of an area.

296. The thing/operation of clause 276, wherein said input of a request for particular image data accepting module, wherein the particular image data is part of a scene that is larger than a particular image associated with the particular image data comprises:

an input of a request for particular image data accepting module, wherein the particular image data is part of a scene that is a tourist destination available for virtual tourism.

297. The thing/operation of clause 276, wherein said input of a request for particular image data accepting module, wherein the particular image data is part of a scene that is larger than a particular image associated with the particular image data comprises:

an input of a request for particular image data accepting module, wherein the particular image data is part of a scene that is an interior of a commercial retail property.

298. The thing/operation of clause 276, wherein said input of a request for particular image data accepting module, wherein the particular image data is part of a scene that is larger than a particular image associated with the particular image data comprises:

an input of a request for particular image data that is a portion of the scene accepting module, wherein the particular image data is an image that is a portion of the scene.

299. The thing/operation of clause 298, wherein said input of a request for particular image data that is a portion of the scene accepting module, wherein the particular image data is an image that is a portion of the scene comprises:

an input of a request for particular image data that includes a particular football player that is a portion of the scene that is a football field accepting module.

300. The thing/operation of clause 298, wherein said input of a request for particular image data that is a portion of the scene accepting module, wherein the particular image data is an image that is a portion of the scene comprises:

an input of a request for particular image data that includes a license plate of a vehicle that is a portion of the scene that is a representation of a highway bridge accepting module.

301. The thing/operation of clause 276, wherein said input of a request for particular image data accepting module, wherein the particular image data is part of a scene that is larger than a particular image associated with the particular image data comprises:

an input of a request for particular image video data accepting module, wherein the particular image video data is part of the scene that is larger than at least the particular image associated with the particular image video data.

302. The thing/operation of clause 276, wherein said input of a request for particular image data accepting module, wherein the particular image data is part of a scene that is larger than a particular image associated with the particular image data comprises:

an input of a request for particular image audio data accepting module, wherein the particular image audio data is part of the scene that is larger than at least the particular image associated with the particular image audio data.

303. The thing/operation of clause 276, wherein said input of a request for particular image data accepting module, wherein the particular image data is part of a scene that is larger than a particular image associated with the particular image data comprises:

an input of a request for particular image data accepting through an audio interface module, wherein the particular image data is part of the scene that is larger than the particular image associated with the particular image data.

304. The thing/operation of clause 303, wherein said input of a request for particular image data accepting through an audio interface module, wherein the particular image data is part of the scene that is larger than the particular image associated with the particular image data comprises:

an input of a request for particular image data accepting through a microphone audio interface module, wherein the particular image data is part of the scene that is larger than the particular image associated with the particular image data.

305. The thing/operation of clause 276, wherein said input of a request for particular image data accepting module, wherein the particular image data is part of a scene that is larger than a particular image associated with the particular image data comprises:

an input of a request for particular image data accepting from the requestor module, wherein the particular image data is part of the scene that is larger than the particular image associated with the particular image data.

306. The thing/operation of clause 305, wherein said input of a request for particular image data accepting from the requestor module, wherein the particular image data is part of the scene that is larger than the particular image associated with the particular image data comprises:

an input of a request for particular image data accepting from the requestor module, wherein the requestor is a client operating a device.

307. The thing/operation of clause 306, wherein said input of a request for particular image data accepting from the requestor module, wherein the requestor is a client operating a device comprises:

an input of a request for particular image data accepting from the requestor module, wherein the requestor is a user operating a smart television with a remote control.

308. The thing/operation of clause 305, wherein said input of a request for particular image data accepting from the requestor module, wherein the particular image data is part of the scene that is larger than the particular image associated with the particular image data comprises:

an input of a request for particular image data accepting from a requestor device module, wherein the requestor is a device.

309. The thing/operation of clause 308, wherein said input of a request for particular image data accepting from a requestor device module, wherein the requestor is a device comprises:

an input of a request for particular image data accepting from a component of a requestor device module, wherein the requestor is a component of the requestor device.

310. The thing/operation of clause 308, wherein said input of a request for particular image data accepting from a requestor device module, wherein the requestor is a device comprises:

an input of a request for particular image data accepting from a requestor device module, wherein the requestor is a device configured to carry out a request subroutine.

311. The thing/operation of clause 308, wherein said input of a request for particular image data accepting from a requestor device module, wherein the requestor is a device comprises:

an input of a request for particular image data accepting at the requestor device module, wherein the requestor is the requestor device that is executing a separate subroutine.

312. The thing/operation of clause 276, wherein said inputted request for the particular image data transmitting module configured to transmit the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene and to retain a subset of the scene that includes the request for the particular image data of the scene comprises:

a request for particular image data transmitting to an image sensor array module configured to transmit the request to the image sensor array that includes multiple connected image sensors and that is configured to capture the scene and retain the subset of the scene that includes the request for the particular image of the scene.

313. The thing/operation of clause 276, wherein said inputted request for the particular image data transmitting module configured to transmit the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene and to retain a subset of the scene that includes the request for the particular image data of the scene comprises:

a request for particular image data transmitting to an image sensor array that includes two inline image sensors angled toward each other module configured to transmit the request to the image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data.

314. The thing/operation of clause 276, wherein said inputted request for the particular image data transmitting module configured to transmit the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene and to retain a subset of the scene that includes the request for the particular image data of the scene comprises:

a request for particular image data transmitting to the image sensor array that includes a pattern of image sensors arranged in a grid module configured to transmit the request to the image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data.

315. The thing/operation of clause 276, wherein said inputted request for the particular image data transmitting module configured to transmit the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene and to retain a subset of the scene that includes the request for the particular image data of the scene comprises:

a request for particular image data transmitting to the image sensor array that includes a pattern of image sensors arranged in a line module configured to transmit the request to the image sensor array that has a field of view greater than one hundred twenty degrees and that is configured to capture the scene that is larger than the requested particular image data.
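
By way of illustration and not limitation, the following Python sketch models the sensor-array geometries recited in clauses 312-315. Every name and figure in it (SensorArray, the 5x4 grid, the per-sensor resolution and field of view) is hypothetical and chosen only to show how a grid or line of image sensors can capture a scene with a combined field of view greater than one hundred twenty degrees:

```python
# Hypothetical model of an image sensor array arranged in a grid or a line.
from dataclasses import dataclass

@dataclass
class SensorArray:
    rows: int                 # sensor rows (1 models an inline arrangement)
    cols: int                 # sensors per row
    sensor_px_w: int          # horizontal pixels per sensor
    sensor_px_h: int          # vertical pixels per sensor
    sensor_fov_deg: float     # horizontal field of view of one sensor
    overlap_deg: float = 0.0  # angular overlap of sensors angled toward each other

    def scene_pixels(self) -> int:
        """Total pixels captured across the whole array (overlap ignored)."""
        return self.rows * self.cols * self.sensor_px_w * self.sensor_px_h

    def combined_fov_deg(self) -> float:
        """Combined horizontal field of view of one row of sensors."""
        return self.cols * self.sensor_fov_deg - (self.cols - 1) * self.overlap_deg

# A 5x4 grid of roughly 10-megapixel sensors, each covering 40 degrees,
# with adjacent sensors overlapping by 5 degrees.
array = SensorArray(rows=4, cols=5, sensor_px_w=3840, sensor_px_h=2748,
                    sensor_fov_deg=40.0, overlap_deg=5.0)
print(array.scene_pixels())      # 211046400 pixels for the full scene
print(array.combined_fov_deg())  # 180.0 -> greater than one hundred twenty degrees
```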

316. The thing/operation of clause 276, wherein said inputted request for the particular image data transmitting module configured to transmit the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene and to retain a subset of the scene that includes the request for the particular image data of the scene comprises:

a request for particular image data transmitting to the image sensor array module configured to transmit the request to the image sensor array that includes more than one image sensor and that is configured to capture the scene that represents more image data than the requested particular image data.

317. The thing/operation of clause 316, wherein said request for particular image data transmitting to the image sensor array module configured to transmit the request to the image sensor array that includes more than one image sensor and that is configured to capture the scene that represents more image data than the requested particular image data comprises:

a request for particular image data transmitting to the image sensor array module configured to transmit the request to the image sensor array that includes more than one image sensor and that is configured to capture the scene that represents ten times more image data than the requested particular image data.

318. The thing/operation of clause 316, wherein said request for particular image data transmitting to the image sensor array module configured to transmit the request to the image sensor array that includes more than one image sensor and that is configured to capture the scene that represents more image data than the requested particular image data comprises:

a request for particular image data transmitting to the image sensor array module configured to transmit the request to the image sensor array that includes more than one image sensor and that is configured to capture the scene that represents more than one hundred times more image data than the requested particular image data.
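
By way of illustration and not limitation, the ten-times and one-hundred-times relationships of clauses 317 and 318 reduce to simple arithmetic; the scene and request sizes below are hypothetical examples, not values taken from the claims:

```python
# Hypothetical sizes: the captured scene versus one requested image.
scene_pixels = 211_046_400      # e.g. a 5x4 array of ~10-megapixel sensors
requested_pixels = 1920 * 1080  # one full-HD region requested by a user

ratio = scene_pixels / requested_pixels
print(f"scene represents {ratio:.0f}x the requested image data")  # ~102x
```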

319. The thing/operation of clause 276, wherein said inputted request for the particular image data transmitting module configured to transmit the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene and to retain a subset of the scene that includes the request for the particular image data of the scene comprises:

a request for particular image data transmitting to a remote server deployed to relay the request to the image sensor array module configured to transmit the request to the remote server that is configured to package the request for particular image data and relay the request for particular image data to the image sensor array.

320. The thing/operation of clause 319, wherein said request for particular image data transmitting to a remote server deployed to relay the request to the image sensor array module configured to transmit the request to the remote server that is configured to package the request for particular image data and relay the request for particular image data to the image sensor array comprises:

a request for particular image data transmitting to a remote server deployed to relay the request to the image sensor array module configured to transmit the request to the remote server that is configured to package the request for particular image data and relay the request for particular image data along with one or more other image data requests to the image sensor array.

321. The thing/operation of clause 319, wherein said request for particular image data transmitting to a remote server deployed to relay the request to the image sensor array module configured to transmit the request to the remote server that is configured to package the request for particular image data and relay the request for particular image data to the image sensor array comprises:

a request for particular image data transmitting to a remote server deployed to relay the request to the image sensor array module configured to transmit the request to the remote server that is configured to package multiple requests that include the request for particular image data and relay the package of multiple requests to the image sensor array.

322. The thing/operation of clause 321, wherein said request for particular image data transmitting to a remote server deployed to relay the request to the image sensor array module configured to transmit the request to the remote server that is configured to package multiple requests that include the request for particular image data and relay the package of multiple requests to the image sensor array comprises:

a request for particular image data transmitting to a remote server deployed to relay the request to the image sensor array module configured to transmit the request to the remote server that is configured to package multiple requests that include the request for particular image data and relay the package of multiple requests to the image sensor array.

323. The thing/operation of clause 322, wherein said request for particular image data transmitting to a remote server deployed to relay the request to the image sensor array module configured to transmit the request to the remote server that is configured to package multiple requests that include the request for particular image data and relay the package of multiple requests to the image sensor array comprises:

a request for particular image data transmitting to a remote server deployed to relay the request to the image sensor array module configured to transmit the request to the remote server that is configured to combine multiple requests that include the request for particular image data and transmit the combined multiple requests as a single combined request for image data to the image sensor array.
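
By way of illustration and not limitation, the following sketch shows one way a remote server could combine multiple requests into a single combined request before relaying it to the image sensor array, as recited in clauses 319-323; the Region format and combine_requests helper are hypothetical:

```python
# Hypothetical relay: merge several pending region requests into one request.
from typing import NamedTuple

class Region(NamedTuple):
    x: int   # left edge of the requested rectangle within the scene
    y: int   # top edge
    w: int   # width in pixels
    h: int   # height in pixels

def combine_requests(regions: list[Region]) -> Region:
    """Bound all pending requests with a single rectangle so the image
    sensor array is asked for each covered pixel only once."""
    x0 = min(r.x for r in regions)
    y0 = min(r.y for r in regions)
    x1 = max(r.x + r.w for r in regions)
    y1 = max(r.y + r.h for r in regions)
    return Region(x0, y0, x1 - x0, y1 - y0)

pending = [Region(100, 100, 640, 480), Region(500, 300, 640, 480)]
print(combine_requests(pending))  # Region(x=100, y=100, w=1040, h=680)
```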

324. The thing/operation of clause 276, wherein said inputted request for the particular image data transmitting module configured to transmit the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene and to retain a subset of the scene that includes the request for the particular image data of the scene comprises:

a request for particular image data modifying into updated request for particular image data module configured to modify the request for the particular image into a request for updated particular image data; and

a request for updated particular image data transmitting to the image sensor array module configured to transmit the request for updated particular image data to the image sensor array that includes more than one image sensor and that is configured to capture the scene and retain the subset of the scene that includes the request for the particular image of the scene.

325. The thing/operation of clause 324, wherein said request for particular image data modifying into updated request for particular image data module configured to modify the request for the particular image into a request for updated particular image data comprises:

a request for particular image data modifying into updated request for particular image data that identifies a portion of the image data as update-targeted module configured to modify the request for the particular image into a request for updated particular image data that identifies a portion of the image as targeted for updating.

326. The thing/operation of clause 325, wherein said request for particular image data modifying into updated request for particular image data that identifies a portion of the image data as update-targeted module configured to modify the request for the particular image into a request for updated particular image data that identifies a portion of the image as targeted for updating comprises:

a request for particular image data modifying into updated request for particular image data that identifies a portion of the image data as update-targeted module wherein the request for particular image data is a request for an image of an eagle that circles an animal oasis and the updated request for particular image data identifies a twenty foot spatial radius around the eagle as the portion of the image data that is update-targeted.

327. The thing/operation of clause 326, wherein said request for particular image data modifying into updated request for particular image data that identifies a portion of the image data as update-targeted module wherein the request for particular image data is a request for an image of an eagle that circles an animal oasis and the updated request for particular image data identifies a twenty foot spatial radius around the eagle as the portion of the image data that is update-targeted comprises:

a request for particular image data modifying into updated request for particular image data that identifies a portion of the image data as update-targeted based on an applied algorithm module wherein the request for particular image data is a request for an image of an eagle that circles an animal oasis and the updated request for particular image data identifies a twenty foot spatial radius around the eagle as the portion of the image data that is update-targeted based on an algorithm that determined that portion of the image data as likely to have changed since a previous reception of image data.

328. The thing/operation of clause 325, wherein said request for particular image data modifying into updated request for particular image data that identifies a portion of the image data as update-targeted module configured to modify the request for the particular image into a request for updated particular image data that identifies a portion of the image as targeted for updating comprises:

a request for particular image data modifying into updated request for particular image data based on one or more previously received images module configured to modify the request for the particular image into a request for updated particular image data that identifies the portion of the image as targeted for updating based on one or more previously received images.

329. The thing/operation of clause 328, wherein said request for particular image data modifying into updated request for particular image data based on one or more previously received images module configured to modify the request for the particular image into a request for updated particular image data that identifies the portion of the image as targeted for updating based on one or more previously received images comprises:

a particular image data request to previous image data that contains one or more previously received images determined to be similar to the particular image data comparing module; and

a particular image data request modifying based on compared previous image data module configured to modify the request for the particular image into the request for updated particular image data that identifies the portion of the image as targeted for updating based on the compared one or more previously received images.

330. The thing/operation of clause 329, wherein said particular image data request to previous image data that contains one or more previously received images determined to be similar to the particular image data comparing module comprises:

a particular image data request to previous image data that contains one or more previously received images determined to be similar to the particular image data comparing to identify an update-targeted portion of the particular image data module configured to compare one or more previously received images that are determined to be similar to the particular image to identify a portion of the particular image data as targeted for updating.

331. The thing/operation of clause 329, wherein said particular image data request to previous image data that contains one or more previously received images determined to be similar to the particular image data comparing module comprises:

a first previously received image data with second previously received image data and request for particular image data delta determining module configured to determine a deltaed portion between first previously received image data and second previously received image data; and

a particular image data request portion that corresponds to determined delta identifying module configured to identify the portion of the particular image data that corresponds to the deltaed portion.
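
By way of illustration and not limitation, the delta determination of clauses 329-331 might be sketched as below; NumPy, the threshold value, and the update_targeted_box helper are assumptions of this example rather than features of the claims:

```python
# Hypothetical delta step: find the changed portion between two frames.
import numpy as np

def update_targeted_box(prev_a, prev_b, threshold=16):
    """Return the bounding box (x, y, w, h) of the deltaed portion between
    two previously received frames, or None if nothing changed appreciably."""
    delta = np.abs(prev_a.astype(int) - prev_b.astype(int)) > threshold
    ys, xs = np.nonzero(delta)
    if xs.size == 0:
        return None
    return (int(xs.min()), int(ys.min()),
            int(xs.max() - xs.min() + 1), int(ys.max() - ys.min() + 1))

a = np.zeros((480, 640), dtype=np.uint8)
b = a.copy()
b[200:260, 300:380] = 255          # e.g. the eagle moved within this patch
print(update_targeted_box(a, b))   # (300, 200, 80, 60): update-targeted portion
```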

332. The thing/operation of clause 276, wherein said inputted request for the particular image data transmitting module configured to transmit the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene and to retain a subset of the scene that includes the request for the particular image data of the scene comprises:

an expanded request for particular image data generating module; and

an expanded request for particular image data transmitting to the image sensor array module configured to transmit the request to the image sensor array that includes more than one image sensor and that is configured to capture the scene that is larger than the requested particular image data.

333. The thing/operation of clause 332, wherein said expanded request for particular image data generating module comprises:

an expanded request for particular image data that includes the request for particular image data and border image data that borders the particular image data generating module configured to generate an expanded request for particular image data that includes the request for the particular image data and a request for border image data that borders the particular image data.

334. The thing/operation of clause 333, wherein said expanded request for particular image data that includes the request for particular image data and border image data that borders the particular image data generating module comprises:

an expanded request for particular image data that includes the request for particular image data and border image data that borders the particular image data on all four sides generating module configured to generate an expanded request for particular image data that includes the request for the particular image data and a request for border image data that borders the particular image data on all four sides.

335. The thing/operation of clause 333, wherein said expanded request for particular image data that includes the request for particular image data and border image data that borders the particular image data generating module comprises:

a projected next side image data that is image data corresponding to an image that borders the particular image of the particular image data determining module configured to determine a projected next side image data that includes the image that borders the particular image of the particular image data; and

an expanded request for particular image data that includes the request for particular image data and next side image data generating module.

336. The thing/operation of clause 335, wherein said projected next side image data that is image data corresponding to an image that borders the particular image of the particular image data determining module comprises:

a projected next side image data that is image data corresponding to an image that borders the particular image of the particular image data determining at least partially based on a detected motion of the device associated with the requestor module configured to determine the projected next side image based on motion of the device associated with the requestor.

337. The thing/operation of clause 336, wherein said projected next side image data that is image data corresponding to an image that borders the particular image of the particular image data determining at least partially based on a detected motion of the device associated with the requestor module comprises:

a projected next side image data that is image data corresponding to an image that borders the particular image of the particular image data determining at least partially based on a detected head turn of the requestor that wears the device associated with the requestor module configured to determine the projected next side image data based on the direction of a turn of a head of the requestor while the requestor is wearing a virtual reality device as the device associated with the requestor.
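
By way of illustration and not limitation, the head-turn projection of clauses 336 and 337 could follow a heuristic like the one below; the angular-rate inputs and the dead-zone value are hypothetical:

```python
# Hypothetical heuristic: pick the border most likely to scroll into view
# from the detected motion of the device worn by the requestor.
def projected_next_side(yaw_rate_deg_s, pitch_rate_deg_s, dead_zone=5.0):
    """Return 'left', 'right', 'up', or 'down', or None if the head of the
    requestor is effectively still."""
    if max(abs(yaw_rate_deg_s), abs(pitch_rate_deg_s)) < dead_zone:
        return None
    if abs(yaw_rate_deg_s) >= abs(pitch_rate_deg_s):
        return "right" if yaw_rate_deg_s > 0 else "left"
    return "up" if pitch_rate_deg_s > 0 else "down"

print(projected_next_side(30.0, 4.0))  # "right": request that border next
```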

338. The thing/operation of clause 332, wherein said expanded request for particular image data generating module comprises:

an expanded request for particular image data that includes the request for particular image data, first border image data that borders the particular image data, and second border image data that borders the first border image data generating module.

339. The thing/operation of clause 338, wherein said expanded request for particular image data that includes the request for particular image data, first border image data that borders the particular image data, and second border image data that borders the first border image data generating module comprises:

an expanded request for particular image data that includes the request for particular image data, first border image data that borders the particular image data, and second border image data that borders the first border image data generating module configured to generate the expanded request for the particular image data that includes the request for the particular image data at a first resolution, the request for the first border image data at a second resolution less than the first resolution, and the request for the second border image data at a third resolution less than or equal to the second resolution.
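
By way of illustration and not limitation, the graded-resolution expansion of clauses 338 and 339 might be expressed as below; the request dictionaries, the ring width, and the one-half and one-quarter resolution fractions are hypothetical choices consistent with a second resolution less than the first and a third resolution less than or equal to the second:

```python
# Hypothetical expanded request: the particular image at full resolution,
# a first border ring at a lower resolution, and a second ring lower still.
def expanded_request(x, y, w, h, ring=64):
    return [
        {"region": (x, y, w, h), "resolution": 1.0},
        {"region": (x - ring, y - ring, w + 2 * ring, h + 2 * ring),
         "resolution": 0.5},    # first border image data
        {"region": (x - 2 * ring, y - 2 * ring, w + 4 * ring, h + 4 * ring),
         "resolution": 0.25},   # second border image data
    ]

for part in expanded_request(400, 300, 640, 480):
    print(part)
```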

340. The thing/operation of clause 276, wherein said particular image data from the image sensor array exclusive receiving module configured to receive only the particular image data from the image sensor array, wherein the particular image represents a subset of the scene and wherein a size characteristic of the particular image data is at least partially based on a feature of a requestor comprises:

a particular image data from the image sensor array exclusive receiving module configured to receive only the particular image data that represents fewer pixels than the scene from the image sensor array.

341. The thing/operation of clause 276, wherein said particular image data from the image sensor array exclusive receiving module configured to receive only the particular image data from the image sensor array, wherein the particular image represents a subset of the scene and wherein a size characteristic of the particular image data is at least partially based on a feature of a requestor comprises:

a particular image data from the image sensor array exclusive receiving module configured to receive only the particular image data from the image sensor array, wherein the particular image represents a smaller geographic area than a geographic area represented by the scene.

342. The thing/operation of clause 276, wherein said particular image data from the image sensor array exclusive receiving module configured to receive only the particular image data from the image sensor array, wherein the particular image represents a subset of the scene and wherein a size characteristic of the particular image data is at least partially based on a feature of a requestor comprises:

a particular image data from the image sensor array exclusive receiving from a remote server module, wherein the remote server received the portion of the scene that included the request for the particular image data and a second request for second particular image data that is at least partially different than the particular image data.

343. The thing/operation of clause 276, wherein said particular image data from the image sensor array exclusive receiving module configured to receive only the particular image data from the image sensor array, wherein the particular image represents a subset of the scene and wherein a size characteristic of the particular image data is at least partially based on a feature of a requestor comprises:

a particular image data from the image sensor array exclusive receiving from a remote server module, wherein the image sensor array discarded portions of the scene other than the particular image data.

344. The thing/operation of clause 276, wherein said particular image data from the image sensor array exclusive receiving module configured to receive only the particular image data from the image sensor array, wherein the particular image represents a subset of the scene and wherein a size characteristic of the particular image data is at least partially based on a feature of a requestor comprises:

a particular image data from the image sensor array exclusive receiving from a remote server module, wherein portions of the scene other than the particular image data are stored at the image sensor array.

345. The thing/operation of clause 276, wherein said particular image data from the image sensor array exclusive receiving module configured to receive only the particular image data from the image sensor array, wherein the particular image represents a subset of the scene and wherein a size characteristic of the particular image data is at least partially based on a feature of a requestor comprises:

a particular image data from the image sensor array exclusive receiving from a remote server module configured to receive only the particular image data from a remote server deployed to communicate with the image sensor array, wherein a first portion of the scene data other than the particular image data is stored at the image sensor array and a second portion of the scene data other than the particular image data is stored at the remote server.

346. The thing/operation of clause 276, wherein said particular image data from the image sensor array exclusive receiving module configured to receive only the particular image data from the image sensor array, wherein the particular image represents a subset of the scene and wherein a size characteristic of the particular image data is at least partially based on a feature of a requestor comprises:

a particular image data from the image sensor array exclusive receiving module configured to receive only the particular image data from the image sensor array, wherein the particular image represents a subset of the scene and wherein a size characteristic of the particular image data is at least partially based on a feature of a requestor device that is associated with the requestor.

347. The thing/operation of clause 346, wherein said particular image data from the image sensor array exclusive receiving module configured to receive only the particular image data from the image sensor array, wherein the particular image represents a subset of the scene and wherein a size characteristic of the particular image data is at least partially based on a feature of a requestor device that is associated with the requestor comprises:

a particular image data from the image sensor array exclusive receiving module configured to receive only the particular image data from the image sensor array, wherein the size characteristic of the particular image data is at least partially based on a feature of a requestor device that is deployed to store data about the requestor.

348. The thing/operation of clause 346, wherein said particular image data from the image sensor array exclusive receiving module configured to receive only the particular image data from the image sensor array, wherein the particular image represents a subset of the scene and wherein a size characteristic of the particular image data is at least partially based on a feature of a requestor device that is associated with the requestor comprises:

a particular image data from the image sensor array exclusive receiving module configured to receive only the particular image data from the image sensor array, wherein the size characteristic of the particular image data is at least partially based on a bandwidth available to the requestor device.

349. The thing/operation of clause 348, wherein said particular image data from the image sensor array exclusive receiving module configured to receive only the particular image data from the image sensor array, wherein the size characteristic of the particular image data is at least partially based on a bandwidth available to the requestor device comprises:

a particular image data from the image sensor array exclusive receiving module configured to receive only the particular image data from the image sensor array, wherein the size characteristic of the particular image data is at least partially based on a bandwidth between the requestor device and a remote server that communicates with the image sensor array.

350. The thing/operation of clause 276, wherein said particular image data from the image sensor array exclusive receiving module configured to receive only the particular image data from the image sensor array, wherein the particular image represents a subset of the scene and wherein a size characteristic of the particular image data is at least partially based on a feature of a requestor comprises:

a particular image data from the image sensor array exclusive receiving module configured to receive only the particular image data from the image sensor array, wherein the particular image represents a subset of the scene and wherein a size characteristic of the particular image data is at least partially based on a screen size of a requestor device that is associated with the requestor.

351. The thing/operation of clause 276, wherein said particular image data from the image sensor array exclusive receiving module configured to receive only the particular image data from the image sensor array, wherein the particular image represents a subset of the scene and wherein a size characteristic of the particular image data is at least partially based on a feature of a requestor comprises:

a particular image data from the image sensor array exclusive receiving module configured to receive only the particular image data from the image sensor array, wherein the particular image represents a subset of the scene and wherein a size characteristic of the particular image data is at least partially based on a maximum resolution of a requestor device that is associated with the requestor.
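
By way of illustration and not limitation, the size characteristic of clauses 348-351 could be derived from device features as sketched below; the parameter names, the bits-per-pixel figure, and the latency budget are hypothetical:

```python
# Hypothetical sizing: clamp the delivered image to the screen size, the
# maximum resolution of the requestor device, and the available bandwidth.
def choose_image_size(screen_w, screen_h, max_res_w, max_res_h,
                      bandwidth_kbps, kbits_per_pixel=0.1, max_latency_s=0.5):
    w = min(screen_w, max_res_w)
    h = min(screen_h, max_res_h)
    budget_px = (bandwidth_kbps * max_latency_s) / kbits_per_pixel
    scale = min(1.0, (budget_px / (w * h)) ** 0.5)
    return int(w * scale), int(h * scale)

# On a ~20 Mbps link the device receives a downscaled image even though its
# screen could show more.
print(choose_image_size(1080, 1920, 4096, 2160, bandwidth_kbps=20000))
```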

352. The thing/operation of clause 276, wherein said presenting the received particular image to the requestor comprises:

a received particular image data presenting on a device viewfinder module configured to present the received particular image data to the requestor on a viewfinder of a device associated with the requestor.

353. The thing/operation of clause 352, wherein said received particular image data presenting on a device viewfinder module configured to present the received particular image data to the requestor on a viewfinder of a device associated with the requestor comprises:

a received particular image data presenting on a particular device viewfinder module, wherein the particular device is one or more of a cellular telephone device, a tablet device, a smartphone device, a laptop computer, a desktop computer, a television, and a wearable computer.

354. The thing/operation of clause 276, wherein said presenting the received particular image to the requestor comprises:

a received particular image data presenting module configured to present the received particular image data to the requestor that is a spectator of a baseball game on a requestor device that is an internet-enabled television.

355. The thing/operation of clause 276, wherein said presenting the received particular image to the requestor comprises:

a received particular image data presenting module configured to present the received particular image data to the requestor that is a naturalist that observes an animal watering hole from a smartwatch touchscreen.

356. The thing/operation of clause 276, wherein said presenting the received particular image to the requestor comprises:

a received particular image data modifying into modified particular image data module configured to modify the received particular image into a modified particular image; and

a modified received particular image to the requestor presenting module.

357. The thing/operation of clause 356, wherein said received particular image data modifying into modified particular image data module comprises:

a received particular image data that includes only changed portions of the scene modifying into modified particular image data module.

358. The thing/operation of clause 357, wherein said received particular image data that includes only changed portions of the scene modifying into modified particular image data module comprises:

a received particular image data that includes only changed portions of the scene modifying into modified particular image data through addition of unchanged portions of existent image data module configured to modify the received particular image into a modified particular image, wherein one or more portions of the received particular image that have not changed are updated with existing image data.
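
By way of illustration and not limitation, the modification of clauses 357 and 358 amounts to compositing the changed portions received from the array over cached image data, as in this sketch (NumPy and the tile format are assumptions of the example):

```python
# Hypothetical compositing: overlay only the changed portions of the scene
# onto the existing image data held by the requestor.
import numpy as np

def apply_changed_tiles(cached, tiles):
    """Return the cached image with each received (position, patch) tile
    written over it; unchanged portions come from the cached copy."""
    out = cached.copy()
    for (x, y), patch in tiles:
        h, w = patch.shape[:2]
        out[y:y + h, x:x + w] = patch
    return out

cached = np.zeros((480, 640), dtype=np.uint8)
changed = [((300, 200), np.full((60, 80), 255, dtype=np.uint8))]
print(apply_changed_tiles(cached, changed)[230, 340])  # 255: pixel was updated
```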

359. The thing/operation of clause 276, wherein said presenting the received particular image to the requestor comprises:

a portion of received particular image data presenting module configured to present a portion of the received particular image data to the requestor.

360. The thing/operation of clause 359, wherein said portion of received particular image data presenting module configured to present a portion of the received particular image data to the requestor comprises:

a first portion of the received particular image data presenting module; and

a second portion of the received particular image data storing module.

361. The thing/operation of clause 360, wherein said second portion of the received particular image data storing module comprises:

a second portion of the received particular image data that is adjacent to the first portion of the received particular image data and is configured to be used as cached image data storing module configured to store a second portion of the received particular image data that is adjacent to the first portion of the received particular image and is configured to be used as cached image data when the requestor requests an image corresponding to the second portion of the received particular image data.

362. The thing/operation of clause 360, wherein said second portion of the received particular image data storing module comprises:

a second portion of the received particular image data that is adjacent to the first portion of the received particular image data and is received at a lower resolution than the first portion of the received particular image data storing module configured to store the second portion of the received particular image at a lower resolution than the first portion of the received particular image.
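
By way of illustration and not limitation, the present-and-cache behavior of clauses 360-362 might look like the sketch below; the cache layout, the region keys, and the resolution tag are hypothetical:

```python
# Hypothetical cache: present a first portion now, store an adjacent second
# portion (received at lower resolution) for a likely follow-up request.
cache = {}

def present(pixels):
    print(f"presenting {len(pixels)} bytes to the requestor")

def receive(first_region, first_pixels, second_region, second_pixels):
    present(first_pixels)                             # show the first portion
    cache[second_region] = {"pixels": second_pixels,  # cache the second portion
                            "resolution": 0.5}        # lower-resolution copy

def request(region):
    hit = cache.get(region)
    if hit is not None:
        present(hit["pixels"])  # served from cache while a full-resolution
        return True             # request is relayed to the array
    return False

receive((0, 0, 640, 480), b"\x00" * 1000, (640, 0, 640, 480), b"\x00" * 250)
print(request((640, 0, 640, 480)))  # True: the adjacent portion was cached
```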

363. The thing/operation of clause 276, wherein said presenting the received particular image to the requestor comprises:

a received particular image data transmitting to a component module configured to transmit the received particular image data to a component deployed to analyze the received particular image.

364. The thing/operation of clause 276, wherein said presenting the received particular image to the requestor comprises:

a received particular image data transmitting to a component module configured to transmit the received particular image data to a component deployed to store the received particular image.

365. The thing/operation of clause 276, wherein said presenting the received particular image to the requestor comprises:

a received particular image data presenting module configured to present the received particular image data to a client requestor.

366. The thing/operation of clause 276, wherein said presenting the received particular image to the requestor comprises:

a received particular image data presenting module configured to present the received particular image data to a device component requestor.

367. A thing/operation, comprising:

one or more general purpose integrated circuits configured to receive instructions to configure as an input of a request for particular image data accepting module, wherein the particular image data is part of a scene that is larger than a particular image associated with the particular image data at one or more first particular times;

one or more general purpose integrated circuits configured to receive instructions to configure as an inputted request for the particular image data transmitting module configured to transmit the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene and to retain a subset of the scene that includes the request for the particular image data of the scene at one or more second particular times;

one or more general purpose integrated circuits configured to receive instructions to configure as a particular image data from the image sensor array exclusive receiving module configured to receive only the particular image data from the image sensor array, wherein the particular image represents a subset of the scene and wherein a size characteristic of the particular image data is at least partially based on a feature of a requestor at one or more third particular times; and

one or more general purpose integrated circuits configured to receive instructions to configure as a presenting the received particular image to the requestor module at one or more fourth particular times.

368. The thing/operation of clause 367, wherein said one or more second particular times occur prior to the one or more third particular times and one or more fourth particular times and after the one or more first particular times.

369. A thing/operation, comprising:

an integrated circuit configured to purpose itself as an input of a request for particular image data accepting module, wherein the particular image data is part of a scene that is larger than a particular image associated with the particular image data at a first time;

the integrated circuit configured to purpose itself as an inputted request for the particular image data transmitting module configured to transmit the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene and to retain a subset of the scene that includes the request for the particular image data of the scene at a second time;

the integrated circuit configured to purpose itself as a particular image data from the image sensor array exclusive receiving module configured to receive only the particular image data from the image sensor array, wherein the particular image represents a subset of the scene and wherein a size characteristic of the particular image data is at least partially based on a feature of a requestor at a third time; and

the integrated circuit configured to purpose itself as a presenting the received particular image to the requestor module at a fourth time.

370. A thing/operation, comprising:

one or more elements of programmable hardware programmed to function as an input of a request for particular image data accepting module, wherein the particular image data is part of a scene that is larger than a particular image associated with the particular image data;

the one or more elements of programmable hardware programmed to function as an inputted request for the particular image data transmitting module configured to transmit the request for the particular image data to an image sensor array that includes more than one image sensor and that is configured to capture the scene and to retain a subset of the scene that includes the request for the particular image data of the scene;

the one or more elements of programmable hardware programmed to function as a particular image data from the image sensor array exclusive receiving module configured to receive only the particular image data from the image sensor array, wherein the particular image represents a subset of the scene and wherein a size characteristic of the particular image data is at least partially based on a feature of a requestor; and

the one or more elements of programmable hardware programmed to function as a presenting the received particular image to the requestor module.

END of Claims for 1114-003-004-000000; End of Pre-Claims Specification; Claims to follow from this point.
