

Title:
IMAGE PROCESSING SYSTEM AND METHOD
Document Type and Number:
WIPO Patent Application WO/2019/217756
Kind Code:
A1
Abstract:
A high-definition image is preprocessed to generate a substantially losslessly-reconstructable set of image components that include a relatively low-resolution base image and a plurality of extra-data images that provide for progressively substantially losslessly reconstructing the high-definition image from the base image, wherein a single primary-color component of the extra-data images provides for relatively quickly reconstructing full-resolution intermediate images during the substantially lossless-reconstruction process.

Inventors:
KELLY SHAWN L (US)
Application Number:
PCT/US2019/031627
Publication Date:
November 14, 2019
Filing Date:
May 09, 2019
Assignee:
HIFIPIX INC (US)
International Classes:
G06K9/40; G06T9/00; H04N1/56; H04N19/186; H04N19/44
Foreign References:
US20080144962A12008-06-19
US20060071825A12006-04-06
US20140286588A12014-09-25
US20100008595A12010-01-14
Other References:
TAUBMAN ET AL.: "JPEG2000: Standard for interactive imaging", PROCEEDINGS OF THE IEEE, vol. 90, no. 8, August 2002 (2002-08-01), XP002257522, Retrieved from the Internet [retrieved on 20190816], DOI: 10.1109/JPROC.2002.800725
Attorney, Agent or Firm:
VANVOORHIES, Kurt L. (US)
Claims:
CLAIMS

1. A method of processing an image, comprising:

a. receiving image data of a relatively-high-resolution image incorporating a plurality of color components;

b. for each color component of said plurality of color components, successively compacting said image data of said color component so as to form both a plurality of successively-lower-resolution images and a corresponding plurality of sets of extra data, wherein each successively lower-resolution image of said plurality of lower-resolution images, in combination with a corresponding set of extra data, provides for substantially-losslessly reconstructing a next-higher resolution image of said plurality of successively-lower-resolution images;

c. for each color component of said plurality of color components, forming a corresponding color-test image by reconstructing each of said color components of said high-resolution image from said plurality of successively-lower-resolution images in combination with corresponding sets of said extra data associated with only said color component, and reconstructing a component of a color-reference image corresponding to said color component from said plurality of lower-resolution images in combination with said corresponding set of said extra data corresponding to said color component; and

d. storing a color-component indicator as a stored color-component indicator identifying the color component for which a corresponding said color-test image is least different from said color-reference image.

2. A method of processing an image as recited in claim 1, wherein the operation of successively compacting said image data comprises transforming the image data from every pair of adjacent row pixels of a first image to corresponding image data of a single corresponding row pixel of a second image, wherein said first image is either said relatively-high-resolution image or an image of said plurality of successively-lower-resolution images, and said second image is a different image of said plurality of successively-lower-resolution images.

3. A method of processing an image as recited in claim 1, wherein the operation of successively compacting said image data comprises transforming the image data from every pair of adjacent column pixels of a first image to corresponding image data of a single corresponding column pixel of a second image, wherein said first image is either said relatively-high-resolution image or an image of said plurality of successively-lower-resolution images, and said second image is a different image of said plurality of successively-lower-resolution images.

4. A method of processing an image as recited in claim 1, wherein the operation of successively compacting said image data comprises transforming the image data from a quad of adjacent column and row pixels of a first image to corresponding image data of a single corresponding pixel of a second image, wherein said first image is either said relatively-high-resolution image or an image of said plurality of successively-lower-resolution images, and said second image is a different image of said plurality of successively-lower-resolution images.

5. A method of processing an image as recited in claim 1, wherein the operation of successively compacting said image data comprises transforming the image data from every pair of adjacent row pixels of a first image to corresponding image data of a single corresponding row pixel of a second image, and transforming the image data from every pair of adjacent column pixels of said second image to corresponding image data of a single corresponding column pixel of a third image, wherein said first image is either said relatively-high-resolution image or an image of said plurality of successively-lower-resolution images, and said second and third images are different images of said plurality of successively-lower-resolution images.

6. A method of processing an image as recited in claim 1, wherein the operation of successively compacting said image data comprises transforming the image data from every pair of adjacent column pixels of a first image to corresponding image data of a single corresponding column pixel of a second image, and transforming the image data from every pair of adjacent row pixels of said second image to corresponding image data of a single corresponding row pixel of a third image, wherein said first image is either said relatively-high-resolution image or an image of said plurality of successively-lower-resolution images, and said second and third images are different images of said plurality of successively-lower-resolution images.

7. A method of processing an image as recited in claim 1, wherein said relatively-high-resolution image further incorporates a transparency component, further comprising:

a. scaling or interpolating said transparency component of a lowest-resolution image of said plurality of successively-lower-resolution images to form a first transparency-test image in one-to-one pixel correspondence with said relatively-high-resolution image;

b. forming a second transparency-test image by reconstructing said transparency component of said high-resolution image from said plurality of successively-lower-resolution images in combination with corresponding sets of said extra data associated with only the color component identified by said stored color-component indicator;

c. reconstructing a transparency-reference image corresponding to said transparency component from said plurality of lower-resolution images in combination with said corresponding set of said extra data corresponding to said transparency component; and

d. storing a transparency-component indicator as a stored transparency-component indicator identifying which of said first or second transparency-test image is least different from said transparency-reference image.

8. A method of processing an image as recited in claim 1, further comprising, responsive to receiving a request from a client device for said relatively-high-resolution image:

a. transmitting to said client device all color components of a lowest-resolution image of said plurality of successively-lower-resolution images; and

b. in order of increasing resolution, transmitting to said client device only a selected color component of a corresponding set of said extra data of each of said corresponding plurality of sets of extra data, wherein said selected color component corresponds to the color component identified by said stored color-component indicator.

9. A method of processing an image as recited in claim 1, further comprising, following transmission of each set of said extra data of each of said corresponding plurality of sets of extra data, in order of increasing resolution, transmitting to said client device all remaining components of said extra data of said corresponding plurality of sets of extra data.

10. A method of processing an image as recited in claim 8, further comprising transmitting to said client device, said stored transparency-component indicator.

11. A method of processing an image as recited in claim 9, further comprising transmitting to said client device, said stored color-component indicator.

12. A method of processing an image, comprising:

a. receiving from an image server, all color components of a lowest-resolution image of a plurality of successively-higher-resolution images;

b. displaying said lowest-resolution image on a display; and

c. for each remaining image of said plurality of successively-higher-resolution images, in order of increasing resolution:

i. receiving from said image server, a single color component of a corresponding set of said extra data of each of said corresponding plurality of sets of extra data;

ii. reconstructing a next-higher-resolution image from a combination of a previous image and said single color component of said corresponding set of said extra data, wherein said previous image comprises said lowest-resolution image for a first set of said extra data, and otherwise comprises a next-lower-resolution reconstructed image;

iii. displaying said next-higher-resolution image on a display; and

iv. at least temporarily saving a color component of said next-higher-resolution image, wherein said color component corresponds to said single color component of said corresponding set of said extra data.

13. A method of processing an image as recited in claim 12, further comprising:

a. receiving from said image server, remaining components of said extra data of a corresponding set of extra data associated with a candidate image, wherein said candidate image is the lowest-resolution image of said plurality of successively-higher-resolution images that has not been substantially-losslessly-reconstructed;

b. substantially-losslessly reconstructing a successively-higher-resolution substantially-losslessly-reconstructed image responsive to a combination of said candidate image and said remaining components of said extra data;

c. displaying said successively-higher-resolution substantially-losslessly-reconstructed image; and

d. at least temporarily saving said successively-higher-resolution substantially-losslessly-reconstructed image.

14. A method of processing an image as recited in claim 12, further comprising receiving from said image server, a color-component indicator that identifies said single color component.

15. A method of processing an image as recited in claim 12, further comprising:

a. receiving from said image server, a transparency-component indicator; and

b. prior to displaying said next-higher-resolution image:

i. if said transparency-component indicator indicates a first state, scaling or interpolating a transparency component of said lowest-resolution image to form a transparency component of said next-higher-resolution image in one-to-one pixel correspondence with said next-higher-resolution image; and

ii. if said transparency-component indicator indicates a second state, reconstructing said transparency component of said next-higher-resolution image from said previous image in combination with said single color component of said corresponding set of said extra data.

Description:
IMAGE PROCESSING SYSTEM AND METHOD

CROSS-REFERENCE TO RELATED APPLICATIONS

The instant application claims the benefit of prior U.S. Provisional Application Serial No. 62/669,306 filed on 09 May 2018, which is incorporated herein by reference in its entirety.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 illustrates an image processing system that provides for preprocessing a high-definition image to provide for a progressive, accelerated download from an internet server of components thereof in a form that provides for a progressive lossless reconstruction of the high-definition image on a client system;

FIG. 2 illustrates a process for progressively compacting a high-definition image using successive horizontal and vertical compaction processes illustrated in FIGS. 3a-b and 4a-b, respectively, to form a corresponding reduced-resolution counterpart image and a plurality of associated extra-data images that, in combination with the reduced-resolution counterpart image, provide for losslessly reconstructing the high-definition image;

FIG. 3a illustrates the equations of a process for horizontally compacting a pair of vertically-adjacent image cells of a source image, so that the resulting compacted image has half the number of rows as the source image when the process is applied to the entire source image, and so as to generate a corresponding extra-data image in one-to-one pixel correspondence with the resulting compacted image;

FIG. 3b illustrates the compaction of a pair of image pixels in accordance with the horizontal-compaction process illustrated in FIG. 3a;

FIG. 4a illustrates the equations of a process for vertically compacting a pair of horizontally-adjacent image cells of a source image, so that the resulting compacted image has half the number of columns as the source image when the process is applied to the entire source image, and so as to generate a corresponding extra-data image in one-to-one pixel correspondence with the resulting compacted image;

FIG. 4b illustrates the compaction of a pair of image pixels in accordance with the vertical-compaction process illustrated in FIG. 4a;
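The compaction equations themselves appear only in FIGS. 3a and 4a and are not reproduced in this text. As a minimal sketch under an assumption, an integer average-and-difference step has exactly the properties the description requires: each pair of adjacent pixel values yields one compacted value plus one extra-data value, from which the pair can be reconstructed without loss. The helper names below are hypothetical, not from the application.

```python
def compact_pair(a, b):
    """Compact two adjacent pixel values into one compacted value plus
    one extra-data value (sketch of an assumed average/difference
    scheme, not necessarily the patented equations of FIGS. 3a/4a)."""
    compacted = (a + b) // 2   # integer average; drops one bit of the sum
    extra = a - b              # signed difference carries the lost detail
    return compacted, extra

def reconstruct_pair(compacted, extra):
    """Exactly invert compact_pair: (a+b) and (a-b) always share
    parity, so the bit dropped by the floor division is extra & 1."""
    total = 2 * compacted + (extra & 1)   # recover a + b
    a = (total + extra) // 2
    b = (total - extra) // 2
    return a, b

# Round trip on an 8-bit pair is exact:
c, e = compact_pair(200, 57)
assert reconstruct_pair(c, e) == (200, 57)
```

The same pair step serves for both directions: vertically-adjacent pixels for horizontal compaction (FIG. 3a) and horizontally-adjacent pixels for vertical compaction (FIG. 4a).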

FIG. 5 illustrates a portion of a high-definition source image comprising M=16 rows and N=16 columns of image pixels, and the associated image-pixel elements of four adjacent image pixels associated with a pair of adjacent rows, and a corresponding pair of adjacent columns, of the source image;

FIG. 6a illustrates a horizontal compaction of the high-definition source image illustrated in FIG. 5, in accordance with the horizontal-compaction process illustrated in FIGS. 3a and 3b;

FIG. 6b illustrates the extra-data image resulting from the horizontal-compaction process as applied to the high-definition source image illustrated in FIG. 5, to generate the horizontally-compacted image illustrated in FIG. 6a, wherein the image-pixel elements of the extra-data image illustrated in FIG. 6b are in one-to-one correspondence with those of the horizontally-compacted image illustrated in FIG. 6a;

FIG. 7a illustrates a vertical compaction of the horizontally-compacted image illustrated in FIG. 6a, in accordance with the vertical-compaction process illustrated in FIGS. 4a and 4b;

FIG. 7b illustrates the extra-data image resulting from the vertical-compaction process as applied to the horizontally-compacted image illustrated in FIG. 6a, to generate the vertically-compacted image illustrated in FIG. 7a, wherein the image-pixel elements of the extra-data image illustrated in FIG. 7b are in one-to-one correspondence with those of the vertically-compacted image illustrated in FIG. 7a;

FIG. 8a illustrates the equations of a process for bidirectionally compacting a quad of image cells from a pair of adjacent rows and a pair of adjacent columns of a source image, so that the resulting compacted image has half the number of rows and half the number of columns as the source image when the process is applied to the entire source image, and so as to generate a corresponding extra-data image that has three extra-data pixel elements corresponding to each corresponding pixel element in the resulting compacted image;

FIG. 8b illustrates the compaction of a quad of image pixels in accordance with the process illustrated in FIG. 8a;
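FIG. 8a's bidirectional equations are likewise not reproduced here. A sketch with the stated shape - one compacted value and three extra-data pixel elements per 2x2 quad, exactly invertible - under an assumed average/difference scheme (the function names are hypothetical):

```python
def compact_quad(p00, p01, p10, p11):
    """Bidirectionally compact a 2x2 quad of pixel values into one
    compacted value plus three extra-data values (assumed scheme; the
    patent's equations are in FIG. 8a)."""
    c = (p00 + p01 + p10 + p11) // 4          # integer average of the quad
    extras = (p00 - p01, p00 - p10, p00 - p11)
    return c, extras

def reconstruct_quad(c, extras):
    """Exactly invert compact_quad: the two bits dropped by // 4 are
    implied by the differences, since sum = 4*p00 - (e1 + e2 + e3)."""
    e1, e2, e3 = extras
    r = (-(e1 + e2 + e3)) % 4                 # sum mod 4, recovered
    p00 = (4 * c + r + e1 + e2 + e3) // 4
    return p00, p00 - e1, p00 - e2, p00 - e3

assert reconstruct_quad(*compact_quad(10, 3, 7, 2)) == (10, 3, 7, 2)
```

One bidirectional step like this is equivalent in output size to a horizontal step followed by a vertical step: a quarter of the pixels, with the other three quarters of the information moved into extra data.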

FIG. 9a illustrates a bidirectional compaction of the high-definition source image illustrated in FIG. 5, in accordance with the bidirectional-compaction process illustrated in FIGS. 8a and 8b;

FIG. 9b illustrates the extra-data image resulting from the bidirectional-compaction process as applied to the high-definition source image illustrated in FIG. 5, to generate the bidirectionally-compacted image illustrated in FIG. 9a, wherein for each pixel element in the bidirectionally-compacted image illustrated in FIG. 9a, there are three corresponding image-pixel elements in the extra-data image illustrated in FIG. 9b;

FIG. 10a illustrates a horizontal compaction of the bidirectionally-compacted image illustrated in FIGS. 7a and 9a, in accordance with the horizontal-compaction process illustrated in FIGS. 3a and 3b;

FIG. 10b illustrates the extra-data image resulting from the horizontal-compaction process as applied to the compacted image illustrated in FIGS. 7a and 9a, to generate the horizontally-compacted image illustrated in FIG. 10a, wherein the image-pixel elements of the extra-data image illustrated in FIG. 10b are in one-to-one correspondence with those of the horizontally-compacted image illustrated in FIG. 10a;

FIG. 11a illustrates a vertical compaction of the horizontally-compacted image illustrated in FIG. 10a, in accordance with the vertical-compaction process illustrated in FIGS. 4a and 4b;

FIG. 11b illustrates the extra-data image resulting from the vertical-compaction process as applied to the horizontally-compacted image illustrated in FIG. 10a, to generate the vertically-compacted image illustrated in FIG. 11a, wherein the image-pixel elements of the extra-data image illustrated in FIG. 11b are in one-to-one correspondence with those of the vertically-compacted image illustrated in FIG. 11a;

FIG. 12a illustrates the equations of a process for losslessly vertically reconstructing a pair of horizontally-adjacent image cells of a source image from a corresponding value of a corresponding image cell of a corresponding vertically-compacted image in combination with a corresponding value of a corresponding extra-data image cell of a corresponding extra-data image;

FIG. 12b illustrates a lossless vertical reconstruction of a pair of horizontally-adjacent image pixels in accordance with the lossless vertical-reconstruction process illustrated in FIG. 12a;

FIG. 12c illustrates application of the lossless vertical reconstruction process illustrated in FIG. 12a, as applied to the Red (R), Green (G), Blue (B) and transparency (a) image-pixel elements of a vertically-compacted image pixel to generate corresponding image-pixel elements of corresponding horizontally-adjacent image pixels of a corresponding source image;
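Assuming an integer average/difference compaction step (an assumption; the actual reconstruction equations are in FIG. 12a), the per-channel expansion that FIG. 12c describes - each vertically-compacted pixel value plus its extra-data value yielding a pair of horizontally-adjacent pixels, applied independently to R, G, B and the transparency element a - can be sketched as:

```python
def reconstruct_row(compacted_row, extra_row):
    """Vertically reconstruct one image row: each compacted value plus
    its extra-data value expands to two horizontally-adjacent pixels.
    Assumes compaction was c = (a + b) // 2, e = a - b."""
    out = []
    for c, e in zip(compacted_row, extra_row):
        total = 2 * c + (e & 1)               # a+b and a-b share parity
        out += [(total + e) // 2, (total - e) // 2]
    return out

# Applied per channel, as in FIG. 12c (values are illustrative only):
channel = {"R": [128], "G": [10], "B": [40], "a": [255]}
extra = {"R": [143], "G": [0], "B": [-20], "a": [0]}
wide = {k: reconstruct_row(channel[k], extra[k]) for k in channel}
assert wide["R"] == [200, 57]
```

The horizontal reconstruction of FIGS. 13a-13c is the same inversion applied to vertically-adjacent pairs of rows instead of columns.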

FIG. 13a illustrates the equations of a process for losslessly horizontally reconstructing a pair of vertically-adjacent image cells of a source image from a corresponding value of a corresponding image cell of a corresponding horizontally-compacted image in combination with a corresponding value of a corresponding extra-data image cell of a corresponding extra-data image;

FIG. 13b illustrates a lossless horizontal reconstruction of a pair of vertically-adjacent image pixels in accordance with the lossless horizontal-reconstruction process illustrated in FIG. 13a;

FIG. 13c illustrates application of the lossless horizontal reconstruction process illustrated in FIG. 13a, as applied to the Red (R), Green (G), Blue (B) and transparency (a) image-pixel elements of a horizontally-compacted image pixel to generate corresponding image-pixel elements of corresponding vertically-adjacent image pixels of a corresponding source image;

FIG. 14a illustrates the equations of a process for losslessly bidirectionally reconstructing a quad of image cells from a pair of adjacent rows and a pair of adjacent columns of a source image, from a corresponding value of a corresponding image cell of a corresponding bidirectionally-compacted image in combination with corresponding values of corresponding extra-data image cells of a corresponding extra-data image;

FIG. 14b illustrates a lossless bidirectional reconstruction of a quad of image cells from a pair of adjacent rows and a pair of adjacent columns of a source image in accordance with the lossless bidirectional-reconstruction process illustrated in FIG. 14a;

FIG. 14c illustrates application of the lossless bidirectional reconstruction process illustrated in FIG. 14a, as applied to the Red (R), Green (G), Blue (B) and transparency (a) image-pixel elements of a bidirectionally-compacted image pixel to generate corresponding image-pixel elements of a corresponding quad of image cells from a pair of adjacent rows and a pair of adjacent columns of a source image;

FIG. 15 illustrates a process for losslessly reconstructing a compacted image that was compacted in accordance with the process illustrated in FIG. 2, wherein the lossless reconstruction is in accordance with successive applications of the vertical and horizontal reconstruction processes illustrated in FIGS. 12a-12c and 13a-13c, respectively;

FIG. 16 illustrates a process for selecting a lead-primary-color extra-data image component to be used for approximately reconstructing a high-definition image;
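The selection in FIG. 16 can be sketched as: reconstruct a test image using only each primary color's extra data, compare each against the reference reconstruction, and keep the indicator of the least-different color. The sum-of-absolute-differences metric below is an assumption; this excerpt does not specify the comparison metric, and the names are hypothetical.

```python
def select_lead_color(test_images, reference):
    """Return the color-component indicator ('R', 'G' or 'B') whose
    single-component test image is least different from the reference.
    test_images maps each color to a flat list of pixel values."""
    def difference(img):
        # assumed metric: sum of absolute per-pixel differences
        return sum(abs(p - q) for p, q in zip(img, reference))
    return min(test_images, key=lambda color: difference(test_images[color]))

reference = [10, 20, 30, 40]
test_images = {
    "R": [10, 21, 30, 40],   # total difference 1
    "G": [14, 20, 30, 40],   # total difference 4
    "B": [10, 20, 38, 40],   # total difference 8
}
assert select_lead_color(test_images, reference) == "R"
```

The winning indicator is what claim 1(d) stores, and what the server later uses to decide which single extra-data component to transmit first.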

FIG. 17 illustrates a process for selecting a method for approximately reconstructing transparency pixel elements of a high-definition image;

FIG. 18 illustrates a process - called from the process illustrated in FIG. 16 - for approximately reconstructing primary-color pixel elements of a compacted image that was compacted in accordance with the process illustrated in FIG. 2, wherein the approximate reconstruction is in accordance with successive applications of the vertical and horizontal reconstruction processes illustrated in FIGS. 12a-12c and 13a-13c, respectively, but using only corresponding extra data associated with the red extra-data pixel element data;

FIG. 19 illustrates a process - called from the process illustrated in FIG. 16 - for approximately reconstructing primary-color pixel elements of a compacted image that was compacted in accordance with the process illustrated in FIG. 2, wherein the approximate reconstruction is in accordance with successive applications of the vertical and horizontal reconstruction processes illustrated in FIGS. 12a-12c and 13a-13c, respectively, but using only corresponding extra data associated with the green extra-data pixel element data;

FIG. 20 illustrates a process - called from the process illustrated in FIG. 16 - for approximately reconstructing primary-color pixel elements of a compacted image that was compacted in accordance with the process illustrated in FIG. 2, wherein the approximate reconstruction is in accordance with successive applications of the vertical and horizontal reconstruction processes illustrated in FIGS. 12a-12c and 13a-13c, respectively, but using only corresponding extra data associated with the blue extra-data pixel element data;

FIG. 21 illustrates a process - called from the process illustrated in FIG. 17 - for approximately reconstructing transparency pixel elements of a compacted image that was compacted in accordance with the process illustrated in FIG. 2, wherein the approximate reconstruction is in accordance with successive applications of the vertical and horizontal reconstruction processes illustrated in FIGS. 12a-12c and 13a-13c, respectively, but using only corresponding extra data associated with the lead-primary-color extra-data pixel element data that had been identified by the process illustrated in FIG. 16;

FIG. 22 illustrates a process - associated with the download and display of high-definition images in accordance with the image processing system illustrated in FIG. 1 - for approximately reconstructing a high-definition image from a compacted image that was compacted in accordance with the process illustrated in FIG. 2, wherein the approximate reconstruction is in accordance with successive applications of the vertical and horizontal reconstruction processes illustrated in FIGS. 12a-12c and 13a-13c, respectively, but using only corresponding extra data associated with the lead-primary-color extra-data pixel element data that had been identified by the process illustrated in FIG. 16, and using the method for approximately reconstructing transparency pixel elements that had been identified by the process illustrated in FIG. 17;

FIG. 23 illustrates a hybrid process - associated with the download and display of high-definition images in accordance with the image processing system illustrated in FIG. 1, following the process illustrated in FIG. 22 - comprising a hybrid of the processes illustrated in FIGS. 15 and 22 for approximately reconstructing, but with higher fidelity than from the process illustrated in FIG. 22, a high-definition image from a compacted image that was compacted in accordance with the process illustrated in FIG. 2;

FIG. 24 illustrates a hybrid process - associated with the download and display of high-definition images in accordance with the image processing system illustrated in FIG. 1, following the process illustrated in FIG. 23 - comprising a hybrid of the processes illustrated in FIGS. 15 and 23 for approximately reconstructing, but with higher fidelity than from the process illustrated in FIG. 23, a high-definition image from a compacted image that was compacted in accordance with the process illustrated in FIG. 2;

FIG. 25 illustrates a hybrid process - associated with the download and display of high-definition images in accordance with the image processing system illustrated in FIG. 1, following the process illustrated in FIG. 24 - comprising a hybrid of the processes illustrated in FIGS. 15 and 24 for approximately reconstructing, but with higher fidelity than from the process illustrated in FIG. 24, a high-definition image from a compacted image that was compacted in accordance with the process illustrated in FIG. 2;

FIG. 26 illustrates a hybrid process - associated with the download and display of high-definition images in accordance with the image processing system illustrated in FIG. 1, following the process illustrated in FIG. 25 - comprising a hybrid of the processes illustrated in FIGS. 15 and 25 for approximately reconstructing, but with higher fidelity than from the process illustrated in FIG. 25, a high-definition image from a compacted image that was compacted in accordance with the process illustrated in FIG. 2;

FIG. 27 illustrates a hybrid process - associated with the download and display of high-definition images in accordance with the image processing system illustrated in FIG. 1, following the process illustrated in FIG. 26 - comprising a hybrid of the processes illustrated in FIGS. 15 and 26 for approximately reconstructing, but with higher fidelity than from the process illustrated in FIG. 26, a high-definition image from a compacted image that was compacted in accordance with the process illustrated in FIG. 2; and

FIG. 28 illustrates a process - associated with the download and display of high-definition images in accordance with the image processing system illustrated in FIG. 1, following the process illustrated in FIG. 27 - comprising a hybrid of the processes illustrated in FIGS. 15 and 26 for losslessly reconstructing a high-definition image from a compacted image that was compacted in accordance with the process illustrated in FIG. 2.

DESCRIPTION OF EMBODIMENT(S)

Referring to FIG. 1, an image processing system 10 provides for uploading a high- definition image 12 from a website proprietor 14, and sequentially compacting this into a losslessly-reconstructable set of image components 16 that can be readily transmitted from an internet server 18 to an associated internet client 20 at the request of a user 22 of an associated internet website 24 who wishes to display the high-definition image 12 on a display 26 of an internet-connected device 28. For example, in one set of embodiments, the high-definition image 12 is converted to losslessly-reconstructable form 16 by an image processing application 30 running either as a separate internet-based image server 32, on a computing device 34 of the website proprietor 14, or on the internet server 18.

The high-definition image 12 comprises a Cartesian array of M rows by N columns of pixels 36, wherein each pixel 36 comprises a pixel element R, G, B for each of the primary colors, red R, green G, and blue B, and possibly a pixel element a for transparency a, i.e. a total of four pixel elements R, G, B, a, each of which, for example, comprises an Np-bit unsigned integer that can range in value from 0 to g = 2^Np - 1. For example, in one set of embodiments, Np = 8, so that g = 255. Accordingly, the high-definition image 12, with four pixel elements R, G, B, a per pixel 36, comprises a total of N x M x 4 pixel elements R, G, B, a, which for a large high-definition image 12 can require an unacceptably long period of time (from the standpoint of the user 22) to fully transmit if otherwise transmitted in original form directly over the internet 38 from the internet website 24 hosted by the internet server 18, to the internet-connected device 28 of the user 22 for presentation on the display 26 associated therewith.
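To make the sizes concrete, the element range and total element count follow directly from the definitions above; the image dimensions below are hypothetical, chosen only for illustration.

```python
Np = 8                        # bits per pixel element (exemplary embodiment)
g = 2 ** Np - 1               # maximum pixel-element value
assert g == 255

# A hypothetical 24-megapixel high-definition image:
M, N = 4000, 6000             # rows x columns
total_elements = N * M * 4    # four elements R, G, B, a per pixel
assert total_elements == 96_000_000   # ~96 MB at one byte per element
```

At that size, transmitting the raw pixel elements in original form is what makes the progressive scheme described next worthwhile.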

For example, the internet-connected device 28 of the user 22 may comprise either a desktop or laptop personal computer (P.C.), a tablet device, a smart phone device, or a user-wearable device such as internet-connected eyewear, a virtual-reality display or a wrist-connected device, any of which might support functionality to provide for either panning or zooming images, either of which can pose additional demands on associated bandwidth for image transmission or display. Furthermore, internet websites are presently automatically ranked based at least in part on the speed at which associated webpages and associated images are displayed. Limitations in transmission bandwidth force digital images to be delivered either slowly with lossless compression to preserve quality or much more quickly with a compromise in that quality due to more lossy compression approaches. Historically, internet applications have almost universally adopted lossy compression approaches for delivering more complex, non-graphical images such as digital photographs because the delays of lossless approaches are unacceptable to most internet users and limitations in device displays often could not present the advantages of lossless compression anyway. However, as device displays increase in both pixel quantity and quality, and as user expectations for better images increase, especially as zooming in on images becomes an increasingly common practice, there is a greater demand for increased perceived image quality as long as there is not a significant perceived compromise in delivery and presentation speed.

Accordingly, there exists a need to provide the user 22 with perceived high-quality images at a perceived speed that is acceptably fast - or at least not unacceptably slow - culminating with a display of an essentially lossless reconstruction 12’ of the original high-definition image 12, thereby confirming the perception of the high quality of the displayed image. To this end, the losslessly-reconstructable set of image components 16 includes a base image IMG^P,P generated as a result of P stages of compaction in each of the horizontal (row) and vertical (column) directions of the original high-definition image 12, wherein each stage of compaction - for example, two-to-one compaction in an exemplary embodiment - completed in both directions results in a reduction in the number of pixel elements R, G, B, a by a factor of 4 in the exemplary embodiment. More particularly, in the superscript "P,P", the first "P" indicates the level of horizontal compaction of rows of the high-definition image 12, and the second "P" indicates the level of vertical compaction of columns of the high-definition image 12, wherein, following each level of compaction, the corresponding number of rows or columns in the resulting compacted image is half the corresponding number of rows or columns in the source image subject to that level of compaction. In the exemplary embodiment, each level of compaction reduces the corresponding number of rows or columns by half, with a given level of bidirectional compaction reducing the number of pixels 36 to 25% of the corresponding number of pixels 36 in the source image subject to that level of compaction. Accordingly, the total number of pixels 36 in the base image IMG^P,P is less than the corresponding number of pixels 36 in the high-definition image 12 by a factor of 1/(4^P); for example, 1/256 for P=4 levels of compaction.
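The factor-of-1/(4^P) reduction can be checked directly; the source dimensions below are hypothetical powers of two chosen so every level halves evenly.

```python
P = 4                          # levels of bidirectional compaction
assert 4 ** P == 256           # base image has 1/256 of the source pixels

# Each completed level halves both dimensions:
M0, N0 = 4096, 4096            # hypothetical source-image dimensions
MP, NP = M0 // 2 ** P, N0 // 2 ** P
assert (MP, NP) == (256, 256)
assert (M0 * N0) // (MP * NP) == 256
```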

The losslessly-reconstructable set of image components 16 further includes a parameter b and a series of extra-data images ED, both of which are described more fully hereinbelow, that provide for substantially losslessly reconstructing the original high-definition image 12, i.e. a lossless reconstruction 12’ that might differ from the original high-definition image 12 as a result of either truncation errors in the associated reconstruction calculations, or the effects of data compression or other artifacts that are introduced by the process of transmitting the losslessly-reconstructable set of image components 16 over the internet 38 from the internet server 18 to the internet client 20. The extra-data images ED, in cooperation with parameter b for some of the intermediate images, provide for successively reconstructing and displaying a series of intermediate reconstructed images with successively-improving quality and resolution, culminating with a display of the ultimate lossless reconstruction 12’ of the high-definition image 12. Although the total number of pixel elements R, G, B, a in the losslessly-reconstructable set of image components 16 is the same as in the original high-definition image 12, the former are structured so as to provide for quickly displaying the base image IMG P,P, thereby conveying the nature of the content thereof, and then successively improving the quality of the displayed image, so as to accommodate the link speed of the internet 38 without adversely affecting the perceived speed at which the image is displayed or the perceived quality thereof.

For example, referring to FIGS. 2, 3a-b and 4a-b, in one set of embodiments, in accordance with an associated image compaction process 200, the extra-data images ED, ED 1,0 , ED 1,1 , ED 2,1 , ED 2,2 , ..., ED P,P-1 , ED P,P and base image IMG P,P are generated as a result of P levels of successive horizontal and vertical compactions of the original high-definition image 12, IMG 0,0 and subsequent, successively-generated compacted intermediate images IMG 1,0 , IMG 1,1 , IMG 2,1 , IMG 2,2 , ..., IMG P,P-1 , finally leading to the generation of the base image IMG P,P , which is stored, along with the extra-data images ED, ED 1,0 , ED 1,1 , ED 2,1 , ED 2,2 , ..., ED P,P-1 , ED P,P , in the associated losslessly-reconstructable set of image components 16. More particularly, a first intermediate, compacted image IMG 1,0 and a corresponding first extra-data image ED 1,0 are first generated by a horizontal compaction process 300, CH{} illustrated in FIGS. 3a-b. More particularly, FIG. 3a illustrates the equations used to compact a given pair of pixel element values PV k,l (i k , j l ), PV k,l (i k +1, j l ) of corresponding pixel elements R, G, B, a of a pair of vertically-adjacent pixels 36 in rows i k and i k +1 and column j l of the corresponding source image 40, wherein for this first compaction process the source image 40 is the original, uncompacted high-definition image 12 for which k=0 and l=0. For example, FIG. 5 illustrates an example of an M 0 =16 x N 0 =16 source image, with the rows indexed by row index i 0 , and the columns indexed by column index j 0 . The superscript "0" for each of these indices and the total numbers of rows M 0 and columns N 0 is indicative of the corresponding compaction level of 0. A horizontal compaction reduces the number of rows M 1 =8 in the resulting first intermediate, compacted image IMG 1,0 to half the number of rows M 0 of the corresponding source image 40, i.e. the high-definition image 12, with the rows of the first intermediate, compacted image IMG 1,0 then indexed by a corresponding row index i 1 that ranges in value from 1 to M 1 . For horizontal compaction, generally the relationship of the row indices i k , i k+1 of the source 40, IMG k,l and compacted IMG k+1,l images, respectively, is given by i k+1 =(i k +1)/2. Accordingly, the horizontal compaction of the pair of pixel element values PV k,l (i k , j l ), PV k,l (i k +1, j l ) from the source image 40 results in a single pixel element value PV k+1,l (i k+1 , j l ) in the resulting compacted image IMG k+1,l and a corresponding extra-data image pixel element value ED k+1,l (i k+1 , j l ) in the corresponding extra-data image ED k+1,l . For example, FIGS. 6a and 6b respectively illustrate the first intermediate, compacted image IMG 1,0 and the first extra-data image ED 1,0 resulting from the horizontal compaction of the high-definition image 12, IMG 0,0 illustrated in FIG. 5.
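Since FIG. 3a is not reproduced here, the following sketch assumes the averaging compaction stated later in this description (LRPV = (HRPV1 + HRPV2)/2 and EDPV = (HRPV1 - HRPV2 + 255)/2); the function name is hypothetical. It halves the number of rows of a single-component image, as in the horizontal compaction process 300:

```python
def compact_horizontal(img):
    """Average vertically-adjacent row pairs of a single-component
    image (a list of equal-length rows with an even row count),
    returning the compacted image and its extra-data image."""
    compacted, extra = [], []
    for i in range(0, len(img), 2):
        upper, lower = img[i], img[i + 1]
        # Compacted value: the pair average.
        compacted.append([(a + b) / 2 for a, b in zip(upper, lower)])
        # Extra data: the offset difference that permits exact inversion.
        extra.append([(a - b + 255) / 2 for a, b in zip(upper, lower)])
    return compacted, extra
```

Each two-row band of the source contributes one row to both the compacted image and the extra-data image, so the two outputs have matching resolution.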

Following the horizontal compaction of the original high-definition image 12 to generate the first intermediate, compacted image IMG 1,0 and associated first extra-data image ED 1,0 , the first intermediate, compacted image IMG 1,0 , used as a source image 40, is vertically compacted by a vertical compaction process 400, CV{} illustrated in FIGS. 4a-b to generate a corresponding second intermediate, compacted image IMG 1,1 and a corresponding second extra-data image ED 1,1 . More particularly, FIG. 4a illustrates the equations used to compact a given pair of horizontally-adjacent pixel element values PV k,l (i k , j l ), PV k,l (i k , j l +1) of corresponding pixel elements R, G, B, a of a pair of horizontally-adjacent pixels 36 in columns j l and j l +1 and row i k of the corresponding source image 40. Accordingly, the vertical compaction of the pair of pixel element values PV k,l (i k , j l ), PV k,l (i k , j l +1) from the source image 40 results in a single compacted pixel element value PV k,l+1 (i k , j l+1 ) in the resulting compacted image IMG k,l+1 and a corresponding extra-data image pixel element value ED k,l+1 (i k , j l+1 ) in the corresponding extra-data image ED k,l+1 . For example, FIGS. 7a and 7b respectively illustrate the second intermediate, compacted image IMG 1,1 and the second extra-data image ED 1,1 resulting from the vertical compaction of the first intermediate, compacted image IMG 1,0 illustrated in FIG. 6a.

Referring to FIGS. 8a-b and 9a-b, alternatively, both the horizontal and vertical compactions can be accomplished with a simultaneous bidirectional compaction by a bidirectional compaction process 800, CB{} illustrated in FIGS. 8a-b, in accordance with the equations illustrated in FIG. 8a for a quad of pixel element values PV k,l (i k , j l ), PV k,l (i k , j l +1), PV k,l (i k +1, j l ), PV k,l (i k +1, j l +1) illustrated in FIG. 8b, so as to provide for generating a single corresponding compacted pixel element value PV k+1,l+1 (i k+1 , j l+1 ) in combination with three corresponding associated extra-data image pixel element values ED k+1,l+1,1 (i k+1 , j l+1 ), ED k+1,l+1,2 (i k+1 , j l+1 ), ED k+1,l+1,3 (i k+1 , j l+1 ), wherein FIG. 9a illustrates the associated intermediate, compacted image IMG 1,1 generated directly from the high-definition image 12, IMG 0,0 using the equations illustrated in FIG. 8a, and FIG. 9b illustrates the corresponding associated extra-data image ED 1,1 resulting from this bidirectional compaction process.
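The equations of FIG. 8a are not reproduced here, but one realization consistent with composing the horizontal and vertical averaging compactions is sketched below (an assumption, with hypothetical names): the quad is first averaged pairwise within each column, then the two averages are themselves averaged, with the three extra-data values retaining the differences needed for exact inversion.

```python
def compact_quad(p00, p01, p10, p11):
    """Bidirectionally compact a 2x2 quad of pixel element values
    (rows i, i+1 by columns j, j+1) into one compacted value and
    three extra-data values."""
    h0 = (p00 + p10) / 2           # horizontal compaction, column j
    h1 = (p01 + p11) / 2           # horizontal compaction, column j+1
    pv = (h0 + h1) / 2             # vertical compaction of the two averages
    ed1 = (p00 - p10 + 255) / 2    # difference within column j
    ed2 = (p01 - p11 + 255) / 2    # difference within column j+1
    ed3 = (h0 - h1 + 255) / 2      # difference between the two averages
    return pv, ed1, ed2, ed3
```

The single compacted value matches what two successive one-directional compactions would produce, which is why the bidirectional process can substitute for the alternating horizontal/vertical sequence.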

Returning to FIG. 2, following the vertical compaction process 400 to generate the second intermediate, compacted image IMG 1,1 , the second intermediate, compacted image IMG 1,1 , as a source image 40, is then compacted using the horizontal compaction process 300 to generate a third intermediate, compacted image IMG 2,1 and a corresponding third extra-data image ED 2,1 , respective examples of which are illustrated in FIGS. 10a and 10b, respectively. The third intermediate, compacted image IMG 2,1 , as a source image 40, is then compacted using the vertical compaction process 400 to generate a fourth intermediate, compacted image IMG 2,2 and a corresponding fourth extra-data image ED 2,2 , respective examples of which are illustrated in FIGS. 11a and 11b, respectively. The compaction process continues with successive alternations of horizontal and vertical compaction until the final application of the vertical compaction process 400 to generate the base image IMG P,P and the corresponding last extra-data image ED P,P .

Referring to FIGS. 12a-b, a lossless vertical reconstruction process 1200, RV{} provides for losslessly reconstructing adjacent pixel element values PV k,l (i k , j l ), PV k,l (i k , j l +1) from a corresponding compacted pixel element value PV k,l+1 (i k , j l+1 ) using the corresponding associated extra-data image pixel element value ED k,l+1 (i k , j l+1 ), both of which were generated by the vertical compaction process 400 of FIGS. 4a and 4b, wherein FIG. 12c illustrates the application of the equations of the lossless vertical reconstruction process 1200 illustrated in FIG. 12a to each of the associated pixel elements R, G, B, a, i.e. X= R, G, B, or a.

Referring to FIGS. 13a-b, a lossless horizontal reconstruction process 1300, RH{} provides for losslessly reconstructing adjacent pixel element values PV k,l (i k , j l ), PV k,l (i k +1, j l ) from a corresponding compacted pixel element value PV k+1,l (i k+1 , j l ) using the corresponding associated extra-data image pixel element value ED k+1,l (i k+1 , j l ), both of which were generated by the horizontal compaction process 300 of FIGS. 3a and 3b, wherein FIG. 13c illustrates the application of the equations of the lossless horizontal reconstruction process 1300 illustrated in FIG. 13a to each of the associated pixel elements R, G, B, a, i.e. X= R, G, B, or a.

Referring to FIGS. 14a-b, a lossless bidirectional reconstruction process 1400, RB{} provides for losslessly reconstructing a quad of pixel element values PV k,l (i k , j l ), PV k,l (i k , j l +1), PV k,l (i k +1, j l ), PV k,l (i k +1, j l +1) from a corresponding compacted pixel element value PV k+1,l+1 (i k+1 , j l+1 ) using the corresponding associated extra-data image pixel element values ED k+1,l+1,1 (i k+1 , j l+1 ), ED k+1,l+1,2 (i k+1 , j l+1 ), ED k+1,l+1,3 (i k+1 , j l+1 ), all of which were generated by the bidirectional compaction process 800 of FIGS. 8a and 8b, wherein FIG. 14c illustrates the application of the equations of the lossless bidirectional reconstruction process 1400 illustrated in FIG. 14a to each of the associated pixel elements R, G, B, a, i.e. X= R, G, B, or a.
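Under the averaging compaction stated later in this description, each reconstruction process inverts its compaction exactly in real-valued arithmetic; a minimal sketch (hypothetical name):

```python
def reconstruct_pair(lrpv, edpv):
    """Recover the adjacent pair of higher-resolution pixel element
    values from a compacted value and its extra-data value."""
    hrpv1 = lrpv + edpv - 255 / 2   # invert the two compaction equations
    hrpv2 = 2 * lrpv - hrpv1        # the pair sums to twice the average
    return hrpv1, hrpv2

# Round trip: compact a pair, then reconstruct it exactly.
a, b = 200, 17
lrpv, edpv = (a + b) / 2, (a - b + 255) / 2
print(reconstruct_pair(lrpv, edpv))   # → (200.0, 17.0)
```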

Referring to FIG. 15, a first aspect of an image reconstruction process 1500 provides for a substantially lossless reconstruction 12’, IMG’ 0,0 of the high-definition image 12, IMG 0,0 , beginning with application of the lossless vertical reconstruction process 1200 to the base image IMG P,P to generate a first intermediate reconstructed image IMG P,P-1 , and then application of the lossless horizontal reconstruction process 1300 to the first intermediate reconstructed image IMG P,P-1 to generate a second intermediate reconstructed image IMG P-1,P-1 , continuing with successive alternating applications of the lossless vertical 1200 and horizontal 1300 reconstruction processes to each previously-generated intermediate reconstructed image that is used as a source image 40 to the subsequent lossless reconstruction process, wherein each lossless reconstruction process, horizontal or vertical, is a counterpart to the corresponding compaction process, horizontal or vertical, that had been used to generate the associated compacted image and associated extra-data image that is being reconstructed. The extra-data images ED P,P , ED P,P-1 , ..., ED 2,2 , ED 2,1 , ED 1,1 , ED 1,0 are the same as those generated during the associated image compaction process 200 illustrated in FIG. 2.

Each pixel of the extra-data images ED P,P , ED P,P-1 , ..., ED 2,2 , ED 2,1 , ED 1,1 , ED 1,0 includes associated extra-data image pixel values for each of the associated pixel components R, G, B, a. Although this complete set of extra-data image pixel values for each of the associated pixel components R, G, B, a provides for the substantially lossless reconstruction 12’, IMG’ 0,0 of the high-definition image 12, IMG 0,0 , it has been found that a reconstruction of the high-definition image 12, IMG 0,0 using only one of the primary-color pixel components R, G, B from the associated extra-data images ED P,P , ED P,P-1 , ..., ED 2,2 , ED 2,1 , ED 1,1 , ED 1,0 provides for an approximate reconstruction of the high-definition image 12, IMG 0,0 that has sufficiently-high fidelity to be used as an intermediate reconstructed image, which can be made available for display more quickly than can the substantially lossless reconstruction 12’, IMG’ 0,0 of the high-definition image 12, IMG 0,0 , because the approximate reconstruction of the high-definition image 12, IMG 0,0 is dependent upon only one of the primary-color pixel components R, G, B, which requires only 25% of the extra data as would be used for a lossless reconstruction 12’, IMG’ 0,0 . Furthermore, it has been found that the fidelity of the approximate reconstruction of the high-definition image 12, IMG 0,0 can be dependent upon which of the primary-color pixel components R, G, B is selected for the approximate reconstruction, wherein the primary-color pixel component R, G, B that provides for the highest-fidelity approximate reconstruction is referred to as the lead-primary-color pixel component X, wherein X is one of R, G and B.

Referring to FIG. 16, the lead-primary-color pixel component X is identified by a lead-primary-color identification process 1600. Beginning with step (1602), for a high-definition image 12 having at least two primary-color components, in step (1604), each of the pixel components R, G, B, a of the associated pixel element data for each of the corresponding pixel elements R, G, B, a is compacted in accordance with the above-described image compaction process 200 to generate the base image IMG P,P and the associated extra-data images ED P,P , ED P,P-1 , ..., ED 2,2 , ED 2,1 , ED 1,1 , ED 1,0 of the losslessly-reconstructable set of image components 16. Then, in step (1606), for each primary-color pixel component R, G, B, the corresponding primary-color pixel component R, G, B of the extra-data images ED P,P , ED P,P-1 , ..., ED 2,2 , ED 2,1 , ED 1,1 , ED 1,0 is used exclusively to reconstruct an approximate image, i.e. a test image, containing all of the primary-color pixel components R, G, B from the base image IMG P,P and the associated intermediate images IMG P,P-1 , ..., IMG 2,2 , IMG 2,1 , IMG 1,1 , IMG 1,0 , but using corresponding extra-data images ED P,P , ED P,P-1 , ..., ED 2,2 , ED 2,1 , ED 1,1 , ED 1,0 of only the corresponding one of the primary-color pixel components R, G, B. More particularly, referring to FIG. 18, an R-approximate image IMG(R’) 0,0 is generated by a red-approximate image reconstruction process 1800, which is the same as the image reconstruction process 1500 illustrated in FIG. 15, except that only the red-image component is used from the extra-data images EDR P,P , EDR P,P-1 , ..., EDR 2,2 , EDR 2,1 , EDR 1,1 , EDR 1,0 to reconstruct each of the corresponding pixel elements R, G, B, a from the base image IMG P,P and each of the associated intermediate images IMG P,P-1 , ..., IMG 2,2 , IMG 2,1 , IMG 1,1 , IMG 1,0 , i.e. for each of the associated primary-color pixel components R, G, B and for the associated transparency a component, regardless of color or transparency. Furthermore, referring to FIG. 19, a G-approximate image IMG(G’) 0,0 is generated by a green-approximate image reconstruction process 1900, which is the same as the image reconstruction process 1500 illustrated in FIG. 15, except that only the green-image component is used from the extra-data images EDG P,P , EDG P,P-1 , ..., EDG 2,2 , EDG 2,1 , EDG 1,1 , EDG 1,0 to reconstruct each of the corresponding pixel elements R, G, B, a from the base image IMG P,P and each of the associated intermediate images IMG P,P-1 , ..., IMG 2,2 , IMG 2,1 , IMG 1,1 , IMG 1,0 , i.e. for each of the associated primary-color pixel components R, G, B and for the associated transparency a component, regardless of color or transparency. Yet further, referring to FIG. 20, a B-approximate image IMG(B’) 0,0 is generated by a blue-approximate image reconstruction process 2000, which is the same as the image reconstruction process 1500 illustrated in FIG. 15, except that only the blue-image component is used from the extra-data images EDB P,P , EDB P,P-1 , ..., EDB 2,2 , EDB 2,1 , EDB 1,1 , EDB 1,0 to reconstruct each of the corresponding pixel elements R, G, B, a from the base image IMG P,P and each of the associated intermediate images IMG P,P-1 , ..., IMG 2,2 , IMG 2,1 , IMG 1,1 , IMG 1,0 , i.e. for each of the associated primary-color pixel components R, G, B and for the associated transparency a component, regardless of color or transparency. Then, returning to FIG. 16, in step (1608), each resulting approximate reconstructed image, i.e.
separately, each of the R-approximate image IMG(R’) 0,0 , the G-approximate image IMG(G’) 0,0 , and the B-approximate image IMG(B’) 0,0 , is compared with the lossless reconstruction 12’, IMG’ 0,0 that was based upon the complete set of extra-data image elements for each of the pixel components R, G, B, a. More particularly, a sum-of-squared difference SSDR, SSDG, SSDB, between the lossless reconstruction 12’, IMG’ 0,0 and the respective R-approximate image IMG(R)’ 0,0 , G-approximate image IMG(G)’ 0,0 and B-approximate image IMG(B)’ 0,0 , respectively, is calculated as the sum of the square of the difference between the values of corresponding pixel elements R, G, B, a for each pixel element R, G, B, a and for all pixels 36 of the respective images, i.e. IMG’ 0,0 and IMG(R)’ 0,0 , IMG(G)’ 0,0 or IMG(B)’ 0,0 . Based upon comparisons in at least one of steps (1610) and (1612), in one of steps (1614) through (1618), the primary-color pixel component R, G, B associated with the smallest-valued sum-of-squared difference SSDR, SSDG, SSDB is then identified as the lead-primary-color pixel component X, and, in step (1620), the corresponding extra-data images EDx P,P , EDx P,P-1 , ..., EDx 2,2 , EDx 2,1 , EDx 1,1 , EDx 1,0 are saved, as are the extra-data images EDY P,P , EDY P,P-1 , ..., EDY 2,2 , EDY 2,1 , EDY 1,1 , EDY 1,0 and EDz P,P , EDz P,P-1 , ..., EDz 2,2 , EDz 2,1 , EDz 1,1 , EDz 1,0 for the remaining primary-color pixel components R, G, B. Accordingly, if X= R, then {Y,Z}={G, B}; if X= G, then {Y,Z}={R, B}; and if X= B, then {Y,Z}={R, G}.
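The selection of steps (1608) through (1618) can be sketched as follows (a minimal illustration with hypothetical names; pixels are modeled as (R, G, B, a) tuples):

```python
def ssd(img_a, img_b):
    """Sum of squared differences over all pixels 36 and all pixel
    components R, G, B, a of two equal-sized images."""
    return sum((u - v) ** 2
               for row_a, row_b in zip(img_a, img_b)
               for px_a, px_b in zip(row_a, row_b)
               for u, v in zip(px_a, px_b))

def lead_primary_color(lossless, approx_by_color):
    """Identify the lead-primary-color pixel component X as the color
    whose single-color approximate reconstruction has the smallest SSD
    against the lossless reconstruction."""
    return min(approx_by_color,
               key=lambda c: ssd(lossless, approx_by_color[c]))
```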

Referring to FIG. 17, following the identification of the lead-primary-color pixel component X by the lead-primary-color identification process 1600, from step (1622) thereof, a transparency-approximation-method-identification process 1700 is used to identify a method of approximating the transparency pixel element a when approximating the high-definition image 12 prior to receiving the complete set of extra-data images ED for the Y, Z and a pixel components. More particularly, in step (1702), the transparency component a of the base image IMGa P,P is scaled or interpolated to generate a scaled/interpolated image IMGa,interp containing the same number of pixels, i.e. NxM, as the high-definition image 12, and in one-to-one correspondence therewith. Then, in step (1704), referring to FIG. 21, an X-approximate image IMGa(X)’ 0,0 is generated by an X-approximate image reconstruction process 2100, which is the same as the image reconstruction process 1500 illustrated in FIG. 15, except that only the X-image component is used from the extra-data images EDx P,P , EDx P,P-1 , ..., EDx 2,2 , EDx 2,1 , EDx 1,1 , EDx 1,0 to reconstruct the transparency pixel element a from the base image IMGa P,P and each of the associated intermediate images IMGa P,P-1 , ..., IMGa 2,2 , IMGa 2,1 , IMGa 1,1 , IMGa 1,0 . Then, in step (1706), the sum-of-squared difference SSDa,interp between the transparency a of the lossless reconstruction 12’, IMGa’ 0,0 and the scaled/interpolated image IMGa,interp is calculated, as is the sum-of-squared difference SSDx between the transparency a of the lossless reconstruction 12’, IMGa’ 0,0 and the X-approximate image IMGa(X)’ 0,0 . If, in step (1708), the sum-of-squared difference SSDx based on the X-approximate image IMGa(X)’ 0,0 is less than the sum-of-squared difference SSDa,interp based on the scaled/interpolated image IMGa,interp, then, in step (1710), a parameter b is set to cause the extra-data images EDx P,P , EDx P,P-1 , ..., EDx 2,2 , EDx 2,1 , EDx 1,1 , EDx 1,0 to be used to reconstruct approximate images of the transparency pixel elements a when the associated extra-data images EDa P,P , EDa P,P-1 , ..., EDa 2,2 , EDa 2,1 , EDa 1,1 , EDa 1,0 are not available for an associated lossless reconstruction. Otherwise, from step (1708), the parameter b is set so as to cause the associated transparency pixel elements a to be scaled or interpolated when the associated extra-data images EDa P,P , EDa P,P-1 , ..., EDa 2,2 , EDa 2,1 , EDa 1,1 , EDa 1,0 are not available for an associated lossless reconstruction. Following steps (1708) or (1710), in step (1714), the parameter b is stored for future use.
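The choice encoded by parameter b can be sketched as follows (a simple illustration; nearest-neighbour replication stands in for the "scaled or interpolated" operation of step (1702), and the names are hypothetical):

```python
def upscale_nearest(base, factor):
    """Scale a base-image transparency component up to full resolution
    by nearest-neighbour replication (one simple scaling choice)."""
    return [[base[i // factor][j // factor]
             for j in range(len(base[0]) * factor)]
            for i in range(len(base) * factor)]

def choose_transparency_method(ssd_x, ssd_interp):
    """Set parameter b: approximate the transparency from the lead-color
    extra data when that gives the smaller error, otherwise
    scale/interpolate the base-image transparency."""
    return 'lead-color extra data' if ssd_x < ssd_interp else 'scale/interpolate'
```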

Referring to FIGS. 22-28, following the identification by the lead-primary-color identification process 1600 of the lead-primary-color pixel component X, and the identification by the transparency-approximation-method-identification process 1700 of the method of approximating the transparency pixel element a, the losslessly-reconstructable set of image components 16 can be transmitted, in order of reconstruction, from the internet server 18 to the internet client 20 upon demand by the internet-connected device 28 under control of the user 22, and subsequently progressively displayed on the display 26 of the internet-connected device 28 as the various losslessly-reconstructable set of image components 16 are received. Referring to FIG. 22, in accordance with a first approximate image reconstruction process 2200, after initially receiving and displaying the base image IMG P,P , following receipt of each of the extra-data images EDx P,P , EDx P,P-1 , ..., EDx 2,2 , EDx 2,1 , EDx 1,1 , EDx 1,0 for the lead-primary-color pixel component X, an X-approximate high-definition image is generated for each of the primary-color pixel components R, G, B, and for the transparency component a, using the lead-primary-color pixel component X of the corresponding extra-data images EDx P,P , EDx P,P-1 , ..., EDx 2,2 , EDx 2,1 , EDx 1,1 , EDx 1,0 alone to progressively reconstruct the corresponding primary-color images (IMGx P,P-1 , IMGY P,P-1 , IMGz P,P-1 ), ..., (IMGx 2,2 , IMGY 2,2 , IMGz 2,2 ), (IMGx 2,1 , IMGY 2,1 , IMGz 2,1 ), (IMGx 1,1 , IMGY 1,1 , IMGz 1,1 ), (IMGx 1,0 , IMGY 1,0 , IMGz 1,0 ), and (IMGx 0,0 , IMGY’ 0,0 , IMGz’ 0,0 ), and to reconstruct the transparency component a in accordance with the transparency-approximation method identified by parameter b, wherein the resulting primary-color images IMGx P,P-1 , ..., IMGx 2,2 , IMGx 2,1 , IMGx 1,1 , IMGx 1,0 , IMGx 0,0 associated with the lead-primary-color pixel component X, having been losslessly reconstructed, are saved for subsequent image reconstruction. Then, referring to FIGS. 22-28, as the subsequent extra-data images EDY P,P , EDY P,P-1 , ..., EDY 2,2 , EDY 2,1 , EDY 1,1 , EDY 1,0 and EDz P,P , EDz P,P-1 , ..., EDz 2,2 , EDz 2,1 , EDz 1,1 , EDz 1,0 are received for the remaining primary-color pixel components Y, Z, and as the extra-data images EDa P,P , EDa P,P-1 , ..., EDa 2,2 , EDa 2,1 , EDa 1,1 , EDa 1,0 are received for the transparency component a, then the remaining approximate image components (IMGa P,P-1 , IMGY P,P-1 , IMGz P,P-1 ), ..., (IMGa 2,2 , IMGY 2,2 , IMGz 2,2 ), (IMGa 2,1 , IMGY 2,1 , IMGz 2,1 ), (IMGa 1,1 , IMGY 1,1 , IMGz 1,1 ), (IMGa 1,0 , IMGY 1,0 , IMGz 1,0 ), and (IMGa 0,0 , IMGY 0,0 , IMGz 0,0 ) are progressively replaced with corresponding losslessly-reconstructed image components and saved as needed for continual improvement of the reconstructed image, until, as illustrated in FIG. 28, the lossless reconstruction 12’, IMG’ 0,0 is ultimately generated and displayed.

More particularly, following reconstruction of the X-approximate high-definition image (IMGx 0,0 , IMGY 0,0 , IMGz 0,0 ), and the exact reconstruction of the X-component IMGx 0,0 of the high-definition image 12, at the end of the first approximate image reconstruction process 2200, referring to FIG. 23, in accordance with a second approximate image reconstruction process 2300, following receipt of the extra-data images EDY P,P , EDz P,P , EDa P,P , the image components IMGY P,P-1 , IMGz P,P-1 are reconstructed exactly from the base image IMG P,P by a vertical reconstruction, using the corresponding associated extra-data images EDY P,P , EDz P,P , EDa P,P , after which the exactly-reconstructed image components IMGY P,P-1 , IMGz P,P-1 are saved. Then, the remaining steps of the second approximate image reconstruction process 2300, the same as for the first approximate image reconstruction process 2200, are applied to the reconstruction of the remaining approximate image components IMGY k,l , IMGz k,l for primary color components Y and Z.

Following reconstruction of the X-approximate high-definition image (IMGx 0,0 , IMGY 0,0 , IMGz 0,0 ) by the second approximate image reconstruction process 2300, referring to FIG. 24, in accordance with a third approximate image reconstruction process 2400, following receipt of the next set of remaining extra-data images EDY P,P-1 , EDz P,P-1 , EDa P,P-1 , the image components IMGY P-1,P-1 , IMGz P-1,P-1 are reconstructed exactly from the image components IMGY P,P-1 , IMGz P,P-1 saved from the second approximate image reconstruction process 2300, by a horizontal reconstruction using the corresponding associated extra-data images EDY P,P-1 , EDz P,P-1 , EDa P,P-1 , after which the exactly-reconstructed image components IMGY P-1,P-1 , IMGz P-1,P-1 are saved. Then, the remaining steps of the third approximate image reconstruction process 2400, the same as for the first 2200 and second 2300 approximate image reconstruction processes, are applied to the reconstruction of the remaining approximate image components IMGY k,l , IMGz k,l for primary color components Y and Z.

The processes of vertical and horizontal reconstruction are successively repeated, in each case with receipt of the next set of remaining extra-data images EDY k,l , EDz k,l , EDa k,l , followed by reconstruction commencing with the highest-resolution previously-saved exactly-reconstructed image components IMGY k,l , IMGz k,l for primary color components Y and Z, so as to provide for exact reconstruction of the next-higher-resolution image components IMGY k-1,l , IMGz k-1,l or IMGY k,l-1 , IMGz k,l-1 .

Eventually, referring to FIG. 25, in accordance with a fourth approximate image reconstruction process 2500, following receipt of the third-from-last set of remaining extra-data images EDY 2,2 , EDz 2,2 , EDa 2,2 , the image components IMGY 2,1 , IMGz 2,1 are reconstructed exactly by vertical reconstruction from the image components IMGY 2,2 , IMGz 2,2 saved from the most-recent horizontal reconstruction, using the corresponding associated extra-data images EDY 2,2 , EDz 2,2 , EDa 2,2 , after which the exactly-reconstructed image components IMGY 2,1 , IMGz 2,1 are saved. Then, the remaining steps of the fourth approximate image reconstruction process 2500, the same as for the previous approximate image reconstruction processes 2200, 2300, 2400, are applied to the reconstruction of the remaining approximate image components IMGY k,l , IMGz k,l for primary color components Y and Z.

Then, referring to FIG. 26, in accordance with a fifth approximate image reconstruction process 2600, following receipt of the second-from-last set of remaining extra-data images EDY 2,1 , EDz 2,1 , EDa 2,1 , the image components IMGY 1,1 , IMGz 1,1 are reconstructed exactly by horizontal reconstruction from the image components IMGY 2,1 , IMGz 2,1 saved from the fourth approximate image reconstruction process 2500, using the corresponding associated extra-data images EDY 2,1 , EDz 2,1 , EDa 2,1 , after which the exactly-reconstructed image components IMGY 1,1 , IMGz 1,1 are saved. Then, the remaining steps of the fifth approximate image reconstruction process 2600, the same as for the previous approximate image reconstruction processes 2200, 2300, 2400, 2500, are applied to the reconstruction of the remaining approximate image components IMGY k,l , IMGz k,l for primary color components Y and Z.

Then, referring to FIG. 27, in accordance with a sixth approximate image reconstruction process 2700, following receipt of the next-to-last set of remaining extra-data images EDY 1,1 , EDz 1,1 , EDa 1,1 , the image components IMGY 1,0 , IMGz 1,0 are reconstructed exactly by vertical reconstruction from the image components IMGY 1,1 , IMGz 1,1 saved from the fifth approximate image reconstruction process 2600, using the corresponding associated extra-data images EDY 1,1 , EDz 1,1 , EDa 1,1 , after which the exactly-reconstructed image components IMGY 1,0 , IMGz 1,0 are saved. Then, the remaining steps of the sixth approximate image reconstruction process 2700, the same as for the previous approximate image reconstruction processes 2200, 2300, 2400, 2500, 2600, are applied to the reconstruction of the remaining approximate image components IMGY 0,0 , IMGz 0,0 for primary color components Y and Z.

Finally, referring to FIG. 28, in accordance with a final image reconstruction process 2800, following receipt of the last set of remaining extra-data images EDY 1,0 , EDz 1,0 , EDa 1,0 , the remaining final reconstructed high-definition image components IMGY 0,0 , IMGz 0,0 are reconstructed exactly by horizontal reconstruction from the image components IMGY 1,0 , IMGz 1,0 saved from the sixth approximate image reconstruction process 2700, using the corresponding associated extra-data images EDY 1,0 , EDz 1,0 , EDa 1,0 , after which the exactly-reconstructed image components IMGY 0,0 , IMGz 0,0 are displayed or saved.

It should be understood that the order in which the complete set of losslessly-reconstructed pixel elements R, G, B, a is generated is not limiting. For example, this could be strictly in order of increasing image resolution (i.e. increasing total number of pixels 36); in order of pixel element R, G, B, a, for example, completing all resolutions of each primary-color pixel component X, Y, Z before continuing with the next, followed by all resolutions of the transparency component a; or a hybrid thereof.

Furthermore, the relative ordering of horizontal and vertical compaction, and resulting vertical and horizontal reconstruction, could be reversed, with the high-definition image 12 being initially vertically compacted rather than initially horizontally compacted.

The image processing system 10 is not limited to the illustrated 2:1 compaction ratio of rows or columns in the source image 40 to corresponding rows or columns in the compacted image. For example, the teachings of the instant application could also be applied in cooperation with the image processing systems disclosed in U.S. Patent Nos. 8,855,195 and 8,798,136, each of which is incorporated herein by reference. In summary, in those systems, an original, high-resolution image is sequentially compacted on a server device through a number of lower-resolution levels or representations of the same image to a final low-resolution image, hereinafter referred to as the base image. With each such compaction to a lower-resolution image, extra data values are also generated and thereafter stored with the base image on the server device so that, when later losslessly sent to a receiving device, the base image and extra data values can be processed by the receiving device through reconstruction algorithms to losslessly reconstruct each sequentially higher-resolution representation and to ultimately restore the original image, subject to minor truncation errors. This previous image transmission algorithm is applied independently to all color channels or components comprising the original high-resolution image, such as, for example, primary colors of red, green and blue, as well as an alpha (transparency) component if present, so that the extra data values are effectively generated and stored on the server device and, upon later demand, sent by the server device to the receiving device as extra data images comprised of extra data pixels, each pixel comprised of values for the same number of color components as the original image, and which, together with all extra data images from each level of compaction, form a single set of extra data images supporting the sequential reconstruction by the receiving device of progressively higher resolution images from the base to final image.

As another illustrative example, an original image having a resolution of Mo horizontal by No vertical pixels (wherein, for convenience, Mo and No are evenly divisible by eight), with each image pixel having four color component values, i.e. red, green, blue and an alpha or transparency channel, each value having a minimum possible value of 0 and a maximum possible value of 255, is sequentially compacted on a server device to progressively lower resolutions of half the previous resolution, three times in each direction, first horizontally then vertically each time, resulting in a base image resolution of Mo/8 horizontal by No/8 vertical pixels, and an extra data image for each of the six total compactions, with each of the base image and the extra data images comprising pixels having values for all four components, i.e. R, G, B, and alpha, of the original image. Each such compaction to half resolution in a given direction is accomplished by calculating each Lower Resolution Pixel Value, LRPV, for each component of the lower resolution image as the average value of the corresponding, sequential pair of Higher Resolution Pixel Values, HRPV1 and HRPV2, in the prior higher resolution image that are adjacent in that direction:

LRPV = (HRPV1 + HRPV2) / 2

Each such operation on HRPV1 and HRPV2 is also accompanied by the calculation of a corresponding Extra Data Pixel Value, EDPV, given by

EDPV = (HRPV1 - HRPV2 + 255) / 2

Therefore, with each compaction in each direction of the relatively higher resolution image, there is an extra data pixel value calculated for each lower resolution pixel value calculated. Accordingly, the resolution (i.e. horizontal and vertical pixel count) of each extra data image formed is equal to the resolution of the lower resolution image formed from the compacted higher resolution image. Such extra data images can be treated as single images for purposes of storage and later use, or they can be combined to abut other extra data images to form larger extra data images for additional efficiency, for example, each larger extra data image including both extra data images corresponding to the compaction of both directions of each progressively lower image resolution. In any case, all such extra data images subsequently form a complete set of extra data values for the receiving device to sequentially and progressively reconstruct from the base image, in the reverse order of compaction, up through each higher resolution until that of the original image is achieved. In each such reconstruction, the corresponding pair of adjacent, higher resolution pixel values, HRPV1 and HRPV2, for each component are determined from each lower resolution pixel value, LRPV, and the corresponding extra data pixel value, EDPV, through reconstruction formulae derived from those above:

HRPV1 = LRPV + EDPV - 255/2

HRPV2 = LRPV - EDPV + 255/2

In this particular example, compaction is shown to reduce every two adjacent pixels of a higher resolution image to one, representing a lower resolution image having half the resolution of the higher resolution image in the direction of compaction. Such formulae can be modified to support a variety of algorithms to achieve a variety of alternative compactions, such as, but not limited to, four pixels to three or three pixels to two, depending on the desired resolution of the relatively lower resolution images.
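The 2:1 compaction and reconstruction formulae above can be sketched as follows. This is an illustration only, not the patented implementation: floating-point values are used so the round trip is exact, whereas the disclosure notes that integer storage is subject to minor truncation errors, and the function names are invented for this sketch.

```python
def compact_pair(h1, h2):
    """Compact two adjacent higher-resolution pixel values into one
    lower-resolution value (their average) plus one extra-data value
    (their offset half-difference): LRPV and EDPV per the formulae above."""
    lrpv = (h1 + h2) / 2
    edpv = (h1 - h2 + 255) / 2
    return lrpv, edpv

def reconstruct_pair(lrpv, edpv):
    """Recover the adjacent higher-resolution pair from the compacted
    value and its extra-data value, per the reconstruction formulae."""
    h1 = lrpv + edpv - 255 / 2
    h2 = lrpv - edpv + 255 / 2
    return h1, h2

def compact_row(row):
    """Halve a row of even length in one direction, returning the
    lower-resolution row and an extra-data row of equal length."""
    pairs = [compact_pair(row[i], row[i + 1]) for i in range(0, len(row), 2)]
    return [p[0] for p in pairs], [p[1] for p in pairs]

def reconstruct_row(lr_row, ed_row):
    """Rebuild the higher-resolution row from a compacted row and its
    extra-data row."""
    out = []
    for lrpv, edpv in zip(lr_row, ed_row):
        out.extend(reconstruct_pair(lrpv, edpv))
    return out
```

For example, compacting the one-component row `[10, 200, 30, 40]` yields the lower-resolution row `[105.0, 35.0]` and extra-data row `[32.5, 122.5]`, from which `reconstruct_row` recovers the original values exactly.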

During compaction from each higher resolution image to the next lower resolution image representation, each pixel value of the extra data image for each color component is fundamentally derived from the difference of two spatially adjacent pixel values of that color component from the higher resolution image. In many practical imaging applications, especially those involving high-resolution photography, the difference between two spatially adjacent pixel values of one primary color channel is often substantially similar to the difference between those same pixels for the other primary color channels. In fact, when all such extra data pixel values of all primary colors from typical photographic images are displayed as a full color extra data image, that image appears substantially like a grayscale image with only sparsely populated pixels of visually obvious non-gray values. For this reason, the color palette of extra data pixel values necessary to represent the extra data image typically contains significantly fewer colors than the color palette of the higher resolution image. Since many lossless compression algorithms rely on smaller color palettes for their effectiveness, one advantage of the previously referenced algorithm is that, when losslessly compressed, the total bandwidth required to transmit the lower resolution base image and the set of extra data images combined is typically less than the bandwidth required to transmit the higher resolution image alone. Assuming the receiving device can rapidly execute the reconstruction algorithm, which is almost universally the case with today's devices due to the simplicity of the related computational operations, the image processing system 10 supports a significantly faster transmission and presentation of losslessly compressed images.

The image processing system 10 inherently provides multiple, progressively higher resolution representations of the high-resolution image prior to achieving the final original resolution. This allows a server device such as a web server to first send the base image as a relatively very small file followed by each respective extra data image file so that a receiving and display device such as a web browser can quickly show the base image and then progressively improve the quality of that image through reconstruction as it receives the extra data image files rather than waiting for a single high-resolution image file before showing it.

In accordance with the image processing system 10, an original image comprising an array of pixel values each having two or more color components is sequentially compacted— on a server device— to one or more progressively lower resolution representations culminating in a lowest resolution base image, each such compaction resulting in an accompanying two dimensional array of extra data pixels comprising extra data values for each color component and therefore forming an extra data image, with all of the extra data images together forming a complete set of extra data images, whereby the complete set of extra data images can be used in a reconstruction process to reconstruct the base image into progressively higher resolution images culminating with the original image.

In further accordance with the image processing system 10, reconstruction is then applied on the server device to each of the primary color components of the base image to reconstruct a primary color test image of the same resolution as the original high-resolution image, but using the extra data image pixel values of only that single primary color as a substitute for all extra data image primary colors for that particular test image, and thereby, having done so for each single primary color, creating an intermediate test image for each. The pixel values of each primary color test image are then compared to all primary color pixel values of the original high-resolution image to determine which test image results in the best approximation of that original high-resolution image. Such best approximation can be based on any comparative process as would occur to one skilled in the art, including but not limited to a summation of the results of the least squared error between all pixel values of each primary color of the original and test image. That primary color component of the extra data images resulting in the best approximation is hereinafter referred to as the lead primary color.
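The lead-primary-color selection described above can be sketched as follows, assuming the per-color test images have already been reconstructed; the summed-squared-error comparison is the example metric the text names, and the function names are illustrative, not from the disclosure.

```python
def sum_squared_error(img_a, img_b):
    """Total squared error between two images, each given as a flat
    list of per-component pixel values."""
    return sum((a - b) ** 2 for a, b in zip(img_a, img_b))

def choose_lead_color(original, test_images):
    """Given the original image and one full-resolution test image per
    primary color (each reconstructed using only that color's extra data
    for every primary channel), return the color whose test image best
    approximates the original, i.e. has the least total squared error."""
    return min(test_images,
               key=lambda color: sum_squared_error(original, test_images[color]))
```

For instance, if the green-only test image matches the original most closely, `choose_lead_color` returns `"G"`, which would then be recorded as the lead primary color in the base image metadata.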

The complete set of extra data images is then divided into two subsets of extra data images, wherein a first subset includes all the pixel values of the complete set for just the lead primary color component and a second subset includes the pixel values of only the remaining color components, the two subsets together effectively providing all pixel values of the complete set, and the two subsets are thereafter stored on the server device with the base image, which itself now also includes a value indicating the lead primary color in its metadata.

If the original image, and therefore also the second subset of extra data images, includes a non-primary color component to be treated as an alpha channel, then the server device further uses reconstruction of the alpha component of the base image to reconstruct a first high-resolution test image using the first subset of extra data images. The server also creates a second high-resolution test image by simply scaling up the alpha channel of the base image to the same resolution as the original image using conventional scaling algorithms. Both such test images are then compared to the alpha channel component of the original high-resolution image to determine which method offers the best approximation in accordance with the same method used to determine the lead primary color. An indication to use either alpha channel scaling or reconstruction of the alpha channel with the first extra data subset as a best approximation is then stored as an additional value in the metadata of the base image.
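The alpha-channel decision above can be sketched as follows: the two candidate alpha test images (one reconstructed with the lead-color extra data, one conventionally scaled up) are compared to the original alpha channel, and the winning method is recorded for the receiving device. The function and method names are illustrative assumptions, not the disclosure's metadata format.

```python
def choose_alpha_method(original_alpha, reconstructed_alpha, scaled_alpha):
    """Return which alpha-channel treatment, "reconstruct" or "scale",
    best approximates the original alpha channel by total squared error,
    so that the choice can be stored in the base-image metadata."""
    err_reconstruct = sum((a - b) ** 2
                          for a, b in zip(original_alpha, reconstructed_alpha))
    err_scale = sum((a - b) ** 2
                    for a, b in zip(original_alpha, scaled_alpha))
    return "reconstruct" if err_reconstruct < err_scale else "scale"
```

As the text notes, a smoothly varying alpha channel will often favor simple scaling, in which case this comparison returns `"scale"`.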

In further accordance with the image processing system 10, upon demand from a receiving device, the server sends the base image with its metadata (or uses an alternate means of communicating the lead primary color and method for alpha channel treatment) followed by the first subset of extra data images comprising pixel values of the lead primary color, and thereafter, the server device sends the second subset of extra data images for the remaining color components. While sequentially receiving the first subset of extra data images, the receiving device applies reconstruction, through algorithms resident on the receiving device, or provided to the receiving device by the server device, for example, through the Hypertext Markup Language of a web page, to progressively reconstruct an intermediate image having the resolution of the original high-resolution image from the base image and the pixel values of the first subset of extra data images, and using such pixel values as an estimate for all other extra data primary color components. If an alpha or transparency component is also present on the receiving device, the receiving device, as instructed by the metadata of the base image, either scales that component up to the final resolution, or uses the first subset of extra data image values for reconstruction as well. Since the base image includes all colors of the original high-resolution image, this process therefore creates an intermediate image with the full color and resolution of the original image, albeit with less than full fidelity due to the use of a single primary color of the extra data images during reconstruction. 
Thereafter, and upon receiving the second subset of extra data images, the receiving device then performs progressive reconstruction using the base image and the pixel values of the remaining extra data image components of the second subset, replacing the final image pixel values for the remaining primary color components and alpha channel (if present) with the reconstructed values when complete, and thereby fully and losslessly restoring the original image.
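The receiving-device step described above, in which the single lead-color extra-data value initially stands in for every primary color's extra data, can be sketched for one pixel pair as follows. Pixels are represented here as dicts of component name to value; this structure and the function name are assumptions for illustration only.

```python
def reconstruct_pixel_pair(base_pixel, lead_edpv, extra_data=None):
    """Reconstruct a pair of adjacent full-resolution pixels from one
    base-image pixel using the reconstruction formulae. During the
    intermediate pass, when only the first (lead-color) subset has
    arrived, extra_data is None and the lead color's extra-data value
    is used as the estimate for every component; once the second subset
    arrives, the true per-component extra data replaces the estimate."""
    p1, p2 = {}, {}
    for component, lrpv in base_pixel.items():
        edpv = extra_data[component] if extra_data is not None else lead_edpv
        p1[component] = lrpv + edpv - 255 / 2
        p2[component] = lrpv - edpv + 255 / 2
    return p1, p2
```

In the intermediate pass, every component of the pair is reconstructed from the same lead-color extra-data value, yielding a full-color, full-resolution pixel pair with less than full fidelity; the final pass repeats the computation with each component's own extra data.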

The intermediate image created by the image processing system 10 presents the full resolution of the final image in much less time than required to directly display the high-definition image 12, because the reconstruction to that resolution only requires the transmission and reception of a single color component of the extra data images (i.e. the first subset) instead of all color components of the complete set. While the fidelity of this intermediate image is very likely to be less than that of the final image, it will nonetheless be a very good representation if the pixel values of the first subset of extra data images are good estimates of the corresponding pixel values of all other primary colors. As mentioned earlier, compaction of typical images shows that extra data images, whose pixel values are primarily based on differences of spatially adjacent pixel values of the original image, appear substantially as grayscale images. This implies that all color components of such extra data pixels are very similar, and therefore that using one primary color value is a reasonable estimate for all other primary color values of an extra data pixel when used for reconstruction.

Of course, such similarities cannot necessarily be extended to an alpha or transparency channel, because such a component often has spatial characteristics far different from those of the primary color channels. In fact, in typical images, alpha channel content likely has a spatial structure that is smoothly varying (for example, to support gradient blending), and therefore simple scaling to the higher resolution can be both simple and sufficient for the alpha channel of an intermediate image. In any case, the aforementioned testing on the server device of such scaling compared to reconstruction of the alpha component using the lead primary color extra data will provide the best guidance for which method should be used by the receiving device for the intermediate image.

All color components of the extra data images are still contained in the combination of first and second subsets. Such extra data images are simply sent as a first subset of essentially grayscale images representing the chosen first subset primary color component, for example, green, while the second subset contains the remaining color component values, for example, red, blue and alpha. In other words, such extra data images fundamentally comprise the same amount of total data, whether sent as the complete, full color image, or as the two subsets of the compacted image and associated extra data. Accordingly, from that perspective, there is no additional bandwidth required by the image processing system 10 to transmit and receive the complete extra data image values relative to transmitting a high-definition image 12 in its entirety. Assuming the additional reconstruction processing by the receiving device adds negligible time, the image processing system 10 therefore provides for the transmission and display of the final, high-resolution image in substantially the same time as might otherwise be required to display the high-definition image 12, while also providing for a high-resolution approximation to that high-definition image in significantly less time than if the high-definition image 12 were otherwise directly received and displayed.

From the perspective of the internet server 18, for example, acting as a webserver, the image processing system 10 initially receives and preprocesses a high-definition image 12, i.e. a high-resolution image, having at least two primary color components. The high-definition image 12 is progressively decimated in width and height while also creating a set of extra data images comprising extra data pixel values for all color components or channels, resulting in the creation and storage of a lower resolution base image, in such a way that the reverse decimation process can be used, beginning with the base image, to losslessly reconstruct the original high-resolution image. Then, for each primary color, reverse decimation is used to reconstruct a high-resolution test image from the base image, using the extra data image pixel values for that primary color, for all primary colors of the test image. Then, the internet server 18/webserver determines which reconstructed primary color test image produces the least total mean squared error between all primary color pixel values of the test image and those of the original high-resolution image, and indicates this least mean squared error color as the "lead" color in the base image metadata. Then, a first extra data image subset is created and stored from the extra data images having pixel values only for this lead color, and a second extra data image subset is also created and stored from the extra data images having pixel values excluding this color, but including all remaining colors of the set of extra data images.
If the high-definition image 12 includes an alpha or transparency channel as one of the color channels, the internet server 18/webserver uses reverse decimation to reconstruct that channel of the high-resolution image from the alpha channel of the base image using the first extra data image subset to create a first alpha channel test image, and uses conventional scaling algorithms to scale up the alpha channel of the base image to the resolution of the original high-resolution image to create a second alpha channel test image. Then the internet server 18/webserver determines which of either the first alpha channel test image or second alpha channel test image produces the least total mean squared error between such image and the alpha channel of the original high-resolution image, and as a result, indicates the associated method as a value in the metadata of the base image. Then, upon demand from a receiving device of an internet client 20, i.e. an internet-connected device 28, the internet server 18/webserver communicates thereto the base image (with metadata) and the first extra data image subset followed by the second extra data image subset, so as to provide for the substantially lossless reconstruction of the high-definition image 12.

The image processing system 10 therefore provides for the transmission and display of a lossless, high-resolution image by producing an intermediate image having the same resolution as the final high-resolution image, albeit with lower intermediate fidelity, but in much less time than the presentation of that final image, and with virtually no increase in the bandwidth required for the delivery and display of that final image. This relatively much faster presentation of the high-resolution intermediate image therefore significantly accelerates the user's perception of how quickly the image content appears, thereby supporting the transmission and display of such lossless, high-resolution images without otherwise excessive perceived delay.

While specific embodiments have been described in detail in the foregoing detailed description and illustrated in the accompanying drawings, those with ordinary skill in the art will appreciate that various modifications and alternatives to those details could be developed in light of the overall teachings of the disclosure. It should be understood that any reference herein to the term "or" is intended to mean an "inclusive or" or what is also known as a "logical OR", wherein when used as a logic statement, the expression "A or B" is true if either A or B is true, or if both A and B are true, and when used as a list of elements, the expression "A, B or C" is intended to include all combinations of the elements recited in the expression, for example, any of the elements selected from the group consisting of A, B, C, (A, B), (A, C), (B, C), and (A, B, C); and so on if additional elements are listed. Furthermore, it should also be understood that the indefinite articles "a" or "an", and the corresponding associated definite articles "the" or "said", are each intended to mean one or more unless otherwise stated, implied, or physically impossible. Yet further, it should be understood that the expressions "at least one of A and B, etc.", "at least one of A or B, etc.", "selected from A and B, etc." and "selected from A or B, etc." are each intended to mean either any recited element individually or any combination of two or more elements, for example, any of the elements from the group consisting of "A", "B", and "A AND B together", etc. Yet further, it should be understood that the expressions "one of A and B, etc." and "one of A or B, etc." are each intended to mean any of the recited elements individually alone, for example, either A alone or B alone, etc., but not A AND B together.
Furthermore, it should also be understood that unless indicated otherwise or unless physically impossible, that the above-described embodiments and aspects can be used in combination with one another and are not mutually exclusive. Accordingly, the particular arrangements disclosed are meant to be illustrative only and not limiting as to the scope of the invention, which is to be given the full breadth of the appended claims, and any and all equivalents thereof.

What is claimed is: