

Title:
IMAGE PROCESSING METHOD, APPARATUS AND COMPUTER PROGRAM
Document Type and Number:
WIPO Patent Application WO/2018/142110
Kind Code:
A1
Abstract:
The present invention provides a novel algorithm for salience detection based on a dual rail antagonistic structure to predict where people look in images in a free-viewing condition. Furthermore, the proposed algorithm can be applied effectively, in real time, to both still and moving images in visual media without any parameter tuning.

Inventors:
SOYEL HAMIT (GB)
MCOWAN PETER (GB)
Application Number:
PCT/GB2018/050246
Publication Date:
August 09, 2018
Filing Date:
January 29, 2018
Assignee:
UNIV LONDON QUEEN MARY (GB)
International Classes:
G06K9/46
Foreign References:
US20060215922A12006-09-28
Other References:
GUPTA SOUMYAJIT ET AL: "Psychovisual saliency in color images", 2013 FOURTH NATIONAL CONFERENCE ON COMPUTER VISION, PATTERN RECOGNITION, IMAGE PROCESSING AND GRAPHICS (NCVPRIPG), IEEE, 18 December 2013 (2013-12-18), pages 1 - 4, XP032582144, DOI: 10.1109/NCVPRIPG.2013.6776158
HAONAN YU ET AL: "Automatic interesting object extraction from images using complementary saliency maps", PROCEEDINGS OF THE ACM MULTIMEDIA 2010 INTERNATIONAL CONFERENCE : ACM MM'10 & CO-LOCATED WORKSHOPS ; OCTOBER 25 - 29, FIRENZE, ITALY, 1 January 2010 (2010-01-01), New York, NY, USA, pages 891, XP055461099, ISBN: 978-1-60558-933-6, DOI: 10.1145/1873951.1874105
IOANNIS KATRAMADOS ET AL: "Real-time visual saliency by Division of Gaussians", IMAGE PROCESSING (ICIP), 2011 18TH IEEE INTERNATIONAL CONFERENCE ON, IEEE, 11 September 2011 (2011-09-11), pages 1701 - 1704, XP032079937, ISBN: 978-1-4577-1304-0, DOI: 10.1109/ICIP.2011.6115785
ACHANTA R ET AL: "Frequency-tuned salient region detection", 2009 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION : CVPR 2009 ; MIAMI [BEACH], FLORIDA, USA, 20 - 25 JUNE 2009, IEEE, PISCATAWAY, NJ, 20 June 2009 (2009-06-20), pages 1597 - 1604, XP031607123, ISBN: 978-1-4244-3992-8
INFORMATION RESOURCES MANAGEMENT ASSOCIATION (IRMA): "Image Processing: Concepts, Methodologies, Tools, and Applications: Concepts, Methodologies, Tools, and Applications", 31 May 2013, IGI GLOBAL, ISBN: 1466639954, pages: 204 - 205, XP002779367
POYNTON, CHARLES A.: "Digital Video and HDTV: Algorithms and Interfaces", 1 January 2003, MORGAN KAUFMANN, ISBN: 1558607927, pages: 203 - 203, XP002779366
AKBARI A ET AL: "Adaptive saliency-based compressive sensing image reconstruction", 2016 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA & EXPO WORKSHOPS (ICMEW), IEEE, 11 July 2016 (2016-07-11), pages 1 - 6, XP032970825, DOI: 10.1109/ICMEW.2016.7574688
SCHACHTER, BRUCE: "Biological models for automatic target detection", SPIE, vol. 6967, no. 69670Y, 14 April 2008 (2008-04-14), XP040437539, DOI: 10.1117/12.778496
Attorney, Agent or Firm:
LEEMING, John Gerard (GB)
Claims:
CLAIMS

1. A method of processing an image to identify conspicuous regions thereof, the method comprising:

receiving an input image;

deriving first and second antagonistic images from the input image; and

obtaining a conspicuity map based on the first and second antagonistic images.

2. A method according to claim 1 wherein the first antagonistic image is a luminance image and the second antagonistic image is a negative luminance image.

3. A method according to claim 2 wherein obtaining a conspicuity map comprises deriving aggregated minimum ratio matrices using a Division of Gaussians method.

4. A method according to claim 3 wherein obtaining a conspicuity map comprises performing a weighted sum of the minimum ratio matrices.

5. A method according to claim 1 wherein the first and second antagonistic images are RG and BY color opponency images.

6. A method according to any one of the preceding claims wherein deriving first and second antagonistic images comprises blurring the input image.

7. A method according to any one of the preceding claims wherein a plurality of conspicuity maps are derived and further comprising obtaining a salience map from the conspicuity maps.

8. A method according to claim 7 wherein the conspicuity maps include at least one of: luminance conspicuity maps, color conspicuity maps and edge conspicuity maps.

9. A method according to claim 7 or 8 wherein obtaining a salience map comprises calculating a weighted average of the conspicuity maps using weights dependent on peak values of the respective conspicuity maps.

10. A method according to claim 7, 8 or 9 further comprising blurring and/or center-weighting the salience map.

11. A method according to claim 7, 8, 9 or 10 wherein the input image is one of a sequence of images and further comprising calculating a motion salience map based on the input image and one or more preceding images in the sequence of images; and combining the salience map with the motion salience map.

12. A method according to any one of the preceding claims further comprising displaying the conspicuity map or the salience map superimposed on the input image.

13. A method according to claim 12 further comprising capturing the input image using a camera and displaying the conspicuity map or the salience map superimposed on the input image substantially in real time.

14. A method of compressing an image or a sequence of images, the method comprising identifying conspicuous regions of the image or sequence of images using the method of any one of the preceding claims and compressing conspicuous regions with greater fidelity than other regions of the image.

15. A computer program comprising computer interpretable code that, when executed on a computer system, instructs the computer system to perform a method according to any one of the preceding claims.

16. An image processing apparatus comprising a processor and a memory, the memory storing a computer program according to claim 15.

17. An image processing apparatus according to claim 16 further comprising an image capture device configured to capture an image as the input image.

Description:
Image Processing Method, Apparatus and Computer Program

[0001] The present invention relates to image processing and in particular to methods, apparatus and computer programs for automatically identifying regions of interest in an image or scene.

[0002] It is known that a human observer of an image or scene does not devote equal attention to all parts of the visible scene or image but rather certain features will catch the eye more than others. In various fields it is desirable to know what features in an image or scene will attract the user's attention most. For example, when designing a user interface (e.g. a GUI for a computer system, a control panel for a machine or a dashboard for a vehicle) it is important to ensure that the most important information or status indicators come first to the user's attention. Another example is signage, e.g. in buildings where it is desirable that emergency exit notices stand out or in transportation where signs and signals need to be easily identified and interpreted without undue distraction to drivers.

[0003] A known approach to identifying the areas in an image or scene that will attract attention is to have test subjects view the image or scene whilst being monitored by an eye tracking device. The eye tracking device observes the eyes of the test subject and works out where he or she is looking. This approach is time consuming, especially as it is necessary to use many test subjects to obtain an unbiased result.

[0004] According to the invention, there is provided a method of processing an image to identify conspicuous regions thereof, the method comprising:

receiving an input image;

deriving first and second antagonistic images from the input image; and

obtaining a conspicuity map based on the first and second antagonistic images.

[0005] Embodiments of the present invention can therefore provide an automatic and objective determination of which parts of an image or scene will attract the attention of an observer. The use of two antagonistic images improves the accuracy of the results. For the purpose of the present invention, antagonistic images are images that encode data from one channel but with opposite senses. In one of a pair of antagonistic images a high channel value is encoded as a high signal value whilst in the other of the pair a high channel value is encoded as a low signal value. In the case of a luminance channel, one of the pair of antagonistic images may be the original image and the other an inverted image. In the case of color channels, the pair of antagonistic images may be different color difference signals.
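By way of illustration only, a luminance antagonistic pair can be formed in a few lines of Python with OpenCV; the file name is hypothetical and the sketch is illustrative rather than part of the claimed method:

```python
import cv2

# Dual-rail luminance pair for an 8-bit image: the direct signal and its
# inverse encode the same channel with opposite senses.
img = cv2.imread("input.png")                # hypothetical test image
lum = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # direct rail: bright = high value
lum_neg = 255 - lum                          # inverse rail: bright = low value
```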

[0006] The use of antagonistic images can be considered analogous to the human visual system, which encodes information from the eye photoreceptors in the form of ON-center and OFF-center pathways projecting to central visual structures from the retina. These two pathways originate at the bipolar cell level: one class of bipolar cells becomes hyperpolarized in response to light, as do all photoreceptor cells, and the other class becomes depolarized on exposure to light, thereby inverting the receptor signal; it is the difference between these pathways that is further processed. This antagonistic encoding can also be found in color perception, where it is the balance between two separate channels that is encoded rather than just a single signal, for example the difference between red and green, or between blue and yellow.

[0007] In the primary visual cortex, different cells detect features such as color, luminance, orientation and motion depending on the selectivity of their receptive fields. An embodiment of the invention can employ five feature channels which analyze the input image: one luminance channel, two color channels, one orientation channel and one motion channel. Input images are transformed, employing the antagonistic approach, into positive and negative features in each of the five channels, again using the two measures, the direct and inverse signals, to extract the sensory conspicuity features of each feature channel individually.

[0008] In an embodiment of the invention, the antagonistic feature channels are combined to generate the final salience map, for example using a dynamic weighting procedure ensuring that the contribution of each conspicuity map is never fixed but is instead dependent on the activity peaks in the signal.

[0009] Since the method of the invention requires a relatively low computational effort, embodiments of the present invention can perform this determination in real time using inexpensive hardware. Because real time processing is possible, the present invention can be applied to problems for which prior art approaches to determining salience are unsuited. For example, the present invention could be applied in autonomous vehicles or surveillance systems to assist in identifying objects requiring attention. In addition, the present invention can be applied during compression of images and/or video data to identify the more salient parts of the image which can then be encoded with higher fidelity, e.g. higher resolution or higher bitrate, than other, less-salient parts of the image.

[0010] Exemplary embodiments of the present invention will be described below with reference to the accompanying drawings, in which:

Figure 1 is a diagram of an image processing method according to an embodiment of the present invention;

Figure 2 is a diagram of an image processing apparatus according to an embodiment of the present invention;

Figures 3A to 3D show an example input image and the effects of various processes applied to it;

Figures 4A and 4B show a luminance conspicuity map generated according to the present invention and according to a prior method respectively;

Figures 5A and 5B show an example image and a color conspicuity map derived therefrom respectively;

Figures 6A and 6B show another example image and a color conspicuity map derived therefrom;

Figures 7A and 7B show an example image and an edge conspicuity map derived therefrom;

Figures 8A to 8F show an example input image, results of various processing steps carried out on it and a final salience map according to an embodiment of the invention;

Figures 9A and 9B show the effects of different enhancement processes carried out on a salience map according to an embodiment of the present invention; and

Figure 10 shows various sample images together with visual salience indicated by eye tracking experiments and salience maps generated according to an embodiment of the present invention.

[0011] In the following description, like parts depicted in more than one figure are denoted by like reference numerals. In various Figures conspicuity and salience values are indicated on a color scale where blue indicates a low value and red indicates a high value in the conventional manner.

[0012] The present invention aims to predict visual attention for an average observer with a free-viewing task by filtering the input image into a number of low-level visual "feature channels" at the same spatial domain, for features of some or all of color, intensity, orientation and motion (as found in the visual cortex). The term "free viewing" refers to situations in which observers view their world without a specific goal. The present invention is based on consideration of low-level mechanisms as well as the system-level computational architecture according to which human vision is organized.

[0013] A method according to an embodiment of the present invention is depicted at a high level in Figure 1 and is explained further below. Figure 2 depicts an apparatus according to an embodiment, for carrying out the method.

[0014] An apparatus according to an embodiment of the present invention comprises a client device 10 which includes a central processing unit 11 connected to a storage device 12, an input device such as a keyboard 13 and a display 14. Images to be processed by the invention can be captured using a still camera 15 or a video camera 16, retrieved from the storage device 12 or obtained from another source, e.g. via the internet 20. The central processing unit 11 may include a graphics processing unit (GPU) in order to perform parallel calculations optimally.

[0015] The apparatus can take the form of a portable device such as a smartphone or tablet in which all of the elements - including an image capture device, display and touch panel input - are combined into a single compact housing. The outputs of the present invention may be stored in the storage device 12, displayed to the user, or transmitted to another computer. In an embodiment of the invention, some or all of the steps of the method can be carried out on a remote server 21 connected to the client computer 10 via a network 20, such as the internet.

[0016] Embodiments of the present invention aim to provide a determination of the salience of an image, for example in the form of a salience map. The salience (also called saliency) of an item - be it an object, a person, a pixel, etc. - is the distinct subjective perceptual quality which makes some items in the observed world stand out from their background and immediately grab our attention. Embodiments of the present invention may utilize a numerical value to indicate salience, which may be determined in absolute terms or relatively across one or more images.

[0017] In the description below, the term "sensory conspicuity features", or simply "conspicuity features", is used to refer to features or parts of an image which are conspicuous, e.g. by crossing a threshold on a relative or absolute scale of salience.

[0018] As shown in Figure 1, the present embodiment of the invention receives S1 an input color image I and performs respective processes S2, S3, S4 to obtain one or more conspicuity maps C based on one or more of luminance, color, and spatial frequency (including edges). In an embodiment, a monochrome image can also be used. The conspicuity maps C are combined S5 to form a salience map S. The salience map S can be enhanced by performing various enhancement steps, such as applying a blur S6, or including motion information S7. The final salience map is output S8, e.g. by displaying it or storing it in memory.

[0019] An algorithm S2 for generating a luminance conspicuity map is described first. In an embodiment of the invention, luminance contrast is the primary variable on which salience computation is based. It is also the first type of information extracted by the human visual system in the retina.

[0020] A computational model named Division of Gaussians (DoG) can be used for deriving a luminance conspicuity map in real time. The DoG model is described further in Katramados, I., Breckon, T.P.: 'Real-time visual saliency by Division of Gaussians', in 18th IEEE International Conference on Image Processing (ICIP), 2011, which document is hereby incorporated by reference in its entirety. The DoG model comprises three distinct steps to derive a visual salience map.

[0021] In the first step, a luminance image U_1 is derived from the input image I and used to generate a Gaussian pyramid U comprising n levels, starting with image U_1 as the base with resolution w × h. Higher pyramid levels are derived via down-sampling using a 5×5 Gaussian filter. The top pyramid level has a resolution of (w/2^(n-1)) × (h/2^(n-1)). This image is referred to as U_n.

[0022] In the second step, U_n is used as the top level of a second Gaussian pyramid D to derive its base D_1. In this case, lower pyramid levels are derived via up-sampling using a 5×5 Gaussian filter.

[0023] In the third step, an element-by-element division of U_1 and D_1 is performed to derive the minimum ratio matrix M of their corresponding values, as described by:

M(i,j) = min( U_1(i,j) / D_1(i,j), D_1(i,j) / U_1(i,j) )    (1)

[0024] The luminance conspicuity map is then given by:

C(i,j) = 1 − M(i,j)    (2)
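The three DoG steps can be sketched in Python with OpenCV, whose pyrDown and pyrUp functions apply the 5×5 Gaussian filter described above; this is an illustrative sketch under stated assumptions (a small positive offset to avoid division by zero), not the reference implementation of the cited paper:

```python
import cv2
import numpy as np

def dog_conspicuity(lum: np.ndarray, n: int = 5) -> np.ndarray:
    """Division of Gaussians conspicuity of a single-channel image."""
    u = [lum.astype(np.float64) + 1.0]      # offset keeps all values positive
    for _ in range(n - 1):                  # pyramid U: down-sample n-1 times
        u.append(cv2.pyrDown(u[-1]))
    d = u[-1]                               # U_n is the top of pyramid D
    for level in reversed(u[:-1]):          # pyramid D: up-sample back to base
        d = cv2.pyrUp(d, dstsize=(level.shape[1], level.shape[0]))
    m = np.minimum(u[0] / d, d / u[0])      # minimum ratio matrix, equation (1)
    return 1.0 - m                          # conspicuity map, equation (2)
```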

[0025] However, the present embodiment uses both the input image I and its negative I′, which provides lower contrast but with a wider dynamic range. The present embodiment allows investigation of local features in a dual rail antagonistic structure, where the direct and inverse images are used to intrinsically derive a luminance conspicuity map. The proposed method comprises six steps to derive a visual salience map, as detailed below.

[0026] First, the input image I is blurred S2.1, e.g. using a 7×7 Gaussian filter, to replicate the low-pass spatial filtering which occurs when the eye's optical system forms a retinal image. This step can be omitted if the resolution of the input image is low. Exemplary blurred positive and negative images are shown in Figures 3A and 3C respectively.

[0027] Secondly, the relative luminance, Y_O, and negative luminance, Y_N, of the RGB values of the blurred image Ĩ are calculated S2.2, S2.3 as:

Y_O = 0.5010 × r + 0.4911 × g + 0.0079 × b    (3)

Y_N = 255 − Y_O    (4)

The weights of the R, G and B channels were calculated according to the experimental display characteristics to fit V(λ), the CIE luminosity function of the standard observer for objects viewed at a distance (see https://www.ecse.rpi.edu/~schubert/Light-Emitting-Diodes-dot-org/Sample-Chapter.pdf). Other weights may be appropriate in other circumstances.

[0028] Thirdly, minimum ratio matrices are derived S2.5, S2.6 using the DoG approach explained above for both the blurred input image, giving M_O, and the blurred negative image, giving M_N, as depicted in Figures 3B and 3D respectively.

[0029] Fourthly, an aggregated minimum ratio matrix M_A is calculated S2.7 from the M_O and M_N derived in Step 3 as:

M_A = (1 − λ)M_O + λM_N    (5)

where the tuning parameter λ (equation (6)) is derived using intrinsic image measures, namely the coefficients of variation, σ/μ, of M_O and M_N.

[0030] Fifthly, a normalised minimum ratio matrix M_Y is derived S2.8 from the M_A and λ obtained in Steps 3 and 4 as:

M_Y = (M_A − M_A_MIN) / (M_A_MAX − M_A_MIN)    (7)

[0031] Sixthly, a luminance conspicuity map C_Y is derived S2.9 from (5) and (7) as:

C_Y(i,j) = 1 − M_Y(i,j)    (8)

[0032] The luminance conspicuity map C_Y for the example image is shown in Figure 4A alongside the corresponding map generated by the DoG method, Figure 4B, for comparison.
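Steps one to six can be sketched in Python with OpenCV as below. The normalized coefficient-of-variation ratio used for λ and the final inversion are assumed forms of equations (6) and (8), since those equations are not fully legible in the source text:

```python
import cv2
import numpy as np

def min_ratio_matrix(lum: np.ndarray, n: int = 5) -> np.ndarray:
    """DoG minimum ratio matrix of a single-channel image (equation (1))."""
    u = [lum.astype(np.float64) + 1.0]
    for _ in range(n - 1):
        u.append(cv2.pyrDown(u[-1]))
    d = u[-1]
    for level in reversed(u[:-1]):
        d = cv2.pyrUp(d, dstsize=(level.shape[1], level.shape[0]))
    return np.minimum(u[0] / d, d / u[0])

def luminance_conspicuity(img_bgr: np.ndarray) -> np.ndarray:
    """Dual-rail luminance conspicuity C_Y, steps S2.1 to S2.9."""
    blurred = cv2.GaussianBlur(img_bgr, (7, 7), 0)        # S2.1: 7x7 blur
    b, g, r = cv2.split(blurred.astype(np.float64))
    y_o = 0.5010 * r + 0.4911 * g + 0.0079 * b            # equation (3)
    y_n = 255.0 - y_o                                     # equation (4)
    m_o = min_ratio_matrix(y_o)                           # S2.5
    m_n = min_ratio_matrix(y_n)                           # S2.6
    # Tuning parameter lambda from the coefficients of variation sigma/mu
    # of M_O and M_N; this normalized ratio is an assumed form of (6).
    cv_o, cv_n = m_o.std() / m_o.mean(), m_n.std() / m_n.mean()
    lam = cv_n / (cv_o + cv_n)
    m_a = (1.0 - lam) * m_o + lam * m_n                   # equation (5)
    m_y = (m_a - m_a.min()) / (m_a.max() - m_a.min())     # equation (7)
    return 1.0 - m_y                                      # assumed form of (8)
```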

[0033] Next a method S3 for generating a color conspicuity map is described.

[0034] Color opponencies are central to modelling the contribution of color to salience. To this end, the RGB values of the color input image are mapped S3.1 onto red-green (RG) and blue-yellow (BY) opponency features in a way that largely eliminates the influence of brightness. The color conspicuity map can be computed as follows.

[0035] First, dual antagonistic color opponencies are computed as:

F_1 = (r − g) / max(r,g,b),    F_2 = (g − r) / max(r,g,b)    (9)

F_3 = (b − min(r,g)) / max(r,g,b),    F_4 = (min(r,g) − b) / max(r,g,b)    (10)

When the values of F_1, F_2, F_3 and F_4 are negative, these values are set to zero.

[0036] Secondly, RG and BY features are derived S3.2 from the dual antagonistic color opponencies:

RG = (1 − α)F_1 + αF_2    (11)

BY = (1 − β)F_3 + βF_4    (12)

where the tuning parameters α and β are derived using intrinsic image measures, namely the coefficients of variation, σ/μ, of the dual antagonistic color opponencies. When the intensity value of a pixel in a scene image is very small, the color information of the pixel is hardly perceived. Thus, to avoid large fluctuations of the color opponency values at low luminance, RG and BY are set to zero at locations with max(r,g,b) < 1/10, assuming a dynamic range of [0,1].

[0037] Thirdly, the color conspicuity map, C_C, is derived S3.3 from (11) and (12) as:

C_C(i,j) = BY(i,j) + RG(i,j)    (13)

[0038] Examples of color conspicuity maps are shown in Figures 5A, 5B, 6A and 6B, where A shows the original image and B the resulting color conspicuity map.
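The color channel can be sketched as follows in Python; the equal α and β weights stand in for the coefficient-of-variation derivation, which is not given explicitly above, and are assumptions:

```python
import numpy as np

def color_conspicuity(img_rgb: np.ndarray) -> np.ndarray:
    """Color conspicuity C_C from dual antagonistic opponencies."""
    rgb = img_rgb.astype(np.float64) / 255.0              # dynamic range [0, 1]
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    mx = np.maximum(rgb.max(axis=-1), 1e-6)               # max(r, g, b)
    f1 = np.clip((r - g) / mx, 0.0, None)                 # equation (9)
    f2 = np.clip((g - r) / mx, 0.0, None)
    f3 = np.clip((b - np.minimum(r, g)) / mx, 0.0, None)  # equation (10)
    f4 = np.clip((np.minimum(r, g) - b) / mx, 0.0, None)
    alpha = beta = 0.5                                    # assumed tuning weights
    rg = (1.0 - alpha) * f1 + alpha * f2                  # equation (11)
    by = (1.0 - beta) * f3 + beta * f4                    # equation (12)
    dark = rgb.max(axis=-1) < 0.1                         # low-luminance cut-off
    rg[dark] = 0.0
    by[dark] = 0.0
    return rg + by                                        # equation (13)
```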

[0039] Next an algorithm S4 for generating an edge (orientation) conspicuity map is described. Biological visual systems are highly adapted to the image statistics of the natural world. A particularly important aspect of the statistics of natural scenes is the arrangement of the edges they contain. Edges are not arranged randomly, and the structure in their arrangements is important for shape recognition and texture discrimination. In an embodiment of the invention, an edge orientation conspicuity map is calculated as set out below:

[0040] First, Scharr gradient operators, e.g. of size 3×3, are used to calculate S4.1 the dominant edge orientations in the image; the horizontal, vertical and diagonal kernels d_x, d_y, d_xy and d_yx are given by equations (14) and (15).

[0041] Secondly, D_1, D_2 and D_A features are computed S4.2, S4.3 by convolving the intensity image, Y_O, with the dual antagonistic edge orientation kernels:

D_1 = (1 − α)Y_O * d_x + αY_O * d_y    (16)

D_2 = (1 − β)Y_O * d_xy + βY_O * d_yx    (17)

D_A = (1 − γ)D_1 + γD_2    (18)

where the tuning parameters α, β and γ (equation (19)) are derived using intrinsic image measures, namely the coefficients of variation, σ/μ, of the dual antagonistic edge orientations.

[0042] Thirdly, the edge orientation conspicuity map, C_E, is derived S4.4 by normalizing D_A (equation (20)).

[0043] Figures 7A and 7B show an example input image and the resulting edge conspicuity map.
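The edge channel can be sketched as below. The standard 3×3 Scharr kernels are used for d_x and d_y; the diagonal kernels d_xy and d_yx and the equal tuning weights are assumptions, since equations (14), (15) and (19) are not reproduced in the text above:

```python
import cv2
import numpy as np

D_X = np.array([[3, 0, -3], [10, 0, -10], [3, 0, -3]], dtype=np.float64)
D_Y = D_X.T                                    # vertical Scharr kernel
D_XY = np.array([[0, 3, 10], [-3, 0, 3], [-10, -3, 0]], dtype=np.float64)
D_YX = np.array([[10, 3, 0], [3, 0, -3], [0, -3, -10]], dtype=np.float64)

def edge_conspicuity(y_o: np.ndarray) -> np.ndarray:
    """Edge orientation conspicuity C_E, steps S4.1 to S4.4."""
    conv = lambda k: np.abs(cv2.filter2D(y_o.astype(np.float64), -1, k))
    d1 = 0.5 * conv(D_X) + 0.5 * conv(D_Y)     # equation (16), alpha = 0.5
    d2 = 0.5 * conv(D_XY) + 0.5 * conv(D_YX)   # equation (17), beta = 0.5
    d_a = 0.5 * d1 + 0.5 * d2                  # equation (18), gamma = 0.5
    rng = max(d_a.max() - d_a.min(), 1e-6)
    return (d_a - d_a.min()) / rng             # normalization, step S4.4
```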

[0044] The salience map is then derived by combining one or more of the conspicuity maps. One difficulty in combining color, intensity and edge orientation conspicuity maps into a single scalar salience map is that these features represent modalities which are a priori not comparable, with different dynamic ranges and extraction mechanisms. An embodiment of the present invention therefore uses a dynamic weighting procedure by which the contribution of each conspicuity map is not fixed but is instead dependent on the activity peaks of the conspicuity levels. A method of calculating a salience map from the conspicuity maps is described below.

[0045] First, statistical data is computed from the selected conspicuity maps:

θ_Y = max(C_Y),  θ_C = max(C_C),  θ_E = max(C_E)    (21)

θ̄_Y = μ(C_Y),  θ̄_C = μ(C_C),  θ̄_E = μ(C_E)    (22)

Σ = θ_Y + θ_C + θ_E,  Σ̄ = 2 × (θ̄_Y + θ̄_C + θ̄_E)    (23)

[0046] Secondly, a salience map is calculated by dynamically weighting the conspicuity maps:

S = αC_C + βC_E + γC_Y    (24)

where the weights α, β and γ (equation (25)) are derived from the peak and mean statistics of the respective conspicuity maps computed in (21) to (23).

[0047] Figures 8A to 8F depict an example input image, the various conspicuity maps and a resulting salience map.
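The dynamic weighting can be sketched as below; since the exact weight formula of equation (25) is not legible in the source, weights proportional to each map's activity peak are assumed:

```python
import numpy as np

def combine_maps(c_y, c_c, c_e):
    """Combine conspicuity maps into a salience map, equations (21)-(24)."""
    peaks = np.array([c_c.max(), c_e.max(), c_y.max()])   # peaks, equation (21)
    w = peaks / max(peaks.sum(), 1e-6)                    # assumed form of (25)
    return w[0] * c_c + w[1] * c_e + w[2] * c_y           # equation (24)
```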

[0048] Various optional enhancements to the salience map calculated as described above can be made. For example, the salience map can be blurred S5 and/or a bias towards the centre added to emulate foveal vision. In an embodiment a Gaussian filter, G (e.g. of size 15×15), is applied. In an embodiment a central bias map, S_C, e.g. with a Gaussian kernel of 7 with a weight of 0.3, is also applied. Figures 9A and 9B show, using the example image from Figures 3 and 4, the effect of the Gaussian filter G and of the combination of the Gaussian filter G and the central bias map S_C respectively. The combined output is calculated as:

S = 0.7 × G * S + 0.3 × S_C    (27)
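A sketch of this enhancement in Python follows; building the central bias map S_C as a separable Gaussian with sigma equal to one seventh of each image dimension is an assumed reading of the "Gaussian kernel of 7" above:

```python
import cv2
import numpy as np

def enhance(s: np.ndarray) -> np.ndarray:
    """Blur and center-bias a salience map, equation (27)."""
    h, w = s.shape
    blurred = cv2.GaussianBlur(s, (15, 15), 0)       # 15x15 Gaussian filter G
    ys = np.arange(h) - (h - 1) / 2.0
    xs = np.arange(w) - (w - 1) / 2.0
    gy = np.exp(-ys ** 2 / (2.0 * (h / 7.0) ** 2))   # column Gaussian
    gx = np.exp(-xs ** 2 / (2.0 * (w / 7.0) ** 2))   # row Gaussian
    s_c = np.outer(gy, gx)                           # center bias S_C, peak 1
    return 0.7 * blurred + 0.3 * s_c                 # equation (27)
```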

[0049] In an embodiment of the invention, ultra-wide angle images, such as 360° images, are processed. Such images can be obtained by stitching together images obtained using two or more wide angle imaging systems. When processing such images to determine salience, no central bias is applied, so that a field of view for closer examination can be selected after the event.

[0050] Another optional enhancement is to incorporate motion features. Temporal aspects of visual attention are relevant in dynamic and interactive setups such as movies and games, or where an observer is moving relative to the observed scene. An embodiment of the invention uses the motion channel to capture human fixations drawn to moving stimuli (in the primate brain, motion is derived by neurons in the MT and MST regions, which are selective to the direction of motion) by incorporating motion features between pairs of consecutive images in a dynamic stimulus to derive temporal salience, S_T, as follows:

[0051] Firstly, a difference image, DF, is computed from the current, Y_O[n], and previous, Y_O[n−1], images:

DF = |Y_O[n] − Y_O[n−1]|    (28)

[0052] Secondly, the difference image, DF, is blurred to remove detail and noise with a Gaussian filter, G (e.g. of size 15×15):

DF = G * DF    (29)

[0053] Thirdly, motion salience, S_M, is calculated by applying a hard threshold to the blurred difference image calculated in Step 2:

S_M(i,j) = DF(i,j) if DF(i,j) > 20, and S_M(i,j) = 0 otherwise    (30)

[0054] Fourthly, the motion salience, S_M, is added to the spatial salience, S, calculated in (27):

S_T = 0.3 × G * S_M + 0.7 × S    (31)
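The temporal pathway maps directly onto a few array operations; a sketch in Python with OpenCV following equations (28) to (31), where the inputs are consecutive luminance frames and the spatial salience map from (27):

```python
import cv2
import numpy as np

def temporal_salience(y_prev: np.ndarray, y_curr: np.ndarray,
                      s_spatial: np.ndarray) -> np.ndarray:
    """Motion-augmented salience S_T from consecutive luminance frames."""
    df = np.abs(y_curr.astype(np.float64) - y_prev.astype(np.float64))  # (28)
    df = cv2.GaussianBlur(df, (15, 15), 0)                              # (29)
    s_m = np.where(df > 20.0, df, 0.0)                                  # (30)
    return 0.3 * cv2.GaussianBlur(s_m, (15, 15), 0) + 0.7 * s_spatial   # (31)
```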

[0055] A performance analysis was performed on an MIT benchmark data set [Tilke Judd, Frédo Durand, and Antonio Torralba: 'A Benchmark of Computational Models of Saliency to Predict Human Fixations', MIT Computer Science and Artificial Intelligence Laboratory Technical Report]. The present invention was found to provide a useful approximation of the visual salience reflected in the eye-tracking data. Results are shown in Figure 10.

[0056] Because the computational effort required for the present invention is modest, it can be implemented on readily obtainable hardware and still provide a real-time salience map at a reasonable frame rate. Accordingly, an embodiment of the present invention provides a computer program that calculates a salience map for a screen display in real time and overlays the salience map on the display in semi-transparent form. The screen display can be the output of another application, the GUI of an operating system, a pre-recorded moving image, or a feed from an imaging device. The overlay can be used for testing the user interfaces of applications or operating systems, or for reviewing computer games and movies. The salience map generating program can be used in a portable computing device that includes an imaging device, e.g. a smartphone or tablet, to enable a live site survey of a building or other location. The present invention can also be applied in applications such as the control of autonomous vehicles for identifying objects requiring attention.
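Such an overlay can be sketched with OpenCV as below; the color map and the 0.6/0.4 blend ratio are arbitrary illustrative choices, not taken from the patent:

```python
import cv2
import numpy as np

def overlay_salience(frame: np.ndarray, salience: np.ndarray) -> np.ndarray:
    """Blend a salience map over an 8-bit BGR frame in semi-transparent form."""
    norm = cv2.normalize(salience, None, 0, 255, cv2.NORM_MINMAX)
    heat = cv2.applyColorMap(norm.astype(np.uint8), cv2.COLORMAP_JET)
    return cv2.addWeighted(frame, 0.6, heat, 0.4, 0)   # semi-transparent blend
```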

[0057] The invention can also be applied to the compression of images, including video signals. In such an embodiment, the salience of different areas of the image, or of frames of the video signal, is determined and used to control the compression process. For example, regions of higher salience are compressed less or encoded with greater fidelity than regions of lower salience. The regions of higher salience may be encoded at a higher resolution, at a higher bitrate and/or at a higher frame rate or otherwise prioritized over areas with a lower salience. Different block sizes may be used for regions of higher salience. In this way the image or video signal can be encoded in a given size or bandwidth whilst achieving a subjectively better output.
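A toy sketch of salience-guided compression follows. A practical codec would modulate bitrate, resolution or block size as described above; this sketch merely blends two JPEG qualities by a salience mask, and all parameter values are assumptions:

```python
import cv2
import numpy as np

def salience_guided_jpeg(img: np.ndarray, salience: np.ndarray,
                         q_high: int = 90, q_low: int = 30,
                         thresh: float = 0.5) -> np.ndarray:
    """Keep salient regions at high JPEG quality, the rest at low quality."""
    def jpeg(im: np.ndarray, quality: int) -> np.ndarray:
        ok, buf = cv2.imencode(".jpg", im, [cv2.IMWRITE_JPEG_QUALITY, quality])
        return cv2.imdecode(buf, cv2.IMREAD_COLOR)

    hi, lo = jpeg(img, q_high), jpeg(img, q_low)
    mask = (salience >= thresh)[..., None]      # salience normalized to [0, 1]
    return np.where(mask, hi, lo)               # salient pixels from hi version
```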

[0058] Thus the present invention provides a novel algorithm for salience detection based on a dual rail antagonistic structure to predict where people look in images in a free-viewing condition. Furthermore, the proposed algorithm can be applied effectively, in real time, to both still and moving images in visual media without any parameter tuning. An embodiment of the present invention comprises a computer program for carrying out the above described method. Such a computer program can be provided in the form of a stand-alone application, an update or add-in to an existing application, or an operating system function. Methods of the present invention can be embodied in a functional library which can be called by other applications.

[0059] It will be appreciated that the above description of exemplary embodiments is not limiting and that modifications and variations to the described embodiments can be made. For example, computational tasks may be performed by more than one computing device, serially or concurrently. The invention can be implemented wholly in a client computer, on a server computer, or with a combination of client- and server-side processing. Certain steps of methods of the present invention involve parallel computations that are apt to be implemented on processors capable of parallel computation, for example GPUs. The present invention is not to be limited save by the appended claims.




 