


Title:
MULTI RESOLUTION SUBSAMPLED SALIENCY
Document Type and Number:
WIPO Patent Application WO/2023/063835
Kind Code:
A1
Abstract:
A method for generating a saliency map for use in polygon reduction, including: processing a 3D object file of an object by storing vertices of meshes of the object in a data structure; determining a tropical angle of curvature for each vertex within a geodesic radius, wherein the tropical angle of curvature represents a difference in angles of normal vectors at all vertices connected to the vertex by a mesh edge; and generating a saliency value for each vertex for the saliency map based on the tropical angle of curvature.

Inventors:
ALLEN BENJAMIN PETER
ANJO KENICHI
KUFFNER DOS ANJOS RAFAEL
ROBERTS RICHARD ANDREW
Application Number:
PCT/NZ2022/050128
Publication Date:
April 20, 2023
Filing Date:
October 10, 2022
Assignee:
VICTORIA LINK LTD (NZ)
International Classes:
G06T15/20; G06T17/20
Domestic Patent References:
WO2013123636A1 (2013-08-29)
Foreign References:
US20090092314A1 (2009-04-09)
US20130215115A1 (2013-08-22)
US6356658B1 (2002-03-12)
Other References:
XING LIAN PING, HUI KIN CHUEN: "Entropy-based Mesh Simplification", COMPUTER-AIDED DESIGN AND APPLICATIONS, vol. 7, no. 6, 1 January 2010 (2010-01-01), pages 911 - 918, XP093062065, DOI: 10.3722/cadaps.2010.911-918
Attorney, Agent or Firm:
DAVIES COLLISON CAVE PTY LTD (NZ)
Claims:
CLAIMS
1. A method for generating a saliency map for use in polygon reduction, including: processing a 3D object file of an object by storing vertices of meshes of the object in a data structure; determining a tropical angle of curvature for each vertex within a geodesic radius, wherein the tropical angle of curvature represents a difference in angles of normal vectors at all vertices connected to the vertex by a mesh edge; and generating a saliency value for each vertex for said saliency map based on said tropical angle of curvature.
2. A method as claimed in claim 1, wherein an entropy of the tropical angle of curvature is generated for each vertex within a geodesic neighbourhood defined by said geodesic radius, and said saliency value is generated from said entropy.
3. A method as claimed in claim 2, wherein a smooth step process is applied to each entropy to avoid sharp discontinuities in saliency values for said saliency map.
4. A method as claimed in claim 1, 2 or 3, wherein each saliency value generated for a vertex is generated over a number of steps using the entropy for a given geodesic radius, where the radius is a percentage value of a total area of the mesh and the radius is decreased for each step to generate the saliency value over different scales.
5. A method as claimed in claim 1, 2, 3 or 4, wherein the vertices of said file are sampled to generate a subset of vertices for determining the saliency values of the subset, and the saliency values of remaining unsampled vertices are interpolated.
6. A method as claimed in claim 5, wherein adaptive sampling is applied to the vertices based on a selected exclusion geodesic radius.
7. A method as claimed in claim 5, wherein the saliency values of the unsampled vertices are interpolated using an inverse-distance-weighted mean of the saliency values of the sampled vertices.
8. A method as claimed in any one of the preceding claims, wherein the data structure stores neighbouring vertices together.
9. A method as claimed in claim 8, wherein the data structure stores the vertices and lists of neighbours in memory and uses offsets into the memory to access vertices for processing.
10. A method as claimed in any one of the preceding claims, wherein vertices within a neighbourhood of a vertex, defined by the geodesic radius, are accessed in a group for generating saliency values, and the neighbourhood of a predetermined number of adjacent vertices are accessed together from the data structure and are processed in parallel.
11. A method for generating a saliency map for polygon reduction, including: processing a file for a 3D object to store vertices of polygon meshes of the object in a data structure, said data structure storing neighbouring vertices of the polygon meshes together; determining an angle of curvature of each vertex; and generating a saliency value for each vertex for the saliency map by determining the entropy of the angles of curvature of the neighbouring vertices within a geodesic radius of the vertex.
12. A method as claimed in claim 11, wherein the neighbouring vertices are processed in parallel.
13. A mesh processing system for generating a saliency map, said system including: a processor for accessing vertices of an object mesh and processing the vertices according to a method as claimed in any one of the preceding claims; and memory for storing said vertices in said data structure.

14. A computer readable medium storing code for executing a method as claimed in any one of claims 1 to 12.

Description:
MULTI RESOLUTION SUBSAMPLED SALIENCY

FIELD

The present invention relates to a method for generating a saliency map for use in polygon reduction, such as retopology or mesh decimation.

BACKGROUND

In computer graphics, three dimensional (3D) objects are represented by polygon meshes. Sculpting and 3D scanning are two popular methods used to create polygon meshes for productions, but they result in meshes that are extremely data intensive and computationally complex to process; a sculpted or scanned mesh usually contains millions of polygons. Due to these overheads, the polygon meshes often need to be compressed before being animated, where the polygon meshes of an object change each frame. Polygon reduction, either through mesh decimation or retopology, is a process that is used to reduce the number of polygon faces needed for a mesh to effectively represent the object. Polygon reduction can be performed by existing tools, such as ZBrush, but can risk lowering the quality by removing too many polygon faces in a given area. To significantly improve the quality of the mesh reduction, a saliency map of the 3D model can be used. A saliency map can be produced from the 3D model file that specifies or defines the meshes or data points of the 3D model that are the most important from a visual perspective for the shape of the mesh. In production, artists manually guide the simplification to ensure visual quality, in which they may generate a guide for the polygon reduction by painting colours onto the mesh (e.g. red to increase polygon faces in a given area, and blue to reduce them). Having to paint a colour map requires significant manual effort, and the process hampers production by being time intensive and complex.
The latest developments in rendering, capture, and fabrication technology for animation have made it possible to create highly detailed 3D meshes for use not only in media such as films and animations but also in interactive multimedia, including games and other reactive content, such as Extended Reality (XR). However, loading and manually editing models with millions of vertices in interactive software requires significant memory and computing time. Also, using detailed models in simulation and real time rendering is generally not feasible, even with modern hardware. Many production studios cannot afford the extensive time or costs of processing such models and instead try to work with simplified versions. Accordingly, fast mesh decimation processes might be applied, as described in "A short survey of mesh simplification algorithms", Jerry O Talton, 2004, University of Illinois at Urbana-Champaign (2004), but these often sacrifice detail on visually important features. An alternative, costly approach is to employ artists to manually remove superfluous polygon faces (or even to recreate the mesh by hand), but that becomes impractical for the complex geometry of a 3D object. Recent saliency-based decimation, as described in (a) "Mesh Saliency Analysis via Local Curvature Entropy", Max Limper, Arjan Kuijper, and Dieter W. Fellner, 2016, Eurographics 2016, Vol. 2016, Eurographics - European Association for Computer Graphics, Switzerland, 13 ("Limper et al. 2016"); and (b) "Shape context based mesh saliency detection and its applications: A survey", Xianyong Liu, Ligang Liu, Weijie Song, Yanping Liu, and Lizhuang Ma, 2016, Computers & Graphics 57 (2016), 12-30 ("Liu et al. 2016"), simplifies complex meshes, retaining more detail on visually important regions while yielding coarser geometry in other areas.
Notably, importance estimation and decimation steps become two distinct passes, supporting parameter tuning and artist feedback before decimating (as described in "State of the Art in Artistic Editing of Appearance, Lighting and Material", Thorsten-Walther Schmidt, Fabio Pellacini, Derek Nowrouzezahrai, Wojciech Jarosz, and Carsten Dachsbacher, 2016, Computer Graphics Forum 35, 1 (2016), 216-233). The main limitation of these approaches lies in their high computational costs, requiring several days to process dense meshes common in production pipelines involving scanned or sculpted polygon meshes. The processing algorithms used also often require tuning over multiple runs before an ideal configuration is found. In "Local-to-global mesh saliency", Ran Song, Yonghuai Liu, Ralph R Martin, and Karina Rodriguez Echavarria, 2018, The Visual Computer 34, 3 (2018), 323-336, it is proposed to pre-decimate a mesh using standard methods, as described in "Quadric-based polygonal surface simplification", Michael Garland, 1999, Technical Report, Carnegie-Mellon Univ School of Computer Science, Pittsburgh PA. However, details are irreversibly lost during simplification, defeating the purpose of saliency-aware methods. Problems such as discerning noise from small-scale detail, robustness to differing tessellation levels, and detecting repeating patterns also remain open problems in mesh saliency estimation, as described in Liu et al. 2016. There is a significant demand for saliency-aware decimation in a production environment that is technically efficient, versatile, and robust. Accordingly, it is desired to provide a saliency map generation process that is able to operate efficiently on highly complex and varied 3D models and significantly improve executing mesh decimation on the models, or at least provide a useful alternative.
SUMMARY

An embodiment of the present invention provides a method for generating a saliency map for use in polygon reduction, including: processing a 3D object file of an object by storing vertices of meshes of the object in a data structure; determining a tropical angle of curvature for each vertex within a geodesic radius, wherein the tropical angle of curvature represents a difference in angles of normal vectors at all vertices connected to the vertex by a mesh edge; and generating a saliency value for each vertex for said saliency map based on said tropical angle of curvature. An embodiment also provides a method for generating a saliency map for polygon reduction, including: processing a file for a 3D object to store the vertices of polygon meshes of the object in a data structure, said data structure storing neighbouring vertices of the meshes together; determining an angle of curvature of each vertex; and generating a saliency value for each vertex for the saliency map by determining the entropy of the angles of curvature of the neighbouring vertices within a geodesic radius of the vertex. The method may process each vertex in parallel. The method may detect detail at multiple scales, using a multi-modal formulation combining local entropy of the tropical angle of curvature, local plane fitting, and crease detection.

DRAWINGS

Preferred embodiments of the present invention are described herein, by way of example only, with reference to the accompanying drawings, wherein: Figure 1 is a flow diagram of a preferred embodiment of a saliency map generation method performed by a mesh processing system; Figure 2 illustrates different measurements of curvature for an input 3D model with three areas with different levels of tessellation (the polygon mesh of 2A has shading to represent a tropical angle at each vertex, and 2B, 2C, and 2D show histogram representations);
Figure 3 illustrates different saliency maps for a 3D object for two different sensor noise (ε) reduction levels (parameters are adjusted, left-to-right, to show how the method can highlight details across different scales); Figure 4 is a block diagram of the mesh processing system; Figure 5 is a screen shot of a user interface generated by the system; Figure 6 is a diagram illustrating how subsampling optimization chooses a representative set of vertices of a 3D model for the method; and Figure 7 is a diagram of a saliency map generated for a 3D model and a polygon reduction using the saliency map.

DESCRIPTION

Computer graphics and vision researchers often refer to "Shifts in Selective Visual Attention: Towards the Underlying Neural Circuitry", Christof Koch and Shimon Ullman, 1987, Springer Netherlands, Dordrecht, 115-141 ("Koch and Ullman 1987") as a landmark work on how to understand human visual perception. It postulates that attention is naturally driven towards elements that are locally different from their surroundings, at different scales. This was introduced to image analysis, where bottom-up approaches look at local differences across scales to predict regions of interest in an image, and has been used in image processing for compression, re-targeting, retrieval, and other domains. The extension to 3D objects introduced mesh saliency as a measurement parameter that can be used for computer applications such as mesh decimation and view selection. For decimation, a saliency map is used by the computer process to produce meshes with higher detail in salient areas, while heavily simplifying non-salient regions. Using the approach on visual saliency as defined by Koch and Ullman 1987, saliency map generation should focus on mesh locations that contrast with their local neighbourhood at different scales. These can then be combined in a final measurement of saliency.
In Limper et al. 2016, a localized Shannon's entropy of the mesh curvature was used as the saliency measure. However, their formulation has limitations that make it impractical for production use. For example, while curvature values using the Taubin approximation, described in "Estimating the tensor of curvature of a surface from a polyhedral approximation", Gabriel Taubin, 1995, In Proceedings of IEEE International Conference on Computer Vision, IEEE, 902-907 ("Taubin 1995"), are easy to compute, their absolute values are sensitive to the tessellation density of the mesh. This produces "tessellation noise" that detracts from perceptual visual saliency and can induce non-existing pseudo details in dense mesh areas. Another limitation is that it is hard to identify regular repeating patterns that should be visually relevant (e.g. plaids on a skirt) as high entropy areas, while noisy surfaces will be so identified. This is because hand-crafted repeating patterns will not likely show high variance of curvature values, while noisy surfaces from range scans, or surface micro-details, will. Given these two main limitations, a method 100 for generating a saliency map for mesh decimation, described in detail below and shown in Figure 1, uses a different process to generate a measure of saliency for each mesh of a 3D object. The method calculates, at multiple scales, the product of a plane fitting measurement with a sigmoid-like response (which works as a noise detector) with the entropy of a new geometric measurement: herein referred to as a tropical angle of curvature. The tropical angle of curvature estimates the maximum immediate difference of normals at each vertex on a surface. This parameter is optimized using an automatic contrast function, allowing local differences to be mitigated or optimized, according to necessity.
The tropical angle of curvature is then combined independently with the multi-scale formulation in order to allow detection of crease-like features in an immediate neighbourhood. Users can control the locality of the saliency map using a hyperparameter which controls the calculation area, levels, and presence of the crease-like features. The noise detection function can also be set by a user. The components of the saliency measures that comprise the saliency map, and how they are combined, are discussed below.

Tropical angle of curvature (θ(v))

For each vertex v in an input 3D model, the tropical angle of curvature θ(v) is generated using Equation (1):

θ(v) = max_(a,b) arccos(n_a · n_b)    (1)

where the maximum is taken over the mesh edges connected to v, adj(v) are all vertices connected to v by a mesh edge, and n_a, n_b are the normal vectors at the vertices a and b of each edge connected to the vertex v, respectively. The θ(v) is a new geometric concept that can be treated as a kind of curvature, providing new features as described below. The tropical angle of curvature θ(v) is a more intuitive and stable measurement of local changes than the mean or Gaussian curvature, as it provides histogram values spanning a useful and practical range. For example, Figure 2 shows the mean curvature compared to θ(v). The input 3D model of Figure 2A has three different areas with varying tessellation levels. The absolute mean curvature of Figure 2B suffers heavily from outliers caused by particular mesh topologies, making an entropy analysis process volatile. Compared with the mean curvature with a manually set outlier boundary, as shown in Figure 2C, the tropical angle of curvature of Figure 2D is independent of tessellation level. Its values span the same range regardless of vertex area, and better reflect local differences. The tropical angle θ(v) is used for two different purposes in the saliency map generation method: first, to measure surface entropy over multiple areas, and second, to identify local contours that are visually salient.
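As a sketch only, the tropical angle can be read as the largest angle between the unit normals at the endpoints of the edges incident to a vertex. The function and container names below (`tropical_angle`, `adjacency`, `normals`) are illustrative, not taken from the application, and the exact form of Equation (1) may differ:

```python
import math

def tropical_angle(v, adjacency, normals):
    """Largest angle between unit normals across the mesh edges
    incident to vertex v (an illustrative reading of Equation (1))."""
    best = 0.0
    for u in adjacency[v]:
        dot = sum(a * b for a, b in zip(normals[v], normals[u]))
        dot = max(-1.0, min(1.0, dot))  # clamp against rounding error
        best = max(best, math.acos(dot))
    return best

# Tiny example: one neighbour parallel, one perpendicular.
normals = {
    0: (0.0, 0.0, 1.0),
    1: (0.0, 0.0, 1.0),  # angle 0 to vertex 0's normal
    2: (1.0, 0.0, 0.0),  # angle pi/2 to vertex 0's normal
}
adjacency = {0: [1, 2]}
theta = tropical_angle(0, adjacency, normals)  # pi/2
```

Because the "tropical" operation is a maximum rather than a mean, a single sharply tilted neighbour dominates the value, which is what makes it suitable for crease detection.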
Entropy of tropical angle of curvature (H(v, r))

As mentioned, the Taubin (Taubin 1995) curvature calculation produces absolute values that vary according to the tessellation level of the mesh, being higher for highly tessellated areas (since it roughly translates to the amount of change happening over an area). For this method, when discretizing the curvature measurement (Equation (2)), the likelihood of distinct curvatures in highly tessellated areas is increased, while lesser tessellated areas will fall inside the same bin, and thus be considered to be low saliency. Each discretized range of possible curvature values is referred to as a "bin", where n is a given number of discrete intervals. Outlier detection can still be used to exclude extreme curvature values from malformed faces. However, the example depicted in Figure 2 shows curvature values produced by a more tessellated area, which can naturally happen as the output of a scan or a sculpted model. So another entropy determination is used, using the tropical angle. For a given integer n, the "binning" operation of θ(v) into n bins is performed using:

k(v) = ⌊ n (θ(v)^β − θ^β_min) / (θ^β_max − θ^β_min) ⌋    (2)

where k(v) is the discretized value of the tropical angle of curvature in a range of n bins, i.e. k(v) is an integer that ranges from 0 to n. The input is the tropical angle of curvature θ(v), taken to the power of β, a user defined parameter that takes a positive value. It directly affects binning, as it will either spread out differences to different bins (β < 1) or concentrate them (β > 1). θ^β_min and θ^β_max are, respectively, the minimum and maximum values of the tropical angle of curvature to the power β for a given model. This is further discussed below. The bins can be rescaled and represented by I_j := [j - 1, j) (1 ≤ j ≤ n - 1) and I_n := [n - 1, n]. Given that gd(a, b) is the geodesic distance between a and b, A_vi can denote the influence area of a vertex v_i, calculated using mixed Voronoi cells.
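The binning step can be sketched as below. This follows the normalize-then-discretize reading of Equation (2); the clamping of the top edge into the last bin is an implementation choice, not stated in the application:

```python
def bin_curvature(theta, n, beta, theta_min, theta_max):
    """Discretize theta**beta into one of n bins (sketch of the
    binning operation; theta_min/theta_max are the model-wide
    extremes of the tropical angle)."""
    lo, hi = theta_min ** beta, theta_max ** beta
    if hi == lo:
        return 0  # degenerate model: all angles identical
    t = (theta ** beta - lo) / (hi - lo)
    return min(int(t * n), n - 1)  # clamp the maximum into the last bin

# beta = 1 bins linearly; beta < 1 spreads small differences apart.
mid_bin = bin_curvature(0.5, 10, 1.0, 0.0, 1.0)  # 5
top_bin = bin_curvature(1.0, 10, 1.0, 0.0, 1.0)  # 9
```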
The influence, or frequency, σ_j of the j-th curvature bin I_j is obtained by accumulating the influence areas per bin. To obtain σ_j (1 ≤ j ≤ n), the vertices in the geodesic neighbourhood are iterated and their respective vertex areas are accumulated per curvature bin using:

σ_j = Σ A_vi, summed over the vertices v_i in the geodesic neighbourhood whose binned curvature falls in I_j

Finally, the local entropy is determined per vertex v over a geodesic neighbourhood with radius r, with m being the number of vertices in the geodesic neighbourhood:

H(v, r) = − Σ_{j=1..n} p_j log2(p_j), where p_j = σ_j / Σ_{i=1..n} σ_i

Mitigating small scale noise (η(v, r))

In many instances, small features can be visually salient parts of a model. However, in other cases, they are mostly caused by noise. There are two scenarios where noise removal is needed. First, outputs of 3D scanning software or photogrammetry data often include small scale noise that should not be considered visually salient. Second, if a 3D model (or asset) is ultimately being used in a production pipeline featuring normal or displacement mapping, some details can safely be removed from the geometry while still being rendered. In these cases, it may be best to eliminate small scale details that drive the saliency up, while keeping other relevant features and changes. For example, while small edges are to be preserved, that might not be true for noisy surfaces. Given a level of sensor noise ε, the method seeks to remove the influence from fluctuations in a surface that are smaller than this value. The effect is harsher on low entropy surfaces such as flat plane-like structures. By approximating small neighbourhoods to a plane (which is true in such surfaces, and locally true for more complex structures), it can be determined when points in that area have a variation within the noise level by taking the distance to the estimated plane. To this end, let g be the average normal vector calculated over the geodesic neighbourhood.
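The area-weighted histogram entropy H(v, r) can be sketched as follows; `bins` and `areas` are hypothetical parallel lists holding, for each neighbourhood vertex, its curvature bin k(v_i) and its Voronoi influence area A_vi:

```python
import math

def local_entropy(bins, areas):
    """Shannon entropy of the area-weighted curvature-bin histogram
    over a geodesic neighbourhood (sketch of H(v, r))."""
    sigma = {}
    for b, a in zip(bins, areas):
        sigma[b] = sigma.get(b, 0.0) + a  # accumulate sigma_j per bin
    total = sum(sigma.values())
    h = 0.0
    for s in sigma.values():
        p = s / total
        h -= p * math.log2(p)
    return h

# Two equally weighted bins give one bit of entropy; a uniform
# neighbourhood (one bin) gives zero, i.e. low saliency.
h_mixed = local_entropy([0, 1], [1.0, 1.0])    # 1.0
h_flat = local_entropy([0, 0], [1.0, 1.0])     # 0.0
```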
Putting ΔN = max N − min N, η is then defined as a smooth step of ΔN with respect to ε. The function η(v, r) is a smooth step function created for use in the method to avoid sharp discontinuities in the range where the change is close to ε. Thus, ε will either scale down the contribution of a vertex v to the saliency parameter generated at that level, or keep it unchanged. Figure 3 illustrates how ε influences the resulting saliency map, and how it can be used to reduce high frequency details in the saliency map that occur due to inadvertent errors such as scanning noise. In particular, when the geometry in the region surrounding the current vertex approximates a plane, the ε parameter helps to preserve local consistency of saliency values. From left to right: a globality parameter, as described below, increases from 0 to 1 in steps of 0.25. The top row has ε = 0, meaning no noise reduction is applied, and the bottom row has ε = 0.02.

Multi-scale calculation (R(v))

As shown in Figure 1, the saliency map generation method 100 runs each step of saliency generation for a vertex using a given geodesic radius r and a number of steps ℓ, executing the determination ℓ times. The radius r of the geodesic search is a percentage value of the total area of the mesh, being decreased in each iteration from 0 to ℓ. The detected saliency at each level is combined with the evaluation of η(v, r), which diminishes the effect from noise. The multi-scale computation process is:

R(v) = Σ_{k=0..ℓ} η(v, r_k) H(v, r_k)

where r_k is the radius used at step k.

Multi-mode

To complete generation of the saliency parameter value for each given vertex, R(v) is normalised based on its range over all vertices. The saliency S(v) is generated for each vertex v in the mesh of a 3D object. R(v) is computed in a first process step, then it is normalized before adding a weighted contribution of θ(v), the tropical angle of curvature at v.
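A smooth step suppressing fluctuations below the sensor-noise level ε can be sketched as below. The transition window of [0, 2ε] and the classic cubic smoothstep are assumptions; the application states only that η is a smooth step avoiding sharp discontinuities near ε:

```python
def noise_weight(delta, eps):
    """Smooth-step weight that scales down the saliency contribution
    of plane deviations delta smaller than the noise level eps
    (sketch of eta; the exact transition shape is an assumption)."""
    if eps == 0.0:
        return 1.0  # no noise reduction requested
    t = max(0.0, min(1.0, delta / (2.0 * eps)))
    return t * t * (3.0 - 2.0 * t)  # cubic smoothstep over [0, 2*eps]

w_zero = noise_weight(0.0, 0.02)   # fully suppressed: 0.0
w_full = noise_weight(0.08, 0.02)  # well above noise: 1.0
w_mid = noise_weight(0.02, 0.02)   # inside the transition band
```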
This limits its contribution to the final saliency value S(v), not substantially increasing the relative saliency of regions with creases when compared to the rest of the mesh. The angle θ(v) successfully works in finding and highlighting creases in a variety of models. Moreover, θ(v) can be used at no extra computation cost, since it is already used to generate H(v, r). As a result, it can be used to enforce hard contours during a decimation process. The tropical angle θ(v) is more reliable for this purpose than previous curvature measurements.

Mesh processing system

The saliency map generation method 100, as shown in Figure 1, is executed by a mesh processing system 400, as shown in Figure 4. A 3D object file is received 102 for processing by the system 400. The file represents a mesh of the 3D object and defines all of the polygon meshes of the object. Once the file is received for processing and the parameters are set for the method, as described below, a vertex v of a mesh in the file is selected (step 104), k is set to zero (step 106) and a search commences at a user selected geodesic radius r (step 108). For radius r, the method generates H(v, r) (step 110), then generates η(v, r) (step 112), and then completes the multi-scale computation process to generate R(v) (step 114). At step 116, it is determined whether k has reached ℓ and, if not, k is incremented (step 118) and then the radius is halved (step 120) so as to commence a new search at the new radius r (step 108). Once k reaches ℓ at step 116, the method generates the saliency S(v) for the vertex (step 122) and moves to the next vertex in the mesh file (step 104) until all are processed. The saliency map comprises and represents the saliency for each vertex v in the mesh file of the object, as shown in Figures 3 and 5. The mesh processing system 400 includes a computer 402, such as an Intel architecture computer produced by Lenovo Corporation, IBM Corporation, or Apple Inc, as shown in Figure 4.
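The per-vertex loop of Figure 1 (radius halved each level, entropy weighted by the noise term, results accumulated) can be sketched as below. `entropy_fn` and `noise_fn` are hypothetical callbacks standing in for H(v, r) and η(v, r), and the additive combination mirrors the multi-scale formula as reconstructed here:

```python
def multiscale_saliency(v, r0, levels, entropy_fn, noise_fn):
    """Multi-scale accumulation following the flow of Figure 1
    (sketch): run levels+1 steps, halving the geodesic radius each
    time, and sum the noise-weighted entropy per level."""
    r, total = r0, 0.0
    for _ in range(levels + 1):
        total += noise_fn(v, r) * entropy_fn(v, r)
        r *= 0.5  # step 120: halve the search radius
    return total

# Toy stand-ins: entropy proportional to the radius, no noise damping.
R = multiscale_saliency(0, 0.1, 2,
                        entropy_fn=lambda v, r: r,
                        noise_fn=lambda v, r: 1.0)
# 0.1 + 0.05 + 0.025 = 0.175
```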
Mesh processing, user interface generation and the saliency map generation method 100 executed by the computer system 402 are defined and controlled by computer program instruction code and data of software components or modules 450 stored on non-volatile (e.g. hard disk) storage 404 of the computer 402. At least one database can also be stored on the storage 404 or remotely accessed, and this can be used to store the 3D object files that are processed and displayed. The modules 450 include a game engine 470 (such as Unreal Engine 4) and GPU shaders that can be integrated into the game engine or run separately, and applications 480 to run the method 100 and generate the user interface. The processes performed by the modules 450 can, alternatively, be performed by firmware stored in read only memory (ROM) or at least in part by dedicated hardware circuits of the computer 402, such as application specific integrated circuits (ASICs) and/or field programmable gate arrays (FPGAs). These integrated circuits can be part of a display adapter 414. The computer 402 includes random access memory (RAM) 406, at least one microprocessor (CPU) 408, and external interfaces 410, 412, and 414 that are all connected by a system bus 416. The external interfaces include at least universal serial bus (USB) interfaces 410, a network interface connector (NIC) 412, and the display adapter 414. Other interfaces, such as Bluetooth and WiFi interfaces, can also be included. The display adapter 414 includes a graphics processing unit (GPU) to support and execute rendering of the 3D objects and animations of the objects on a connected screen display 420. The NIC 412 enables the computer 402 to connect to a communications network 420. The network 420 may include one or a combination of existing networks, such as a LAN, private WAN, the PSTN, the Internet, mobile cellular telephone networks, etc. The computer 402 includes an operating system (OS) 424, such as Microsoft Windows, Mac OSX or Linux.
The modules 450 can run on the OS 424, and include program code written using languages such as C, C++, Ruby or C#, and other languages supported by the framework provided by the game engine 470.

Parameters

The saliency map generation method 100 allows a user to optimize and adjust execution to obtain a saliency map that provides the best trade-off between compression and detail preservation. The execution speed of the method 100 permits generation of a user interface by the system 400, as shown in Figure 5, where the parameters can be easily changed and explored at or near interactive speeds. There are five free parameters, as shown below in Table 1, that an experienced user could adjust, and the user interface provides a simplified version for easier control of the saliency map, as shown in Figure 5. First, the value ε can be set per model based on the estimated noise of the used 3D scan or sculpted model. Secondly, a reasonable value for β can be automatically determined from the observation that the entropy of the global curvature distribution approximates the mean entropy of all neighbourhood curvature distributions. A value of β is generated such that the global entropy is equal to a desired mean neighbourhood entropy. As the neighbourhood entropies usually extend to near the maximum possible value (for the histogram), the target is set at 0.45× the maximum possible entropy. This can be interpreted as trying to make the mean saliency white, as shown in the Figures. Finally, the values for ℓ and the remaining parameters are controlled by a hyper-parameter of "globality", which is the only user controlled value for quick use. A local map, using a more local configuration of the parameter, will be mindful of creases and highlight mostly local structures (e.g. ℓ = 1).
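The automatic choice of β (matching the global curvature entropy to 0.45× the maximum possible entropy) could be implemented, for example, as a one-dimensional root search. The bisection below is an illustrative choice, not the application's stated algorithm, and `entropy_of_beta` is a hypothetical callback that evaluates the global histogram entropy for a candidate β:

```python
def solve_beta(entropy_of_beta, target, lo=0.1, hi=10.0, iters=40):
    """Bisection sketch for choosing beta so that the global curvature
    entropy matches a target. Assumes entropy_of_beta is monotone on
    [lo, hi]; the direction is detected from the endpoints."""
    increasing = entropy_of_beta(hi) > entropy_of_beta(lo)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if (entropy_of_beta(mid) < target) == increasing:
            lo = mid  # target lies above mid's value
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Toy monotone stand-in for the global-entropy curve: H(beta) = 1/beta.
beta = solve_beta(lambda b: 1.0 / b, target=0.5)  # about 2.0
```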
At the middle of the spectrum, globally salient regions are identified, but are still less salient than local elements, which are still detected (multi-scale computation with ℓ > 0 and an increasing radius). A global map will just roughly identify larger regions of saliency in the model. Figure 3 shows the results of varying this parameter between 0 and 1.

Adaptive Subsampling

An adaptive subsampling process is used to provide high performance on highly complex 3D models while handling features at different scales. By doing so, the overhead of the saliency map method 100 is reduced while preserving accuracy. The magnitude of the reduction depends on how many subsamples are used. For example, if the mesh contains, on average, 1000 vertices within a given geodesic radius area, and the process subsamples at a factor of 100 subsamples per area, then the improvement in speed will be approximately tenfold, allowing optimization runs to complete in minutes instead of days. First, the saliency measurement S(v) is inherently multi-scale. For larger areas, R(v, r) approximates the entropy and noise values, as it will average variability in a neighbourhood. The aim is not to highlight individual local features, but regions of high saliency.
These are the most computationally expensive operations, given that one needs to iterate over large neighbourhoods. However, R(v, r) can be determined with accuracy at lower sampling rates, as there is no concern for overlooking small scale details. These will be highlighted as salient when calculating R(v, r) with a smaller r. Such smaller scales require higher sampling rates to detect salient elements. Still, these operations become cheaper since they affect smaller areas. Considering this, the sampling rate can be adapted for faster processing while maintaining low approximation errors. Second, even though the method might be running each level on vertices chosen from a sub-sampled set, each individual generation of H(v, r) and η(v, r) iterates over a geodesic neighbourhood that is generated from the ground truth mesh, not a sub-sampled mesh. This guarantees interpolating between different ground truth saliency values, not simplified ones which would already introduce non-recoverable errors, e.g. pre-decimated meshes. Finally, the process is independent of tessellation density, meaning that the sampled points are accurate regardless of what part of the mesh they come from, thus making their interpolation reasonably accurate as well. Given these considerations, an adaptive subsampling process using a hyper parameter of "samples-per-neighbourhood" (n_s) is used. For neighbourhoods with a high radius, n_s represents a small sample of the points originally iterated over. As the radius gets smaller, n_s gets closer to the number of points in the geodesic neighbourhood discussed above. When n_s becomes close to verifying every one out of four vertices, choosing samples and interpolating becomes more expensive than the ground truth calculation. In these cases, additional optimizations are employed to reduce memory access by ordering vertices into blocks based on proximity and traversing them in parallel.
With these optimizations, performance over finer scale regions becomes similar to that over large areas using the subsampling optimization. An overview of the process is provided by the pseudocode above, and in the following further details are provided in relation to the subsampling process, interpolation method, and optimizations.

Poisson-disc Subsampling

The goal of the subsampling is to choose a (small) subset of the vertices to represent the mesh for the purposes of saliency calculation. The entropy and noise reduction components are generated using this subset to save execution time, and then a relatively cheap interpolation of the results is performed for the remaining vertices. The selected vertices need to represent the local entropy E of the mesh with similar quality to that of the full mesh, and the method must be fast (a slow subsampling technique would hinder its benefit). A Poisson-disc solution, as described in "A Comparison of Methods for Generating Poisson Disk Distributions", Ares Lagae and Philip Dutré, 2008, Computer Graphics Forum, Vol. 27, Wiley Online Library, 114–129, is employed that combines fast random selection with a near-uniform distribution in order to minimize interpolation error for a given number of samples. Vertices are randomly selected in parallel, and all vertices within the geodesic neighbourhood of each selected vertex are excluded from future selection. Sampling continues until all vertices are excluded. Fast random selection is facilitated by iterating through a randomly shuffled list of vertices. This is similar in principle to that described in "Efficient and Flexible Sampling with Blue Noise Properties of Triangular Meshes", M. Corsini, P. Cignoni, and R. Scopigno, 2012, IEEE Transactions on Visualization and Computer Graphics 18, but simplified and only considering vertices as samples.
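The shuffled-list selection with neighbourhood exclusion described above can be sketched as follows (a sequential simplification: the patented process selects vertices in parallel, and the geodesic query `neighbours_within_re` is an assumed helper):

```python
import random

def poisson_disc_subsample(num_vertices, neighbours_within_re, seed=0):
    """Poisson-disc vertex selection via a shuffled list with exclusion.

    neighbours_within_re(i) -> indices within the exclusion radius r_e of
    vertex i (assumed to come from a geodesic-distance query).
    """
    order = list(range(num_vertices))
    random.Random(seed).shuffle(order)      # fast random selection
    excluded = [False] * num_vertices
    samples = []
    for i in order:
        if excluded[i]:
            continue
        samples.append(i)                   # select this vertex...
        excluded[i] = True
        for j in neighbours_within_re(i):   # ...and exclude its geodesic
            excluded[j] = True              # neighbourhood from selection
    return samples                          # stops once all are excluded
```

Because every unexcluded vertex is eventually visited, the result covers the whole mesh, and no two samples can lie within each other's exclusion radius, giving the near-uniform distribution the interpolation step relies on.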
When applying Poisson-disc sampling to the application 480, the exclusion radius r_e is defined with respect to n_s (the hyper-parameter defining the number of samples per r-neighbourhood). The overall sample density is then controlled by r_e, as shown in Figure 6. This figure illustrates how different radii affect the adaptive subsampling process. Smaller radii lead to denser sampling, as shown in Figure 6(A) for 1 in 50 vertices and Figure 6(B) for 1 in 500 vertices, while larger radii provide sparse sets, as shown in Figure 6(C) for 1 in 5000 vertices. The figure depicts vertices selected by the process, along with half their exclusion radius (using the full radius would cover the mesh of the model entirely). E becomes locally smoother as r increases, which means that n_s effectively controls the quality of the resulting interpolation.

Interpolating Saliency Values

After generating saliency values for the sampled vertices, the saliency map is completed by propagating those values to unsampled vertices. For each unsampled vertex v_i, the method 100 of the application 480 collects nearby sampled vertices that were assigned saliency values. The interpolant is then determined as the inverse-distance-weighted mean of the sample values. The method searches for sampled vertices within 2r_e of the vertex v_i, and the weighting function is defined accordingly. This guarantees interpolation coverage (2r_e is greater than the largest distance between two nearby subsampled vertices) and also ensures the interpolation is continuous, in that values further from v_i are given zero weight.

Memory and Grouped Traversal Optimizations

The benefits of subsampling diminish as the area used for the saliency measure decreases, since the ratio of sampled to unsampled vertices increases. At some point, typically below one sample per five vertices, it is more efficient to avoid subsampling and instead calculate values for every vertex.
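The interpolation and weighting formulas were figures in the source and did not survive extraction; what remains is that the interpolant is an inverse-distance-weighted mean, the search extends to 2r_e, and the weight falls continuously to zero there. The following sketch satisfies those constraints with a hypothetical weight form, w(d) = (2r_e − d) / (d + ε) inside 2r_e and zero outside (one-dimensional positions stand in for geodesic distances):

```python
def interpolate_saliency(unsampled_pos, samples, r_e):
    """Inverse-distance-weighted mean of nearby sampled saliency values.

    samples: list of (position, saliency) pairs for sampled vertices.
    The weight form is an assumption; the source only constrains it to be
    inverse-distance-like, continuous, and zero beyond 2 * r_e.
    """
    eps = 1e-9          # avoids division by zero exactly at a sample
    num = den = 0.0
    for pos, s in samples:
        d = abs(unsampled_pos - pos)   # 1-D stand-in for geodesic distance
        if d >= 2.0 * r_e:
            continue                   # zero weight beyond the search radius
        w = (2.0 * r_e - d) / (d + eps)
        num += w * s
        den += w
    return num / den if den > 0.0 else 0.0
```

An unsampled vertex equidistant from two samples receives their average, while a query at a sample's own position recovers (approximately) that sample's value, since its weight dominates.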
To assist with this, the application 480 uses a flat data structure that packs all vertices and their lists of neighbours into a single contiguous region of the memory 404, and then consistently uses offsets into this region to identify vertices, significantly reducing indirection processing overhead when traversing the mesh. The vertices are stored such that neighbouring points are roughly grouped together to improve cache locality. A series of truncated breadth-first searches is used to gather groups that are then placed sequentially. A secondary inner search gathers inner groups of four direct neighbours to aid grouped traversal.

Grouped traversal begins by identifying the r-neighbourhood (i.e. the geodesic neighbourhood) of a given vertex in preparation for the saliency calculation. The r-neighbourhoods of directly adjacent vertices inherently have a large overlap. To optimize, the r-neighbourhoods of four neighbouring vertices are accessed together in a single traversal in order to share work between them. One, two, or four neighbours may be processed (or more, e.g. 8), but four is chosen as the default to provide performance benefits. The application 480 uses a modified first-in-first-out queue instead of the traditional priority queue to implement this, which is also faster for this method. As the operations performed during traversal and entropy determination for each of the four root vertices are mostly identical, computational overhead is further reduced by taking advantage of single instruction/multiple data (SIMD) operations. For example, x86-64 Streaming SIMD Extensions (SSE) compiler intrinsics associated with the CPU 408 can be used to process four root vertices.

MRSS

The application 480 that executes the method 100 as described above is executable code that implements a highly optimized method for generating a saliency map, and can be described as multi resolution subsampled saliency (MRSS).
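The flat layout and FIFO-queue traversal above can be sketched as follows (a simplification: offsets into one contiguous neighbour array, with a hop-limited breadth-first gather standing in for the geodesic r-neighbourhood; the real implementation works in native memory and traverses four roots at once):

```python
from collections import deque

def build_flat_adjacency(neighbour_lists):
    # Pack every vertex's neighbour list into one contiguous array, with an
    # offsets table so looking up neighbours is a simple slice -- no
    # per-vertex pointer chasing when traversing the mesh.
    offsets, flat = [0], []
    for nbrs in neighbour_lists:
        flat.extend(nbrs)
        offsets.append(len(flat))
    return flat, offsets

def neighbours(flat, offsets, v):
    return flat[offsets[v]:offsets[v + 1]]

def gather_neighbourhood(flat, offsets, root, max_hops):
    # Breadth-first gather using a plain FIFO queue (the application uses a
    # modified FIFO queue rather than a priority queue); hop count stands in
    # for geodesic distance in this sketch.
    seen, out = {root}, [root]
    queue = deque([(root, 0)])
    while queue:
        v, hops = queue.popleft()
        if hops == max_hops:
            continue
        for u in neighbours(flat, offsets, v):
            if u not in seen:
                seen.add(u)
                out.append(u)
                queue.append((u, hops + 1))
    return out
```

Because vertices are appended in discovery order, reordering the mesh by such traversals naturally places each vertex near its neighbours in memory, which is what improves cache locality.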
The saliency maps generated by MRSS are compatible with standard polygon reduction software. For example, the maps produced can be exported and used directly by the ZRemesher plugin for the ZBrush software produced by Pixologic Inc. MRSS 480 can be implemented using either a standalone executable or a command-line interface application. Once opened, the user is presented with a graphical interface that shows an empty 3D scene. The user can import a 3D model file exported from most software that manages 3D assets (for example, MRSS supports most file types, including the OBJ and PLY file formats, for 3D model files). Once imported, the user can then open a “Saliency” window, as shown in Figure 5, where the user can configure parameters and then instruct the tool to compile a saliency map. Depending on the model complexity, the computation time for this step can be near-instant (for a simple model) or up to a number of minutes (for complex models like 3D scans or artist-sculpted models). Once the saliency map has been built, the user can export the results to a file (again, such as OBJ or PLY format) and then import the result into a program of choice to perform polygon reduction (such as ZBrush). Alternatively, MRSS also executes mesh decimation, which is accessed via another menu in the interface. The mesh decimation result (again in PLY or OBJ file formats) can then be imported into a content creation application 470 (such as Maya, Blender, Unreal Engine, Unity) and used to produce content, such as animations using the 3D model.
As discussed above and shown in Figure 5, the user interface for the saliency map generation allows control of: (i) a slider to select the globality parameter (Local-Global) to configure the surface area used when calculating saliency (smaller values favour performance, larger values favour accurate results); (ii) a tick box to define whether noise is present in the model (such as with 3D scans), and when ticked a slider appears to configure the expected variance of the noise (S/N); and (iii) a slider (labelled S/N) to reduce sampling rates to favour performance (this can be important for very large models). Additional control can be provided for other parameters used in the saliency map method.

Parallel Implementation

As discussed above, MRSS 480 generates saliency maps by analysing a tropical angle of curvature change within differently sized neighbourhoods of vertices. When operating at a large scale, not every vertex in a mesh is processed. A representative subset of vertices is sampled to generate the saliency value for only those vertices, and those values are interpolated for vertices not in the subset. The amount of subsampling is proportionate to the current scale: larger operating scales use a higher subsampling rate, while smaller scales use less. This adaptive processing ensures good performance, and also accuracy in the results (small scales for smaller operating areas ensure accuracy). This subsampling process optimization significantly improves speed. However, as the operating scale reduces to a very small size (typically where the subsampling process would visit fewer than 5 vertices), it is no longer faster to use subsampling. At this point, MRSS 480 switches to instead determine saliency for every vertex. To enhance performance for this baseline process, the calculations of saliency for 1, 2, or 4 adjacent vertices (their neighbourhoods overlap) are performed together using SIMD operations.
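Each sampled vertex's saliency calculation is independent of the others, which is what makes the work distributable across processing cores. The implementation parallelises in native code (with OpenMP); as a purely illustrative stand-in, a thread pool dispatching independent per-sample calculations might look like this, where `saliency_of` is a hypothetical per-vertex callable:

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_saliency(sample_indices, saliency_of, workers=4):
    # Dispatch each independent saliency calculation to a worker, so that
    # each core runs one calculation at a time rather than everything
    # running on a single thread. Illustrative only: the patented
    # implementation uses OpenMP in native code.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(saliency_of, sample_indices))
    return dict(zip(sample_indices, results))
```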
The saliency determinations for each subsample or group of the two optimisations above can be run independently. Accordingly, MRSS runs these in parallel, for example using OpenMP, to generate the saliency values. By doing this, each processing core of the CPU 408 executes one calculation at a time (rather than running all of the calculations on a single thread). This significantly improves performance, and makes full use of the CPU 408. The flat data structure, discussed above, also greatly assists by storing the vertices in the order that they will be accessed and processed. This data structure improves performance by better enabling SIMD execution, thus making more extensive use of SSE compiler intrinsics, and reducing cache indirection.

MRSS 480 generates saliency maps for previously problematic cases such as sculpted or 3D scanned models that feature malformed geometry, such as flipped triangles. MRSS 480 is able to generate saliency maps for 10+ million-vertex meshes in significantly reduced time, and in some cases in less than a minute. Figure 7(A) shows an original model with a saliency map generated using MRSS 480. Red indicates highly salient features, white shows intermediate saliency areas, and blue depicts less salient zones. Figure 7(B) shows the original model after 95% decimation using the saliency map. Figures 7(C) and 7(D) show close-up views of the face and hair details of the model. The decimation result in Figure 7(D) features denser tessellation in highly salient areas, as tessellation density is prescribed by the saliency map, and enables the decimation process to use high compression while preserving geometric shape.

Many modifications will be apparent to those skilled in the art without departing from the scope of the present invention.