Title:
FINITE-ELEMENT ANALYSIS AUGMENTED REALITY SYSTEM AND METHOD
Document Type and Number:
WIPO Patent Application WO/2018/156087
Kind Code:
A1
Abstract:
A finite-element analysis augmented reality, FEA-AR, system and method, and computer-readable media. The FEA-AR system comprises a camera configured for capturing an image of a physical structure; a finite-element, FE, model unit configured for maintaining an FE model of the physical structure and for processing the FE model for stress analysis under one or more loads; an interface configured for displaying the captured image of the physical structure and for rendering the FE model overlaying the image of the physical structure and for rendering results of the stress analysis; an input device configured for user-input relating to a virtual structure to be added to the FE model; and wherein the interface is further configured for rendering a modified FE model overlaying the image of the physical structure based on the user-input relating to the virtual structure to be added to the FE model.

Inventors:
HUANG JIMING (SG)
ONG SOH KHIM (SG)
NEE YEH CHING (SG)
Application Number:
PCT/SG2018/050091
Publication Date:
August 30, 2018
Filing Date:
February 27, 2018
Assignee:
NAT UNIV SINGAPORE (SG)
International Classes:
G06F17/50; G06T15/00; G06T17/20; G06T19/00
Foreign References:
US20150262426A12015-09-17
US7634394B22009-12-15
US8599194B22013-12-03
US7356449B22008-04-08
CN104091033A2014-10-08
Other References:
RYKEN M. J. ET AL.: "Applying virtual reality techniques to the interactive stress analysis of a tractor lift arm", FINITE ELEMENTS IN ANALYSIS AND DESIGN, vol. 35, no. 2, 31 May 2000 (2000-05-31), pages 141 - 155, XP055541023, [retrieved on 20180406]
YEH T. P. ET AL.: "Applying Virtual Reality Techniques to Sensitivity-Based Structural Shape Design", JOURNAL OF MECHANICAL DESIGN, vol. 120, no. 4, 1 December 1998 (1998-12-01), pages 612 - 619, [retrieved on 20180406]
VALENTINI P. P. ET AL.: "Dynamic Splines for interactive simulation of elastic beams in Augmented Reality", IMPROVE'11, 17 June 2011 (2011-06-17), pages 89 - 96, XP055541031, [retrieved on 20180406]
HUANG J. M. ET AL.: "Real-time finite element structural analysis in augmented reality", ADVANCES IN ENGINEERING SOFTWARE, vol. 87, 18 May 2015 (2015-05-18), pages 43 - 56, XP055541033, [retrieved on 20180406]
Attorney, Agent or Firm:
VIERING, JENTSCHURA & PARTNER LLP (SG)
Claims:
CLAIMS

1. A finite-element analysis augmented reality, FEA-AR, system comprising:

a camera configured for capturing an image of a physical structure;

a finite-element, FE, model unit configured for maintaining an FE model of the physical structure and for processing the FE model for stress analysis under one or more loads;

an interface configured for displaying the captured image of the physical structure and for rendering the FE model overlaying the image of the physical structure and for rendering results of the stress analysis;

an input device configured for user-input relating to a virtual structure to be added to the FE model; and

wherein the interface is further configured for rendering a modified FE model overlaying the image of the physical structure based on the user-input relating to the virtual structure to be added to the FE model.

2. The FEA-AR system of claim 1, wherein the input device is further configured for user-input relating to a location and magnitude of one or more simulated loads, and the FE model unit is configured for stress analysis under the one or more simulated loads.

3. The FEA-AR system of claim 2, wherein the interface is configured for attaching a virtual pointer to an image of the input device captured by the camera for applying the one or more simulated loads based on a rendered location and orientation of the virtual pointer.

4. The FEA-AR system of any one of claims 1 to 3, further comprising one or more sensor elements configured to be coupled to the physical structure and for measuring one or more actual loads on the physical structure, and the FE model unit is configured for stress analysis under the one or more actual loads.

5. The FEA-AR system of any one of claims 1 to 4, wherein the input device is further configured for user-input relating to viewing a cross-section of the rendered results of the stress analysis of the FE model, and the interface is configured for rendering the cross-section of the rendered results of the stress analysis of the FE model.

6. The FEA-AR system of any one of claims 1 to 5, wherein the input device is further configured for user-input relating to creating a slice of the rendered results of the stress analysis of the FE model.

7. The FEA-AR system of claim 6, wherein the interface is configured for attaching the created slice to an image of the input device captured by the camera for manipulation of the created slice based on a location and orientation of the input device.

8. The FEA-AR system of any one of claims 5 to 7, wherein the interface is configured for attaching a virtual plane to an image of the input device captured by the camera for selecting the cross-section based on a rendered location and orientation of the virtual plane and/or for creating the slice based on the rendered location and orientation of the virtual plane.

9. The FEA-AR system of any one of claims 1 to 8, wherein the input device is further configured for user-input relating to viewing a region of the rendered results of the stress analysis of the FE model, and the interface is configured for rendering the region of the rendered results of the stress analysis of the FE model.

10. The FEA-AR system of claim 9, wherein the interface is configured for attaching a virtual volume to an image of the input device captured by the camera for selecting the region based on a rendered location and orientation of the virtual volume.

11. A finite-element analysis augmented reality, FEA-AR, method comprising the steps of:

capturing an image of a physical structure;

maintaining an FE model of the physical structure;

processing the FE model for stress analysis under one or more loads;

displaying the captured image of the physical structure;

rendering the FE model overlaying the image of the physical structure;

rendering results of the stress analysis;

receiving user-input relating to a virtual structure to be added to the FE model using an input device; and

rendering a modified FE model overlaying the image of the physical structure based on the user-input relating to the virtual structure to be added to the FE model.

12. The FEA-AR method of claim 11, further comprising receiving user-input relating to a location and magnitude of one or more simulated loads using the input device, and performing stress analysis under the one or more simulated loads.

13. The FEA-AR method of claim 12, comprising attaching a virtual pointer to a captured image of the input device for applying the one or more simulated loads based on a rendered location and orientation of the virtual pointer.

14. The FEA-AR method of any one of claims 11 to 13, further comprising coupling one or more sensor elements to the physical structure and measuring one or more actual loads on the physical structure using the one or more sensor elements, and performing stress analysis under the one or more actual loads.

15. The FEA-AR method of any one of claims 11 to 14, further comprising receiving user-input relating to viewing a cross-section of the rendered results of the stress analysis of the FE model using the input device, and rendering the cross-section of the rendered results of the stress analysis of the FE model.

16. The FEA-AR method of any one of claims 11 to 15, further comprising receiving user-input relating to creating a slice of the rendered results of the stress analysis of the FE model using the input device.

17. The FEA-AR method of claim 16, comprising attaching the created slice to a captured image of the input device for manipulation of the created slice based on a location and orientation of the input device.

18. The FEA-AR method of any one of claims 15 to 17, comprising attaching a virtual plane to a captured image of the input device for selecting the cross-section based on a rendered location and orientation of the virtual plane and/or for creating the slice based on the rendered location and orientation of the virtual plane.

19. The FEA-AR method of any one of claims 11 to 18, further comprising receiving user-input relating to viewing a region of the rendered results of the stress analysis of the FE model using the input device, and rendering the region of the rendered results of the stress analysis of the FE model.

20. The FEA-AR method of claim 19, comprising attaching a virtual volume to a captured image of the input device for selecting the region based on a rendered location and orientation of the virtual volume.

21. Computer-readable media having embodied therein data and/or instructions for instructing a computing device to implement the system as claimed in any one of claims 1 to 10 and/or to execute the method as claimed in any one of claims 11 to 20.

Description:
FINITE-ELEMENT ANALYSIS AUGMENTED REALITY SYSTEM AND METHOD

FIELD OF INVENTION

The present invention relates broadly to a finite-element analysis augmented reality system and method.

BACKGROUND

Any mention and/or discussion of prior art throughout the specification should not be considered, in any way, as an admission that this prior art is well known or forms part of common general knowledge in the field.

Interactive finite element analysis (FEA) in virtual reality (VR)-based environments has been considered helpful for structural investigation and design [1-6]. The ease of navigation in an immersive virtual environment facilitates the exploration of FEA results. Furthermore, the results can be updated in response to the modification of the model geometry and simulation variables. Such an interactive simulation environment enables the user to focus on the structural investigation without taking much effort to operate the simulation tools.

The response time of interactive VR simulation systems for FEA visualization is important since it largely affects the user experience and perception. The systems are expected to generate results of acceptable accuracy with minimum lag time. Liverani et al. [1] proposed a VR system for FEA of shell structures. Using a stylus and gloves as input devices, the user can create and adjust the mesh, and specify boundary conditions. Real-time results can be obtained by changing loads on a small-scale model. For complex models, classical solvers are not able to achieve real-time solutions. A number of studies used artificial neural networks (ANNs) or approximation methods to achieve real-time interaction for specific tasks. Hambli et al. [2] used an ANN to generate real-time deformation of a tennis ball and racket during impact. The user can play tennis in a VR environment, and the impact is felt via a haptic glove. Connell and Tullberg [3] presented a framework to simulate a bridge in VR. The user can move the loads acting on the bridge, and FEA results are updated immediately using an approximation module. Cheng and Tu [4] reported an ANN-based approach for real-time deformation of mechanical parts under forces. The user can make geometric adjustments by changing the feature parameters among the trained values. Although fast, these simulation approaches do not compute exact results. Moreover, ANNs require training and may generate unreliable results for untrained inputs.

Changes in the geometry of the models investigated are usually made during the design process. A common method is to modify a finite element (FE) model in a computer-aided design (CAD) system. However, direct mesh manipulation without returning to a CAD system is particularly valuable for interactive simulation, because the operations are more straightforward and efficient. Yeh and Vance [5] developed an interactive stress analysis method for shape design. By using non-uniform rational B-spline-based free-form deformation and direct manipulation techniques, designers can modify the shape of an FEA mesh directly. The resulting stresses are updated using linear Taylor series approximations based on a sensitivity analysis. Ryken and Vance [6] applied this method to the stress analysis of a tractor lift arm. Collision detection was performed for checking interference with the surrounding geometry. Rose et al. [7] developed an approach to edit the mesh by manipulating nodes directly. To avoid poor element quality after editing the mesh, relaxation and local restructuring procedures were performed. The VR-based environment was constructed using a haptic device and an autostereoscopic display. Graf and Stork [8] presented a simulation approach in VR. The user can drag geometric features directly to change their positions, and create cross-sections to access the interior FEA results. To obtain real-time results for moving loads, the inverse of the stiffness matrix was pre-computed and the results were calculated only for the visible elements.

Unlike VR, augmented reality (AR) incorporates real-world environments. Scientific visualization of measured or simulated datasets in the real scene is promising for various applications. Superimposing MRI data on a patient enables the surgeon to plan a surgery and provides navigation guidance during surgical procedures [9]. Visualizing the real-time data captured by sensors located on a bridge is helpful for monitoring the structural health of the bridge [10]. Superimposing simulated electromagnetic fields on the corresponding devices facilitates the teaching of electrodynamics [11]. Comparison of simulated results and actual measurements onsite becomes possible using AR, which is helpful for model validation and evaluation. Ham and Golparvar-Fard [12] developed a system which visualizes measured and simulated spatio-thermal data onsite. The deviations between the data were identified to help with the validation of the simulation models and analysis of the building energy performance. Mobile AR platforms promote the development of onsite and outdoor applications. However, mobile devices normally cannot provide sufficient computational power for numerical simulations, such as FEA; hence, distributed systems are required. Weidlich et al. [13] created a mobile AR system for visualizing FEA results, such as the deformation of a machine. The system has a client-server architecture with bi-directional communication. The server carries out the FEA computation, and the client performs result rendering and interactive functions. Heuveline et al. [14] presented a mobile system for outdoor visualization of urban simulation results. A computational fluid dynamics (CFD) simulation based on an airflow model was carried out using a server. AR visualization of the flow field was displayed on smartphones using a hybrid tracking technique.

These reported studies merely used AR for visualizing precomputed simulation results. Indeed, AR can be utilized to facilitate numerical simulations in both visualization and interaction. AR interfaces enhance data exploration and user collaboration for simulation [15,16]. In the collaborative design system ADRON [17], technical drawings are augmented with digital data, such as CAD models, FEA results and multimedia annotations. Each client in the workspace network can navigate the model and create annotations on the augmented drawings. The annotations are shared among all the users and finally lead to a design modification. Issartel et al. [18] presented a portable interface for the exploration of volumetric datasets. Using a tablet and a stylus, the user can explore datasets via natural operations, such as slicing, iso-surface picking, particle tracing, etc. Sensing and measurement play important roles in AR technologies and applications [19]. Several studies employed sensing and measurement techniques to acquire input parameters directly from the real world for interactive simulation. Valentini and Pezzuti [20] proposed a method for simulating elastic beams in AR. Elastic beams are modeled as dynamic splines. Using a tracked stylus, the user can control virtual beams to perform simulations. The method allows dynamic simulation of beams in real time, but has limitations for models with complex geometries. Moreover, the deformation of practical structures is usually small and cannot be measured using regular trackers. Malkawi and Srinivasan [21] developed an AR system to visualize computational fluid dynamics (CFD) results in a room. The air temperature and velocity in the room were monitored using wireless sensors, so as to update the boundary conditions for the simulation. A trained ANN was employed to generate approximate results in real time. With a data glove, users can interact with the CFD results using various hand gestures. Haouchine et al. [22] presented a method to visualize the deformation of human tissue during minimally invasive liver surgery. A real-time model is built based on the co-rotational FE method. To deform the model, external stretching forces are induced by tracking the surface motion of the organ using a stereo camera. The simulated deformations are superimposed on real human organs to assist surgery. This study estimates the deformation of soft objects with the purpose of achieving visually plausible results; the method is not suitable for engineering analysis. Bernasconi et al. [23] proposed an AR-assisted method for monitoring crack growth in bonded single-lap joints. A correlation between the crack tip position and the strain field was found using FEA. With the data collected from strain gauges placed on the joint, the changing crack position under fatigue loading was computed and visualized. In this study, FEA was performed beforehand to obtain the relationship between the simulation targets and the relevant variables, such that fast computation could be achieved for visualization. Several educational applications have been developed by combining real-time FEA simulation with AR interfaces. Fiorentino et al. [24] presented an interactive simulation approach for the teaching of structural mechanics. The approach uses a camera to detect the displacement constraints made by a user, so as to update the FEA results. They implemented the approach on a cantilever that is deformed manually. A refresh rate of 6.5 Hz was achieved for a mesh of 132 nodes. Matsutomo et al. [25] developed an AR application for learning electromagnetics. Relying on a 2D mesh adaptation method, the system allows the user to move mockup magnetic objects freely to investigate the resulting magnetic fields.

The literature reviewed above highlights the advantages of using AR-based over VR-based environments for FEA simulation. However, researchers have mostly applied AR technologies to facilitate the visualization and navigation of pre-computed FEA results, while existing AR-based interaction methods for FEA simulation and result visualization have limited applicability for engineering analyses due to limitations in the simulation variables that can be controlled. Embodiments of the present invention seek to address at least one of the above problems.

SUMMARY

In accordance with a first aspect of the present invention, there is provided a finite-element analysis augmented reality, FEA-AR, system comprising a camera configured for capturing an image of a physical structure; a finite-element, FE, model unit configured for maintaining an FE model of the physical structure and for processing the FE model for stress analysis under one or more loads; an interface configured for displaying the captured image of the physical structure and for rendering the FE model overlaying the image of the physical structure and for rendering results of the stress analysis; an input device configured for user-input relating to a virtual structure to be added to the FE model; and wherein the interface is further configured for rendering a modified FE model overlaying the image of the physical structure based on the user-input relating to the virtual structure to be added to the FE model.

In accordance with a second aspect of the present invention, there is provided a finite-element analysis augmented reality, FEA-AR, method comprising the steps of capturing an image of a physical structure; maintaining an FE model of the physical structure; processing the FE model for stress analysis under one or more loads; displaying the captured image of the physical structure; rendering the FE model overlaying the image of the physical structure; rendering results of the stress analysis; receiving user-input relating to a virtual structure to be added to the FE model using an input device; and rendering a modified FE model overlaying the image of the physical structure based on the user-input relating to the virtual structure to be added to the FE model.

In accordance with a third aspect of the present invention, there is provided computer-readable media having embodied therein data and/or instructions for instructing a computing device to implement the system as defined in the first aspect and/or to execute the method as defined in the second aspect.

BRIEF DESCRIPTION OF THE DRAWINGS

Embodiments of the invention will be better understood and readily apparent to one of ordinary skill in the art from the following written description, by way of example only, and in conjunction with the drawings, in which:

Fig. 1(a) illustrates adding a virtual beam to an existing structure 102 according to an example embodiment.

Fig. 1(b) illustrates visualizing the FEA data and associated results of the modified structure based on Fig. 1(a) under a load according to an example embodiment.

Fig. 2(a) illustrates "slicing" a structure using a virtual plane attached to a handheld 3D input device to view a cross-section of the FEA data, according to an example embodiment.

Fig. 2(b) illustrates selecting a region of a structure within a cube attached to a handheld 3D input device, used to view the FEA data within the cube only, according to an example embodiment.

Figs. 3(a) to (c) illustrate components of a system according to an example embodiment.

Figs. 4(a) and (b) illustrate details of components of a system according to an example embodiment.

Fig. 5 illustrates a system architecture according to an example embodiment.

Fig. 6 illustrates a system setup according to an example embodiment.

Figs. 7(a) to (f) illustrate 3D selection using a virtual cube according to an example embodiment.

Fig. 8 illustrates a data structure for load acquisition according to an example embodiment.

Fig. 9 illustrates a graphic representation of a point load according to an example embodiment.

Fig. 10 illustrates schematics for data visualization and exploration according to an example embodiment.

Figs. 11(a) and (b) illustrate a stepladder and its finite-element model, respectively, according to an example embodiment.

Figs. 12(a)-(d) illustrate application of virtual loads to a stepladder according to an example embodiment.

Figs. 13(a) to (d) illustrate real-time simulations for a stepladder according to an example embodiment.

Figs. 14(a) to (c) illustrate different visualization styles according to an example embodiment.

Figs. 15(a) to (c) illustrate direct manipulation of FEA results according to an example embodiment.

Figs. 16(a) to (f) illustrate slicing and clipping FEA results in hand-operated mode according to an example embodiment.

Figs. 17(a) and (b) illustrate slicing with unbounded and bounded cutters, respectively, according to an example embodiment.

Figs. 18(a) to (c) illustrate slicing and clipping FEA results in view-based mode according to an example embodiment.

Figs. 19(a) to (f) illustrate adding beams to stiffen a structure according to an example embodiment.

Figs. 20(a) to (d) illustrate adding a geometric model for design modification according to an example embodiment.

Figs. 21(a) to (d) illustrate interactive mesh refinement according to an example embodiment.

Fig. 22 shows a flowchart illustrating a finite-element analysis augmented reality, FEA-AR, method, according to an example embodiment.

DETAILED DESCRIPTION

The Finite-Element Analysis Augmented Reality (FEA-AR) system according to an example embodiment of the present invention enables FEA of structures to be performed through an AR interface. The AR scene is viewed through a computer display such as a monitor or head-mounted display. The AR scene includes a view of the physical environment containing the structure to be analyzed, with graphical visualizations of FEA data, such as structural stresses and strains, overlaying the physical structure. The FEA data is updated in real time in response to changes in loading conditions received through force sensors attached to the structure, and/or by simulated loads added by the user. An example embodiment of the present invention allows existing structures to be analyzed under actual loading conditions, and simulated loads to be added to investigate hypothetical scenarios. Hypothetical changes to the structure can advantageously be analyzed under actual or simulated loads according to a preferred embodiment of the present invention, for example design modifications to structures, such as bridges and supports. An example embodiment of the present invention uses efficient computation methods to carry out real-time finite element analysis, making it particularly suitable for engineering analysis. Furthermore, an example embodiment of the present invention provides user-friendly interactive tools for deeper analysis than the above works, by allowing for the application of loads, exploration of results using natural interaction methods, the addition and/or removal of geometric elements to existing structures, and slicing of structural members to examine internal stress distribution. An example embodiment of the present invention provides a more intuitive experience in carrying out FEA while also enabling the real-time simulation of more complex structures than the above works.
The commercial applications of an example embodiment of the present invention include finite-element analysis for engineering design in near real time and the training of students taking structural mechanics in a real environment. The FEA-AR system according to an example embodiment enables FEA of structures to be performed through an interactive Augmented Reality (AR) interface that displays graphical visualizations of FEA data, such as structural stresses and strains, overlaying the physical structure. The FEA data can either be based on loading conditions picked up by force sensors attached to the structure, or on simulated loads added by the user. A preferred embodiment of the present invention allows the system to be applied to the study of physical structures under actual as well as hypothetical loading conditions. In the context of engineering design, hypothetical changes made to existing structures can be analyzed under actual or simulated loads, for example design modifications to structures, such as bridges and supports. Fig. 1(a) illustrates adding a virtual beam 100 to an existing structure 102, and visualizing the FEA data, i.e. the mesh 104 and associated results of the modified structure under a load, as illustrated in Fig. 1(b). Interactions with the visual data, such as adding loads and isolating regions of interest, are performed using a handheld pointer to add and/or remove virtual structures, add loads to specific points on the structure, or define regions of interest. For example, "slicing" the structure 200 using a virtual plane attached to a handheld 3D input device 202 to view a cross-section of the FEA data can be performed in an example embodiment, as illustrated in Fig. 2(a). As another example, a region 204 of the structure 200 within a cube 206 attached to the handheld 3D input device 202 is used to view the FEA data within the cube 206 only, as illustrated in Fig. 2(b).
As the interactions can be performed directly on the physical structure according to an example embodiment, intuitive understanding of the data is enhanced; this also enhances the efficiency of the process of analyzing and designing structures.

The system according to an example embodiment comprises a hardware component and a software component. The hardware component includes a computer 300 with a PC camera 302 as illustrated in Fig. 3(a), a handheld 3D input device 304 as illustrated in Fig. 3(b) for user interaction, and a force sensor network 306 attached to the physical structure 308 to be analyzed using the system, as illustrated in Fig. 3(c). The software component is configured to perform the FEA calculations and the display of the AR scene. The 3D input device 304 includes a marker-cube 310 attached to a wireless mouse 312, as illustrated in Fig. 3(b). The marker-cube 310 enables the 3D pose and orientation of the 3D input device 304 to be tracked by the camera 302, thus allowing interactions to be carried out in 3D with respect to the structure 308. The force sensor network 306 in an example embodiment includes remote sensor nodes in the form of wireless Radio Frequency (RF) transmitters 400 attached to the force sensors 402, as illustrated in Fig. 4(a). The RF transmitters 400 establish a self-healing mesh network that enables a large number of sensor nodes to be added to the network, according to an example embodiment. A gateway node 404 is provided in an example embodiment, in the form of an RF transmitter 406 connected to the computer 300 via an interface 408, which enables the data transmitted by the remote sensor nodes to be received by the software component. In an example embodiment, the RF transmitters 400, 406 used are Synapse RF Engines.

Embodiments of the present invention described herein can advantageously provide:

• An integrated framework for performing FEA interactively in an AR environment.

• A 3D selection method that supports multiple-object selection in the FEA-AR environment for FEA and result visualization.

• A real-time interaction method for investigating the behavior of structures under different loading conditions. The user can apply virtual loads using a natural interface and measure loads using force sensors.

• An approach for intuitive data exploration through manipulating, slicing and clipping FEA results that are augmented on the physical structures.

• Interactive methods have been developed for specific model modifications in the FEA-AR environment, e.g., adding geometric elements and local mesh refinement.

System architecture according to an example embodiment

As is appreciated by a person skilled in the art, FEA simulation begins with a mesh model. The matrix equation is formed and solved after loads and constraints have been applied, and the results can then be visualized. Depending on the purpose, various FEA results can be obtained for simulation and visualization by changing the variables. In an AR environment according to the example embodiment, sensors can be employed to measure varying boundary conditions, e.g., displacements and loads. Engineers usually conduct FEA to investigate structures under different loading conditions. While real-time FEA methods are available to simulate structures under varying loads, as described in the Background section above, the existing FEA methods have limited capabilities.
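The basic pipeline noted above (mesh, assembly of the matrix equation, application of loads and constraints, solution) can be sketched with a minimal example. The 1D bar discretization, material values, and function names below are illustrative assumptions only, not the disclosed system:

```python
import numpy as np

# Minimal FEA pipeline sketch: assemble the global stiffness matrix for a
# uniform 1D bar, apply a displacement constraint and a tip load, and
# solve K u = f. E, A, L and the element count are assumed values.
def assemble_stiffness(n_elems, E=200e9, A=1e-4, L=1.0):
    """Assemble the global stiffness matrix for a bar of axial elements."""
    le = L / n_elems                      # element length
    k = E * A / le                        # axial stiffness of one element
    K = np.zeros((n_elems + 1, n_elems + 1))
    for e in range(n_elems):
        K[e:e + 2, e:e + 2] += k * np.array([[1.0, -1.0], [-1.0, 1.0]])
    return K

def solve_bar(n_elems=4, tip_force=1000.0):
    """Fix the left end (u[0] = 0), apply a tip load, solve the reduced system."""
    K = assemble_stiffness(n_elems)
    f = np.zeros(n_elems + 1)
    f[-1] = tip_force
    u = np.zeros(n_elems + 1)
    u[1:] = np.linalg.solve(K[1:, 1:], f[1:])  # eliminate the constrained DOF
    return u

u = solve_bar()
# The tip displacement agrees with the analytical value F*L/(E*A) = 5e-5 m.
```

In an interactive setting, only the load vector and solve steps repeat per frame; assembly is done once per model.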

With these considerations, a system architecture according to an example embodiment of the present invention is established, as shown in Fig. 5. The system 500 according to this example embodiment has seven modules, namely, a sensor module 502, a load acquisition module 504, an FE model module 506, a solution module 508, a post-processing module 510, a rendering module 512 and a user interaction module 514. The sensor module 502 acquires information from the real world using different sensors, e.g., using force sensors to measure loads and using trackers to obtain position and orientation data for AR rendering and user interaction. Depending on the practical situation, wireless sensor networks can be established to monitor spatially distributed loads in various embodiments. The load acquisition module 504 manages the load data acquired from sensor measurements or user input, and converts them into nodal forces for FEA computation. Using the nodal forces and the FE model maintained by the FE model module 506, the solution module 508 solves the system equations. Besides a classical pre-conditioned conjugate gradient (PCG) solver, the solution module 508 according to this embodiment incorporates a real-time solver to accelerate the simulation for varying loads. The real-time solver is built on the concept of pre-computing the inverse stiffness matrix [8,27], which is implemented in two phases, i.e., offline pre-computation and online solution. The offline pre-computation generates the inverse matrix and load vector so that the FEA results can be computed online at high speed. When the model is modified, the inverse matrix is updated.

The FEA results from the solution module 508, such as deformations and stresses, are visualized in the post-processing module 510. Data filters are established by implementing scientific visualization techniques, such as data slicing and clipping. By tracking the user's view, the rendering module 512 renders virtual objects in the AR environment, such as the FEA results, loads, slicing planes, added geometric elements, etc. FEA simulation and AR rendering are performed in different computation threads in this embodiment. During real-time simulation, a synchronization process is performed to preferably ensure that results are updated in every frame. The user interaction module 514 provides the methods to interact with the FE model module 506, which include load application, adding of geometrical elements, model modification, result exploration, etc.

Solution module according to an example embodiment

Linear elastic models, which are computationally viable and widely applicable for engineering structures, are built in this example embodiment to achieve real-time simulation for varying loads and added geometrical elements. Quasi-static simulations are performed, which is feasible in situations where the dynamic effects of varying loads can be neglected. An elastic FE model can be expressed as a sparse linear equation system in Eq. (1), where K is the stiffness matrix, and u and f are the vectors representing nodal displacements and forces, respectively.

Ku = f    (1)

The inverse stiffness matrix K⁻¹ is pre-computed offline in the example embodiment. With varying forces acquired in situ, the displacements are derived by performing a matrix-vector multiplication in Eq. (2).

u = K⁻¹f    (2)

The inverse stiffness matrix K⁻¹ is computed by assigning a unit load to each degree of freedom (DOF), and solving the linear systems using a PCG solver. Each column of the inverse matrix is the displacement solution for the unit load that is applied to the corresponding DOF. With nodal displacements, the nodal stresses can be derived using conventional methods [28], such as direct evaluation at the nodes, extrapolation from the Gauss points, etc.

In practice, the FE modeling and pre-computation can be performed using external FEA systems in various embodiments, and the computation of the inverse matrix can be programmed using the scripting tools provided. Rather than computing the entire inverse matrix, the solver preferably only needs to compute the columns for the nodes in the regions that may be subject to loads, e.g., certain faces of a structure, in the example embodiment, such that the pre-computation time is greatly reduced. The error tolerance of the PCG solver can be increased to reduce the pre-computation time at the expense of solution accuracy in various embodiments. In addition, the computation in Eq. (2) is preferably sped up in the example embodiment by skipping the zero entries in the load vector. When modifications have been made to the FE model, the inverse matrix is preferably updated accordingly. In this situation, the solution module exits the real-time solution mode, and uses the PCG solver to update the FEA results asynchronously in response to model modifications, according to the example embodiment. The inverse matrix will preferably be updated in a background process. The real-time solution mode can be reactivated after the update is completed.
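The two-phase scheme described above can be sketched as follows. This is a minimal illustration only: the toy 2x2 system, the `gauss_solve` stand-in for the PCG solver, and all function names are assumptions for demonstration, not part of the disclosure.

```python
# Sketch of the two-phase solver: offline pre-computation of inverse-matrix
# columns for the loadable DOFs, then a fast online solve that skips zero
# load entries. A dense toy system stands in for the sparse FE system.

def precompute_inverse_columns(K, loadable_dofs, solve):
    """Offline phase: one column of K^-1 per DOF that may carry load.

    `solve(K, f)` stands in for the PCG solver; each unit load yields
    the displacement response for the corresponding DOF.
    """
    n = len(K)
    columns = {}
    for dof in loadable_dofs:
        f = [0.0] * n
        f[dof] = 1.0
        columns[dof] = solve(K, f)
    return columns

def realtime_solve(columns, loads):
    """Online phase: u = K^-1 f, accumulating only non-zero load entries."""
    n = len(next(iter(columns.values())))
    u = [0.0] * n
    for dof, value in loads.items():        # zero entries are skipped entirely
        col = columns[dof]
        for i in range(n):
            u[i] += value * col[i]
    return u

# A trivial direct 2x2 solver stands in for PCG in this toy example.
def gauss_solve(K, f):
    (a, b), (c, d) = K
    det = a * d - b * c
    return [(d * f[0] - b * f[1]) / det, (a * f[1] - c * f[0]) / det]

K = [[4.0, 1.0], [1.0, 3.0]]
cols = precompute_inverse_columns(K, loadable_dofs=[0, 1], solve=gauss_solve)
u = realtime_solve(cols, loads={0: 2.0})     # a single non-zero load
```

The online phase is a matrix-vector product restricted to the loaded DOFs, which is why its cost scales with the number of non-zero load entries rather than the full model size.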

AR environment setup according to the example embodiment

Marker-based tracking is adopted in the example embodiment due to its robustness and ease of use. As shown in Fig. 6, a local or structure coordinate system (CS) 600 is defined for the physical structure. By tracking the pose of a camera 602, i.e. the camera CS 604, with respect to the structure CS 600, represented by the camera matrix T_S^C, the FEA meshes and results can be augmented to be aligned with the physical structure, here in the form of a stepladder 606. Interaction with virtual objects is achieved using a 3D input device 608 and a virtual panel 610 [29]. This virtual panel 610 is displayed on the screen 612 and consists of virtual buttons e.g. 614 and text displays (see numeral 2008 in Fig. 20(d), discussed below). The virtual buttons 614 are designed for triggering commands by the user using the 3D input device 608, i.e. moving onto the desired button and clicking the mouse, and the text displays provide information to the user. The 3D input device 608 is created in the example embodiment by combining a marker cube 616 with a wireless mouse 618. The marker cube 616 is used for tracking the pose of the device 608, i.e. the device CS 620, with respect to the camera CS 604, which is represented by the device matrix T_C^D. The buttons and wheel of the mouse 618 are used for triggering and parameter adjustment. The coordinate transformation, represented by the matrix T_S^D, between the input device CS and the CS of the physical structure can be computed using Eq. (3).

T_S^D = T_S^C T_C^D    (3)
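Eq. (3) is a composition of two tracked poses expressed as 4x4 homogeneous transformation matrices. The sketch below illustrates this with pure-translation poses; the matrix values are made up for demonstration and are not from the disclosure.

```python
# Minimal sketch of Eq. (3): composing the structure-to-camera pose
# (from marker tracking) with the camera-to-device pose (from the
# marker cube) to obtain the device pose in the structure CS.

def matmul4(A, B):
    """Multiply two 4x4 homogeneous transformation matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

# T_S^C: camera pose w.r.t. the structure CS (illustrative translation).
T_s_c = [[1.0, 0.0, 0.0, 0.5],
         [0.0, 1.0, 0.0, 0.0],
         [0.0, 0.0, 1.0, 1.2],
         [0.0, 0.0, 0.0, 1.0]]
# T_C^D: device pose w.r.t. the camera CS (illustrative translation).
T_c_d = [[1.0, 0.0, 0.0, 0.1],
         [0.0, 1.0, 0.0, 0.2],
         [0.0, 0.0, 1.0, 0.0],
         [0.0, 0.0, 0.0, 1.0]]

# Eq. (3): T_S^D = T_S^C * T_C^D
T_s_d = matmul4(T_s_c, T_c_d)
```

With identity rotations, the two translations simply add, so the device lands at (0.6, 0.2, 1.2) in the structure CS; in general the full rotation parts participate in the product.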

The user can use the input device 608 to perform various interactions, such as applying virtual loads at locations by pointing at these locations, exploring volumetric FEA results using a handheld slicing plane, selecting virtual models as additional geometrical objects and placing them at desired locations, etc. In the example embodiment, a 3D selection approach is developed based on the OpenGL selection mode. Using this approach, a viewing volume is used to select the objects that are drawn in this volume.

To enable 3D selection, the pose and size of the viewing volume are controlled by setting the relevant camera matrix. As shown in Fig. 7(a) and (b), a viewing volume, represented by a virtual cube 700a, 700b, is created on the input device that is manipulated by the user, such that the rendered objects e.g. 702b intersecting this cube e.g. 700b can be selected. The size of the cube can be adjusted by rolling the mouse wheel, resulting in an adjusted cube 700c with correspondingly adjusted selected drawn objects 702c, as illustrated in Fig. 7(c).

This selection method in the example embodiment advantageously enables the user to choose multiple objects efficiently in the AR environment. Element faces 702d can be selected easily, as shown in Fig. 7(d). However, all the faces that intersect with the virtual cube 700d will be selected, which may not be practical when only faces of specific orientations are to be selected, e.g., the horizontal faces. To address this, the selection method according to the example embodiment can optionally be refined by using one of the faces 704e of the virtual cube 700e as a reference face, as illustrated in Fig. 7(e). Element faces 702e are selected when the angles between these faces and the reference face 704e are smaller than a threshold. Therefore, the user can select specific faces by rotating the virtual cube 700e via the marker cube 705, or by changing to another reference face 704f specified via a virtual panel (not shown), as illustrated in Fig. 7(f).
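The angle-threshold refinement can be sketched as a simple normal-vector test. The face data, the 15-degree threshold and all names below are illustrative assumptions; the candidate faces are assumed to have already been found by the OpenGL selection pass.

```python
# Sketch of the refined selection: an element face is kept only if the
# angle between its normal and the reference-face normal of the virtual
# cube is below a threshold.
import math

def angle_between(n1, n2):
    """Angle in degrees between two (not necessarily unit) normals."""
    dot = sum(a * b for a, b in zip(n1, n2))
    norm = math.sqrt(sum(a * a for a in n1)) * math.sqrt(sum(b * b for b in n2))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

def select_faces(candidate_faces, reference_normal, threshold_deg=15.0):
    """Keep only faces nearly parallel to the cube's reference face."""
    return [f for f in candidate_faces
            if angle_between(f["normal"], reference_normal) < threshold_deg]

faces = [{"id": 1, "normal": (0.0, 0.0, 1.0)},    # horizontal face
         {"id": 2, "normal": (0.0, 1.0, 0.0)},    # vertical face
         {"id": 3, "normal": (0.05, 0.0, 1.0)}]   # nearly horizontal
picked = select_faces(faces, reference_normal=(0.0, 0.0, 1.0))
```

Rotating the virtual cube changes `reference_normal`, which is how the user steers the selection toward faces of a specific orientation.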

Real-time FEA under varying loading conditions according to the example embodiment

In the established AR environment in the example embodiment, the user can use the 3D input device to exert virtual loads, or use sensors to acquire loads automatically in the actual loading environment. As a basic loading approach, the application of point loads is elaborated in the example embodiment. The adaptation of this method for distributed loads in various embodiments is also demonstrated.

Load acquisition according to an example embodiment

All the load data, whether specified by the user or acquired from load sensors, is preferably converted to nodal forces for FEA computation in the example embodiment. Graphic representations are created to visualize the loads. A data structure is established in the example embodiment to manage the loads, as shown in Fig. 8. Each load sensor 800 or tracker 802 has a unique ID number for indexing, and a communication address for accessing the output. The outputs of the load sensors 800 are multiplied by a calibration matrix and interpreted into load values 806 by a load interpreter 804. The interpreter 804 is established for calibrating common force sensors and for adapting to situations where loads are identified through measuring local strains or deflections [33]. For a virtual load, the value, location and orientation are input using the 3D input device. However, to acquire practical load data, appropriate sensors are preferably used and installed according to the measurement requirements in the example embodiment, e.g., position trackers are used to obtain the locations of moving constant loads. For loads acting on fixed locations on physical structures, these load locations can be manually input by the user. For moving loads tracked using sensors, collision detection is used to obtain the different locations during real-time simulation in the example embodiment.
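The sensor-side bookkeeping described above (unique ID, communication address, calibration matrix) can be sketched as follows. The two-channel sensor, the calibration values and the address string are hypothetical stand-ins, not from the disclosure.

```python
# Sketch of the load interpreter: a raw multi-channel sensor output is
# multiplied by a calibration matrix to yield force components.

def interpret(raw, calibration):
    """Apply the calibration matrix (row-wise) to a raw output vector."""
    return [sum(c * r for c, r in zip(row, raw)) for row in calibration]

class LoadSensor:
    def __init__(self, sensor_id, address, calibration):
        self.sensor_id = sensor_id      # unique ID number for indexing
        self.address = address          # communication address for the output
        self.calibration = calibration  # per-sensor calibration matrix

    def read(self, raw):
        return interpret(raw, self.calibration)

sensor = LoadSensor(sensor_id=1, address="192.168.0.21",
                    calibration=[[2.0, 0.0], [0.0, 1.5]])
force = sensor.read(raw=[10.0, 4.0])
```

The same `interpret` step also covers the strain/deflection case mentioned above: the calibration matrix then encodes the mapping from measured strains to identified loads.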

With the location and orientation of a load, a load transformation matrix is computed. Using this matrix, the loads acquired from user input or sensors are transformed into the structure CS, and allocated to the nodes subject to these loads, i.e., converted to nodal forces. The loaded nodes are determined according to the load locations; the allocation is achieved by assigning weights to the loaded nodes, which are derived by computing the equivalent nodal forces with applied unit loads in the example embodiment. In each rendering loop during the real-time simulation, the load vector f is updated by accumulating the nodal forces, so as to compute the FEA results for every frame. Using the data structure and computation approach described above with reference to Fig. 8 for the example embodiment, the loads can be manipulated easily, such as adding, removing and adjusting.

Real-time simulation with virtual loads according to an example embodiment

Virtual loads can be attached to the input device to allow the user to manipulate the application and location of these virtual loads. A virtual point load is graphically represented by a solid cone 900 in the example embodiment, as shown in Fig. 9. The pointing direction of the cone 900 is the load direction, and a number indicates the load value, which can be adjusted with the mouse wheel of the 3D input device 901. To compute the load location on the surfaces of a structure, the polygonal surface mesh is extracted from the 3D solid mesh. When the tip of the cone 900 is close to a face of the model, a ray from the cone tip is cast towards the face. The vertex nearest the intersection will be the node subject to the load, and the intersection point will be taken as the load location in the example embodiment. Therefore, the user can intuitively control the orientation, location and value of a load through manipulating the 3D input device 901. A virtual load can accordingly be finalized at a specific location on the structure, after which the relevant load parameters, such as the location, orientation and value, do not change. If adjustments are required, the user can select the load and change the parameters. The load variations caused by the manipulation or adjustments are computed, and used to compute and update the variations of the FEA results.
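The geometric core of this step, casting a ray from the cone tip onto the face plane and picking the nearest face vertex as the loaded node, can be sketched with pure geometry. The face coordinates, ray origin and direction below are illustrative test data, not from the disclosure.

```python
# Sketch of locating a point load: intersect the ray from the cone tip
# with the plane of a nearby model face; the hit point is the load
# location and the nearest face vertex becomes the loaded node.
import math

def ray_plane(origin, direction, point_on_plane, normal):
    """Intersect a ray with a plane; returns the hit point or None."""
    denom = sum(d * n for d, n in zip(direction, normal))
    if abs(denom) < 1e-9:                      # ray parallel to the face
        return None
    t = sum((p - o) * n
            for o, p, n in zip(origin, point_on_plane, normal)) / denom
    if t < 0:                                  # face is behind the cone tip
        return None
    return tuple(o + t * d for o, d in zip(origin, direction))

def nearest_vertex(point, vertices):
    """Face vertex closest to the intersection point."""
    return min(vertices, key=lambda v: math.dist(v, point))

face_vertices = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0),
                 (1.0, 1.0, 0.0), (0.0, 1.0, 0.0)]
hit = ray_plane(origin=(0.3, 0.2, 1.0), direction=(0.0, 0.0, -1.0),
                point_on_plane=face_vertices[0], normal=(0.0, 0.0, 1.0))
node = nearest_vertex(hit, face_vertices)      # vertex receiving the load
```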

The loading approach according to the example embodiment can be adapted for applying distributed loads in various embodiments. A typical example is applying pressure to the faces of a physical structure. Using the refined 3D selection method described above with reference to Figs. 7(e) and (f), the user can manipulate a virtual cube to select a model face, or part of it, by selecting the element faces composing it. With the pressure and the areas of the selected faces, the force on each element face can be computed and allocated to the relevant nodes in various embodiments.
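The pressure-to-nodal-force conversion can be sketched as below. Equal sharing among the corner nodes is a deliberate simplification of the equivalent-nodal-force computation described earlier, and the face data are made up for illustration.

```python
# Sketch of converting a pressure on selected element faces into nodal
# forces: each face carries pressure * area, shared among its corner
# nodes (equal shares used here as a simplification).

def pressure_to_nodal_forces(selected_faces, pressure):
    """selected_faces: list of (node_ids, area) for each element face."""
    nodal = {}
    for node_ids, area in selected_faces:
        share = pressure * area / len(node_ids)
        for n in node_ids:
            nodal[n] = nodal.get(n, 0.0) + share
    return nodal

faces = [((1, 2, 3, 4), 0.04),     # two adjacent quad element faces
         ((3, 4, 5, 6), 0.04)]     # sharing nodes 3 and 4
forces = pressure_to_nodal_forces(faces, pressure=1000.0)
```

Nodes shared by two loaded faces accumulate contributions from both, which matches the accumulation of the load vector f in each rendering loop.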

Visualization and exploration of FEA results according to an example embodiment

To visualize FEA results, the Visualization Toolkit (VTK) [34] is integrated into the system in the example embodiment. VTK is an open-source library that provides various visualization algorithms and supports interactive applications. Visualization is achieved using a data pipeline from the source data to the images that are rendered. Fig. 10 illustrates the approach for data visualization and exploration in the FEA-AR environment. As described in detail in [34], in VTK, a vtkUnstructuredGrid dataset is created to store the FE mesh and solutions. This dataset can be modified and/or transformed with vtkFilters for specific requirements. vtkMappers are used to map the resulting datasets to visualization objects, i.e., vtkActors, and the visual properties of these vtkActors are adjustable, such as scale, color, transparency, etc. The vtkActors are rendered using vtkRenderers.

The viewing transformations for rendering vtkActors are controlled by the vtkCameras. By performing the viewing transformation with the matrices from the camera calibration and viewpoint tracking, vtkActors can be superimposed on the physical structure. Therefore, the user can observe the FEA results from different perspectives intuitively through moving the camera viewpoint, e.g., a user can wear a head-mounted display (HMD) and walk around the structure. However, this approach may encounter difficulties due to restrictions from the physical environment and the limitations of the tracking method. For instance, the user has to position the viewpoint at an awkward angle to access the bottom views of unmovable structures. The trackers would fail or have poor performance when the markers are outside the field of view or at inappropriate distances from the camera. To complement this approach, direct manipulation of data is enabled in the example embodiment. By configuring the vtkCamera with the matrices from tracking the 3D input device, vtkActors can be attached to the 3D input device, such that the user can manipulate the data manually. In the example embodiment, the user can adjust the scales of the vtkActors by rolling the mouse wheel.

Data slicing and clipping are fundamental scientific visualization techniques for exploring volumetric datasets. AR interfaces can contribute to intuitive and efficient exploration of these volumetric datasets. The filters vtkCutter and vtkClipDataSet are utilized to build the interfaces. The data slicing method allows the user to access the interior of a volumetric dataset by manipulating a slicing plane. The data clipping method allows the user to clip a dataset with a cube or plane, so as to isolate the portions of data that are of interest. The planes and cubes used for slicing and clipping are created using vtkPlane and vtkBox, respectively, and their sizes are adjustable. When a virtual plane or cube is attached to the device CS, the user can manipulate this plane or cube. When attached to the camera CS, the user can manipulate the plane or cube by moving his viewpoint. However, it may not be very intuitive to control a cube using one's viewpoint. Hence, only planes are manipulated in this manner in the example embodiment. Multiple data slices or clips can be created at different locations on the physical structure for observation and comparison. The user can manipulate a data slice or clip after attaching it to the 3D input device. In addition, the data slice or clip can be updated during real-time simulation in the example embodiment.
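Plane-based clipping can be sketched as a signed-distance test on cell nodes. This is a simplification of what vtkClipDataSet actually does (VTK splits cells that straddle the plane rather than dropping them), and the mesh data below are illustrative.

```python
# Sketch of plane-based clipping: keep the cells whose nodes all lie on
# the positive side of the clipping plane. (A simplification: VTK's
# clip filter splits straddling cells instead of discarding them.)

def signed_distance(point, plane_origin, plane_normal):
    """Signed distance of a point from the plane (positive along normal)."""
    return sum((p - o) * n
               for p, o, n in zip(point, plane_origin, plane_normal))

def clip_cells(cells, nodes, plane_origin, plane_normal):
    kept = []
    for cell in cells:
        if all(signed_distance(nodes[i], plane_origin, plane_normal) >= 0.0
               for i in cell):
            kept.append(cell)
    return kept

nodes = {1: (0.0, 0.0, 0.0), 2: (1.0, 0.0, 0.0), 3: (0.0, 1.0, 0.0),
         4: (0.0, 0.0, 1.0), 5: (1.0, 0.0, 1.0), 6: (0.0, 1.0, 1.0)}
cells = [(1, 2, 3), (4, 5, 6)]
kept = clip_cells(cells, nodes,
                  plane_origin=(0.0, 0.0, 0.5), plane_normal=(0.0, 0.0, 1.0))
```

Moving the plane (e.g. with the mouse wheel in the view-based mode) amounts to changing `plane_origin`, after which the kept-cell set is recomputed.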

Interactive model modification according to the example embodiment

In FEA, model modifications are usually performed for different purposes, e.g., refining the mesh locally to improve the accuracy of the results in critical regions, modifying the geometric model to investigate the design variables, etc. The modified model is usually re-analyzed with unchanged boundary conditions. The modification and re-analysis may be performed repeatedly until satisfactory results are obtained. Using the interfaces of standard FEA software, the user needs to conduct laborious data entry and repeat the operations of re-analysis. In contrast, customized AR interfaces can be established in the example embodiment to allow intuitive and efficient control of the variables for modification. The FEA results can be updated using an automated procedure for model modification and re-analysis. As a result, the modification is conducted interactively with direct update of the FEA results.

The method of adding geometric models according to the example embodiment advantageously allows the user to join new geometric models to the current FE model, for example, adding structural members, such as ribs and braces, to stiffen a structure. With an initial FE model, the user can manipulate the new geometric models and place them at specific locations. To join the new geometric models to the FE model, one method is to merge all the geometric elements and regenerate the global mesh, which may not be the most efficient. Therefore, the new geometric models are preferably trimmed and meshed individually according to the example embodiment. The trimming is achieved in the example embodiment by dividing the geometric model with the intersected faces of the FE model, and removing the portions that intrude into the FE model. The next step is to connect the dissimilar meshes, i.e., the newly-generated mesh and the original mesh. A smooth connection uses mesh adaptation to generate common nodes that are shared by the contacting meshes. To simplify the connection, the example embodiment adopts an existing method of connecting dissimilar meshes, i.e., applying linear constraints to the nodes located in the contacting areas. Finally, the FE model is re-analyzed to update the results. In this modification process according to the example embodiment, the user places the new geometric models and specifies the intersected faces of the FE model. The subsequent tasks can be performed automatically.

Local mesh refinement, as another example of interactive model modification according to an example embodiment, is usually performed semi-automatically with standard FEA software. The user selects the nodes or elements in the regions to be refined, specifies the level of refinement and activates the automatic mesh refinement algorithms. After refining the mesh, the model is re-analyzed. In the FEA-AR environment, the ease of data exploration facilitates the determination of the regions to be refined. Taking advantage of the 3D selection method described above with reference to Fig. 7, the user can select elements intuitively by moving the virtual cube to enclose the regions to be refined in the example embodiment. After the user inputs the level of refinement, the system can perform the computation with an automated procedure for mesh refinement and re-analysis. The refined model and results are finally rendered to the user. It should be noted that the mesh topology is changed after refinement. The loads and constraints which are affected by the changes are preferably reapplied. After model modifications, the inverse stiffness matrix can be updated automatically if real-time simulation is required. Automatic mesh generation or refinement is involved in the modification processes; the feasibility depends on the element type used and the meshing tools available.

System implementation according to an example embodiment

A prototype system according to an example embodiment has been developed using the C++ language and runs on a Microsoft Windows operating system. A PC that has a 3.2 GHz Intel processor and 8 GB RAM is utilized. Marker-based tracking is implemented using the ARToolKit library, and a webcam with a capture rate of 30 fps is used. A virtual panel that has a customized menu is created. ANSYS software is employed to support certain FEA tasks, such as mesh generation, solution, model modification, etc. The PCG solver provided by ANSYS is adopted with an error tolerance of 1.0E-5. The communication between ANSYS and the AR-based system is achieved using ANSYS Parametric Design Language (APDL) programs. The AR-based system generates APDL codes to control the specific FEA tasks. With the APDL codes, ANSYS performs the FEA tasks and outputs the data. The data files generated by ANSYS are read by the AR-based system. The prototype system is applied to the structural analysis of an off-the-shelf stepladder 1100 as shown in Fig. 11(a). The stepladder 1100 is selected because it is considered as having moderate failure risk in usage, and the model is typical and not difficult to understand. The prototype system is focused on demonstrating the proposed interaction methods according to example embodiments of the present invention, rather than conducting a detailed analysis of the structure. Linear hexahedral and truss elements are selected to model 1101 the wooden components and metal linkages, respectively, as shown in Fig. 11(b). Linear constraints are used to connect the truss elements to the hexahedral elements. A few non-essential features are neglected, and the materials are assumed to be elastic isotropic. The faces of the four legs e.g. 1102 that are in contact with the ground are constrained. The mesh data is extracted from ANSYS, and the inverse stiffness matrix is computed using an APDL program.
To reduce the pre-computation time, the inverse matrix is computed only for the nodes on the faces 1104a-c that can be subject to loads, as illustrated in Fig. 11(b). All these data are stored in text files and read by the system. Fig. 12 shows a scenario in which virtual loads are applied to the stepladder 1201. When the user moves a load to a specific location using the 3D input device 1200, the resulting von Mises stresses and deformations can be visualized on the stepladder 1201, i.e. by the gray scale coding and deformation in the drawn stepladder mesh 1202 overlaying the real physical structure 1201 image, as illustrated in Fig. 12(a). The load position and value are displayed on the real physical structure 1201 instead of the deformed mesh 1202, such that the user can locate and move the loads easily by referencing the real structure 1201. After finalizing the load at a location, more loads can be applied in the same way, as illustrated in Fig. 12(b). The user can select a load, adjust the value and orientation as illustrated in Fig. 12(c), or delete the load. Pressure can be applied to the surfaces of the model by utilizing the 3D selection method described above with reference to Fig. 7. Fig. 12(d) shows a real-time simulation in which pressure is exerted on the selected areas. The selected element faces are marked with crosses to indicate the load locations. In addition or alternatively to the application of virtual loads, the system allows load acquisition from the actual loading environment in an example embodiment. As shown in Fig. 13(a), four wireless force sensors 1301-1304 are attached on the stepladder 1300 to measure the loading caused by users stepping on the ladder. Fig. 13(b)-(d) shows the FEA results when a user steps on the ladder 1300. To measure the loads at other locations, the locations of the sensors can be changed or more sensors can be added in various embodiments.

The real-time performance depends largely on the solution time, i.e. the time for computing K⁻¹f, as will be appreciated by a person skilled in the art. In one embodiment, the FE model has 944 nodes. The pre-computation takes around 88 s. The AR rendering has a frame rate of 28 fps when virtual point loads are applied. When virtual pressure is applied to a single element face, the frame rate is around 26 fps. This frame rate decreases when more element faces are loaded, because more non-zero entries are involved in the load vector. A frame rate of 23 fps can be achieved for the simulation using force sensors, which is lower than the simulation using virtual point loads. This is mainly because more dynamically changing loads are involved in the simulation using force sensors, i.e., there are four loads that are updated with sensor data in the simulation using force sensors in one embodiment, while only the virtual load under manipulation is changing in the simulation using virtual point loads. Experiments show that the frame rate decreases to 12 fps when there are 10 dynamically changing virtual point loads in one embodiment. The frame rate also decreases when the mesh becomes finer [26]. By integrating VTK with the FEA-AR system, FEA results can be superimposed on the real scenes. A few visualization styles have been examined in different embodiments. As shown in Fig. 14, the nodal displacements can be exaggerated to visualize the deformation, but this leads to the misalignment of the stress distributions on the real structure. Specifically, directly overlaying the deformed model 1400 on the real structure 1402 can provide intact views of the results, as illustrated in Fig. 14(a), but may cause misinterpretation of the deformation when occlusion is not taken into consideration. However, when occlusion is taken into account, part of the results becomes invisible, as illustrated in Fig. 14(b).
Semi-transparent rendered objects allow the user to see both the FEA results and the structure, as illustrated in Fig. 14(c), but the colors of different depths at a region are mixed. Each visualization style has its pros and cons. Depending on the requirements, the user can switch among the different styles and adjust the relevant parameters, such as the scale factor of exaggeration and the opacity of the graphic objects, according to an example embodiment.

The user can also walk around the structure 1500 to observe the FEA results, i.e. the drawn models 1502a-c, from different perspectives. The user can move the 3D input device 1504 to any position in the 3D space, and attach the model to the input device by clicking a mouse button, in order to achieve translation (as illustrated in Fig. 15(a)), rotation and/or zooming of the model (as illustrated in Fig. 15(b)). When the model 1502a-c is manipulated, the user preferably keeps the marker cube in the field of view of the camera for tracking. Some regions of the model which are at a distance from the marker cube may not be observable. To address this issue, the user can place the model at a location for further observation, as illustrated in Fig. 15(c), or release it back to overlay the real structure.

Two operating modes have been implemented in an example embodiment, namely, a hand-operated mode and a view-based mode, for data slicing and clipping. Fig. 16 illustrates the hand-operated mode. The FEA results, i.e. rendered models e.g. 1600, can be examined using a handheld cutter plane 1602 or clipped using a virtual cube 1604, as illustrated in Figs. 16(a) and (d), respectively. The user can create data slices or clips at different locations, as illustrated in Figs. 16(b) and (e), respectively, and manipulate each slice 1606 or clip 1608 for observation, as illustrated in Figs. 16(c) and (f), respectively. The slicing plane e.g. 1602 or clipping cube e.g. 1604 is displayed on the structure 1610 to indicate the location where the slice or clip is selected from. An unbounded slicing plane e.g. 1602 allows users to slice a model e.g. 1600 that is far away from the user, but may generate redundant slices when there are multiple intersection areas. A bounded plane of adjustable size is additionally or alternatively implemented in an example embodiment to overcome this problem, with the model preferably near the user. A comparison of bounded and unbounded cutters is shown in Fig. 17.

In the view-based mode, an unbounded slicing plane or clipping plane is placed in parallel with the view plane at an adjustable distance. The user can manipulate the slicing or clipping plane by moving his viewpoint, or by rolling the mouse wheel to adjust the position of the slicing or clipping plane relative to the view plane to explore the data (Fig. 18(a) and (b)). By clicking a mouse button, the slicing or clipping plane will be fixed on the structure to allow data observation from different perspectives (Fig. 18(c)). Specifically, Fig. 18(a) shows the data in a sliced plane 1800a, Fig. 18(b) shows the data exploration from the same view as in Fig. 18(a) but with the top part of the rendered data clipped and moved out slightly to expose the sliced plane 1800b, and Fig. 18(c) shows that the sliced plane 1800c is fixed at a position and the user can view it from another angle. This operating mode is suitable for HMDs and handheld displays in example embodiments, which allow easy manipulation of the user's viewpoint. Moreover, the mouse wheel and button can be integrated with the display devices in such embodiments, such that the user does not need to hold the 3D input device for data exploration.

An interface has been developed to implement the method of adding geometric models to the FE models according to preferred example embodiments, and an example of adding beams to stiffen structures will be described for one preferred embodiment. The initial model 1900 and associated FE results, i.e. the colored result model, are shown in Fig. 19(a). The user can switch to the mesh display or line-model 1901, i.e., rendering the boundary elements but without the results displayed, and select a cross-section using the 3D input device 1902 to create a beam 1904 of an adjustable length. The beam 1904 can be manipulated and placed at a specific location, as illustrated in Fig. 19(b). Next, the user can specify the faces of the stepladder where the beam intersects. An intersected face is specified by selecting an arbitrary cell 1908 on this face, as illustrated in Fig. 19(c). More beams can be added by repeating these processes. After clicking the "CONFIRM" button 1910, all the data is transferred to ANSYS via APDL codes in an example embodiment. The relevant tasks are performed automatically using ANSYS, which include creating beam models, coordinate transformations, trimming beams using the intersected faces, meshing the beams, connecting the meshes to the original mesh, and computing the FEA results with unchanged boundary conditions. The mesh connection is achieved using the CEINTF command in ANSYS. After these tasks have been completed, the new mesh 1912 and associated results are imported to the FEA-AR environment and rendered, as illustrated in Fig. 19(d). The inverse stiffness matrix is re-calculated in a background process, which can take several minutes. Real-time simulations are allowed after the computation is completed, as illustrated in Fig. 19(e). 
The modified stepladder model is advantageously stiffened after adding the beams 1913, 1915, as can be observed from the reduction in the deformation and stresses of the step board (1914a and 1914d in the initial and updated models, respectively), as illustrated in Figs. 19(a) and (d). However, the method of connecting dissimilar meshes leads to discontinuous stress fields in the connecting areas (Fig. 19(f)). It is understood from published literature that whenever the cross-section of a structural member changes abruptly, a structural discontinuity arises. Discontinuous stress fields can be reduced by increasing the mesh density around the region in different embodiments, but this could lead to longer computation times.

To address this in another preferred embodiment, after placement of the model 2000a of the virtual structure to be added relative to the original model 2002, as shown in Fig. 20(a), the model 2000a is trimmed using the intersected face of the model, resulting in model 2000b as shown in Fig. 20(b), and then the model 2000b is meshed individually, resulting in the meshed model 2000c as shown in Fig. 20(c). The next step is to connect the newly-generated mesh 2000c and the original mesh 2002. In one such preferred embodiment, the connection is simplified by applying linear constraints. For example, the connection at node E (Fig. 20(d)) is achieved by imposing constraints on node E and the four nodes A-D around it. After connecting the meshes to form a continuous mesh 2004 (as opposed to the dissimilar meshes in the embodiment described above with reference to Fig. 19), FEA solutions are computed with the new, modified FE model and unchanged boundary conditions by using the PCG solver. The mesh and solutions are subsequently updated in the AR environment.

The system response time for model modification in a preferred embodiment has been tested for different mesh resolutions, as shown in Table 1. In the case of adding beams, meshes of different resolutions are created for both the stepladder and the added beams.

[Table 1: System response time for model modification at different mesh resolutions; the tabulated values are not recoverable from the source text.]

Although the treatment for beams has been described above in relation to one preferred embodiment, the method is adaptable to other types of structures depending on the applications in different embodiments. As will be appreciated by a person skilled in the art, other geometric models can be built e.g. online through parametric modeling for use in embodiments of the present invention. Geometric models can also be built offline using CAD, in particular for more complex geometric models, and can be imported to the system according to various embodiments, and meshed automatically using e.g. tetrahedral elements.

Considering the meshing tools available, the method for mesh refinement is implemented in an example embodiment on a tetrahedral mesh of 1022 nodes. Starting from an initial model 2100 as shown in Fig. 21(a), the user determines the region to refine, and selects the elements 2102 in the region using the 3D input device 2104, as illustrated in Fig. 21(b). Next, the level of refinement is set by rolling the mouse wheel on the 3D input device, as illustrated in Fig. 21(c). A higher number (at numeral 2106) results in a denser mesh. After confirmation through a mouse click, the mesh refinement is carried out automatically, followed by re-analysis of the refined model. Since the mesh has been modified, the equivalent nodal forces of the point loads are re-calculated for the refined mesh. A text message 2108 is displayed to inform the user of the updated status. Finally, the updated FE model 2110 is rendered, as illustrated in Fig. 21(d), showing more detailed results in the refined region 2112. Likewise, the inverse stiffness matrix can be updated in a background process for real-time simulation.
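The re-calculation of equivalent nodal forces for a point load can be sketched with linear shape functions. The following example assumes the load acts on a 3-node triangular face (after refinement, the enclosing face changes, so the weights must be recomputed); it is illustrative, not the disclosed implementation:

```python
import numpy as np

def barycentric(p, tri):
    """Barycentric coordinates of point p (2-vector) within the
    triangle tri (3x2 array of vertex coordinates)."""
    T = np.column_stack((tri[1] - tri[0], tri[2] - tri[0]))
    l1, l2 = np.linalg.solve(T, p - tri[0])
    return np.array([1.0 - l1 - l2, l1, l2])

def equivalent_nodal_forces(p, tri, load):
    """Consistent nodal forces for a point load applied at p on a linear
    triangular face: f_i = N_i(p) * load.  The shape-function values at
    the load point distribute the load to the three face nodes, and the
    three contributions sum back to the applied load."""
    return np.outer(barycentric(p, tri), load)
```

After refinement, the same routine is simply re-run with the new enclosing face, which is why the nodal forces can be updated automatically once the refined mesh is available.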

In the case of mesh refinement, different refinement levels are applied to the same set of elements, resulting in different node counts. The results for an example embodiment indicate that a finer mesh leads to a longer response time, and reveal that the response time is significantly longer than the time taken by ANSYS to modify and analyze the model. The longer response time is mainly due to the overheads incurred by using external FEA software according to an example embodiment, such as loading and saving the ANSYS database, transferring FEA data, etc.

The advantages of the interactive AR simulation environment according to example embodiments can include that, unlike VR-based systems, the AR-based system can render FEA models and results in the exact physical context, thereby benefitting the interpretation and validation of the FEA data. For example, users can examine the actual loading conditions and locate the regions of large deformation or stresses directly on the real structure. The actual deformation and stresses can be acquired in situ through measurement or observation to validate the FEA model. In a VR environment, user manipulation is implicitly mapped to the virtual space for interaction with virtual objects. By contrast, users of example embodiments of the present invention are able to manipulate virtual objects directly using the tangible AR interfaces, such as moving virtual loads, slicing planes, and/or advantageously adding virtual structures such as beams, etc., and the virtual space is spatially aligned with the real world. The AR interfaces according to example embodiments are particularly valuable for users without formal FEA training. While the stability and precision of user manipulation may be limited by hand motions and tracking performance, this can be improved by adding constraints to the manipulation and using more accurate trackers in different embodiments.

With the use of sensing technologies, FEA simulators for use in example embodiments can acquire various parameters directly from the physical world, such as geometry, loads and boundary conditions, such that simulation can be performed in situ. Preferably, example embodiments can perform simulation under varying loads. Different approaches can be developed depending on the purpose of the simulation and the measurement requirements in different embodiments. For example, the geometry of structures may be captured using 3D scanning in an example embodiment and used to construct mesh models in situ. Example embodiments of the present invention can have many applications, including educational and commercial applications, and can be utilized to enhance practical analysis tasks, e.g., evaluation of structures in actual operating environments, investigation of structural failure, determining stiffening strategies onsite, etc.

An AR-based system according to an embodiment of the present invention integrates sensor measurement, FEA simulation, scientific visualization techniques, and/or interfaces to enhance the visualization of, and interaction with, structural analysis. The investigation of structures can be facilitated by combining real-time FEA simulation and automatic load acquisition according to an example embodiment. The inverse stiffness matrix can be stored for reuse according to an example embodiment, and can be updated when the FE model is modified. Exploration of FEA results is enhanced according to an example embodiment by enabling natural interfaces for manipulating, slicing and clipping the data. With simplified and automated interfaces, the user can modify the model and perform re-analysis in an efficient and intuitive manner. With mesh modification through the addition and/or removal of virtual structures, e.g. for structural enhancement, according to a preferred embodiment, and optionally mesh refinement, more interactive methods can be provided, such as material reduction while sustaining strength, and the system response time can be largely reduced by integrating the FEA tools fully into the system according to an example embodiment. The system can be implemented on mobile AR platforms in example embodiments for outdoor and onsite applications.
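The reuse of the stored inverse stiffness matrix for real-time simulation under varying loads can be sketched as follows. This example substitutes a Cholesky factorization for the explicit inverse mentioned in the text (a common equivalent that avoids forming K^-1 while giving the same solutions), and all names are illustrative:

```python
import numpy as np

class RealtimeSolver:
    """Sketch of reusing a factorized stiffness matrix: factor the
    (symmetric positive-definite) stiffness matrix K once, then solve
    K u = f cheaply for each new load vector f, e.g. while the user
    drags a virtual load in the AR interface.  When the FE model is
    modified, the factorization is simply rebuilt in the background."""

    def __init__(self, K):
        # One-off expensive step: K = L @ L.T with L lower-triangular.
        self._L = np.linalg.cholesky(np.asarray(K, dtype=float))

    def displacements(self, f):
        # Two solves with the stored factor per load case; no repeated
        # factorization is needed as long as K is unchanged.
        y = np.linalg.solve(self._L, f)
        return np.linalg.solve(self._L.T, y)
```

A usage pattern would be `solver = RealtimeSolver(K)` once after (re-)meshing, then `u = solver.displacements(f)` for every updated load vector, mirroring the stored-and-reused inverse described above.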

In one embodiment a finite-element analysis augmented reality, FEA-AR, system comprises a camera configured for capturing an image of a physical structure; a finite-element, FE, model unit configured for maintaining an FE model of the physical structure and for processing the FE model for stress analysis under one or more loads; an interface configured for displaying the captured image of the physical structure and for rendering the FE model overlaying the image of the physical structure and for rendering results of the stress analysis; an input device configured for user-input relating to a virtual structure to be added to the FE model; and wherein the interface is further configured for rendering a modified FE model overlaying the image of the physical structure based on the user-input relating to the virtual structure to be added to the FE model.

The input device may be further configured for user-input relating to a location and magnitude of one or more simulated loads, and the FE model unit may be configured for stress analysis under the one or more simulated loads. The interface may be configured for attaching a virtual pointer to an image of the input device captured by the camera for applying the one or more simulated loads based on a rendered location and orientation of the virtual pointer. The FEA-AR system may further comprise one or more sensor elements configured to be coupled to the physical structure and for measuring one or more actual loads on the physical structure, and the FE model unit may be configured for stress analysis under the one or more actual loads.

The input device may be further configured for user-input relating to viewing a cross-section of the rendered results of the stress analysis of the FE model, and the interface may be configured for rendering the cross-section of the rendered results of the stress analysis of the FE model.

The input device may be further configured for user-input relating to creating a slice of the rendered results of the stress analysis of the FE model. The interface may be configured for attaching the created slice to an image of the input device captured by the camera for manipulation of the created slice based on a location and orientation of the input device.

The interface may be configured for attaching a virtual plane to an image of the input device captured by the camera for selecting the cross-section based on a rendered location and orientation of the virtual plane and/or for creating the slice based on the rendered location and orientation of the virtual plane.

The input device may be further configured for user-input relating to viewing a region of the rendered results of the stress analysis of the FE model, and the interface may be configured for rendering the region of the rendered results of the stress analysis of the FE model. The interface may be configured for attaching a virtual volume to an image of the input device captured by the camera for selecting the region based on a rendered location and orientation of the virtual volume.

Fig. 22 shows a flowchart 2200 illustrating a finite-element analysis augmented reality, FEA-AR, method according to an example embodiment. At step 2202, an image of a physical structure is captured. At step 2204, an FE model of the physical structure is maintained. At step 2206, the FE model is processed for stress analysis under one or more loads. At step 2208, the captured image of the physical structure is displayed. At step 2210, the FE model is rendered overlaying the image of the physical structure. At step 2212, results of the stress analysis are rendered. At step 2214, user-input relating to a virtual structure to be added to the FE model using an input device is received. At step 2216, a modified FE model overlaying the image of the physical structure is rendered based on the user-input relating to the virtual structure to be added to the FE model.
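The steps 2202-2216 above can be sketched as a single orchestration loop. Every class and method name below is a hypothetical stand-in used for illustration, not an API from the disclosure:

```python
class FeaArPipeline:
    """Hypothetical orchestration of the method of Fig. 22.  The
    collaborators (camera, FE model unit, renderer, input device) are
    stand-in objects whose interfaces are assumed for this sketch."""

    def __init__(self, camera, fe_unit, renderer, input_device):
        self.camera, self.fe_unit = camera, fe_unit
        self.renderer, self.input_device = renderer, input_device

    def step(self):
        image = self.camera.capture()             # step 2202: capture image
        model = self.fe_unit.current_model()      # step 2204: maintain FE model
        results = self.fe_unit.analyze(model)     # step 2206: stress analysis
        self.renderer.show(image)                 # step 2208: display image
        self.renderer.overlay(model, image)       # step 2210: render FE model
        self.renderer.overlay(results, image)     # step 2212: render results
        edit = self.input_device.poll()           # step 2214: virtual-structure input
        if edit is not None:                      # step 2216: render modified model
            model = self.fe_unit.add_structure(model, edit)
            self.renderer.overlay(model, image)
        return model
```

In an interactive system, `step` would be invoked once per frame, with the expensive re-analysis of a modified model deferred to a background process as described earlier.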

The FEA-AR method may further comprise receiving user-input relating to a location and magnitude of one or more simulated loads using the input device, and performing stress analysis under the one or more simulated loads. The FEA-AR method may further comprise attaching a virtual pointer to a captured image of the input device for applying the one or more simulated loads based on a rendered location and orientation of the virtual pointer. The FEA-AR method may further comprise coupling one or more sensor elements to the physical structure and measuring one or more actual loads on the physical structure using the one or more sensor elements, and performing stress analysis under the one or more actual loads.

The FEA-AR method may further comprise receiving user-input relating to viewing a cross-section of the rendered results of the stress analysis of the FE model using the input device, and rendering the cross-section of the rendered results of the stress analysis of the FE model.

The FEA-AR method may further comprise receiving user-input relating to creating a slice of the rendered results of the stress analysis of the FE model using the input device. The FEA-AR method may comprise attaching the created slice to a captured image of the input device for manipulation of the created slice based on a location and orientation of the input device.

The FEA-AR method may comprise attaching a virtual plane to a captured image of the input device for selecting the cross-section based on a rendered location and orientation of the virtual plane and/or for creating the slice based on the rendered location and orientation of the virtual plane.

The FEA-AR method may further comprise receiving user-input relating to viewing a region of the rendered results of the stress analysis of the FE model using the input device, and rendering the region of the rendered results of the stress analysis of the FE model. The FEA-AR method may comprise attaching a virtual volume to a captured image of the input device for selecting the region based on a rendered location and orientation of the virtual volume.

The various functions or processes disclosed herein may be described as data and/or instructions embodied in various computer-readable media, in terms of their behavioral, register transfer, logic component, transistor, layout geometries, and/or other characteristics. Computer-readable media in which such formatted data and/or instructions may be embodied include, but are not limited to, non-volatile storage media in various forms (e.g., optical, magnetic or semiconductor storage media) and carrier waves that may be used to transfer such formatted data and/or instructions through wireless, optical, or wired signaling media or any combination thereof. Examples of transfers of such formatted data and/or instructions by carrier waves include, but are not limited to, transfers (uploads, downloads, e-mail, etc.) over the internet and/or other computer networks via one or more data transfer protocols (e.g., HTTP, FTP, SMTP, etc.). When received within a computer system via one or more computer-readable media, such data and/or instruction-based expressions of components and/or processes under the system described may be processed by a processing entity (e.g., one or more processors) within the computer system in conjunction with execution of one or more other computer programs. Aspects of the systems and methods described herein may be implemented as functionality programmed into any of a variety of circuitry, including programmable logic devices (PLDs), such as field programmable gate arrays (FPGAs), programmable array logic (PAL) devices, electrically programmable logic and memory devices and standard cell-based devices, as well as application specific integrated circuits (ASICs). Some other possibilities for implementing aspects of the system include: microcontrollers with memory (such as electronically erasable programmable read only memory (EEPROM)), embedded microprocessors, firmware, software, etc. 
Furthermore, aspects of the system may be embodied in microprocessors having software-based circuit emulation, discrete logic (sequential and combinatorial), custom devices, fuzzy (neural) logic, quantum devices, and hybrids of any of the above device types. Of course, the underlying device technologies may be provided in a variety of component types, e.g., metal-oxide semiconductor field-effect transistor (MOSFET) technologies like complementary metal-oxide semiconductor (CMOS), bipolar technologies like emitter-coupled logic (ECL), polymer technologies (e.g., silicon-conjugated polymer and metal-conjugated polymer-metal structures), mixed analog and digital, etc.

The above description of illustrated embodiments of the systems and methods is not intended to be exhaustive or to limit the systems and methods to the precise forms disclosed. While specific embodiments of, and examples for, the systems, components and methods are described herein for illustrative purposes, various equivalent modifications are possible within the scope of the systems, components and methods, as those skilled in the relevant art will recognize. The teachings of the systems and methods provided herein can be applied to other processing systems and methods, not only for the systems and methods described above.

The elements and acts of the various embodiments described above can be combined to provide further embodiments. These and other changes can be made to the systems and methods in light of the above detailed description.

In general, in the following claims, the terms used should not be construed to limit the systems and methods to the specific embodiments disclosed in the specification and the claims, but should be construed to include all processing systems that operate under the claims. Accordingly, the systems and methods are not limited by the disclosure, but instead the scope of the systems and methods is to be determined entirely by the claims.

Unless the context clearly requires otherwise, throughout the description and the claims, the words "comprise," "comprising," and the like are to be construed in an inclusive sense as opposed to an exclusive or exhaustive sense; that is to say, in a sense of "including, but not limited to." Words using the singular or plural number also include the plural or singular number respectively. Additionally, the words "herein," "hereunder," "above," "below," and words of similar import refer to this application as a whole and not to any particular portions of this application. When the word "or" is used in reference to a list of two or more items, that word covers all of the following interpretations of the word: any of the items in the list, all of the items in the list and any combination of the items in the list.