Title:
DISPLAY OF THREE-DIMENSIONAL MODEL INFORMATION IN VIRTUAL REALITY
Document Type and Number:
WIPO Patent Application WO/2017/222829
Kind Code:
A1
Abstract:
For display of a 3D model (20) in virtual reality (VR), merely converting the CAD display from the 2D screen to a 3D image may not sufficiently reduce the information clutter. To provide metadata for the 3D CAD model (20) with less occlusion or clutter, a separate space (32) is generated in the virtual reality environment. Metadata and information about the 3D CAD model (20) and/or selected part (36) of the 3D CAD model (20) is displayed (58) in the separate space (32). The user may view the 3D CAD model (20) in one space (30), the metadata with or without a representation of a part in another space (32), or combinations thereof.

Inventors:
KRITZLER MAREIKE (US)
MAYR MATTHIAS (US)
Application Number:
PCT/US2017/036721
Publication Date:
December 28, 2017
Filing Date:
June 09, 2017
Assignee:
SIEMENS AG (DE)
International Classes:
G06F17/50; G06F3/01; G06T19/00
Foreign References:
US20130141428A12013-06-06
US20090187389A12009-07-23
US20140002351A12014-01-02
Other References:
NATHAN BEATTIE ET AL: "Taking the LEAP with the Oculus HMD and CAD - Plucking at thin Air?", PROCEDIA TECHNOLOGY, vol. 20, 29 June 2015 (2015-06-29), pages 149 - 154, XP055394856, ISSN: 2212-0173, DOI: 10.1016/j.protcy.2015.07.025
Attorney, Agent or Firm:
RASHIDI-YAZD, Seyed Kaveh E. (US)
Claims:
I (WE) CLAIM:

1. A system for display of a three-dimensional model (20) in virtual reality, the system comprising:

a memory (18) configured to store the three-dimensional model (20) of an object;

a virtual reality headset (22) configured to display a virtual environment to a user wearing the virtual reality headset (22); and

a processor (14) configured to generate the virtual environment, the virtual environment having (1) a representation of the three-dimensional model (20) positioned on a first side of a surface (34) and (2) a representation of a user selected part of the three-dimensional model (20) and metadata for the user selected part positioned on a second side of the surface (34).

2. The system of claim 1 wherein the three-dimensional model (20) comprises a computer assisted design model (20) of the object, the object including a plurality of inter-related parts including the user selected part.

3. The system of claim 1 wherein the virtual reality headset (22) comprises a stereographic head-mounted display and an inertial movement sensor.

4. The system of claim 1 wherein the virtual reality headset (22) comprises a gesture sensor, the processor (14) configured to alter a perspective of the user between the first and second sides of the surface (34) in response to a gesture detected by the gesture sensor.

5. The system of claim 1 wherein the surface (34) is a ground plane on which the three-dimensional model (20) rests and the second side is below the ground plane, wherein the processor (14) is configured to alter a perspective of the user between the first and second sides as an elevator motion.

6. The system of claim 1 wherein the surface (34) is semi-transparent.

7. The system of claim 1 wherein the representation of the metadata comprises boards (42) with images, text, or images and text of the metadata, the boards (42) positioned around an outer region with the representation of the user selected part positioned in an inner region.

8. The system of claim 1 wherein the representation of the user selected part is disconnected from the three-dimensional model (20) and is on a pedestal displayed on a second surface (34).

9. The system of claim 1 wherein the processor (14) is configured to navigate perspective around and into the three-dimensional model (20) and to select the user selected part.

10. The system of claim 1 wherein the representation of the three-dimensional model (20) is free of occlusion from the metadata.

11. A method for display of a three-dimensional model (20) with a virtual reality headset (22), the method comprising:

displaying (50) a first view of a three-dimensional computer assisted design of an arrangement of parts resting on a first level of a three-dimensional space in the virtual reality headset (22);

receiving (52) user selection of a first part of the arrangement of parts;

transitioning (56) a user view from the first view, through a floor of the first level, and to a second view on an opposite side of the floor than the first view, the second view being of a second level of the three-dimensional space; and

displaying (58) the first part and information about the first part at the second view in the virtual reality headset (22).

12. The method of claim 11 wherein the floor comprises a ground surface (34) on which the arrangement of parts rests, and wherein transitioning (56) comprises transitioning (56) as an elevator moving the user's perspective downward.

13. The method of claim 11 wherein transitioning (56) comprises transitioning (56) without altering a user orientation.

14. The method of claim 11 further comprising detecting (54) a gesture with a gesture sensor, and wherein the transitioning (56) occurs in response to the detecting of the gesture.

15. The method of claim 11 further comprising transitioning (60) from the second view back to the first view.

16. The method of claim 11 wherein displaying (58) the first part and the information comprises displaying (58) the first part on a pedestal and displaying (58) the information as virtual boards (42) spaced from and positioned around the pedestal.

17. The method of claim 11 wherein the floor is semi-transparent, and wherein displaying (58) the second view comprises displaying (58) the first part, the floor, and at least part of the arrangement of parts on another side of the floor.

18. A virtual reality system comprising:

a stereographic display (26) configured to represent an interior of a computer assisted design model (20) in a first space (30) and information about the computer assisted design model (20) in a second space (32) separated from the first space (30);

a user input sensor (12, 16) configured to receive user interaction input from a user; and

a processor (14) configured to alter, in response to the interaction input, a user focus from the interior to the information.

19. The virtual reality system of claim 18 wherein the interior is displayed without the information prior to the alteration, and wherein the information is displayed after the alteration and without occluding the interior.

20. The virtual reality system of claim 18 wherein a semi-transparent ground plane (34) separates the first space (30) from the second space (32) and wherein the alteration emulates an elevator moving from the first space (30) to the second space (32).

Description:
DISPLAY OF THREE-DIMENSIONAL MODEL INFORMATION IN VIRTUAL REALITY

RELATED APPLICATIONS

[0001] The present patent document claims the benefit of the filing date under 35 U.S.C. § 119(e) of Provisional U.S. Patent Application Serial No. 62/353,073, filed June 22, 2016, which is hereby incorporated by reference.

BACKGROUND

[0002] The present embodiments relate to display of three-dimensional models. Rockets, wind turbines, cars, bikes, shoes, slotted screws, and other objects are designed using computer assisted design (CAD) software. CAD is used to create models of any sizes, for any industries and any purposes. Engineers design, analyze, and simulate the properties of the objects using CAD. Engineers may modify single parts of three-dimensional (3D) CAD models. The parts may be combined into an assembly.

[0003] The keyboard and mouse are used to design 3D CAD models on a computer. Computer screens and CAD software provide a high density of information and easy access to different tools for creation, manipulation, and visualization, for example. Vendors of CAD software offer tools to view CAD files as three-dimensional (3D) models rendered to a two-dimensional (2D) screen of a desktop computer. Metadata about the 3D model is displayed on panels that may overlap the 3D model or require the 3D model to be smaller than the real-world dimension to allow for display of a panel. As engineered objects grow in complexity, it becomes much harder to interact with these models and view their connected information. Representing 3D CAD models on a 2D computer screen misses out on the third dimension and displaying functional metadata may become overwhelming.

SUMMARY

[0004] By way of introduction, the preferred embodiments described below include methods, systems, instructions, and computer readable media for display of a 3D model in virtual reality (VR). Merely converting the CAD display from the 2D screen to a 3D image may not sufficiently reduce the information clutter. To provide metadata for the 3D CAD model with less occlusion or clutter, an additional space is generated in the virtual reality environment. Metadata about the 3D CAD model and/or selected part of the 3D CAD model is displayed in the additional space. The user may view the 3D CAD model in one space, the metadata with or without a representation of a part in another space, or combinations thereof.

[0005] In a first aspect, a system is provided for display of a three-dimensional model in virtual reality. A memory is configured to store the three-dimensional model of an object. A virtual reality headset is configured to display a virtual environment to a user wearing the virtual reality headset. A processor is configured to generate the virtual environment, the virtual environment having (1) a representation of the three-dimensional model positioned on a first side of a surface and (2) a representation of a user selected part of the three-dimensional model and metadata for the user selected part positioned on a second side of the surface.

[0006] In a second aspect, a method is provided for display of a three-dimensional model with a virtual reality headset. A first view of a three-dimensional computer assisted design of an arrangement of parts is displayed resting on a first level in a three-dimensional space in the virtual reality headset. User selection of a first part of the arrangement of parts is received. A user view transitions from the first view, through a floor, and to a second view on an opposite side of the floor than the first view on a second level of the three-dimensional space. The first part and information about the first part are displayed at the second view in the virtual reality headset.

[0007] In a third aspect, a virtual reality system includes a stereographic display configured to represent an interior of a computer assisted design model in a first space and information about the computer assisted design model in a second space separated from the first space. A user input sensor is configured to receive interaction input from a user. A processor is configured to alter, in response to the interaction input, a user focus from the interior to the information.

[0008] The present invention is defined by the following claims, and nothing in this section should be taken as a limitation on those claims. Further aspects and advantages of the invention are discussed below in conjunction with the preferred embodiments and may be later claimed independently or in combination.

BRIEF DESCRIPTION OF THE DRAWINGS

[0009] The components and the figures are not necessarily to scale, emphasis instead being placed upon illustrating the principles of the invention. Moreover, in the figures, like reference numerals designate corresponding parts throughout the different views.

[0010] Figure 1 shows an embodiment of a virtual reality system with a multi-space environment;

[0011] Figure 2 illustrates one example of a multi-space virtual environment;

[0012] Figure 3 illustrates an example interior view of a 3D CAD model;

[0013] Figure 4 is another example of a multi-space virtual environment;

[0014] Figure 5 illustrates an example view of an information center for metadata with part of a 3D CAD model viewable through a semi-transparent surface; and

[0015] Figure 6 is a flow chart diagram of one embodiment of a method for display of a three-dimensional model with a virtual reality headset.

DETAILED DESCRIPTION OF THE DRAWINGS AND PRESENTLY PREFERRED EMBODIMENTS

[0016] 3D CAD models and model information are displayed in a VR environment. CAD is an important part of engineering. CAD programs help to create, analyze, and optimize models ranging from screws to airplanes. VR is a computer-simulated world viewed by a user with a headset. The headset features a screen for each eye and tracks the movements of the head with an inertial measurement unit (IMU) for navigating the VR. Artificial worlds may feel real since the application reacts to movements of the body and head. The combination of CAD and VR allows users to explore a 3D CAD model in VR. The model may be experienced in the actual or another size.

[0017] VR may help to explore 3D CAD models more immersively and intuitively compared to visualization on a 2D computer screen. The user may interact with and manipulate a virtual representation based on a 3D CAD model. VR systems allow users to experience a virtual world, in which space is unlimited and laws of physics may be excluded or altered. A rich experimentation field allows the user to explore properties of objects yet to be built. In one embodiment, a gravity-free environment allows users to navigate through a displayed model in 3D space and experience both the visual set-up as well as the associated information. VR is a well-suited technology that may close the gap between visualization limitations of 3D models on 2D computer screens and the lack of immersive exploration possibilities.

[0018] While exploring 3D CAD models in VR, users may step into the 3D model to learn about parts and details on the inside of the 3D model. In the VR, the users may be completely or mostly surrounded by the 3D model. In this situation, the display of information about the 3D model in general or its parts is not possible without occluding the 3D model or disabling large parts of the virtual 3D model. In addition, the space inside 3D models is very limited. There might not be enough room to display information panels without reorientation. These panels of information may hinder the exploration of the remainder of the 3D model. Panels cover the view and the interaction possibilities with the other parts of the 3D model. Furthermore, the investigated part may be attached to or built-in with other parts of the 3D model, so these other parts of the 3D model may occlude some of the part of interest.

[0019] To deal with the large amount of available information with less occlusion, a multi-space (e.g., two-story) metaphor separates the 3D virtual representation from the associated information. The second level or space is an addition to the traditional way of viewing 3D CAD data in VR. This approach allows the display of the 3D CAD model and associated information without covering parts of the presented 3D CAD model. Models of all sizes may be explored immersively while still offering enough space to examine a specific part of the 3D model in the separate space. The addition of the separate space from the display of the whole 3D model prevents users from getting lost in the VR world. The direct connection between the two spaces relates the spaces together. The connection may be made by a semi-transparent floor, for example. Users are able to locate the separate space as well as themselves in the virtual world. The introduction of the separate space for displaying a part of the 3D CAD model avoids occlusion by the rest of the 3D model and occlusion by panels that display information next to a part. This separate space may overcome the disadvantages of boards being displayed beside the 3D model since 3D models may be of large objects (e.g., train, automobile, plane, or wind turbine).

[0020] Figure 1 shows one embodiment of a system for display of a 3D model in VR. This VR system is configured to display the 3D model in one space and additional information in an additional separate space. The spaces are connected in a known or intuitive manner, such as being one above the other, for ease of navigation.

[0021] The system includes a VR headset 22 with one or more sensors 12, 16, a processor 14, a memory 18, and a stereographic display 26. Additional, different, or fewer components may be provided. For example, motion detecting gloves or motion detection sensor, a microphone, speaker (e.g., headphone), or other VR hardware is provided. As another example, the sensor 12 is not provided.

[0022] The system implements the method of Figure 6 or a different method. For example, the processor 14 and stereographic display 26 implement acts 50, 56, 58, and 60. The sensor 12 implements act 54. The sensor 12 and/or processor 14 implement act 52. Other components or combinations of components may implement the acts.

[0023] The processor 14 and/or memory 18 are part of the VR headset 22. The processor 14 and/or memory 18 are included in a same housing with the stereographic display 26 or are in a separate housing. In a separate housing, the processor 14 and/or memory 18 are wearable by the user, such as in a backpack, belt mounted, or strapped on arrangement. Alternatively, the processor 14 and/or memory 18 are a computer, server, workstation, or other processing device spaced from a user and using communications with the VR headset 22, stereographic display 26 and/or sensor 12. Wired or wireless communications are used to interact between the processor 14, the memory 18, the sensor 12, the display 26, and any other electrical component of the VR headset 22. Separate processors may be used for any of the components.

[0024] The memory 18 is a graphics processing memory, video random access memory, random access memory, system memory, cache memory, hard drive, optical media, magnetic media, flash drive, buffer, database, combinations thereof, or other now known or later developed memory device for storing the 3D model, metadata, camera information, stereo images, sensor signals, and/or other information. The memory 18 is part of a computer associated with the processor 14, the VR headset 22, or a standalone device.

[0025] The memory 18 is configured to store the 3D model of an object, such as a CAD model of the object 20. During virtual reality navigation, the 3D CAD model or 3D data is used to render or generate stereoscopic views from any potential user viewpoint.

[0026] The object 20 is represented by 3D data. For example, a building, an assembly of inter-related parts, or manufactured object 20 is represented by CAD data and/or other engineering data. The 3D data is defined by segments parameterized by size, shape, and/or length. Other 3D data parameterization may be used, such as a mesh or interconnected triangles. As another example, the 3D data is voxels distributed along a uniform or nonuniform grid. Alternatively, segmentation is performed, and the 3D data is a fit model or mesh.

[0027] The 3D model represents the geometry of the object 20. One or more surfaces are represented. The 3D model includes surfaces of the object 20 not in view of a virtual camera. 3D CAD is typically represented in 3D space by using XYZ coordinates (vertices). Connections between vertices are known - either by geometric primitives like triangles/tetrahedrons or more complex 3D representations composing the 3D CAD model. CAD data is clean and complete (watertight) and does not include noise. CAD data is generally planned and represented in metric scale. Engineering or GIS data may also include little or no noise. More than one model may be presented, such as two or more 3D models positioned adjacent to each other.
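By way of illustration only, the following sketch (in Python) shows one possible in-memory representation consistent with this description, with vertices as XYZ coordinates and connectivity as triangles indexing into the vertex list. The class and field names are hypothetical and do not correspond to any particular CAD file format or to the disclosed implementation.

    from dataclasses import dataclass, field
    from typing import Dict, List, Tuple

    Vertex = Tuple[float, float, float]      # XYZ coordinates, metric scale
    Triangle = Tuple[int, int, int]          # indices into the vertex list

    @dataclass
    class CadPart:
        name: str
        vertices: List[Vertex]
        triangles: List[Triangle]
        metadata: Dict[str, str] = field(default_factory=dict)  # non-geometric labels

    @dataclass
    class CadModel:
        name: str
        parts: List[CadPart] = field(default_factory=list)

    # Example: one triangular facet of a part, carrying part-information labels.
    panel = CadPart(
        name="access_panel",
        vertices=[(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)],
        triangles=[(0, 1, 2)],
        metadata={"part_number": "A-123", "material": "aluminum"},
    )
    model = CadModel(name="example_assembly", parts=[panel])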

[0028] The 3D model may include metadata. The metadata is encoded in the model or is stored separately from the 3D model depending on the file format. The metadata may be one or more labels. The labels are information other than the geometry of the physical object. The labels may be part information (e.g., part number, available options, material, manufacturer, recall notice, performance information, use instructions, assembly/disassembly instructions, cost, and/or availability). The labels may be other information, such as shipping date for an assembly. The label may merely identify the object 20 or part of the object 20. Different metadata may be provided for different parts of an assembly. The metadata may include reference documentation, such as material data sheets, simulation results, test results, recall notices, quality control information, design alternatives, and/or other engineering information. The metadata is information about the 3D model other than the geometry.

[0029] The memory 18 or other memory is alternatively or additionally a computer readable storage medium storing data representing instructions executable by the programmed processor 14 or another processor. The instructions for implementing the processes, methods, acts, and/or techniques discussed herein are provided on non-transitory computer-readable storage media or memories, such as a cache, buffer, RAM, removable media, hard drive, or other computer readable storage media. Non-transitory computer readable storage media include various types of volatile and nonvolatile storage media. The functions, acts, or tasks illustrated in the figures or described herein are executed in response to one or more sets of instructions stored in or on computer readable storage media. The functions, acts, or tasks are independent of the particular type of instruction set, storage media, processor, or processing strategy and may be performed by software, hardware, integrated circuits, firmware, micro code, and the like, operating alone or in combination. Likewise, processing strategies may include multiprocessing, multitasking, parallel processing, and the like.

[0030] In one embodiment, the instructions are stored on a removable media device for reading by local or remote systems. In other embodiments, the instructions are stored in a remote location for transfer through a computer network or over telephone lines. In yet other embodiments, the instructions are stored within a given computer, CPU, GPU, or system.

[0031] The VR headset 22 includes a stereographic head-mounted display 26 and an inertial movement sensor 16. Speakers, a microphone, and/or other devices may be included. Any now known or later developed VR headset may be used, such as Oculus Rift, Google Cardboard, HTC Vive, PlayStation VR, or Samsung Gear VR. In alternative embodiments, one or more projectors are used instead of the stereographic display 26. The projectors project images onto the retina of the user. As another example, an eye tracker (e.g., a camera directed at the user's eye) is used to align the perspective with the direction of the user's focus instead of using head motion.

[0032] The VR headset 22 is configured to display a virtual environment to a user wearing the VR headset 22. For example, the VR headset 22 is a pair of goggles restricting all or most of the user's field of view to the stereographic display 26. The head mounted and/or eyewear device may cover the entire field of view of the user. Part of the field of view of the user may be restricted, such as blocking peripheral vision. The VR headset is head mounted or mountable. For example, the VR headset includes a hat or ring for resting on the head of the user and placing the display 26 in front of the eyes of the user. As a head-mounted display, a harness or helmet supports the display.

[0033] The stereographic display 26 includes a separate display for each eye of the user. A barrier limits interference from a display for one eye on the display for the other eye. A single display may be used to show two images, one for each eye. The eye-specific images are rendered from different camera angles and/or positions, providing stereographic images. In alternative embodiments, other stereographic displays may be used.

[0034] The inertial movement sensor 16 is a gyroscope, accelerometer, structured light sensor, gravitational field sensor, and/or other device for detecting change in position and/or motion of the user's head or the VR headset 22. Magnetic field sensors may be used to detect the position and/or motion. The inertial movement sensor 16 measures position or change in position, such as a gyroscope for measuring in six degrees of freedom. The perspective of the field of view in the virtual reality adjusts with the user's head movements. In alternative embodiments, user input devices, such as arrow keys, are used instead of inertial movement.

[0035] A source of the VR environment is provided, such as the graphics processor 14. The graphics processor 14 generates the images for the stereographic display 26 based on input from the inertial movement sensor 16. The processor 14 generates the multi-space environment for displaying the CAD model and metadata for a user selected part of the CAD model.

[0036] The sensor 12 is a user input sensor configured to receive interaction (e.g., navigation) input from a user. A keyboard, mouse, trackball, touch pad, touchscreen, gesture sensor or other user input device is used. The user input sensor measures or receives input from the user for navigating and/or interacting with the virtual environment.

[0037] Alternatively or additionally, the sensor 12 is a gesture sensor. For example, a Leap Motion gesture sensor is used to recognize the user's hands. Two cameras and software running on the graphics processor 14 or a separate processor estimate the position of both hands, including fingers. The sensor 12 fixedly or releasably connects with the VR headset 22, such as by being glued to or mounted on the front of the VR headset 22. A cable or wireless connection is used to provide measurements from the sensor 12. In other embodiments, controllers or sensors are mounted on the user's hands or in gloves for gesture sensing.

[0038] An interaction scheme recognizes the user's hand gestures. For example, a pattern (such as a pinch) due to motion or position of the hand, hands, finger, and/or fingers is detected. Any number of distinct gestures may be used, such as a pinch (e.g., move, rotate, scale), a grab (e.g., move the user in the scene), and a thumbs up (e.g., switch between the ground plane and the information center). Each gesture, depending on context, is mapped to a reaction. Direct feedback may be provided to users once a gesture has been recognized. For example, if a pinch or grab is recognized, the hand is highlighted by sparkles, whereas the thumbs up gesture instantly triggers the associated action. Alternatively or additionally, the feedback is highlighting or another response.
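For illustration only, a minimal sketch of one way such a gesture-to-reaction mapping could be organized is given below in Python. The function names, gesture labels, and print statements are hypothetical placeholders for the pinch, grab, and thumbs up reactions described above, not part of the disclosed implementation.

    def highlight_hand_with_sparkles():
        print("hand highlighted")              # direct feedback once a gesture is recognized

    def manipulate_selection():
        print("move / rotate / scale the selected part")

    def move_user_in_scene():
        print("move the user through the scene")

    def toggle_space():
        print("switch between the ground plane and the information center")

    GESTURE_ACTIONS = {
        "pinch": manipulate_selection,
        "grab": move_user_in_scene,
        "thumbs_up": toggle_space,
    }

    def on_gesture(gesture: str) -> None:
        action = GESTURE_ACTIONS.get(gesture)
        if action is None:
            return                              # unrecognized pattern: ignore
        if gesture in ("pinch", "grab"):
            highlight_hand_with_sparkles()      # feedback before the manipulation
        action()                                # thumbs up triggers its action instantly

    on_gesture("thumbs_up")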

[0039] In one embodiment, the VR headset 22 is a smartphone serving as the processor 14, memory 18, inertial motion sensor 16, and stereographic display 26. A gesture sensor 12 is mounted to the front of the VR headset 22 for viewing the user's hands and/or fingers. Any hardware and/or software may be used, such as a 3D game engine (e.g., Unity), a VR headset (e.g., Google Cardboard), a stereographic display 26 (e.g., smartphone), processor 14 (e.g., smartphone), memory 18 (e.g., smartphone), a gesture sensor (e.g., Leap Motion), a menu interface (e.g., Hovercast as part of the Hover-VR Interface Kit), and a streaming application (e.g., Trinus VR for generating images). Other arrangements may be used.

[0040] The processor 14 is a general processor, central processing unit, control processor, graphics processor, graphics processing unit, digital signal processor, three-dimensional rendering processor, image processor, application specific integrated circuit, field programmable gate array, digital circuit, analog circuit, combinations thereof, or other now known or later developed device. The processor 14 is a single device or multiple devices operating in serial, parallel, or separately. The processor 14 may be a main processor of a computer, such as a smart phone, laptop or desktop computer, or may be a processor for handling some tasks in a larger system, such as in the VR headset 22. The processor 14 is configured by instructions, design, firmware, hardware, and/or software to perform the acts discussed herein.

[0041] The processor 14, using stereographic imaging, generates the virtual environment. The virtual environment has a representation or display of the 3D CAD model from the memory 18. The representation may be of any size because the open space is not limited to the dimensions of the displayed 3D CAD model. The VR environment includes multiple spaces 30, 32. These spaces 30, 32 may be viewed simultaneously. For example, the user's perspective is zoomed out to include both spaces. As another example, while the user's perspective is placed in or to view one space, at least part of the other space is viewable due to the orientation of the user.

[0042] The spaces 30, 32 are separate. The spaces 30, 32 may be viewed separately without view of the other space 32, 30. Any separation may be used, such as by a wall or surface 34. Alternatively, the spaces 30, 32 are separate by not being connected (i.e., no parts or graphics of one extend into the other, or the spaces are separated by a region or space).

[0043] The spaces 30, 32 have a known relationship to each other, such as having a same orientation (e.g., up is the same in both spaces) and/or being aligned by coordinate systems (e.g., same scale so that a part in one is at the same coordinates in the other). One space 32 may shift relative to the other based on a selected part (e.g., placing a center of the information center space 32 at a center of a part 36 in the 3D model 20). In one embodiment, the spaces 30, 32 have a known relationship to each other by being adjacent. For example, one is to the left of the other, or one is above the other.
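As an illustrative sketch of this alignment (assuming both spaces share scale and orientation, the Y axis is up, and the information center sits a fixed distance below the model space), the second space may simply be re-centered under the selected part. The constant, the coordinate convention, and the function name below are hypothetical, not taken from the disclosure.

    LEVEL_OFFSET = -10.0   # vertical distance between the two floors (illustrative)

    def information_center_origin(part_center):
        """Re-center the second space under the selected part's center."""
        x, y, z = part_center
        return (x, y + LEVEL_OFFSET, z)   # same lateral position, one level down

    print(information_center_origin((2.5, 1.2, -4.0)))   # -> (2.5, -8.8, -4.0)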

[0044] In one embodiment shown in Figure 1, the relationship is by floors, or one space 30 is above the other space 32. A semi-transparent ground floor or surface 34 appears to support or hold the entire 3D CAD model 20. The 3D CAD model 20 may be sized according to the current needs to allow closer looks but also a real size experience. The other space 32 is an information center underneath the ground floor or surface 34. This second lower floor is used to examine and display a single part 36 or a sub-assembly of parts as well as to show information 42 (e.g., metadata) about the part 36, sub-assembly, and/or entire 3D CAD model 20. The display in this information center space 32 avoids occluding the 3D CAD model 20.

[0045] In the 3D model space, the representation of the 3D model is positioned on a top side of the surface 34 or otherwise positioned in the space 30. The representation is a 3D representation, such as showing the 3D model from a given perspective. The 3D model may be presented as life-size to help with orientation, but any scale, size, height, or other dimensions of the model may be used.

[0046] The surface 34 is a ground floor, but may be a wall, ceiling, or other separator. The floor separates a three-dimensional space into different levels. The other space 32 is below the surface 34. The surface 34 is a divider that separates the spaces 30, 32.

[0047] The surface 34 is opaque or semi-transparent. For example, the surface 34 is opaque while the user is viewing with a camera positioned above the surface 34 and/or in the space 30. When the camera is positioned away from the 3D model 20, such as a lateral position beyond a border of the surface 34, then the surface 34 is shown as semi-transparent. Alternatively, the surface 34 is always semi-transparent. Semi-transparent includes being viewed as a flat surface having a same level of transparency and/or having points or grid lines less transparent than other parts.
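A minimal sketch of one possible transparency rule consistent with this description follows, assuming Y is the vertical axis, the surface is a disc centered at the origin at height zero, and the alpha values (1.0 opaque, lower values semi-transparent) are illustrative only; this is not the disclosed implementation.

    def surface_alpha(camera_pos, surface_height=0.0, surface_radius=8.0):
        """Opaque while the camera is above the surface and inside its boundary."""
        x, y, z = camera_pos
        above = y > surface_height
        inside_boundary = (x * x + z * z) ** 0.5 <= surface_radius
        return 1.0 if (above and inside_boundary) else 0.35   # 1.0 = fully opaque

    print(surface_alpha((1.0, 2.0, 1.0)))    # above and inside the boundary -> 1.0
    print(surface_alpha((12.0, 2.0, 0.0)))   # laterally beyond the border   -> 0.35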

[0048] The surface 34 has any dimensions and shape, such as being an oval sized to be wider and deeper than, and at least as big as, the entire 3D model. The size and/or shape of the surface 34 may adapt to the size of the 3D model 20. Figure 2 shows a car 3D model 20 with the surface 34 extending beyond the 3D model 20 laterally by less than half a width or depth of the 3D model 20. Figure 2 shows the camera positioned above the surface 34 but placed laterally outside of the boundary of the surface 34. The VR environment includes both spaces 30, 32 as simultaneously viewed by the user from this camera position.

[0049] Based on input from the sensors 12, 16, the processor 14 generates the virtual environment on the stereographic display 26 to show navigation around, in, and/or through either of the spaces 30, 32, the part 36, and/or the 3D model 20. The user perspective may be changed to view around and/or into the 3D model 20. The navigation may be continuous between and through the different spaces 30, 32, but disjointed navigation between spaces 30, 32 may be provided. Users may explore the 3D model as a whole and select parts or sets of parts of interest. Different types of movement allow the user to choose the most comfortable and best suited movement method. A provided set of tools enables users to interact with the 3D CAD model 20. A menu may be shown at the location of the user's hand in the virtual environment. A board with the menu options may be activated and displayed for selection of menu items.

[0050] The user may select a part 36 or sub-assembly. The sensor 12, 16 is activated, triggering selection of a part. This approach may not use gravity, allowing users to place parts anywhere in the scene. Alternatively, gravity is used. Physics may allow or not allow the parts to collide if one part is being moved. By not allowing collision, it may be easier to pick and move any part instead of disassembling the entire 3D CAD model piece by piece. Users can deactivate parts or assemblies to expose built-in parts.
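For illustration only, a sketch of this selection behavior is given below under the assumption of a simple per-part scene object; the class and attribute names are hypothetical and not part of the disclosure.

    class VirtualPart:
        def __init__(self, name):
            self.name = name
            self.uses_gravity = False     # parts may be placed anywhere in the scene
            self.collides = False         # collisions ignored while a part is moved
            self.active = True            # deactivated parts are hidden from view
            self.position = (0.0, 0.0, 0.0)

    def pick_and_move(part, target_position):
        part.position = target_position   # no gravity or collision constraints apply

    def deactivate(assembly):
        for part in assembly:
            part.active = False           # hide outer parts to expose built-in parts

    cover = VirtualPart("cover")
    deactivate([cover])                   # expose what is built in behind the cover
    gear = VirtualPart("gear")
    pick_and_move(gear, (0.5, 1.0, 0.2))
    print(gear.position, cover.active)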

[0051] As the user navigates through or relative to the 3D model, the information center space 32 stays at the same relative height, but the content of the information center space 32 may follow the user while exploring the 3D CAD model 20. As the user navigates or selects, the information center is updated to include the metadata associated with the currently selected part. This helps users to not lose their orientation when switching to the information center space 32.

[0052] Figure 3 shows an example view of the 3D CAD model 20 with the camera and corresponding perspective of the user being within the 3D CAD model 20. Several different parts 36 are shown in the stereographic display of the interior of the 3D model 20. Only the first space 30 is shown in the perspective of Figure 3. While the user navigates in the 3D model, the geometric information is displayed without other metadata. Some metadata may be displayed over one or more parts 36.

[0053] When the user selects the part 36, the part 36 may be highlighted. If metadata were to be shown in the space 30 from the interior perspective, then the metadata, such as on a panel or other annotation, would occlude parts of the 3D model 20 otherwise viewable from the given perspective. While exploring a 3D model virtually, users may fully immerse themselves in the scene and be surrounded by the 3D model. In this situation, the display of information about the 3D model in general or its parts is not possible without occluding the 3D model with virtual boards, virtual screens, or virtual panels, shrinking the 3D model to free up screen space for the metadata, or disabling large parts of the virtual 3D model. Panels or boards beside a model may not be effective for large 3D CAD models, such as airplanes, trains, or ocean liners. By providing the different space 32 for at least some of the metadata, the interior view may be maintained with little or no occlusion from the metadata.

[0054] The virtual environment includes the information center space 32 for information other than the geometry of the CAD model 20. A 3D representation of a user selected part 36 of the 3D model 20 and/or metadata for the user selected part 36 are positioned in the other space 32. In the example of Figures 1 and 2, the user selected part 36 and the metadata are positioned on a side of the surface 34 opposite the 3D model 20. Stereographic information about the 3D CAD model 20 is presented in the second space 32 separated from the first space 30. By adding a second floor as the information center space 32 underneath the ground plane or surface 34, space is provided to examine a singular part or a set of parts detached from the rest of the 3D model 20 while still having the reference to the full model above. The selected part 36 and/or metadata may be viewed without occluding the rest of the 3D model 20 but still in a frame of reference with the 3D model.

[0055] This information center space 32 empowers users to pick out a specific part 36 or assembly of 3D CAD model 20 and to take a closer look. The part 36 or assembly is not covered by other parts as occurs in the 3D model 20 and users are less likely to be distracted by the rest of the 3D CAD model 20. While the user's perspective or field of view includes or is in the information center space 32, metadata may be displayed without occluding the 3D CAD model 20 and/or the part 36.

[0056] Figure 4 shows one example where the surface 34 is a ground plane of finite extent separating the space 30 for the 3D model 20 from the information center space 32. The information center space 32 includes the user selected part 36 on a pedestal 38. The part 36 is shown on or above a surface 40, such as a floor for the information center space 32. This other surface 40 is semi-transparent or opaque. The part 36 is shown at eye level in an inner region (i.e., centered and/or within the lateral boundary of the floor surface 40). In other embodiments, the pedestal 38 is not provided. The part 36 is shown disconnected from the 3D model 20, so it may be viewed from any angle without occlusion by other parts of the 3D model 20.

[0057] Figure 5 shows an example where the camera is positioned below the surface 34 and outside of a lateral boundary of the floor surface 40. The user's perspective shows the part 36 on the pedestal 38 below the surface 34, through which part of the 3D model 20 may be seen. The addition of the floor surface 40 allows for a view of the selected object or part 36 without occlusion and a reference to the entire 3D model 20. The floor surface 40 is parallel with the ground surface 34, providing a spatial relationship between the spaces 30, 32.

[0058] The metadata is presented in any form. For example, an annotation of alphanumeric text or a link is displayed overlaid on the part 36. Images, text, links, lists, graphs, charts, or combinations thereof of the metadata are presented in the information center space 32.

[0059] In the embodiments shown in Figures 1, 4, and 5, the metadata is presented on one or more panels or boards 42. In Figure 1, the boards 42 are positioned around the boundary of the floor surface 40. In Figures 4 and 5, a tree or menu structure of interconnected boards 42 is shown. The tree of boards 42 is shown behind the part 36 (at least at an initial camera position) and laterally within the boundary of the floor surface 40. Floating text or information without a background board or panel may be used. Any position of the panels or boards may be used, such as a stack and/or positioned laterally outside of the boundary of the floor surface 40.

[0060] The information center allows for the display of the 3D model part 36 and metadata for the part 36 without any occlusion. This information center space 32 may be extended to deliver information about the entire 3D model 20, such as including a list of parts and further metadata. The size of the floor 40 may be limited to give users a better orientation while still featuring enough space to display information. The selected part 36 or set of parts is presented at eye level (optionally on top of a platform 38 that holds a virtual keyboard for text input) while virtual boards 42 around the part 36 display the available information or metadata without occluding the displayed part 36. Other arrangements with different boards or panels, different placement, different surface, and/or multiple parts with or without pedestals, may be provided.

[0061] To populate the information center space 32 for a particular part 36 or sub-assembly, the user selects the part 36 and/or sub-assembly while navigating in or with a view of the 3D model 20 in the model space 30. For example, a pick and examine tool enables users to take a part or assembly out of the 3D CAD model 20. This tool duplicates the part 36 or assembly and moves or places the duplicate in the information center space 32. Alternatively, a focal point of the user's eyes 28 indicates the part 36 within the 3D model 20, and any part 36 at the focal point is selected and duplicated in the information center space 32. In other embodiments, the part 36 is shown moving from the 3D model 20 to the pedestal 38. The metadata and/or a menu for navigating to metadata for the selected part 36 or sub-assembly is also provided in the information center space 32.
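A minimal sketch of such a pick-and-examine step is given below, assuming parts are plain records carrying a metadata dictionary; the names, the pedestal coordinates, and the board format are hypothetical and only illustrate that the original part stays in place while a duplicate and its metadata populate the information center.

    import copy

    PEDESTAL_POSITION = (0.0, -9.0, 0.0)   # eye level in the lower space (illustrative)

    def examine(part, information_center):
        """Duplicate the selected part and populate the information center."""
        duplicate = copy.deepcopy(part)                  # original stays in the 3D model
        duplicate["position"] = PEDESTAL_POSITION
        information_center["pedestal_part"] = duplicate
        # one board entry per metadata item, displayed around the duplicated part
        information_center["boards"] = [
            f"{key}: {value}" for key, value in part.get("metadata", {}).items()
        ]

    info_center = {}
    selected = {"name": "rotor_hub", "position": (3.0, 1.0, 2.0),
                "metadata": {"part_number": "R-42", "material": "steel"}}
    examine(selected, info_center)
    print(info_center["boards"])   # -> ['part_number: R-42', 'material: steel']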

[0062] In one embodiment, a panel list, tree-structure, or graph displays part names of the 3D CAD model 20 and allows users to pick a specific part 36, highlight the part 36 in the 3D model 20, and quickly move or place the part 36 in the information center space 32. The list, graph, or structure is provided in the information center space 32. Instead of spending costly time to search for the part 36 by going through the 3D CAD model 20, the user navigates to a board in the information center space 32. This approach may avoid stationary installation of menus or boards next to the displayed 3D CAD model 20 or having menus showing up between parts 36 in the 3D model 20 and occluding some parts of the 3D model 20.

[0063] Using the spatial relationship between the two spaces 30, 32, the user camera position and/or perspective may be shifted between the two spaces 30, 32. The perspective of the user is altered between the different sides of the ground surface 34. Any trigger may be used, such as a gesture, voice command, or activation of an input key. In one embodiment, the selection of the part 36 or sub-assembly is the trigger. The trigger causes the alteration in perspective. The perspective shifts from the 3D model 20, such as from within the interior of the 3D model 20, to a same view or orientation in front of the part 36 or sub-assembly in the information center space 32, or vice versa. The perspective shift (a fast transition from the model-centric space to the information center in order to avoid motion sickness and user irritation) does not, but may, alter the scale or orientation between the spaces 30, 32.

[0064] Where the two spaces are related by top and bottom, the shift may be an elevator emulation or motion. The shift moves the user straight down or up. The elevator alteration may be instantaneous or occur at any rate. For instantaneous, the user 'teleports' with the selected part (i.e., a fast change of coordinates of the camera, which is the user's location) from the upper 3D model space 30 to the lower floor surface 40 of the information center space 32. For a less than instantaneous elevator motion, the camera shifts up or down gradually, falling or rising through the surface 34. Other motion may be used, such as altering an angle or orientation of the camera or user perspective, moving side to side, zooming out and/or in, or following an arc between the spaces 30, 32 out and past the boundary of the surface 34.

[0065] Given the spatial relationship between the spaces, the alteration may be more easily perceived by the user. The processor 14 alters, in response to the interaction input of a trigger, the user focus from the 3D model 20 (e.g., from an interior view - see Figure 3) to the part 36 and information (e.g., metadata panels) of the information center space 32. The information is initially displayed after the alteration, during the alteration, or before the alteration. The user may interact with the information, such as selecting, highlighting, and/or moving. The information provided in the information center space 32 is provided without occluding the interior of the 3D model 20 since the 3D model 20 is in a separate space 30.

[0066] Figure 6 shows one embodiment of a flow chart of a method for display of a three-dimensional model with a virtual reality headset. In general, the method is directed to navigating around and/or in a 3D CAD model and providing specific part and/or metadata information in a different region to avoid occluding the 3D CAD model with the metadata and/or to provide an unblocked view from any angle of the selected part.

[0067] The method is performed by the system of Figure 1, a graphics processor, a VR headset, or combinations thereof. For example, a stereographic display of a VR headset performs acts 50, 56, 58, and 60. A processor of the VR headset performs act 52. A gesture sensor with or without the processor of the VR headset performs act 54.

[0068] The method is performed in the order shown (top to bottom or numeric) or a different order. For example, Figure 6 shows navigating in the 3D model, transitioning to the information center, then transitioning back. In other embodiments, the initial transition from the information center to the 3D model or a reverse order of transitions is used.

[0069] Additional, different, or fewer acts may be provided. For example, only one transition is made (e.g., from the 3D model to the information center or vice versa). In another example, any of the transitions are repeated. In yet another example, other selection than the user selection of act 52 is used. In other examples, a gesture is not detected in act 54, but instead the transition occurs automatically or in response to other input.

[0070] In act 50, a view of a 3D CAD of an arrangement of parts resting on a surface is displayed in the virtual reality headset. Any camera position may be used. The user may adjust or change the view to examine the 3D CAD from different positions and/or directions. The 3D CAD may be viewed from an exterior and/or interior.

[0071] The view of the 3D CAD of the arrangement is on a surface, such as a ground surface. The surface has any lateral extent, such as being a disc sized to be less than double the lateral extent of the longest width or depth of the 3D CAD. The surface may be infinite. The surface is opaque or semi-transparent. Any pattern may be provided, such as a more opaque grid. No pattern may be provided in other embodiments.

[0072] The camera position is above the surface. As the user navigates to view the 3D CAD, the camera is maintained above the surface. The camera may be positioned below the surface and directed at the surface, such as for viewing a bottom of the 3D CAD through the semi-transparent surface. Alternatively, the 3D CAD floats above the surface or may be separated from the surface to allow viewing a bottom while the camera is above the surface. The user may lift, rotate, activate, select, change, or otherwise manipulate the 3D CAD or parts thereof.

[0073] In act 52, a user input device and/or processor receives user selection of a part of the arrangement of parts. Alternatively, user selection of the arrangement is received. The user, based on detected hand position and a trigger, selects the part in the VR environment. Eye focus and/or gestures may be used in other embodiments. Selection from a list of parts or other navigation to a part of interest may be used. In yet other embodiments, the selection is automatic or occurs without user input.

[0074] In act 54, a sensor with or without the processor detects a gesture of the user. The user moves their hand, hands, and/or fingers in a particular pattern. This movement or placement resulting from the movement is detected. For example, the user balls a hand into a fist with the thumb extended upward. This thumbs-up gesture is detected. In alternative embodiments, other user input is detected, such as activation of a key or a voice command. In yet other embodiments, the user selection of act 52 is used as or instead of the detection of the gesture in act 54.

[0075] In act 56, the graphics processor, using the display on the VR headset, transitions a user view from the view of the 3D CAD, through the surface, and to a view on an opposite side of the surface than the view of the 3D CAD. The user's view is transitioned from the 3D CAD to the information and/or part on an opposite side of the surface. The transition is by changing a position of the camera from one side of the surface to another. Where the surface is a wall, the change may be from one side to another laterally. Alternatively or additionally, the transition is by changing an orientation from being directed above the surface to being directed below the surface. A change in zoom may be used.

[0076] In one embodiment, coordinate systems on opposite sides of the surface are duplicated or mirrored. The same coordinates, or a reflection of the same coordinates, are used for the camera on the opposite sides of the surface. The camera at position X1, Y1, Z1 and 3D angle Θ (roll left/right, pitch up/down, and/or tilt left/right) above the surface viewing a part in the 3D CAD is repositioned at X1, Y1, -Z1 and the same 3D angle Θ below the surface, viewing the part without the rest of the 3D CAD. The user may be at a same height, lateral offset, depth, and angle from the part in both views, but one view is with the part in the 3D CAD and the other view is of the part in the information center (i.e., without the rest of the 3D CAD). In alternative embodiments, different relative positioning is provided for the views on the different sides of the surface.

[0077] Any transition motion may be used. For example, the transition is as an elevator moving the user's perspective downward or upward. The transition changes the position only along one dimension. The orientation and/or zoom are constant or do not change. The user may continue to navigate, resulting in change in other dimensions, angle, or zoom during the transition. The transition is instantaneous, such as moving from one position to the next without any intermediate views. Alternatively, the transition is gradual, including one or more intermediate views as the camera position changes. In other embodiments, the transition motion is along an arc or other non-straight path with or without change in scale (i.e., zoom) and/or angle.
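For illustration, a sketch of both transition styles follows, assuming the vertical coordinate is the second element of the camera position, the dividing surface lies at a known height, and orientation and zoom are left untouched; the mirroring rule, step count, and function names are hypothetical rather than taken from the disclosure.

    def teleport_across(camera_pos, surface_height=0.0):
        """Instantaneous transition: reflect the camera's height across the surface."""
        x, y, z = camera_pos
        return (x, 2.0 * surface_height - y, z)   # lateral position and orientation kept

    def elevator_steps(camera_pos, target_height, steps=30):
        """Gradual transition: change only the vertical coordinate, frame by frame."""
        x, y, z = camera_pos
        return [(x, y + (target_height - y) * (i + 1) / steps, z)
                for i in range(steps)]

    print(teleport_across((2.0, 1.5, -3.0)))           # -> (2.0, -1.5, -3.0)
    print(elevator_steps((2.0, 1.5, -3.0), -9.0, 3))   # three intermediate camera poses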

[0078] The transition occurs in response to the detecting of the gesture in act 54. Other triggers may be used.

[0079] In act 58, the graphics processor, in the display of the VR headset, displays the user selected part and information about the user selected part at the view after transition. Any camera position may be used. Initially, the camera position at the end of the transition is used. The user may adjust or change the view to examine the 3D part and/or information from different positions, scales, and/or directions.

[0080] The view of the part is on a surface and/or pedestal. The surface and/or pedestal has any lateral extent, such as the surface being at least twice the lateral extent of the pedestal and the pedestal being sized to be less than 1.5 times the lateral extent of the longest width or depth of the part. The surface may be infinite. The surface and/or pedestal are opaque or semi-transparent. Any pattern may be provided, such as a more opaque grid. No pattern may be provided in other embodiments.

[0081] The camera position is above the surface of the information center. As the user navigates to view the part or information, the camera is maintained above the surface of the information center and below the surface for the 3D CAD. The camera may be positioned below the surface of the information center and directed at the surface, such as for viewing a bottom of the part through the semi-transparent surface. Alternatively, the part floats above the surface or may be separated from being adjacent or on the surface to allow viewing a bottom while the camera is above the surface. The user may lift, rotate, activate, select, change, or otherwise manipulate the part. The camera may be positioned above the 3D CAD surface viewing the part and/or metadata of the information center.

[0082] The information is displayed as being on one or more boards. The panel or board may be opaque, semi-transparent, or transparent. The link, alphanumeric text, graph, chart, and/or other representation of information about the part are provided on one or more virtual boards. The boards have any placement, such as around the boundary of the surface of the information center (e.g., as partial walls). The boards are spaced from the pedestal and the part to avoid occluding views of the part. The boards may be floating or supported (e.g., presented as signs). The boards may be flat or curved. The user may interact with the boards or content of the boards, such as navigating through content on one or more boards (e.g., selecting information on a board activates another board and/or replaces text with new text).
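A minimal sketch of one way to space the boards evenly around the pedestal so that none of them occludes the displayed part is given below; the radius, board count, and height offset are illustrative values and the function name is hypothetical.

    import math

    def board_positions(pedestal_center, radius=3.0, count=6, height_offset=1.6):
        """Space the boards evenly on a circle around the pedestal."""
        cx, cy, cz = pedestal_center
        positions = []
        for i in range(count):
            angle = 2.0 * math.pi * i / count
            positions.append((cx + radius * math.cos(angle),   # around the part
                              cy + height_offset,              # roughly eye level
                              cz + radius * math.sin(angle)))
        return positions

    for p in board_positions((0.0, -9.0, 0.0)):
        print(tuple(round(c, 2) for c in p))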

[0083] In act 60, the processor, in the display of the VR headset, transitions from the view of the part and/or information to the 3D CAD of the arrangement. The same or different trigger (e.g., thumbs-up) is used. In response to detecting the trigger, the view transitions to the other side of the surface between the spaces. The transition alters the camera in just one dimension without orientation or scale change, but other alterations may be used. The user may transition the view to select a different part and/or to examine the interaction or placement of the previously selected part relative to other portions of the 3D CAD. The user may alter the part (such as color) and transition to view the results of the alteration relative to the 3D CAD.

[0084] While the invention has been described above by reference to various embodiments, it should be understood that many changes and modifications can be made without departing from the scope of the invention. It is therefore intended that the foregoing detailed description be regarded as illustrative rather than limiting, and that it be understood that it is the following claims, including all equivalents, that are intended to define the spirit and scope of this invention.




 