

Title:
SYSTEM AND METHOD FOR MODIFYING A VIRTUAL OBJECT
Document Type and Number:
WIPO Patent Application WO/2022/137081
Kind Code:
A1
Abstract:
Described herein is a system for modifying a 3D virtual object in augmented reality, wherein said 3D virtual object is made up of a plurality of elements; at least one of said elements being associated with at least one 2D image; said system comprising: a first device equipped with a display device and a video camera; wherein said first device is configured for: receiving data associated with said 3D virtual object; displaying an image of an environment on said display device, said displayed image being acquired in real time by means of said video camera; reproducing said 3D virtual object in said displayed image; wherein said first device is further configured for: selecting an element of said 3D virtual object; selecting a 2D image; applying said selected 2D image to said selected element, thereby generating a modified virtual object; displaying said modified virtual object in said displayed image.

Inventors:
MELONI DAVIDE (IT)
FORNASIERO ROSSELLA (IT)
Application Number:
PCT/IB2021/062025
Publication Date:
June 30, 2022
Filing Date:
December 20, 2021
Assignee:
DIGICOMPRO S R L (IT)
International Classes:
G06T19/00; G06T15/04; G06T19/20
Other References:
LANGLOTZ ET AL: "Sketching up the world: In-situ authoring for mobile Augmented Reality", INTERNATIONAL WORKSHOP ON SMARTPHONE APPLICATIONS AND SERVICES 2010, 9 December 2010 (2010-12-09), XP055029083, Retrieved from the Internet [retrieved on 20120606], DOI: 10.1007/s00779-011-0430-0
UNKNOWN: "Sketching up the world: In situ authoring for mobile Augmented Reality", INTERNET CITATION, 12 January 2011 (2011-01-12), pages 1, XP002677260, Retrieved from the Internet [retrieved on 20120606]
ANONYMOUS: "UV mapping - Wikipedia", 19 November 2020 (2020-11-19), XP055837924, Retrieved from the Internet [retrieved on 20210906]
Attorney, Agent or Firm:
BARONI, Matteo et al. (IT)
Claims:
CLAIMS

1. A system (100) for modifying a 3D virtual object in augmented reality, wherein said 3D virtual object is made up of a plurality of elements; at least one of said elements being associated with at least one 2D image; said system (100) comprising:

- a first device (110) equipped with a display device (111) and a video camera (112); wherein said first device (110) is configured for:

- receiving data associated with said 3D virtual object;

- displaying an image of an environment on said display device (111), said displayed image being acquired in real time by means of said video camera (112);

- reproducing said 3D virtual object in said displayed image; wherein said first device (110) is further configured for:

- selecting an element of said 3D virtual object;

- selecting a 2D image;

- applying said selected 2D image to said selected element, thereby generating a modified 3D virtual object;

- displaying said modified 3D virtual object in said displayed image.

2. The system (100) according to the preceding claim, wherein said associating an element of said 3D virtual object with at least one 2D image comprises:

- associating with said element a respective UV mapping;

- associating a point of a 2D image with each point of said UV mapping.

3. The system (100) according to any one of the preceding claims, wherein said system (100) further comprises a server (102); said 3D virtual object being stored in said server (102); said first device (110) being configured for accessing said server (102) and requesting data associated with said 3D virtual object.

4. The system (100) according to the preceding claim, wherein said system (100) further comprises:

- a second device (120) comprising a second display device (121); said second device (120) being configured for:

- accessing said server (102) and receiving data associated with said 3D virtual object;

- reproducing said 3D virtual object on said second display device (121); wherein said second device (120) is further configured for:

- selecting an element of said 3D virtual object;

- selecting a 2D image;

- applying said selected 2D image to said selected element, thereby generating a modified 3D virtual object.

5. The system (100) according to the preceding claim, wherein said first device (110) is further configured for:

- acquiring a photograph by means of said video camera (112);

- supplying said photograph to said second device (120); said second device (120) being further configured for:

- receiving said photograph;

- selecting an area of said photograph; and

- generating a 2D image as a function of said area;

- associating said generated 2D image with at least one element of said 3D virtual object;

- saving said generated 2D image into said server (102).

6. The system (100) according to the preceding claim, wherein said second device (120) is further configured for associating material-related data with said generated 2D image.

7. The system according to any one of claims 3 to 6, wherein said first device (110) and said second device (120) are configured for simultaneously displaying the same 3D virtual object and for modifying data associated with said 3D virtual object on said server (102).

8. A method for modifying a 3D virtual object in augmented reality, comprising:

- providing a 3D virtual object made up of a plurality of elements; at least one of said elements being associated with at least one 2D image;

- activating a first device (110) equipped with a display device (111) and a video camera (112) for:

- receiving data associated with said 3D virtual object;

- displaying an image of an environment on said display device (111), said displayed image being acquired in real time by means of said video camera (112);

- reproducing said 3D virtual object in said displayed image;

- selecting an element of said 3D virtual object;

- selecting a 2D image;

- applying said selected 2D image to said selected element, thereby generating a modified 3D virtual object;

- displaying said modified 3D virtual object in said displayed image.

9. The method according to the preceding claim, wherein said providing a 3D virtual object made up of a plurality of elements comprises:

- associating with at least one of said elements a respective UV mapping;

- associating a point of a 2D image with each point of said UV mapping.

10. Software program comprising instructions that, when loaded into a computer, cause the execution of the method in accordance with any one of claims 8-9.

Description:
Description of Industrial Invention:

"System and method for modifying a virtual object"

Field of the invention

The present invention relates, in general, to the field of augmented reality. In particular, the present invention concerns a system and a method for modifying a virtual object for augmented reality.

Background art

Nowadays, technologies based on augmented reality are becoming increasingly widespread. As is known, augmented reality (hereafter referred to simply as "AR") provides an interactive experience within a real environment, in which virtual objects and/or information generated by a computer are inserted.

The Applicant has observed that, once a three-dimensional virtual object has been displayed, augmented reality systems do not allow such object to be readily and easily modified.

In particular, disadvantageously, real-time modifications can only be made to the colours of a virtual object displayed in AR. Such colour modifications are effected, for example, by means of suitable preloaded libraries.

Summary of the invention

The present invention aims at providing a method and a system for overcoming the above-mentioned problem.

In particular, it is the object of the present invention to provide a system for modifying a 3D virtual object in augmented reality, wherein said 3D virtual object is made up of a plurality of elements. At least one of said elements is associated with at least one 2D image. The system for modifying a 3D virtual object according to the present invention comprises:

- a first device equipped with a display device and a video camera; wherein said first device is configured for:

- receiving data associated with said 3D virtual object;

- displaying an image of an environment on said display device, said displayed image being acquired in real time by means of said video camera;

- reproducing said 3D virtual object in said displayed image; wherein said first device is further configured for:

- selecting an element of said 3D virtual object;

- selecting a 2D image;

- applying said selected 2D image to said selected element, thereby generating a modified virtual object;

- displaying said modified virtual object in said displayed image.

Preferably, at least one element of said 3D virtual object is associated with at least one 2D image. In particular, said element is associated with a respective UV mapping; each point of said UV mapping is associated with a point of a 2D image.

Preferably, the system according to the present invention further comprises a server; said first device is configured for accessing said server and requesting data associated with said 3D virtual object.

Preferably, the system further comprises:

- a second user device comprising a second display device; said second device being configured for:

- accessing said server and receiving data associated with said 3D virtual object;

- reproducing said 3D virtual object on said second display device; said second device is further configured for:

- selecting an element of said 3D virtual object;

- selecting a 2D image;

- applying said selected 2D image to said selected element, thereby generating a modified 3D virtual object.

According to one embodiment, said first device is further configured for: acquiring a photograph by means of said video camera; supplying said photograph to said second device; said second device being configured for: receiving said photograph; selecting an area of said photograph; and generating a 2D image as a function of said area, said generated 2D image being saved into said server and associated with an element of said 3D virtual object.

Preferably, said second device is further configured for associating material-related data with said generated 2D image.

Preferably, said first device and said second user device are configured for simultaneously accessing the same 3D virtual object on said server.

According to a further aspect, the present invention relates to a method for modifying a 3D virtual object in augmented reality, comprising:

- providing a 3D virtual object made up of a plurality of elements; at least one of said elements being associated with at least one 2D image;

- activating a first device equipped with a display device and a video camera for:

- receiving data associated with said 3D virtual object;

- displaying an image of an environment on said display device, said displayed image being acquired in real time by means of said video camera;

- reproducing said 3D virtual object in said displayed image;

- selecting an element of said 3D virtual object;

- selecting a 2D image;

- applying said selected 2D image to said selected element, thereby generating a modified 3D virtual object;

- displaying said modified 3D virtual object in said displayed image.

Preferably, said providing a 3D virtual object made up of a plurality of elements comprises:

- associating with at least one of said elements a respective UV mapping;

- associating a point of a 2D image with each point of said UV mapping.

According to a further aspect, the present invention provides a software program comprising instructions that, when loaded into a computer, cause the execution of the method in accordance with the present invention.

Brief description of the drawings

Further features and advantages will become apparent in light of the following detailed description of some preferred embodiments of the invention.

Such description is provided herein with reference to the annexed drawings, which are also supplied by way of non-limiting example, wherein:

- Figure 1 shows a system for modifying a 3D object in accordance with the present invention;

- Figure 2 shows a flow chart of a method for displaying a 3D virtual object in augmented reality according to the present invention;

- Figure 3 shows a flow chart of a method for modifying a 3D virtual object according to the present invention;

- Figure 4 shows a flow chart of a method according to the present invention when the modification of a 3D virtual object is carried out by a first device;

- Figure 5 shows a flow chart of a method according to the present invention when the modification of a 3D virtual object is carried out by means of two different devices.

Detailed description of some embodiments

With initial reference to Figure 1, the following will describe a system 100 for modifying a 3D virtual object in augmented reality. Such 3D virtual object is made up of a plurality of elements. Each element is associated with at least one 2D image. Each 2D image has a colour and/or a texture.

Note that the 3D virtual object may represent any object; for example, the 3D virtual object may represent a vehicle, a piece of furniture, etc.

Preferably, the 3D virtual object is a three-dimensional model made by means of a set of points in a three-dimensional space. Preferably, the points of such three-dimensional space are connected by means of geometric entities. For example, the points of such three-dimensional space are connected by means of triangles, lines, curved surfaces.

As aforementioned, the 3D virtual object is made up of a plurality of elements. Each element of said 3D virtual object is associated with at least one 2D image.

In particular, each element of the 3D virtual object is associated with a respective two-dimensional mapping. Preferably, such two-dimensional mapping is a UV mapping; each point of a respective UV mapping is associated with a point of a 2D image.

In other words, the three-dimensional surface of each element of the 3D virtual object is "flattened" into two dimensions and associated in a one-to-one manner with at least one respective 2D image.
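The UV-mapping association described above can be illustrated with a minimal, hypothetical sketch (the function names and the list-of-rows texture model are illustrative, not part of the invention): each surface point of an element carries normalized (u, v) coordinates, which index one-to-one into a 2D image.

```python
def uv_to_pixel(u, v, width, height):
    """Map normalized UV coordinates in [0, 1] to integer pixel coordinates."""
    x = min(int(u * width), width - 1)
    y = min(int(v * height), height - 1)
    return x, y

def sample_texture(texture, u, v):
    """Return the texel associated with a point of the UV mapping.

    `texture` is modelled as a list of rows, each row a list of colour values.
    """
    height = len(texture)
    width = len(texture[0])
    x, y = uv_to_pixel(u, v, width, height)
    return texture[y][x]

# 2x2 texture: top row red/green, bottom row blue/white
texture = [["red", "green"],
           ["blue", "white"]]
print(sample_texture(texture, 0.0, 0.0))  # red
print(sample_texture(texture, 0.9, 0.9))  # white
```

In a real renderer this sampling (with interpolation and filtering) is performed by the graphics engine; the sketch only shows the one-to-one correspondence between UV points and image points.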

Preferably, an identifier representing a material is associated with each 2D image.

For example, considering a 3D virtual object representative of a car, the 3D virtual object will be composed of the following elements: body; dashboard; doors; steering wheel; seat; wheels.

Note that the above list of elements making up the virtual car is merely exemplifying and non-limiting.

Note also that the division of a 3D virtual object into a plurality of elements is chosen as a function of the object involved and the desired degree of customization. For example, considering the element representing the "steering wheel", such element may be subdivided into further sub-elements (e.g. rim, spokes, horn, etc.) without departing from the scope of the present invention.

For example, considering a 3D virtual object representative of a car, the following items are associated with the "rim" sub-element of the "steering wheel" element:

- a first 2D image having a first colour and/or a first texture, and an identifier indicating that the material associated with such first 2D image is "leather";

- a second 2D image having a second colour and/or a second texture, and an identifier indicating that the material associated with such second 2D image is "wood".

For example, when the first 2D image is selected, such image will be applied to the rim sub-element of the steering wheel of the 3D virtual object; in particular, the first 2D image will be "wrapped" onto the rim sub-element by means of the UV mapping associated with said rim.
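The element/candidate-image structure of the rim example above could be modelled as follows. This is a hedged sketch with hypothetical names (`Image2D`, `Element`, the file names); the actual data model of the invention is not specified at this level of detail.

```python
from dataclasses import dataclass

@dataclass
class Image2D:
    name: str       # texture identifier (illustrative file names below)
    material: str   # material identifier associated with the 2D image

@dataclass
class Element:
    name: str
    candidates: list        # 2D images selectable for this element
    applied: Image2D = None # image currently wrapped onto the element

# Hypothetical "rim" sub-element of the steering wheel, as in the example:
# one leather candidate and one wood candidate.
rim = Element("rim", [Image2D("rim_leather.png", "leather"),
                      Image2D("rim_wood.png", "wood")])

def apply_image(element, image):
    """Record the selected 2D image on the selected element (the UV wrap
    itself would be performed by the rendering engine)."""
    element.applied = image

apply_image(rim, rim.candidates[0])
print(rim.applied.material)  # leather
```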

Preferably, the system comprises a server 102. The 3D virtual object is stored in the server 102. Even more preferably, the 3D virtual object, each two-dimensional mapping and each 2D image associated with such 3D virtual object are stored in the server 102.

The system 100 comprises a first device 110. The first device 110 is equipped with a display device 111 and a video camera 112 (Figure 1). For example, the first device 110 is a tablet or a smartphone.

Preferably, the first device 110 is configured for accessing the server 102 and requesting data associated with the 3D virtual object.

The first device 110 is configured for: receiving data associated with a 3D virtual object; displaying an image of an environment on said display device 111, the displayed image being acquired in real time by means of the video camera 112; reproducing said 3D virtual object in the displayed image.

For example, with reference to Figure 2, the first device 110 receives data identifying the 3D virtual object to be reproduced in augmented reality (step 201).

Subsequently, at step 202, the first device 110 displays, on the display device 111, an environment - e.g. a garage - acquired by means of the video camera 112.

Subsequently, at step 203, by means of the first device 110, the user positions the 3D virtual object - e.g. a virtual car - in a certain spot within the environment displayed on the display device 111.

Preferably, once the 3D virtual object has been positioned in a spot within the environment displayed on the display device 111, such object will stay fixed. In other words, when changing the position of the video camera and framing a different spot in the environment, the virtual object will not change its position. For example, after reproducing the virtual car in a spot of the garage being framed by the video camera 112, the user will be allowed to move and shift focus.

In particular, according to the present invention, it is possible to display the 3D virtual object from different perspectives by moving the video camera 112 of the first device 110 (step 204). For example, the operator may move closer to the virtual car and display magnified details of the virtual car on the first display device 111, or may move and dynamically shift the point of view of the virtual car.
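The "fixed anchor" behaviour described above can be sketched as follows: once placed, the virtual object keeps a world-space position, and only the camera pose changes, so the object stays put relative to the environment. This is a simplified illustration (rotation is omitted, and the function name is hypothetical); real AR frameworks handle the full camera pose.

```python
def world_to_camera(point, camera_position):
    """Translate a world-space point into camera-relative coordinates
    (rotation omitted for brevity)."""
    return tuple(p - c for p, c in zip(point, camera_position))

anchor = (2.0, 0.0, 5.0)  # where the user placed the virtual car (step 203)

view1 = world_to_camera(anchor, (0.0, 0.0, 0.0))
view2 = world_to_camera(anchor, (1.0, 0.0, 1.0))  # user moves (step 204)

print(view1)  # (2.0, 0.0, 5.0)
print(view2)  # (1.0, 0.0, 4.0): the on-screen view changes, the anchor does not
```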

The first device 110 is further configured for: selecting an element of said 3D virtual object; selecting a 2D image; applying said selected 2D image to said selected element, thereby generating a modified 3D virtual object; displaying said modified 3D virtual object in said image displayed on said first display device 111.

For example, with reference to Figure 3, the first device 110 may, once the 3D virtual object has been reproduced in a spot of the environment being framed by the video camera 112, be activated by a user in order to select an element of said 3D virtual object (step 301); for example, the user may select the steering wheel of the virtual car.

After selecting such element, the first device 110 will display the 2D images associated with that element (step 302); in particular, the first display device 111 will show to the user every 2D image associated with the selected element.

Subsequently, at step 303, the user operates the first device 110 in order to apply one of the 2D images displayed at step 302 to the selected element. At the end of step 303, the first device 110 displays the modified element (step 304).
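Steps 301-304 can be sketched as a small interaction flow. The catalogue contents and function names below are illustrative assumptions, not taken from the patent.

```python
# Hypothetical catalogue: element name -> 2D images associated with it
catalog = {"steering wheel": ["leather_black.png", "wood_walnut.png"]}
applied = {}

def select_element(name):                  # step 301: user selects an element
    return name, catalog[name]             # step 302: its 2D images are shown

def apply_selected(name, images, choice):  # step 303: user applies one image
    applied[name] = images[choice]
    return f"{name} -> {applied[name]}"    # step 304: modified element shown

element, images = select_element("steering wheel")
print(apply_selected(element, images, 1))  # steering wheel -> wood_walnut.png
```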

According to one embodiment, with reference to Figure 4, prior to modifying a virtual object, the first device 110 starts with a first step 401, which prompts the user to enter an identification key for the virtual object.

Once said identification key has been entered, the first device 110 connects, via the network 105, to the server 102 and requests the data associated with the 3D virtual object identified by that identification key.

At the end of step 401, the first device 110 starts a step 402 wherein the first device 110 displays the 3D virtual object in the environment being framed by the video camera 112 as previously described.

When an element of the displayed 3D virtual object needs to be modified, the user can select such element by means of the first device 110 (step 404). For example, if the first device 110 is a tablet, the user can select the element that needs to be modified by tapping it on the first display device 111. Once such element has been selected, the first device 110 starts a step 405 wherein a plurality of 2D images associated with such element are displayed on the first display device 111. By means of the first device 110, the user selects the 2D image to be applied to the selected element (step 406). Optionally, the user may, by means of the first device 110, modify a 2D image shown at step 405 by adding, for example, lines and/or logos and/or graphics and/or writings, thereby generating a new 2D image associated with the element selected at step 404.

At the end of step 406, the first device 110 starts a step 407 wherein the first device 110 is activated in order to use the two-dimensional mapping of the selected element and "wrap" it with the 2D image selected at step 406, thereby generating a modified 3D virtual object.

At the end of step 407, the first device 110 displays the modified 3D virtual object in the environment being framed by the video camera 112 as previously described.
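Step 401's key-based retrieval can be sketched as a simple server lookup. The in-memory dictionary stands in for the server 102, and all identifiers below are illustrative assumptions.

```python
# Stand-in for the server-side store of 3D virtual objects (server 102)
OBJECTS = {"CAR-001": {"elements": ["body", "dashboard", "steering wheel"]}}

def request_object(key):
    """Simulate step 401: fetch the 3D virtual object identified by the
    identification key entered by the user."""
    if key not in OBJECTS:
        raise KeyError(f"unknown identification key: {key}")
    return OBJECTS[key]

data = request_object("CAR-001")
print(data["elements"][0])  # body
```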

Advantageously, when the first device 110 is a tablet, the user can move the first device 110 in order to display the 3D virtual object from different points of view.

For example, considering a 3D virtual object representing a virtual car, the user can:

- display such virtual car inside his/her house;

- move towards/away from an element of the virtual car, in order to display it in greater detail;

- select such element and modify it at will, displaying such modification in real time and immediately seeing the aesthetic result of the modification just made.

According to the present invention, the system 100 further comprises a second device 120. The second device 120 comprises a second display device 121 and is configured for:

- accessing the server 102;

- receiving data associated with a 3D virtual object;

- reproducing the virtual object on the second display device 121.

Preferably, the first device 110 and the second device 120 simultaneously reproduce the same 3D virtual object on their respective display devices 111, 121.

Preferably, the second device 120 is configured for:

- selecting an element of the 3D virtual object;

- selecting a 2D image;

- applying the selected 2D image to the selected element, thereby generating a modified 3D virtual object.

Preferably, such modified 3D virtual object is displayed on the first device 110 and/or on the second device 120, even more preferably in real time.

Preferably, according to one embodiment of the present invention, the first device 110 is further configured for:

- acquiring a photograph by means of said video camera 112;

- supplying said photograph to said second user device 120.

The second device 120 is configured for receiving such photograph and selecting an area thereof, thereby generating a 2D image. Preferably, such generated 2D image is saved into the server 102 and associated with an element of the 3D virtual object.

Preferably, the second user device 120 is further configured for associating material-related data with such generated 2D image.

Preferably, the second device 120 is equipped with a respective video camera 122. Preferably, the second device 120 is configured for acquiring a photograph by means of the respective video camera 122. Once such photograph has been acquired, the second device 120 is configured for selecting an area of such photograph, thereby generating a 2D image. Preferably, such generated 2D image is saved into the server 102 and associated with an element of the 3D virtual object. Preferably, the second user device 120 is configured for associating material-related data with such generated 2D image.
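Generating a 2D image from a selected area of a photograph, as described above, amounts to cropping a rectangular region. A minimal sketch (the photograph is modelled as rows of pixels; the material tag at the end is an illustrative assumption):

```python
def crop_area(photo, top, left, height, width):
    """Return the selected rectangular area of the photograph as a new
    2D image (list of pixel rows)."""
    return [row[left:left + width] for row in photo[top:top + height]]

# 4x4 "photograph" whose pixels record their own (row, column) position
photo = [[(r, c) for c in range(4)] for r in range(4)]

texture = crop_area(photo, 1, 1, 2, 2)
print(texture)  # [[(1, 1), (1, 2)], [(2, 1), (2, 2)]]

# Material-related data could then be attached before saving to the server:
generated = {"pixels": texture, "material": "fabric"}  # illustrative only
```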

According to a further embodiment, both the first device 110 and the second device 120 preferably comprise respective video cameras 112, 122.

Preferably, the first device 110 is configured for:

• acquiring a respective photograph by means of the video camera 112;

• selecting an area of the respective photograph, thereby generating a 2D image.

Preferably, the 2D image generated by the first user device 110 is saved into the server 102 and associated with an element of the 3D virtual object. Preferably, material-related data are associated with each 2D image.

Preferably, the second device 120 is configured for:

• acquiring a respective photograph by means of the video camera 122;

• selecting an area of the respective photograph, thereby generating a 2D image.

Preferably, the 2D image generated by the second user device 120 is saved into the server 102 and associated with an element of the 3D virtual object. Preferably, material-related data are associated with each 2D image.

By way of example, with reference to Figure 5, the system 100 may carry out the following steps in order to make a modification to a 3D virtual object:

- step 501: activating both the first device 110 and the second device 120 to request a project code identifying a 3D virtual object. Once the project code has been entered, the first device 110 and the second device 120 connect (e.g. via the network 105) to the server 102 and request the data associated with the 3D virtual object identified by that project code;

- step 502: the first device 110 displays the 3D virtual object in the environment being framed by the video camera 112 as previously described, and the second device 120 displays such 3D virtual object on the respective display device 121;

- step 504: acquiring a photograph by means of the video camera 112 of the first device 110 (or the video camera 122 of the second device 120). For example, the first device 110 stops reproducing the 3D virtual object in augmented reality and permits the acquisition of a photograph by means of the video camera 112. Preferably, such photograph is sent to the second device 120;

- step 505: processing the photograph acquired at step 504 to select an area having a colour and/or a texture of interest, thereby generating a 2D image of such area. Note that step 505 may be carried out by both the first device 110 and the second device 120;

- step 506: selecting an element of the 3D virtual object with which the 2D image generated at step 505 should be associated. Re-processing such 2D image, associating it with the selected element by means of the respective UV mapping;

- step 507: applying the re-processed 2D image to the element selected at step 506, thereby generating a modified 3D virtual object;

- step 508: displaying the modified 3D virtual object on the display device 111 of the first device 110 and/or on the display device 121 of the second device 120.
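The two-device flow above hinges on both devices reading the same server-side project state, so a modification made by one is visible to the other. A hedged sketch (the `Device` class, project code, and element names are illustrative; a real implementation would use network requests to the server 102):

```python
# Stand-in for shared server-side 3D-object data (server 102)
server = {"CAR-001": {"rim": "leather.png"}}

class Device:
    def __init__(self, project_code):
        # step 501: both devices load the same project by code,
        # so they share one server-side state object
        self.project = server[project_code]

    def apply_image(self, element, image):
        # steps 506-507: associate and apply a 2D image to an element
        self.project[element] = image

    def view(self, element):
        # step 508: display the (possibly modified) element
        return self.project[element]

buyer = Device("CAR-001")     # first device (AR view)
operator = Device("CAR-001")  # second device (configurator)

operator.apply_image("rim", "wood.png")
print(buyer.view("rim"))  # wood.png: the modification is visible on both devices
```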

The present invention achieves important advantages.

Advantageously, it is possible to modify a 3D virtual object in a simple and straightforward manner by only acting, on the user's side, upon 2D images associated with the 3D virtual object.

Advantageously, the system 100 according to the present invention can be used as a remote object configurator. For example, the first device 110 may be used by a potential buyer and the second device 120 may be used by an operator, who, while working simultaneously on the same 3D virtual object, will be able to configure such 3D virtual object according to the customer's requirements without needing special 3D graphics skills.