Title:
USER-CONTROLLED 3D SIMULATION FOR PROVIDING REALISTIC AND ENHANCED DIGITAL OBJECT VIEWING AND INTERACTION EXPERIENCE
Document Type and Number:
WIPO Patent Application WO/2014/006642
Kind Code:
A2
Abstract:
A method, technology and system of user-controlled realistic 3D simulation and interaction are disclosed for providing a realistic and enhanced digital object viewing and interaction experience with improved three dimensional (3D) visualisation effects. A solution is provided to make available 3D-model/s carrying similar properties to the real object, where user-controlled realistic interactions selected from extrusive interaction, intrusive interactions, time-bound changes based interaction and real environment mapping based interactions are made possible as per user choice.

Inventors:
VATS GAURAV (IN)
VATS NITIN (IN)
Application Number:
PCT/IN2013/000448
Publication Date:
January 09, 2014
Filing Date:
July 18, 2013
Assignee:
VATS GAURAV (IN)
VATS NITIN (IN)
International Classes:
H04N5/66
Foreign References:
US20080012863A12008-01-17
US20110213680A12011-09-01
US20080059138A12008-03-06
US20050253840A12005-11-17
US6230116B12001-05-08
US20100138898A12010-06-03
US20100284607A12010-11-11
US20110199376A12011-08-18
Other References:
See references of EP 2875491A4
Attorney, Agent or Firm:
VATS, Gaurav (J23 Nehru Nagar, Street No. 5, Garh Road, Meerut, Uttar Pradesh 1, IN)
Claims:
CLAIMS:

1. A method of user-controlled realistic 3D simulation for enhanced object viewing and interaction experience, the method comprising: receiving request by at least one input mode for display of an object (1101); displaying image of the said object or object containing consolidated-view category (1102); receiving second request by at least one input mode for display of 3D-model of the said object (1103); loading and simulating 3D-model of the said object in real-time, wherein a virtual operating sub-system is optionally installed in the loaded 3D-model based on characteristics, state and nature of the said object, and where loading and simulating 3D-model of the said object in real-time comprises: i. using image associated data of said object, and auto-linking with real object associated data, polygon data and texturing data of the said object in a simulative manner;

ii. transforming the linked polygon data, texturing data, image associated data and real object associated data into 3D-model of the said object (1104); displaying 3D-model of the said object in 3D-computer graphic environment, where the displayed 3D-model of the said object comprises at least one realistic 3D-view (1105); making available user-controlled realistic interactions with the displayed 3D-model to a user, where the user-controlled realistic interactions include extrusive interaction (fig.4) and/or intrusive interactions (1e-1h, 3b-3f, 5b-5e, 6a-6c, 9a-9c, 10a-10c, fig.11, fig.13, fig.14) and/or time bound changes based interaction (fig.7), and/or real environment mapping based interactions (fig.15, fig.16) as per user choice and as per characteristics, state and nature of the said object (1106); performing user-controlled realistic interactions with the displayed 3D-model and simultaneous displaying in at least one realistic 3D-view with at least one input (1107).

2. The method as in claim 1, wherein loading of 3D model of the said object is routed either directly, or through a consolidated-view category (fig.17, fig.18, fig.19) or via live telecast category (fig.20) as per user choice or in pre-determined manner, where the consolidated-view category emulates real showroom view containing real products, and is selected from an interactive video view category or an interactive panoramic view category, and where consolidated-view category and live telecast category are designed such that dynamic customization of texturing pattern of 3D-model is carried out during loading of the 3D-model, and where during interacting in panoramic view, a virtual assistant (1901, 1901') remains intact in same position over the panoramic view while panoramic image or panoramic model moves in interactive and synchronized manner.

3. The method as in claim 1, wherein realistic 3D-view is first realistic 3D-view, a pressure view for judgment of pressure required to operate the said displayed object, a taste view to judge sense of taste, a temperature view for judging heat generated during operation of the said displayed object after certain time intervals, a touch view for judging the sense of softness touch when applied on the displayed object, where the said first realistic 3D-view is preferably displayed initially, and where the pressure view, the taste view, the temperature view, the touch view are available, and displayed on request as per characteristics, state and nature of displayed object, and where properties of heating, cooling, softness, hardness and pressure applied to open or operate a movable sub-part of multi-part 3D-model is represented by texturing the 3D-model in different color, where different pressure, temperature, softness or hardness is distinguished at different sub-parts of said 3D-model or entire 3D-model in different colors.

4. The method as in claim 1, wherein input mode is selected from placing a search query to search said object; through a pointing device such as mouse; via a keyboard; a gesture guided input of hand or eye movement captured by a sensor of a system; a touch input; a command to a virtual assistant sub-system, where command to the said virtual assistant system can be a voice command or via chat.

5. The method as in claim 1, where extrusive interaction includes rotating 3D-model of object in 360 degree in different planes, lighting effect for light-emitting parts of 3D-model of object, interacting with 3D-models having electronic display parts for understanding electronic display functioning, sound effects, and displaying said object as per input, in real-time with precision, where polygons along with associated texture of said 3D-model moves as per user command, and movement of simulated 3D-model or its parts is achieved and displayed in real time and with precision based on user input commands, intrusive interaction includes viewing and interacting with internal parts, opening and closing of sub-parts of said 3D-model, where the simulated 3D-model is multi-part object, disintegrating parts of the simulated 3D-model one by one to interact with interior and individual parts of the said 3D-model, interacting for exploded view, and where polygons along with associated texture of said 3D-model moves as per user command, and movement of 3D-model or its parts is achieved and displayed in real time and with precision based on user input commands as per characteristics, state and nature of displayed object, and where the time bound changes based interactions comprises monitoring or visualizing time-bound changes observed on using or operating the said 3D-model of object, where object behavior can be ascertained after a desired duration, and where in real environment mapping based interactions, area in vicinity of user is captured, mapped and simulated in real-time such that simulated 3D-model or virtual object displayed on electronic screen of user can be interacted with the mapped and simulated environment, where environment mapping based interactions also include mirror effect.

6. A system of user-controlled realistic 3D simulation for enhanced object viewing and interaction experience comprising:

a graphical user interface (GUI) (2302) connected to a central search component (2301) configured for accepting user inputs;

a consolidated view displayer (2303) for displaying 3D graphics environment, containing one or more 3D-models in an organized manner using a 3D consolidated view generating engine (2306);

a 3D-model displayer (2305) for displaying 3D-model of an object simulated using a 3D-model generating engine (2307), where the 3D-model displayer comprises at least one display space for displaying the virtual interactive 3D- model;

a virtual operating sub-system (2308) for providing functionality of operation of displayed 3D-model, where the virtual operating sub-system is installed during loading of said 3D-model as per characteristics, state and nature of displayed object, where the virtual operating sub-system is in direct connection to the 3D-model displayer and the 3D objects generating engine;

optionally a virtual assistant sub-system (2304) as one input mode for two way communication, where the virtual assistant sub-system comprises another graphical user interface, a natural language processing component for processing of user input in form of words or sentences and providing output as per the received input, where the natural language processing component is integrated to a central database (2309);

optionally a live telecast displayer (2312) for displaying live telecast of a place containing plurality of objects, where a dynamic link is built over each identified object, where each dynamic link invokes the 3D-model displayer for displaying 3D-model of the said identified object;

optionally a camera (2311) for capturing video for background mapping based interaction, where the video captured from the camera is layered beneath the 3D-model displayer;

where the GUI is in direct connection with the consolidated view displayer, the virtual assistant sub-system, the 3D-model displayer, and the central database in addition to the central search component, and where the 3D-model displayer and the 3D objects generating engine are in direct connection to each other, and are also connected to the virtual operating sub-system, and where the central database is also connected to a central data storage (2310) for storing at least image associated data.

7. The system as in claim 6, wherein the 3D-model displayer is an interactive platform for carrying out extrusive interaction and/or intrusive interactions and/or time bound changes based interaction and/or real environment mapping based interactions as per user choice and as per characteristics, state and nature of the said object.

8. The system as in claim 6, wherein the 3D objects generating engine uses image associated data, real object associated data, polygon data and texturing data of the said object for simulating said 3D-model, where the 3D-model comprises plurality of polygons.

9. The system as in claim 6, wherein enhanced object viewing and interaction experience can be provided over a web-page via hypertext transfer protocol in a wearable or non-wearable display, or as offline content in stand-alone devices or systems.

10. The system as in claim 6, wherein virtual assistant sub-system further includes a microphone for receiving voice command, and sound output device, where interaction with the said virtual assistant system is two-way communication emulating a user interacting with real person to gain object related information and receiving precise replies, either in the form of text output or sound output, as per the query asked in real time.

11. The system as in claim 6, wherein the link built over each identified object in live telecast displayer is a dynamic link built in real time during live video telecast of a remote place or a link built with a lag time.

12. The system as in claim 6, wherein real object associated data, polygon data and texturing data of the said object is stored in the central database or partially in the central data storage.

Description:
USER-CONTROLLED 3D SIMULATION FOR PROVIDING REALISTIC AND ENHANCED DIGITAL OBJECT VIEWING AND INTERACTION EXPERIENCE

FIELD OF INVENTION

The present invention relates to the field of virtual reality, particularly user-controlled realistic 3D simulation and interaction technology for providing a realistic and enhanced digital object viewing and interaction experience with improved three dimensional (3D) visualisation effects. The applications of the user-controlled realistic 3D simulation and interaction technology include online shopping, by providing an enhanced digital object viewing and interaction experience, as well as collaboration and object demonstration, e-learning, media, the entertainment and content industry, and the computing, mechanical and communication industries.

BACKGROUND OF THE INVENTION

There is an increasing trend in the use of three dimensional (3D) viewing in various industries, such as entertainment, mechanical engineering design views, online shopping sites, and offline product advertisement panels. There are many web-based shopping markets, websites or store fronts which show images, or in some cases a short video, of objects or products. The images are static and in some cases can only be enlarged or zoomed to get a clearer picture. In some other cases video of the product is captured, but this makes the loading, and ultimately the viewing, slow, and further the user gets to see only whatever was captured, mostly either by streaming or through a media player, in two dimensional projections or partly in three dimensions. The images and written information displayed provide limited information about the desired object. Limited information here means information that is written, displayed and related to the object, and which is available for viewing by the end user. This is a passive way of information transfer. In conventional systems, web based portals or sites, and online shopping portals, the user cannot interact with the product to the extent possible when a user or customer physically visits a shop, for example, viewing the product at all possible angles, checking functionalities, asking any type of desired query about the product, or interacting with the product to see its interior or exterior just like in a real scenario. That is an active way of information transfer.

US7680694B2, US8069095B2, US8326704B2, US20130066751A1, US20120036040A1, US20100185514A1, US20070179867A1 and US20020002511A1 discuss solutions for 3D viewing, and some forms of interaction related to online shopping, shopping locations, and stores. These are limited to displaying the virtual shopping location on a user computer by streaming a 3D interactive simulation view via a web browser. However, this doesn't provide for generating a 3D model which has real object properties in the true sense, capable of user-controlled simulation and interactions not restricted or limited to pre-set or pre-determined interactions. Conventional systems, methods and techniques lack the ability to generate a 3D-model carrying properties of real objects such as appearance, shape, dimensions, texture, fitting of internal parts, mirror effect, object surface properties of touch, smoothness, light properties and other aspects of the nature, characteristics and state of a real object, where user-controlled realistic interactions such as viewing rotation through 360 degrees in all planes, non-restrictive intrusive interactions, time-bound changes based interaction and real environment mapping based interactions as per the characteristics, state and nature of the said object are lacking. Patents US7680694B2, US8326704B2 and WO 01/11511 A1 also discuss a concierge or an animated figure or avatars or a sales assistant, capable of offering information about products or graphics to customers, remembering customer buying behaviour and product choices, and offering tips and promotional offers. These types of interactions are limited to pre-defined sets of offers and information about products. The input query is structured and generally matched against a database to find and retrieve answers. However, there still exists a gap in bringing out real-time intelligent human-like interaction between the said animated figure and a real human user. There is no mention of facial expressions, hand movements and precision, which are prime criteria for receiving a response from the animated figure or concierge which is human-like and as per the query of the real human user. For active communication, a natural interface such as understanding of a language such as English is necessary. Such technology to decipher the meaning of language during a text chat by a virtual assistant or intelligent system and provide a user-query-specific response is a costly endeavour and still a problem to be solved.

A JP patent with Application Number 2000129043 (Publication Number 2001312633) discusses a system which simply shows texture information and touch sense information in the form of a write-up, in addition to still picture information or a photographic image, an explanatory sentence, video, and only three-dimensional information which the user has to read. This and other patents, US6070149A, WO0169364A3, WO 02/48967 A1, US5737533A, US7720276B1, US7353188B2, US6912293B1, US20090315916A1 and US20050253840A1, discuss 3D viewing and simulation, and virtual or online shopping experiences. However, they lack one or more of the points and technologies given below. Further, most existing technology of 3D simulation for providing a digital object viewing and interaction experience, in addition to the above, also lacks one or more of the following:

1. The existing simulated 3D-models are hollow models, meaning such models don't allow intrusive interactions such as seeing an exploded view of the parts of a simulated 3D-model of an object in real-time, or opening the parts of the 3D-model of the object one by one as a person could have done in a real scenario. For example, in a conventional virtual reality set-up, a user cannot open the compressor of a refrigerator from a virtual 3D-model of the refrigerator, open or perform interactions with a sub-part of the simulated 3D-model such as the battery and other internal parts removed from a 3D-model of a mobile for interactions and realistic viewing, rotate the tyres of a car, move the steering wheel to judge the movement and power steering, or examine the internal parts or interior build of a simulated 3D-model of a mobile in real time. In some conventional cases, limited options are provided, on clicking of which an internal part of an object is visible in a photographic or panoramic view, but one cannot do further analysis of internal parts beyond the provided options. Another example is a 3D-view of a bottle filled with oil or any liquid, where only a 3D-simulated view can be displayed in conventional systems, but a user cannot open the cork of the bottle, or pour the liquid from the bottle in an interactive manner as per his desire, which is possible in a real scenario. In other words, user-controlled interaction is not feasible as per user choice.

2. They don't allow realistic extrusive interaction such as rotating the 3D-model of object/s through 360 degrees in different planes with the ability to interact from any projected angle. Mostly, only 360 degree rotation in one plane is allowed in existing technologies. Further, current 3D-simulation technology fails to give a realistic 3D-simulation effect or 3D visualization effect, a lighting effect for light-emitting parts of the 3D-model of an object, interaction with 3D-models having electronic display parts for understanding electronic display functioning, and sound effects of the object, such that the illusion of real objects created in virtual views is not very precise.

3. Another gap in originality and closeness to the real set-up is operating pressure, and judging the sense of taste and the sense of touch. For example, a user opening a refrigerator holds the handle, and applies pressure to open the refrigerator door. Existing virtual 3D-simulated models of objects and technology cannot judge the smoothness or softness of the handle and the operating pressure or force required to open the refrigerator door.

4. Monitoring or visualizing time-bound changes observed on using or operating an object is not possible. The user cannot check product or object behaviour after a desired duration, for example checking the heating of an iron, the cooling in refrigerators, or the cooling generated by air conditioners in a room. Further, the user cannot hear the sound when a refrigerator door is opened from a simulated 3D-model of the object, which would mimic the real sound produced when opening the door of a real refrigerator in a real setup. Further, the change in sound after certain intervals of time cannot be heard or monitored to experience the product performance, or to compare it with another product.

5. Further, in a real scenario a user can switch on a laptop, computer, iPad, mobile or any computing device, check the start-up time and the speed of loading of the operating system, play music, etc. Such interactions are lacking in real time for various virtual 3D-models, and the user's choice is limited to observing only the outer looks of the object, such as a laptop.

6. Real environment mapping based interactions are interactions where the user environment, that is, the place or location in the vicinity of the user, is captured through a camera, mapped and simulated in real-time such that a realistic 3D-model or virtual object displayed on an electronic screen can be seen interacting with the mapped and simulated environment. Such real-time interactions, including the mirror effect, are lacking in current technologies.

7. The existing technology doesn't allow dynamic customization of the texturing pattern of a 3D-model during loading of the 3D-model. Such real-time and enhanced interactions are lacking in current virtual reality related technologies. The above constraints in currently available technology/technologies make it very difficult for a human user to interact with things virtually in the way that he/she can interact in the real world, and hence there is a need for a technology that enhances the digital object viewing and interaction experience, and bridges the gap between the real and virtual worlds in a true sense.

SUMMARY OF THE INVENTION

It is an object of the invention to provide a system of user-controlled realistic 3D simulation for an enhanced object viewing and interaction experience, capable of displaying real products virtually as interactive and realistic 3D-models. The user-controlled realistic 3D simulation and interaction technology of the said system, comprising a 3D-model displayer with a virtual operating sub-system, is useful to see digital objects in a three dimensional view from all angles as in the real world, and simultaneously also operate the simulated 3D-model of the object in a realistic manner producing a realistic 3D visualisation effect over an electronic display.

Another object of the invention is to provide a method of user-controlled realistic 3D simulation for providing a realistic and enhanced digital object viewing and interaction experience using the said system of user-controlled realistic 3D simulation. A solution is provided to make available a 3D-model carrying similar properties such as appearance, shape, dimensions, texture, fitting of internal parts, object surface properties of touch, smoothness, and other aspects of the nature, characteristics and state of the real object, where user-controlled realistic interactions selected from extrusive interaction, intrusive interactions, time-bound changes based interaction and real environment mapping based interactions are made possible as per user choice in real-time and as per the characteristics, state and nature of the said object. The user-controlled realistic 3D simulation and interaction technology allows for dynamic customization of the texturing pattern of a 3D-model during loading of the 3D-model, thereby providing selective loading ability to the 3D-model and making efficient use of memory. This optimizes the loading time, such that there is no or minimal visible impact on the viewing of the 3D-model of the object even if data is transmitted over a web-page via hypertext transfer protocol (HTTP). A further object of the invention is to make possible the building of dynamic interactive points in real-time capable of displaying virtual 3D-objects in a live video from a live telecast of a place having a plurality of real objects. A further object of the invention is to provide a virtual operating sub-system for providing functionality of operation of the displayed 3D-model, where the virtual operating sub-system is installed during loading of the said 3D-model as per the characteristics, state and nature of the displayed object.

BRIEF DESCRIPTION OF THE DRAWINGS

Fig. 1 shows a flowchart illustrating the method of user-controlled realistic simulation and interaction for enhanced object viewing and interaction experience according to invention;

Fig. 2 and Fig. 3 show different schematic and perspective views of 3D-model of mobile depicting extrusive and intrusive interactions according to a preferred embodiment of invention; Fig. 4 shows different perspective views of 3D-model of a multi-part object such as refrigerator depicting extrusive interaction in 360 degree in more than one plane according to a preferred embodiment of invention;

Fig. 5 shows perspective views of 3D-model of refrigerator depicting another example of intrusive interaction according to a preferred embodiment of invention;

Fig. 6 shows different schematic views of 3D-model of a laptop showing intrusive interaction using a virtual operating sub-system according to invention;

Fig. 7 shows schematically a temperature view of simulated 3D-model of iron, depicting heating of iron lower surface at different time intervals as per time-bound changes based interactions according to an embodiment of invention;

Fig. 8 shows different perspective views of a realistic 3D-simulation of a chair with its touch view for judging softness of seat and back cushion in an intrusive interaction according to invention;

Fig. 9 shows in a schematic view virtual simulation of 3D-model of a liquor bottle in a taste view according to an embodiment of invention;

Fig. 10 shows schematic view of different frames of a continuous user-controlled 3D simulation and interaction with a 3D-model of a toothpaste tube showing paste coming out of the tube in intrusive interaction according to invention;

Fig. 11 shows perspective views of 3D-model of a bike depicting an example of intrusive interactions as per user choice according to invention;

Fig. 12 shows perspective and partial enlarged views of 3D-model of the bike of fig. 11 depicting operating pressure view as an example of intrusive interactions according to invention;

Fig. 13 shows further intrusive interactions for the 3D-model of the bike of fig. 11, where some parts of the 3D-model have been disintegrated as per user choice according to invention;

Fig 14 shows perspective views of 3D-model of a car showing another form of intrusive interactions according to a preferred embodiment of the invention; Fig 15 shows schematic perspective views of environment mapping based interactions according to a preferred embodiment of invention.

Fig 16 shows mirror effect as another form of environment mapping based interactions according to a preferred embodiment of invention. Fig 17 shows different schematic and perspective views of interactive video of 3D-graphics environment model of interior of a refrigerator showroom in a consolidated view category according to an embodiment of invention;

Fig.18 shows a perspective representation of a panoramic view of a 3D-graphics environment model of interior of a refrigerator showroom containing 3D-models of different refrigerators in a consolidated view category according to a preferred embodiment of invention;

Fig.19 shows a perspective representation of a panoramic view of a 3D-graphics environment model of interior of a refrigerator showroom containing 3D-models of different refrigerators in a consolidated view category with a virtual assistant sub-system according to a preferred embodiment of invention; Fig. 20 shows schematic and perspective representation of a live telecast of a remote physical shop, where change in object is recognised and dynamic links are built in real-time for display of 3D-models according to an embodiment of invention;

Fig. 21 shows perspective views of a mechanical engineering design of a 3D-model of a lathe machine for remote demonstration according to an embodiment of invention; Fig 22 shows another flowchart illustrating the method of user-controlled realistic simulation and interaction for enhanced object viewing and interaction experience according to invention;

Fig 23 shows a system of user-controlled realistic simulation and interaction for enhanced object viewing and interaction experience according to invention;

DETAILED DESCRIPTION

Fig. 1 shows a flowchart illustrating the method of user-controlled realistic simulation and interaction for enhanced object viewing and interaction experience. Step 1101 involves receiving a request by any one input mode for display of an object. In step 1102, an image of the said object or an object-containing consolidated-view category is displayed. In step 1103, a second request is received by any one input mode for display of the 3D-model of the said object, which is followed by loading and simulating of the 3D-model of the said object in real-time (step 1104). A virtual operating sub-system may be installed in the loaded 3D-model based on the characteristics, state and nature of the said object. For example, if the requested object is a computer, laptop, smart phone or any computing device, a virtual operating system is also loaded and installed within the loaded 3D-model, such as within the simulated 3D-model of a laptop, based on product or brand characteristics: if a Windows-version operating system was present in the real product specification, a virtual operating system pertaining to the said Windows-version style operating system will load accordingly in real time and as per the state and nature of the desired object. The characteristics, state and nature of the displayed object mean that the loaded object is displayed, and the interactions made available, as per its real characteristics and nature in reality. The characteristics, state and nature of the object include the real object properties such as single-part object, multi-part object, digital or communication devices such as laptops, smart phones and computers, solid, liquid, semi-solid or gaseous object state properties, or operation status such as the object being in an opened or closed state, etc. The nature of the object means the expected behaviour and purpose of the object. One cannot expect in a real setup to disintegrate a single-part object or judge the taste view of a car. For example, if the desired object is an iron, testing its heating property is justified, and not its coldness, as for this object the expected behaviour and purpose is producing heat for pressing clothes. The step of generation of the 3D-model of the said object involves: a) using image associated data of the said object, and auto-linking with real object associated data such as characteristics, state and nature of the said object, polygon data and texturing data of the said object in a simulative manner; and b) transforming the linked polygon data, texturing data, image associated data and real object associated data into the 3D-model of the said object. In step 1105, displaying the 3D-model of the said object in a 3D-computer graphic environment is carried out, where the displayed 3D-model of the said object comprises at least one realistic 3D-view. The realistic 3D-view is a first realistic 3D-view, a pressure view for judgment of the pressure required to operate the said displayed object, a taste view to judge the perception of the sense of taste, a temperature view for judging heat generated during operation of the said displayed object after certain time intervals, or a touch view for judging the sense of softness of touch when applied on the displayed object. The first realistic 3D-view is displayed by default. The pressure view, the taste view, the temperature view and the touch view are available, and displayed on request, as per the characteristics, state and nature of the displayed object. The pressure view is for solid objects which can be operated, e.g. a refrigerator, gasoline generator or hand pump.
The taste view is available for food items, emulating a real-life scenario. The taste view helps in judging the taste of an object and comparing the taste with other objects, showing the extent of bitterness, sweetness, sourness, saltiness, umami taste or as per the food in question. The temperature view helps to see the temperature change for objects that deal with temperature in a real set-up, e.g. refrigerators, air conditioners, irons, or any electronic devices, as they generate heat after prolonged operation in a real set-up. The touch view helps in ascertaining softness and smoothness through colour representations, making another parameter of judgment available for comparison. The properties of heating, cooling, softness, hardness and pressure applied to open or operate a movable sub-part of a multi-part 3D-model are represented by texturing the 3D-model in different colours, where different pressure, temperature, softness or hardness is distinguished at different sub-parts of the said 3D-model, or the entire 3D-model, in different colours. In step 1106, user-controlled realistic interactions with the displayed 3D-model are made available to the user. The user-controlled realistic interactions include extrusive interaction and/or intrusive interactions and/or time bound changes based interaction and/or real environment mapping based interactions as per user choice and as per the characteristics, state and nature of the said object. The extrusive interaction is interaction possible from the exterior of any real object. The extrusive interaction with the 3D-model emulates a real-life scenario with regard to viewing or examining the object. On receiving input for viewing the object at different angles, as per user choice, the 3D-model of the object/s is rotated through 360 degrees in different planes. The said object is displayed as per the received input. In extrusive interactions, simulating parts of the 3D-model of a multi-part object/s is made possible as per user choice. The simulation is displayed such that viewing, examining and testing object functionalities or product features is made possible in real-time with precision, where the polygons along with the associated texture of the said 3D-model move as per user command, and the movement of the 3D-model or its parts is achieved and displayed in real time and with precision based on user input commands. The intrusive interaction includes viewing and examining internal parts, and disintegrating parts of the object in real-time one by one to examine the interior and individual parts of the said object. The polygons along with the associated texture of the said 3D-model move as per user command, and the movement of the 3D-model or its parts is achieved and displayed in real time, and with precision, based on user input commands as per the characteristics, state and nature of the displayed object. The time bound changes based interactions comprise monitoring or visualizing time-bound changes observed on using or operating an object. The user can check product or object behaviour after a desired duration. For example, checking the heating of an iron, the cooling in refrigerators, or the cooling generated by air conditioners in a room is possible. Further, the user can hear the sound when a refrigerator door is opened from a virtual simulation of the 3D-model of the object, which mimics the real sound produced when opening the door of a real refrigerator in a real setup.
Further, the change in sound after certain intervals of time can be heard or monitored to experience the product performance, or to compare it with another product. The pressure view, the taste view, the temperature view and the touch view interactions are also included in the time bound interactions. The real environment mapping based interactions comprise interactions where the user environment, that is, the place or location in the vicinity of the user, is captured through a camera, mapped and simulated in real-time such that a realistic 3D-model or virtual object displayed on the electronic screen of the user can be seen interacting with the mapped and simulated environment. In step 1107, the user performs user-controlled realistic interactions with the displayed 3D-model by providing at least one input, where the performed interactions are displayed in at least one realistic 3D-view.

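As an illustration of the loading step of Fig. 1 described above, the following TypeScript sketch links image associated data with real object associated data, polygon data and texturing data, and installs a virtual operating sub-system only for computing devices. All type and function names here are illustrative assumptions, not part of the disclosed implementation.

```typescript
// Minimal sketch of the claim-1 loading step: image-associated data is
// auto-linked with real-object data, polygon data and texturing data, and the
// linked record is transformed into a displayable 3D-model. Names are assumed.

interface RealObjectData {
  kind: "single-part" | "multi-part";
  isComputingDevice: boolean;          // drives the optional virtual OS install
  state: Record<string, unknown>;      // e.g. { door: "closed" }
}

interface ObjectRecord {
  imageData: string;                   // identifier resolved from the image request
  realObject: RealObjectData;
  polygons: Float32Array;              // vertex data
  textures: string[];                  // texture references
}

interface Model3D {
  record: ObjectRecord;
  virtualOS?: { boot(): void };
}

function loadAndSimulate(record: ObjectRecord): Model3D {
  const model: Model3D = { record };
  // Install a virtual operating sub-system only when the object's
  // characteristics call for it (laptop, smart phone, other computing device).
  if (record.realObject.isComputingDevice) {
    model.virtualOS = { boot: () => console.log("virtual OS booting...") };
  }
  return model;
}

// Usage: a refrigerator gets no virtual OS, a laptop gets one installed.
const fridge = loadAndSimulate({
  imageData: "fridge-01",
  realObject: { kind: "multi-part", isComputingDevice: false, state: { door: "closed" } },
  polygons: new Float32Array(0),
  textures: ["fridge-body.png"],
});
const laptop = loadAndSimulate({
  imageData: "laptop-01",
  realObject: { kind: "multi-part", isComputingDevice: true, state: {} },
  polygons: new Float32Array(0),
  textures: ["laptop-lid.png"],
});
laptop.virtualOS?.boot();
console.log(fridge.virtualOS === undefined); // true
```
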
Fig. 2 and Fig. 3 show 3D-models of a mobile in various extrusive and intrusive interactions. Fig. 2a-2d shows rotation of the 3D-model of the mobile in more than one plane as per user choice and in real-time. The 3D-model of the mobile can be rotated by the user at any angle through its 360 degree course and returned to its original position. Users desiring to check battery size and internal components can perform intrusive interactions such as opening a back cover (201), and further taking out the mobile battery (202, 202) in real-time to see the dual SIM layout in the said 3D-model. Further, if the user desires to check and inquire about other internal components (203) of the mobile, the user can open the mobile and check the 3D-model of the sub-part as shown in fig. 2h, or ask a virtual assistant sub-system to gain active product information. Figure 3 shows the user interacting with the 3D-model of the mobile, where the user not only views the mobile but is also able to interact intrusively by sliding the mobile to check the numeric keypad (302) and pressing number keys, where the numbers pressed (303) are reflected in real time on the mobile screen (302) of the 3D-model of the said mobile. The user can check all functions in the virtual 3D-space of a 3D-model displayer (2305 of fig. 23). The said user can interact with the 3D-model of the mobile simulated just like a real setup, such as opening messages, seeing the contact list, pressing buttons, or using the camera virtually, very similar to what we do with a real mobile. In fig. 3d, on pressing contact (304), a contact page (304') is displayed in an interactive manner using a virtual operating sub-system (2308 of fig. 23) of a system. Similarly, by providing an input (305) desiring to interact with and view operation of the touch numeric keypad, an on-screen numeric keypad (305') is displayed in the said 3D-model mimicking the functionalities of the real mobile which would have been operated in a physical set-up. The interactions displayed are not the only interactions possible. The user can do numerous interactions as per his desire, matching the interactions possible when holding a mobile in hand. The user can further see the exploded view of the mobile parts, or disintegrate parts one by one such as taking out the SIM slot, opening the front cover, judging the smoothness of the mobile body, or switching it on to judge the start time and processing speed, and operate the mobile to check functionalities etc., mimicking the real set-up. Other extrusive interactions can be a lighting effect for light-emitting parts of the 3D-model of the object, interacting with 3D-models having electronic display parts for understanding electronic display functioning, and sound effects emulating a real scenario. Fig. 4 shows different perspective views of the 3D-model of a refrigerator, where the extrusive interaction of rotation is performed. In Fig. 4, realistic rotations at various angles are shown to be carried out with the help of a pointing device such as mouse cursor movement. All rotations through 360 degrees in all planes are possible using any conventional input devices such as a keyboard or pointing device. The virtual assistant sub-system can also be used as an input mode for requesting in the form of a voice command or chat in natural language such as English. Further, in Fig.
5a-5e, perspective views of the same 3D-model of the refrigerator are shown as another example of intrusive interaction, where when the user selects the doors (501, 502) of the displayed 3D-model and provides input such as pulling the doors, the user gets a view of the opening of the doors (501', 502') in a continuous movement simulation, such as in animation (5b), emulating a real scenario. Further, if the user desires to further investigate the lower portion of the refrigerator, the user can open the lower drawer (503) on the 3D-model on which the first 3D-model simulation (5b) of door opening was already requested. A real-time 3D-simulation of the opened drawer (503') is generated and presented before the user as seen in 5c of figure 5. The user can upload his own photograph to generate a virtual simulation of himself (504, 504') representing himself. The simulated human 3D-model can walk as per his desire to a showroom or directly visit a product. The simulated human 3D-model can not only walk and experience a different virtual world but can see himself operating the product. Here, the human 3D-model is shown walking to the 3D-model of the refrigerator, to open the door (501") of the displayed 3D-model of the refrigerator himself (5e). The lights in the refrigerator will also turn on on opening the door, and cooling can also be experienced virtually, mimicking the real-life set-up. The interactions displayed are not the only interactions possible. Even the pressure applied to open the door by the user can be judged using another view, the operating pressure view. This view calculates the pressure and displays it using available standard metrics for energy or pressure. This can be compared with other objects so as to be better informed when deciding on a product, emulating a real scenario in a real set-up. The sound of the refrigerator door opening, if any as per the real product, can be heard through conventional sound devices such as a speaker connected with the system where the display of the 3D simulation is carried out.

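The intrusive interactions of Figs. 2-5 (opening a door or back cover as a continuous movement, disintegrating removable sub-parts one by one) can be pictured with the minimal sketch below. The class, part names and the simple angle stepping are assumptions for illustration only, not the disclosed implementation.

```typescript
// Illustrative sketch of intrusive interaction on a multi-part 3D-model: the
// user selects a sub-part (door, back cover, battery) and that part's geometry
// is animated or detached independently of the rest of the model.

interface SubPart {
  name: string;
  openAngle: number;     // degrees, 0 = closed
  removable: boolean;
}

class MultiPartModel {
  constructor(public name: string, public parts: SubPart[]) {}

  // Continuous movement simulation: step the selected part toward a target angle.
  open(partName: string, targetAngle: number, step = 15): void {
    const part = this.parts.find(p => p.name === partName);
    if (!part) throw new Error(`no sub-part named ${partName}`);
    while (part.openAngle < targetAngle) {
      part.openAngle = Math.min(part.openAngle + step, targetAngle);
      console.log(`${this.name}.${partName} opened to ${part.openAngle} deg`); // one display frame
    }
  }

  // Disintegrate a removable sub-part so it can be inspected on its own.
  detach(partName: string): SubPart {
    const idx = this.parts.findIndex(p => p.name === partName && p.removable);
    if (idx < 0) throw new Error(`${partName} cannot be disintegrated`);
    return this.parts.splice(idx, 1)[0];
  }
}

// Usage: open a refrigerator door, then take the battery out of a mobile.
const fridgeModel = new MultiPartModel("refrigerator", [
  { name: "upperDoor", openAngle: 0, removable: false },
  { name: "lowerDrawer", openAngle: 0, removable: false },
]);
fridgeModel.open("upperDoor", 90);

const mobileModel = new MultiPartModel("mobile", [
  { name: "backCover", openAngle: 0, removable: true },
  { name: "battery", openAngle: 0, removable: true },
]);
mobileModel.detach("backCover");
console.log(mobileModel.detach("battery").name); // "battery"
```
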
Fig. 6 shows different schematic views of the 3D-model of a laptop showing intrusive interaction with a virtual operating sub-system (OS). Fig. 6a shows the virtual simulation of the 3D-model of a laptop schematically in power-off mode. A user can not only check the laptop looks and compare specifications, but can operate the laptop just as in a real-life scenario, such as switching it on to judge the start-up time, which is the real start-up time for the said product had the product been started in a real-life set-up. The virtual operating sub-system (OS) is shown loaded within the 3D-model of the laptop. Fig. 6b shows schematically a realistic simulation of a laptop starting (601) with the help of the virtual operating sub-system (OS). The virtual operating sub-system is built with artificial intelligence and realistic 3D-simulation and interaction technology. The virtual operating sub-system (OS) mimics the real operating systems loaded in existing systems, computers or any computing devices for operation, such that the hardware of the displayed virtual simulation of the 3D-model of a laptop can be operated through the virtual operating sub-system. Fig. 6c shows the started virtual operating sub-system ready for user login, such that the system booting time can be estimated virtually as in a real scenario. Fig. 7 shows schematically a temperature view of the simulated 3D-model of an iron (7a-7b), depicting heating of the iron's lower surface at different time intervals (7c-7e) as per time-bound changes based interactions. The simulation of the 3D-model of the iron here is shown schematically. The heating generated in the iron is ascertained by colour coding from light to dark shades representing low to high temperature respectively, and can be displayed in standard metrics such as degrees Celsius or Fahrenheit for a particular time interval (not shown in figure). After 1.5 minutes of operation, say, the iron heats to a temperature of 70 degrees Celsius. The value of this view comes when two 3D-models of different products are compared (7f and 7g) for temperature at the same time, say one minute after operation, to see the difference in generated temperature in real-time without actually having to operate the iron, which might not be possible or allowed in a real set-up.

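A minimal sketch of the time-bound temperature view of Fig. 7 follows: the operating temperature after a chosen interval is mapped to a colour shade so that two products can be compared at the same instant. The linear heating curve, thresholds and shade scale are illustrative assumptions, not values from the disclosure.

```typescript
// Sketch of the temperature view: heat generated after a given operating time
// is mapped to a shade (light = cool, dark = hot) for side-by-side comparison.

interface HeatedProduct {
  name: string;
  maxTempC: number;        // assumed steady-state operating temperature
  heatUpMinutes: number;   // assumed minutes needed to reach maxTempC
}

function temperatureAfter(product: HeatedProduct, minutes: number): number {
  const fraction = Math.min(minutes / product.heatUpMinutes, 1);
  return 25 + (product.maxTempC - 25) * fraction;   // start from 25 °C ambient
}

// Map a temperature to a grey shade: lighter for cooler, darker for hotter.
function shadeFor(tempC: number, maxC = 250): string {
  const level = Math.round(255 * (1 - Math.min(tempC / maxC, 1)));
  return `rgb(${level}, ${level}, ${level})`;
}

// Usage: compare two irons one minute after switch-on.
const ironA: HeatedProduct = { name: "iron A", maxTempC: 180, heatUpMinutes: 2 };
const ironB: HeatedProduct = { name: "iron B", maxTempC: 210, heatUpMinutes: 1.5 };
for (const iron of [ironA, ironB]) {
  const t = temperatureAfter(iron, 1);
  console.log(`${iron.name}: ${t.toFixed(0)} °C, shade ${shadeFor(t)}`);
}
```
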
Fig. 8 shows different perspective views of a realistic 3D-simulation of a chair with its touch view for judging the softness of the seat and back cushion in an intrusive interaction. The chair in fig. 8a and 8c transforms in real-time to another view, represented in shades of colour in the touch view, to depict the softness of the seat and cushion (8b, 8d). The softness can be colour coded from light to dark shades representing very soft, soft to hard surfaces respectively, or an index is displayed in a numerical standard allowing comparison of products on the parameter of softness or smoothness.

Fig. 9 shows a schematic view of a virtual simulation of the 3D-model of a liquor bottle (9a) in a taste view. When a user selects a taste view for food items, such as liquor in this embodiment, the taste type is displayed mimicking the brand taste of the displayed object. This feature of the invention goes beyond the real set-up scenario, as in a real scenario users cannot open the bottle or taste the product before buying it. The user can also open the cork of the bottle, or pour the liquid (9b-9c) from the simulated 3D-model of the bottle, emulating a real scenario.

Fig. 10 shows schematic views of different frames of a continuous animation of a virtual simulation of the 3D-model of a toothpaste tube showing paste coming out of the tube in an intrusive interaction. The cap of the virtual simulation of the 3D-model of the toothpaste tube is opened, and the tube is pressed to squeeze out paste (10a-10c). The paste colour can also be observed together with the exterior body of the paste tube. The strength required to press the tube can also be judged and compared with another paste of a different product or brand, where the characteristics, state and nature of the product are the same as those of the real product in a real store.

Fig. 11 shows perspective views of the 3D-model of a bike depicting an example of intrusive interactions as per user choice, where a part of a 3D-model can be opened, exchanged or changed for another part of similar nature in a different colour or shape as per user choice. Here, the seat (1101) of the 3D-model of a bike is changed to a differently coloured seat (1102) to match the body of the bike as per user choice, performing an intrusive interaction virtually. The

3D-model of a bike can also be seen in a pressure view to judge the operating pressure of its parts. Fig. 12 shows perspective and partially enlarged views of the 3D-model of the bike of fig. 11 depicting the pressure view (12a), where the pressure or force required to operate a brake (12b) or operate a kick (12c) can be judged by colour shade differentiation in an intrusive interaction. A pressure (p1) generated while operating the kick is shown in fig. 12c. The user can further check individual parts of the 3D-model of the bike as shown in Fig. 13, where some parts, such as a wheel (1301, 1301', 1301"), of the 3D-model have been disintegrated as per user choice. Fig 14 shows perspective views of the 3D-model of a car (14a-14c) showing another form of intrusive interactions. Doors (1401) of a simulated 3D-model of a car (14a) can be opened in a manner such as in a real scenario in the true sense. An exploded view of the 3D-model of a car can be viewed to inspect each part as per user choice in user-controlled realistic simulation. Further, the steering wheel can be rotated to judge the power steering, and the smoothness of the tyres can be judged, where individual parts are disintegrated in real-time using the user-controlled realistic simulation and interaction technology, mimicking the real-life scenario. The disintegrated parts, e.g. the wheel in this case, are also displayed in a 3D-simulation view, where individual parts such as the wheel can be rotated separately just like in a real set-up. In fig. 15, an example of environment mapping based interactions is shown schematically, where in fig. 15a a section of a room (1501) with a real sofa (1503), and a system with a camera (1502), is shown. The camera (1502') mounted on an electronic screen (1507) captures the video of the room section with the sofa. The captured video (1504) is shown on the front side of an electronic screen (1507') in fig. 15b, where a simulated 3D-model of a sofa cushion (1505) is also displayed by a 3D-model displayer (1506, 1506') for interaction. The user can initiate environment mapping simulation by requesting the virtual assistant sub-system. The virtual assistant sub-system directs the camera to capture the video of the section of the room (1501) with the real sofa (1503). The desired object, that is, the cushion (1505'), is placed over the captured video of the sofa (1503") as seen in fig. 15c interactively through the 3D-model displayer (1506') to check the compatibility in terms of colour match and aesthetics, to make an informed decision to select the cushion or search for a different product/cushion as per user choice.

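The environment mapping of Fig. 15 can be sketched for a browser as follows: the camera feed of the user's surroundings is played in a video element, and the 3D-model displayer is drawn on a transparent canvas layered above it. The element handling and the stand-in drawing routine are assumptions; any WebGL renderer could take the place of the placeholder rectangle.

```typescript
// Browser-side sketch of real environment mapping: live camera video beneath,
// a transparent canvas acting as the 3D-model displayer layered on top, so the
// simulated object (e.g. a cushion) appears inside the captured environment.

async function startEnvironmentMapping(): Promise<void> {
  const video = document.createElement("video");
  const overlay = document.createElement("canvas");

  // Stack the canvas over the live video (video beneath, model displayer above).
  for (const el of [video, overlay]) {
    el.style.position = "absolute";
    el.style.top = "0";
    el.style.left = "0";
    document.body.appendChild(el);
  }

  const stream = await navigator.mediaDevices.getUserMedia({ video: true });
  video.srcObject = stream;
  await video.play();

  overlay.width = video.videoWidth;
  overlay.height = video.videoHeight;
  const ctx = overlay.getContext("2d")!;

  // Placeholder for the 3D-model displayer: draw the simulated object each frame.
  function drawFrame(): void {
    ctx.clearRect(0, 0, overlay.width, overlay.height);
    ctx.fillStyle = "rgba(200, 60, 60, 0.9)";
    ctx.fillRect(overlay.width / 2 - 60, overlay.height / 2 - 30, 120, 60); // stand-in cushion
    requestAnimationFrame(drawFrame);
  }
  drawFrame();
}

startEnvironmentMapping().catch(err => console.error("camera unavailable:", err));
```
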
Fig 16 shows the mirror effect as another form of environment mapping based interactions, where in fig. 16a and 16b the front top portion of the 3D-model of a bike (1605, 1605') is shown zoomed, with a rear view mirror (1603, 1603'), a front webcam (1601, 1601'), an electronic screen (1602, 1602'), and a user (1604) sitting in front of the displayed 3D-model of the bike. A reflection (1604') of the face of the user can be seen on the rear view mirror (1603') of the 3D-model of the bike just like in a real scenario. The reflection (1604') is generated in real-time when a user sitting in front of the electronic screen initiates environment mapping simulation through any input mode using a system of user-controlled realistic simulation and interaction. Another example is a simulated 3D-model of a dressing table producing a reflection of the user's body in the said mirror effect. During interaction in the panoramic view, the virtual assistant (1901, 1901') remains intact in the same position over the panoramic view while the panoramic image or panoramic model moves in an interactive and synchronized manner.

Fig 17 shows different schematic and perspective views of an interactive video of the 3D-graphics environment model of the interior of a refrigerator showroom in a consolidated view category. The virtual assistant is asked to display a refrigerator showroom, which is loaded on the right hand side (17a). In drawing 17a, a virtual assistant (1701') is displayed on the left hand side, capable of initializing real-time intelligent human-like chatting interaction with a real user. 3D-models of different refrigerators (1703, 1704, 1705) are displayed in an interactive video of the interior of a 3D computer graphic model of a refrigerator showroom. A mouse cursor (1702) is shown in 17b; on clicking it on the path and dragging back, other 3D-models of refrigerators (1706, 1706') are displayed as seen in figures 17c and 17d. Figure 17d shows that the user wants to further inspect the first refrigerator (1703") from the right, and hence can request the display of a realistic 3D-model of the selected refrigerator for further user-controlled realistic interactions such as opening of the door as shown above in fig. 5.

Fig.18 shows a perspective representation of a panoramic view of a 3D-graphics environment model of the interior of a refrigerator showroom containing 3D-models of different refrigerators in a consolidated view category. The panoramic view category is a 360 degree view of a virtual place, such as a showroom, shown in different frames (18a-18c). The objects shown in the panoramic showroom are interactive objects, here a consolidated view of 3D-models of refrigerators, capable of generating a user-controlled realistic simulation of the said object as a 3D-model capable of user-controlled realistic interactions. In fig.19 another perspective representation of a panoramic view of a 3D-graphics environment model of the interior of a refrigerator showroom is shown, containing 3D-models of different refrigerators with a virtual assistant (1901, 1901'). The virtual assistant can also be an image or 3D-model, where the virtual assistant (1901') is shown moving its lips in response to a query. When the user moves the panoramic view from area position (A-1) to area position (A-2), the virtual assistant is still intact at its previous position, giving an improved panoramic image or model viewing experience, which is made possible by synchronised movement using the user-controlled realistic simulation and interaction technology.

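The panoramic behaviour of Figs. 18-19, where the showroom pans while the virtual assistant stays put, can be sketched as two independent layers, as below. The element ids and the plain CSS translation are illustrative assumptions rather than the disclosed mechanism.

```typescript
// Sketch of the panoramic consolidated view: dragging pans the 360-degree
// showroom layer while the virtual assistant layer keeps its screen position.

function initPanorama(panorama: HTMLElement, assistant: HTMLElement): void {
  let offset = 0;                        // horizontal pan of the showroom, in pixels
  let dragStart: number | null = null;

  assistant.style.position = "fixed";    // the assistant never moves with the pan
  assistant.style.right = "16px";
  assistant.style.bottom = "16px";

  panorama.addEventListener("pointerdown", e => { dragStart = e.clientX - offset; });
  panorama.addEventListener("pointermove", e => {
    if (dragStart === null) return;
    offset = e.clientX - dragStart;
    panorama.style.transform = `translateX(${offset}px)`;   // only the panorama pans
  });
  panorama.addEventListener("pointerup", () => { dragStart = null; });
}

// Usage: wire up two existing elements (ids are illustrative).
initPanorama(
  document.getElementById("showroom-panorama")!,
  document.getElementById("virtual-assistant")!,
);
```
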
Fig. 20 shows a schematic and perspective representation of a live telecast of a remote physical shop, where a change in an object is recognised and dynamic links are built in real-time for the display of 3D-models. In live video it becomes difficult to detect the type of object automatically in real time, and to recognise a change in an object if the object is replaced in the real store, such as a refrigerator (2010) by a washing machine (2020). The system of user-controlled realistic interaction can recognise the change in object in real-time or with some time lag, and build dynamic links over each object identified. The user, on providing input, can initiate display of the 3D-model of the said object for further interactions. The video of the physical showroom can be captured by conventional devices such as a camera capable of capturing video, a transmitting unit and a receiving unit. The receiving unit can receive the said video and supply a live feed to a central database of the system of user-controlled realistic interaction. The live feed data can be processed to make it compatible to run and be viewed over HTTP, even in a website.

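One possible sketch of the dynamic links of Fig. 20 is shown below: whenever the set of detected objects in the live feed changes (the refrigerator replaced by a washing machine), the clickable region over each object is rebuilt so that it invokes the 3D-model displayer for the new object. The detection step itself is out of scope and is stubbed; all names are illustrative assumptions.

```typescript
// Sketch of the live-telecast displayer: rebuild clickable overlays for each
// object currently detected in the feed; clicking one loads its 3D-model.

interface DetectedObject {
  id: string;                                  // e.g. "washing-machine-w10"
  box: { x: number; y: number; w: number; h: number };
}

type ShowModel = (objectId: string) => void;   // invokes the 3D-model displayer

function rebuildDynamicLinks(
  container: HTMLElement,
  detections: DetectedObject[],
  showModel: ShowModel,
): void {
  container.innerHTML = "";                    // drop links for objects no longer present
  for (const d of detections) {
    const link = document.createElement("div");
    link.style.position = "absolute";
    link.style.left = `${d.box.x}px`;
    link.style.top = `${d.box.y}px`;
    link.style.width = `${d.box.w}px`;
    link.style.height = `${d.box.h}px`;
    link.style.cursor = "pointer";
    link.onclick = () => showModel(d.id);      // click opens the 3D-model displayer
    container.appendChild(link);
  }
}

// Usage with a stubbed detector: the refrigerator has been swapped for a washer.
rebuildDynamicLinks(
  document.getElementById("live-telecast-overlay")!,
  [{ id: "washing-machine-w10", box: { x: 120, y: 80, w: 200, h: 260 } }],
  id => console.log(`loading 3D-model of ${id}`),
);
```
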
Fig. 21 shows perspective views of a mechanical engineering design of a 3D-model of a lathe machine for remote demonstration, as another application of the user-controlled realistic simulation and interaction technology. It is difficult to collaborate on and demonstrate complex machinery remotely using conventional means. The 3D-models simulated by the user-controlled realistic simulation and interaction technology are not hollow but complete, emulating real objects in a real scenario, and can be used to provide remote demonstration of the working of the said machine using extrusive, intrusive and time bound changes based interactions, such as heating produced after certain time intervals. A sliding motion of the middle part of the lathe machine from one position (2101) to another position (2102) is shown. The user can interact with its parts to understand its functioning in a virtual but real-like setup, as the user would have interacted with the real machine. If the user wishes to know more about the said product or machine, he can simply query the virtual assistant, which replies with precise answers as per the query. The query can be typed in a chat, where the virtual assistant will reply either by speaking, by the action of moving lips, or by a written message to solve the query. Fig 22 shows another flowchart illustrating the method of user-controlled realistic simulation and interaction for enhanced object viewing and interaction experience. Step 2201 involves decision making in choosing a mode, selected from either a showroom mode or a product mode. Step 2201 is followed by different layouts displayed as per the chosen mode. A showroom view layout (2202) is displayed if showroom mode is chosen, or a product view layout (2203) is displayed if product mode is chosen. In step 2204, input is provided by the user for display of a showroom type in the pre-set consolidated view category after display of the showroom view layout, where an input is requested for display of a showroom type, such as a TV showroom or refrigerator showroom, as per user choice. Step 2205 involves detecting processing power consumption of the processor and/or network connectivity speed and/or memory space, where the said processor and memory are those of the user's system. In step 2206, based on the detected processing power consumption of the processor and/or network connectivity speed and/or memory space, selective loading of the showroom type in the pre-set consolidated view category takes place, as sketched after this paragraph. If the network is slow, the entire showroom view is not loaded, whereas if the network and processor speed are satisfactory, the entire showroom view is loaded, but simulation and texturing are adjusted such that there is no visual impact on the user side. This helps to minimize the impact of slow network speed and processing power on the experience of viewing the realistic virtual simulations. This also enables quick loading of graphics for seamless viewing. In step 2207, among the plurality of objects displayed, an input is received for display of a realistic 3D model of a desired object, where this step can be directly reached or initiated after display of the product view layout (2203) under product mode. The input can be received through conventional devices such as a pointing device (e.g. a mouse), via a keyboard, a hand-gesture or eye-movement guided input captured by a sensor of a system, a touch input, or by providing a command to a virtual assistant system. The command to the virtual assistant system can be a voice command or via chat.
In step 2208, the realistic 3D-model of the desired object for which input is received is loaded and simulated. If the desired object is a computer, laptop or any computing device, a virtual operating sub-system is also loaded and installed within the loaded 3D-model, such as within the simulated 3D-model of a laptop, based on product or brand characteristics. Step 2209 involves displaying the 3D-model of the desired object in a 3D-computer graphic environment. The displayed 3D-model of the desired object has a standard realistic 3D-view by default. Other interactive views can be a pressure view for judgement of the pressure required to operate the said displayed object, a taste view to judge the perception of the sense of taste, a temperature view for judging heat generated during operation of the said displayed object after certain time intervals, and a touch view for judging the sense of softness of touch when applied on the displayed object. Other views are available as per the characteristics, state and nature of the displayed object. In step 2210, user-controlled realistic interactions can be performed and made available with the displayed realistic 3D-model, emulating a real scenario in a real set-up. The user-controlled realistic interactions comprise extrusive interaction, intrusive interactions, time bound changes based interaction, real environment mapping based interactions and/or user body mapping based interaction as per user choice and as per the characteristics, state and nature of the displayed object.

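Steps 2205-2206 above (probing network speed and processing headroom, then loading the showroom selectively) might be sketched as follows; the probe values, thresholds and load-plan shape are illustrative assumptions only.

```typescript
// Sketch of selective loading: decide, from the client's connection and
// processing headroom, whether the showroom is loaded in full or selectively
// (fewer models, coarser textures) so viewing is not visibly affected.

interface ClientProfile {
  downlinkMbps: number;          // assumed result of a network-speed probe
  hardwareConcurrency: number;   // assumed proxy for available processing power
}

interface LoadPlan {
  modelCount: number;            // how many showroom 3D-models to stream up front
  textureQuality: "full" | "reduced";
}

function planShowroomLoad(profile: ClientProfile): LoadPlan {
  const fast = profile.downlinkMbps >= 5 && profile.hardwareConcurrency >= 4;
  return fast
    ? { modelCount: Infinity, textureQuality: "full" }    // load the whole showroom
    : { modelCount: 6, textureQuality: "reduced" };       // selective loading on slow clients
}

// Usage: a slow connection gets the selective plan, a fast one gets everything.
console.log(planShowroomLoad({ downlinkMbps: 1.5, hardwareConcurrency: 2 }));
// -> { modelCount: 6, textureQuality: "reduced" }
console.log(planShowroomLoad({ downlinkMbps: 25, hardwareConcurrency: 8 }));
// -> { modelCount: Infinity, textureQuality: "full" }
```
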
Fig. 23 shows a system of user-controlled realistic simulation and interaction for enhanced object viewing and interaction experience. The said system comprises: a) a graphical user interface (GUI) connected to a central search component configured for accepting user inputs;

b) a consolidated view displayer for displaying 3D graphics environment, containing one or more 3D-models in an organized manner using a 3D consolidated view generating engine;

c) a 3D-model displayer for displaying 3D-model of an object simulated using a 3D objects generating engine, where the 3D-model displayer comprises at least one display space for displaying the virtual interactive 3D-model;

d) a virtual operating sub-system for providing functionality of operation of displayed 3D- model, where the virtual operating sub-system is installed during loading of said 3D- model as per characteristics, state and nature of displayed object, where the virtual operating sub-system is in direct connection to the 3D-model displayer and the 3D objects generating engine;

e) optionally a virtual assistant sub-system as one input mode for two way communication;

f) optionally a live telecast displayer for displaying live telecast of a place containing plurality of objects, where a dynamic link is built over each identified object, where each dynamic link invokes the 3D-model displayer for displaying 3D-model of the said identified object; and

g) optionally a camera for capturing video for background mapping based interaction, where the video captured from the camera is layered beneath the 3D-model displayer.

The GUI is in direct connection with the consolidated view displayer, the virtual assistant sub-system, the 3D-model displayer, and the central database, in addition to the central search component; the 3D-model displayer and the 3D objects generating engine are in direct connection to each other, and are also connected to the virtual operating sub-system. The 3D-model displayer makes it possible to display real world objects virtually by user-controlled realistic simulation of 3D-models of the said objects, in a manner such that interaction with the said objects is made possible in a life-like manner as in a real scenario. The 3D-model displayer is an interactive platform for carrying out extrusive interaction and/or intrusive interactions and/or time bound changes based interaction and/or real environment mapping based interactions as per user choice and as per the characteristics, state and nature of the said object. The 3D objects generating engine uses image associated data, real object associated data, polygon data and texturing data of the said object for generating the said 3D-model, where the simulated 3D-model comprises a plurality of polygons. The said system can be implemented over hypertext transfer protocol in a wearable or non-wearable display. The virtual assistant sub-system comprises a graphical user interface and a natural language processing component for processing user input in the form of words or sentences and providing output as per the received input, where the natural language processing component is integrated with the central database. The virtual assistant sub-system further includes a microphone for receiving voice commands, and a sound output device.

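A simplified structural sketch of the Fig. 23 wiring is given below (omitting the optional sub-systems): the GUI calls the central search component and the 3D-model displayer, the displayer uses the 3D objects generating engine, and the engine draws on the central database, which fronts the central data storage. The method signatures are illustrative assumptions; the patent defines connections between components, not APIs.

```typescript
// Structural sketch of the Fig. 23 components and their direct connections.

interface CentralDataStorage { getImageData(objectId: string): string; }

class CentralDatabase {
  constructor(private storage: CentralDataStorage) {}
  objectRecord(objectId: string) {
    return { objectId, image: this.storage.getImageData(objectId) };
  }
}

class ObjectsGeneratingEngine {
  constructor(private db: CentralDatabase) {}
  build(objectId: string) { return { model: this.db.objectRecord(objectId) }; }
}

class ModelDisplayer {
  constructor(private engine: ObjectsGeneratingEngine) {}
  show(objectId: string): void {
    console.log("displaying", JSON.stringify(this.engine.build(objectId)));
  }
}

class GUI {
  constructor(
    private search: (query: string) => string[],   // central search component
    private displayer: ModelDisplayer,
  ) {}
  request(query: string): void {
    const [hit] = this.search(query);
    if (hit) this.displayer.show(hit);
  }
}

// Usage: minimal wiring of the components.
const db = new CentralDatabase({ getImageData: id => `${id}.png` });
const gui = new GUI(q => (q.includes("fridge") ? ["fridge-01"] : []),
                    new ModelDisplayer(new ObjectsGeneratingEngine(db)));
gui.request("fridge double door");
```
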
It will be noted that the drawing figures included are schematic representations, and generally not drawn to scale. It will be further noted that the schematic representations are used for explaining the present invention, and are not actual 3D-models as per the present invention. It will be understood that virtually any computer architecture, such as client-server architecture, may be used without departing from the scope of this disclosure. The system (fig. 23) may take the form of a server computer, where some components like the camera, GUI and 3D-models are used, displayed or accessed at the client side over a LAN or through the Internet. In some embodiments, the client side can also be a hand-held computing device such as a laptop, smart phone etc.

Although a variety of examples and other information have been used to explain various aspects within the scope of the appended claims, no limitations of the claims should be implied based on particular features or arrangements in such examples, as one of ordinary skill would be able to use these examples to derive a wide variety of implementations. The present embodiments are, therefore, to be considered as merely illustrative and not restrictive, and the described features and steps are disclosed as examples of components of systems and methods that are deemed to be within the scope of the following claims.