Title:
SYSTEM AND METHOD FOR PROVIDING AN INTERACTIVE VIRTUAL MODEL OF A PHYSICAL STOREFRONT
Document Type and Number:
WIPO Patent Application WO/2015/028904
Kind Code:
A1
Abstract:
A method for providing an interactive virtual model of a physical area, wherein at least one object is positioned in the physical area, the method comprising the steps of: - presenting an interactive model of the physical area on a display of a computing device remotely located from the physical area; - sensing a grab signal of a user of the computing device indicating a virtual grabbing of an object displayed in the interactive virtual model; - sensing gestures of the user indicating a movement of the grabbed object and displaying the grabbed object on the display of the computing device responsive to the gestures of the user.

Inventors:
FORGHIERI FRANCO (DE)
Application Number:
PCT/IB2014/063272
Publication Date:
March 05, 2015
Filing Date:
July 21, 2014
Assignee:
FORGHIERI FRANCO (DE)
International Classes:
G06Q30/06; G06Q10/08
Foreign References:
US20070179867A12007-08-02
US20100241998A12010-09-23
US7685023B12010-03-23
Other References:
STEFAN WELKER: "3D Virtual Reality Gaming on a Smartphone", 6 March 2013 (2013-03-06), XP054975611, Retrieved from the Internet [retrieved on 20141121]
Attorney, Agent or Firm:
LEFFERS, Thomas (Karlstrasse 35, Munich, DE)
Claims:

1. A method for providing an interactive virtual model of a physical area, wherein at least one object is positioned in the physical area, the method comprising the steps of: presenting an interactive virtual model of the physical area on a display of a computing device remotely located from the physical area;

sensing a grab signal of a user of the computing device indicating a virtual grabbing of an object displayed in the interactive virtual model;

sensing gestures of the user indicating a movement of the grabbed object and displaying the grabbed object on the display of the computing device responsive to the gestures of the user.

2. The method according to claim 1, wherein the computing device is a personal computer, a personal digital assistant or a smartphone.

3. The method according to claim 2, wherein the smartphone is a 3D smartphone with stereoscopic display and wherein the 3D smartphone is inserted in a special binocular holder fixed to a head of the user.

4. The method according to one of claims 1 to 3, wherein the grab signal of the user and/or the gestures of the user comprises a motion of a body portion of the user and/or a movement of an additional device integrated into the computing device and/or a movement of an additional device connected to the computing device.

5. The method according to one of claims 1 to 4, further comprising the steps of:

sensing a head movement of the user;

determining an object the user wants to grab responsive to the head movement of the user.

6. The method according to one of claims 1 to 5, wherein the sensed gestures comprise a rotating and/or a zooming of the grabbed object.

7. The method according to one of claims 1 to 6, further comprising the steps of:

recording a command of the user;

adapting the presented interactive virtual model responsive to the command of the user.

8. The method according to claim 7, wherein the command is a motion of a body portion of the user, a voice control command, a movement of an additional device integrated into the computing device or a movement of an additional device connected to the computing device.

9. The method according to claim 7 or 8, wherein the command comprises a request to display a special object.

10. The method according to claim 7 or 8, wherein the command comprises a request to display a group of objects.

11. The method according to claim 7 or 8, wherein the command comprises an order to display another section of the physical area presented on the display of the computing device.

12. The method according to one of claims 1 to 11, further comprising the steps of:

receiving information associated with activities of a second user of the interactive virtual model of the physical area;

displaying to the user a representation of activities of the second user;

permitting the user to contact the second user.

13. The method according to one of claims 1 to 12, further comprising the step of permitting the user to contact an avatar.

14. The method according to one of claims 1 to 13, further comprising the steps of:

monitoring a viewing direction of the user;

altering the presented interactive virtual model responsive to a change of the viewing direction.

15. The method according to one of claims 1 to 14, wherein the physical area is a physical storefront and wherein the at least one object is a product positioned in the physical storefront.

16. The method according to claim 15, further comprising the step of sensing a gesture of the user indicating a laying of a grabbed product the user wants to purchase into a virtual shopping cart and displaying the laying of the grabbed product into the virtual shopping cart on the display of the computing device.

17. The method according to claim 16, wherein the gesture of the user indicating a laying of the grabbed product the user wants to purchase into the virtual shopping cart is a movement of a hand of the user representing throwing something down and/or a head movement of the user and/or a movement of an additional device integrated into the computing device and/or a movement of an additional device connected to the computing device.

18. The method according to claim 16 or 17, wherein the virtual shopping cart is displayed on the display of the computing device if the viewing direction of the user points to the ground.

19. The method according to claim 18, wherein virtual images of the products laid into the virtual shopping cart and/or a shopping list are displayed on the display of the computing device.

20. The method according to claim 19, wherein the shopping list comprises identifications of the products laid into the virtual shopping cart and a total price of the products laid into the virtual shopping cart.

21. The method according to one of claims 16 to 20, further comprising the steps of:

sensing a movement of the user;

moving the virtual shopping cart through the interactive virtual model responsive to the sensed movement of the user and displaying the movement of the virtual shopping cart within the user interactive interface.

22. The method according to one of claims 16 to 21, further comprising the step of cashless payment of the products laid into the virtual shopping cart.

23. The method according to claim 22, further comprising the step of delivering purchased products to the user.

24. The method according to one of claims 15 to 23, wherein a price of the grabbed product is displayed together with the grabbed product on the display of the computing device.

25. A system for providing an interactive virtual model of a physical area, wherein at least one object is positioned in the physical area, comprising:

an interface server configured to present an interactive virtual model of the physical area on a display of a computing device remotely located from the physical area to a user;

sensing means to sense a grab signal of the user of the computing device indicating a virtual grabbing of an object displayed in the interactive virtual model;

second sensing means to sense gestures of the user indicating a movement of the grabbed object;

adapting means to adapt the interactive virtual model such that the grabbed object is displayed on the display of the computing device responsive to the gestures of the user.

26. The system according to claim 25, wherein the computing device is a personal computer, a personal digital assistant or a smartphone.

27. The system according to claim 26, wherein the smartphone is a 3D smartphone with stereoscopic display and wherein the 3D smartphone is inserted in a special binocular holder fixed to a head of the user.

28. The system according to claim 25 or 27, further comprising 3D glasses adapted for stereoscopic viewing of the interactive virtual model.

29. The system according to one of claims 25 to 28, further comprising a plurality of third sensing means to sense a head movement of the user and/or gestures of the user and/or a viewing direction of the user and/or a movement of the user and/or a command of the user.

30. The system according to claim 29, wherein the plurality of third sensing means comprise a motion sensor for sensing a motion of a body part of the user and/or an additional device integrated into the computing device and/or an additional device connected to the computing device.

31. The system according to one of claims 25 to 30, further comprising:

a communication server configured to enable real time communications between the user and a second user of the interactive virtual model of the physical area and/or an avatar.

32. The system according to one of claims 25 to 31, wherein at least a portion of the organizational structure of the physical area is identical to a portion of the organizational structure of the interactive virtual model, the system further comprising:

a plurality of fourth sensing means, each associated with an object in the physical area;

a physical area server configured to automatically detect a location of each of the sensors within the physical area and to associate the detected location with a location of the object;

a virtual area server configured to update a virtual location of virtual objects in the interactive virtual model corresponding to the objects in accordance with sensed location data received from the physical area server.

33. The system according to one of claims 25 to 32, wherein the physical area is a physical storefront and wherein the object is a product positioned in the physical storefront.

Description:
System and method for providing an interactive virtual model of a physical area

The present application relates to a system and method for providing an interactive virtual model of a physical area, in order to make a remotely located person feel like being physically present in the physical area.

Handicapped or disabled persons often do not have the opportunity to physically visit a physical area, such as a physical storefront, a museum or other places of interest. Even for persons located far away from the physical area, visiting the physical area involves considerable effort. Thus, these persons usually have to revert to online shopping for purchasing items.

However, today's storefront shopping differs from online shopping in many aspects, even though the storefront and the online store are often operated by the same organization. For example, storefront shoppers are often able to quickly locate products in a physical store, as opposed to online shoppers in an online store. Likewise, storefront shoppers, as opposed to online shoppers, are able to discover new products in a physical storefront.

The goals of both online stores and physical stores are generally the same: facilitating the purchase of goods and services by customers. In some cases, online shopping offers advantages over shopping in a physical store. For example, online stores are often open continuously, whereas most physical stores have set hours. Online shoppers are also able to leverage features, such as search functionality, while physical shoppers are not. However, one drawback of online shopping is that the experience can feel sterile and isolating. Customers in such an environment may be less likely to have positive feelings about the online shopping experience, may be less inclined to engage in the online equivalent of, for example, window shopping, and may ultimately spend less money than their counterparts who shop in physical stores.

Further, while shopping online, the customer does not have the opportunity to examine and handle a physical product before purchasing. If the customer does not like the physical product once it has been delivered, the customer has the additional work of returning the product to the supplier.

US 7,685,023 B1 discloses a system and a method for virtualizing a physical storefront, to present an interactive virtual model, in particular a three-dimensional model, of a physical storefront to a user within a user interactive interface of a computing device remotely located from the physical storefront. Therein, at least a portion of the organizational structure of the interactive virtual model can be identical to a portion of the organizational structure of the physical storefront.

However, contrary to a physical storefront, such an interactive virtual model may not provide all of the information that the customer requires, particularly if the customer does not yet know exactly which product he would like to purchase. This may be the case if the customer is not sure which model of a particular product he wishes to purchase, or wants to know, for example, the ingredients and/or expiration date of food, or technical data. Therefore, methods for making a person remotely located from a physical area feel like being physically present in the physical area are desirable.

The present invention provides a method for providing an interactive virtual model of a physical area, wherein at least one physical object is positioned in the physical area. The method comprises the following: An interactive three-dimensional virtual model of the physical area is presented on a display of a computing device remotely located from the physical area. A grab signal of a user of the computing device indicating a virtual grabbing of an object displayed in the interactive virtual model is sensed. Then, gestures of the user indicating a movement of the grabbed object are sensed, and the grabbed object is displayed on the display of the computing device responsive to the gestures of the user.
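The claimed sequence can be pictured as a small event loop: present the model, sense a grab, then re-render the grabbed object as movement gestures arrive. The following Python sketch is purely illustrative; all class and function names are invented here, and the application does not prescribe any concrete data structures or APIs.

```python
# Minimal sketch of the claimed method loop (hypothetical names throughout).
from dataclasses import dataclass, field

@dataclass
class VirtualObject:
    name: str
    position: tuple          # (x, y, z) in the virtual model
    grabbed: bool = False

@dataclass
class VirtualModel:
    objects: list = field(default_factory=list)

    def render(self):
        # Stand-in for presenting the model on the remote display.
        for obj in self.objects:
            state = "grabbed" if obj.grabbed else "shelved"
            print(f"{obj.name} at {obj.position} ({state})")

def run_session(model, events):
    """Consume a stream of sensed user events and update the display."""
    held = None
    for kind, payload in events:
        if kind == "grab" and held is None:
            held = payload        # payload: the targeted VirtualObject
            held.grabbed = True
        elif kind == "gesture" and held is not None:
            dx, dy, dz = payload  # payload: sensed movement delta
            x, y, z = held.position
            held.position = (x + dx, y + dy, z + dz)
        model.render()            # redraw responsive to each sensed event

apple = VirtualObject("apple", (0.0, 1.2, 0.5))
model = VirtualModel(objects=[apple])
run_session(model, [("grab", apple), ("gesture", (0.1, 0.0, -0.2))])
```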

The method enables a user to visually and precisely survey an object that is remote from the user, for example within a physical storefront, a museum or another place of interest. Therein, the user does not have to be in the same location as the object in order to obtain visual images of the object. For example, the user can get more information about a product in a food trade than the information he can normally see within the interactive virtual model, in particular about ingredients and/or the expiration date of food, which may be cited on the bottom or a backside of the product. Also, the user can get more information about other products, for example connectors placed on the backside of an electronic product, or about technical data. Furthermore, the user can observe exhibits in a museum like a conventional visitor to the museum. Thus, by virtually grabbing the object, the user acts and has the experience of being physically present in the physical area, for example of being a physical shopper in a physical storefront, and, therefore, the experience for the user can be improved. The user of online shopping acting as a physical shopper in a physical storefront has the further advantage that the user can survey products displayed in the interactive virtual model that he actually does not want to purchase, like offers, new products or a sales discount in a physical storefront, and, therefore, there is the opportunity of impulse buying within online shopping. Therein, the interactive three-dimensional virtual model of the physical area can, for example, be presented on a fixed or wearable stereoscopic display of a computing device remotely located from the physical area.

Therein, the computing device can be a personal computer, a personal digital assistant or a smartphone. These electronic devices can be used here, since usually even handicapped or disabled persons own such a device. Further, electronic devices, for example smartphones, have been developed that are optimally adapted to the needs of handicapped or disabled persons. However, the computing device can be any other electronic computing device with wired or wireless networking capabilities, in particular one that is also able to display stereoscopic images.

In some embodiments, the smartphone can be a 3D smartphone with stereoscopic display, which is inserted in a special binocular holder fixed to a head of the user. In particular, a 3D smartphone, which can comprise an integrated accelerometer and/or a gyroscope unit, can be inserted in a special binocular holder fixed to the head of the user, for displaying a stereoscopic image and for sensing a movement of the user, in particular a head movement of the user. Further, the grab signal of the user and/or the gestures of the user can comprise a motion of a body portion of the user and/or a movement of an additional device integrated into the computing device and/or a movement of an additional device connected to the computing device. Such a grab signal may, for example, be a pinch grip formed by a thumb and a forefinger of a hand of the user. Further, the grab signal or a gesture of the user can also be a voice control command or a movement of an additional device connected to the computing device, such as a mouse or a joystick. Several motion sensors for sensing motion of a body part of a user are known in the art, for example sensors for electric muscular activity. Further, additional devices connectable to computing devices, such as a mouse or a joystick, as well as speech recognition software for computing devices, are known. Moreover, common computing devices include a plurality of sensors; for example, a common smartphone includes a motion sensor, which can also be used to sense the grab signal and/or the gestures of the user. Thus, the method can be realized using these devices without requiring additional effort or special adaptations.
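As one concrete, hypothetical realization of the pinch-grip grab signal mentioned above, a grab could be inferred whenever the tracked thumb and forefinger tips come closer than a small threshold. The threshold value and the fingertip coordinate format are assumptions made for this sketch, not details taken from the application.

```python
# Hypothetical pinch-grip detector; the sensing hardware is left open
# by the application (body motion, integrated or connected device).
import math

PINCH_THRESHOLD_M = 0.02  # 2 cm, an assumed tuning value

def is_pinch(thumb_tip, index_tip, threshold=PINCH_THRESHOLD_M):
    """Return True if the two fingertip positions form a pinch grip."""
    return math.dist(thumb_tip, index_tip) < threshold

# Fingertip coordinates as they might arrive from a hand tracker (meters).
print(is_pinch((0.10, 0.30, 0.50), (0.11, 0.30, 0.50)))  # True  (1 cm apart)
print(is_pinch((0.10, 0.30, 0.50), (0.20, 0.30, 0.50)))  # False (10 cm apart)
```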

In some embodiments, the method further comprises the following: A head movement of the user is sensed, and a group of objects or a single object the user wants to grab is determined responsive to the head movement of the user. In particular, the focus of the presented interactive virtual model can be laid on the objects that are placed within a viewing direction of the user. For example, objects displayed within an aisle or a storage rack of an interactive three-dimensional virtual model of a physical storefront can be selected according to the viewing direction of the user and, therefore, as if the user were a physical shopper in a physical storefront.

Further, the sensed gestures can comprise a rotating and/or a zooming of the grabbed object. Therefore, the user can interact with the grabbed object as if he really held it in his hand and, for example, act as a physical shopper examining the product in a physical storefront. For example, the user can modify the displayed size and/or the shown surface of the object he is regarding.
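Returning to the head-movement targeting described above: one plausible way to turn a sensed head pose into an object selection is to pick the object whose direction from the head best aligns with the viewing direction. This sketch assumes the head pose has already been resolved into a position and a direction vector; object names and coordinates are invented.

```python
# Select the object most closely aligned with the sensed viewing direction.
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def target_object(head_pos, view_dir, objects):
    """Return the object name best aligned with the viewing direction."""
    view_dir = normalize(view_dir)
    best, best_dot = None, -1.0
    for name, pos in objects.items():
        to_obj = normalize(tuple(p - h for p, h in zip(pos, head_pos)))
        dot = sum(a * b for a, b in zip(view_dir, to_obj))  # cosine of angle
        if dot > best_dot:
            best, best_dot = name, dot
    return best

shelf = {"cereal": (1.0, 1.5, 3.0), "milk": (-2.0, 1.0, 3.0)}
print(target_object((0.0, 1.6, 0.0), (0.3, 0.0, 1.0), shelf))  # -> cereal
```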

Also, the method may further comprise: A command of the user is recorded. Responsive to the command of the user, the presented interactive virtual model is adapted. As pointed out, users of the interactive virtual model, for example online shoppers, are able to leverage features, such as search functionality, while persons being physically present in the physical area, for example physical shoppers, are not. On the other hand, one drawback of using the interactive virtual model, for example for online shopping, is that the experience can feel sterile and isolating. According to the method of the present invention, the advantages of online shopping and physical shopping can be combined and, therefore, the overall shopping experience can be improved, in particular by making the user feel like being physically present in the physical storefront.

The command may be a gesture of the user, a voice control command and/or a movement of an additional device integrated into the computing device and/or a movement of an additional device connected to the computing device.

Further, the command can comprise a request to display a special object. The command may also be a request to display a group of objects the user wants to visually survey. Thereby, the user is enabled to visually compare similar objects.

Also, the command can comprise an order to display another section of the physical area represented in the interactive virtual model.

Furthermore, settings of the computing device and, thus, of the interactive virtual model, such as the color of the interactive virtual model, the used language or background music, can also be adapted responsive to a user's command.

Accordingly, the user is able to leverage features, such as search functionality, and can move from one section of the physical area to another without having to virtually walk through the whole area. For example, when the user moves from one section of a physical store to another or requests the display of a special product, the displayed aisles and shopping racks and, therefore, the products actually displayed within the user interactive interface are adapted accordingly. For example, the presented interactive virtual model could be adapted to display a section of the store in which a requested product is assumed to be located.
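The jump-to-section behavior could look like the following hypothetical handler, which maps a recognized product request directly to the store section where the product is assumed to be located. The command grammar and the product-to-section table are invented for the example.

```python
# Invented lookup: which section a requested product is assumed to be in.
SECTION_OF_PRODUCT = {"espresso": "beverages", "batteries": "electronics"}

class StorefrontView:
    def __init__(self, current_section="entrance"):
        self.current_section = current_section

    def handle_command(self, command):
        # e.g. a recognized voice command such as "show espresso"
        verb, _, product = command.partition(" ")
        if verb == "show" and product in SECTION_OF_PRODUCT:
            # Jump straight to the relevant section; no virtual walking.
            self.current_section = SECTION_OF_PRODUCT[product]
        return self.current_section

view = StorefrontView()
print(view.handle_command("show espresso"))  # -> beverages
```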

If a plurality of users are visiting the interactive virtual model of the physical area at the same time, the method may further comprise: receiving information associated with activities of a second user of the interactive virtual model of the physical area. A representation of the activities of the second user can be displayed to the user, and the user is permitted to contact the second user. Therefore, the user's own information may also be combined with the information of other visitors. In particular, the user can obtain additional information by asking other users questions about a special product and/or by exchanging experiences. Further, it is also possible to exchange grabbed objects between the users.

Such further visitors may be represented by avatars, rather than a more generic or uniform icon, within the user interactive interface. Further, an avatar may, for example, represent an employee who is available to assist a user of the computing device who requires assistance and requests help.

Further, a viewing direction of the user can be monitored, and the presented interactive virtual model can be altered responsive to a change of the viewing direction. Thereby, the orientation of portions within the interactive virtual model, for example aisles and shopping racks of the online shop and, therefore, the displayed objects within the interactive three-dimensional virtual model of the physical area, can be adjusted responsive to the viewing direction of the user. Accordingly, the user gets the feeling that he is really standing within a physical area and looking around. Further, the appearance of the physical area in the interactive virtual model, for example the appearance of a physical storefront, can also be altered responsive to an input of a test customer.
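Reduced to its simplest form, altering the presentation responsive to the viewing direction is a mapping from a sensed head angle to the part of the model that is rendered. The sketch below collapses the viewing direction to a single yaw angle and uses an invented section layout; a real system would of course update a full 3D camera pose.

```python
# Toy mapping from a sensed yaw angle to the rendered part of the model.
def visible_section(yaw_degrees):
    """Map a head yaw reading to the model section to present."""
    yaw = yaw_degrees % 360
    if yaw < 90:
        return "aisle ahead"
    if yaw < 180:
        return "shelves on the right"
    if yaw < 270:
        return "aisle behind"
    return "shelves on the left"

for yaw in (10, 120, 200, 300):
    print(yaw, "->", visible_section(yaw))
```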

Furthermore, the method may comprise sensing a change involving at least one physical object within the physical area. A virtual object presented in the interactive virtual model is changed responsive to sensing the change, so that the change to the physical object occurring in the physical area is reflected in the interactive virtual model and is displayed in the user interactive interface. Therefore, the interactive virtual model and the experience of being physically present in the physical area can be unified. For example, a physical storefront layout and organizational information can be stored within a planogram in a server. As used herein, a planogram can be a diagram or schematic indicating organizational information about a physical storefront. A planogram can facilitate the automated generation of a virtual storefront model. A planogram can include information including, but not limited to, aisle layout or product location, and the like. Other sources containing organizational information about a storefront can be used alone or be combined with planogram usage. Thereby, a virtual storefront model, in particular an interactive three-dimensional virtual model of the physical storefront, can be automatically generated. For example, a location and identity of various in-store products can be obtained from photos, which, when combined with planogram data, provide an extremely realistic presentation of the store. Therein, with dedicated software it is possible to extract an object, in appearance and size, from a photo of a physical object in a physical storefront and to add it to the interactive virtual model of the physical storefront. A further advantage is that as in-store modifications are made, such as moving products, the changes can be automatically detected, which results in a corresponding update of the virtual model of the physical storefront.
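A planogram in the sense used above can be modeled as a simple table of product locations from which a virtual layout is derived. The field names and the grouping step below are one plausible encoding, assumed purely for illustration.

```python
# Toy planogram: aisle layout and product locations, per the description.
from dataclasses import dataclass

@dataclass
class PlanogramEntry:
    product_id: str
    aisle: int
    rack: int
    shelf: int

def build_virtual_layout(planogram):
    """Group products by (aisle, rack) so a virtual model can place them."""
    layout = {}
    for entry in planogram:
        layout.setdefault((entry.aisle, entry.rack), []).append(entry.product_id)
    return layout

plan = [
    PlanogramEntry("muesli-500g", aisle=3, rack=1, shelf=2),
    PlanogramEntry("oat-milk-1l", aisle=3, rack=1, shelf=0),
    PlanogramEntry("usb-cable", aisle=7, rack=4, shelf=1),
]
print(build_virtual_layout(plan))
# {(3, 1): ['muesli-500g', 'oat-milk-1l'], (7, 4): ['usb-cable']}
```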

In some embodiments, the physical area is a physical storefront and the at least one object is a product positioned in the physical storefront. Thus, a user of the computing device remotely located from the physical storefront can feel like being physically present in the physical storefront, as if he were a physical shopper, and, therefore, the experience of online shopping can be improved.

Thereby, the method may further comprise the step of sensing a gesture of the user indicating a laying of a grabbed product the user wants to purchase into a virtual shopping cart and displaying the laying of the grabbed product into the virtual shopping cart on the display of the computing device. Therefore, the user can interact with the interactive virtual model, which can mimic the physical storefront experience, as if he were a physical shopper in the physical storefront, in particular as if he were virtually walking through a store pushing a shopping cart, and the experience of online shopping can be further improved.

Therein, the gesture of the user indicating a laying of the grabbed product the user wants to purchase into the virtual shopping cart can be a movement of a hand of the user representing throwing something down. Thereby, the user of online shopping can act as if he were laying something into a shopping cart in a physical storefront and, therefore, get the feeling of being a physical shopper in a physical storefront. Further, the gesture of the user indicating a laying of the grabbed product the user wants to purchase into the virtual shopping cart can also be any other gesture, such as a head movement of the user or a movement of an additional device integrated into or connected to the computing device.

Therein, the virtual shopping cart can be displayed within the user interactive interface if the viewing direction of the user points to the ground. In a physical storefront, a physical shopper normally pushes a shopping cart. If the physical shopper wants to know which products are actually within his shopping cart, he looks down into the shopping cart.

Therefore, the user of online shopping can interact with the user interactive interface as if he were pushing such a shopping cart, too, and, therefore, as if he were a physical shopper within the physical storefront presented in the interactive three-dimensional virtual model.

When the virtual shopping cart is displayed within the user interactive interface, virtual images of the products laid into the virtual shopping cart and/or a shopping list can be displayed. Thus, like a physical shopper in a physical storefront, the user can get information about the products actually lying within his shopping cart by looking down, and, therefore, the experience of online shopping can be further improved.

The shopping list may comprise identifications of the products laid into the virtual shopping cart and a total price of the products laid into the virtual shopping cart. Consequently, by looking into the virtual shopping cart, the user does not only see the products actually lying in the virtual shopping cart but also gets information about a total price of these products and, therefore, the shopping experience can be enhanced. For example, if an actual total price is larger than an available budget of the user, the user can make appropriate changes, for example decide to virtually put products actually lying in the virtual shopping cart back into the shopping racks of the interactive three-dimensional virtual model of the physical storefront. Further, the shopping list may also comprise information about the price of each single product.
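Rendered as data, the shopping list described here is just product identifications with per-item prices and a running total. The sketch uses integer cents and invented product data; nothing in it is prescribed by the application.

```python
# Shopping list with per-item prices and a total, in integer cents.
cart = [("muesli-500g", 349), ("oat-milk-1l", 219), ("usb-cable", 799)]

def shopping_list(items):
    """Format product identifications, item prices and the total price."""
    lines = [f"{name}: {price / 100:.2f} EUR" for name, price in items]
    total = sum(price for _, price in items)
    lines.append(f"Total: {total / 100:.2f} EUR")
    return "\n".join(lines)

print(shopping_list(cart))
```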

In further embodiments, a movement or a gesture of the user is sensed, and the virtual shopping cart can be moved through the interactive virtual model responsive to the sensed movement of the user. The movement of the virtual shopping cart is displayed on the display of the computing device. Accordingly, the user gets the feeling as if he were a physical shopper in a physical storefront, moving a shopping cart. The shopping cart may be pulled, pushed or rotated to the left or the right and, in particular, moved through the aisles of the interactive three-dimensional virtual model of a physical storefront.

The user may further be able to purchase the products laid into the virtual shopping cart. Therefore, cashless payment of the products laid into the virtual shopping cart can be established. For example, the user's credit card can be debited for the amount of the purchase price. Further, a credit transfer or an automatic debit transfer system may be used, too. In one embodiment, a password is used as a safety test in the cashless payment process. The password may be a special gesture, a voice command of the user or any other special attribute identifying the user. Further, the payment process may, for example, be activated by a special gesture of the user or a head movement of the user.
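The gesture-activated, password-guarded payment step might be structured as follows. This is a deliberately simplified sketch: the gesture label, the secret handling and the settlement message are placeholders, and a real system would verify credentials and debit the payment method server-side.

```python
# Hypothetical payment gate: activation gesture plus a "password" attribute.
import hashlib
import hmac

STORED_SECRET = hashlib.sha256(b"users-enrolled-gesture").hexdigest()

def authorize_payment(activation_gesture, presented_secret, amount_cents):
    if activation_gesture != "payment-gesture":   # e.g. a special gesture
        return "payment not activated"
    digest = hashlib.sha256(presented_secret).hexdigest()
    if not hmac.compare_digest(digest, STORED_SECRET):
        return "identity check failed"
    return f"debit {amount_cents / 100:.2f} EUR from stored payment method"

print(authorize_payment("payment-gesture", b"users-enrolled-gesture", 1367))
```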

In some embodiments, the purchased products are further delivered to the user, for example to the user's address. In particular, according to delivery instructions provided by the user, a delivery agent can deliver the purchased products based on delivery options specified in the delivery instructions. In one embodiment, the delivery agent can deliver the products directly to the user's door or to desired physical locations. Therefore, according to the present invention, handicapped or disabled persons, who are not able to visit a physical storefront, for example a food trade or a drugstore, can have an improved online shopping experience, wherein the advantages of online and physical shopping are combined, and finally get the purchased products delivered.

Furthermore, a price of a grabbed product can also be displayed together with the grabbed product on the display of the computing device. Therefore, useful information can be presented based on a product which has been grabbed and which the user is actually surveying.

A system for providing an interactive virtual model of a physical area, wherein at least one physical object is positioned in the physical area, is also provided, which comprises an interface server configured to present an interactive three-dimensional virtual model of the physical area on a display of a computing device remotely located from the physical area to a user. The system further comprises sensing means to sense a grab signal of the user of the computing device indicating a virtual grabbing of an object displayed in the interactive virtual model, second sensing means to sense gestures of the user indicating a movement of the grabbed object, and adapting means to adapt the display of the computing device such that the grabbed object is displayed on the display of the computing device responsive to the gestures of the user. Therein, the sensing means and the second sensing means can be connected to the interface server via a wireless or a wired network.

In some embodiments, the computing device can be a personal computer, a personal digital assistant or a smartphone. However, the computing device can be any other computing device with wireless or wired networking capabilities, too.

Therein, the smartphone can be a 3D smartphone with stereoscopic display, which is inserted in a special binocular holder fixed to a head of the user. In particular, a 3D smartphone, which can comprise an integrated accelerometer and/or a gyroscope unit, can be inserted in a special binocular holder fixed to the head of the user, for displaying a stereoscopic image and for sensing a movement of the user, in particular a head movement of the user.

In some embodiments, the system may comprise 3D glasses, in particular 3D wearable virtual reality glasses. 3D glasses, in particular 3D wearable virtual reality glasses, can make a user feel like being part of the action. Therefore, the user can feel like walking through a physical area even though he is not really physically present within the physical area. Further, the display can also be a virtual retinal display.

In some embodiments, the system can further comprise a plurality of third sensing means to sense a head movement of the user and/or gestures of the user and/or a viewing direction of the user and/or a movement of the user and/or a command of the user. Therein, the sensing means to sense a head movement of the user can, for example, comprise attitude sensors and motion sensors to monitor the head movement. The respective sensing means can be integrated into the computing device or connected to the computing device. For example, a smartphone with an integrated accelerometer and/or gyroscope unit can be inserted in a special binocular holder, which is fixed to the head of the user, for sensing a head movement of the user. There may further be an evaluation unit for real time or near real time evaluation of the sensed head movement, for example for determining an object displayed in the interactive three-dimensional virtual model of the physical area that the user wants to grab.

The means for sensing gestures of the user may, for example, comprise motion sensors and/or mobility sensor devices for biomechanical sensing of muscle contractions, enabling sensitive, accurate gesture-based control of computing devices. Therein, the respective sensing means can be integrated into the computing device or connected to the computing device. For example, a camera unit integrated in a smartphone can be used for sensing the gestures of the user. There may also be an evaluation unit evaluating the sensed signal in real time or near real time, for example for the determination of a grabbed object the user wants to purchase.

Further, the means for sensing a viewing direction can be established through optical sensors or parts of the 3D glasses, for example parts of 3D wearable virtual reality glasses or a computing device with eye projection. The means for sensing a viewing direction of the user can also comprise motion sensors and/or mobility sensor devices for biomechanical sensing of muscle contractions. There may also be an evaluation unit for real time or near real time evaluation of the sensed signals, for example to alter the presented interactive virtual model responsive to a change of the viewing direction.

The means for sensing a movement of the user may, for example, be integrated into the computing device, for example a motion sensor incorporated into the computing device. There may also be an evaluation unit evaluating the sensed signal in real time or near real time, for example for moving a virtual shopping cart through the interactive virtual model responsive to the sensed movement. Further, the means for sensing a movement of the user can also be an apparatus similar to the grip of a physical shopping cart.

Finally, the means for sensing a command of the user may comprise speech recognition software, mobility sensor devices for biomechanical sensing of muscle contractions for sensitive, accurate gesture-based control of computing devices, or means for sensing a movement of an additional device connected to the computing device, such as a mouse or a joystick. There may also be an evaluation unit for real time or near real time evaluation of the sensed commands, for example to adapt the presented interactive virtual model responsive to the command of the user.

Further, the system can comprise a communication server configured to enable real time communications between the user and a second user of the interactive virtual model of the physical area and/or an avatar, thereby enabling the user to communicate and to exchange grabbed objects with further users visiting the interactive virtual model at the same time, or with an employee.

In some embodiments, at least a portion of the organizational structure of the physical area is identical to a portion of the organizational structure of the interactive virtual model, and the system further comprises a plurality of fourth sensing means, each associated with a physical object of the physical area, a physical area server configured to automatically detect a location of each of the sensors within the physical area and to associate the detected location with a location of the physical object, and a virtual area server configured to update a virtual location of virtual objects in the interactive virtual model corresponding to the physical objects in accordance with the sensed location data received from the physical area server.
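The division of labor between the two servers can be sketched as follows: the physical area server turns per-object sensor readings into object locations, and the virtual area server mirrors those locations into the model. Class names and the message format are assumptions made for this illustration.

```python
# Sketch of physical-to-virtual location synchronization (invented names).
class VirtualAreaServer:
    def __init__(self):
        self.virtual_locations = {}

    def apply_update(self, object_id, location):
        # Mirror the sensed physical location into the virtual model.
        self.virtual_locations[object_id] = location

class PhysicalAreaServer:
    def __init__(self, virtual_server):
        self.virtual_server = virtual_server

    def on_sensor_reading(self, sensor_id, location):
        # Each sensor is associated with exactly one physical object, so
        # its detected location is taken as that object's location.
        self.virtual_server.apply_update(sensor_id, location)

virtual = VirtualAreaServer()
physical = PhysicalAreaServer(virtual)
physical.on_sensor_reading("pallet-17", (4.0, 0.0, 12.5))  # product moved
print(virtual.virtual_locations)  # {'pallet-17': (4.0, 0.0, 12.5)}
```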

In some embodiments, the physical area is a physical storefront and the at least one object is a product positioned in the physical storefront. Thus, a user of the computing device remotely located from the physical storefront can feel like being physically present in the physical storefront, as if he were a physical shopper, and, therefore, the experience for the user can be improved. However, the physical area can also be, for example, a museum or any other place of interest.

Embodiments of the invention will now be described with reference to the drawings.

Figure 1 illustrates a flow chart of a method for providing an interactive virtual model of a physical area according to a first embodiment.

Figure 2 illustrates the step of sensing gestures of the user indicating a movement of the grabbed object and displaying the grabbed object on the display of the computing device responsive to the gestures of the user according to a second embodiment.

Figure 3 illustrates further steps in a method for providing an interactive virtual model of a physical storefront according to a third embodiment.

Figure 4 illustrates a schematic diagram of a virtual shopping cart according to a fourth embodiment.

Figure 5 illustrates a schematic diagram of a system for providing an interactive virtual model of a physical area.

Figure 1 illustrates a flow chart of a method 1 for providing an interactive virtual model of a physical area according to a first embodiment, wherein at least one object is positioned in the physical area.

In this embodiment, the method 1 begins at step 2 when an interactive three-dimensional virtual model of a physical area is presented on a display of a computing device remotely located from the physical area. Within the embodiment shown, the user can interact with the interface, which can improve the user's experience.

At step 3, a grab signal of a user of the computing device indicating a virtual grabbing of an object displayed in the interactive virtual model is sensed.

Such a grab signal may, for example, be a pinch grip formed by a thumb and a forefinger of a hand of the user. Further, the grab signal can also be a voice control command, a gesture, a movement of an additional device integrated into the computing device or a movement of an additional device connected to the computing device. As used herein, an additional device integrated into the computing device is, for example, a motion sensor integrated into a smartphone, and a movement of an input device connected to the computing device means, for example, a movement of a mouse or a joystick connected to the computing device wirelessly or by wire.

Then, at step 4, gestures of the user indicating a movement of the grabbed object are sensed and the grabbed object is displayed on the display of the computing device responsive to the gestures of the user. Further, the grabbed object can also be displayed responsive to a voice control command, a movement of an additional device integrated into the computing device or a movement of an additional device connected to the computing device.

Thus, the method 1 enables a user, for example a handicapped or disabled person or a person located far away from the physical area, to visually and precisely survey an object that is remote from the user, for example within a physical storefront, a museum or another place of interest. Therefore, the method 1 shown in figure 1 is provided for making a person remotely located from a physical area feel like being physically present in the physical area.

In particular, the step 4 of sensing gestures of the user indicating a movement of the grabbed object and displaying the grabbed object on the display of the computing device responsive to the gestures of the user enables the user to visually and precisely survey an object within a physical area that is remote from the user. Therein, the user need not be in the same location as the object in order to obtain visual images of the object. For example, in food trades the user can get more information about the object than the information normally shown by the interactive virtual model, in particular about ingredients or about the shelf life of food, which may be cited on the bottom or a backside of the object. Also, the user can get more information about other objects, for example pins placed on the backside of an electronic product. Furthermore, the user can observe exhibits in a museum like a conventional visitor to the museum.

Therein, according to the embodiment shown in figure 1, the computing device is a smartphone. However, the computing device can also be a personal computer, a personal digital assistant or any other computing device with wireless or wired networking capabilities.

According to the embodiment shown in figure 1, the physical area is a physical storefront and the at least one object is a product positioned in the physical storefront. Thus, a user of the computing device remotely located from the physical storefront can feel like being physically present in the physical storefront, as if he were a physical shopper, and, therefore, the experience of online shopping can be improved. However, the physical area can also be, for example, a museum or any other place of interest.

Thereby, figure 1 illustrates the optional steps 5, 6 of cashless payment of the products the user wants to purchase and delivering the purchased products to the user.

At step 5, the products the user wants to purchase can be paid for cashlessly. For example, the user's credit card can be debited for the amount of the purchase price. Alternatively, the purchase price can be paid via an automatic debit transfer system or a credit transfer. In the embodiment of figure 1, a password is used as a safety test in the cashless payment process. The password may be a special gesture, a voice command of the user or any other special attribute identifying the user. Further, the payment process may, for example, be activated by a special gesture of the user or a head movement of the user.

At step 6, the purchased products are delivered directly to the user. Therefore, a delivery agent will deliver the products directly to the user's door or to desired physical locations according to delivery instructions provided by the user.

Figure 2 illustrates the step 4 of sensing gestures of the user indicating a movement of the grabbed object and displaying the grabbed object on the display of the computing device responsive to the gestures of the user according to a second embodiment.

Therein, reference numeral 10 illustrates a hand of a user making gestures, as if the user were grabbing and handling an object 11 displayed in the interactive three-dimensional virtual model of the physical area.

Reference numeral 12 illustrates a display of the computing device, on which the grabbed object 11 is displayed responsive to the gestures of the user. Therein, the display can be a fixed or wearable display.

Therein, the sensed gestures may comprise a rotating and/or a zooming of the grabbed object. Therefore, the user can interact with the object as if he really held it in his hand and, therefore, act as if he were physically present in the physical area.

In the left part of figure 2, the grabbed object 11 is shown responsive to a first position 13 of the user's hand 10, with a first surface 14 on top. Further, additional information, such as a price 15 of the grabbed object 11, is displayed together with the grabbed object on the display 12. If the user now wants to change the size of the displayed object 11 or to regard another surface of the grabbed object 11, on which, for example, the ingredients or expiration date of the object 11 are cited, the user can rotate his hand 10 to a second position 16. Within figure 2, the rotation of the user's hand is symbolized by arrow 17.

The right part of figure 2 shows the grabbed object 11 displayed on the display 12 of the computing device responsive to the second position 16 of the user's hand 10. As can be seen, a second surface 18 of the object is displayed, on which additional information 19, for example about the ingredients or expiration date of the object 11, is cited. Therefore, the user can act and has the experience of being physically present in the physical area.

Figure 3 illustrates further steps in a method 1 for providing an interactive virtual model of a physical area according to a third embodiment.

According to the embodiment shown in figure 3, the physical area is a physical storefront.

As can be seen in figure 3, there is illustrated a user 20 of a computing device 21 remotely located from a physical storefront and using an interactive three-dimensional virtual model 22 of the physical storefront.

In the embodiment shown, at least a portion of the organizational structure of the physical storefront is identical to a portion of the organizational structure of the interactive virtual model 22.

The layout and organizational structure of the storefront can be represented on a display 12 of the computing device 21. In the embodiment of figure 3, entities and objects within the storefront, such as shoppers 23, movable displays, non-movable displays, shopping carts 24, and the like, are represented within the interactive three-dimensional virtual model 22. For instance, a physical product in the storefront can be presented as the virtual product 11 within aisle 25 in the interactive three-dimensional virtual model 22. Therein, the user 20 can interact with the user interactive interface 12, which can mimic the storefront experience.

For example, by moving his head, the user 20 can determine a product he wants to grab, for example the virtual product 11. In particular, the user 20 can rotate, raise or drop his head in order to select an object displayed in the interactive three-dimensional virtual model 22 that he wants to visually and precisely survey. Within the embodiment of figure 3, the head movement of the user 20 is symbolized by arrow 26. Therein, the respective sensing means can be integrated into the computing device or connected to the computing device. For example, a smartphone with an integrated accelerometer and/or gyroscope unit can be inserted in a special binocular holder, which is fixed to the head of the user, for displaying a stereoscopic image and sensing a head movement of the user.

Further, the user 20, who is symbolized within the interactive three-dimensional model 22 by shopper 23, can lay a grabbed product he wants to purchase, for example the virtual product 11, into the virtual shopping cart 24, as if he were laying a product in a shopping cart within a physical storefront. Therefore, gestures of the user 20 are sensed, indicating a laying of a grabbed product the user wants to purchase into the virtual shopping cart. In the embodiment of figure 3, the gesture of the user 20 indicating a laying of the grabbed product the user 20 wants to purchase into the virtual shopping cart 24 is a movement of a hand 10 of the user 20 representing throwing something down. This movement of a hand 10 of the user 20 is symbolized by arrow 27 in figure 3.

Also, the presented interactive virtual model 22 can be altered responsive to a change of a viewing direction of the user 20. Thereby, the focus of the displayed part of the physical storefront can be laid on products the user 20 is actually regarding and seems to be interested in. Within the embodiment of figure 3, the viewing direction of the user 20 is symbolized by arrow 28. Therein, the required means for sensing a viewing direction can be established through optical sensors or parts of the 3D glasses, for example a computing device with eye projection.

Further, the virtual shopping cart 24 can be moved through the interactive virtual model 22 responsive to a sensed movement of the user 20. Therefore, the user 20 can act and has the experience as if he were a physical shopper within a physical storefront, in particular moving a shopping cart through a physical storefront. The movement of the user 20 is symbolized by arrow 29 in figure 3.

Further, the user 20 may want to move from one section of the store to another. Therefore, referring to the embodiment of figure 3, a command of the user 20 is detected and the presented interactive virtual model 22 is adapted responsive to the command of the user 20. Referring to the embodiment of figure 3, the command is a voice control command. However, the command can also be a gesture of the user 20, a movement of an additional device integrated into the computing device 21, such as a motion sensor integrated into a smartphone, or a movement of an additional device connected to the computing device 21, such as a mouse or a joystick. Therein, the command can comprise a request to display a special product or a group of products the user 20 wants to visually and precisely survey, or can comprise an order to display another section of the physical storefront. Furthermore, settings of the computing device and, thus, of the interactive virtual model, such as the color of the interactive model, the used language or background music, can also be adapted responsive to a user's command. Therefore, the advantages of online shopping, such as search functionality, can be combined with the advantages of physical shopping, so that a person remotely located from a physical area can feel like being physically present in the physical storefront.

According to the shown embodiment, the user 20 can further interact with another user, symbolized by shopper 30, and, therefore, with a second user using the interactive three-dimensional virtual model 22 of the physical storefront at the same time. Therefore, information associated with activities of a second user of the interactive virtual model 22 of the physical storefront is received, representations of the activities of the second user are displayed to the user 20 and the user 20 is permitted to contact the second user. Further, it is also possible to exchange grabbed products between the users.

Therefore, the computing device 21 may contain a keyboard and a microphone as additional devices, as well as receivers for receiving real time text, video images or audio signals from a second user, to conduct a conversation in real time via the network.

The interaction between the user 20 and the second user is symbolized by arrow 31 in figure 3.

Further, the user 20 may also be permitted to contact an avatar, for example one indicating an employee available to assist the user 20 who requires assistance.

Further, referring to figure 3, the virtual shopping cart 24 is displayed within the user interactive interface 12 if the viewing direction of the user 20 points to the ground. The viewing direction of the user pointing to the ground is symbolized by arrow 32 in figure 3. Such a visualization of the virtual shopping cart 24 is illustrated with reference to figure 4.

Figure 4 illustrates a schematic diagram of a virtual shopping cart 24 according to a fourth embodiment.

As can be seen in figure 4, virtual images of the products laid into the virtual shopping cart 24 are displayed on the display 12 of the computing device. In the embodiment shown, virtual object 11 is represented in the shopping cart 24, indicating that the user 20 wants to purchase the object 11 and has recently placed it in his shopping cart 24.

Referring to figure 4, there is also a shopping list 33 displayed on the display 12 of the computing device. According to the embodiment shown, the shopping list 33 comprises identifications of the products 34 laid into the virtual shopping cart 24 and the total price 35 of the products laid into the virtual shopping cart 24. Therefore, the user 20 gets an overview of the total price of the products actually laid into his virtual shopping cart 24. For example, if an actual total price 35 is larger than an available budget of the user 20, the user 20 can make appropriate changes, for example decide to virtually put products actually lying in the virtual shopping cart 24 back into the shopping racks of the interactive three-dimensional virtual model 22 of the physical storefront.

Figure 5 illustrates a system 40 for providing an interactive virtual model of a physical area according to a fifth embodiment, wherein at least one object is positioned in the physical area.

In this embodiment, the system 40 comprises an interface server 41 configured to present an interactive three-dimensional virtual model of a physical area on a display 42 of a computing device 43 remotely located from the physical area to a user. Therein, the interface server 41 receives data from the physical area via a wireless or a wired network, for example the internet. The system 40 of figure 5 further comprises sensing means 44 to sense a grab signal of the user of the computing device 43 indicating a virtual grabbing of an object displayed in the interactive virtual model, second sensing means 45 to sense gestures of the user indicating a movement of the grabbed object, and adapting means 46 to adapt the display 42 such that the grabbed object is displayed on the display 42 responsive to the gestures of the user. In the embodiment shown, the second sensing means 45 include means for biomechanical sensing of muscle contractions for sensitive, accurate gesture-based control of computing devices. Therein, the sensing means 44 and the second sensing means 45 can be connected to the interface server 41 via a wireless network, which is symbolized by arrow 47 in figure 5. Further, the sensing means 44 and the second sensing means 45 can also be connected to the interface server 41 via a wired network.

In this embodiment, the computing device 43 is illustrated as a smartphone 48 and is associated with the user, who is able to carry a smartphone. Further, the computing device 43 may also be any other computing device with wireless or wired networking capabilities, for example a personal computer or a personal digital assistant.

The system 40 further comprises 3D wearable virtual reality glasses 49. Using the 3D wearable virtual reality glasses 49, the user can virtually walk through an area as if he were truly walking through the physical area and, therefore, feel like being physically present in the physical area.

Figure 5 illustrates that the system 40 further comprises a plurality of third sensing means 50 to sense a head movement of the user and/or gestures of the user and/or a viewing direction of the user and/or a movement of the user and/or a command of the user. Therein, each of the plurality of third sensing means can be a standalone device or can be integrated in the computing device.

Therein, sensing means 51 to sense a head movement of the user include an attitude sensor 52, a motion sensor 53 as well as an evaluation unit 54 for real time or near real time evaluation of the sensed signals, for example to determine an object the user wants to grab. Therein, the respective sensing means 52, 53 can be integrated into the computing device or connected to the computing device. For example, a smartphone with integrated sensing means can be inserted in a special binocular holder, which is fixed to the head of the user, for displaying a stereoscopic image and sensing a head movement of the user.

The shown sensing means 55 to sense gestures of the user include mobility sensor devices 56 for biomechanical sensing of muscle contractions for sensitive, accurate gesture-based control of the computing device 43 and a second evaluation unit 57 for real time or near real time evaluation of the sensed signals, for example for indicating a laying of a grabbed object the user wants to purchase into the virtual shopping cart. Further, a camera unit integrated in a smartphone can also be used for sensing the gestures of the user.

Further, the sensing means 58 to sense a viewing direction of the user in the embodiment shown include an optical sensor 59 and a third evaluation unit 60 for real time or near real time evaluation of the sensed signals, for example for altering the presented interactive virtual model responsive to a change of the viewing direction. The sensing means 58 to sense a viewing direction of the user can also comprise motion sensors and/or mobility sensor devices for biomechanical sensing of muscle contractions. Further, the sensing means 58 for sensing a viewing direction can also be established through optical sensors or parts of 3D glasses, for example a computing device with eye projection.

The shown sensing means 61 to sense a movement of the user are realized by a motion sensor 62 incorporated within the computing device 43 associated with the user and a fourth evaluation unit 63 for real time or near real time evaluation of the sensed signals, for example for moving a virtual shopping cart through the interactive virtual model of a physical storefront. Further, the means 61 for sensing a movement of the user can also be an apparatus similar to the grip of a physical shopping cart.

The shown sensing means 64 to sense a command of the user include speech recognition software 65 for evaluating voice control commands of the user in order to display a special object or to display another section of the physical area. Furthermore, settings of the computing device and, thus, of the interactive virtual model, such as the color of the interactive model, the used language or background music, can also be adapted responsive to a user's command. Further, the command can also be a movement of an input device integrated into or connected to the computing device. Therefore, a joystick 70 of the computing device 43 is also shown.

The system 40 of figure 5 further comprises a plurality of fourth sensing means 66, each associated with a physical object of a physical area, a physical area server 67 configured to automatically detect a location of each of the sensors 66 within the physical area and to associate that detected location with a location of the physical object, and a virtual area server 68 configured to update a virtual location of virtual objects in the interactive virtual model corresponding to the physical objects in accordance with the sensed location data received from the physical area server 67. Therefore, in the embodiment of figure 5, real time or near real time updates can ensure that the interactive virtual model and the physical area are synchronized.

As illustrated in figure 5, the system 40 also comprises a communication server 69 configured to enable real time communications between the user and a second user of the interactive virtual model of the physical area and/or an avatar. Therein, other users, or an employee who can be called to assist users requiring assistance, may be represented by avatars.