


Title:
A LOCATION INFORMATION SYSTEM
Document Type and Number:
WIPO Patent Application WO/2016/036311
Kind Code:
A1
Abstract:
A location information system is described that comprises a storage device arranged to store model data indicative of a three-dimensional (3D) model of a location, and point of interest (POI) data indicative of at least one POI associated with the location. A POI is displayed if its position is visible in an area of view. A renderer renders an image that is indicative of a current area of view of the 3D model and includes the POI identifier when the POI is visible in the current area of view. A user interface facilitates selection by a user of the POI identifier which, when selected, causes content information associated with the selected POI to be retrieved and rendered for display by the user interface.

Inventors:
MAK JOON MUN (SG)
Application Number:
PCT/SG2014/000411
Publication Date:
March 10, 2016
Filing Date:
September 01, 2014
Assignee:
3RD PLANET PTE LTD (SG)
International Classes:
G06F17/30; G06Q30/02
Domestic Patent References:
WO2014058916A1 (2014-04-17)
Foreign References:
US20120162253A1 (2012-06-28)
US20130321461A1 (2013-12-05)
Attorney, Agent or Firm:
JOYCE A. TAN & PARTNERS (#15-04 Suntec Tower Three, Singapore 8, SG)
Claims:
The Claims defining the Invention are as follows:

1. A location information system comprising:

a storage device arranged to store model data indicative of a three-dimensional (3D) model of a location, and points of interest (POI) data indicative of points of interest (POI) associated with the location, the POI data comprising positional information defining a position of each POI relative to the 3D model at which the POI will be displayed if the position is visible in an area of view, and content information associated with each POI;

the system arranged to combine model data indicative of an area of view with POI data having an associated position located in the area of view;

a renderer arranged to render an image that is indicative of a current area of view of the 3D model, the rendered image including at least one POI identifier representative of at least one POI if any POI are visible in the current area of view; and

a user interface arranged to facilitate selection by a user of a POI identifier, wherein the system is arranged such that when a POI identifier is selected by a user, content information associated with the selected POI is retrieved and rendered such that the content information can be displayed.

2. The location information system of claim 1 wherein an area of view is determined by at least one of: a position of a virtual camera, a direction of the virtual camera, and an angle of view of the virtual camera.

3. The location information system of claim 2 wherein the user interface is arranged to facilitate selection by a user of a direction of a view of the virtual camera and/or of an angle of view of the virtual camera.

4. The location information system of any one of the preceding claims wherein the system is arranged to facilitate selection of a navigation mode.

5. The location information system of claim 4 wherein in a manual navigation mode, the user interface is arranged to facilitate selection by a user of a position of the virtual camera.

6. The location information system of claim 4 or 5 wherein in a semi-automatic navigation mode, positions of the virtual camera are predefined according to a virtual camera path, the virtual camera path defining successive positions of the virtual camera.

7. The location information system of claim 6, wherein the storage device is arranged to store path data indicative of the virtual camera path, each position being associated with a default direction of view and a default angle of view.

8. The location information system of claim 7 wherein the path data comprises stopping information indicative of stopping positions at which the virtual camera stops along the virtual camera path, each stopping position being associated with a default direction of view of the virtual camera and a default angle of view of the virtual camera.

9. The location information system of claim 8, wherein the user interface is arranged to facilitate selection of a stopping position.

10. The location information system of claim 8 or 9 wherein the user interface is arranged to facilitate selection of a previous or next stopping position according to an order of stopping positions.

11. The location information system of claim 9 or 10 wherein a navigation component is used that is provided on a display, the navigation component comprising selectable stop elements representative of respective stopping positions.

12. The location information system of any one of claims 8 to 11 being arranged to retrieve path data and to use the retrieved path data to retrieve model data indicative of the successive areas of view from a previous stopping position to a current stopping position.

13. The location information system of any one of the preceding claims being arranged to display the rendered image.

14. The location information system of any one of the preceding claims, wherein the POI data comprises identifier information, the identifier information being indicative of the POI identifier associated with a POI.

15. The location information system of claim 14 wherein the POI identifier includes a pin.

16. The location information system of any one of the preceding claims being arranged such that the content information is displayed in the current area of view.

17. The location information system of any one of the preceding claims comprising an authentication component such that upon correct authentication of a user the system is arranged to operate in admin mode in which data stored at the data storage can be edited.

18. The location information system of any one of the preceding claims wherein the renderer is arranged to render images in real time.

19. The location information system of any one of the preceding claims wherein the user interface comprises a network interface and a web server such that the system is accessible through a communications network.

20. A method for providing information on a location, the method comprising:

providing and storing model data indicative of a 3D model of a location,

providing and storing POI data indicative of points of interest (POI) associated with the location, the POI data comprising positional information defining a position of each POI relative to the 3D model at which the POI will be displayed if the position is visible in an area of view, and content information associated with each POI;

combining model data indicative of an area of view with POI data having an associated position located in the current area of view;

rendering an image that is indicative of a current area of view of the 3D model, the rendered image including at least one POI identifier representative of at least one POI if any POIs are visible in the current area of view;

displaying the rendered image; and

facilitating selection of a POI identifier such that when a POI identifier is selected by a user, content information associated with the selected POI is retrieved and rendered such that the content information can be displayed.

21. A computer program arranged when loaded into a computing device to instruct the computing device to operate in accordance with the location information system according to any one of the preceding claims.

22. A computer readable medium storing a computer program arranged when loaded into a computing device to cause the computing device to operate in accordance with a location information system according to any one of the preceding claims.

Description:
A LOCATION INFORMATION SYSTEM

Field of the Invention

The present invention relates to a location information display/navigation system, and a method of providing information in relation to a location.

Background of the Invention

Existing geographical information programs such as Google Earth or Apple Maps enable a user to navigate through a 3D representation of a city, including 3D buildings and structures such as bridges.

Google Earth further provides a street view feature in which a user can navigate along selected streets in the world. In order to provide the street view feature, an automobile travels along the streets taking a photograph every few meters. The photographs are then combined such that the user is provided with panoramic views along the streets.

However, navigation using existing programs is cumbersome and information of interest may not readily be available. As a consequence, users may not fully appreciate a destination.

Summary of the Invention

In accordance with a first aspect, there is provided a location information system comprising:

a storage device arranged to store model data indicative of a three-dimensional (3D) model of a location, and points of interest (POI) data indicative of points of interest (POI) associated with the location, the POI data comprising positional information defining a position of each POI relative to the 3D model at which the POI will be displayed if the position is visible in an area of view, and content information associated with each POI;

the system arranged to combine model data indicative of an area of view with POI data having an associated position located in the area of view;

a renderer arranged to render an image that is indicative of a current area of view of the 3D model, the rendered image including at least one POI identifier representative of at least one POI if any POI are visible in the current area of view; and

a user interface arranged to facilitate selection by a user of a POI identifier, wherein the system is arranged such that when a POI identifier is selected by a user, content information associated with the selected POI is retrieved and rendered such that the content information can be displayed.

The system " in accordance with embodiments of the present invention provides significant advantages. " In particular, the system provides an interactive tool that enables a user to quickly understand and to fully explore a location virtually before the user visits the location. Furthermore, by combining the data indicative of POIs with a 3D model of the location, the user is able to obtain more detailed information about unique features and/or aspects of the location as the features and/or aspects are displayed.

An area of view may be determined by at least one of: a position of a virtual camera, a direction of the virtual camera, and an angle of view of the virtual camera. In an embodiment, the user interface is arranged to facilitate selection by a user of a direction of a view of the virtual camera and/or of an angle of view of the virtual camera.

In an embodiment, the system is arranged to facilitate selection of a navigation mode.

In a manual navigation mode, the user interface is arranged to facilitate selection by a user of a position of the virtual camera.

In a semi-automatic navigation mode, positions of the virtual camera are predefined according to a virtual camera path, the virtual camera path defining successive positions of the virtual camera.

In this embodiment, the storage device is arranged to store path data indicative of the virtual camera path, each position being associated with a default direction of view and a default angle of view.

The path data may further comprise stopping information indicative of stopping positions at which the virtual camera stops along the virtual camera path, each stopping position being associated with a default direction of view of the virtual camera and a default angle of view of the virtual camera.

In an embodiment, the user interface is arranged to facilitate selection of a stopping position. In an embodiment, the virtual camera path defines an order of stopping positions. The user interface may be arranged to facilitate selection of a previous or next stopping position according to the order of stopping positions. For example, a navigation component may be used that is provided on a display, the navigation component comprising selectable stop elements representative of respective stopping positions.

In this regard, the system may be arranged to retrieve path data and to use the retrieved path data to retrieve model data indicative of the successive areas of view from a previous stopping position to a current stopping position. The system is further arranged to combine the retrieved model data with POI data having an associated position in the area of view at the current stopping position. In this way, a plurality of images may be rendered that is indicative of successive areas of view of the 3D model.

In an embodiment, the system is arranged to display the rendered image.

In an embodiment, the plurality of rendered images is displayed in the form of a moving image.

In an embodiment, the POI data comprises identifier information, the identifier information being indicative of the POI identifier associated with a POI. The POI identifier may include a pin, a marker, a label, or the like.

In an embodiment, the system is arranged such that the content information is displayed in the current area of view. For example, the content information may be displayed as a text box in a display area of a display. Alternatively, the content information may be displayed in a further display area, such as in a separate window.

In an embodiment, the system comprises an authentication component such that upon correct authentication of a user the system is arranged to operate in admin mode in which data stored at the data storage can be edited.

In this regard, the authentication component may enable the user to add, remove or amend any one or more of: model data, POI data and path data. Specifically, a user may add, remove or amend positional data to specify which POI is visible for a selected area of view. For example, the system may be arranged such that POI data and/or path data is added by selecting positions in a current area of view of the 3D model.

In an embodiment, the renderer is arranged to render images in real time.

In an embodiment, the user interface comprises a network interface and a web server such that the system is accessible through a communications network.

In accordance with a second aspect, there is provided a method for providing information on a location, the method comprising:

providing and storing model data indicative of a 3D model of a location,

providing and storing POI data indicative of points of interest (POI) associated with the location, the POI data comprising positional information defining a position of each POI relative to the 3D model at which the POI will be displayed if the position is visible in an area of view, and content information associated with each POI;

combining model data indicative of an area of view with POI data having an associated position located in the current area of view;

rendering an image that is indicative of a current area of view of the 3D model, the rendered image including at least one POI identifier representative of at least one POI if any POIs are visible in the current area of view;

displaying the rendered image; and

facilitating selection of a POI identifier such that when a POI identifier is selected by a user, content information associated with the selected POI is retrieved and rendered such that the content information can be displayed.

In accordance with a third aspect of the present invention, there is provided a computer program arranged when loaded into a computing device to instruct the computing device to operate in accordance with the location information system according to the first aspect.

In accordance with a fourth aspect of the present invention, there is provided a computer readable medium storing a computer program arranged when loaded into a computing device to cause the computing device to operate in accordance with a location information system according to the first aspect of the present invention.

The invention will be more fully understood from the following description of specific embodiments of the invention. The description is provided with reference to the accompanying drawings.

Brief Description of the Drawings

Figure 1 is a schematic representation of a location information system in accordance with an embodiment of the present invention;

Figures 2 to 5 illustrate a flow chart of a method of providing location information in accordance with an embodiment of the present invention; and

Figures 6 to 10 illustrate exemplary screenshots displayed to a user by the location information system shown in Figure 1.

Detailed Description of Specific Embodiments

Embodiments of the present invention relate to a system for providing information about a geographic location. The system is arranged to display a three-dimensional (3D) representation of the location including specific points of interest (POI) that are visible for a selected area of view associated with the location. In this way, a user is able to obtain more detailed information about features and/or aspects of the location as the features and/or aspects are displayed.

Specifically, upon selection of a POI, POI information associated with the selected POI is displayed. For example, the POI information may include text relating to the POI, images or video relating to the POI, and/or an interactive feature associated with the POI, such as an activity or an online purchase. In this way, specific POI relating to a location can be reviewed before a decision is made to travel to the location.

This may be particularly advantageous for marketing purposes. For example, the location may be a hotel complex and the system may provide specific content information for POI in relation to the hotel complex, such as the pool area. Upon selection of the POI associated with the pool area, a text box may be displayed informing a user about services that are offered at the pool area or displaying photos to the user illustrating the pool area. Thus, a potential visitor to the hotel complex can explore and better understand the hotel complex before making a booking.

The system has particular application to destinations such as hotels, tourist sites, theme parks, shopping malls, airports, town and city centres, or any other location wherein it is desirable to gain location specific information prior to travelling to the location in person. However, the system does have broader applications and may be used in relation to other geographic locations such as continents, countries, provinces, cities, buildings, forests and the like.

Referring " initially to Figure 1, there is shown an embodiment of a system 100 for providing location information. In this particular embodiment, the system 100 is implemented by a computing device, such as a personal computer, and the system 100 is directly accessible by a user for example by virtue of a user input device 102 such as a keyboard, a mouse, a touch pad or any other suitable input device. The system 100 comprises a user interface 104 for receiving input from the user input device 102. Alternatively, the system 100 may be remotely accessible as further described below.

The system 100 is further arranged to provide images on a display 106 that are indicative of a 3D representation of a location from a particular area of view. An area of view is typically defined by at least one of: a position of a virtual camera, a direction of the virtual camera, and an angle of view of the virtual camera.
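
By way of illustration only, the three virtual-camera parameters described above can be captured in a simple record. The following sketch is not taken from the patent; all names and values are illustrative assumptions.

# Illustrative sketch: an "area of view" described by the three
# virtual-camera parameters named above (position, direction, angle of view).
from dataclasses import dataclass

@dataclass
class AreaOfView:
    position: tuple[float, float, float]   # virtual camera position (x, y, z)
    direction: tuple[float, float, float]  # unit vector along the viewing direction
    angle_of_view: float                   # field of view in degrees

# Example: a bird's eye view looking straight down at the model origin.
initial_view = AreaOfView(position=(0.0, 0.0, 500.0),
                          direction=(0.0, 0.0, -1.0),
                          angle_of_view=60.0)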

The displayed 3D representation includes POI identifiers representative of associated POI, if any POI are visible for the particular area of view.

In this embodiment, the input device 102 and the display 106 are illustrated as separate devices. However, a person skilled in the art will appreciate that the input device 102 and the display 106 may be combined into a single touchscreen-enabled user computing device, such as a smartphone, a laptop or a tablet computer.

The system 100 further comprises a control unit 108 that may be implemented using a processor such as a central processing unit (CPU) or a graphics processing unit (GPU). However, other implementations are envisaged. The control unit 108 is arranged to control and coordinate operations in the system 100.

The system 100 also includes a data storage device 110 arranged to store programs, data and information (not shown) used by the system 100. In this specific embodiment, the data storage device 110 comprises a 3D model database 112, a POI database 114 and a camera path database 116. The 3D model database 112 comprises model data indicative of a 3D model of at least one location, such as a hotel complex. A 3D model of a location includes visual feature data indicative of features of the location that may be visible depending on a particular area of view shown on the display.

The POI database 114 comprises data indicative of POIs for each location. The POI database 114 comprises positional information for each POI, and content information linked to each POI. The positional information defines a position of each POI relative to the 3D model at which the POI will be displayed if the position is visible in an area of view, such as for a particular virtual camera position.

The positional information for a POI may for example be in the form of a 3D coordinate (x, y, z) with respect to a common reference point of the associated 3D model. Alternatively, the positional information for a POI may comprise GPS coordinates, i.e. latitude, longitude and elevation information.

The POI database 114 further comprises a graphical identifier for each POI that is displayed on the 3D representation when the positional information of the POI corresponds to the current area of view. For example, a POI identifier may include a pin, a marker, a label or the like.
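
By way of illustration only, a POI record of the kind held in the POI database 114 might combine the positional information, the graphical identifier and the linked content information as sketched below. The field names and values are assumptions for this example and do not appear in the patent.

# Illustrative sketch of a POI record: position stored either as a
# model-relative (x, y, z) coordinate or as GPS latitude/longitude/elevation,
# plus the graphical identifier type and the linked content information.
from dataclasses import dataclass
from typing import Optional

@dataclass
class PointOfInterest:
    poi_id: str
    model_xyz: Optional[tuple[float, float, float]] = None  # relative to the 3D model reference point
    gps: Optional[tuple[float, float, float]] = None         # (latitude, longitude, elevation)
    identifier: str = "pin"                                   # e.g. pin, marker or label
    content: str = ""                                         # content information shown on selection

pool_area = PointOfInterest(
    poi_id="pool-area",
    model_xyz=(120.0, 45.0, 3.5),
    content="Services offered at the pool area, photographs, opening hours.")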

For example, if the location is a hotel complex, a POI may relate to the pool area of the hotel complex. The POI identifier may be a pin that is displayed at or adjacent the pool area when the current area of view includes the pool area. The content information linked to the POI may provide the user with further information in relation to the pool area, such as historical information, photographs, services provided at the pool area or the like.

The camera path database 116 comprises path data indicative of a virtual camera path that defines successive positions of the virtual camera relative to the 3D model. The path data further comprises information indicative of a default direction of view of the virtual camera and a default angle of view of the virtual camera associated with each position of the virtual camera along the virtual camera path.
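
By way of illustration only, path data of this kind can be thought of as an ordered list of camera positions, each carrying a default direction of view and a default angle of view; a flag for the stopping positions used by the guided tour mode described in the next paragraph is also shown. The names and coordinates below are assumptions, not taken from the patent.

# Illustrative sketch of a virtual camera path: successive camera positions,
# each with a default direction of view and a default angle of view, and an
# optional flag marking stopping positions for a guided tour.
from dataclasses import dataclass

@dataclass
class PathPoint:
    position: tuple[float, float, float]
    default_direction: tuple[float, float, float]
    default_angle_of_view: float
    is_stop: bool = False   # True at stopping positions along the path

camera_path = [
    PathPoint((0.0, -50.0, 30.0), (0.0, 1.0, -0.3), 60.0),
    PathPoint((0.0, -20.0, 20.0), (0.0, 1.0, -0.3), 60.0, is_stop=True),  # e.g. near the pool area
    PathPoint((15.0, 0.0, 20.0), (1.0, 0.0, -0.2), 50.0, is_stop=True),
]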

For an automated navigation mode such as a guided tour mode, the path data may further comprise information indicative of stopping positions at which the virtual camera stops along the virtual camera path. Each stopping position is associated with a default direction of view of the virtual camera and a default angle of view of the virtual camera. For example, the system may be arranged such that a moving image is displayed on the display 106 illustrating successive areas of view along a virtual camera path until a stopping position is reached. At each stopping position, the moving image stops and movement does not continue until an input is received from a user. Typically, the stopping positions would be selected to coincide with POI that are considered to be of particular interest to the user. Further, stopping positions may be selected according to requirements of a user. For example, a first guided tour may be created illustrating POI in relation to shopping areas and a second guided tour may be created illustrating POI in relation to historical sites.

Referring back to Figure 1, the system 100 comprises a database management system ("DBMS") 118 that is arranged to access information in the data storage device 110. The DBMS may for example be implemented by a relational database management system. Based on instructions from the control unit 108, the DBMS 118 retrieves data from the data storage device 110 and stores the data in a buffer 120. The buffer 120 may for example be implemented by a memory device, such as RAM.

Specifically, in response to an input by a user indicative of a selection of an area of view relative to a location at the user interface 104, the DBMS 118 is arranged to retrieve model data indicative of the area of view from the 3D model database 112. The DBMS 118 is further arranged to retrieve POI data having an associated position located in the area of view from the POI database 114. The control unit 108 combines the retrieved model data with the retrieved POI data indicative of the POI and the combined data is stored at the buffer 120 by the DBMS.
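
A minimal sketch of this retrieve-and-combine step is given below, using a simple cone test in place of a full view-frustum check to decide whether a POI position lies in the area of view. The function and attribute names (fetch, all_pois, model_xyz and so on) are assumptions for the example only, not the patent's interfaces.

import math

def poi_in_view(poi_xyz, cam_pos, cam_dir, angle_of_view_deg, max_distance=1000.0):
    # Treat the area of view as a cone: the POI is in view when the angle
    # between the (unit) camera direction and the camera-to-POI vector is
    # within half the angle of view and the POI is not too far away.
    to_poi = [p - c for p, c in zip(poi_xyz, cam_pos)]
    dist = math.sqrt(sum(v * v for v in to_poi))
    if dist == 0.0 or dist > max_distance:
        return dist == 0.0
    cos_angle = sum(d * v for d, v in zip(cam_dir, to_poi)) / dist
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_angle)))) <= angle_of_view_deg / 2

def prepare_view(model_db, poi_db, view):
    # Retrieve model data for the area of view, select the POIs whose stored
    # positions fall inside it, and combine the two for storage in the buffer.
    model_data = model_db.fetch(view)
    pois = [p for p in poi_db.all_pois()
            if poi_in_view(p.model_xyz, view.position, view.direction, view.angle_of_view)]
    return {"model": model_data, "pois": pois}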

In addition, the DBMS 118 may retrieve path data from the camera path database 116. The control unit 108 then uses the retrieved path data to retrieve model data from the 3D model database 112 indicative of the successive areas of view.

The system 100 further includes a renderer 122 arranged to control and coordinate image rendering operations such that a rendered image is displayed on the display 106. The renderer 122 is arranged to render an image indicative of a current area of view of the 3D model using the combined data stored at the buffer 120. If any POI are visible in the current area of view, the rendered image includes at least one POI identifier representative of at least one POI. The renderer 122 may further be arranged to render a plurality of successive images such that an animated sequence of images indicative of successive areas of view when travelling along a virtual camera path is displayed on the display 106.

The system 100 also includes frame buffers 124 used to render images for use by the display 106. In this particular example, the frame buffers 124 include a display buffer used to store information indicative of an image to be displayed by the display 106, and a back buffer in which image information is initially rendered prior to transference to the display buffer and thereby display on the display 106. One or more off-screen buffers may further be provided that are used to add other functionality to the rendering operations, for example a stencil buffer usable to add features such as shadows to a rendered image.
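
The back-buffer/display-buffer arrangement described above amounts to double buffering: the renderer draws into an off-screen buffer and the two buffers are then swapped so that only complete frames reach the display. A minimal, illustrative sketch follows; the class and function names are assumptions for the example.

class FrameBufferPair:
    # Illustrative double buffering: draw into the back buffer, then swap it
    # with the display buffer so the display never shows a half-drawn frame.
    def __init__(self, width, height):
        self.display = [[0] * width for _ in range(height)]  # frame currently shown
        self.back = [[0] * width for _ in range(height)]     # frame being rendered

    def swap(self):
        self.display, self.back = self.back, self.display

def render_frame(buffers, draw):
    draw(buffers.back)   # render the new image off-screen
    buffers.swap()       # present it in a single step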

It will be appreciated that the renderer 122 may be part of a GPU. In this regard, the GPU may further comprise a data storage device arranged to store programs and/or data for use by the renderer 122, and a GPU memory arranged to temporarily store programs and/or data for use by the GPU. However, other suitable implementations are envisaged. For example, the renderer 122 may be part of the CPU.

In accordance with an alternative embodiment, the system 100 is implemented in the form of a computer server. In this embodiment, the user interface 104 comprises a web server and a network interface. In this embodiment, the system 100 is accessible through a communications network such as the Internet from a user computing device, such as a tablet computer, a personal computer or a smart phone. Specifically, the system 100 may be accessible through web pages served to the user computing device by the web server. This may be realised by software implemented by the control unit 108 of the system 100, and through an application programming interface (API) that communicates with the user computing devices using a dedicated application installed on the user computing device. The network interface may for example be arranged to facilitate wireless communications with the system 100.
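
By way of illustration only, remote access of this kind could be exposed as a small HTTP endpoint that serves POI content to a browser or to a dedicated application. The patent does not name a particular web framework; Flask is used below purely as an example, and the stored content is placeholder data.

from flask import Flask, abort, jsonify

app = Flask(__name__)

# Placeholder store standing in for the POI database 114.
POI_CONTENT = {"pool-area": "Text, images or video describing the pool area."}

@app.route("/poi/<poi_id>")
def poi_content(poi_id):
    content = POI_CONTENT.get(poi_id)
    if content is None:
        abort(404)
    return jsonify({"id": poi_id, "content": content})

if __name__ == "__main__":
    app.run()   # serves requests from user computing devices over the network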

In this example, the system 100 is arranged to display the rendered images on the corresponding display of the tablet computer, the personal computer or the smart phone, respectively. However, it will be understood that any communications enabled computing device that is capable of communicating with the system 100 is envisaged, such as a laptop computer or PDA.

Furthermore, a person skilled in the art will appreciate that the data storage device 110 may not be part of the system 100. For example, if the system is accessible through a communications network, the data may be stored in cloud storage remote from the system 100.

Referring now to Figures 2 to 5, there are shown flow charts illustrating a method of providing location information in accordance with an embodiment of the present invention. The method of providing location information may for example be implemented by the system 100 as shown in Figure 1.

Access to a location information system is facilitated such that the system can receive inputs from a user at a user interface. As an alternative to receiving direct inputs at the location information system 100 from a user, this step may be implemented using a network interface and a web server so as to facilitate remote access to the system as described above.

The location that is to form the subject of the information system may be selected by a user at the user interface. In this regard, the user may be provided with a plurality of selectable locations, for example by displaying a list of selectable locations on a display. However, other implementations are envisaged.

In an alternative example, the location information system may be specific to a single location. For example, the system may be used for marketing purposes to provide a person with location information about a hotel complex or a tourist site. In this regard, the location information system may be accessible through a website of the hotel complex.

Model data indicative of a 3D model of the location is provided and stored in a data storage device. POI data indicative of POI associated with the selected location is provided and stored in a data storage device. The POI data comprises positional information that defines a position of each POI relative to the 3D model at which the POI will be displayed if the position is visible in an area of view. The POI data further comprises content information associated with each POI. Also, path data indicative of a virtual camera path that defines a plurality of successive areas of view is provided and stored in the data storage device. The path data further comprises information indicative of a plurality of stopping positions for the virtual camera along the virtual camera path.

The model data indicative of the 3D model of the location may be created using any suitable method. Exemplary software for creating such model data includes AutoCAD, 3ds Max and Autodesk Maya. It should be noted that the model data may alternatively be created using photographs. Exemplary software for this method includes Autodesk 123D Catch.

Referring now to Figure 2, there is shown a flow chart illustrating process steps of rendering that are implemented by the system 100 shown in Figure 1.

In step 202, an area of view of the 3D model of the location is selected, either by the system 100 or by a user. In this particular embodiment, an area of view is determined according to a combination of a position of a virtual camera, a direction of the virtual camera, and an angle of view of the virtual camera.

Initially, an area of view is selected that is stored as a default setting in the data storage device. In other words, the initial area of view is defined by a default virtual camera position, a default direction of view of the virtual camera and a default angle of view of the virtual camera. For example, the initial area of view relative to the 3D model may provide the user with a bird's eye view of the location. However, it will be appreciated that in other implementations, the user may select at least one of: an initial virtual camera position, an initial direction of view of the virtual camera and an initial angle of view of the virtual camera. In an alternative implementation, a random number generator is used to determine the settings for the initial area of view.

In a next step 204, model data indicative of the selected area of view of the 3D model is retrieved from the data storage device.

In a further step 206, POI data having an associated position located in the selected area of view is retrieved from the data storage device 110 and combined with the retrieved model data. The combined data is then stored 208 at a buffer 120.

In a further step 210, a renderer 122 uses the combined data from the buffer 120 in order to render an image that is indicative of the selected area of view of the 3D model of the location. The rendered image includes at least one POI identifier representative of at least one POI if any POI are visible in the selected area of view.
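
By way of illustration, drawing a POI identifier into the rendered image requires projecting the POI's 3D position into 2D screen coordinates; a simple pinhole projection, assuming the position is already expressed in camera coordinates, is sketched below. This is an assumption for the example and not the patent's rendering method.

def project_to_screen(poi_cam_xyz, focal_length, width, height):
    # Pinhole projection of a camera-space point onto the image plane.
    x, y, z = poi_cam_xyz
    if z <= 0:                          # behind the virtual camera: no identifier drawn
        return None
    u = width / 2 + focal_length * x / z
    v = height / 2 - focal_length * y / z
    if 0 <= u < width and 0 <= v < height:
        return (int(u), int(v))         # pixel at which to draw the pin
    return None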

The rendered image is subsequently displayed 218 on a display.

An exemplary image indicative of an area of view of a 3D model that is displayed on a display 106 is shown in Figure 6, in which the user is provided with a bird's eye view of the hotel "Venetian" 602. In this example, the POI identifiers are pins 604 that are displayed at the associated POI.

A text box 606 is additionally displayed in the top left corner of the display area providing a user with a list of POI. In this example, each POI displayed in the list of the text box 606 is associated with an area of view in which the POI is displayed in further detail. Upon selection of a POI in the text box 606, the associated area of view is selected and the rendering process in Figure 2 is implemented.

Referring now to Figure 3, there are shown method steps relating to selection of a POI identifier.

In step 302, an input indicative of a selection of a POI is received from a user at the user interface.

In response to receiving such input at the user interface, content information associated with the selected POI is retrieved 304 from the data storage device and displayed 306 on the display.

The content information may for example include text that is displayed in the current area of view of the 3D model. For example and as shown in Figure 7, a user has selected a POI identifier associated with the pool area of a hotel complex. In response to receiving the input, content information associated with the pool area is displayed in the area of view of the pool area. Specifically, the content information is displayed in a text box 701 within the display area. It will be appreciated that the content information may be displayed in any suitable form, for example in a separate window or pop-up. The content information may include an embedded link to a website such that upon selection of the link, the system is arranged to direct the user to the website.
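
A minimal sketch of steps 302 to 306 is given below: a selection handler looks up the content information for the selected POI and passes it to whatever display mechanism is in use (text box, separate window or pop-up). The names and the example content are assumptions for the illustration.

def on_poi_selected(poi_id, poi_contents, show):
    # poi_contents: mapping of POI id to content information (step 304 lookup);
    # show: display callback, e.g. one that draws a text box in the area of view.
    content = poi_contents.get(poi_id)
    if content is not None:
        show(content)                   # step 306: display the content information

# Example usage, with print standing in for the display:
on_poi_selected("pool-area",
                {"pool-area": "Services offered at the pool area."},
                print)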

In a further embodiment, upon selection of a POI identifier, an image is rendered that is indicative of an enlarged area of view of the 3D model in which the selected POI is displayed in further detail. For example, if the user selects a POI identifier that is associated with the pool area 608 as shown in Figure 6, an enlarged view of the pool area 608 is displayed, as shown in Figure 7. This enlarged view shows a plurality of further POI identifiers that are not visible in the previous area of view of the hotel building 602 shown in Figure 6.

In further embodiments, the content information linked to a POI may include one or more of the following: sound, animation, games, prizes, online transactions and combinations thereof. For example, a POI identifier may be displayed at the pool area of a hotel complex. Upon selection of the POI identifier, the system may be arranged to implement a game or activity, such as a boat ride or navigating a ski slope, the purchase of a ticket, or the redemption of a coupon. Depending on the outcome of the activity, the user may be provided with a discount on accommodation at the hotel or a prize that the user can collect at the pool area.

Referring now to Figures 4 and 5, two navigation modes are further described. In this particular example, selection of one of two navigation modes is facilitated: a manual navigation mode and a semi-automatic navigation mode. However, any suitable navigation modes are envisaged. For example, the system may provide a fully automatic mode in which the system provides a moving image indicative of areas of view along a camera path of the 3D model without allowing a user to select or control the area of view that is displayed. An exemplary screenshot illustrating a selection menu 801 for selecting the manual navigation mode and the semi-automated navigation mode is shown in Figure 8. The manual navigation mode may also be referred to as explorer mode and the semi-automatic navigation mode may be referred to as guided tour mode, in which the system is arranged to display a moving image along a virtual camera path. In this particular example, the navigation mode is selectable by a user at any time during operation of the location information system.

Referring now to Figure 4, an input indicative of a manual navigation mode is received 402. In the manual mode, a user can control the area of view that is rendered in rendering steps 202 - 208. Specifically, the user may select an area of view by using the cursor movement keys of a keyboard and/or using a mouse. However, any other suitable implementations for manually selecting an area of view are envisaged.

In this particular example, the user may select a position of a virtual camera, a direction of view of the virtual camera and an angle of view of the virtual camera. For example, the selected area of view may relate to an enlarged view relative to a previous area of view. Thus, the selection is indicative of a change in the angle of view. By changing the angle of view, further or alternative POI identifiers may be displayed. For example, by zooming in, the area of view may change from representing a continent to a country to a city to a building and to activities on a beach such as scuba diving or surfing. An example of a zoomed view is shown in Figure 7, which relates to a zoomed view in relation to the area of view shown in Figure 6.

Referring now to Figure 5, there is shown a flow chart illustrating method steps in relation to the semi-automated navigation mode. In a first step 502, an input indicative of a selection by a user of the semi-automated navigation mode is received at the user interface 104. In the semi-automated navigation mode, a user may select a stopping position along a virtual camera path, and a moving image indicative of areas of view is displayed from the current stopping position to the selected stopping position along the virtual camera path.

The storage device 110 stores path data indicative of a virtual camera path. The path data further comprises stopping information indicative of stopping positions at which the virtual camera stops along the virtual camera path, each stopping position being associated with a default direction of view of the virtual camera and a default angle of view of the virtual camera. The virtual camera path further defines an order of stopping positions.

In step 504, an input is received at the user interface 104 indicative of a selection of a stopping position. For this step, a navigation component may be used as illustrated in Figures 7 and 8. The navigation component 702, 802 includes stop elements 704, 804 that are each selectable and associated with respective stopping positions along the virtual camera path. The navigation component 702, 802 further comprises a previous stop element 706, 806 and a next stop element 708, 808.

Referring back to Figure 5, in response to receiving an input indicative of a virtual camera position, path data is retrieved 506 indicative of successive areas of view from the current camera position to the selected stopping position along the virtual camera path. The retrieved path data is then used to implement 508 the rendering process in Figure 2.
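
By way of illustration, playing the guided-tour segment between the current stopping position and the selected one can be reduced to rendering the successive path points between the two indices, as sketched below. The names are assumptions, and render_view stands in for the rendering process of Figure 2.

def play_segment(camera_path, current_index, target_index, render_view):
    # Render the successive areas of view along the virtual camera path from
    # the current stopping position to the selected stopping position.
    step = 1 if target_index >= current_index else -1
    for i in range(current_index, target_index + step, step):
        render_view(camera_path[i])
    return target_index              # becomes the new current stopping position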

Exemplary illustrations of a virtual camera path 906, 1006 are shown in Figures 9 and 10.

Figures 9 and 10 illustrate screenshots of two areas of view of the hotel "Venetian" 902, 1002. Each area of view includes a plurality of POI identifiers 904, 1004. The illustrated virtual camera path 906, 1006 includes eight stopping positions at which the virtual camera stops and from which an area of view of the 3D model is provided. An exemplary stopping position 908 is located at the pool area of the hotel complex and a further stopping position 910, 1008 is located at a POI "Rialto Bridge".

Referring back to Figure 5, each stopping position along the virtual camera path is associated with a default direction of view and a default angle of view of the virtual camera. However, while the default view is shown at each stopping position, the user may nevertheless select 510 an alternative area of view. In particular, the user may select a direction of view of the virtual camera at the stopping position and/or an angle of view of the virtual camera at the stopping position.

In response to receiving an input indicative of an area of view at step 510, the rendering process shown in Figure 2 is implemented.

In a further embodiment (not shown), the method may include a step of authenticating a user so that the system may be operated in an admin mode. Upon correct authentication of an admin user, data stored in the data storage device may be edited. In this regard, data may be added, removed or amended. The data may relate to any one or more of: model data, POI data and path data. For example, POI data may be added by selecting a position on a rendered image of a 3D model when the image is displayed on the display 106. This embodiment allows an admin user to add, remove or amend POI when a location develops and changes over time. For example, development of a particular POI may require an addition of further POI for additional areas of view.
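
By way of illustration only, the admin mode described above amounts to gating the editing operations behind an authentication check; a very small sketch follows. The credential handling shown is a placeholder assumption, not the patent's authentication component.

import hashlib
import hmac

ADMIN_PASSWORD_HASH = hashlib.sha256(b"change-me").hexdigest()   # assumed stored credential

def authenticate(password):
    digest = hashlib.sha256(password.encode()).hexdigest()
    return hmac.compare_digest(digest, ADMIN_PASSWORD_HASH)

def add_poi(poi_db, poi, password):
    # Only an authenticated admin user may add, remove or amend POI data.
    if not authenticate(password):
        raise PermissionError("admin authentication required")
    poi_db[poi["id"]] = poi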

It will be appreciated that embodiments of the invention may be provided in the form of a computer program or a computer readable medium that stores the computer program. The computer program is arranged when loaded into a computing device to instruct the computing device to operate in accordance with the system 100 described with reference to Figure 1. The computer readable medium may for example be a portable medium such as a USB device or a CD-ROM.

In the claims which follow and in the preceding description of the invention, except where the context requires otherwise due to express language or necessary implication, the word "comprise" or variations such as "comprises" or "comprising" is used in an inclusive sense, i.e. to specify the presence of the stated features but not to preclude the presence or addition of further features in various embodiments of the invention.