


Title:
SYSTEM AND METHOD FOR INITIATING A SCANNING OPERATION FOR INCORPORATING PAGES IN A PHYSICAL ENVIRONMENT INTO AN ELECTRONIC DOCUMENT
Document Type and Number:
WIPO Patent Application WO/2014/142909
Kind Code:
A1
Abstract:
A system and method are provided for initiating a scanning operation for incorporating pages in a physical environment into an electronic document. The method comprises displaying a user interface (150) that includes a view (152) of a page in a physical environment; while displaying the user interface, detecting an input (154) that is distinguishable from another input received at the user interface to capture images or modify the view; and after detecting the input, initiating a page scanning mode to capture an image of the page.

Inventors:
GRIFFIN JASON TYLER (CA)
HAMILTON ALISTAIR ROBERT (CA)
WESTAWAY ADRIAN LUCIEN REGINALD (GB)
GAGGERO CLARA (GB)
Application Number:
PCT/US2013/031604
Publication Date:
September 18, 2014
Filing Date:
March 14, 2013
Assignee:
BLACKBERRY LTD (CA)
RESEARCH IN MOTION LTD (US)
International Classes:
H04N1/00; H04N1/107; H04N1/195; H04N101/00
Foreign References:
US20110069180A1 (2011-03-24)
US20040201720A1 (2004-10-14)
US20110312380A1 (2011-12-22)
Other References:
ENRIQUE SERRANO: "CamScanner App Review - Scan with your Smartphone", 11 April 2012 (2012-04-11), XP055066488, Retrieved from the Internet [retrieved on 2013-06-12]
Attorney, Agent or Firm:
ZIMMERMAN, Mark C. (Flight & Zimmerman LLC, 150 S. Wacker Drive, Suite 210, Chicago IL, US)
Claims:

1. A method of operating a mobile device, comprising:

displaying a user interface that includes a view of a page in a physical environment; while displaying the user interface, detecting an input that is distinguishable from another input received at the user interface to capture images or modify the view; and

after detecting the input, initiating a page scanning mode to capture an image of the page.

2. The method of claim 1, wherein the user interface is provided by a camera application.

3. The method of claim 1 or claim 2, wherein the input includes a touch input applied to a display screen displaying the user interface.

4. The method of claim 3, wherein the touch input includes a press and hold operation to initiate the page scanning mode.

5. The method of claim 4, wherein continued application of the press and hold operation enables scanning multiple pages in succession.

6. The method of any one of claims 1 to 5, wherein the page scanning mode comprises operating a first camera to detect at least one page that enters the view as a result of movement of the mobile device towards the at least one page.

7. The method of any one of claims 1 to 6, wherein the page scanning mode comprises operating a second camera to detect at least one page that enters the view while the mobile device is stationary.

8. The method of any one of claims 1 to 7, further comprising:

detecting a first page in the view;

initiating a scanning operation; and

capturing an image of the first page.

9. The method of claim 8, further comprising detecting at least one additional page in the view and capturing an image of each additional page.

10. The method of claim 8 or claim 9, further comprising providing feedback during the scanning operation.

11. The method of claim 10, wherein the feedback comprises at least one of a visual indication of the scanning operation, and an indication after each page has been scanned.

12. The method of claim 11, wherein the visual indication illustrates a progress of the scanning operation for each page.

13. The method of any one of claims 8 to 12, further comprising generating a document comprising the first page and each additional page.

14. A computer readable storage medium comprising computer executable instructions for performing the method of any one of claims 1 to 13.

15. A mobile device comprising a processor, memory, a display, and at least one camera, the memory including computer executable instructions for performing the method of any one of claims 1 to 13.

Description:
SYSTEM AND METHOD FOR INITIATING A SCANNING OPERATION FOR INCORPORATING PAGES IN A PHYSICAL ENVIRONMENT INTO AN ELECTRONIC DOCUMENT

TECHNICAL FIELD

[0001] The following relates to systems and methods for initiating a scanning operation for incorporating pages in a physical environment into an electronic document.

DESCRIPTION OF THE RELATED ART

[0002] As the viewing, editing and storing of electronic files increases, the difficulty in working with documents and other pages that exist in a physical environment also increases. For example, although a document may exist in an electronic format, edits to that document may still be made by hand. For those edits to be captured, typically a manual editing process is required of the electronic document, or other software is required to recognize the handwritten text in order to automatically incorporate such changes.

[0003] Although the convenience of electronic storage and the ease of transferring and reproducing electronic documents continue to increase, documents and other pages created and/or modified in the physical environment will often still exist in many practical scenarios, and can be difficult to incorporate into the electronic or virtual environment.

BRIEF DESCRIPTION OF THE DRAWINGS

[0004] Embodiments will now be described by way of example only with reference to the appended drawings wherein:

[0005] FIG. 1 is a schematic illustration of a mobile device configured to obtain images of documents in a physical environment and communicate over a network;

[0006] FIG. 2 is a block diagram of an example of a configuration for a mobile communication device;

[0007] FIG. 3 is a flow chart illustrating an example of a set of computer executable operations that may be performed in scanning pages in a physical environment and incorporating such pages into an electronic collection of documents;

[0008] FIG. 4 is a flow chart illustrating an example of a set of computer executable operations that may be performed in collaborating over an electronically stored document;

[0009] FIG. 5 is a screen shot of an example of a camera viewer with a physical page located therein;

[0010] FIG. 6 is a screen shot of an example of a camera viewer during a page scan operation;

[0011] FIG. 7 is a flow chart illustrating an example of a set of computer executable operations that may be performed in scanning a page in a physical environment from a camera viewer;

[0012] FIG. 8 is a screen shot of an example of a menu including a Present Page to Scan option;

[0013] FIG. 9 is a perspective view illustrating an example of a Present Page to Scan operation;

[0014] FIG. 10 is a flow chart illustrating an example of a set of computer executable operations that may be performed in execution of a Present Page to Scan operation;

[0015] FIG. 11A is a plan view of an accessory supporting a tablet computer and a notepad;

[0016] FIG. 11B is an elevation view of the accessory of FIG. 11A;

[0017] FIG. 11C is an elevation view of the accessory of FIGS. 11A and 11B in a page scan configuration;

[0018] FIG. 12 is an elevation view of the accessory of FIG. 11 in a reading configuration;

[0019] FIG. 13A is a plan view of an accessory supporting a tablet computer and a notepad;

[0020] FIG. 13B is an elevation view of the accessory of FIG. 13A;

[0021] FIG. 13C is an elevation view of the accessory of FIGS. 13A and 13B in a page scan configuration;

[0022] FIG. 14 is a flow chart illustrating an example of a set of computer executable operations that may be performed in detecting the page scan position shown in FIG. 13;

[0023] FIG. 15 is a screen shot of an example of a message composition user interface and a menu including a Paste from Page option;

[0024] FIG. 16 is a screen shot of an example of a camera viewer during a Paste from Page operation;

[0025] FIG. 17 is a screen shot of an example of the camera viewer shown in FIG. 16 after highlighting a copied portion of a page being viewed;

[0026] FIG. 18 is a screen shot of an example of the message composition user interface of FIG. 15 including the copied portion;

[0027] FIG. 19 is a flow chart illustrating an example of a set of computer executable operations that may be performed in executing the Paste from Page operation;

[0028] FIG. 20 is a schematic view of a first stage in an example multi-page scanning operation;

[0029] FIG. 21 is a schematic view of a second stage in an example multi-page scanning operation;

[0030] FIG. 22 is a schematic view of a third stage in an example multi-page scanning operation;

[0031] FIG. 23 is a flow chart illustrating an example of a set of computer executable operations that may be performed in a multi-page scanning operation;

[0032] FIG. 24 is a screen shot of an example of an image view of a multiple scanned page document;

[0033] FIG. 25 is a screen shot of an example of a gesture being performed on a particular page in the image view of a multiple scanned page document;

[0034] FIG. 26 is a screen shot of an example of a first document view of a multiple scanned page document;

[0035] FIG. 27 is a screen shot of an example of a second document view of a multiple scanned page document;

[0036] FIG. 28 is a flow chart illustrating an example of a set of computer executable operations that may be performed in navigating between an image view, a first document view, and a second document view in a scanned document;

[0037] FIG. 29 is a screen shot of an example of a hand-edited page being scanned in a physical environment;

[0038] FIG. 30 is a screen shot of an example of a hand-edited page that has been scanned;

[0039] FIG. 31 is a screen shot of an example of an electronically edited version of the hand-edited page shown in FIG. 30 with an action option displayed according to a detectable edit to the page;

[0040] FIG. 32 is a screen shot of an example of the hand-edited page and an action option displayed according to a detectable edit to the page;

[0041] FIG. 33 is a schematic view of an example of a page being scanned in a physical environment;

[0042] FIG. 34 is a screen shot of an example of a first document view of a multiple page scanned document having multiple version layers;

[0043] FIG. 35 is a screen shot of an example of a layer of an earlier version of the document shown in FIG. 34;

[0044] FIG. 36 is a flow chart illustrating an example of a set of computer executable operations that may be performed in detecting and correcting edits in a scanned page;

[0045] FIG. 37 is a flow chart illustrating an example of a set of computer executable operations that may be performed in displaying different version layers of the same scanned document;

[0046] FIG. 38 is a screen shot of an example of a scanned page and an option to determine others viewing the same page or document including the page;

[0047] FIG. 39 is a screen shot of an example of a user interface for determining others viewing the same page or document including the page and an option to initiate an instant messaging chat with the others;

[0048] FIG. 40 is a flow chart illustrating an example of a set of computer executable operations that may be performed in utilizing the option to determine others viewing and the option to initiate an instant messaging chat with the others;

[0049] FIG. 41 is a schematic view of an example of a hand-edited page being scanned in a physical environment;

[0050] FIG. 42 is a screen shot of an example of the hand-edited page shown in FIG. 41 and an auto correction option;

[0051] FIG. 43 is a screen shot of an example of an auto correction preview;

[0052] FIG. 44 is a flow chart illustrating an example of a set of computer executable operations that may be performed in an auto correction operation;

[0053] FIG. 45 is a screen shot of an example of a mounted page being scanned in a physical environment;

[0054] FIG. 46 is a screen shot of an example of the mounted page being scanned in FIG. 45 and an option to add information from the scanned page to a calendar;

[0055] FIG. 47 is a screen shot of an example of an add to calendar preview and a Go to Site option;

[0056] FIG. 48 is a flow chart illustrating an example of a set of computer executable operations that may be performed in detecting information from a scanned page and initiating an associated action;

[0057] FIG. 49 is a schematic illustration of an example peer-to-peer communication system;

[0058] FIG. 50 is a schematic illustration of an example multi-cast message delivery in a peer-to-peer communication system;

[0059] FIG. 51 is a schematic illustration of an example peer-to-peer message; and

[0060] FIG. 52 is an example of a configuration for a mobile device.

DETAILED DESCRIPTION

[0061] It will be appreciated that for simplicity and clarity of illustration, where considered appropriate, reference numerals may be repeated among the figures to indicate corresponding or analogous elements. In addition, numerous specific details are set forth in order to provide a thorough understanding of the examples described herein. However, it will be understood by those of ordinary skill in the art that the examples described herein may be practiced without these specific details. In other instances, well-known methods, procedures and components have not been described in detail so as not to obscure the examples described herein. Also, the description is not to be considered as limiting the scope of the examples described herein.

[0062] It will be appreciated that the examples and corresponding diagrams used herein are for illustrative purposes only. Different configurations and terminology can be used without departing from the principles expressed herein. For instance, components and modules can be added, deleted, modified, or arranged with differing connections without departing from these principles.

[0063] Pages in a physical environment may be incorporated into an electronic document by obtaining images of such pages and creating or updating an electronic document to include multiple views of the document. For example, page images may be combined in an image view from scanned images of the pages in the physical environment. From the image view, a first document view may be generated by rendering a readable/scrollable/printable or otherwise different version or format of the document, e.g., in a portable document format (PDF). Another or second document view may be generated from the image view and/or first document view that provides a formatted version of the first document view, e.g., editable version, word-wrapped text, larger font, etc., of the same document. Detectable inputs applied in the different views can be utilized to navigate between the different views to accommodate different uses of the same document.

[0064] A page scanning capture mode can be integrated into a mobile device to enable a page scanning operation to be initiated directly from a camera application or other viewing window or user interface providing a continuous view of a physical environment. In this way, while a user is provided with a view of a page in the viewing window or user interface, the user may initiate a page scanning mode and/or operation. The page scanning capture mode may be configured to be initiated based on a touch gesture, menu option, convenience key or other input detectable from within the camera application and/or viewing window or interface. The page scanning capture mode may also trigger the use of a front or rear facing camera to enable both "bring-device-to-page" and "bring-page-to-device" options.
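
By way of illustration only, the distinction between a capture input and a scan-initiating input can be modeled as a simple touch dispatcher. The following Python sketch assumes a hypothetical viewfinder class and a 0.5-second hold threshold, neither of which is specified in the application:

```python
import time

HOLD_THRESHOLD_S = 0.5  # assumed press-and-hold threshold; not given in the application

class CameraViewfinder:
    """Hypothetical dispatcher distinguishing a tap-to-capture from a press-and-hold."""

    def __init__(self):
        self._touch_started = None

    def on_touch_down(self):
        self._touch_started = time.monotonic()

    def on_touch_up(self):
        held = time.monotonic() - self._touch_started
        if held >= HOLD_THRESHOLD_S:
            # A press-and-hold is distinguishable from the tap used to capture
            # an image, so it initiates the page scanning mode instead.
            self.enter_page_scan_mode()
        else:
            self.capture_image()

    def enter_page_scan_mode(self):
        print("page scanning mode initiated")

    def capture_image(self):
        print("image captured")

vf = CameraViewfinder()
vf.on_touch_down()
time.sleep(0.6)   # simulate holding past the threshold
vf.on_touch_up()  # prints: page scanning mode initiated
```

Consistent with claim 5, continued application of the hold could keep the scanning mode active so that successive pages are scanned until the touch is released.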

[0065] A page scanning capture mode can also be initiated in conjunction with a copy-and-paste operation to enable a portion of a viewed object (e.g., a page) to be scanned into a document or to be added as a portion of text in an application.

[0066] A mobile device that is configured to perform page scanning as herein described may also be operable to initiate a page scanning operation by detecting movement of an accessory to which the mobile device is attached or coupled, or in or by which it is otherwise physically supported, e.g., to allow the mobile device to scan a pad of paper also supported by the accessory.

[0067] The page scanning operations described herein also facilitate collaboration between multiple parties, enabling different parties to generate different layers of the same document for incorporating edits to the document from different devices. The page scanning operations further facilitate the inclusion of both handwritten and electronic edits and the detection of actions from edits made to a document. The page scanning operations also enable information in scanned text to be used to automatically create new items, such as calendar events and contact entries.

[0068] Turning to FIG. 1, a mobile device 10 is shown, which is capable of capturing images of pages 12 or portions of a page 12 in a physical environment 14 within a field of view (FOV) 16 of an imaging device such as a camera. It will be appreciated that for illustrative purposes, a "page 12" as herein described may generally refer to any item in the physical environment 14 for which an image or scan can be captured by the mobile device 10. For example, a page 12 may include a sheet of paper, a poster, a business card, an image displayed on a screen or monitor, indicia or writing/text on a surface, etc. It will also be appreciated that the FOV 16 may be utilized by image or video capturing technology to capture at least a portion of a page 12 within the FOV 16. As such, the FOV 16 may capture multiple images or utilize multiple frames of a video in order to capture an entire page 12, e.g., by combining or stitching the multiple images together.

[0069] The mobile device 10 in this example is also capable of communicating with other devices for sharing and collaborating with respect to scanned pages and documents related thereto, hereinafter a "collaborating device 22". The collaborating device 22 may also be capable of capturing images of pages 12, similar to the mobile device 10. The mobile device 10 may communicate with the collaborating device 22 via a collaboration server 20 connectable to the network 18 and/or a messaging server 24, which is also connectable to the network 18. It can be appreciated that the network 18 shown in FIG. 1 is for illustrative purposes only and in practice more than one network (e.g., including local and wide area networks) may be traversed in establishing communications between the devices 10, 22. As also shown in FIG. 1 , the mobile device 10 may also be capable of communicating with the collaborating device 22 via a short-range connection 26 such as WiFi, Bluetooth, infrared, etc. It can be appreciated that the mobile device 10 and collaborating device 22 may also constitute tethered or "paired devices" under the control of the same user or multiple users.

[0070] FIG. 2 illustrates an example of a configuration for the mobile device 10. It can be appreciated that the configuration shown in FIG. 2 may also be applicable to the collaborating device 22 but not necessarily so. The mobile device 10 includes at least one communication interface 30 for enabling the mobile device 10 to send and/or receive data using various communication media. For example, the communication interface(s) 30 may include one or more radio access technologies including both wireless network access and short range communication capabilities.

[0071] The mobile device 10 also includes a collaboration application 32 for collaborating on documents 38 accessible to the mobile device 10. The collaboration application 32 includes or otherwise has access to a page scanner 34 for scanning an item or items (hereinafter referred to as "page 12" or "pages 12"), to be incorporated into documents 38. The collaboration application 32 also includes or otherwise has access to an editor 36 for detecting and applying non-electronic edits (hereinafter referred to as "hand-edits"), to a scanned page 12. It can be appreciated that the page scanner 34 and editor 36 are shown in an illustrative configuration and may also be stand-alone applications or modules executable in other applications (not shown). Similarly, the mobile device 10 may also utilize the page scanner 34 and/or editor 36 independently of the collaboration application 32.

[0072] The collaboration application 32 also includes or otherwise has access to a data store 37 of documents 38. A document 38 may include one or more layers of the same page 12. Each layer of a document 38 may include one or more pages 12 in either a native electronic format (i.e., created in an electronic format), an imaged or scanned electronic format, or a combination of native and scanned electronic formats. A layer of a document 38 may also be associated with a particular version of a document 38, a particular format for the document 38, or both. A layer of a document 38 may also have multiple views, including an image view, a first document view, and a second document view. By providing multiple views, not only can the native look and feel of the page 12 be captured but also scrollable/printable and formatted/editable versions of the same page 12 can be conveniently navigated to and from to facilitate reviewing, editing, and collaboration. For example, as will be illustrated below, an image view or first document view may be used to navigate throughout a document to find a portion of text that the user wishes to edit. After applying a particular input (e.g., a zoom gesture), the user may drill down or "dive into" the second document view to more carefully review and/or make changes. The second document view may therefore include larger font and word-wrapping to facilitate editing when compared to the first document view that facilitates navigation, particularly on mobile devices 10 having relatively small screen sizes.
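
One way to organize documents, layers, and views as described above is a small data model. The following is a minimal sketch with assumed class and field names; the application does not prescribe a storage schema:

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class View(Enum):
    IMAGE = auto()       # raw page images: native look and feel of the pages
    DOCUMENT = auto()    # first document view: scrollable/printable (e.g., PDF-like)
    FORMATTED = auto()   # second document view: word-wrapped, editable text

@dataclass
class Page:
    image_bytes: bytes         # scanned bitmap of the physical page
    recognized_text: str = ""  # recognized text backing the formatted view

@dataclass
class Layer:
    version: int               # a layer may correspond to a version of the document
    pages: list[Page] = field(default_factory=list)

@dataclass
class Document:
    layers: list[Layer] = field(default_factory=list)

    def latest(self) -> Layer:
        return max(self.layers, key=lambda layer: layer.version)

doc = Document(layers=[Layer(version=1), Layer(version=2)])
print(doc.latest().version)  # 2
```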

[0073] The mobile device 10 includes a display 40 for rendering user interface elements on a display screen. For example, an image obtained using the page scanner 34 may be displayed by the mobile device 10. The mobile device 10 also includes a messaging application 50, which may be used in the collaboration process and/or to send and receive documents 38. It can be appreciated that the collaboration application 32 may also be a component or function of the messaging application 50 or vice versa, and the configuration and delineations shown in FIG. 2 are illustrative only. The messaging application 50 may represent any application with electronic communication capabilities, e.g., email, instant messaging, text messaging, social networking, voice/video chatting, etc. The messaging application 50 in this example includes or otherwise has access to a data store of contacts 52. The contacts 52 may include at least one contact 52 associated with a collaborating device 22 to enable the user of the mobile device 10 to communicate with the contact 52.

[0074] In order to obtain images of pages 12, the mobile device 10 includes a camera application 42. The camera application 42 includes a page scan application programming interface (API) 44 for interacting with the page scanner 34, e.g., in order to launch the page scanner 34 directly from the camera application 42 as discussed in greater detail below. It can be appreciated that, in other examples, the camera application 42 may include the page scanner 34 and/or collaboration application 32 or vice versa, and the configuration and delineations shown in FIG. 2 are illustrative only. The camera application 42 is configured to operate at least one camera on the mobile device 10 in order to enable the acquisition of images in the physical environment 14. In the example shown in FIG. 2, the mobile device 10 includes a front camera 46 and a rear camera 48, which may be operated together or independently by the camera application 42. The designation "front" refers to the direction of the FOV 16 of the front camera 46, which in this example is in the same direction as the display screen of the mobile device 10 (i.e., the "front" of the mobile device 10 housing). The rear camera 48 has a FOV 16 that is substantially opposite that of the FOV 16 of the front camera 46.
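
The page scan API 44 effectively acts as a bridge through which the camera application hands its live preview frames to the page scanner. A hypothetical sketch of such an interface, with invented method names, might look like:

```python
class StubPageScanner:
    """Stand-in for the page scanner (34); real detection logic is omitted."""

    def begin(self, camera: str):
        print(f"page scanner started on the {camera} camera")

    def process_frame(self, frame):
        pass  # a real scanner would look for a page entering the FOV here

class PageScanAPI:
    """Hypothetical bridge between the camera application (42) and page scanner (34)."""

    def __init__(self, scanner):
        self.scanner = scanner

    def launch_scanner(self, camera: str):
        # "front" (46) suits bring-page-to-device scanning; "rear" (48)
        # suits bring-device-to-page scanning.
        self.scanner.begin(camera)

    def on_preview_frame(self, frame):
        # Forward each viewfinder frame so pages can be detected in the view.
        self.scanner.process_frame(frame)

api = PageScanAPI(StubPageScanner())
api.launch_scanner("rear")
```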

[0075] Operations that may be performed by the page scanner 34 of mobile device 10, in capturing images of pages 12 in the physical environment 14, and incorporating such pages 12 into documents 38, are illustrated in FIG. 3. At 60 a page scan is initiated and the page scanner 34 detects a page 12 in a viewer provided by the camera application 42 at 62. As will be discussed in greater detail below, there are various mechanisms that may be used to initiate the page scan at 60. The page 12 is scanned by the page scanner 34 at 64 and an image of the page 12 is stored at 66. At 68, the page scanner 34 determines from the image whether or not the page 12 is associated with an existing document 38. For example, the page 12 may include text or other indicia that is recognizable in a search of the data store 37 of documents 38, which enables a particular page 12 to be catalogued as a version of the same document 38, or as a method of determining whether or not a document 38 associated with the page 12 has been previously interacted with, whether contacts 52 are also viewing or editing the document 38, etc. Since the data store 37 of documents 38 provides an organized repository of pages 12 that have been previously scanned or otherwise incorporated into the system, an efficient scanning and retrieval operation can be performed using the page scanner 34.

[0076] If the page scanner 34 detects that the page 12 is associated with an existing document 38, the page 12 is added or appended to the existing document 38 at 70. If the page 12 is not associated with an existing document 38 (or if such an association cannot be determined from the immediate scan), a new document 38 that includes at least the page 12 is created at 72. It can be appreciated that documents 38 may be manually or automatically merged if at a later time it is determined that the page 12 should be added or appended to an existing document 38 or in reality the pages 12 of multiple documents 38 later become a single document 38.

[0077] The page scanner 34 may scan multiple pages 12 in generating an existing document 38 or a new document 38. At 74, the page scanner 34 determines if another page 12 is to be scanned (e.g., by determining that a page scanning option continues to be in operation). For example, as exemplified below, the page scanner 34 may be continuously operated while the FOV 16 moves from one page to the next to generate a document from multiple pages 12. If another page 12 is to be scanned, the page scanner 34 repeats operations 62 to 74. When no further pages 12 are to be scanned, the page scanner 34 or editor 36 may determine from the image of the page 12 whether or not any automated actions should be executed. For example, the page scanner 34 or editor 36 may detect a recognizable symbol or word in a margin that can be used to trigger an action such as sending the document 38 to a particular contact 52. The page scanner 34 or editor 36 may also determine that the image of the page 12 includes location, event, and/or temporal information and generate a calendar appointment or other actionable item.

[0078] If an action is to be taken, the action(s) is/are determined at 78 and executed at 80. After the page 12 has been scanned, an image view of the page 12 and, if applicable, other pages 12 in the same document 38 may be displayed so that a user may interact with the displayed document. In the example operations shown in FIG. 3, the page scanner 34 determines at 82 whether or not a zoom gesture or other zoom input is detected. If so, the next view is displayed at 84. For example, a zoom out gesture may navigate from a second document view to the first document view and further out to an image view, whereas a zoom in gesture may navigate from an image view into a first document view and further into a second document view. The page scanner 34 determines at 86 whether or not interactions and operations with the current document are done, and the method ends at 88 after detecting that the interactions with the current document 38 are done.
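
The core of the FIG. 3 flow can be summarized in code. The sketch below is a simplified in-memory model; the matching-by-title heuristic and the data shapes are assumptions standing in for the text/indicia recognition the description mentions:

```python
class DataStore:
    """Minimal in-memory stand-in for the data store (37) of documents (38)."""

    def __init__(self):
        self.documents = []  # each document is modeled as a list of page images

    def find_matching_document(self, image):
        # 68: the description suggests matching recognizable text or indicia;
        # here that is faked by comparing an assumed "title" field.
        for doc in self.documents:
            if doc and doc[0]["title"] == image["title"]:
                return doc
        return None

def run_page_scan(scanned_images, store):
    """Sketch of the FIG. 3 loop: each scanned page image is appended to an
    existing document (70) or placed in a newly created document (72)."""
    for image in scanned_images:             # 62/64/66: detected, scanned, stored
        doc = store.find_matching_document(image)
        if doc is not None:
            doc.append(image)                # 70: append to the existing document
        else:
            store.documents.append([image])  # 72: create a new document

store = DataStore()
run_page_scan(
    [{"title": "Report", "body": "page 1"}, {"title": "Report", "body": "page 2"}],
    store,
)
print(len(store.documents), "document(s),", len(store.documents[0]), "page(s)")  # 1, 2
```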

[0079] The collaboration application 32 may be used to allow a user of the mobile device 10 to collaborate with users of collaborating devices 22 on a document 38. FIG. 4 illustrates example operations that may be performed when collaborating on a document 38. At 100 the mobile device 10 scans a page 12 in the physical environment 14 and stores the scanned page 12 in a document 38 at 102. The collaboration application 32 is then used to send the document 38 to a collaborating device 22 at 104, e.g., by using the messaging capabilities of the messaging application 50. In the example shown in FIG. 4, the document 38 is either sent via the collaboration server 20, or also to the collaboration server 20, to enable the collaboration server 20 to create a new document 38 or update an existing document 38 stored on the collaboration server 20. In this way, the collaboration server 20 can be used to synchronize edits to documents 38 that are involved in collaboration.

[0080] The collaborating device 22 receives the document 38 at 108. In the example scenario shown in FIG. 4, edits to the document 38 are made at 110 by the mobile device 10 and at 112 by the collaborating device 22. For example, a draft version of a document may continue to be revised by a user of the mobile device 10 while also being edited by a user of the collaborating device 22. At 110 and 112, a synchronization operation is initiated by each of the mobile device 10 and the collaborating device 22 by communicating with the collaboration server 20, and edits are stored and updated by the collaboration server 20 at 114. Since the document 38 may be stored in different layers of edits, conflicting edits can be viewed in the different layers stored, to allow a comparison. For non-conflicting edits, it can be appreciated that different layers of edits may also be used, but that a single new layer showing edits from multiple parties can also be generated (e.g., using different colors or other identifiers of the multiple parties making the edits). The collaboration server 20 may then synchronize the mobile device 10 and collaborating device 22 at 116 by sending updates thereto. The mobile device 10 receives the updates at 118 and the collaborating device 22 receives the updates at 120. The local documents stored by the mobile device 10 and the collaborating device 22 are synchronized using the updates at 122 and 124 respectively.

[0081] Independent of the edits being made during the collaboration, a chat or conversation may be launched from or in association with the document 38 at 126, and a message related to this chat sent at 128. In the example shown in FIG. 4 an instant message (IM) is sent at 128 after launching an IM conversation with the collaborating device 22. The messaging server 24 routes the message to the collaborating device 22 at 130 and the message is received by the collaborating device 22 at 132.
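
The layered synchronization of FIG. 4 can likewise be sketched. In this illustrative model, non-conflicting edits from both devices are merged into one new layer tagged per editor, while conflicting edits (to the same region) are kept as separate layers for comparison; the data shapes are assumptions, not a disclosed wire format:

```python
def synchronize(server_layers, mobile_edits, collaborator_edits):
    """Merge edits from two devices into layers, per the FIG. 4 flow (114).

    Edits are modeled as {region: text}; a "region" is any key identifying
    the part of a page that was edited (an assumption for this sketch).
    """
    combined, conflicts = {}, []
    for editor, edits in (("mobile", mobile_edits),
                          ("collaborator", collaborator_edits)):
        for region, text in edits.items():
            if region in combined:
                # Conflicting edit: keep it in its own layer for comparison.
                conflicts.append({"region": region, "text": text, "editor": editor})
            else:
                combined[region] = {"text": text, "editor": editor}
    server_layers.append(combined)   # one merged layer, identified per editor
    server_layers.extend(conflicts)  # conflicting edits become separate layers
    return server_layers

layers = synchronize([], {"p1:line3": "fix typo"}, {"p1:line9": "add note"})
print(len(layers), "layer(s) after sync")  # 1: the two edits did not conflict
```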

[0082] The page scanner 34 can be invoked in various ways to initiate a page scanning operation. FIGS. 5 to 7 illustrate one example, wherein a page scanning operation is invoked directly from a camera application user interface 150 of mobile device 10. As shown in FIG. 5, the camera application user interface 150 of mobile device 10 displays a continuous view or preview 152 of what is currently within the FOV 16 of the camera 46 or 48 being used. It can be appreciated that the principles discussed herein may be applied to any user interface that provides a continuous view of a physical environment such that a view of a page is provided in the user interface and a page scanning mode and/or operation initiated therewithin.

[0083] In the example shown in FIG. 5, by performing a press and hold operation 154 on a particular portion of the user interface 150 (e.g., bottom center as illustrated in FIG. 5), a scan of the page preview 152 commences. It can be appreciated that any input that is distinguishable from at least one other input utilized by the user interface to capture images or modify the view of the user interface may be used to initiate the page scanning mode.

[0084] Progress of the scanning operation is illustrated in FIG. 6, wherein visual feedback is displayed on the display 40 of mobile device 10, by coloring the page preview 152 as the page 12 is being scanned during the press and hold operation 154. The visual feedback may be provided to notify the user that the currently previewed page 152 has been scanned in order to cue the user to, for example, move the FOV 16 towards a next page to be scanned, if applicable. It can be appreciated that other feedback regarding the progress of the scanning operation may be provided, e.g., to show progress of a page scan and/or to indicate completion of a scan of a particular page 12. For example, a flash or beep may be output by the mobile device 10, to indicate that the page 12 has been successfully scanned.

[0085] FIG. 7 illustrates example operations that may be performed by mobile device 10, in initiating a page scanning operation. At 162 the camera application 42 is launched in order to allow pages 12 to be located within a viewing area of the camera application user interface 150 (e.g., as shown in FIG. 5). A page scan mode input is then detected at 164, for example, the press and hold operation 154 (discussed above with reference to FIG. 5). It can be appreciated that the press and hold operation 154 may be applied while normally using the camera application 42, or may be performed after launching the camera application 42 specifically for performing a page scan operation. For example, prior to launching the camera application 42 at 162, an input may be detected initiating a page scan, e.g., via another user interface such as one associated with the collaboration application 32.

[0086] After the page scan mode input is detected, the page scanner 34 determines at 166 whether or not a page 12 has been found in the FOV 16. If not, the page scanner 34 may continue to analyze what is viewable within the FOV 16. Once a page 12 is found, the page 12 is scanned at 168 and stored at 170. In the example shown in FIG. 7, by continuing to apply the scan mode input, e.g., by continuing the press and hold operation 154, additional pages 12 for a same document can be scanned together. The page scanner 34 determines at 172 if the scanning operation should continue to scan additional pages 12. If so, operations 166-172 are repeated. Once the scanning operation is completed, the scan ends at 174.

[0087] The page scanning operation executed by the page scanner 34 may also be initiated using other inputs, for example a menu option displayed in the camera application user interface 150 of mobile device 10, as shown in FIG. 8. In the example shown in FIG. 8, a menu 180 is invoked from within the camera application user interface 150. However, it can be appreciated that such a menu 180 can be invoked in other user interfaces, e.g., a user interface associated with the collaboration application 32. The menu 180 includes a Page Scan option 184, and a Present Page to Scan option 182. The Page Scan option 184 may be used to initiate the page scanning operation as shown in FIGS. 5 and 6 instead of applying the press and hold operation 154.

[0088] The Present Page to Scan option 182 may be used to modify the scanning technique by having the front camera 46 of mobile device 10 scan pages 12 that are brought into the FOV 16 of the front camera 46, as shown in FIG. 9, rather than bringing the mobile device 10 towards the page 12 (as shown in FIG. 1). The Present Page to Scan option 182 may be useful in scenarios wherein many pages 12 need to be scanned in a document 38 and space to spread out the pages 12 is limited. By using the front camera 46 in this way, the display screen of the mobile device 10 may be used to provide a visual confirmation that the page 12 is within the FOV 16, since the mobile device 10 screen would be facing the same direction as the front camera 46. Additionally, other notification mechanisms can be used to confirm that a page 12 has been successfully scanned. For example, as shown in FIG. 9, a notification light 46 may be operated to provide a visual confirmation of a successful scan. As illustrated in FIG. 9, to utilize the Present Page to Scan option 182, the page 12 is brought towards a stationary mobile device 10 operating the Present Page to Scan option 182 in Stage 1. The page 12 enters the FOV 16 in Stage 2 and is scanned. At Stage 3 a visual notification is provided to confirm that the scan was successful. It can be appreciated that auditory (e.g., beep) or tactile (e.g., vibrate) notifications may also be used to confirm a successful scan has been achieved. It can be appreciated that the Present Page to Scan option 182 may also be initiated automatically, e.g., by detecting from a sensor that the mobile device 10 is idle or stationary. For example, an accelerometer output may be used to detect that the mobile device 10 is resting on a table or other surface and use such information to initiate the Present Page to Scan option 182.

[0089] FIG. 10 illustrates an example set of operations that may be performed by the page scanner 34 of mobile device 10, in executing the Present Page to Scan option 182 (discussed above with reference to FIGS. 8 and 9). At 200 the page scanner 34 detects initiation of the Present Page to Scan option 182, e.g., via the menu 180 (as shown in FIG. 8). The front camera 46 is launched in a scanning mode at 202, e.g., from within the camera application user interface 150 or elsewhere, and the page scanner 34 determines at 204 whether or not a page 12 has been detected within the FOV 16 and is ready to be scanned. If not, the scanning mode may continue to operate at 202. Once a page 12 has been found, the page 12 is scanned at 206 and a notification of a successful scan is provided. The page 12 is stored at 208, e.g., as discussed above, and the page scanner 34 determines at 210 whether or not another page 12 is to be scanned. For example, the page scanner 34 may continue to operate in the page scanning mode until the user closes or otherwise ends or cancels the operation. If the scanning mode continues to be used, operations 204-208 may be repeated. Once the scan is completed, the scan operation ends at 212.
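
For the automatic initiation mentioned above, the accelerometer heuristic could be as simple as checking that recent readings barely vary. This sketch assumes a sample window and threshold that the application does not specify:

```python
from statistics import pstdev

def is_stationary(accel_samples, threshold=0.05):
    """Return True when acceleration magnitudes are nearly constant,
    suggesting the device is resting on a table or other surface."""
    magnitudes = [(x * x + y * y + z * z) ** 0.5 for x, y, z in accel_samples]
    return pstdev(magnitudes) < threshold

# Resting flat: readings hover around 1 g (9.81 m/s^2) on one axis.
resting = [(0.01, 0.02, 9.81), (0.02, 0.01, 9.80), (0.01, 0.01, 9.81)] * 4
if is_stationary(resting):
    print("device idle: initiating the Present Page to Scan option")
```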

[0090] Turning now to FIGS. 11A-C to 13A-C, the mobile device 10 may be supported by and operated while being supported by or within an accessory 220. In the example shown in FIGS. 11A-C, the accessory 220 is embodied as a relatively stiff or rigid "case" or "cover" (e.g., lined in leather, rubber, plastic, etc.) which protects and supports the mobile device 10. The mobile device 10 in FIGS. 11A-C is a tablet computer supported on a first portion 222 of the accessory 220, with a pad of paper 228 supported on a second portion 226 of the accessory 220 opposite the mobile device 10. The first and second portions 222, 226 are foldable with respect to each other by way of a spine 224 or other hinged element. The spine 224 enables the accessory 220 to have both opened and closed positions, e.g., to close the case or cover. The spine 224 may also allow the mobile device 10 to be supported in a "reading mode" as shown in FIG. 12, wherein the mobile device 10 is propped up between the first portion 222 and the second portion 226 with an innermost edge of the mobile device 10 resting on the second portion 226, in this example atop the pad of paper 228.

[0091] An accessory 220 configured in the way shown in FIGS. 11A-C and 12 allows a user to both view and interact with the mobile device 10 and make notes and sketches on the pad of paper 228. It has been recognized that a rear camera 48 on the mobile device 10 may be triggered to scan the uppermost page 12 of the pad of paper 228 after detecting a rotation of the mobile device 10 in the accessory 220 as shown in FIG. 11C. In the example shown in FIGS. 11A-11C, the mobile device 10 is rotatably attached to the first portion 222 of the accessory 220 at an outer fold 230 to enable the mobile device 10 to flip outwardly with respect to the first portion 222. Rotation of the mobile device 10 with respect to the first portion 222 allows the underside of the mobile device 10 to be exposed as the first portion 222 is rotated relative to the second portion 226. In this way, the mobile device 10 may be flipped outwardly with respect to the first portion 222, about the fold 230, as the first portion 222 is folded towards the second portion 226. These rotational movements of the accessory 220 and mobile device 10 enable the pad of paper 228 to come within the FOV 16 of the rear camera 48 when the mobile device 10 and the pad of paper 228 are substantially parallel as shown in FIG. 11C. By detecting at least one of the rotational movements, the mobile device 10 can initiate the camera application 42 and page scanner 34 in order to capture an image of the top page 12 of the pad of paper 228.

[0092] The mobile device 10 may detect rotational movements of the spine 224, the outer fold 230, or both via a sensor or other detecting mechanism in the accessory (not shown). Such a sensor may be communicably connected to the mobile device 10 via any available communication mechanism, whether hard wired or wireless, e.g., micro USB, Bluetooth, etc. It has been recognized that the mobile device 10 can usefully rely on rotational movements detected in the outer fold 230 of the accessory 220 in order to trigger the page scanner 34, to avoid triggering the camera application 42 and page scanner 34 during normal opening and closing operations of the accessory 220. It can be appreciated, however, that the mobile device 10 may also detect movement in both the spine 224 and the outer fold 230 by a particular amount (e.g., to a certain degree) before triggering the camera application 42 and page scanner 34. Relying on the detection of both movements can avoid triggering the camera application 42 and page scanner 34 when the mobile device 10 is lifted away from the first portion 222, when the accessory 220 is being used to prop the mobile device 10 in a reading mode, or when the accessory 220 is used in other configurations. For example, the mobile device 10 may trigger the camera application 42 and page scanner 34 only after detecting a first acute angle between the first portion 222 and the second portion 226 and a second acute angle between the folded portions of the first portion 222 measured at the outer fold 230.

[0093] The mobile device 10 may also have predetermined ranges of angles at the outer fold 230 that allow the rear camera 48 to capture an image of the page 12 on top of the pad of paper 228. Such a range of angles can be determined empirically according to the relative sizes of the accessory 220, mobile device 10, and pad of paper 228. It can be appreciated that the lens used by the rear camera 48 should also be capable of focusing on the pad of paper 228 in the position shown in FIG. 11C. The lens would be chosen according to the focal length dictated by the relative dimensions of the mobile device 10, first portion 222, and the pad of paper 228. The mobile device 10 may also be configured to provide feedback indicative of the triggering of the page scanner 34 in the manner shown in FIG. 11C. For example, the camera flash or other visual indicator (e.g., using the display screen of the mobile device 10), auditory beep, vibration, etc., may be used to indicate one or more of the triggering of the page scanning operation and completion of the scanning operation, as discussed above.

[0094] The accessory 220 shown in FIGS. 11A-C and 12 is illustrative only and various other configurations may be used. An alternative configuration is shown in FIGS. 13A-C. As shown in FIG. 13A, a third portion 225 is interposed between the first portion 222 supporting the mobile device 10 and the second portion 226 supporting the pad of paper 228. The third portion 225 is coupled to the first and second portions 222, 226 via a first spine 224a and a second spine 224b respectively. The configuration shown in FIG. 13 therefore provides a "tri-fold" accessory 220'. The tri-fold configuration enables the pad of paper 228 to be folded over the third portion 225 as best seen in FIG. 13B. This configuration may allow, for example, the mobile device 10 to be propped in the reading mode as shown in FIG. 12 by resting the mobile device 10 on the underside of the second portion 226. In this way, the material used to line the second portion 226 can be chosen to have a higher coefficient of friction than the pad of paper 228 to further facilitate the reading mode configuration. In addition to facilitating the reading mode, the configuration shown in FIG. 13 permits a front camera 46 of the mobile device 10 to perform the scanning operation when the first and second portions 222, 226 are substantially parallel as shown in FIG. 13C. It can therefore be appreciated that the accessory 220 may include a plurality of portions that, when moved or otherwise operated relative to each other, trigger a camera 46, 48 of the mobile device 10 to scan a page 12 of a pad of paper 228 as herein described.

[0095] The triggering mechanism for initiating a page scanning operation as shown in FIGS. 11A-C to 13A-C allows a mobile device 10 to be integrated with an accessory 220, 220' while permitting use of the page scanner 34. It can be appreciated that both of the front and rear cameras 46, 48 may still be operated according to the other examples described above when supported in the accessory 220, 220'. By incorporating the mobile device 10 and the pad of paper 228 into the same accessory 220, 220', a convenient page scanning operation can be performed using a controlled movement of the accessory 220, 220'. The controlled movement facilitates a quicker scan of the page 12 since the movement is largely predictable and consistent. For example, when triggering the camera application 42 using the hinged movements shown in FIGS. 11C or 13C, a coarse adjustment of the focus of the rear camera 48 can be automatically applied since the relative distance between the camera's lens and the page 12 should be within a predetermined range (which may vary based on the number of pages 12 in the pad of paper 228 but within a predictable range). Additionally, the controlled movement encourages alignment of the FOV 16 and the page 12 in at least a portion of the movement, thus facilitating a faster scan of the page 12.

[0096] FIG. 14 illustrates an example set of operations that may be performed by the mobile device 10 in triggering a page scanning operation in the manner illustrated in FIGS. 11A-C to 13A-C. At 250 the mobile device 10 detects movement of the accessory 220, e.g., by receiving an input from a sensor in the spine 224 and/or outer fold 230. It can be appreciated that detecting the movement of the accessory 220 can be performed by the camera application 42, page scanner 34, collaboration application 32, or any other suitable program or system operating on the mobile device 10. The mobile device 10 determines whether or not the detected movement is within a predetermined range at 252. For example, as discussed above, the sensor may indicate angles between portions of the accessory 220 which are indicative of a movement meant to trigger a page scanning operation. When the detected movement is within the predetermined range, the camera application 42 is launched at 254 and the page scanner 34 is used to determine at 256 if the top page 12 of the pad of paper 228 can be found, i.e., if it is within the FOV 16. If the page 12 can be found, the page 12 is scanned at 258 and stored at 260. The page scanner 34 determines at 262 if there are more pages 12 to be scanned and thus whether the page scanning operation should continue. In the examples shown in FIGS. 11C and 13C, the controlled movement of the mobile device 10 encourages scanning of a single page 12 in a particular position, namely atop the pad of paper 228. However, it can be appreciated that the display screen of the mobile device 10 can be used to display a user interface to allow the user to indicate that further pages are to be scanned, e.g., by tearing away the top page of the pad of paper 228 to scan the next page, by presenting a page to the front camera 46, by repositioning the mobile device 10 to direct the rear camera towards a page 12 that is not in the accessory 220, etc. If no further pages 12 are to be scanned, the scanning operation ends at 264.
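
The angle test at 252 might be expressed as follows. The angle windows here are placeholders; as the description notes, suitable ranges are determined empirically for a given accessory, device, and pad of paper:

```python
# Assumed page-scan angle windows in degrees (empirical in practice).
SPINE_RANGE = (20.0, 70.0)       # first acute angle: first vs. second portion
OUTER_FOLD_RANGE = (20.0, 70.0)  # second acute angle: measured at the outer fold 230

def should_trigger_scan(spine_angle: float, outer_fold_angle: float) -> bool:
    """Sketch of the FIG. 14 check (252): launch the camera application and
    page scanner only when BOTH hinge angles fall inside their page-scan
    windows, so normal opening/closing and reading mode do not trigger it."""
    return (SPINE_RANGE[0] <= spine_angle <= SPINE_RANGE[1]
            and OUTER_FOLD_RANGE[0] <= outer_fold_angle <= OUTER_FOLD_RANGE[1])

print(should_trigger_scan(45.0, 40.0))  # True: page-scan position
print(should_trigger_scan(170.0, 5.0))  # False: accessory lying open flat
```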

[0097] The page scanning operation herein described may also be triggered from within another application, e.g., a messaging application 50, in order to capture an attachment to a message, to capture a portion of text for composing a message, etc. FIGS. 15-18 illustrate an example wherein the page scanning operation is utilized to copy and paste objects such as text from the physical environment 14 into an email message user interface 280. It can be appreciated that the principles discussed below regarding the copy-and-paste operation may be applied to other examples. For example, the copy-and-paste operation may be used to scan a desired portion of a page 12 within the FOV 16.

[0098] Turning now to FIG. 15, an example email message user interface 280 of mobile device 10 is shown, which includes a message composition portion 282. A menu 284 may be invoked as shown in FIG. 15, which includes various options including a Paste from Page option 286. The Paste from Page option 286 may also be invoked in other ways, for example, using a convenience key, a gesture, voice command, etc.

[0099] By selecting the Paste from Page option 286 as shown in FIG. 15, the page scanner 34 is initiated and the camera application user interface 150 launched as shown in FIG. 16 to allow the mobile device 10 to scan a portion of a page 12 visible in the user interface 150. A first location of page 12 in the physical environment (associated with a first object 290 in the physical environment 14) and a second location of page 12 in the physical environment (associated with a first input received at the mobile device 10) can be used to define a bounding area 294 for the copy-and-paste operation. For example, as shown in FIG. 16, the tip of the user's finger in the physical environment, visible in a view of the user interface 150, is associated with a first location of page 12 in the physical environment. This first location defines a first boundary point of bounding area 294. A touch input 292, received at the touch-sensitive display 860 of mobile device 10, is associated with a second location of page 12 in the physical environment. This second location defines a second boundary point of bounding area 294. (It can be appreciated that other objects 290 may be used in defining a first or second boundary point, for example a stylus, writing instrument, etc. Furthermore, it can be appreciated that other inputs may be used to define a first or second boundary point, for example, using eye tracking or positioning a pointing element of a pointing device (e.g., cursor operated by a track pad).)

[00100] As illustrated by way of example in FIG. 17, after detecting the touch input 292 a scan swath 296 may be illustrated in the user interface 150 of mobile device 10, to indicate progress of the scan of the bound portion of the page 12 and/or that the bound portion has been scanned.

[00101] As illustrated by way of example in FIG. 18, after the mobile device 10 scans the bounding area 294, the desired portion of text 298 is inserted into the message composition portion 282. It can be appreciated that the portion of text 298 may be inserted in any suitable or desirable format, e.g., an image, raw text, link to reveal text, etc.

[00102] FIG. 19 illustrates an example set of operations that may be performed by the mobile device 10 in utilizing the Paste from Page option 286. At 300 the mobile device 10 detects selection of the Paste from Page option 286, e.g., from within menu 284 (as shown in FIG. 15), and the camera application 42 and page scanner 34 are launched at 302 to initiate a page scan. The page scanner 34 determines at 304 whether or not a bounding area 294 has been detected, e.g., determines whether an object 290 in the physical environment 14 and a touch input 292 to the mobile device 10 can be detected. Once a bounding area 294 has been detected, the bounding area 294 is scanned at 306 and the scanned text added at 308 to the item from which the Paste from Page option 286 was initiated, e.g., the email message user interface 280 shown in FIGS. 15 and 17.
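
The bounding-area geometry at 304 reduces to forming a rectangle from two corner points. In this sketch, both points are assumed to have already been mapped into page coordinates, a step the description leaves to the implementation:

```python
def bounding_area(object_point, touch_point):
    """Build the bounding area (294) from the fingertip or stylus seen in the
    viewfinder (one corner) and the on-screen touch input 292 (the other)."""
    (x1, y1), (x2, y2) = object_point, touch_point
    left, right = sorted((x1, x2))
    top, bottom = sorted((y1, y2))
    return {"left": left, "top": top, "right": right, "bottom": bottom}

# Fingertip near the top-left of a paragraph, touch near its bottom-right:
print(bounding_area((120, 340), (480, 520)))
# {'left': 120, 'top': 340, 'right': 480, 'bottom': 520}
```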

[00103] As discussed above, the page scanner 34 is also operable to scan multiple pages 12 in a physical environment 14 during a single scanning operation to create a multi-page document 38. FIGS. 20-22 illustrate an example scanning operation in which a document 38 is generated from three successively scanned pages 12a, 12b, and 12c. The document 38 in this example is generated using the rear camera 48, e.g., as illustrated in FIGS. 5-7; however, it can be appreciated that other example embodiments may also be used, e.g., using the front camera 46 as illustrated in FIGS. 8 to 10, the accessory 220 illustrated in FIGS. 11-13, etc.

[00104] Turning first to FIG. 20, the mobile device 10 is used to perform a scanning operation by directing the FOV 16 towards a first page 12a, which triggers a scan of the first page 12a in the physical environment 14 and creation of a document 38 that includes a first page image 310a in the virtual environment 312, e.g., by creating a new file and storing the file on the mobile device 10.

[00105] As shown in FIG. 21, the same scanning operation may proceed by directing the FOV 16 of mobile device 10 towards a second page 12b in the physical environment 14, which adds or appends a second page image 310b to the first page image 310a in the virtual environment 312 to create an updated document 38'.

[00106] As shown in FIG. 22, the same scanning operation may proceed by directing the FOV 16 of mobile device 10 towards a third page 12c in the physical environment 14, which adds or appends a third page image 310c to the first page image 310a and the second page image 310b, in the virtual environment 312, to create a further updated document 38". Although three pages are illustrated in FIGS. 20-22, it can be appreciated that this process may be repeated for more or fewer than three pages. By adding or appending images of pages 12 scanned in the physical environment 14, a document 38 can be generated in a manner that resembles a physical document in the physical environment. In this way, the user may not only capture individual pages 12 but also capture entire documents made up of multiple pages 12.

[00107] FIG. 23 illustrates an example set of operations that may be performed by the page scanner 34 of mobile device 10, in generating a document 38 from multiple scanned pages 12. At 320 the page scanner stores a first page image 310a scanned from a first page 12a, and determines at 322 whether or not to continue scanning one or more additional pages 12. It can be appreciated that whether or not to continue scanning can be determined from a pre-selected mode (e.g., an option indicating that multiple pages will be scanned), the persistence of an input (e.g., continuing the press and hold operation 154), a further input (e.g., an option to either scan another page 12 or end the scan), etc. If the page scanner 34 is to continue the scan, the next page 12 is scanned to obtain a second page image 310b at 324, which is added or appended to the first page image 310a at 326. Operations 322, 324, and 326 are repeated until no further scanning is required. The pages 12a, 12b, 12c, although scanned in a particular order, may not be in the order desired for the document 38. The page scanner 34 or collaboration application 32 may therefore be configured to also determine if any reordering is required at 328. This may be done by detecting whether an option to reorder, which has been displayed following a scanning operation, has been selected. If the images 310 of the document 38 are to be reordered, a page reordering operation is initiated at 330. For example, a user interface may present the currently selected page ordering and provide a user interface element that allows the pages to be reordered. If no reordering is required, the scanning operation ends at 332.
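
The reordering step at 330 is a simple permutation of the scanned images. A minimal sketch, assuming the reordering user interface yields a list of zero-based positions:

```python
def reorder_pages(page_images, new_order):
    """Rearrange pages scanned in visiting order into document order (330)."""
    return [page_images[i] for i in new_order]

scanned = ["310a", "310b", "310c"]        # order in which the pages were scanned
print(reorder_pages(scanned, [2, 0, 1]))  # ['310c', '310a', '310b']
```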

[00108] A document 38 that has been generated from multiple page images 310 may be displayed in multiple different layers of the same document 38, and each layer of the document 38 may be displayed in multiple different views. FIGS. 24-27 illustrate example navigation between multiple different views of a layer of a document 38. An image view user interface 350 of mobile device 10 is shown in FIG. 24. In the example shown in FIG. 24, an image view of the appended three page images 310a, 310b, 310c used in the above-described example is shown. The image view may be navigated to focus on particular page images 310. For example, as shown in FIG. 24, a touch gesture such as a swipe gesture between an originating point 352 and an end point 354 may be used to reposition or pan from left to right in order to center the second page image 310b in the image view user interface 350 (as shown in FIG. 25). A subsequent input may then be used to load the next view, which in this example is a first document view resembling an electronic document in, for example, a portable document format (PDF). It can be appreciated that the first document view includes pages displayed in a format different from the pages displayed in the image view.

[00109] Applying a predefined input while in one view can cause navigation to another view. Bi-directional inputs enable intuitive movement "up" and "down" or "in" and "out" between different views. It has been recognized that touch inputs such as touch gestures provide suitable inputs for navigating between views. It has also been recognized that a zoom input applied to a current view provides an intuitive input for moving between views to provide the "in" and "out" navigational experience. In FIG. 25 a pinch-to-zoom-in gesture input to the user interface 350 of mobile device 10 is illustrated, wherein a pair of originating touch inputs 356a, 356b are detected, followed by a pair of end touch inputs 358a, 358b, wherein the end touch inputs 358 are further away from each other than the originating touch inputs 356.
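For illustration only, a minimal sketch of classifying the pinch gesture of FIG. 25 from the originating touch inputs (356a, 356b) and end touch inputs (358a, 358b): zoom-in is recognized when the end touches are further apart than the originating touches. The Point type and slack threshold are illustrative assumptions.

    // Classify a pinch from its start and end touch positions.
    public class PinchClassifier {

        static class Point {
            final float x, y;
            Point(float x, float y) { this.x = x; this.y = y; }
        }

        enum Pinch { ZOOM_IN, ZOOM_OUT, NONE }

        static double distance(Point a, Point b) {
            return Math.hypot(a.x - b.x, a.y - b.y);
        }

        static Pinch classify(Point origA, Point origB, Point endA, Point endB) {
            double before = distance(origA, origB); // 356a-356b separation
            double after = distance(endA, endB);    // 358a-358b separation
            double slack = 10.0;                    // assumed tolerance, in pixels
            if (after > before + slack) return Pinch.ZOOM_IN;
            if (after < before - slack) return Pinch.ZOOM_OUT;
            return Pinch.NONE;
        }
    }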

[00110] After detecting that the user has zoomed into the image view beyond a predetermined threshold (e.g., to or beyond a predetermined zoom level), a first document view user interface 360 of mobile device 10 is displayed as shown in FIG. 26. The first document view presents document pages 364a, 364b, 364c, which correspond to the page images 310a, 310b, 310c. The first document view is scrollable and may be presented in a familiar, commercially available document viewing format and user interface, such as a PDF. As illustrated in FIG. 26, a further pinch-to-zoom-in gesture may be used to load a second document view user interface 370 (as shown in FIG. 27).

[00111] As illustrated in FIG. 27, the second document view user interface 370 of mobile device 10 displays a reformatted page 372b corresponding to the document page 364b that was in focus in the first document view user interface 360. A cursor 374 is also shown in FIG. 27, which may be placed within the text of the reformatted page 372b for viewing and/or for making electronic edits to the text.

[00112] FIG. 28 illustrates an example set of operations that may be performed by the page scanner 34 and/or collaboration application 32 of mobile device 10, in navigating between different views of the same document 38. At 400 an image view of a document 38 is loaded and displayed and, in this example, a swipe gesture is detected at 402, which repositions the appended multiple page images in the image view user interface 350 at 404. A zoom gesture is detected at 405, and it is determined at 406 whether the zoom gesture is "in" or "out". If a zoom-out gesture is detected, the previous layer (if any) in the document 38 is loaded at 407. Since the image view is the highest level view in this example, the zoom gesture can additionally be used to navigate between layers of the same document 38.

[00113] If a zoom-in gesture is detected at 405, the first document view user interface 360 is loaded at 408 in order to display document pages 364 which correspond to the page images 310 from the image view. A further zoom gesture is detected at 410. Since the first document view is "between" the image view and the second document view, the page scanner 34 and/or collaboration application 32 determines at 412 whether the zoom gesture is "in" or "out". If a zoom-out gesture is detected, the image view is re-loaded at 400. If a zoom-in gesture is detected, the second document view user interface 370 is loaded at 414 to display the reformatted view. A further zoom gesture is detected at 416, which may be "in" or "out" as determined at 418. If a zoom-out gesture is detected, the first document view is re-loaded at 408. If a zoom-in gesture is detected, the second document view user interface 370 further zooms in on the text at 420. It can be appreciated that a zoom-in limit may also be imposed to restrict the amount of zooming that can be applied to the second document view. The zoom gesture detected at 416 can also be used to continue the navigation between different layers of the same document 38 by determining at 418 whether or not the end of the zooming operation has been reached. If not, further zooming of the second document view may be applied. If the end of the zoom operation is detected, the next layer in the document 38 (if any) may be loaded at 420.
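For illustration only, the navigation of FIG. 28 can be summarized as a small state machine over the three views; the state-machine framing and method names below are assumptions, not the patent's prescribed implementation.

    // A sketch of the zoom-driven view navigation of FIG. 28. The view names
    // mirror the image view (350), first document view (360), and second
    // document view (370).
    public class ViewNavigator {

        enum View { IMAGE_VIEW, FIRST_DOCUMENT_VIEW, SECOND_DOCUMENT_VIEW }

        private View current = View.IMAGE_VIEW;

        // Handle a zoom gesture; zoomIn == false means a zoom-out gesture.
        public View onZoom(boolean zoomIn) {
            switch (current) {
                case IMAGE_VIEW:
                    // 406/407: zooming out of the top view would move to the
                    // previous document layer, if any; zooming in loads 360 (408).
                    if (zoomIn) current = View.FIRST_DOCUMENT_VIEW;
                    break;
                case FIRST_DOCUMENT_VIEW:
                    // 412: "in" loads the second document view (414),
                    // "out" re-loads the image view (400).
                    current = zoomIn ? View.SECOND_DOCUMENT_VIEW : View.IMAGE_VIEW;
                    break;
                case SECOND_DOCUMENT_VIEW:
                    // 418: "out" re-loads the first document view (408); "in"
                    // keeps zooming the reformatted text up to a limit (420).
                    if (!zoomIn) current = View.FIRST_DOCUMENT_VIEW;
                    break;
            }
            return current;
        }
    }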

[00114] In order to capture and incorporate changes to a document 38, and to allow for collaboration on a document 38, each document 38 may include multiple layers. Layers enable different versions of the document 38 to be stored and allow both electronic edits and handwritten edits to be visualized in different forms, e.g., by creating different layers for showing and incorporating handwritten edits to a "typed" page 12.

[00115] FIGS. 29-35 illustrate an example of the incorporation of handwritten edits into different document layers and the navigation between document layers, as performed by mobile device 10. Referring to FIG. 29, a page 12 of a document 38 is shown, in which handwritten edits 422 have been added to the page 12. In this example, the edits 422 include a notational element 424 and an action note 426.

[00116] The notational element 424 is a detectable symbol or indicator that, when detected, indicates a particular edit to be applied to the associated text. In the example shown in FIG. 29, the notational element 424 corresponds to a capitalization symbol indicating that the underlying letter should be capitalized. It can be appreciated that various other notational elements 424 may be detectable from the edits 422, e.g., those for "new paragraph", lower-case, bold, italics, etc.

[00117] The action note 426 may be made detectable by storing a library of action verbs that, when detected in handwritten edits 422, cause an associated action to be executed rather than the edit being added to the text of the page 12. For example, notes made within a margin of the page 12 may be considered action notes 426. In the example shown in FIG. 29, the action note 426 indicates that something should be sent to "Bob". The action note 426 may trigger a prompt or automatic operation for initiating a message composition user interface (e.g., to email or instant message Bob). A prompt, if used, may also ask the user to confirm whether the entire document 38 is to be sent, the current page 12, etc.
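For illustration only, a minimal sketch of the action-verb matching described above; the contents of the verb library and the assumption that the margin text has already been recognized as a string are illustrative.

    import java.util.Locale;
    import java.util.Set;

    // Detect an action note 426 by matching recognized margin text against
    // a library of action verbs.
    public class ActionNoteDetector {

        // Assumed library of action verbs; the patent leaves its contents open.
        private static final Set<String> ACTION_VERBS =
                Set.of("send", "email", "call", "share");

        // Returns the matched action verb if the margin text contains one,
        // otherwise null (the text is then treated as an ordinary edit).
        public String detectAction(String marginText) {
            for (String word : marginText.toLowerCase(Locale.ROOT).split("\\W+")) {
                if (ACTION_VERBS.contains(word)) {
                    return word; // e.g., "send" in "send to Bob" triggers a prompt
                }
            }
            return null;
        }
    }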

[00118] By scanning the page 12 as illustrated in FIG. 29, a page image 310, which includes the handwritten edits 422, is displayed in the image view user interface 350 of mobile device 10, as shown in FIG. 30.

[00119] The edits 422 may be automatically applied using the editor 36 of mobile device 10, as shown in FIG. 31, to generate a document page 364 including the edits, or may need to be confirmed by the user. By having multiple views of the same layer, the handwritten edits 422 can be both shown and applied in different views without losing the original markings to the page 12. In FIG. 31, an action button 434 is also displayed, which corresponds to the action note 426 detected in the image view.

[00120] It can be appreciated that the action button 434 may also be displayed in the image view in the image view user interface 350 of mobile device 10, as shown in FIG. 32. In FIG. 32, the page image 310 is rendered to remove the action note 426 from the margin so that an edits-only version of the text 450 of the page image 310 can be sent to Bob. It can also be appreciated that the action can also be provided as an option in a menu or triggered using another input.

[00121] FIG. 33 illustrates a page 12 that has been printed from an edited version of the document 38 as shown in FIG. 31. By scanning the page 12 as shown in FIG. 33, the previously stored document 38 can be detected from the data store 37 and loaded for further editing, viewing, and/or navigation.

[00122] FIG. 34 illustrates a timeline tool 440 that may be displayed on the display 40 of mobile device 10, in order to navigate between layers of a document 38 being displayed. In the example shown in FIG. 34, the document page 364 shown in FIG. 31 is displayed in the first document view user interface 360, i.e., within a first document view of the document 38 in which the corresponding page 12 is found. The timeline tool 440 may be loaded after detecting an input, for example a downward swipe gesture from the top of the first document view user interface 360.

[00123] The timeline tool 440 includes a timeline bar 442 and a day indicator 444. A swipe gesture 446 applied to the timeline bar 442 enables navigation between different layers of the corresponding document 38. For example, by swiping to the left as shown in FIG. 34, an earlier version of the document 38 corresponding to the hand-edited version shown in FIGS. 29 and 30 is displayed and the timeline bar 442' is updated on the display 40 of mobile device 10, as shown in FIG. 35.

[00124] FIG. 36 illustrates an example set of operations that may be performed by the collaboration application 32 of mobile device 10, in applying edits to a page 12 into a document layer. At 460 the collaboration application 32 detects one or more edits 422 to a scanned image. Although the edits 422 illustrated in the above examples are handwritten, it can be appreciated that electronic edits may also be detectable in a scanned image, e.g., tracked or redlined changes or via document comparison. The edited version of the image is used to generate a new layer for the document 38 at 462. It is determined at 464 if the document 38 should be edited per the detected edits 422, e.g., automatically, via a selection from a prompt or menu, etc. If the document 38 is not to be corrected, the method ends at 466. If the document 38 is to be edited, the document 38 is corrected at 468 and the corrected version is saved as another new layer, i.e. a layer that shows the document 38 post corrections.
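For illustration only, a minimal sketch of the FIG. 36 flow under the assumption that a document 38 can be modeled as an ordered list of layers; the LayeredDocument type and the edit-application step are hypothetical.

    import java.util.ArrayList;
    import java.util.List;

    // Create new layers from a marked-up scan, per FIG. 36.
    public class LayerBuilder {

        static class LayeredDocument {
            final List<byte[]> layers = new ArrayList<>();
            void addLayer(byte[] pageImage) { layers.add(pageImage); }
        }

        public void incorporateEdits(LayeredDocument doc, byte[] editedScan,
                                     boolean applyCorrections) {
            doc.addLayer(editedScan);              // 462: marked-up scan as new layer
            if (applyCorrections) {                // 464: edit the document?
                byte[] corrected = applyEdits(editedScan);
                doc.addLayer(corrected);           // 468: corrected version as another layer
            }                                      // otherwise the method ends (466)
        }

        private byte[] applyEdits(byte[] scan) {
            // Placeholder: recognize notational elements and action notes and
            // apply the corresponding edits to the underlying text.
            return scan;
        }
    }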

[00125] It can be appreciated that the zoom gestures used to navigate between different views of the same document layer and between document layers can also be used to move between hand-edited and electronically corrected versions of a page.

[00126] FIG. 37 illustrates an example set of operations that may be performed by the collaboration application 32 of mobile device 10, in utilizing the timeline tool 440 for navigating between layers of a document 38. At 480 the collaboration application 32 detects initiation of the timeline tool 440, e.g., by detecting a downward swipe gesture, and displays the timeline tool 440 at 482. A scroll operation applied to the timeline bar 442 is detected at 484, a new layer is determined according to the direction in which the timeline bar 442 moves and where the timeline bar 442 lands, and the new layer is displayed at 486.
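For illustration only, one way to map the landing position of the timeline bar 442 to a layer index, assuming the bar position is normalized against the layer history; the normalization is an assumption, since the patent specifies only that the direction and landing position of the bar determine the layer.

    // Map a normalized timeline bar position to a document layer index (486).
    public class TimelineTool {

        // barPosition: where the timeline bar landed, normalized to [0, 1]
        //              (0 = oldest layer, 1 = newest layer)
        // layerCount:  number of layers in the document
        public int layerForPosition(double barPosition, int layerCount) {
            double clamped = Math.max(0.0, Math.min(1.0, barPosition));
            return (int) Math.round(clamped * (layerCount - 1));
        }
    }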

[00127] Documents 38 that are created and modified using the collaboration application 32, page scanner 34, and editor 36 may be shared and collaborated on by other users also capable of scanning, editing, storing, and communicating documents 38. In FIG. 38, the image view user interface 350 of mobile device 10 displays a page image 310 for a report entitled "Report - 2013". If the collaboration application 32 is able to identify the associated document 38 in the data store 37 after scanning the page 12, the collaboration application 32 can determine if other users are viewing and/or collaborating on the same document 38. The collaboration application 32 can determine which users are currently viewing the document 38 in various ways. For example, the messaging application 50 may include a presence function that includes information related to documents 38 which the contacts 52 are viewing. The collaboration application 32 may also obtain such information from the collaboration server 20 or messaging server 24 by communicating over the network 18.

[00128] If others are viewing the same document, a People Viewing button 490 may be displayed as shown in FIG. 38 to enable the user to initiate communications with the others immediately after scanning the document 38. In this way, the user is provided with enhanced information with respect to the document 38. This allows the user to immediately obtain context regarding the document 38 and be brought up to speed regarding the state of editing and collaboration on the document 38. For example, the user may receive a copy of the "Report - 2013" and wish to begin reviewing. By first scanning the page 12 corresponding to the page image 310 shown in FIG. 38, the user can determine whether others are working on the same task and initiate a discussion prior to commencing a substantive review. This allows a user to more efficiently review and edit a document since they may determine that other edits should be made before they begin their review.

[00129] By selecting the People Viewing button 490 shown in FIG. 38, a preview user interface 500 of mobile device 10 is displayed as shown in FIG. 39. The preview user interface 500 includes a photo 502, identifying information 504 (e.g., name, location, etc.), and a message 506 indicating what the others are doing with the document 38. In the example shown in FIG. 39, the message 506 indicates that Contacts A and B are viewing the document 38 and for how long. A back option 510 enables the user to return to the image view user interface 350, thus enabling the user to "peek" or otherwise preview information regarding the others viewing the document 38 without having to initiate a conversation or other communication. A Chat on IM option 508 is also shown in FIG. 39, which may be selected to initiate an instant messaging conversation with the contacts 52 identified in the preview user interface 500 via the messaging application 50. It can be appreciated that the Chat on IM option 508, when selected, may initiate a new conversation (with, e.g., a subject related to the document 38) or may resume an existing conversation if one exists.

[00130] FIG. 40 illustrates an example set of operations that may be performed by the collaboration application 32 of mobile device 10, in enabling the People Viewing option to be used. At 520 the collaboration application 32 determines from a scanned page image 310 which document 38 is being viewed. In the example illustrated in FIG. 40 it is assumed that the page image 310 corresponds to a document 38 in the data store 37. The collaboration application 32 determines at 522 whether or not any contacts 52 are viewing the same document 38. As discussed above, this may be done locally by communicating with, for example, the messaging application 50, or by communicating with an outside source such as the messaging server 24 or collaboration server 20. If no contacts 52 are currently viewing the document 38, the collaboration application 32 determines at 524 if the image view user interface 350 is to be closed. If not, operations 520 and 522 may be repeated to account for the scenario wherein a contact 52 has since begun to view the document 38. If the image view user interface 350 is to be closed, the method ends at 526.

[00131] If at least one other contact 52 is viewing the same document 38, the People Viewing button 490 is displayed at 528 and the collaboration application 32 determines at 530 whether or not the People Viewing button 490 has been selected. If not, operations 524/526 or 524/520/522 are repeated as exemplified above. If the People Viewing button 490 is selected, the preview user interface 500 is displayed at 532 in order to provide the corresponding contacts 52 and the Chat on IM option 508. The collaboration application 32 determines at 534 whether or not the Chat on IM option 508 has been selected. If not, the operations discussed above may be repeated beginning at operation 524. If the Chat on IM option 508 has been selected, a chat is launched at 536.
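For illustration only, a minimal sketch of the FIG. 40 flow; the PresenceService and Ui interfaces are hypothetical stand-ins for the messaging application 50 (or collaboration server 20) and for the user interface elements described above.

    import java.util.List;

    // Drive the People Viewing flow of FIG. 40.
    public class PeopleViewingController {

        interface PresenceService {
            List<String> viewersOf(String documentId); // 522: who is viewing?
        }

        interface Ui {
            void showPeopleViewingButton(List<String> viewers); // 528
            void showPreview(List<String> viewers);             // 532
            void launchChat(List<String> viewers);              // 536
        }

        private final PresenceService presence;
        private final Ui ui;

        PeopleViewingController(PresenceService presence, Ui ui) {
            this.presence = presence;
            this.ui = ui;
        }

        // Called after a scanned page image has been matched to a stored
        // document (520).
        public void onDocumentIdentified(String documentId) {
            List<String> viewers = presence.viewersOf(documentId);
            if (!viewers.isEmpty()) {
                ui.showPeopleViewingButton(viewers);
            }
            // Selection of the button would call ui.showPreview(viewers), and
            // selection of Chat on IM would call ui.launchChat(viewers).
        }
    }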

[00132] As discussed above, handwritten edits to a page 12 may be applied to a document 38 using, for example, the editor 36. FIGS. 41-43 illustrate an example editing workflow stemming from handwritten edits 422 identified on a page 12 that is scanned by the page scanner 34. Turning now to FIG. 41, the page 12 having handwritten edits 422 is scanned by the page scanner 34 from the physical environment 14, which creates a page image 310 displayed in the image view user interface 350 of mobile device 10, shown in FIG. 42. In this example, an Auto Correct button 540 is displayed which, when selected, causes the editor 36 to apply changes to the document 38, creating a new "corrected layer".

[00133] By selecting the Auto Correct button 540 in FIG. 42, the image view user interface 350 of mobile device 10 is updated as illustrated in FIG. 43, providing an edit preview 544 to be compared with a native edit snapshot 542. In the example shown in FIG. 43, the subtitle "The Greatest Subtitle Ever" has been hand-edited to read "Greatest subtitle ever". A back option 550 is provided to enable the user to return to the page image 310 shown in FIG. 42, e.g., if the user only wishes to preview or peek at the edited version. An accept option 546 may be selected to confirm and apply the handwritten edits, and a skip option 548 may be selected to forgo the edits.

[00134] FIG. 44 illustrates an example set of operations that may be performed by the collaboration application 32 of mobile device 10, in enabling the Auto Correct option to be used. At 560 the editor 36 detects hand-edited revisions in a page image 310 and displays the Auto Correct button 540 at 562. The editor 36 determines at 564 whether or not the Auto Correct button 540 has been selected. If not, operation 562 may be repeated. If the Auto Correct button 540 has been selected, the correction preview 544 is shown at 566 (e.g., as illustrated in FIG. 43). The editor 36 determines at 568 whether or not the changes have been accepted, e.g., whether the Accept option 546 has been selected. If not, the method ends at 572. If the Accept option 546 has been selected, a new layer in the document 38 is saved at 570 that includes the corrected version of the page 12 that was scanned.

[00135] In addition to detecting handwritten action notes 426, information in the page 12 itself may be detected to determine additional actions that may be performed in association with the page 12 or document 38. FIGS. 45-47 illustrate one such example wherein a page 12 affixed to a board 580 (e.g., a poster - see FIG. 45) is scanned by the page scanner 34, the poster including event information.

[00136] As illustrated in FIG. 46, a page image 310 of the poster, which includes event details 582, is displayed in the image view user interface 350 of mobile device 10. In this example, the event information includes an event name, a location, a date, and a uniform resource locator (URL) for accessing additional event information. The page scanner 34 detects that the text associated with the event details 582 includes event information that is actionable and displays an Add to Calendar button 584.

[00137] By selecting the Add to Calendar button 584, an item preview user interface 586 of mobile device 10 may be displayed as shown in FIG. 47. The item preview user interface 586 includes a page image 588 corresponding to the scanned page 12 (shown in FIG. 45) and a prompt 590. The prompt 590 identifies which information in the event details 582 has been parsed from the page image 310 and will be used to create the calendar event. An OK button 592 is provided to enable the user to confirm addition of the new calendar event according to the prompt 590. A Back option 598 is also provided to enable the user to return to the image preview user interface 350 (in FIG. 46), and a Skip option 596 is provided to enable the user to forgo any further steps with respect to the scanned image. A Go to Site option 594 is displayed to allow the user to select the URL in the event details 582 to automatically launch a web browser or app.

[00138] FIG. 48 illustrates an example set of operations that may be performed by the page scanner 34 of mobile device 10, in enabling the Add to Calendar option to be used. At 600 the page scanner 34 detects from text in a scanned page 12 that event details 582 are included, which correspond to temporal and/or location information, and displays the Add to Calendar button 584 at 602. The page scanner 34 determines at 604 whether or not the Add to Calendar button 584 has been selected. If not, operation 602 is repeated. If so, the prompt 590 for creating a new calendar event is provided at 608, e.g., using the preview user interface 586 (shown in FIG. 47). The page scanner 34 determines at 610 whether or not the new calendar event has been accepted (e.g., by detecting selection of the OK button 592) and, if so, adds a new event to a calendar application at 614. If the new calendar event has not been accepted (e.g., the Back option 598 or Skip option 596 is selected without selecting the OK button 592), the method ends at 612.

[00139] In the example scenario illustrated in FIG. 48 it is assumed that the page scanner 34 also detects a URL in the event details 582 at 616 and displays the Go to Site option 594 at 618. The page scanner 34 determines at 620 whether or not the Go to Site option 594 has been selected and, if so, launches a web browser (or app) at 622. If the Go to Site option 594 is not selected (e.g., the Back option 598 or Skip option 596 is selected rather than the Go to Site option 594), operation 618 may be repeated.
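For illustration only, a minimal sketch of detecting actionable event details 582 in recognized page text; the regular expressions are illustrative assumptions, and a production recognizer would be considerably more robust.

    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    // Find a date (enables the Add to Calendar button 584) and a URL
    // (enables the Go to Site option 594) in recognized text.
    public class EventDetailDetector {

        private static final Pattern DATE = Pattern.compile(
                "\\b(January|February|March|April|May|June|July|August|"
                + "September|October|November|December)\\s+\\d{1,2}(,\\s*\\d{4})?\\b");
        private static final Pattern URL =
                Pattern.compile("\\bhttps?://\\S+|\\bwww\\.\\S+");

        static class EventDetails {
            String date; // null if no date-like text was found
            String url;  // null if no URL-like text was found
        }

        public EventDetails detect(String recognizedText) {
            EventDetails details = new EventDetails();
            Matcher date = DATE.matcher(recognizedText);
            if (date.find()) details.date = date.group();
            Matcher url = URL.matcher(recognizedText);
            if (url.find()) details.url = url.group();
            return details;
        }
    }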

[00140] It can be appreciated that the principles illustrated above using FIGS. 45-47 may be equally applied to other page types, for example, adding contact details from a business card, recipes, parking garage locations, etc.

[00141] Accordingly, there is provided a method of operating a mobile device, comprising: displaying a user interface that includes a view of a page in a physical environment; while displaying the user interface, detecting an input that is distinguishable from another input received at the user interface to capture images or modify the view; and after detecting the input, initiating a page scanning mode to capture an image of the page.

[00142] There is also provided a computer readable storage medium comprising computer executable instructions for performing the above method.

[00143] There is also provided a mobile device comprising a processor, memory, a display, and at least one camera, the memory including computer executable instructions for performing the above method.

[00144] As discussed above, the collaboration application 32 may be incorporated into or otherwise provided by or with a messaging application 50, e.g., a P2P-based communication application and underlying system. An example of a P2P communication system 700, including a wireless infrastructure 702, is shown in FIG. 49. The communication system 700, at least in part, enables the client devices 10, 22 (e.g., the first device 10 and second device 22 as shown in FIG. 49) to communicate via a peer-to-peer (P2P) system 704. In this example, the P2P system 704 is accessed by connecting to a wireless network 18.

[00145] In the example shown in FIG. 49, the first and second devices 10, 22 are illustrated as being mobile devices such as smart phones. However, it can be appreciated that other types of electronic devices configured to conduct P2P messaging may also be capable of communicating with or within the communication system 700. It will also be appreciated that although the examples shown herein are directed to mobile communication devices, the same principles may apply to other devices capable of communicating with the P2P system 704. For example, an application (not shown) hosted by a desktop computer or other "non-portable" or "non-mobile" device may also be capable of communicating with other devices (e.g., including first and second devices 10, 22) using the P2P system 704.

[00146] The P2P system 704 is, in this example, a component of the wireless infrastructure 702 associated with the wireless network 18. The wireless infrastructure 702 in this example includes, in addition to the P2P system 704, and among other things not shown for simplicity, a personal identification number (PIN) database 706. The PIN database 706 in this example is used to store one or more PINs associated with particular devices, whether they are subscribers to a service provided by the wireless infrastructure 702 or otherwise. To illustrate operation of the P2P system 704 with respect to FIGS. 49 to 51, the first and second devices 10, 22 will be referred to commonly as "mobile devices 10".

[00147] One of the mobile devices 10 may communicate with the other of the mobile devices 10 and vice versa via the P2P system 704, in order to perform P2P messaging or to otherwise exchange P2P-based communications. For ease of explanation, in the following examples, any P2P-based communication may also be referred to as a P2P message 708 (as shown in FIG. 51).

[00148] In some examples, the P2P system 704 may be capable of sending multi-cast messages, i.e. forwarding a single message from a sender to multiple recipients without requiring multiple P2P messages 708 to be generated by such sender. For example, as shown in FIG. 50, the P2P system 704 can be operable to enable a single P2P message 708 to be sent by a first client device 10a to multiple recipient client devices 10b, 10c, and 10d, by addressing the P2P message 708 to multiple corresponding P2P addresses, and having the P2P system 704 multicast the P2P message 708 to those recipient client devices 10b, 10c, and 10d.

[00149] An example P2P message 708 is shown in greater detail in FIG. 51, and has a format that is particularly suitable for a PIN-to-PIN based system. In a typical P2P protocol, each P2P message 708 has associated therewith a source corresponding to the mobile device 10 which has sent the P2P message 708 and includes a destination identifying the one or more intended recipients. Each P2P message 708 in this example includes a body 712, which contains the content for the P2P message 708 (e.g., text or other data), and a header 714, which contains various fields used for transmitting and processing each P2P message 708. In this example, the header 714 includes a message type field 716 to specify the type of transmission (e.g., chat, registration, block, presence, sharing session, etc.), a source field 718 to specify the device address for the sender, a destination field 720 to specify the device address(es) for the one or more intended recipients, an ID field 722 to identify the corresponding P2P application (e.g., see messaging application 50 in FIG. 2) and a timestamp field 724 to indicate the time (and if desired, the date) at which the P2P message 708 was sent by the designated sender.
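For illustration only, the field layout described above might be modeled as follows; the field types and the use of strings for device addresses are assumptions, as the patent does not fix an encoding.

    // A sketch of the P2P message 708: a header 714 with message type (716),
    // source (718), destination (720), application ID (722), and timestamp
    // (724) fields, plus a body 712.
    public class P2PMessage {

        enum MessageType { CHAT, REGISTRATION, BLOCK, PRESENCE, SHARING_SESSION }

        static class Header {
            MessageType type;       // 716: type of transmission
            String source;          // 718: sender's device address (e.g., PIN)
            String[] destinations;  // 720: one or more intended recipients
            String applicationId;   // 722: P2P application (or conversation) ID
            long timestampMillis;   // 724: when the message was sent
        }

        Header header = new Header();
        byte[] body;                // 712: message content (text or other data)
    }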

[00150] It can be appreciated that in this example, the ID field 722 can be used to specify the application ID to identify a P2P application on the mobile device 10. Where the P2P application relates to, for example, an IM system, the message type field 716 can also be used to designate an IM communication, and the ID field 722 may then correspond to a conversation ID, i.e. a conversation thread the P2P message 708 corresponds to (e.g., such that each P2P message 708 is identified by the conversation in which it was sent).

[00151] It will be appreciated that other information or attributes may be included in the P2P message 708, such as a subject field (not shown) to enable a subject for part or all of a conversation (in an IM embodiment) to be transported with the P2P message 708 (e.g., to create new subjects, modify subjects, notify others of subjects, etc.), or an application details field (not shown) to provide application-specific information such as the version and capabilities of the application.

[00152] The P2P system 704 can utilize any suitable P2P protocol operated by, for example, a P2P router (not shown), which may be part of the wireless infrastructure 702. It can be appreciated however that a stand-alone P2P configuration (i.e. that does not rely on the wireless infrastructure 702 - not shown) may equally apply the principles herein. The P2P system 704 may also enable mobile devices 10 to communicate with desktop computers, thus facilitating, for example, communications such as instant messaging between mobile applications and desktop applications on the desktop computer.

[00153] The P2P system 704 can be implemented using a router-based communication infrastructure, such as one that provides email, Short Message Service (SMS), voice, Internet and other communications. Particularly suitable for hosting a P2P messaging router is a wireless router or server used in systems such as those that provide push-based communication services. In FIG. 49, the wireless infrastructure 702 facilitates P2P communications such as instant messaging between mobile devices 10. P2P messaging, such as IMing, is provided by an associated application stored on each mobile device 10, e.g., an IM application, which can be initiated, for example, by highlighting and selecting an icon from a display as is well known in the art. The P2P system 704 routes messages between the mobile devices 10 according to the P2P protocol being used. For example, the P2P protocol may define a particular way in which to conduct IM or other types of messaging.

[00154] In general, in a P2P protocol, the sender of the P2P message 708 knows the source address of the intended recipient, e.g., a PIN. Knowledge of the source address may be established when the two devices request to add each other to their respective contact or buddy lists. A particular mobile device 10 can communicate directly with various other mobile devices 10 through the P2P system 704 without requiring a dedicated server for facilitating communications. In other words, the P2P system 704 enables the mobile devices 10 to communicate with each other directly over the wireless infrastructure 702 in accordance with the P2P protocol.

[00155] When conducting a P2P session according to the example shown in FIG. 49, the mobile devices 10 can communicate directly with the wireless infrastructure 702 in a client-based exchange where, as noted above, an intermediate server is not required. A P2P message 708 sent by one mobile device 10 is received by the wireless infrastructure 702, which obtains the source address for the intended recipient (or recipients) from information associated with the P2P message 708 (e.g., a data log) or from the P2P message 708 itself. Upon obtaining the recipient's address according to the P2P protocol, the wireless infrastructure 702 then routes the P2P message 708 to the recipient associated with the mobile device 10 having such address (or recipients having respective addresses). The wireless infrastructure 702 typically also provides a delivery confirmation to the original sender, which may or may not be displayed to the user. The destination device can also provide such delivery information. The wireless infrastructure 702 may be capable of routing P2P messages 708 reliably as well as being capable of holding onto the P2P messages 708 until they are successfully delivered. Alternatively, if delivery cannot be made after a certain timeout period, the wireless infrastructure 702 may provide a response indicating a failed delivery. The wireless infrastructure 702 may choose to expire or delete a P2P message 708 if a certain waiting period lapses.
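For illustration only, a minimal sketch of the hold-and-expire delivery behavior described in this paragraph; the Transport interface, retry strategy, and timeout policy are assumptions.

    import java.util.Iterator;
    import java.util.LinkedList;
    import java.util.List;

    // Hold undelivered messages and expire them after a waiting period.
    public class StoreAndForwardQueue {

        interface Transport {
            boolean tryDeliver(String recipientAddress, byte[] message);
        }

        static class Pending {
            final String recipient;
            final byte[] message;
            final long expiresAtMillis;
            Pending(String recipient, byte[] message, long expiresAtMillis) {
                this.recipient = recipient;
                this.message = message;
                this.expiresAtMillis = expiresAtMillis;
            }
        }

        private final List<Pending> pending = new LinkedList<>();
        private final Transport transport;
        private final long timeoutMillis;

        StoreAndForwardQueue(Transport transport, long timeoutMillis) {
            this.transport = transport;
            this.timeoutMillis = timeoutMillis;
        }

        public void submit(String recipient, byte[] message) {
            if (!transport.tryDeliver(recipient, message)) {
                pending.add(new Pending(recipient, message,
                        System.currentTimeMillis() + timeoutMillis));
            }
        }

        // Retry held messages; drop (or report as failed) any past their timeout.
        public void retryPending() {
            Iterator<Pending> it = pending.iterator();
            while (it.hasNext()) {
                Pending p = it.next();
                if (transport.tryDeliver(p.recipient, p.message)
                        || System.currentTimeMillis() > p.expiresAtMillis) {
                    it.remove();
                }
            }
        }
    }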

[00156] Referring to FIG. 52, to further aid in the understanding of the example mobile devices 10, 22 described above, shown therein is a block diagram of an example configuration of a device configured as a "mobile device", referred to generally as "mobile device 10". The mobile device 10 includes a number of components such as a main processor 802 that controls the overall operation of the mobile device 10. Communication functions, including data and voice communications, are performed through at least one communication interface 30. The communication interface 30 receives messages from and sends messages to a wireless network 18. In this example of the mobile device 10, the communication interface 30 is configured in accordance with the Global System for Mobile Communication (GSM) and General Packet Radio Services (GPRS) standards, which are used worldwide. Other communication configurations that are equally applicable are the 3G and 4G networks such as Enhanced Data-rates for Global Evolution (EDGE), Universal Mobile Telecommunications System (UMTS) and High-Speed Downlink Packet Access (HSDPA), Long Term Evolution (LTE), Worldwide Interoperability for Microwave Access (Wi-Max), etc. New standards are still being defined, but it is believed that they will have similarities to the network behavior described herein, and it will also be understood by persons skilled in the art that the examples described herein are intended to use any other suitable standards that are developed in the future. The wireless link connecting the communication interface 30 with the wireless network 18 represents one or more different Radio Frequency (RF) channels, operating according to defined protocols specified for GSM/GPRS communications.

[00157] The main processor 802 also interacts with additional subsystems such as a Random Access Memory (RAM) 806, a flash memory 808, a touch-sensitive display 860, an auxiliary input/output (I/O) subsystem 812, a data port 814, a keyboard 816 (physical, virtual, or both), a speaker 818, a microphone 820, a GPS receiver 821, a front camera 46, a rear camera 48, a short-range communications subsystem 822, and other device subsystems 824. Some of the subsystems of the mobile device 10 perform communication-related functions, whereas other subsystems may provide "resident" or on-device functions. By way of example, the touch-sensitive display 860 and the keyboard 816 may be used for both communication-related functions, such as entering a text message for transmission over the wireless network 18, and device-resident functions such as a calculator or task list. In one example, the mobile device 10 can include a non-touch-sensitive display in place of, or in addition to, the touch-sensitive display 860. For example, the touch-sensitive display 860 can be replaced by a display 40 that may not have touch-sensitive capabilities.

[00158] The mobile device 10 can send and receive communication signals over the wireless network 18 after required network registration or activation procedures have been completed. Network access is associated with a subscriber or user of the mobile device 10. To identify a subscriber, the mobile device 10 may use a subscriber module component or "smart card" 826, such as a Subscriber Identity Module (SIM), a Removable User Identity Module (RUIM) and a Universal Subscriber Identity Module (USIM). In the example shown, a SIM/RUIM/USIM 826 is to be inserted into a SIM/RUIM/USIM interface 828 in order to communicate with a network.

[00159] The mobile device 10 is typically a battery-powered device and includes a battery interface 832 for receiving one or more rechargeable batteries 830. In at least some examples, the battery 830 can be a smart battery with an embedded microprocessor. The battery interface 832 is coupled to a regulator (not shown), which assists the battery 830 in providing power to the mobile device 10. Although current technology makes use of a battery, future technologies such as micro fuel cells may provide the power to the mobile device 10.

[00160] The mobile device 10 also includes an operating system 834 and software components 836 to 842, 32, 50 and 38. The operating system 834 and the software components 836 to 842, 32, 50 and 38 that are executed by the main processor 802 are typically stored in a persistent store such as the flash memory 808, which may alternatively be a read-only memory (ROM) or similar storage element (not shown). Those skilled in the art will appreciate that portions of the operating system 834 and the software components 836 to 842, 32, 50 and 38, such as specific device applications, or parts thereof, may be temporarily loaded into a volatile store such as the RAM 806. Other software components can also be included, as is well known to those skilled in the art.

[00161] The subset of software applications 836 that control basic device operations, including data and voice communication applications, may be installed on the mobile device 10 during its manufacture. Software applications may include a message application 838, a device state module 840, a Personal Information Manager (PIM) 842, a collaboration application 32, a messaging application 50, and a data store 37 of documents 38. A message application 838 can be any suitable software program that allows a user of the mobile device 10 to send and receive electronic messages, wherein messages are typically stored in the flash memory 808 of the mobile device 10. A device state module 840 provides persistence, i.e. the device state module 840 ensures that important device data is stored in persistent memory, such as the flash memory 808, so that the data is not lost when the mobile device 10 is turned off or loses power. A PIM 842 includes functionality for organizing and managing data items of interest to the user, such as, but not limited to, e-mail, contacts, calendar events, and voice mails, and may interact with the wireless network 18.

[00162] Other types of software applications or components 839 can also be installed on the mobile device 10. These software applications 839 can be pre-installed applications (i.e. other than message application 838) or third party applications, which are added after the manufacture of the mobile device 10. Examples of third party applications include games, calculators, utilities, etc.

[00163] The additional applications 839 can be loaded onto the mobile device 10 through at least one of the wireless network 18, the auxiliary I/O subsystem 812, the data port 814, the short-range communications subsystem 822, or any other suitable device subsystem 824.

[00164] The data port 814 can be any suitable port that enables data communication between the mobile device 10 and another computing device. The data port 814 can be a serial or a parallel port. In some instances, the data port 814 can be a Universal Serial Bus (USB) port that includes data lines for data transfer and a supply line that can provide a charging current to charge the battery 830 of the mobile device 10.

[00165] For voice communications, received signals are output to the speaker 818, and signals for transmission are generated by the microphone 820. Although voice or audio signal output is accomplished primarily through the speaker 818, the display 40 can also be used to provide additional information such as the identity of a calling party, duration of a voice call, or other voice call related information.

[00166] The touch-sensitive display 860 may be any suitable touch-sensitive display, such as a capacitive, resistive, infrared, surface acoustic wave (SAW) touch-sensitive display, strain gauge, optical imaging, dispersive signal technology, acoustic pulse recognition, and so forth, as known in the art. In the presently described example, the touch- sensitive display 860 is a capacitive touch-sensitive display which includes a capacitive touch-sensitive overlay 864. The overlay 864 may be an assembly of multiple layers in a stack which may include, for example, a substrate, a ground shield layer, a barrier layer, one or more capacitive touch sensor layers separated by a substrate or other barrier, and a cover. The capacitive touch sensor layers may be any suitable material, such as patterned indium tin oxide (ITO).

[00167] The display 40 of the touch-sensitive display 860 may include a display area in which information may be displayed, and a non-display area extending around the periphery of the display area. Information is not displayed in the non-display area, which is utilized to accommodate, for example, one or more of electronic traces or electrical connections, adhesives or other sealants, and protective coatings, around the edges of the display area.

[00168] One or more touches, also known as touch contacts or touch events, may be detected by the touch-sensitive display 860. The processor 802 may determine attributes of the touch, including a location of a touch. Touch location data may include an area of contact or a single point of contact, such as a point at or near a center of the area of contact, known as the centroid. A signal is provided to the controller 866 in response to detection of a touch. A touch may be detected from any suitable object, such as a finger, thumb, appendage, or other items, for example, a stylus, pen, or other pointer, depending on the nature of the touch-sensitive display 860. The location of the touch moves as the detected object moves during a touch. One or both of the controller 866 and the processor 802 may detect a touch by any suitable contact member on the touch-sensitive display 860. Similarly, multiple simultaneous touches are detected.

[00169] In some examples, an optional force sensor 870 or force sensors is disposed in any suitable location, for example, between the touch-sensitive display 860 and a back of the mobile device 10 to detect a force imparted by a touch on the touch-sensitive display 860. The force sensor 870 may be a force-sensitive resistor, strain gauge, piezoelectric or piezoresistive device, pressure sensor, or other suitable device.

[00170] It will be appreciated that any module or component exemplified herein that executes instructions may include or otherwise have access to computer readable media such as storage media, computer storage media, or data storage devices (removable and/or non-removable) such as, for example, magnetic disks, optical disks, or tape. Computer storage media may include volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage of information, such as computer readable instructions, data structures, program modules, or other data. Examples of computer storage media include RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by an application, module, or both. Any such computer storage media may be part of the mobile device 10, collaborating device 22, collaboration server 20, messaging server 24, any component of or related to these entities, etc., or accessible or connectable thereto. Any application or module herein described may be implemented using computer readable/executable instructions that may be stored or otherwise held by such computer readable media.

[00171] The steps or operations in the flow charts and diagrams described herein are just for example. There may be many variations to these steps or operations without departing from the principles discussed above. For instance, the steps may be performed in a differing order, or steps may be added, deleted, or modified.

[00172] Although the above principles have been described with reference to certain specific examples, various modifications thereof will be apparent to those skilled in the art as outlined in the appended claims.