

Title:
TACTILE GRAPHICS READER
Document Type and Number:
WIPO Patent Application WO/2022/053330
Kind Code:
A1
Abstract:
A tactile graphic reader including a reading area for receiving a tactile graphic having an identifier and a processor configured to execute program instructions. The program instructions are configured to cause the at least one processor to detect the identifier of the tactile graphic and retrieve a digital tactile graphic file representing the tactile graphic from a database of digital tactile graphics files using the identifier. The digital tactile graphic file includes payloads associated with a variety of digital positions. The program instructions are further configured to determine a physical position of at least one finger or at least one object on the tactile graphic, map the physical position to a digital position in the digital tactile graphic file, and generate an audio output based on a payload associated with the digital position.

Inventors:
ALEXANDER HARS (DE)
HARS KLAUS-PETER (DE)
Application Number:
PCT/EP2021/073693
Publication Date:
March 17, 2022
Filing Date:
August 26, 2021
Assignee:
INVENTIVIO GMBH (DE)
International Classes:
G09B21/00; G06F3/00; G06F3/042
Domestic Patent References:
WO2003025886A12003-03-27
Foreign References:
US20160240102A12016-08-18
US6115513A2000-09-05
Other References:
ANONYMOUS: "EU-gefördertes Start-up Inventivio stellt auf SightCity neues Modell des Tactonom und 2 neue Technologien vor - openPR", 9 May 2019 (2019-05-09), pages 1 - 2, XP055862999, Retrieved from the Internet [retrieved on 20211118]
Attorney, Agent or Firm:
VALET PATENT SERVICES LIMITED (DE)
Claims:
CLAIMS

What is claimed is:

1. A tactile graphic reader, comprising: a reading area for receiving a tactile graphic having an identifier; at least one processor configured to execute program instructions, wherein the program instructions are configured to cause the at least one processor to: detect the identifier of the tactile graphic; retrieve a digital tactile graphic file representing the tactile graphic from a database of digital tactile graphics files using the identifier, wherein the digital tactile graphic file includes payloads associated with a variety of digital positions; determine a physical position of at least one finger or at least one object on the tactile graphic; map the physical position to a digital position in the digital tactile graphic file; and generate an audio output based on a payload associated with the digital position.

2. The tactile graphic reader of Claim 1, comprising: a camera positioned to read the tactile graphic received by the reading area, wherein the program instructions are configured to cause the at least one processor to perform at least one of: determine a physical position of the at least one finger or the at least one object based on images captured by the camera; detect the identifier based on images captured by the camera; and calibrate the position of the tactile graphic relative to the reading area by detecting markers on the tactile graphic based on images captured by the camera.


3. The tactile graphic reader of Claim 1, comprising a speaker for annunciating the audio output.

4. The tactile graphic reader of Claim 1, comprising a clamp or other locking mechanism for holding the tactile graphic.

5. The tactile graphic reader of Claim 1, wherein the reading area and the at least one object are held together by magnetism.

6. The tactile graphic reader of Claim 1, wherein the payload is text data and wherein the program instructions are configured to cause the at least one processor to: perform text to audio processing on the text data to generate the audio output.

7. The tactile graphic reader of Claim 1, wherein the payload is audio data.

8. The tactile graphic reader of Claim 1, wherein the program instructions are configured to cause the at least one processor to: detect positions of a plurality of markers on the tactile graphic; calculate a perspective transform based on the positions of the plurality of markers; and map the physical position to the digital position using the perspective transform.

9. The tactile graphic reader of Claim 1, comprising a communications interface, wherein the program instructions are configured to cause the at least one processor to: send a request for the digital tactile graphic file to a remote server using the communications interface and the identifier; and obtain the digital tactile graphic file from the remote server in response to the request.

10. The tactile graphic reader of Claim 1, wherein the identifier is coded in a one or two dimensional barcode and the program instructions are configured to cause the at least one processor to: determine the identifier by decoding the one or two dimensional barcode.

11. A tactile graphics system, comprising: a server comprising a database of digital tactile graphics files; a tactile graphics reader, the tactile graphics reader comprising: a reading area for receiving a tactile graphic having an identifier; a communications interface; and at least one processor configured to execute program instructions, wherein the program instructions are configured to cause the at least one processor to: detect the identifier of the tactile graphic; send a request, over a network coupling the server and the tactile graphics reader, for a digital tactile graphic file, the request being sent from the tactile graphics reader to the server using the communications interface and the identifier; obtain the digital tactile graphic file from the server in response to the request, wherein the digital tactile graphic file includes payloads associated with a variety of digital positions; determine a physical position of at least one finger or at least one object on the tactile graphic; map the physical position to a digital position in the digital tactile graphic file; and generate an audio output based on a payload associated with the digital position.

12. The tactile graphics system of Claim 11, wherein the server comprises a graphics editing module configured to allow remote user computers to create and edit tactile graphics and store the tactile graphics in the database of digital tactile graphics files.

13. The tactile graphics system of Claim 11, wherein the server comprises a print version creation module configured to generate a two or three dimensional print file for printers to output a tactile graphic in sheet form using a digital tactile graphics file from the database.

14. The tactile graphics system of Claim 11, wherein the digital tactile graphic files in the database are vector graphics files.

15. The tactile graphics system of Claim 14, wherein the server comprises a reader version creation module configured to convert the vector graphics files into reader files including a two dimensional matrix of digital positions and payloads associated with the digital positions.

16. The tactile graphics system of Claim 15, wherein the server is configured to send a reader file in response to the request.

17. A method of reading a tactile graphic, comprising: receiving a tactile graphic in a reading area of a tactile graphic reader, the tactile graphic having an identifier; detecting, via at least one processor, the identifier of the tactile graphic; retrieving, via the at least one processor, a digital tactile graphic file representing the tactile graphic from a database of digital tactile graphics files using the identifier, wherein the digital tactile graphic file includes payloads associated with a variety of digital positions; determining, via the at least one processor, a physical position of at least one finger or at least one object on the tactile graphic; mapping, via the at least one processor, the physical position to a digital position in the digital tactile graphic file; and generating an audio output based on a payload associated with the digital position.

18. The method of Claim 17, comprising:

sending, via the at least one processor, a request for the digital tactile graphic file to a remote server from the reader; and obtaining the digital tactile graphic file from the remote server in response to the request.

19. The method of Claim 17, comprising detecting the identifier and determining the physical position using images output from a camera of the reader.

20. The method of Claim 17, wherein the tactile graphic comprises a variety of different tactile areas and each different tactile area has a corresponding digital area in the digital tactile graphic file, wherein each digital area is associated with a different payload, such that when the user moves from one tactile area to another tactile area on the tactile graphic, a different audio output is generated based on the different payloads in the corresponding digital areas included in the digital tactile graphic file.


Description:
TACTILE GRAPHICS READER

TECHNICAL FIELD

[0001] The present technology is generally related to a reader for tactile graphics, which is assistive technology for blind or visually impaired persons. In particular, the reader outputs audio information relating to a three-dimensional tactile graphic.

BACKGROUND

[0002] Tactile graphics provide three-dimensional versions of pictures, maps and diagrams to the visually impaired. The three-dimensional form of the tactile graphic can be followed by one or more fingers of the visually impaired person. However, some practicalities limit the usefulness and appeal of these devices. It may be difficult for a visually impaired person to make sense of tactile shapes and textures without some extra information to confirm or augment what has been touched. One method of providing extra information is to label the tactile presentation with Braille. However, these Braille tags must be large and have plenty of blank space around the tags for them to be legible. Further, the use of tags is not particularly effective or useful with fairly complex or graphically rich images. Yet further, reliance on Braille labeling restricts the usefulness of tactile graphics to the group of visually impaired individuals that are competent Braille readers.

[0003] One of the initial attempts to enrich the tactile graphic experience and allow for a broader range of users involved a touch screen device (known as Iveo), which was connected to a host computer. This type of device promised to enhance the tactile experience by allowing a user to feel pictures, graphs, diagrams, etc., and then the user pressed on various tactile features to hear descriptions, labels, and other explanatory audio material. While this type of device enjoyed some limited success, it suffers from several drawbacks which have prevented the device from gaining widespread use and popularity in the visually impaired community. The device typically uses a touch sensitive surface having a low resolution, so that precise correspondence of graphic images and audio tags is difficult to achieve. Such a tactile graphics reading device must be connected to a personal computer and thus suffers from usability deficiencies. Further, a digital file corresponding to the tactile graphic must be loaded, which will generally necessitate the blind person to have assistance. This will adversely impact the visually impaired person’s sense of independence. Yet further, the known device does not support standard graphics files, which significantly restricts the openness of the system to a large source of tactile graphics.

[0004] Patent application WO 03/025886 discloses another audio tactile system having a touch sensitive work pad connected to a host computer. The touch sensitive work pad includes a frame that is openable to insert a touch overlay. The touch overlay includes a plurality of raised dots for position calibration and a plurality of raised bars to allow the touch overlay to be identified by the system. The user is required to press the raised dots each time a new touch overlay is inserted to allow for position calibration. Further, the user is required to press the raised bars to allow the host computer to identify the touch overlay based on a position of the raised bars. This system suffers from some drawbacks, including the drawbacks of using a touch pad and a host computer as described above for the Iveo device. Additionally, the calibration and identification processes are burdensome for a user and the touch overlays are highly specialized and lack design flexibility to facilitate the development of a large stock of touch overlays on all manner of subjects.

[0005] Accordingly, it is desirable to provide a reader for tactile graphics that is easy to use, is self-contained, and facilitates the creation of a large stock of interesting and varied tactile graphics on all manner of subjects. In addition, it is desirable to provide systems and methods by which a visually impaired person has access to a large stock of tactile graphics for interactive use. Furthermore, other desirable features and characteristics will become apparent from the subsequent detailed description and the appended claims, taken in conjunction with the accompanying drawings and the foregoing technical field and background.

BRIEF SUMMARY

[0006] In one aspect, the present disclosure provides a tactile graphic reader including a reading area for receiving a tactile graphic having an identifier and a processor configured to execute program instructions. The program instructions are configured to cause the at least one processor to detect the identifier of the tactile graphic and retrieve a digital tactile graphic file representing the tactile graphic from a database of digital tactile graphics files using the identifier. The digital tactile graphic file includes payloads associated with a variety of digital positions. The program instructions are further configured to determine a physical position of at least one finger or at least one object on the tactile graphic, map the physical position to a digital position in the digital tactile graphic file, and generate an audio output based on a payload associated with the digital position.

[0007] In another aspect, the disclosure provides a tactile graphics system. The system includes a server comprising a database of digital tactile graphics files and a tactile graphics reader. The reader includes a reading area for receiving a tactile graphic having an identifier, a communications interface, and at least one processor configured to execute program instructions. The program instructions are configured to cause the at least one processor to detect the identifier of the tactile graphic, send a request, over a network coupling the server and the tactile graphics reader, for a digital tactile graphic file. The request is sent from the tactile graphics reader to the server using the communications interface and the identifier. The reader obtains the digital tactile graphic file from the server in response to the request. The digital tactile graphic file includes payloads associated with a variety of digital positions. A physical position of at least one finger or at least one object on the tactile graphic is determined, mapped to a digital position in the digital tactile graphic file, and an audio output is generated based on a payload associated with the digital position.

[0008] In a yet further aspect, a method of reading a tactile graphic is provided. The method includes receiving a tactile graphic in a reading area of a tactile graphic reader. The tactile graphic has an identifier. The identifier of the tactile graphic is detected by a processor. A digital tactile graphic file representing the tactile graphic is retrieved by the processor from a database of digital tactile graphics files using the identifier. The digital tactile graphic file includes payloads associated with a variety of digital positions. A physical position of at least one finger or at least one object on the tactile graphic is determined by the processor. The physical position is mapped to a digital position in the digital tactile graphic file by the processor. An audio output is generated based on a payload associated with the digital position.

[0009] The details of one or more aspects of the disclosure are set forth in the accompanying drawings and the description below. Other features, objects, and advantages of the techniques described in this disclosure will be apparent from the description and drawings, and from the claims.

BRIEF DESCRIPTION OF THE DRAWINGS

[0010] FIG. 1 is a perspective view of a tactile graphic reader, in accordance with an exemplary embodiment of the present disclosure;

[0011] FIG. 2 is a perspective view of a tactile graphic reader including a tactile graphic sheet, in accordance with an exemplary embodiment of the present disclosure;

[0012] FIG. 3 is an exemplary tactile graphic sheet, in accordance with an exemplary embodiment of the present disclosure;

[0013] FIG. 4 is an exemplary tactile graphics system, in accordance with an embodiment of the present disclosure; and

[0014] FIG. 5 is a method of reading a tactile graphic, in accordance with an exemplary embodiment of the present disclosure.

DETAILED DESCRIPTION

[0001] The following detailed description is merely illustrative in nature and is not intended to limit the embodiments of the subject matter or the application and uses of such embodiments. As used herein, the word “exemplary” means “serving as an example, instance, or illustration.” Any implementation described herein as exemplary is not necessarily to be construed as preferred or advantageous over other implementations. Furthermore, there is no intention to be bound by any expressed or implied theory presented in the preceding technical field, background, brief summary or the following detailed description.

[0002] In one or more examples, techniques described herein may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored as one or more instructions or code on a computer-readable medium and executed by a hardware-based processing unit. Computer-readable media may include non-transitory computer-readable media, which corresponds to a tangible medium such as data storage media (e.g., RAM, ROM, EEPROM, flash memory, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer).

[0003] Instructions may be configurable to be executed by one or more processors, such as one or more digital signal processors (DSPs), general purpose microprocessors, application specific integrated circuits (ASICs), field programmable logic arrays (FPGAs), or other equivalent integrated or discrete logic circuitry. Accordingly, the term “processor” as used herein may refer to any of the foregoing structure or any other physical structure suitable for implementation of the described techniques. Also, the techniques could be fully implemented in one or more circuits or logic elements.

[0004] Techniques and technologies may be described herein in terms of functional and/or logical block components, and with reference to symbolic representations of operations, processing tasks, and functions that may be performed by various computing components or devices. Such operations, tasks, and functions are sometimes referred to as being computer-executed, computerized, software-implemented, or computer-implemented. It should be appreciated that the various block components shown in the figures may be realized by any number of hardware, software, and/or firmware components configured to perform the specified functions. For example, an embodiment of a system or a component may employ various integrated circuit components, e.g., memory elements, digital signal processing elements, logic elements, look-up tables, or the like, which may carry out a variety of functions under the control of one or more microprocessors or other control devices.

[0005] For the sake of brevity, conventional techniques related to signal processing, data transmission, signaling, network control, and other functional aspects of the systems (and the individual operating components of the systems) may not be described in detail herein. Furthermore, the connecting lines shown in the various figures contained herein are intended to represent exemplary functional relationships and/or physical couplings between the various elements. It should be noted that many alternative or additional functional relationships or physical connections may be present in an embodiment of the subject matter.

[0006] Before describing the device, system and method embodiments of FIGS. 1 to 5, a general overview of the tactile graphic reader, and associated systems and processes, will be described. The reader is a device including a reading area on which tactile graphics can be placed, an optional clamp (or locking mechanism) to hold the tactile graphic, a camera positioned above the reading area, and an embedded processor running one or more algorithms. The clamp may operate mechanically on the tactile graphic, magnetically or in any suitable way for stabilizing a position of the tactile graphic in the reading area. The one or more algorithms (which are embodied by computer program instructions as described above) identify the tactile graphic based on an identifier included on the tactile graphic. The identifier may be a one or two-dimensional bar code such as a QR code. The processor fetches the digital representation of a graphic based on the identifier. The camera may supply images (e.g. video) of the tactile graphic to determine the physical position of a finger (or fingers). The physical position (such as the finger position) may be mapped to a digital position in a digital representation of the tactile graphic included in a database of digital tactile graphic files.
Audio is generated for selected digital positions in the digital representation based on payloads included in the digital file at different digital positions. The reader acquires the digital representation of the tactile graphic by retrieving it from a local or remote database using the identifier. The reader includes a speaker for outputting the audio. Interaction means (such as keys, microphone, gesture recognition) may be provided to enable user selections in addition to selecting a graphical item in the tactile graphic (e.g. a power button, menu selection buttons, volume buttons, a start button, a stop button, etc.). A user is able to place one of many tactile graphics on the reading area and subsequently receive audio explanations to those parts of the graphic to which the user points. The identifier is obtained automatically and allows automatic selection of the relevant digital tactile graphic file, which allows the visually impaired person to work/play with many tactile graphics independently and without burdensome initiation procedures. Further, the reader, in one embodiment, operates through optical finger/object position detection and optical detection of the identifier, which offers a flexible system in terms of hardware and in terms of creating useable tactile graphics.

[0007] In embodiments, the reader includes a metal plate (or magnetic plate) below the reading area and one or more magnetic blocks/objects (or metal blocks/objects) that can be affixed to the reading area by magnetic force. These provide the possibility to create play objects whose position can be detected (e.g. optically) to allow for interaction with the digital tactile graphics file through changing the position of the objects. This may allow enhanced selection of an area of the tactile graphic for the user and frees the user’s fingers for interacting with the tactile graphic or the reader or for other purposes. Touch pad based systems are generally not able to work with magnetic objects and thus do not allow for this feature enhancement.

[0008] In embodiments, the reader includes a local digital tactile graphics repository/database stored on memory of the reader and/or a communications interface connected to a remote/server based digital tactile graphics repository/database. In this way, the reader is compatible with many tactile graphics by automatically loading the corresponding digital tactile graphic file from the local or remote database using the identifier. The server may include a graphics editing module providing a service for submitting diagrams including validation and optimization thereof so that physical and digital tactile graphics can easily be created. A print version creation module at the server may provide a method for transforming diagrams for printing tactile diagrams (to be put on the reader) from a digital version included in the local or remote database. Modules as described herein are implemented through computer program instructions, which have been defined in the foregoing.

[0009] An exemplary use of the reader described herein is disclosed. A user is able to place a tactile graphic on a reading area of the reader. The tactile graphic includes tactile features that can be felt by a visually impaired person by the sense of touch. A camera captures an image of the tactile graphic and an embedded processor of the reader searches the image or images of the tactile graphic for a coded identifier such as a QR code. When the QR code is located, the processor determines whether a digital tactile graphic file (or electronic representation) is already available on memory of the reader (e.g. in a local graphics file cache). If yes, the local version of the digital tactile graphic file is used by the processor in subsequent operations. If an electronic representation is not available on the local memory of the reader, then the processor submits a request to a remote server, which may be transmitted over the internet through a communications interface of the reader. In one embodiment, the reader translates the identifier into a URL (http://discover.tactonom.com/get/<identifier>), which points to the server. In response to the request, the digital tactile graphic file associated with the designated identifier is returned and the local processor places the file into the local memory (cache).

[0010] In embodiments, the processor searches, in the image or images from the camera, for a plurality of markers on the tactile graphic sheet. Dedicated markers may be used, such as colored circles, but other markers could be used, including the QR code itself and other distinctive forms. The markers should be graphical items that are present on every tactile graphic and which can be identified easily by image processing. The markers may have the same position in every tactile graphic, but they need not do so. The position of the markers is defined in a vector graphics (SVG) file and thus can be set independently for each tactile graphic. At least 3 or 4 markers may be provided to allow proper calibration of the position and size of the tactile graphic as captured by the camera relative to the digital version of the tactile graphic. In one embodiment, a pixel position of the center of the plurality (e.g. 4) of markers in the image captured by the camera is determined via an image processing algorithm. To find the markers, the search space in the image may be restricted to areas including the marker positions. The marker positions are known to the reader processor, which reads them from the retrieved tactile graphics file.
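The restricted marker search might be sketched as below. The marker color, thresholds, window layout, and helper name are hypothetical; the patent does not prescribe a specific detection algorithm.

```python
import numpy as np

def find_marker_center(image, window):
    """Locate the center of a colored-circle marker inside a restricted
    search window of the camera image.

    image:  HxWx3 uint8 RGB array.
    window: (row0, row1, col0, col1) bounds taken from the marker positions
            stored in the retrieved tactile graphics file, which keeps the
            search space small.
    """
    r0, r1, c0, c1 = window
    patch = image[r0:r1, c0:c1]
    # crude color mask: strong red channel, weak green/blue (assumed marker color)
    mask = (patch[..., 0] > 200) & (patch[..., 1] < 80) & (patch[..., 2] < 80)
    rows, cols = np.nonzero(mask)
    if rows.size == 0:
        return None                               # marker not found in this window
    # centroid of the masked pixels, expressed in full-image coordinates
    return (r0 + rows.mean(), c0 + cols.mean())
```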

[0011] In embodiments, a perspective transformation matrix is calculated on the basis of the marker positions on the image and the corresponding logical positions of the markers in the electronic representation. The perspective transformation can from then on be used to translate any pixel position into the associated position in the electronic version of the tactile graphic. Corrections for camera lens distortion may be included.
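The calibration step can be illustrated with a direct linear transform over four marker correspondences. The marker coordinates below are invented for illustration, and the patent does not mandate this particular solver (lens-distortion correction is omitted).

```python
import numpy as np

def perspective_from_markers(src, dst):
    """Solve the 3x3 perspective transformation matrix H mapping the four
    detected marker centers (src, camera pixels) to their known logical
    positions (dst, digital graphic coordinates). H[2,2] is fixed to 1."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.asarray(A, float), np.asarray(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def map_position(H, x, y):
    """Translate any camera pixel position into the associated position in
    the electronic version of the tactile graphic."""
    u, v, w = H @ np.array([x, y, 1.0])
    return u / w, v / w                            # homogeneous division
```

Once H is computed during initialization, `map_position` can translate every subsequently detected finger pixel without re-running the marker search.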

[0012] The aforementioned operations of optically determining the identifier included on a physical tactile graphic, retrieving the digital tactile graphic either from a local database or from a remote database of a server and determining a calibration function (perspective transform) relating the marker positions in the physical tactile graphic to the marker positions in the digital tactile graphic are initialization operations. These initialization operations are performed automatically, without user input and without necessitating the user to have assistance. Further, the initialization operations do not require special steps to be performed by the user and do not require expensive hardware aspects for the tactile graphics as the identifiers and markers are optically identified and can simply be printed on the tactile graphic. Further, normalization of a large number of tactile graphics so as to be compatible with the initialization processes and the reader is readily achievable. Yet further, the reader is self-contained in not requiring connection to a host computer and is relatively inexpensive to produce.

[0013] In embodiments, the processor tracks the position of a finger and/or another object (e.g. the magnetic/metallic object/s described previously). An image processing algorithm is used to identify at least one finger (typically, an index finger will be sought, but the algorithm may alternatively look at other fingers or take all fingers into account, e.g. in performing gesture recognition, to determine which point the user is interested in). A pixel position corresponding to a user selection is thus identified. The pixel position represents a physical position of the finger or the object on the tactile graphic. The pixel position is translated or mapped to a logical (or digital) position in the digital tactile graphics file based on the perspective transformation.
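A deliberately crude stand-in for the finger-detection step (not the patent's actual algorithm): given a binary foreground mask from some upstream segmentation, report the topmost foreground pixel as the fingertip position to feed into the perspective mapping.

```python
import numpy as np

def fingertip_position(mask):
    """mask: HxW boolean array, True where a finger/object was segmented.
    Returns an (x, y) pixel position, or None if nothing was detected."""
    rows, cols = np.nonzero(mask)
    if rows.size == 0:
        return None                      # nothing detected in this frame
    top = rows.argmin()                  # topmost foreground pixel
    return int(cols[top]), int(rows[top])
```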

[0014] In embodiments, the processor of the reader searches in the digital tactile graphic file for information (also called a payload herein) associated with the digital position. In one embodiment, the processor iterates over the internal structure of the electronic representation to determine which associated information exists at the particular digital position. Depending on the type of electronic representation, this may be a burden on the CPU unless the electronic representation is already optimized for this search process. Generally, the server will have sent a version of the digital tactile file that is optimized for search, as described further herein. If the information associated with the digital position is text (e.g. a title or description), then text is converted into audio via text-to-speech processing and generated audio data is sent to the speakers of the reader. If the information associated with the digital position is audio, then audio is directly sent to the speakers and no text-to-speech conversion is necessary. In some embodiments, if no associated information is found, then, depending on context, an audio feedback message is generated and played through the speakers.
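The payload dispatch described above might be sketched as follows. The payload dictionary layout and the `speak`/`play` callbacks are assumptions for illustration; an optimized representation would replace the dictionary lookup.

```python
def render_payload(payloads, digital_position, speak, play):
    """Look up the payload at a digital position and route it to output.

    payloads: mapping from digital position to {"type": ..., "data": ...}.
    speak:    callback performing text-to-speech to the reader's speakers.
    play:     callback sending audio data directly to the speakers.
    """
    payload = payloads.get(digital_position)
    if payload is None:
        speak("No information at this position.")   # context-dependent feedback
    elif payload["type"] == "text":
        speak(payload["data"])                      # text-to-speech path
    elif payload["type"] == "audio":
        play(payload["data"])                       # raw audio path, no TTS
```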

[0015] In one aspect of the present disclosure, a combination of the server and the reader is provided. In another aspect of the present disclosure, the server may be considered as an independent aspect of the invention. In embodiments, the server has at least one of the following roles: a) the server provides electronic representations of tactile graphics to readers requesting them based on sent identifiers; b) the server acts as a searchable online repository of available tactile graphics to allow self-printing thereof by the user, or the tactile graphics may be sent by post from a remote printer; and c) the server can be used to submit graphics to the graphics repository. In one embodiment, the service is configured to allow users (e.g. teachers, parents, blind persons) to browse through the database, retrieve visual, textual and/or audible descriptions of available graphics, search for graphics (e.g. by name, type, topic, etc.) and download graphics to their computers so that they can print tactile graphics (if they have a suitable printer type: 3D printer, swell paper printer, braille embosser or other). Alternatively, the users may order printing of the tactile graphics from a third party service provider, which will be delivered by post. The printed tactile graphics can be placed on the reader disclosed herein so that a visually impaired person can interact with tactile graphics. The reader will download the digital tactile graphic file associated with the printed tactile graphic from the server. In embodiments, the server provides graphic creation help, validates the electronic representation, and optimizes the graphic for use by the reader.

[0016] In embodiments described herein, the digital tactile graphic file may be created in Scalable Vector Graphics (SVG) form. SVG is an Extensible Markup Language (XML)-based vector image format for two-dimensional graphics with support for interactivity and animation. SVG images and their behaviors are defined in XML text files. This means that they can be searched, indexed, scripted, and compressed. As XML files, SVG images can be created and edited with many text editors and drawing programs. SVG is a widely supported file type, which makes the system described herein open for popular use. SVG allows three types of graphic objects: vector graphic shapes such as paths and outlines consisting of straight lines and curves, bitmap images, and text.

[0017] In embodiments, the SVG files include an identifier (encoded by a QR code), a title of the tactile graphic, graphics (e.g. paths, shapes, etc.), position descriptions including the positions of the graphical markers used to calculate the perspective transform, a title and description of each graphical element in text or audio form, and other elements.

[0018] In some embodiments, the reader works from a smaller digital tactile graphics file than the SVG files stored at least in the database of the server. In one embodiment, the server returns a compressed bitmap file (e.g. a png file), which contains at least two elements: a pixel matrix (e.g. 640x480) including values for each pixel (e.g. grayscale values) and additional payloads. The payloads include the marker positions needed for calculating the perspective transform; an index mapping each pixel matrix value (which corresponds to the digital position) to associated text or audio entries (which correspond to the payloads associated with each digital position); and the text and audio content. For example, pixel x=220, y=305 may have the grayscale value 1024. This value then maps to two byte indices in the text chunk, the first of which points to the title text (e.g. "Colorado" in a tactile graphic of a map of the USA) and the second to the text description (e.g. “Capital: Denver Inhabitants: 5.700.000”). When the reader loads the compressed bitmap file, the reader loads the pixel matrix, extracts the marker positions (logical positions) and extracts the index. When a finger position is optically recognized by the processor, the associated pixel is calculated by applying the perspective transform. The pixel matrix value for this pixel is retrieved, the payload indices associated with this pixel are determined and the text/audio data are then loaded starting at the specified payload indices.
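The lookup chain described above (pixel position, to pixel matrix value, to payload indices, to text) can be sketched as follows. This is an illustrative, assumed implementation using toy data mirroring the Colorado example; the names `lookup_payload`, `payload_index` and `payloads` are hypothetical and not from the patent.

```python
def lookup_payload(pixel_matrix, payload_index, payloads, x, y):
    """Map a (perspective-corrected) pixel position to its payload texts."""
    value = pixel_matrix[y][x]              # e.g. 1024 in the Colorado example
    if value not in payload_index:
        return None                          # no associated information
    title_idx, desc_idx = payload_index[value]
    return payloads[title_idx], payloads[desc_idx]

# Toy 640x480 pixel matrix with a single marked pixel, as in the example.
matrix = [[0] * 640 for _ in range(480)]
matrix[305][220] = 1024
index = {1024: (0, 1)}                       # value -> (title idx, desc idx)
texts = ["Colorado", "Capital: Denver Inhabitants: 5.700.000"]

print(lookup_payload(matrix, index, texts, 220, 305))
# ('Colorado', 'Capital: Denver Inhabitants: 5.700.000')
```

In a real reader the `payloads` table would be byte offsets into a text/audio chunk embedded in the compressed bitmap file rather than a Python list, but the indexing principle is the same.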

[0019] In embodiments, the server facilitates many users creating tactile graphics. When a user wants to create a graphic, the following sequence of steps can be performed. The user accesses the server (e.g. from a personal computing device). The server has a graphics editing module that can be loaded for the user. The graphics editing module can create an empty template containing the plurality of markers, a new unique identifier, and - based on user input - a title (and optionally a language) for the document. The user may open the graphic template in a personal editing program, or the server might provide the editing program to the user via the graphics editing module. For SVG files, the editing program may be any free or commercially available SVG editor (for example Inkscape, which is free). The user creates graphics in the graphic template and assigns textual or audio information to parts/areas of the document corresponding to graphical elements. For example, the form and position of graphical elements are defined and associated with text or audio information including a title attribute and a description attribute. The completed graphic is submitted to the server. In embodiments, the server validates the graphic as described further below. The server may visually display to the user all hit areas, i.e. all logical positions in the graphic which have associated information (text or audio), and alert the user to obscured information (e.g. it can happen that the user assigns information to a graphical element such as a rectangle that lies below another graphical element and could thus never be activated because the upper graphical element wholly obscures the one beneath). Validation is performed by a validation module of the server. After validation, the server saves the created digital tactile graphic file in its database.
The server may assign the graphic to one or more categories to allow topic based searching.

[0020] The validation operations performed by the server can include any one or more of the following processes. The validation module might ensure the integrity and completeness of the source file. The SVG files described herein should specify the document language (=the SVG root element must have a 'lang' attribute) and an identifier (=the SVG root element must have an 'id' attribute), the id attribute must be a valid tactile graphic identifier, and the document must have a title (=the SVG root element must have a 'title' attribute). These aspects are validated by the validation module. The validation module validates the presence of the markers and the QR code and that the markers are sufficiently spaced apart and are not arranged symmetrically about both the document x and y axes. The validation module checks for graphical elements that are obscured by other graphical elements and therefore could not be triggered by a finger. The validation module ensures that the resolution is appropriate in that graphical elements meet minimum size criteria to ensure that they can be reliably triggered by a finger and meaningfully distinguished during tactile interaction. Generally, graphical elements should not be smaller than a finger-width square (some exceptions apply). The validation module might provide feedback to the user to flag hit areas that are unnecessarily small and other validation issues (e.g. obscured graphical elements) so that these may be corrected before saving to the database. If a small tactile feature has much empty space around it, it is better to enlarge the hit area around the tactile feature so that the user gets an associated audio output even if the detected finger position is a little off. The validation module may inhibit making a tactile graphic available to other users in the database until validation errors have been resolved.
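The root-attribute checks above can be sketched with a few lines of standard-library XML parsing. This is a minimal assumed implementation, not the patent's code; the function name and the example identifiers are hypothetical, and a production validator would also handle XML namespaces (e.g. `xml:lang`) and the marker/size checks.

```python
import xml.etree.ElementTree as ET

def validate_svg_root(svg_text):
    """Return a list of validation errors for the required root attributes."""
    root = ET.fromstring(svg_text)
    errors = []
    for attr in ("lang", "id", "title"):
        if not root.get(attr):
            errors.append(f"SVG root element is missing the '{attr}' attribute")
    return errors

good = '<svg lang="en" id="TG-0001" title="Map of the USA"></svg>'
bad = '<svg id="TG-0002"></svg>'
print(validate_svg_root(good))  # []
print(validate_svg_root(bad))   # two missing-attribute errors
```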

[0021] In embodiments, the server creates plural variations of the submitted graphic: the original version (e.g. the SVG file which is used by the graphic designer); a version optimized for tactile printing (in which some of the text that users have added to the graphic is converted into braille characters); and a compressed version optimized for the reader (e.g. the compressed bitmap version described previously). In one embodiment, the server converts all text that has certain properties (for example a certain color, a certain transparency or font type) from normal characters into braille characters for the print version. Every tactile graphic should have at least a title in braille characters so that a blind user can quickly select a graphic he/she is interested in from a stack of available tactile graphics by reading their titles with their fingers. However, graphic designers may not be able to read braille characters when they see them on their screen and thus cannot detect typing errors. Therefore, they can enter the title or other braille text to be incorporated into the graphic as normal text and only on submission to the server will this text be automatically converted into braille (and only for the version that is used for printing). The print version is created by a print version creation module of the server. In one embodiment, the processor included in the reader is an embedded processor with relatively low processing performance. Translating electronic representations in standard formats such as SVG for graphics files or STEP for 3D-printed files requires significant processing power, which can heavily burden the embedded processor and delay the reader’s response. Therefore, the server preprocesses the electronic representation and provides it in an optimized format as a two-dimensional pixel matrix (e.g. grayscale), where each pixel value is interpreted as a numeric index pointing to the associated information. This is much easier and faster for the reader to process. As described previously, the pixel matrix may be stored as a file using an extension of the PNG (Portable Network Graphics) format. The reader version of the digital tactile graphics file is created by a reader version creation module of the server.

[0022] The system may include the printer for printing the tactile graphic. The physical graphic is 3-dimensional. The printer may be a braille paper printer with variable height pins, a 3D printer or the like.

[0023] In some embodiments, the server is not included. An embodiment is envisaged in which electronic representations of the tactile graphics are stored in a database on a memory of the reader. These can be transferred from a local device like a personal computer or memory stick after creation of the tactile graphic on the local device and local printing. However, such an embodiment requires each user to create their own tactile graphics and associated digital versions. The provision of a server and an internet connection from the reader to the server provides a central access point so that many users can create a large repository of tactile graphics. The systems described herein provide an architecture that bundles together all the activities related to building tactile graphics. Thus, when one user (a teacher, a parent, a designer funded by some pro-blind initiative) adds a graphic, it is instantly available to all, multiplying the value of this activity. Further, the systems disclosed herein use highly compatible file types and freely available graphics editing programs to further facilitate tactile graphics creation. The server performs validations and optimizations to ensure usability of the tactile graphics. Yet further, the server described herein allows editing/improving/updating of tactile graphics even after they have been initially created, downloaded and turned into a physical version. The server may be configured to translate graphics from one language into another and make the translated versions available to people in the other language. It is possible to add information to different parts of the graphic later on (for example, by providing more detailed information for parts of a graphic, or by updating the number of inhabitants of a city which is located on a map) or by adding interactive features such as a quiz. It would also be possible to make the audio associated with a graphic more pleasant over time. The text of the most widely used graphics could be spoken by professional speakers and replace the more monotonous output of the text-to-speech engines used for standard text. The server can furthermore alert users of a graphic that new versions of this graphic or related graphics have become available. The server can also - via an update mechanism - provide entirely new capabilities associated with the graphics and roll them out to the users as they become available. The ability to modify the content of a graphic, to adapt graphics on the server so that they improve over time, and to have people collaborate around such graphics is highly advantageous in this field.

[0024] Referring now to FIGS. 1 and 2, a reader 10 according to an exemplary embodiment is illustrated in perspective view. The reader 10 includes a body 11 defining a reading area 12 for receiving a tactile graphic 26 (see FIG. 2). The body 11 includes a base plate 28 disposed beneath a pad 13 forming an upper surface that will contact a lower surface of the tactile graphic 26. The base plate 28 may be metallic or magnetic to allow one or more objects (not shown) that include magnetic or metallic parts to be magnetically held on the reading area 12. The tactile graphic 26 is held in place on the reading area 12 by a clamp 20, which is a spring-actuated mechanical clamp in the depicted embodiment. However, other holding mechanisms for the tactile graphic 26 could be employed, including magnetic force, a slot into which the tactile graphic 26 is slide-fit, projections that fit into holes on the tactile graphic 26, a frame for capturing the tactile graphic 26, etc.

[0025] The reader 10 includes a camera unit 14 including a camera and optics for capturing images of the tactile graphic 26 disposed on the reading area 12. The camera unit 14 is positioned above the reading area 12 by a foldable camera arm 16 that has at least two distinctly defined configurations. In a collapsed configuration, the camera arm 16 and camera unit 14 are folded so as to lie against the body 11. In a raised configuration, the camera arm 16 extends upwardly away from the body 11 to position the camera unit 14 so as to look down on the reading area 12. In some embodiments, the camera unit 14 includes a light source to illuminate the reading area 12. In some embodiments, the camera unit 14 includes a microphone to allow verbal input, although the microphone may be otherwise located. The reader 10 includes a speaker 18 through which audio information is output to the user.

[0026] The reader 10 includes buttons 22 allowing user interaction with the reader 10. The buttons 22 may include a power on button, an enter or select button, a navigation button, mode buttons, a start button, etc. Not all of these buttons 22 are considered necessary. The reader 10 may be user controlled additionally or alternatively by gesture inputs that are recognized through gesture analysis of images captured by the camera unit 14. In some embodiments, the reader 10 may include sockets 24 for a power input plug, a headset output, a USB input/output socket for transferring files (e.g. digital tactile graphics files), charging outputs, a microphone input, etc. The reader 10 may include a rechargeable battery (not shown) to allow use without being plugged into an external power source (e.g. cordless use). In some embodiments, the reader 10 may have Bluetooth or other short range communications capability at least for wireless connection to a headset.

[0027] The reader 10 includes a control unit 30 disposed therein, which includes embedded communications (e.g. WiFi), processing and storage capabilities. With additional reference to FIG. 4, the control unit 30 includes, inter alia, a reader processor 40, a communications interface 44, memory 41, a database of the reader’s digital tactile graphics files 42, a tracking module 48, and various software modules 46 to 56. As used herein, the term module refers to any hardware, software, firmware, electronic control component, processing logic, and/or processor device, individually or in any combination, including without limitation: an application specific integrated circuit (ASIC), an electronic circuit, a processor (shared, dedicated, or group) and memory that executes one or more software or firmware programs, a combinational logic circuit, and/or other suitable components that provide the described functionality. The modules 46 to 56 are, generally, software/programming instructions stored on the memory 41 and executed by the reader processor 40.

[0028] The database 42 of the reader’s digital tactile graphics files is a computer memory repository of digital tactile graphics files. The digital tactile graphics files define digital positions corresponding to physical positions on a tactile graphic 26 in association with text or audio data payloads for those positions. The digital tactile graphics files each further include a unique identifier that corresponds to the unique identifier on the tactile graphic 26 and physical marker positions, amongst other data items.

[0029] The communications interface 44 provides access for the reader 10 to a server 60 so that digital tactile graphics files can be downloaded from the server 60 when they cannot be found in the database 42 of the reader’s digital tactile graphics files. The communications interface 44 may provide wireless (e.g. WiFi) or wired internet communications.

[0030] In embodiments, the reader processor 40 is configured to invoke the various modules 46 to 56 in order to carry out the methods and operations described herein. The tracking module 48 is configured to receive images from the camera of the camera unit 14 and to track the position of at least one finger - usually the index finger - based thereon. There are many options for the implementation of finger tracking. In one method, a histogram-based approach is used to separate the hand and fingers from the background frame. Thresholding and filtering techniques can be used for background cancellation to obtain optimum results. A finger tracking algorithm should be selected that works on naked fingers without markers. The tracking module 48 may incorporate hand gesture recognition software to allow not only selection of areas on the tactile graphic but also input of additional control commands.
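The background-cancellation step of the tracking approach above can be sketched in a toy form: subtract a stored background frame from the current frame, threshold the difference, and take the centroid of the changed pixels as the tracked position. This is an illustrative assumption, not the patent's algorithm; a real tracker would use an image-processing library, histogram models and filtering rather than plain lists of grayscale values.

```python
def track_finger(frame, background, threshold=30):
    """Return the (x, y) centroid of pixels that differ from the background."""
    xs, ys = [], []
    for y, row in enumerate(frame):
        for x, value in enumerate(row):
            if abs(value - background[y][x]) > threshold:
                xs.append(x)
                ys.append(y)
    if not xs:
        return None  # no finger detected in this frame
    return (sum(xs) // len(xs), sum(ys) // len(ys))

# Toy 8x8 grayscale frames: a uniform background and two bright "finger" pixels.
bg = [[10] * 8 for _ in range(8)]
frame = [row[:] for row in bg]
frame[3][4] = 200
frame[3][5] = 200
print(track_finger(frame, bg))  # (4, 3)
```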

[0031] In embodiments, the graphic retrieval module 56 receives images from the camera of the camera unit 14 and searches the image for the identifier 32 (see FIG. 3). The graphic retrieval module 56 uses the identifier to look up the digital tactile graphic file in the local database 42 and, in the alternative, the database 62 of the server 60. The graphic retrieval module 56 returns the digital tactile graphics file identified by the identifier 32. Referring briefly to FIG. 3, it can be seen that the identifier 32 is in the form of a two-dimensional barcode or QR code. The identifier 32 may be otherwise encoded or not encoded at all. The graphic retrieval module 56 may search a specific portion of the image of the tactile graphic, such as an upper right corner, to more quickly locate the identifier 32. Algorithms for finding and decoding QR codes in an image are known. The algorithm may convert the image to grayscale and use thresholding and pattern recognition to identify the location of the QR code. The QR code information can be isolated, extracted and decoded as known. Provided that the identifier 32 takes a distinctive form and can be positioned consistently between tactile graphics 26 for ease of identification, a variety of types of optically readable identifier could be used.

[0032] In cases when the digital tactile graphics file is retrieved from the server 60, the reader places a request to the server 60 that includes the identifier 32 and sends the request over the internet using the communications interface 44. In one embodiment, the request may be included in a URL (Uniform Resource Locator) along with a server address such as www.tactilegraphicsserver/identifier. However, other protocols for sending the request for a digital tactile graphics file may be used, which may rely on the reader 10 being able to connect remotely with the server 60 and request the server to find the digital file using the unique identifier 32. Before connecting with the server 60, the reader searches local drives for the digital tactile graphics file in the database 42. The database may be located not only on memory of the reader 10, but also on a local network drive that the reader 10 can connect to via the communications interface over wireless connectivity (e.g. WiFi) or a local server (e.g. a server for a school). In one embodiment, the reader 10 first interrogates local mass storage devices including embedded memory of the reader 10, local network drives, local servers, etc. before sending the request to the remote central server 60. In some embodiments, the reader 10 may act like a dumb terminal that always downloads the digital tactile graphics files from the remote server 60.
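The local-first lookup order described above can be sketched as a simple cache-then-fetch routine. All names here (`retrieve_graphic`, `fetch_remote`, the "TG-…" identifiers) are hypothetical illustrations, and the remote fetch is stubbed out rather than being a real network call.

```python
def retrieve_graphic(identifier, local_db, fetch_remote):
    """Return the digital tactile graphics file, checking local storage first."""
    if identifier in local_db:
        return local_db[identifier]          # found locally, no network needed
    graphic = fetch_remote(identifier)       # e.g. an HTTP request to the server
    local_db[identifier] = graphic           # cache the downloaded file
    return graphic

local = {"TG-0001": b"cached bitmap"}
fetched = []

def fake_fetch(ident):
    fetched.append(ident)
    return b"downloaded bitmap"

print(retrieve_graphic("TG-0001", local, fake_fetch))  # b'cached bitmap'
print(retrieve_graphic("TG-0002", local, fake_fetch))  # b'downloaded bitmap'
print(fetched)  # only TG-0002 required a server round trip
```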

[0033] In embodiments, the perspective transform calculation module 46 extracts the digital positions of the markers 34 included as a data element in the retrieved digital tactile graphics file. The marker positions may be x and y coordinates in a two-dimensional coordinate system of the digital version of the tactile graphic 26 defined relative to some reference position such as the top left corner of the tactile graphic 26 being coordinate (0,0). A center of the markers 34 may be specified by the marker position coordinates. Briefly referring to FIG. 3, the markers 34 are distributed around the tactile graphic with one marker 34 being placed in each corner of the rectangular sheet. The markers are not placed symmetrically in the vertical (up and down direction of the sheet) and/or horizontal (left and right direction of the sheet) directions, so as to allow the reader processor 40 to perform a check (and output an alert) based on the marker positions when the tactile graphic 26 is not correctly oriented. In the example of FIG. 3, four circular markers 34 are used, but other shapes, sizes, positions and numbers of markers 34 could be used. Indeed, dedicated markers are not strictly necessary (although they do reduce the processing burden) because pattern matching techniques may allow the perspective transform between the digital and imaged versions of the tactile graphic 26 to be calculated. The perspective transform calculation module 46 searches for, and identifies, pixel positions for the markers based on images received from the camera. A center point of each marker 34 in pixel coordinates and a center point of the markers in digital coordinates extracted from the digital tactile graphics file are input to a perspective transform calculation algorithm. A variety of suitable algorithms are available from image processing libraries for calculating perspective transforms.
The perspective transform embodies a calibration for converting from pixel positions of an imaged tactile graphic 26 in the reading area 12 to digital positions of the digital tactile graphics file. The perspective transform may rotate, re-size and otherwise spatially adjust the imaged version of the tactile graphic 26 relative to the ideal version retrieved from the database 42.
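As a concrete sketch of the calculation above, the perspective transform (a 3x3 homography) can be recovered from the four marker correspondences by solving the standard 8x8 linear system with the bottom-right matrix entry fixed to 1. This is an assumed, dependency-free illustration; in practice the module would call a routine from an image-processing library, as the text notes, and the marker coordinates below are invented for the example.

```python
def gauss_solve(A, b):
    """Plain Gaussian elimination with partial pivoting for A x = b."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def solve_homography(src, dst):
    """src, dst: four (x, y) pairs; returns the 3x3 homography mapping src->dst."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = gauss_solve(A, b)
    return [[h[0], h[1], h[2]], [h[3], h[4], h[5]], [h[6], h[7], 1.0]]

def apply_homography(H, x, y):
    """Map a camera pixel position to a digital position."""
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)

# Imaged marker centers (camera pixels) -> marker centers in digital coordinates.
H = solve_homography([(100, 80), (540, 90), (530, 400), (110, 390)],
                     [(0, 0), (640, 0), (640, 480), (0, 480)])
print(apply_homography(H, 100, 80))  # approximately (0.0, 0.0)
```

Because four correspondences exactly determine the eight degrees of freedom of a homography, each marker maps exactly onto its digital counterpart; intermediate finger positions are then interpolated through the same transform.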

[0034] In embodiments, the physical position to digital position conversion module 46 receives one or more pixel positions as inputs and applies the perspective transform to the pixel positions to generate a digital position as an output. The pixel positions represent a physical position on the tactile graphic 26 as captured by the pixels of the camera of the camera unit 14. The digital positions represent locations on the digital version of the tactile graphic 26 corresponding to the image-captured physical position. In one embodiment, the calculated perspective transform is used to translate pixel positions returned by the tracking module 48, representing a finger position or user selection, to a digital position of the digital version of the tactile graphic 26. This digital position can subsequently be mapped to an associated payload to be output by the reader for consumption by the user.

[0035] In embodiments, the output information retrieval module 52 is configured to receive a digital position and to output any payload associated with that position. The payload may be text data or audio data associated with the digital position. Referring to FIG. 3, the tactile graphic 26 is, in this example, a map of the United States of America (USA) and includes divided areas that are referred to as graphical elements 36. The graphical elements in this example are states of the USA. The corresponding digital tactile graphics file is similarly divided into areas making up graphical elements. That is, the digital tactile graphics file is divided into many graphical elements 36 that positionally correspond in the digital version of the tactile graphic 26 to the physical position of respective graphical elements 36 in the physical tactile graphic 26. Assuming a user were to point, with his or her finger, to the state of California in the tactile graphic 26, the tracking module 48 would return a pixel position or a cluster of pixel positions that could be averaged into a single pixel position. The physical position to digital position conversion module converts the pixel positions into digital positions using the perspective transform. The output information retrieval module 52 loads the payload associated with that digital position for subsequent output. In some embodiments, the digital tactile graphics file includes a two-dimensional matrix or bitmap and payloads for each or many of the two-dimensional positions in the matrix or bitmap. Each or many of the values in the matrix or bitmap point to a payload file. Thus, a digital coordinate (x, y) is located in the matrix, the value at that position is obtained, and that value is used to load the associated payload. Other file formats are possible, although this file format is particularly conducive to fast processing.
In some embodiments, this version of the digital tactile graphics file is created based on an SVG file, as will be further discussed below.

[0036] In one embodiment when the payload is a text file, the text to audio module 54 is invoked, which runs a text to speech algorithm to transform the text input file to an audio output file. Speech algorithms of various levels of sophistication are available in software libraries.

[0037] The reader processor 40, running suitable programming instructions, outputs the audio file to the speaker 18 so that the retrieved payload can be audibly played to the user. The user is thus provided with audible information specific to the graphical element 36 selected by the location of the finger of the user. When the finger of the user is positioned to select another graphical element, a different audible payload corresponding to that position in the digital tactile graphics file will be loaded and played through the speaker 18.

[0038] Referring again to FIG. 3 and the exemplary tactile graphic 26 disclosed therein, a map of the USA is shown. The contours of each state in the map will be raised or lowered relative to the general plane of the sheet so that a blind or visually impaired person can feel the shape of each state with their fingers. The tactile graphic 26 includes a title 38 in text form and in braille form for easy identification of the general subject of the tactile graphic 26 using touch and sight. The tactile graphic 26 includes plural markers 34 and a coded, optically readable, identifier 32 as has already been described herein. The manner by which the braille title is generated is described below with respect to the server 60.

[0039] With reference to FIG. 4, the tactile graphics system 80 will be described in further detail in accordance with one embodiment. The tactile graphics system 80 includes the control unit 30 of the reader 10, which has been substantially described in the foregoing. The tactile graphics system includes the server 60, a user computer 74 and a tactile printer 76. The server 60 includes a database of server’s digital tactile graphics files 62, a server processor 64, and various modules 66 to 72. The database 62 of the server 60 will, generally, be significantly larger in size than the local database 42 as a result of the server 60 having significantly greater processing bandwidth and data storage depth than the control unit 30 of the reader 10. The server processor 64 is configured by running program instructions to perform the various functions and steps described herein with respect to the server 60. The server 60 will serve a network of readers 10. The server 60 is configured to receive requests for digital tactile graphics files from readers 10 and to provide requested files to respective readers 10.

[0040] In embodiments, the server 60 is further capable of providing access to the database 62 for registered users at a network of user computers 74 (only one of which is shown in FIG. 4). The user computers 74 may access the server 60 to peruse existing digital tactile graphics files on the database 62. The server 60 may provide a sophisticated search interface allowing various fields to be entered as search criteria alone or in combination including title, topic, identifier, author, keywords, language, etc. The user computers 74 may select digital tactile graphics files in the database 62 for three-dimensional printing via the tactile printer 76. The tactile printer 76 may be located at the user end, at a third party site and/or co-located with the server 60. When the tactile graphic 26 is printed by the tactile printer 76 at a remote location, it may be shipped to a user entered address. The server 60 may also provide an interface for creation of new, or editing of existing, digital tactile graphics files. Since a network of users may create the digital tactile graphics files (and the associated tactile graphics 26) and because standard data formats are utilized, a large stock of digital tactile graphics files can be created and stored in the database 62.

[0041] In embodiments, the server 60 hosts a graphics editing module 66 through which new tactile graphics 26 can be created or existing tactile graphics 26 can be edited. The graphics editing module 66 provides point, line and shape drawing capabilities including free hand drawing and standard shapes. The graphics editing module 66 allows graphical elements to be imported and adjusted. Of particular relevance to the present disclosure, the graphics editing module 66 provides the capability to designate certain areas as graphical elements 36 and to associate supplemental text or audio data with the graphical elements 36 as the payloads described above. The graphics editing module 66 provides vector graphics editing software, such as an SVG editor, in one embodiment.

[0042] In embodiments, the server 60 includes a validation module 72 providing at least one of a variety of validation capabilities. The validation module 72 is configured to receive requests to validate graphics files created or edited in the graphics editing module 66. It is important that there is some standardization of the digital tactile graphics files before they are saved in the database 62, particularly in the light of the extensive graphical possibilities provided by the graphics editing module 66. In one embodiment, the validation module 72 ensures that the digital files include a title data element, an identifier data element and optionally a language data element. In embodiments, the validation module 72 (or some other operation of the server processor 64) is configured to add a braille title to the digital tactile graphic using the defined title data element. In some embodiments, the validation module 72 (or some other operation of the server processor 64) automatically assigns a unique identifier to the digital tactile graphics files. In one embodiment, the validation module 72 checks for presence of the markers 34 and/or the coded identifier 32. Further, the markers 34 must be provided according to prescribed spacing and asymmetry criteria, which can be checked by the validation module 72. Like the unique identifier, the validation module 72 (or some other operation of the server 60) may automatically include markers 34 and the coded identifier 32 before saving the digital file to the database 62. In one embodiment, the validation module 72 checks that each defined graphical element 36 is not wholly or partially obscured (up to a maximum allowable extent) by another graphical element 36. In one embodiment, the validation module 72 checks that each designated graphical element 36 meets prescribed minimum size criteria to allow it to be reliably selected by a human finger.
The validation module 72 may provide feedback to the user concerning any failed validation checks. The validation module 72 may also block storage of the digital tactile graphics file in the database 62 for access by other users until all flags from the validation report have been resolved.
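The marker asymmetry criterion above can be sketched as a simple set comparison: if the marker positions map onto themselves under a horizontal or vertical flip of the page, the reader could not distinguish a correctly placed graphic from a flipped one. This is an assumed formulation of the check, with invented page dimensions and marker coordinates.

```python
def markers_asymmetric(markers, width, height):
    """True if the marker set is not invariant under a horizontal or vertical flip."""
    pts = set(markers)
    flipped_h = {(width - x, y) for x, y in pts}   # mirror about vertical axis
    flipped_v = {(x, height - y) for x, y in pts}  # mirror about horizontal axis
    return pts != flipped_h and pts != flipped_v

# Four exact corner markers are symmetric under both flips -> fails validation.
print(markers_asymmetric([(0, 0), (640, 0), (0, 480), (640, 480)], 640, 480))
# Nudging one marker inward breaks the symmetry -> passes validation.
print(markers_asymmetric([(40, 0), (640, 0), (0, 480), (640, 480)], 640, 480))
```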

[0043] In embodiments, the server 60 is configured to create plural versions of the digital tactile graphics file for storage in the database 62. A first version is the vector graphics file created through the graphics editing module 66, which can be considered the original version. A second version is created by the print version creation module 68 and includes braille characters for any text having certain properties (e.g. a certain color or a certain font type). The title data element is converted to braille in the print version. This version of the tactile graphics file can be sent to the tactile printer 76 on request to print a three-dimensional tactile graphic 26 in which the graphical elements 36 can be sensed by touch. A third version, created by the reader version creation module 70, is optimized for fast processing by the reader processor 40 and is the version returned to the reader 10 over the internet in response to a request for the digital tactile graphics file from the reader 10. The reader version is a two-dimensional matrix or bitmap format with additional embedded data elements. The two-dimensional matrix includes, at each coordinate, a value pointing to a payload address that includes a payload data element. The payload data elements are included in the embedded data elements. The two-dimensional coordinates correspond to physical positions on the tactile graphic 26 as has been described heretofore. The embedded data elements may also include the unique identifier 32 and the marker coordinates for use in calculating the perspective transform.
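The reader-version layout described above can be illustrated with the following sketch. The structure, field names and coordinate conventions are assumptions made for illustration only; the specification does not prescribe a concrete serialization.

```python
# Illustrative sketch of the reader version: a two-dimensional matrix whose
# values index into an embedded payload table. All field names and dimensions
# are assumptions; 0 is used here as the "no payload" sentinel.
reader_version = {
    "identifier": "TG-0001",                     # unique identifier 32 (assumed format)
    "markers": [(0, 0), (199, 0), (0, 99)],      # embedded marker coordinates
    "matrix": [[0] * 200 for _ in range(100)],   # 100 x 200 grid of payload indexes
    "payloads": {1: {"type": "text", "data": "River Danube"}},
}

# Mark a rectangular region of digital positions as pointing to payload 1.
for y in range(40, 60):
    for x in range(80, 120):
        reader_version["matrix"][y][x] = 1

def payload_at(rv, x, y):
    """Look up the payload data element for a digital position, or None."""
    idx = rv["matrix"][y][x]
    return rv["payloads"].get(idx)
```

Such a flat indexed structure would allow the reader processor 40 to resolve a digital position to its payload in constant time, which is consistent with the stated goal of fast processing on the reader 10.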

[0044] Although a server 60 has been emphasized herein, it is possible that various capabilities of the server 60, such as the graphics editing module 66, the validation module 72, the reader version creation module 70 and the print version creation module 68, could be provided on a user computer 74.

[0045] An exemplary method 100 according to embodiments of the present disclosure is described in FIG. 5. The various tasks performed in connection with process 100 may be performed by software, hardware, firmware, or any combination thereof. For illustrative purposes, the following description of process 100 may refer to elements mentioned above in connection with FIGS. 1 to 4. In practice, portions of process 100 may be performed by different elements of the described system, e.g., the control unit 30 of the reader 10, the server processor 64 of the server 60, or the user computer 74. It should be appreciated that process 100 may include any number of additional or alternative tasks, the tasks shown in FIG. 5 need not be performed in the illustrated order, and process 100 may be incorporated into a more comprehensive procedure or process having additional functionality not described in detail herein. Moreover, one or more of the tasks shown in FIG. 5 could be omitted from an embodiment of the process 100 as long as the intended overall functionality remains intact.

[0046] In step 102, the user places a tactile graphic 26 on the reading area 12 of the reader 10, as shown in FIG. 2. Optionally, the tactile graphic 26 is clamped in place by the clamp 20. The user may press one of the buttons 22 to begin the following processes, commencing with step 104. In step 104, the camera of the camera unit 14 captures one or more images (e.g. a video) of the tactile graphic 26. The reader processor 40 is configured (by running suitable software) to find the identifier 32, which may be a QR code or another graphical encoding format. The QR code is decoded by the reader processor 40 to obtain a unique identifier for the tactile graphic 26.

[0047] In decision step 106, the reader processor 40 searches the local database 42 of the reader 10 (and optionally also any local network drives on the same local network (e.g. WiFi) as the reader 10) for the tactile graphic 26 using the unique identifier 32 to determine whether the digital version of the tactile graphic 26 is locally stored. If not, the method proceeds to step 108, whereby the reader processor 40 issues a request to the server 60 over the internet via the communications interface 44 for the digital tactile graphics file identified by the unique identifier. The server processor 64 receives the request including the unique identifier, locates the digital tactile graphics file in the server database 62 and returns the digital tactile graphics file to the reader processor 40. The returned version of the digital tactile graphics file may be the reader version, which is a simplified bitmap form with additional data elements.
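The local-first retrieval logic of steps 106 and 108 can be sketched as follows. The function names are hypothetical stand-ins for the local database 42 and the server request over the communications interface 44; the caching of the fetched file on the reader is an assumed behavior, not stated in the specification.

```python
# Sketch of decision step 106 and step 108: check local storage first,
# fall back to the server. The dict stands in for the local database 42;
# fetch_from_server stands in for the request over the internet.
def retrieve_graphic(identifier, local_db, fetch_from_server):
    """Return the digital tactile graphics file, preferring local storage."""
    graphic = local_db.get(identifier)           # decision step 106
    if graphic is None:
        graphic = fetch_from_server(identifier)  # step 108
        local_db[identifier] = graphic           # cache locally (assumed behavior)
    return graphic
```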

[0048] In step 110, the reader processor 40 finds the markers 34 in the one or more images captured by the camera of the camera unit 14. The reader processor 40 may first read the marker positions from the digital file in order to narrow the search area. The reader processor 40 identifies the markers 34 based on known characteristics of the markers 34, such as shape, size and/or color, and corresponding filtering operations to isolate the markers 34 from the rest of the image. In step 112, a perspective transform is calculated to convert the centroid positions of the markers 34 in the image to the positions read from the additional data elements in the digital tactile graphics file. The perspective transform can subsequently be used to convert pixel positions to logical positions in the digital file.
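Calculating a perspective transform from point correspondences can be sketched with the standard direct linear transform (DLT), shown here for four marker correspondences. This is one common way to compute such a transform, not necessarily the method used by the reader; the marker coordinates in the example are illustrative.

```python
import numpy as np

# Sketch of computing a perspective transform (homography) from marker
# centroid positions in the image to the marker positions embedded in the
# digital tactile graphics file. Uses the direct linear transform: each
# correspondence contributes two linear constraints on the 9 entries of H.
def homography(src_pts, dst_pts):
    """Solve for H such that dst ~ H @ src (up to scale)."""
    rows = []
    for (x, y), (u, v) in zip(src_pts, dst_pts):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    # The null vector of the constraint matrix (smallest singular vector)
    # holds the entries of H.
    _, _, vt = np.linalg.svd(np.array(rows, dtype=float))
    return vt[-1].reshape(3, 3)

def apply_transform(H, x, y):
    """Convert an image pixel position to a digital position."""
    u, v, w = H @ np.array([x, y, 1.0])
    return u / w, v / w
```

Once H is known, the same `apply_transform` call serves step 116, converting each tracked finger or object pixel position to a logical position in the digital file.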

[0049] In step 114, the reader processor 40 tracks the finger position based on the images received from the camera of the camera unit 14. Additionally or alternatively, a magnetic or metallic object may be tracked by the reader processor 40. In one embodiment, an object or finger tracking mode may be selected by the user by pressing one of the buttons 22 or through a gesture input. In this way, a user can select a graphical element 36 in the tactile graphic 26 by pointing at the graphical element 36 with a finger or by placing an object on the graphical element 36, which is tracked by the reader processor 40 through image processing operations. In step 116, the tracked pixel position of the finger/object is converted to a digital position in the digital tactile graphics file using the perspective transform.

[0050] In step 118, a payload lookup is performed for the digital position obtained in step 116. In step 120, a determination is made whether there is a payload associated with the digital position. When yes, a determination is made in step 124 as to the information type of the payload. When the payload is audio data, the audio is played in step 128 through the speaker 18 of the reader 10. When the payload is text data, the text data is converted to audio data in step 126 and played in step 128 through the speaker 18 of the reader 10. When no payload is found in association with a digital position, predetermined feedback audio may be played in step 128, such as a voice annunciation like "audio data not found here". In step 122, a determination may be made as to whether feedback is needed before playing the feedback. For example, when a certain period of time has not yet elapsed since the last feedback, or when the finger position is outside an active area of the tactile graphic 26, feedback may not be necessary. After step 128, the method continues to track the finger/object positions according to step 114 and proceeds from there. Method 100 may end by the user pressing a stop button 22. In some embodiments, the digital tactile graphics file may have address pointers (or other values) for each digital position within an area of a tactile graphic 26, and the address pointers (or other values) serve as indexes to specific payload data in the digital tactile graphics file.
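The payload handling of steps 118 to 128 can be sketched as a simple dispatch on payload type. The payload types ("audio" and "text") and the feedback annunciation follow the description above; the function names and the dictionary-based payload store are illustrative assumptions.

```python
# Sketch of steps 118-128: look up the payload for a digital position,
# dispatch on its information type, and return the audio to play. The
# text_to_speech callable stands in for the text-to-audio conversion of
# step 126; actual playback through the speaker 18 is omitted.
def handle_position(payloads, digital_position, text_to_speech):
    """Return the audio to play for a digital position, per steps 118-128."""
    payload = payloads.get(digital_position)      # step 118: payload lookup
    if payload is None:                           # step 120: no payload found
        return "audio data not found here"        # predetermined feedback audio
    if payload["type"] == "audio":                # step 124: type determination
        return payload["data"]                    # step 128: play audio as-is
    if payload["type"] == "text":
        return text_to_speech(payload["data"])    # step 126, then step 128
    return None
```

A production implementation would additionally apply the step 122 check, suppressing feedback when too little time has elapsed since the last annunciation or when the position lies outside the active area.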

[0051] While at least one exemplary embodiment has been presented in the foregoing detailed description, it should be appreciated that a vast number of variations exist. It should also be appreciated that the exemplary embodiment or embodiments described herein are not intended to limit the scope, applicability, or configuration of the claimed subject matter in any way. Rather, the foregoing detailed description will provide those skilled in the art with a convenient road map for implementing the described embodiment or embodiments. It should be understood that various changes can be made in the function and arrangement of elements without departing from the scope defined by the claims, which includes known equivalents and foreseeable equivalents at the time of filing this patent application.