

Title:
SPEECH GENERATION DEVICE WITH OLED DISPLAY
Document Type and Number:
WIPO Patent Application WO/2011/044435
Kind Code:
A1
Abstract:
A speech generation device including an organic light-emitting diode (OLED) display is disclosed. The speech generation device may generally include a computing device including a processor and a related computer-readable medium storing instructions executable by the processor, and a display device comprising an organic light-emitting diode display, wherein said display device is configured to display a message window containing messages, and wherein the instructions stored on the computer-readable medium configure the speech generation device to generate electronic text-to-speech output based on the messages contained within the message window. In some embodiments, the OLED display is formed as a substantially transparent display. In other embodiments, the OLED display is formed as part of a head-mounted display integrated with an item (e.g., helmet, glasses) able to be worn securely adjacent to a user's head.

Inventors:
CUNNINGHAM BOB (US)
HAMMOUD RIAD (US)
SUTTON WILLIAM (US)
Application Number:
PCT/US2010/051935
Publication Date:
April 14, 2011
Filing Date:
October 08, 2010
Assignee:
DYNAVOX SYSTEMS LLC (US)
CUNNINGHAM BOB (US)
HAMMOUD RIAD (US)
SUTTON WILLIAM (US)
International Classes:
G10L21/06; G10L13/04
Foreign References:
US20060170613A12006-08-03
US20040080267A12004-04-29
US20030020755A12003-01-30
US20060146013A12006-07-06
US20060209039A12006-09-21
US20070229396A12007-10-04
US20060122838A12006-06-08
US20040155846A12004-08-12
Attorney, Agent or Firm:
BAGARAZZI, James, M. et al. (P.A., P.O. Box 144, Greenville, South Carolina, US)
Claims:
WHAT IS CLAIMED IS:

1. A speech generation device, comprising:

a computing device including a processor and a related computer-readable medium storing instructions executable by the processor; and

a display device comprising an organic light-emitting diode display, said display device configured to display a message window containing messages;

wherein the instructions stored on the computer-readable medium configure the speech generation device to generate electronic text-to-speech output based on the messages contained within the message window.

2. The speech generation device of claim 1, wherein said organic light-emitting diode (OLED) display further comprises an arrayed matrix of OLED components formed on a substrate.

3. The speech generation device of claim 2, wherein said OLED components are arranged in groups in a grid pattern of columns and rows powered by respective anode and cathode lines such that OLED components in a given group emit light when their anode line and cathode line are activated at the same time.

4. The speech generation device of claim 1, wherein said organic light-emitting diode display is a transparent display capable of transmitting at least some light so that objects or images shown on said transparent display can be seen fully or at least partially through said transparent display.

5. The speech generation device of claim 4, wherein said transparent display comprises a medium that is capable of emitting light from both the top and bottom surfaces of the display device.

6. The speech generation device of claim 4, wherein said transparent display is between about 50% and 80% transparent during active operation.

7. The speech generation device of claim 1, wherein said display device further comprises a frame substantially surrounding and supporting a transparent substrate on which a matrix of organic light-emitting diodes is configured.

8. The speech generation device of claim 7, further comprising driving circuitry and power supply features required to supply operating power to the organic light-emitting diodes; and wherein said driving circuitry and power supply features are housed within said frame.

9. The speech generation device of claim 1, wherein said organic light-emitting diode display is a frameless display that does not include any structural support or frame around the entire outer perimeter of the display.

10. The speech generation device of claim 9, wherein said display device further comprises a support element provided along at least a portion of a single edge of the organic light-emitting diode display to provide mechanical support for the frameless display.

11. The speech generation device of claim 9, wherein said display device further comprises circuit features required to drive the organic light-emitting diode display, and wherein said circuit features are formed directly on a surface of said organic light-emitting diode display.

12. The speech generation device of claim 1, further comprising a housing for storing said computing device.

13. The speech generation device of claim 12, wherein said housing is provided adjacent to said display device.

14. The speech generation device of claim 12, wherein said housing is provided in a physically separate location from said display device, and wherein said display device and said computing device are communicatively coupled to one another via a wired or wireless connection.

15. The speech generation device of claim 1, wherein said display device further comprises a touch screen formed on a front surface of the organic light-emitting diode display facing a user.

16. The speech generation device of claim 1, wherein said display device further comprises a touch screen formed on a rear surface of the organic light-emitting diode display facing away from a user.

17. The speech generation device of claim 1, further comprising an eye gaze controller comprising one or more light sources and one or more sensing elements configured to sense a user's eye location relative to said display device such that the user can employ eye actions to cause input selection for the speech generation device.

18. The speech generation device of claim 17, wherein said eye gaze controller is mounted in an adjacent location to said display device.

19. The speech generation device of claim 17, wherein at least one sensing element of said eye gaze controller is integrated within a frame surrounding said organic light-emitting diode display.

20. The speech generation device of claim 1, wherein said organic light-emitting diode (OLED) display comprises a matrix display of elements, said elements comprising OLED devices and sensor elements for detecting the eye gaze of a user.

21. The speech generation device of claim 20, wherein selected elements within said matrix display of elements comprise different OLED devices configured for different respective frequencies of operation and corresponding color output.

22. The speech generation device of claim 20, wherein said OLED display is a transparent OLED display.

23. The speech generation device of claim 1, wherein said organic light-emitting diode display comprises multiple layers of material formed on a substrate, said multiple layers of material comprising at least first and second generally conductive layers forming respective anode and cathode layers for the display, and an emissive layer comprising an organic material disposed between said anode and cathode layers.

24. The speech generation device of claim 1, wherein said display device further comprises at least first and second display portions, the first display portion corresponding to said organic light-emitting diode (OLED) display and the second display portion corresponding to a non-OLED technology.

25. The speech generation device of claim 24, wherein said second display portion of said display device corresponding to a non-OLED technology comprises one or more of a light-emitting diode (LED) display, electroluminescent display (ELD), plasma display panel (PDP), and liquid crystal display (LCD).

26. The speech generation device of claim 24, wherein said first display portion and said second display portion are provided in a side-by-side configuration.

27. The speech generation device of claim 24, wherein said first display portion is an interior portion embedded within said second display portion such that said second display portion is an exterior portion substantially surrounding the first display portion.

28. The speech generation device of claim 24, wherein said first and second display portions comprise physically separate displays.

29. A head-mounted display for interfacing with a communication device, said head-mounted display comprising:

an item able to be worn securely adjacent to a user's head;

one or more lenses provided within said item able to be worn securely adjacent to a user's head;

one or more transparent organic light-emitting diode matrices integrated within or formed on said one or more lenses such that said one or more transparent organic light-emitting diode matrices are configured to display menus, keypads or other graphical user interfaces directly on said one or more lenses for viewing by a user.

30. The head-mounted display of claim 29, wherein said item able to be worn securely adjacent to a user's head comprises a pair of glasses.

31. The head-mounted display of claim 29, wherein said item able to be worn securely adjacent to a user's head comprises a helmet.

32. The head-mounted display of claim 29, further comprising a plurality of photodiodes incorporated with the one or more transparent organic light-emitting diode matrices such that eye tracking can be used to control said communication device with which said head-mounted display interfaces.

33. The head-mounted display of claim 32, wherein said communication device comprises a speech generation device.

34. The head-mounted display of claim 29, further comprising a housing module physically separate from said item able to be worn securely adjacent to a user's head, said housing module containing said communication device with which said head-mounted display interfaces.

35. The head-mounted display of claim 34, wherein said communication device comprises a speech generation device.

36. The head-mounted display of claim 34, further comprising a wireless link for interfacing between said head-mounted display and said communication device.

37. The head-mounted display of claim 29, further comprising one or more speakers configured to enable audio output for the head-mounted display.

38. The head-mounted display of claim 29, further comprising a microphone configured to enable audio input for the head-mounted display.

Description:
TITLE OF THE INVENTION

SPEECH GENERATION DEVICE WITH OLED DISPLAY

PRIORITY CLAIM

[0001] This application claims the benefit of previously filed U.S. Provisional Patent Application entitled "SPEECH GENERATION DEVICE WITH OLED DISPLAY," assigned USSN 61/250,074, filed October 9, 2009, which is fully incorporated herein by reference for all purposes.

BACKGROUND OF THE INVENTION

[0002] The present invention generally pertains to speech generation devices, and more particularly to speech generation devices having improved display elements.

[0003] Various debilitating physical conditions, whether resulting from disease or injuries, can deprive the afflicted person of the ability to communicate audibly with persons or devices in one's environment in real time. For example, many individuals may experience speech and learning challenges as a result of pre-existing or developed conditions such as autism, ALS, cerebral palsy, stroke, brain injury and others. In addition, accidents or injuries suffered during armed combat, whether by domestic police officers or by soldiers engaged in battle zones in foreign theaters, are swelling the population of potential users. Persons lacking the ability to communicate audibly can compensate for this deficiency by the use of speech generation devices.

[0004] Speech generation devices (SGDs), some embodiments of which may be known as Alternative and Augmentative Communications (AAC) devices, can include a variety of features to assist with a user's communication. In general, a speech generation device may include an electronic interface with specialized software configured to permit the creation and manipulation of digital messages that can be translated into audio speech output. Additional communication-related features may also be provided depending on user preferences and abilities. Users may provide input to a speech generation device by physical selection using a touch screen, mouse, joystick or the like or by other means such as eye tracking or audio control.
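The message-window-to-speech flow described above can be pictured in code. The following is a purely illustrative Python sketch, not part of the patent or any real SGD product (all class and function names are assumptions): input selections accumulate in a message window, and the composed text is handed off to a pluggable text-to-speech backend.

```python
class MessageWindow:
    """Holds the message a user is composing before it is spoken."""

    def __init__(self):
        self._parts = []

    def add(self, unit):
        # Each input selection (touch, eye gaze, joystick, etc.)
        # appends one word or symbol label to the message.
        self._parts.append(unit)

    def clear(self):
        self._parts = []

    def text(self):
        return " ".join(self._parts)


def speak(window, tts_backend):
    """Send the current message-window contents to a TTS engine.

    `tts_backend` is any callable that accepts the message string,
    standing in for a real synthesis engine.
    """
    message = window.text()
    if message:
        tts_backend(message)
    return message


if __name__ == "__main__":
    spoken = []
    w = MessageWindow()
    for unit in ["I", "want", "water"]:
        w.add(unit)
    print(speak(w, spoken.append))
```

The backend is passed in as a callable so the same message-window logic could drive digitized playback, synthesized speech, or an outgoing text message, as the paragraph above suggests.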

[0005] It is estimated that less than ten percent (10%) of the potential users of speech generation devices is currently being served by conventional speech generation devices. This population is highly variable, ranging in age from preschool children through elderly adults and spanning a variety of lifestyles, geographic locations, educational attainments, language sophistication, and available physical motor skills for operating a speech generation device. As such, a need exists for further refinements and improvements to speech generation devices that continuously adapt such devices for a greater number and variety of users.

[0006] Speech generation devices may be oriented relative to a user in a variety of fashions, most of which generally require that the user have visual access to a display associated with the speech generation device. For example, some conventional speech generation devices are desktop devices, while others are rendered portable by being mounted on vehicles such as wheelchairs. Still others may be configured as laptop devices or even handheld devices. In addition to a user being able to generally see a display portion of a speech generation device, a user employing eye tracking control must often orient a speech generation device or associated controller within a particular distance and at a particular angle relative to the user's eyes.

[0007] Because of a user's visual interaction with a speech generation device, such devices may potentially block the user's view to other objects in his environment and may also obscure the user from others. Such concerns may be particularly prevalent when a user communicates using a wheelchair-mounted speech generation device with or without eye tracking control. The potential restriction of a user's visual vantage can sometimes be awkward for a user. In addition, it may limit a user's ability to perform other tasks while using a speech generation device. As such, a need exists to reduce potential restriction of a user's view while utilizing a speech generation device.

[0008] Current displays and other components used in speech generation devices sometimes consume large amounts of power. Substantial power requirements of such components require some conventional speech generation devices to be located near an electrical outlet, thus limiting freedom of movement of the user. Other conventional speech generation devices seek to overcome this problem with the provision of a battery, but still must be recharged at periodic intervals. Substantial power requirements also can be related to issues of size, weight and excessive heat generation in a device. Because of these many concerns, a further need exists to generally reduce power requirements, size and weight of various SGD components, including display devices.

[0009] In light of the various design concerns in the field of speech generation devices, a need continues to exist for refinements and improvements to address such concerns. While various implementations of speech generation devices and associated features have been developed, no design has emerged that is known to generally encompass all of the desired characteristics hereafter presented in accordance with aspects of the subject technology.

BRIEF SUMMARY OF THE INVENTION

[0010] In general, the present subject matter is directed to various exemplary speech generation devices (SGDs) having improved display devices.

[0011] For example, some exemplary speech generation devices in accordance with the presently disclosed technology include an organic light-emitting diode (OLED) display comprising an arrayed matrix of OLED components formed on a substrate. In some embodiments, the OLED components are formed to emit from both top and bottom surfaces and are formed with substantially transparent materials to form a transparent OLED (TOLED) display. OLED or TOLED displays may be integrated with other conventional displays to form hybrid displays. All such exemplary displays may be provided in framed or frameless configurations relative to other supporting structures for SGD hardware components.

[0012] In general, the disclosed OLED display devices offer numerous advantages for a speech generation device. In particular, the OLED display provided with a speech generation device is generally characterized by low activation or driving voltage, self-luminescence without requiring a backlight, reduced thickness, wide viewing angle, fast response speed, high contrast, superior impact resistance, ease of handling, etc. The reduced weight and power requirements of an OLED display provide particular advantages for a speech generation device because more lightweight and efficient devices help increase a potential user's mobility and duration of assisted communication. In addition, durability and impact resistance provide benefits for a speech generation device display, especially for users who may have trouble controlling input force applied to a speech generation device.

[0013] To further facilitate a user's unrestricted view, a transparent display may be wired or wirelessly connected to separate hardware components of a speech generation device. For example, a transparent display may be mounted on a wheelchair or desk or outfitted in glasses or a helmet, with signals from the transparent display being transmitted to a speech generation device provided elsewhere relative to a user.

[0014] In some exemplary embodiments of a speech generation device, a transparent OLED display device includes a touch screen formed on one or more surfaces of the display device. In one embodiment, a touch screen is formed on a front surface facing the user. In another embodiment, a touch screen is formed on a back surface (facing away from the user) of a display device such that a user can make input selections from behind display buttons/items as opposed to on top of the buttons, thus eliminating or reducing the problem of obscuring buttons with a user's fingers or hands and allowing for more precise selection and smaller potential selection targets.
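One way to picture the rear-touch arrangement above: a touch registered on the back surface of a transparent display must be mirrored horizontally before it is matched against the on-screen targets the user sees from the front. A minimal, hypothetical coordinate mapping (the function names and the pixel-based convention are assumptions for illustration, not taken from the patent):

```python
def rear_to_front(x, y, width):
    """Map a touch at (x, y) on the rear surface of a transparent
    display to the coordinate the user sees on the front surface.

    The horizontal axis is mirrored; the vertical axis is unchanged.
    `width` is the display width in pixels, with x in [0, width).
    """
    return width - 1 - x, y


def hit_test(x, y, button_rect):
    """Check whether a front-coordinate point falls inside a
    rectangular on-screen button given as (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = button_rect
    return x0 <= x <= x1 and y0 <= y <= y1
```

For example, on an 800-pixel-wide display, a rear touch at x = 0 (the user's far left as felt from behind) maps to x = 799 on the front image, so a button drawn at the right edge is selected by reaching behind its apparent position.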

[0015] In some exemplary embodiments of a speech generation device, a display device is outfitted with eye tracking features that can sense a user's eye location and action (e.g., blinking, dwelling, etc.) such that the user can employ eye actions to cause input selection for the speech generation device. In one example, an eye gaze controller is mounted in an adjacent location to a transparent display device. In another example, eye tracking features are integrated within a frame surrounding a transparent display screen. In a still further example, eye tracking features such as photodiodes are integrated directly with the OLED devices in a transparent display.
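Dwell-based selection of the kind mentioned above is commonly implemented by accumulating consecutive gaze samples inside a target region and triggering a selection once a dwell threshold is reached. A simplified, hypothetical sketch (the sampling period, rectangle convention, and all names are assumptions, not the patent's method):

```python
def dwell_select(samples, target_rect, dwell_ms, sample_ms):
    """Return True once consecutive gaze samples inside target_rect
    accumulate at least dwell_ms of continuous dwell time.

    samples     -- iterable of (x, y) gaze points, one per sample_ms
    target_rect -- (x0, y0, x1, y1) bounds of the selectable item
    """
    x0, y0, x1, y1 = target_rect
    run_ms = 0
    for gx, gy in samples:
        if x0 <= gx <= x1 and y0 <= gy <= y1:
            run_ms += sample_ms
            if run_ms >= dwell_ms:
                return True  # dwell threshold met: select the item
        else:
            run_ms = 0  # gaze left the target; reset the dwell timer
    return False
```

Real eye-gaze controllers typically add filtering and tolerance for brief excursions (so a single noisy sample does not reset the timer), but the accumulate-and-threshold core is the same.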

[0016] The display device and associated electronic components enable the SGD to transmit and receive messages to assist a user in communicating with others. For example, the SGD may correspond to a particular special-purpose electronic device that permits a user to communicate with others by producing digitized or synthesized speech based on configured messages. Such messages may be preconfigured and/or selected and/or composed by a user within a message window viewable on the display device associated with the speech generation device. A variety of other physical input devices and software interface features may be provided to facilitate the capture of user input to define what information should be displayed in a message window and ultimately communicated to others as spoken output, text message, phone call, e-mail or other outgoing communication.

[0017] In still further exemplary SGD embodiments, hardware components may also include various communications devices and/or modules and related communications functionality. For example, a wireless network adapter may be included to provide integrated Internet access. An RF device may be included as an integrated cell phone to enable the user to make, send and receive text messages and phone calls directly and speak during the phone conversation using the SGD, thereby eliminating the need for a separate telephone device. An infrared (IR) transceiver may be provided to function as a universal remote control for the SGD that can operate devices in the user's environment, for example including TV, DVD player, and CD player. A Bluetooth interface may be included to provide Bluetooth radio signals that can be used to control a desktop computer, which appears on the SGD's display as a mouse and keyboard, or other Bluetooth-enabled devices.

[0018] Additional aspects and advantages of the invention will be set forth in part in the description that follows, and in part will be obvious from the description, or may be learned by practice of the invention. The various aspects and advantages of the invention may be realized and attained by means of the instrumentalities and combinations particularly described below.

BRIEF DESCRIPTION OF THE DRAWINGS

[0019] The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate at least one presently preferred embodiment of the invention as well as some alternative embodiments. These drawings, together with the description, serve to explain the principles of the invention but by no means are intended to be exhaustive of all of the possible manifestations of the invention.

[0020] Fig. 1 provides a generally top perspective view of a first exemplary embodiment of a speech generation device in accordance with an aspect of the present invention;

[0021] Fig. 2 provides a generally bottom perspective view of the first exemplary embodiment of a speech generation device of Fig. 1;

[0022] Fig. 3A provides a plan view of a second exemplary embodiment of a speech generation device in accordance with an aspect of the present invention;

[0023] Fig. 3B provides a perspective view of a third exemplary embodiment of a speech generation device in accordance with an aspect of the present invention;

[0024] Fig. 4 provides a side cross-sectional view of an exemplary OLED for use in exemplary speech generation device embodiments of the present invention;

[0025] Fig. 5 provides a schematic diagram of exemplary hardware components for use within a speech generation device in accordance with an aspect of the present invention;

[0026] Fig. 6 provides a plan view of a display device having a front touch screen feature in accordance with an aspect of the present invention;

[0027] Fig. 7 provides a plan view of a display device having a rear touch screen feature in accordance with an aspect of the present invention;

[0028] Fig. 8 provides a perspective view of a display device in conjunction with a first exemplary embodiment of eye controller features in accordance with an aspect of the present invention;

[0029] Fig. 9 provides a plan view of a display device in conjunction with a second exemplary embodiment of eye controller features in accordance with an aspect of the present invention;

[0030] Fig. 10A provides a plan view of a display device in conjunction with a third exemplary embodiment of eye controller features in accordance with an aspect of the present invention;

[0031] Fig. 10B provides a magnified view of the display device of Fig. 10A taken from the circular portion B identified in Fig. 10A.

[0032] Fig. 11 provides a side view of an exemplary speech generation device in accordance with aspects of the present invention while being utilized by a user in a personal mobility device;

[0033] Fig. 12 provides a perspective view of another exemplary embodiment of a speech generation device and associated head-mounted display elements in accordance with aspects of the present invention;

[0034] Fig. 13 provides a plan view of a first exemplary hybrid display for use with exemplary speech generation device embodiments of the present technology;

[0035] Fig. 14 provides a plan view of a second exemplary hybrid display for use with exemplary speech generation device embodiments of the present technology; and

[0036] Fig. 15 provides a plan view of a third exemplary hybrid display for use with exemplary speech generation device embodiments of the present technology.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS

[0037] Reference now will be made in detail to the presently preferred embodiments of the invention, one or more examples of which are illustrated in the accompanying drawings. Each example is provided by way of explanation of the invention, which is not restricted to the specifics of the examples. In fact, it will be apparent to those skilled in the art that various modifications and variations can be made in the present invention without departing from the scope or spirit of the invention. For instance, features illustrated or described as part of one embodiment, can be used on another embodiment to yield a still further embodiment. Thus, it is intended that the present invention cover such modifications and variations as come within the scope of the appended claims and their equivalents. The same numerals are assigned to the same components throughout the drawings and description.

[0038] Referring now to the drawings, Figs. 1 and 2 provide first and second generally opposing perspective views of an exemplary speech generation device (SGD) 100. SGD 100 may include one or more outer housing or casing components that fit together to form a protective case and assembly for the various functional components of SGD 100. For example, SGD 100 in Figs. 1 and 2 includes a top casing component 102, a bottom casing component 104 and a speaker grille 108 that integrate together to collectively form the outer shell for SGD 100. Additional casing features (not illustrated) may be configured to receive mounting hardware for securely affixing SGD 100 to a user location such as a desk, chair, personal mobility device or wheelchair, or other location suitable for user interaction.

[0039] Referring still to the outer shell formed by the casing components of SGD 100, top casing component 102 may be formed to define a relatively large opening or recessed area or other feature for accommodating a display device 120 and optional associated touch screen. When a display device 120 includes a touch screen, a display panel can be viewed through the touch screen. The touch screen may generally serve as an input feature for SGD 100, while the display panel may generally serve as an output feature for SGD 100. Additional details regarding such touch screen functionality are described later in more detail.

[0040] The casing components forming the outer shell of SGD 100 may also be formed to define additional openings to accommodate data input and output. For example, an opening 126 within top casing component 102 may be provided through which an LED or other light source can provide a device power indicator light. An opening 128 can provide a location for a button by which a user can toggle the power for SGD 100 in an "on" or "off" position. Additional openings 130 formed within one or more casing components can provide a location for data input/output ports. As shown in Fig. 2, speaker grille 108 may be formed to define a first opening 132 for accommodating a volume control knob to control the volume of speakers associated with SGD 100, as well as a second opening 134 to provide a location for a USB port through which various peripheral devices may be coupled to SGD 100.

[0041] One or more printed circuit boards (PCBs) also may be provided as structural components within SGD 100, and may be housed between top casing component 102 and bottom casing component 104. For example, a PCB may be provided as a mounting surface or motherboard for such hardware and/or electronics components as a computer processing unit, hard drive(s) and/or other associated memory or media device(s). One or more PCBs may also serve as a mounting surface for radio or wireless communications modules, antennas, data buses, related integrated circuits and passive devices, and the like. One or more speakers also may be positioned among and/or securely mounted to the PCBs and/or casing components.

[0042] Various connectors may be used to establish relative positioning among the various casing components and secure the components to form an integrated assembly. For example, mating holes, pins and standoffs may be used to orient adjacent casing components to one another. Snap-fit features, threaded screws and/or other connectors may be used to secure and connect the casing components together.

[0043] The outer casing components used in exemplary SGD embodiment 100 may be molded from any substantially rigid and lightweight material. In one embodiment, one or more outer casing components are made from a material such as but not limited to plastic, thermoplastic, polymer, polyethylene, or resin material. In another embodiment, one or more outer casing components (particularly the bottom casing components) are made from magnesium or an alloy thereof. When magnesium is used in the casing components, it provides several advantages for an SGD, including but not limited to additional conducted and radiated immunity, shielding from electromagnetic interference (EMI) signals, heat dissipation, and greater structural integrity at lower weight and higher strength than conventional plastic casings. In particular, it permits the walls of the case to be thinner than would be possible in a thixotropic molded plastic case in a conventional SGD.

[0044] Additional features, elements and steps that may be optionally incorporated into SGD 100 or other speech generation devices in accordance with the disclosed technology are disclosed in U.S. Provisional Patent Application entitled "Hand-Held Speech Generation Device" corresponding to USSN 61/228,256, which is hereby incorporated herein by this reference in its entirety for all purposes.

[0045] In one exemplary embodiment of the presently disclosed technology, display device 120 in SGD 100 comprises an Organic Light-Emitting Diode (OLED) display. Suitable OLED displays may be available from such manufacturing sources as Universal Display Corporation of Ewing, New Jersey; Corning Incorporated of Corning, New York; eMagin of Bellevue, Washington; Cambridge Display Technology (CDT) of Cambridge, Great Britain; Novaled of Dresden, Germany; Idemitsu Kosan of Japan; Ignis Innovation, Inc. of Montreal, Quebec; Samsung of Korea, with offices in Ridgefield Park, New Jersey; Eastman Kodak of Rochester, New York; and others. In general, an OLED display employs diodes that emit light from one or more layers of organic material (i.e., the emissive layer(s)) through the movement and recombination of electrons (negative charges) with holes (positive charges). When a voltage potential is applied to such a device, negatively charged electrons move from a cathode layer through an intermediate layer into the emissive layer(s). At the same time, positively charged holes move from an anode layer through an intermediate layer and into the same organic light-emitting layer. When the positive and negative charges meet in the emissive layer(s) of organic material, they combine and produce photons.
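The energy released by each electron-hole recombination fixes the color of the emitted photon. As a worked example (standard physics, not specific to the patent), the emission wavelength follows from E = hc/λ, with hc ≈ 1239.84 eV·nm:

```python
PLANCK_HC_EV_NM = 1239.84  # h*c expressed in eV·nm

def emission_wavelength_nm(photon_energy_ev):
    """Wavelength in nanometers of a photon released when an electron
    and hole recombine with the given energy (in electron-volts)."""
    return PLANCK_HC_EV_NM / photon_energy_ev
```

For instance, a recombination energy of about 2.38 eV yields a wavelength near 521 nm (green light), while roughly 3.0 eV yields about 413 nm (violet-blue); different organic emissive materials are chosen to place this energy, and hence the color, where desired.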

[0046] With more particular reference to Fig. 4, an OLED 400 may include multiple layers of material formed on a substrate 401. Substrate 401 may comprise a metal, a coated metal, a planarized metal, glass, a plastic material, a coated plastic material, a thermoplastic material, a thermoset material, an elastomeric material, silicon or other suitable semiconductive material, or any combination of such materials.

[0047] Referring still to Fig. 4, first and second generally conductive layers 402 and 410 respectively serve as an anode (positive terminal) and a cathode (negative terminal) for the OLED 400. Conductive layers 402 and 410 may include materials such as but not limited to one or more metal oxides, one or more conductive organic materials, or combinations of such materials or others. In one example, anode 402 comprises a metal oxide such as indium tin oxide, tin oxide, indium zinc oxide, zinc oxide or the like, which generally has a high work function, thus promoting the injection of holes into the emissive layer(s). In one example, cathode 410 comprises a metal such as aluminum or calcium or another material having a low work function that promotes injection of electrons into the emissive layer.

[0048] Referring still to Fig. 4, emissive layer 406 corresponds to the layer in which electrons and holes recombine to emit radiation having a frequency falling within the spectrum of visible light. Emissive layer 406 may consist of a single layer, multiple sublayers or a blended layer(s). Emissive layer 406 may comprise an organic material, such as a polymer, a copolymer, a mixture of polymers, a low-molecular-weight organic material (i.e., small molecule material) or a phosphorescent material. Non-limiting examples of suitable polymers include poly(p-phenylene vinylene) or "PPV" and its derivatives, poly(n-vinylcarbazole) or "PVK" and its derivatives, polyfluorene and its derivatives, poly(paraphenylene) or "PPP" and its derivatives, polythiophene and its derivatives, polysilanes, and others. Non-limiting examples of suitable small molecule materials for use in emissive layer 406 include organo-metallic chelates and conjugated dendrimers. One or more intermediate layers 404 and/or 408 also may be provided as optional transport layers, sealant layers, or to provide other functionality as desired within the OLED 400.

[0049] It should be appreciated that the material properties of the organic emissive layer 406 and/or respective sublayers thereof may be particularly chosen to control the wavelength (and thus corresponding color) of the emissive frequencies. The color of light emitted from the OLED thus can be controlled by the selection of the organic material, or by the selection of dopants or other techniques known in the art. Different materials may be provided to generate different colors, such as red, blue and green, which may be utilized separately or mixed together simultaneously to form white light or in combinations to form secondary, tertiary, etc. colors.

[0050] In a typical OLED device, numerous OLEDs such as the one represented in Fig. 4 may be formed on a single substrate and arranged in groups in a regular grid pattern. Several OLED groups forming a column of the grid may share a common cathode, or cathode line. Several OLED groups forming a row of the grid may share a common anode, or anode line. The individual OLEDs in a given group emit light when their cathode line and anode line are activated at the same time. A group of OLEDs within the matrix may form one pixel in a display, with each OLED usually serving as one subpixel or pixel cell. Particularly configured driving circuits may be provided to supply the appropriate power levels to the various groups of OLEDs within an OLED display device.
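The row/column addressing described in paragraph [0050] can be illustrated with a short software sketch. This is not circuitry from the patent; the function and callback names are hypothetical, and real driving circuits operate in hardware, but the scan logic follows the same principle: a subpixel emits only when its anode (row) line and cathode (column) line are driven in the same interval.

```python
# Hypothetical sketch of passive-matrix addressing: OLEDs sharing row
# (anode) and column (cathode) lines emit only when both lines are
# active in the same scan interval. All names here are illustrative.

def scan_frame(frame, drive_pixel):
    """frame: 2D list of brightness levels (0 leaves the subpixel dark).
    drive_pixel: callback that powers one anode/cathode intersection."""
    for row, anode_line in enumerate(frame):
        # Activate one anode (row) line at a time...
        for col, level in enumerate(anode_line):
            if level > 0:
                # ...and pulse each cathode (column) line whose
                # subpixel should emit during this row's interval.
                drive_pixel(row, col, level)

# Record which intersections a 2x2 frame would drive.
lit = []
scan_frame([[0, 255], [128, 0]], lambda r, c, v: lit.append((r, c, v)))
```

In an actual display the `drive_pixel` step would be performed by the driving circuits mentioned above, cycling rows fast enough that the eye perceives a steady image.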

[0051] OLED displays offer numerous advantages for a speech generation device, especially relative to conventional display devices such as inorganic LED devices, liquid crystal displays (LCDs), and others. In particular, the OLED display provided with speech generation device 100 is generally characterized by low activation or driving voltage, thus providing for longer battery life, self-luminescence without requiring a backlight, reduced thickness, wide viewing angle, fast response speed, high contrast, greater brightness and color, superior impact resistance, ease of handling, etc. The reduced weight and power requirements of an OLED display provide particular advantages for a speech generation device because more lightweight and efficient devices help increase a potential user's mobility and duration of assisted communication. In addition, durability and impact resistance provide benefits for a speech generation device display, especially for users who may have trouble controlling input force applied to a speech generation device.

[0052] Additional examples of OLED devices and displays formed from such devices are disclosed in the following U.S. patents and publications, all of which are hereby incorporated by reference herein for all purposes: 5,920,080 (Jones); 5,986,401 (Thompson et al.); 6,492,778 (Segawa); 6,867,549 (Cok et al.); 6,872,473 (Song et al.); 6,903,378 (Cok); 7,061,175 B2 (Weaver et al.); 2007/0001937 A1 (Park et al.); 2007/0008268 (Park et al.); 2007/0063936 A1 (Jung et al.); 2007/0159432 A1 (Tseng et al.); 2007/0235730 A1 (Lee et al.); 2008/0007492 A1 (Koh et al.); 2008/0018241 A1 (Oh et al.); 2008/0018569 A1 (Sung et al.); 2008/0042549 A1 (Song et al.); and 2008/0197342 A1 (Lee et al.).

[0053] In some embodiments of the presently disclosed technology, a display device for use with an SGD comprises an OLED configured as a transparent display. As used herein, a "transparent" display means a medium that is capable of transmitting at least some light so that objects or images can be seen fully or at least partially through the transparent display. Transparent OLED devices (i.e., TOLED devices) are generally capable of emitting light from both the bottom and top surfaces of a device. Such devices use transparent electrodes, substrates and other layers to ensure that the resulting display is at least partially, and preferably fully, transparent to a user. In some embodiments, a TOLED display may be fully transparent in the off-state and between about 50% and 80% transparent during active operation. Exemplary TOLED displays also may be available from selected ones of the manufacturers identified above.

[0054] As shown in Fig. 3A, an exemplary speech generation device 300 includes a TOLED display 301 that generally includes a frame 302 substantially surrounding and supporting a transparent substrate 304 on which a matrix of TOLED devices is configured. The driving circuitry and power supply features required to supply operating power to the TOLED components may be housed within frame 302 or within a separate housing module 305. Housing 305 may include other operational features of speech generation device 300, including a central computing device and related processing components, memories, communication interfaces or modules, speakers, input and output devices, batteries or other power sources and the like, some of which typically may have been provided behind a display device, as shown in SGD 100 of Figs. 1 and 2. Housing 305 may be provided adjacent to display 301 as shown in Fig. 3A or in separate locations that are communicatively coupled to one another via a wired or wireless connection.

[0055] Another exemplary speech generation device 310 is illustrated in Fig. 3B, and generally corresponds to an SGD having a frameless TOLED display 312. Display 312 corresponds to a substantially transparent substrate that does not include any structural support or frame around the outer perimeter of the substrate as shown in Fig. 3A. In one example, a support element 314 is provided along the entirety or just a portion of a single edge of the display 312. Such support element 314 may be configured merely to provide mechanical support for frameless TOLED display 312 or may house functional components related to display 312 such as but not limited to driving circuitry and power features for the TOLED devices formed in an array on the surface of the display 312.

[0056] In one example, any circuitry that is required to drive the TOLED elements within display 312, or other circuit features as may be desired, can be formed directly on the surface of the transparent substrate forming display 312. For example, substantially transparent or semi-transparent conductive materials such as previously described with reference to Fig. 4 (e.g., tin oxide) and/or non-transparent conductive materials may be applied to form circuit portions 318 along one or more peripheral edges of the display 312 as shown in Fig. 3B, or in other predetermined locations on the substrate forming frameless display 312. Circuit portions 318 may correspond to conductive traces or the like. By keeping the formation location of transparent, partially transparent or non-transparent circuit portions 318 along one or more outer edges or another designated smaller surface area portion of display 312, the majority or even substantial entirety of display 312 remains visually obstruction-free to a user. Similar to Fig. 3A, a housing 316 may include other operational features of speech generation device 310, including a central computing device and related processing components, memories, communication interfaces or modules, speakers, input and output devices, batteries or other power sources and the like. Housing 316 may be provided adjacent to display 312 and optional support element 314 as shown in Fig. 3B or in separate locations that are communicatively coupled to one another via a wired or wireless connection.

[0057] A transparent OLED (TOLED) display provides additional benefits for the user of a speech generation device. A user can interact with and use a speech generation device while eliminating or reducing the potential visual restriction that would accompany a non-transparent display. A user can look at the transparent SGD display to view communication-related information, optionally make visual input selections using eye tracking features associated with the SGD, and still be able to view his environment. In addition, other people in a user's environment are better able to see and interact with the user because of the transparent display. Still further, a user is better able to engage in other activities, for example watching television, making a phone call, or holding a conversation, while still having access to the communication functionality of his speech generation device.

[0058] Referring now to Fig. 5, electronic components intended for selective use with a speech generation device in accordance with aspects of the present invention are shown. The electronic components may include a combination of hardware, software and/or firmware elements, all of which either correspond to physical tangible apparatuses or are embedded as instructions on a physical and tangible apparatus such as a computer-readable storage medium.

[0059] In general, the electronic components of an SGD 500 enable the device to transmit and receive messages to assist a user in communicating with others. For example, the SGD may correspond to a particular special-purpose electronic device that permits a user to communicate with others by producing digitized or synthesized speech based on configured messages. Such messages may be preconfigured and/or selected and/or composed by a user within a message window provided as part of the speech generation device user interface. As will be described in more detail below, a variety of physical input devices and software interface features may be provided to facilitate the capture of user input to define what information should be displayed in a message window and ultimately communicated to others as spoken output, text message, phone call, e-mail or other outgoing communication.

[0060] Referring now to Fig. 5, central computing device 501 may include a variety of internal and/or peripheral components. Power to such devices may be provided from a battery 503, such as but not limited to a lithium polymer battery or other rechargeable energy source. A power switch or button 505 may be provided as an interface to toggle the power connection between the battery 503 and the other hardware components. In addition to the specific devices discussed herein, it should be appreciated that any peripheral hardware device 507 may be provided and interfaced to the speech generation device via a USB port 509 or other communicative coupling. It should be further appreciated that the components shown in Fig. 5 may be provided in different configurations and with different arrangements of direct and/or indirect physical and communicative links to perform the desired functionality of such components.

[0061] Referring more particularly to the exemplary hardware shown in Fig. 5, a central computing device 501 is provided to function as the central controller within an SGD and may generally include such components as at least one memory/media element or database for storing data and software instructions, as well as at least one processor. In the particular example of Fig. 5, one or more processor(s) 502 and associated memory/media devices 504a and 504b are configured to perform a variety of computer-implemented functions (i.e., software-based data services). One or more processor(s) 502 within computing device 501 may be configured for operation with any predetermined operating system, such as but not limited to Windows XP, and thus is an open system capable of running any application that can be run on Windows XP. Other possible operating systems include BSD UNIX, Darwin (Mac OS X), Linux, SunOS (Solaris/OpenSolaris), and Windows NT (XP/Vista/7).

[0062] At least one memory/media device (e.g., device 504a in Fig. 5) is dedicated to storing software and/or firmware in the form of computer-readable and executable instructions that will be implemented by the one or more processor(s) 502. Other memory/media devices (e.g., memory/media device 504b) are used to store data which also will be accessible by the processor(s) 502 and which will be acted on per the software instructions stored in memory/media device 504a. The various memory/media devices of Fig. 5 may be provided as a single or multiple portions of one or more varieties of computer-readable media, such as but not limited to any combination of volatile memory (e.g., random access memory (RAM), such as DRAM, SRAM, etc.) and nonvolatile memory (e.g., ROM, flash, hard drives, magnetic tapes, CD-ROM, DVD-ROM, etc.) or any other memory devices including diskettes, drives, other magnetic-based storage media, optical storage media and others. In some embodiments, at least one memory device corresponds to an electromechanical hard drive and/or a solid-state drive (e.g., a flash drive) that easily withstands shocks, for example those that may occur if the SGD is dropped. Although Fig. 5 shows two separate memory/media devices 504a and 504b, the content dedicated to such devices may actually be stored in one memory/media device or in multiple devices. Any such possible variations and other variations of data storage will be appreciated by one of ordinary skill in the art.

[0063] In one particular embodiment of the present subject matter, a first portion of memory/media device 504b is configured to store input data received from a user for performing the desired functional steps associated with a speech generation device. For example, data in memory 504b may include inputs received from one or more peripheral devices, including but not limited to touch screen 506, microphone 508 and other peripheral devices 510, which indicate a user's selections of text to be spoken by the SGD or other related output actions. Memory device 504a includes computer-executable software instructions that can be read and executed by processor(s) 502 to act on the data stored in memory/media device 504b to create new output data (e.g., audio signals, display signals, RF communication signals and the like) for temporary or permanent storage in one of the memory/media devices. Such output data may be communicated to a peripheral output device, such as display device 512, speakers 514, cellular phone or RF device 516, wireless network adapter 518, or as control signals to still further components.

[0064] Computing/processing device(s) 502 may be adapted to operate as a special- purpose machine by executing the software instructions rendered in a computer-readable form stored in memory/media element 504a. When software is used, any suitable programming, scripting, or other type of language or combinations of languages may be used to implement the teachings contained herein. In other embodiments, the methods disclosed herein may alternatively be implemented by hard-wired logic or other circuitry, including, but not limited to application-specific circuits.

[0065] Referring still to Fig. 5, various input devices may be part of an SGD 500 and thus coupled to the computing device 501. For example, a touch screen 506 may be provided to capture user inputs directed to a display location by a user's hand or stylus. A microphone 508, for example a surface mount CMOS/MEMS silicon-based microphone or others, may be provided to capture user audio inputs. Other exemplary input devices (e.g., peripheral device 510) may include but are not limited to a peripheral keyboard, peripheral touch-screen monitor, peripheral microphone, eye gaze controller, mouse and the like.

[0066] In general, the different types of input devices (including optional peripheral devices) are configured with software instructions to accept user inputs in accordance with one or more access methods, including "Touch Enter", "Touch Exit", "Touch Auto Zoom", "Scanning", "Joystick", "Audio Touch", "Mouse Pause/Headtrackers", "Morse Code" and/or "Eye Tracking" access methods. In a "Touch Enter" access method, selection is made upon contact with the touch screen, with highlight and bold options to visually indicate selection. In a "Touch Exit" method, selection is made upon release as a user moves from selection to selection by dragging a finger as a stylus across the screen. In a "Touch Auto Zoom" method, a portion of the screen that was selected is automatically enlarged for better visual recognition by a user. In a "Scanning" mode, highlighting is used in a specific pattern so that individuals can use a switch (or other device) to make a selection when the desired object is highlighted. Selection can be made with a variety of customization options such as a 1-switch autoscan, 2-switch directed scan, 1-switch directed scan with dwell, inverse scanning, and auditory scanning. In a "Joystick" mode, selection is made with a button on the joystick, which is used as a pointer and moved around the touch screen. Users can receive audio feedback while navigating with the joystick. In an "Audio Touch" mode, the speed of directed selection is combined with the auditory cues used in the "Scanning" mode. In a "Mouse Pause/Headtrackers" mode, selection is made by pausing on an object for a specified amount of time with a computer mouse or track ball that moves the cursor on the touch screen. An external switch option exists for individuals who have the physical ability to direct a cursor with a mouse but cannot press down on the mouse button to make selections. A "Morse Code" option is used to support one or two switches with visual and audio feedback. In "Eye Tracking" modes, selections are made simply by gazing at the device screen when the device is outfitted with eye controller features, with selection implemented based on dwell time, eye blinking or external switch activation.
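As a rough illustration of the "1-switch autoscan" customization option named in paragraph [0066], the following sketch cycles a highlight through the on-screen options and returns whichever option is highlighted when the user's switch fires. The function and callback names are assumptions for illustration, not software from the patent.

```python
# Illustrative 1-switch autoscan: options are highlighted in sequence,
# and a single switch press selects the one highlighted at that moment.

def one_switch_autoscan(options, switch_pressed_at_step):
    """Cycle the highlight through options; return the option that is
    highlighted at the step when the switch fires."""
    step = 0
    while True:
        highlighted = options[step % len(options)]  # visually highlighted item
        if switch_pressed_at_step(step):
            return highlighted
        step += 1

# Simulate a user whose switch fires at scan step 4.
choice = one_switch_autoscan(["yes", "no", "help"], lambda s: s == 4)
```

A real implementation would advance the highlight on a timer and read a physical (possibly Bluetooth-connected) switch, but the selection logic is the same.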

[0067] When display device 512 corresponds to a transparent OLED display as described herein, the touch screen may correspond to one or more layers of transparent sensing material to implement a touch screen in conjunction with the TOLED display such that the combination of layers remains transparent and functional. In one example, transparent sensing material forming touch screen 506 may be applied to a front surface of a display device 512 as shown in Fig. 6 such that a user can make input selections via touch screen 506 by pressing his fingers or applying a stylus to the front of the display device 512 as shown. In another example, transparent sensing material forming touch screen 506 may additionally or alternatively be applied to a back surface of a display device 512 as shown in Fig. 7 such that a user can make input selections via touch screen 506 by pressing his fingers or applying a stylus to the rear of the display device as shown. The option depicted in Fig. 7 by which a user can provide input selections from behind display buttons/items as opposed to on top of the buttons eliminates or reduces the problem of obscuring buttons with a user's fingers or hands and allows for more precise selection and smaller potential selection targets.

[0068] Such transparent or substantially transparent embodiments of touch screen 506 may correspond to a resistive touch screen, capacitive touch screen or pressure-sensitive configuration. A capacitive touch screen that uses a sensor material such as indium tin oxide (ITO) may be particularly well suited as a transparent sensing material. A capacitive touch screen also may provide such advantages as overall thinness and light weight. In addition, a capacitive touch panel requires no activation force but only a slight contact, which can be an advantage for a user who may have motor control limitations. Capacitive touch screens also accommodate multi-touch applications (i.e., a set of interaction techniques which allow a user to control graphical applications with several fingers) as well as scrolling. Suitable examples of touch screens for use with TOLED displays are disclosed in U.S. Patent Nos. 6,879,319; 6,885,157; 7,042,444; 7,106,307; 7,133,032; 7,202,856; and 7,230,608, all to Cok, all of which are hereby incorporated herein by reference for all purposes.

[0069] Referring again to Fig. 5, SGD hardware components also may include one or more integrated output devices, such as but not limited to a display device 512 and speakers 514. Display device 512 corresponds to an OLED or TOLED display device as previously described with reference to Figs. 1-4. Speakers 514 may generally correspond to any compact high power audio output device. Speakers 514 may function as an audible interface for the speech generation device when computer processor(s) 502 utilize text-to-speech functionality. In accordance with the general functionality of a speech generation device, a user provides text, symbols corresponding to text, and/or related or additional information in a "Message Window," which then may be interpreted by a text-to-speech engine and provided as audio output via the speakers 514. In one embodiment, the SGD also includes an e-book reader that can be controlled by the user to read an e-book and have the e-book's words spoken aloud to the user. Speech output may be generated in accordance with one or more preconfigured text-to-speech generation tools in male or female and adult or child voices, such as but not limited to products offered for sale by Cepstral of Pittsburgh, Pennsylvania; HQ Voices offered by Acapela Group of Mons, Belgium; Flexvoice offered by Mindmaker of San Jose, California; DECtalk offered by Fonix of Salt Lake City, Utah; products by Loquendo of Torino, Italy; VoiceText offered by NeoSpeech of Sunnyvale, California; AT&T's Natural Voices offered by Wizzard of Pittsburgh, Pennsylvania; Microsoft Voices; digitized voice (digitally recorded voice clips); or others. A volume control module 522 may be controlled by one or more scrolling switches or touch-screen buttons.
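The Message Window flow described in paragraph [0069] — user selections accumulate as text, and a text-to-speech engine renders the composed message as audio — can be sketched as follows. The class and engine interface here are illustrative assumptions, not the patent's actual software.

```python
# Minimal sketch (assumed interfaces) of the message-window flow:
# selections accumulate in a message window, and a TTS engine
# converts the composed message into audio output.

class MessageWindow:
    def __init__(self):
        self.items = []          # words/symbols selected by the user

    def append(self, text):
        """Add one user selection (word, phrase, or symbol label)."""
        self.items.append(text)

    def compose(self):
        """Join the accumulated selections into the message to speak."""
        return " ".join(self.items)

def speak(window, tts_engine):
    """Hand the composed message to a TTS engine (hypothetical callable
    standing in for any of the engines listed above)."""
    return tts_engine(window.compose())

win = MessageWindow()
for word in ("I", "need", "help"):
    win.append(word)
# A stub engine that just tags the text; a real engine would return audio.
spoken = speak(win, lambda text: f"<audio:{text}>")
```

In the device itself, the engine call would drive speakers 514 rather than return a string.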

[0070] SGD hardware components also may include various communications devices and/or modules, such as but not limited to an antenna 515, cellular phone or RF device 516 and wireless network adapter 518. Antenna 515 can support one or more of a variety of RF communications protocols. A cellular phone or other RF device 516 may be provided to enable the user to make phone calls directly and speak during the phone conversation using the SGD, thereby eliminating the need for a separate telephone device. A wireless network adapter 518 may be provided to enable access to a network, such as but not limited to a dial-in network, a local area network (LAN), wide area network (WAN), public switched telephone network (PSTN), the Internet, intranet or Ethernet-type networks or others. Additional communications modules, such as but not limited to an infrared (IR) transceiver, may be provided to function as a universal remote control for the SGD that can operate devices in the user's environment, for example including a TV, DVD player, and CD player.

[0071] When different wireless communication devices are included within an SGD, a dedicated communications interface module 520 may be provided within central computing device 501 to provide a software interface from the processing components of computing device 501 to the communication device(s). In one embodiment, communications interface module 520 includes computer instructions stored on a computer-readable medium as previously described that instruct the communications devices how to send and receive communicated wireless or data signals. In one example, additional executable instructions stored in memory associated with central computing device 501 provide a web browser to serve as a graphical user interface for interacting with the Internet or other network. For example, software instructions may be provided to call preconfigured web browsers such as Microsoft Internet Explorer or the Firefox® Internet browser available from Mozilla.

[0072] Antenna 515 may be provided to facilitate wireless communications with other devices in accordance with one or more wireless communications protocols, including but not limited to BLUETOOTH, WI-FI (802.11 b/g) and ZIGBEE wireless communication protocols. In one example, the antenna 515 enables a user to use the SGD 500 with a Bluetooth headset for making phone calls or otherwise providing audio input to the SGD. The SGD also can generate Bluetooth radio signals that can be used to control a desktop computer, with mouse and keyboard controls appearing on the SGD's display 512.

[0073] When the hardware components within an SGD embodiment, particularly the communications interface module 520, include functional features by which the SGD can function as a Bluetooth radio, users can utilize the flexible input methods and software configurations of an SGD to control and operate a separate desktop or laptop computer. In such a fashion, the SGD appears to the desktop as a Human Interface Device (HID), which allows a user who may not otherwise be able to type or control a mouse to operate the computer by taking advantage of the accessibility options provided by the SGD and its specialized hardware and software features. To access and control a personal computer using the Bluetooth features internal to the SGD, a user plugs a Bluetooth access control module into a USB port on the user's personal computer and performs a communication pairing procedure that establishes short-range wireless communication connectivity between the SGD and the personal computer. Similar steps may be followed to establish Bluetooth connections between an SGD and a Bluetooth-enabled headset, keyboard/mouse, etc.

[0074] In addition to controlling a desktop, integrated Bluetooth features afford a user the opportunity to take advantage of several optional Bluetooth accessories. For example, a switch may be provided for users to mechanically actuate a selection on the SGD and then communicate that selection via Bluetooth protocols. Switching is often used when an SGD operates in a user-input mode where choices are scanned across the display as visual options or sequenced within an audio output, and a user then can select one of the scanned options via the switch. Scanning users often rely on switches located wherever the user has consistent and reliable motor control. Switches may be located on a head rest, seat, leg support, etc. Many conventional switches are hard-wired and include cables that are routed from the device around a wheelchair or other mounting location to the accessibility points. Provision of a Bluetooth-communicating input switch eliminates the potential for wire tangling, thus providing a more convenient and safer environment for a user in a wheelchair with moving parts.

[0075] Another option afforded by Bluetooth communications features involves the benefits of a Bluetooth audio pathway. Many users utilize an option of auditory scanning to operate their SGD. A user can choose to use a Bluetooth-enabled headphone to listen to the scanning, thus affording a more private listening environment that eliminates or reduces potential disturbance in a classroom environment without public broadcasting of a user's communications. A Bluetooth (or other wirelessly configured) headset can provide advantages over traditional wired headsets, again by overcoming the cumbersome nature of the traditional headsets and their associated wires.

[0076] When an exemplary SGD embodiment includes an integrated cell phone, a user is able to send and receive wireless phone calls and text messages. The cell phone component 516 shown in Fig. 5 may include additional sub-components, such as but not limited to an RF transceiver module, coder/decoder (CODEC) module, digital signal processor (DSP) module, communications interfaces, microcontroller(s) and/or subscriber identity module (SIM) cards. An access port for a subscriber identity module (SIM) card enables a user to provide requisite information identifying the user and the cellular service provider, contact numbers, and other data for cellular phone use. In addition, associated data storage within the SGD itself can maintain a list of frequently-contacted phone numbers and individuals, as well as a phone history of phone calls and text messages. One or more memory devices or databases within a speech generation device may correspond to computer-readable media that may include computer-executable instructions for performing various steps/tasks associated with a cellular phone and for providing related graphical user interface menus to a user for initiating the execution of such tasks. The input data received from a user via such graphical user interfaces can then be transformed into a visual display or audio output that depicts various information to a user regarding the phone call, such as the contact information, call status and/or other identifying information. General icons available on the SGD or displays provided by the SGD can offer access points for quick access to the cell phone menus and functionality, as well as information about the integrated cell phone such as the cellular phone signal strength, battery life and the like.

[0077] It should be appreciated that all graphical user interfaces and menus that display "buttons" or other features that are selectable by a user correspond to user input features that, when selected, trigger control signals being sent to the central computing device within an SGD to perform an action in accordance with the selection of the user buttons. Such graphical user interfaces may be displayed visually on a touch screen, for example the transparent displays and integrated touch screens shown in the examples of Figs. 6 and 7. Some exemplary graphical user interfaces correspond to conventional "QWERTY" keyboards, numeric keypads, or other customized keypads with alphanumeric identifiers. Buttons also may include words, phrases, symbols and other information that can be customized based on user preferences, frequency of use or other parameters.

[0078] Buttons may also be provided by which a user can toggle additional menus such as preconfigured or customized compilations referred to herein as vocabulary lists or vocabulary list boxes. Vocabulary list boxes enable a user to have a wide variety of words and phrases immediately available. By listing groups of related words and phrases, vocabulary list boxes enable a user to quickly search through a wide range of text options when composing a message. For example, a user can select a particular group of words and/or phrases and associate all selected items into a new vocabulary list, which may be named and optionally assigned a unique symbol to visually represent the vocabulary list. Features also may be provided to trigger actions performed by the SGD upon selection of an item from a vocabulary list, for example, to automatically "speak" or provide as audio output the words/phrases from a vocabulary list box immediately as they are selected by a user, or to send the words/phrases from the vocabulary list box to the Message Window as they are selected by a user.
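The vocabulary list behavior described above can be sketched as a small data structure. This is an illustrative sketch only; the class, method names and action hooks are assumptions for exposition, not the actual SGD software interface.

```python
class VocabularyList:
    """Sketch of a vocabulary list box: a named, optionally symbol-tagged
    group of words/phrases with a configurable action on selection."""

    def __init__(self, name, items, symbol=None, action="send"):
        self.name = name          # user-assigned list name
        self.symbol = symbol      # optional symbol shown to represent the list
        self.items = list(items)  # grouped words and phrases
        self.action = action      # "speak" immediately, or "send" to the Message Window

    def select(self, index, speak, message_window):
        """Trigger the configured action for the chosen word/phrase."""
        phrase = self.items[index]
        if self.action == "speak":
            speak(phrase)                   # audio output immediately on selection
        else:
            message_window.append(phrase)   # queue text in the Message Window
        return phrase
```

A "send"-configured list appends the selected phrase to the message window for later text-to-speech output, while a "speak"-configured list voices it immediately, mirroring the two selection behaviors described in the paragraph above.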

[0079] Referring now to Figs. 8, 9, 10A and 10B, exemplary embodiments of eye tracking features for use with a transparent display device in accordance with aspects of the presently disclosed technology now will be discussed. In general, when eye tracking is used as an input selection mechanism for a speech generation device of the present invention, an eye controller with one or more light sources and sensing elements may be provided relative to a display to capture a user's selections. In Fig. 8, an eye controller is mounted below a display. In Fig. 9, eye tracking features are mounted within a frame around a display. In Figs. 10A and 10B, eye tracking features are integrated within the display itself.

[0080] Referring more particularly to Fig. 8, an exemplary speech generation device 800 includes an eye gaze controller 820 and an associated display device 830. Display device 830 may include a display panel 833 composed of an array of transparent or non-transparent OLEDs. Additional hardware housings and corresponding components, such as processor elements and the like may be provided as part of these modules or as separate devices. The eye gaze controller 820 includes a housing, which may include a front shell 821 and an opposing rear shell 822. The front shell 821 and the rear shell 822 may be detachably connected to one another as by selectively removable mechanical fasteners such as screws. The rear shell 822 of the housing may be formed to define a plurality of data input/output connections, such as one or more USB sockets 824a, a power output port 835a, a charger port 836a, and corresponding power indicator LED 837a and charging indicator LED 837b. A power input port 835b and USB socket 824b also may be provided relative to display device 830. Processing functionality for the eye controller may be provided by a microprocessor provided within the eye gaze controller 820 or a separate peripheral processor connected to the eye gaze controller 820 via an associated input data port. Connection between the eye gaze controller 820 and the display device 830 also may be implemented via such data I/O ports. For example, USB connectors in the form of USB plugs 826 on each opposite end of a USB cable 825 can be used to connect the display device 830 and the eye gaze controller 820.

[0081] As shown in Fig. 8, the speech generation device 800 that can be controlled by the portable eye gaze controller 820 may be of a type that provides an input screen 833 that displays visual objects that the user can consider whether to select. The selection software that implements the user's decision to select an object displayed on the input screen 833 is provided with the capability of using inputs from an eye gaze controller to effect the selection of the objects displayed on the input screen 833. The selection software may include an algorithm in conjunction with one or more selection methods to select an object on the display screen 833 of the speech generation device 800 by taking some action with the user's eyes.

[0082] Optional selection methods that can be activated using the eye gaze controller 820 to interact with the display screen 833 of the speech generation device 800 include blink, dwell, blink/dwell, blink/switch and external switch. Using the blink selection method, a selection will be performed when the user gazes at an object on the input screen 833 of the speech generation device 800 and then blinks for a specific length of time. Additionally, the system also can be set to interpret as a "blink" a set duration of time during which an associated camera cannot see the user's eye. The dwell method of selection is implemented when the user's gaze is stopped on an object on the input screen 833 of the speech generation device 800 for a specified length of time. The blink/dwell selection combines the blink and dwell selection so that the object on the input screen 833 of the speech generation device 800 can be selected either when the user's gaze is focused on the object for a specified length of time or if, before that length of time elapses, the user blinks an eye. In the external switch selection method, an object is selected when the user gazes on the object for a particular length of time and then closes an external switch. The blink/switch selection combines the blink and external switch selection so that the object on the input screen 833 of the speech generation device 800 can be selected when the user blinks while gazing at the object and then closes an external switch. In each of these selection methods, the user can make direct selections instead of waiting for a scan that highlights the individual object on the input screen 833 of the speech generation device 800. Additionally, the system that uses the eye gaze controller 820 to interact with the input screen 833 of the speech generation device 800 can be set (at the user's discretion) to track both eyes or can be set to track only one eye.
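The selection methods above reduce to simple timing logic over tracker samples. The following is a minimal sketch of the dwell, blink and blink/dwell methods under stated assumptions: the caller feeds one sample per tracker frame, passes the last known gazed-at target during a blink (since the camera cannot see the eye), and the thresholds are illustrative defaults, not values from the disclosure.

```python
class GazeSelector:
    """Hypothetical sketch of dwell/blink/blink-dwell selection timing."""

    def __init__(self, method="blink_dwell", dwell_s=1.0, blink_s=0.3):
        self.method = method
        self.dwell_s = dwell_s        # gaze must rest this long for dwell selection
        self.blink_s = blink_s        # eye must be closed this long for blink selection
        self._target = None
        self._gaze_start = None
        self._blink_start = None

    def update(self, target, eye_visible, now):
        """Feed one tracker sample; returns the selected target or None.
        As noted in the text, a period during which the camera cannot see
        the user's eye is interpreted as a blink."""
        if target != self._target:            # gaze moved to a new object
            self._target = target
            self._gaze_start = now
            self._blink_start = None
        if target is None:
            return None
        if not eye_visible:                   # blink in progress
            if self._blink_start is None:
                self._blink_start = now
            elif (self.method in ("blink", "blink_dwell")
                  and now - self._blink_start >= self.blink_s):
                return target                 # blink held long enough: select
        else:
            self._blink_start = None
        if (self.method in ("dwell", "blink_dwell")
                and now - self._gaze_start >= self.dwell_s):
            return target                     # gaze rested long enough: select
        return None
```

The blink/switch and external-switch variants would replace the timing test with a check of a physical switch input; the blink/dwell case shows why the two timers run independently, selecting on whichever threshold is crossed first.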

[0083] Additional aspects of exemplary eye gaze controller 820 include eye tracker elements and associated algorithms that enable the eye tracker elements to analyze captured images of a user's eye(s). A basic eye tracker device employs a light source and a photosensor that detects light reflected from the viewer's eyes. In one particular example, a video-based gaze tracking system contains a processing unit which executes image processing routines such as detection and tracking algorithms employed to accurately estimate the centers of the subject's eyes, pupils and corneal reflexes (known as glint) in two-dimensional images generated by a mono-camera near infrared system. The gaze measurements are computed from the pupil and glint (reference point). A mapping function - usually a second order polynomial function - is employed to map the gaze measurements from the two-dimensional image space to the two-dimensional coordinate space of the input display 833 of the speech generation device 800. The coefficients of this mapping function are estimated during a standard interactive calibration process in which the user is asked to look consecutively at a number of points displayed (randomly or not) on the input display 833. Known calibration techniques for passive eye monitoring may use a number of calibration points ranging, for example, from one to sixteen points. Once this calibration session for a particular user is completed, any new gaze measurement in the two-dimensional image will be mapped to its point of gaze on the input display 833 using an equation of this nature: (Xs, Ys) = F(Xi, Yi), with F being the mapping function, (Xs, Ys) the screen coordinates (or Point of Gaze) on the input display 833, and (Xi, Yi) the gaze measurement drawn from the image of the camera. In order to evaluate the success of the calibration procedure, a test desirably is conducted as follows. The user is asked again to look at some points on the input display 833, the gaze points are estimated using the mapping function, and an average error (in pixels) is computed between the actual points and the estimated ones. If the error is above a threshold, then the user needs to re-calibrate.
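The calibration described above can be sketched as a least-squares fit of a second-order polynomial per screen axis. This is a minimal illustration of that general technique, not the disclosed device's actual implementation; the six-term basis and the use of NumPy's least-squares solver are assumptions.

```python
import numpy as np

def design_matrix(xi, yi):
    """Second-order polynomial terms of an image-space gaze measurement (Xi, Yi)."""
    xi, yi = np.asarray(xi, float), np.asarray(yi, float)
    return np.stack([np.ones_like(xi), xi, yi, xi * yi, xi**2, yi**2], axis=-1)

def calibrate(image_pts, screen_pts):
    """Estimate coefficients of F so that (Xs, Ys) ~ F(Xi, Yi), from the
    calibration points the user was asked to look at."""
    A = design_matrix(image_pts[:, 0], image_pts[:, 1])
    coeffs, *_ = np.linalg.lstsq(A, screen_pts, rcond=None)
    return coeffs  # shape (6, 2): one column of coefficients per screen axis

def map_gaze(coeffs, xi, yi):
    """Map a new image-space gaze measurement to screen coordinates."""
    return design_matrix(xi, yi) @ coeffs

def mean_error(coeffs, image_pts, screen_pts):
    """Average pixel error used to decide whether re-calibration is needed."""
    pred = design_matrix(image_pts[:, 0], image_pts[:, 1]) @ coeffs
    return float(np.mean(np.linalg.norm(pred - screen_pts, axis=1)))
```

The validation step in the paragraph corresponds to calling `mean_error` on a fresh set of fixation points and re-running `calibrate` if the result exceeds the chosen pixel threshold.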

[0084] It should be appreciated that other types of eye tracker devices are known, and any of them can be employed in accordance with the present invention. Examples of eye tracker devices are disclosed in U.S. Patent Nos.: 3,712,716 to Cornsweet et al.; 4,950,069 to Hutchinson; 5,589,619 to Smyth; 5,818,954 to Tomono et al.; 5,861,940 to Robinson et al.; 6,079,828 to Bullwinkel; and 6,152,563 to Hutchinson et al.; each of which is hereby incorporated herein by this reference for all purposes. Examples of suitable eye tracker devices also are disclosed in U.S. Patent Application Publication Nos.: 2006/0238707 to Elvesjo et al.; 2007/0164990 to Bjorklund et al.; and 2008/0284980 to Skogo et al.; each of which is hereby incorporated herein by this reference for all purposes.

[0085] In the particular embodiment of Fig. 8, eye tracker elements of the portable eye gaze controller 820 desirably can include a USB video camera and corresponding focusing lens mounted relative to the central opening 821a as well as one or more light sources (e.g., a left infrared LED array 841 and a right infrared LED array 842). The focusing lens may be mounted in an adjustable lens housing and disposed in front of the video camera that is disposed within the front shell 821 and rear shell 822 of the main housing. The adjustable lens housing can be mechanically locked into position so that the focus of the lens does not change with vibration or drops. Each of the LEDs in each respective infrared LED array 841, 842 desirably emits at a wavelength of about 880 nanometers, which is the shortest wavelength deemed suitable in one exemplary embodiment for use without distracting the user (the shorter the wavelength, the more sensitive the sensor, i.e., video camera, of the eye tracker). However, LEDs 841, 842 operating at wavelengths other than about 880 nanometers easily can be substituted and may be desirable for certain users and/or certain environments. A plurality of LEDs (e.g., 10-50 LEDs) may be disposed in staggered, linear or other configurations in each array 841, 842. Respective transparent protective covers may be provided over each of the LED arrays 841, 842.

[0086] As further shown in Fig. 8, two spaced apart indicator lights 821b, 821c may be disposed beneath the central opening 821a defined in the front shell 821. The eye gaze controller 820 is configured to illuminate each indicator light 821b, 821c when the eye tracker has acquired the location of the user's eye associated with that indicator light. The eye tracker's acquisition of the location of the user's eye may require using the processing power of either the microprocessor in the speech generation device 800 or of a dedicated microprocessor in the eye gaze controller 820, as the case may be. In each case for example, if the eye tracker of the eye gaze controller 820 has acquired the location of the user's left eye, then the eye gaze controller 820 is configured to illuminate the left indicator light 821b. Similarly, if the eye tracker has acquired the location of the user's right eye, then the eye gaze controller 820 is configured to illuminate the right indicator light 821c. This feature of providing separate indicator lights 821b, 821c mounted on the front of the front shell 821 enables the eye gaze controller 820 to avoid using part of the display 833 of the speech generation device 800 to show the user if one or both of the user's eyes are being tracked. Accordingly, this indicator light feature of the eye gaze controller 820 conserves valuable space on the display screen 833 of the speech generation device 800. Additionally, it has also been observed that these indicators 821b, 821c act as a relaxation technique for otherwise hyper users.

[0087] Additional features, elements and steps that may be optionally incorporated into SGD 100 or other speech generation devices in accordance with the disclosed technology are disclosed in U.S. Provisional Patent Application entitled "SEPARATELY PORTABLE DEVICE FOR IMPLEMENTING EYE GAZE CONTROL OF A SPEECH GENERATION DEVICE" corresponding to USSN 61/217,536, which is hereby incorporated herein by this reference in its entirety for all purposes.

[0088] Referring now to Fig. 9, another embodiment of a speech generation device 900 with eye tracking features includes a transparent display device 902 and hardware housing 910. The transparent display device 902 desirably can include an outer frame 904 and inner transparent screen 906 such as a TOLED device as previously described. The outer frame 904 may include a plurality of eye sensing elements 908 positioned within one or more peripheral surfaces of the frame 904 to detect a user's eye movement relative to the screen 906. Eye sensing elements 908 may be cameras, sensors (e.g., photodiodes, photodetectors, CMOS sensors and/or CCD sensors) or other devices. Eye sensing elements 908 may be used in eye gaze detection instead of or in conjunction with the video camera device previously described with reference to Fig. 8. Some additional features associated with the display and eye sensing elements may be housed within the outer frame 904, although most eye tracking controller elements and additional hardware components associated with SGD 900 desirably are provided within the supplemental hardware housing 910.

[0089] In a still further embodiment, as depicted in Figs. 10A and 10B, an exemplary speech generation device 1000 with eye tracking features includes a transparent display device 1002 and hardware housing 1010. The transparent display device 1002 desirably can include an outer frame 1004 and inner transparent screen 1006 such as a TOLED device as previously described. Transparent screen 1006 may be formed to include not only a matrix of TOLED devices, but sensor elements as well. For example, each element 1012 in the matrix display shown in Fig. 10A (corresponding, for example, to an OLED group, pixel or subpixel of the display device) may include an integrated combination of transparent OLEDs 1014 and sensors 1016 as shown in the magnified view of Fig. 10B. In one example, different transparent OLEDs within each group 1012 may be configured for different frequencies of operation, thus corresponding to different spectrums, such as green light, red light, blue light, near infrared light or others. In one example, sensors 1016 correspond to photodetectors, photodiodes, CMOS sensors, CCD sensors or other suitable image sensing elements. It should be appreciated that any number and positioning of such components within display screen 1006 may be practiced in accordance with aspects of the presently disclosed technology. It should be appreciated that sensors 1016 integrated within screen 1006 can be used in eye gaze detection instead of or in addition to the video camera device previously described with reference to Fig. 8 or the sensing elements 908 previously described with reference to Fig. 9.

[0090] Best performance of the optional eye controller sensing elements and other elements of a speech generation device may be achieved when the speech generation device is rigidly and securely mounted relative to a user. Figs. 11 and 12 now describe two such exemplary options for implementing a relatively fixed relationship between user and speech generation device components. As shown in Fig. 11, a user 1100 seated in a chair, wheelchair, scooter or other personal mobility device 1102 is positioned relative to a speech generation device, which includes a display device 1104 and housing module 1106. Display device 1104 may be securely and rigidly mounted to wheelchair 1102 by a support member 1105. The exact length and positioning of display device 1104 dictated by the support member 1105 may be chosen based on a user's height such that the user's eye location is substantially parallel with the display device 1104 or some predetermined portion thereof. The housing module 1106 includes additional hardware components of the speech generation device. Although housing module 1106 is shown in Fig. 11 as provided in a distal relationship with the display device 1104, it may be provided closer to or even adjacent to the display device 1104 as shown in the examples of Figs. 3, 6, 9, etc. Electrical connection and communication between display device 1104 and the SGD components within housing module 1106 may be provided via a wired link or a wireless link, such as a Bluetooth, Wi-Fi or Zigbee connection or others. Desirably, in accordance with an embodiment of the present invention, the display device 1104 includes a transparent screen formed with an integrated combination of transparent OLEDs 1014 and sensors 1016 as shown in Fig. 10B, for example.

[0091] Additional display devices for use with speech generation devices of the present technology may be configured as head mounted displays incorporated into a helmet, sunglasses or other item able to be worn securely adjacent to a user's head. For example, Fig. 12 depicts a head-mounted display device 1200 that includes one or more transparent OLED matrices 1204 integrated within or formed on the lenses 1202 of a pair of glasses or sunglasses 1206. The OLED matrices 1204 would then be capable of displaying menus, keypads or other graphical user interfaces directly on the lenses for viewing by a user. Photodiodes or other sensors can be incorporated with the OLED matrices of Fig. 12, similar to the arrays described with reference to Figs. 10A and 10B, such that eye tracking can be used to control an associated speech generation device. The other SGD hardware may be provided in a separate housing module, such as a separate computer provided as a stationary device (e.g., sitting on a desk, worn in a backpack or mounted on a wheelchair) or even a handheld device or pocket computer. The head-mounted display device 1200 may also optionally include one or more integrated speakers 1208 mounted to the frame of glasses 1206 and/or a microphone 1210 for enabling additional audio input and output via the head mounted display device 1200. Interface between the head-mounted display 1200 and separate SGD components may be provided via a wired or wireless link, such as a Bluetooth, Wi-Fi or Zigbee connection or others.

[0092] Referring now to Figs. 13-15, additional exemplary embodiments of displays that may be formed in accordance with aspects of the presently disclosed technology correspond to hybrid displays that utilize two or more different display technologies. As shown in Fig. 13, exemplary display 1300 includes first and second portions 1301 and 1302, at least one of which is formed with an OLED or TOLED display. The other display portion can be formed of a non-OLED technology such as but not limited to a light-emitting diode (LED) display, electroluminescent display (ELD), plasma display panel (PDP), and liquid crystal display (LCD). As shown in Fig. 14, the two different display sections do not necessarily need to be side-by-side. Instead, an exemplary display 1400 shows how a first interior display portion 1402 could be embedded within a second exterior portion 1404 substantially surrounding the first portion 1402. Either of the portions 1402 or 1404 could be an OLED or TOLED display, with the other portion being formed of a non-OLED technology as described above. Referring to Fig. 15, a hybrid display 1500 may include more than the two portions shown in Figs. 13 and 14. For example, four portions 1501-1504, respectively, may be provided as shown. Alternatively, a greater or fewer number of portions may be provided in the same or different arrangements to provide further hybrid display options. In still further examples (not illustrated), two or more physically separate displays may be provided in a single speech generation device such that one designated display corresponds to an OLED or TOLED display and the others variously correspond to a non-transparent display such as an LED, ELD, PDP, LCD or other suitable type.

[0093] While the present subject matter has been described in detail with respect to specific embodiments thereof, it will be appreciated that those skilled in the art, upon attaining an understanding of the foregoing, may readily produce alterations to, variations of, and equivalents to such embodiments. Accordingly, the scope of the present disclosure is by way of example rather than by way of limitation, and the subject disclosure does not preclude inclusion of such modifications, variations and/or additions to the present subject matter as would be readily apparent to one of ordinary skill in the art.