


Title:
DETERMINING ONE OR MORE LIGHT EFFECTS BY LOOKING AHEAD IN A BOOK
Document Type and Number:
WIPO Patent Application WO/2020/069979
Kind Code:
A1
Abstract:
A system for controlling at least one light device to render one or more light effects while a first portion (61) of a book is being rendered is configured to determine one or more words corresponding to a second portion (63-65) of the book. The second portion is a later portion in the book than the first portion. The system is further configured to determine the one or more light effects based on the one or more words and control the at least one light device to render the one or more light effects while the first portion is being rendered.

Inventors:
BORRA TOBIAS (NL)
ALIAKSEYEU DZMITRY (NL)
TEUNISSEN CORNELIS (NL)
Application Number:
PCT/EP2019/076085
Publication Date:
April 09, 2020
Filing Date:
September 26, 2019
Assignee:
SIGNIFY HOLDING BV (NL)
International Classes:
H05B37/02
Domestic Patent References:
WO2009150592A12009-12-17
WO2008135894A12008-11-13
Foreign References:
US20170060365A12017-03-02
US20170262537A12017-09-14
US20150269133A12015-09-24
Attorney, Agent or Firm:
VERWEIJ, Petronella, Danielle et al. (NL)
Claims:

1. A system (1) for controlling at least one light device (13-15) to render one or more light effects while a first portion (61) of an audio book is being rendered, said system (1) comprising at least one processor (5) configured to:

- obtain a transcript corresponding to said audio book,

- determine one or more words from said transcript corresponding to a second portion (63-65) of said book, said second portion (63-65) being a later portion in said book than said first portion (61), and wherein a variable time period and a variable amount of text separates said first portion (61) and said second portion (63-65),

- determine said one or more light effects based on said one or more words, and

- control said at least one light device (13-15) to render said one or more light effects while said first portion (61) is being rendered.

2. A system (1) as claimed in claim 1, wherein the first portion (61) and the second portion (63 - 65) are part of the same chapter of said audio book.

3. A system (1) as claimed in claim 1, wherein said book comprises a text, said text comprising said one or more words, and said first portion (61) and said second portion (63-65) are rendered on an audio output device (8) by synthesizing speech from said text.

4. A system (1) as claimed in claim 1 or 2, wherein said at least one processor is further configured to render at least the first portion of the audio book.

5. A system (1) as claimed in claim 1 or 2, wherein said at least one processor is further configured to obtain the transcript from another device that is configured to render at least the first portion of the audiobook.

6. A system (1) as claimed in claim 5, wherein said at least one processor (5) is configured to identify an end of a section in which said first portion is located and determine said one or more words from within said section.

7. A system (1) as claimed in claim 1 or 2, wherein said at least one processor (5) is configured to select at least one of said one or more light effects from a library of light effects.

8. A system (1) as claimed in claim 1 or 2, wherein said at least one processor (5) is configured to determine a duration of a light effect of said one or more light effects based on a remaining duration of a section in which said first portion (61) is located and/or based on a type of said light effect.

9. A system (1) as claimed in claim 1 or 2, wherein said one or more words describe one or more events and/or one or more moods.

10. A system (1) as claimed in claim 9, wherein said one or more light effects comprise a plurality of light effects overlapping in time and said at least one processor (5) is configured to determine a first light effect of said plurality of light effects from at least one event-related word of said one or more words and a second light effect of said plurality of light effects from at least one mood-related word of said one or more words, said first light effect having a shorter duration than said second light effect.

11. A system (1) as claimed in any of the preceding claims, wherein the at least one processor is further configured to detect the intonation with which a specific sentence in the audio book is being rendered and the one or more light effects are being altered using the detected intonation.

12. A lighting system comprising the system of claim 1 and at least one light device (13 - 15) for generating the one or more light effects.

13. A method of controlling at least one light device to render one or more light effects while a first portion of an audio book is being rendered, said method comprising:

obtaining a transcript corresponding to said audio book,

determining (101) one or more words from said transcript corresponding to a second portion of said book, said second portion being a later portion in said book than said first portion (61), wherein a variable time period and a variable amount of text separates said first portion (61) and said second portion (63-65);

determining (103) said one or more light effects based on said one or more words; and

controlling (105) said at least one light device to render said one or more light effects while said first portion is being rendered.

14. A method according to claim 13, further comprising:

wherein said one or more light effects comprise a plurality of light effects overlapping in time and said at least one processor (5) is configured to determine a first light effect of said plurality of light effects from at least one event-related word of said one or more words and a second light effect of said plurality of light effects from at least one mood-related word of said one or more words, said first light effect having a shorter duration than said second light effect.

15. A computer program or suite of computer programs comprising at least one software code portion or a computer program product storing at least one software code portion, the software code portion, when run on a computer system, being configured for enabling the method of claim 13 or 14 to be performed.

Description:
Determining one or more light effects by looking ahead in a book

FIELD OF THE INVENTION

The invention relates to a system for controlling at least one light device.

The invention further relates to a method of controlling at least one light device.

The invention also relates to a computer program product enabling a computer system to perform such a method.

BACKGROUND OF THE INVENTION

Technologies such as Philips Hue Sync and Philips Ambilight enable the rendering of light effects based on the content that is played on a television (Ambilight) or computer (Hue Sync). The current Philips Hue Sync system is focused on analyzing video content. Even though audio, delivered with the video, is considered in a supporting role, the focus is on extracting light effects based on the visual content. Since both the lights and the content (usually a monitor screen or TV) are in the same modality (i.e. visual) there will be instances where both will 'compete' for attention.

When the content is switched to a different modality, e.g. auditory, or when the content consists of plain text (e.g. a book instead of video), the lighting system may be able to offer an even more enhanced and immersive experience, as the visual effects are solely generated by the lighting system. For example, WO 2009/150592 Al discloses a system which recognizes keywords in speech when a person reads a book aloud and which generates atmosphere parameters suited to underline an atmosphere associated with the keywords, e.g. a parameter for setting up green light when the keyword “forest” is recognized.

US2015/269133A1 discloses a method of augmenting an electronic book (e-book) reading experience on a portable electronic device, and a system to augment it. The method includes determining, using a processor, a current page and line of the e-book being read. The method also includes obtaining context information associated with the current page and line, the context information including a genre of the e-book, and determining features used in the augmenting based on associating the context information with the features, the features including one or more of a text color, font type, font size, music, image and animation.

Despite its benefits, the system of WO 2009/150592 Al has as a drawback that the generated visual effects regularly do not match the portion of the book being read aloud at that moment.

SUMMARY OF THE INVENTION

It is a first object of the invention to provide a system for controlling at least one light device, which allows generated visual effects to more often match a portion of a book being rendered at that moment.

It is a second object of the invention to provide a method of controlling at least one light device, which allows generated visual effects to more often match a portion of a book being rendered at that moment.

In a first aspect of the invention, a system for controlling at least one light device to render one or more light effects while a first portion of a book is being rendered comprises at least one processor configured to determine one or more words corresponding to a second portion of said book, said second portion being a later portion in said book than said first portion, determine said one or more light effects based on said one or more words, and control said at least one light device to render said one or more light effects while said first portion is being rendered. The system may be a lighting system, may be part of a lighting system or may be used in a lighting system. Said one or more words may describe one or more events and/or one or more moods, for example.

By letting the system look ahead in the book, the system knows in advance which lighting effects will optimally support the content to come. As an example, consider scenes (e.g. chapters or paragraphs) that rely on mood settings, e.g. a scene describing a traveler entering a log cabin after a long trip through the night and finding the fireplace already lit when he opens the door. Here, it will not make sense for the fireplace effect to be rendered on the lamps only when the word 'fireplace' is uttered by the delivery system. Ideally, the lighting effects should already anticipate the mood of the scene and render the fireplace effect upon 'opening the door'.

Thus, by incorporating analysis of the book text or transcribed audio file, control of the light devices and look-ahead, the dynamic lighting system will be capable of anticipating the content to come. In this way, generated visual effects more often match a portion of a book being rendered at that moment. By implementing this behavior in real time, no scripting is necessary. Lights in the room can gradually dim even before the sunset itself is narrated in the (audio) book, the onset of lightning may already be announced by a soft flicker on the lamps, and the onset of a new chapter and a new day will be rendered on the lamps before it has been heard by the user.

Said book is an audio book and said at least one processor is configured to obtain a transcript corresponding to said audio book and determine said one or more words from said transcript. An audio book generally comprises recorded audio and the transcript may also be part of the audio book, may be obtained by recognizing speech in said recorded audio or may be obtained separately, e.g. from the Internet.

Said book may comprise a text, said text comprising said one or more words and said first portion and said second portion may be rendered on an audio output device by synthesizing speech from said text. As speech synthesis is improving, the experience of a book being read aloud from the text of the book using speech synthesis is also improving.

A fixed time period or a fixed amount of text may separate said first portion and said second portion. Alternatively, a variable time period and a variable amount of text may separate said first portion and said second portion. The fixed amount of text may be one page or a few pages, for example. The fixed amount of time may be between 30 seconds and 5 minutes, for example.

The use of a fixed amount of text or a fixed amount of time is relatively easy to implement. However, it may be beneficial to use a variable amount of time and a variable amount of text. For example, said at least one processor may be configured to identify an end of a section in which said first portion is located and determine said one or more words from within said section. Since a next section is more likely to have a different atmosphere, it is beneficial to only look ahead in the same section. A section may refer to a chapter, a paragraph, pieces of text separated by a punctuation mark, pieces of text separated by pauses during speaking out, or pieces of text separated by different intonation.
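The section-bounded look-ahead described above can be sketched as follows. This is a minimal illustration, not the application's implementation: the delimiter markers and function name are invented for this sketch.

```python
# Assumed markers that a transcript segmenter would insert at section
# boundaries (chapters, paragraphs); these names are illustrative.
SECTION_DELIMITERS = {"CHAPTER_BREAK", "PARAGRAPH_BREAK"}

def lookahead_window(tokens, current_index, max_ahead):
    """Return the tokens between the current position and either the
    look-ahead horizon or the end of the current section, whichever
    comes first, so that effects never leak into the next section."""
    window = []
    for token in tokens[current_index + 1 : current_index + 1 + max_ahead]:
        if token in SECTION_DELIMITERS:  # stop at the section boundary
            break
        window.append(token)
    return window
```

A variable look-ahead thus falls out naturally: the window shrinks as the current position approaches the end of the section.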

Said at least one processor may be configured to select at least one of said one or more light effects from a library of light effects, for example by mapping a word that is determined to a certain light effect. Associating existing light effects from a library with (key)words allows the system to be realized in a relatively simple manner, i.e. without the dynamic creation of light effects. A more intelligent system may be able to dynamically create light effects, e.g. by determining that a “forest” generally has a green color and that “forest” describes a mood rather than an event and therefore the corresponding light effect should have a longer duration.
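A minimal sketch of such a library-based mapping follows; the keywords and effect parameters are invented examples, not contents of an actual effect library.

```python
# Illustrative keyword-to-effect library; entries are assumptions.
EFFECT_LIBRARY = {
    "forest": {"color": "green", "kind": "mood"},
    "fireplace": {"color": "warm_orange", "kind": "mood"},
    "lightning": {"color": "white_flash", "kind": "event"},
}

def select_effects(words):
    """Map recognized words onto light effects from the library,
    ignoring words that have no associated effect."""
    return [EFFECT_LIBRARY[w] for w in words if w in EFFECT_LIBRARY]
```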

Said at least one processor may be configured to determine a duration of a light effect of said one or more light effects based on a remaining duration of a section in which said first portion is located and/or based on a type of said light effect. Since a next section is more likely to have a different atmosphere, it is beneficial to ensure that the light effect has ended by the time the next section starts.

Certain types of light effects, e.g. relating to moods, preferably have a longer duration than other types of light effects, e.g. relating to events. Said one or more light effects typically comprise a plurality of light effects overlapping in time and said at least one processor may be configured to determine a first light effect of said plurality of light effects from at least one event-related word of said one or more words and a second light effect of said plurality of light effects from at least one mood-related word of said one or more words, said first light effect having a shorter duration than said second light effect.
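One way to realize this duration rule is sketched below, assuming effects are classified as "mood" or "event"; the concrete durations are illustrative values only.

```python
def effect_duration(effect_kind, remaining_section_seconds):
    """Pick a duration for a light effect: mood effects are sustained
    until the end of the current section, event effects are short
    bursts. The 5-second event duration is an illustrative value."""
    if effect_kind == "mood":
        # Sustain the ambiance, but never past the section boundary.
        return remaining_section_seconds
    # Event accents are brief, and also clipped to the section end.
    return min(5.0, remaining_section_seconds)
```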

In a second aspect of the invention, the method of controlling at least one light device to render one or more light effects while a first portion of a book is being rendered comprises determining one or more words corresponding to a second portion of said book, said second portion being a later portion in said book than said first portion, determining said one or more light effects based on said one or more words, and controlling said at least one light device to render said one or more light effects while said first portion is being rendered. Said method may be performed by software running on a programmable device. This software may be provided as a computer program product.

Moreover, a computer program for carrying out the methods described herein, as well as a non-transitory computer readable storage medium storing the computer program, are provided. A computer program may, for example, be downloaded by or uploaded to an existing device or be stored upon manufacturing of these systems.

A non-transitory computer-readable storage medium stores a software code portion, the software code portion, when executed or processed by a computer, being configured to perform executable operations for controlling at least one light device to render one or more light effects while a first portion of a book is being rendered on an audio output device, said executable operations comprising: determining one or more words corresponding to a second portion of said book, said second portion being a later portion in said book than said first portion, determining said one or more light effects based on said one or more words, and controlling said at least one light device to render said one or more light effects while said first portion is being rendered.

As will be appreciated by one skilled in the art, aspects of the present invention may be embodied as a device, a method or a computer program product.

Accordingly, aspects of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, microcode, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a "circuit", "module" or "system." Functions described in this disclosure may be implemented as an algorithm executed by a processor/microprocessor of a computer. Furthermore, aspects of the present invention may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied, e.g., stored, thereon.

Any combination of one or more computer readable medium(s) may be utilized. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a computer readable storage medium may include, but are not limited to, the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of the present invention, a computer readable storage medium may be any tangible medium that can contain, or store, a program for use by or in connection with an instruction execution system, apparatus, or device.

A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber, cable, RF, etc., or any suitable combination of the foregoing. Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java(TM), Smalltalk, C++ or the like, conventional procedural programming languages, such as the "C" programming language or similar programming languages, and functional programming languages such as Scala, Haskell or the like. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).

Aspects of the present invention are described below with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the present invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor, in particular a microprocessor or a central processing unit (CPU), of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer, other programmable data processing apparatus, or other devices create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.

These computer program instructions may also be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.

The computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.

The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of devices, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.

BRIEF DESCRIPTION OF THE DRAWINGS

These and other aspects of the invention are apparent from and will be further elucidated, by way of example, with reference to the drawings, in which:

Fig. 1 is a block diagram of an embodiment of the system of the invention;

Fig. 2 is a flow diagram of an embodiment of the method of the invention;

Fig. 3 shows examples of a first portion of a book and different possibilities for corresponding second portions of said book; and

Fig. 4 is a block diagram of an exemplary data processing system for performing the method of the invention.

Corresponding elements in the drawings are denoted by the same reference numeral.

DETAILED DESCRIPTION OF THE EMBODIMENTS

Fig. 1 shows an embodiment of the system of the invention: mobile device 1. Mobile device 1 is connected to a wireless LAN access point 17. A bridge 11 is also connected to the wireless LAN access point 17. Light devices 13-15 communicate wirelessly with the bridge 11, e.g. using the Zigbee protocol, and can be controlled via the bridge 11, e.g. by the mobile device 1. The bridge 11 may be a Philips Hue bridge and the light devices 13-15 may be Philips Hue lights, for example. In an alternative embodiment, light devices are controlled without a bridge. The wireless LAN access point 17 is connected to the Internet 18. An Internet server 19 is also connected to the Internet 18. The mobile device 1 may be a mobile phone or a tablet, for example.

The mobile device 1 is suitable for controlling the light devices 13-15 to render one or more light effects while a first portion of a book is being rendered. The mobile device 1 comprises a processor 5, a transceiver 3, a memory 7, an audio output device 8, and a display 9. The processor 5 is configured to determine one or more words corresponding to a second portion of the book. The second portion is a later portion in the book than the first portion. The processor 5 is further configured to determine the one or more light effects based on the one or more words and control the light devices 13-15 to render the one or more light effects while the first portion is being rendered.

Thus, the mobile device 1 analyzes content of (audio) books in real time with look-ahead to optimally enhance the user experience of delivery of the book. The mobile device 1 is capable of rendering effects on the light devices 13-15, preferably both pre-determined (e.g. from a library of lighting effects, with optional parameters such as duration and intensity) as well as on the fly, for example single colors. Preferably, both mood of the content and events in the content are analyzed and rendered on the light devices 13-15.

In the embodiment of Fig. 1, the book may be an audio book or a book without audio (but instead comprising a text which comprises the one or more words). To handle the former, the processor 5 is configured to obtain a transcript corresponding to an audio book and determine the one or more words from the transcript. To handle the latter, the processor 5 is configured to render the first portion and the second portion on the audio output device 8 by synthesizing speech from the text. In the embodiment of Fig. 1, the first portion of the book is being rendered by the mobile device 1, e.g. by using the audio output device 8 or the display 9. In an alternative embodiment, the first portion of the book is being rendered by another device or by a human who reads the book aloud. Hence, the rendering of at least the first portion of the audio book may be performed by the mobile device 1, or it may be performed by a separate device, not part of the mobile device 1. The processor 5 of the mobile device 1 may be configured to recognize separate words or a string of words, or both, and to determine one or more light effects based on a separate word, multiple separate words or a string of words. For example, a library maps a separate word, multiple separate words or a string of words onto at least one light effect, and the processor 5 selects a light effect from the library.

In the embodiment of the mobile device 1 shown in Fig. 1, the mobile device 1 comprises one processor 5. In an alternative embodiment, the mobile device 1 comprises multiple processors. The processor 5 of the mobile device 1 may be a general-purpose processor, e.g. from Qualcomm or ARM-based, or an application-specific processor. The processor 5 of the mobile device 1 may run an Android or iOS operating system, for example. The memory 7 may comprise one or more memory units. The memory 7 may comprise solid-state memory, for example. The memory 7 may be used to store an operating system, applications and application data, for example. The audio output device 8 may comprise one or more speakers and/or an output for transmitting audio to a headphone, for example.

The transceiver 3 may use one or more wireless communication technologies such as Wi-Fi (IEEE 802.11) to communicate with the wireless LAN access point 17, for example. In an alternative embodiment, multiple transceivers are used instead of a single transceiver. In the embodiment shown in Fig. 1, a receiver and a transmitter have been combined into a transceiver 3. In an alternative embodiment, one or more separate receiver components and one or more separate transmitter components are used. The display 9 may comprise an LCD or OLED panel, for example. The display 9 may be a touch screen. The mobile device 1 may comprise other components typical for a mobile device such as a battery and a power connector. The invention may be implemented using a computer program running on one or more processors.

In the embodiment of Fig. 1, the system of the invention is a mobile device. In an alternative embodiment, the system of the invention is a different device, e.g. a smart speaker.

An embodiment of the method of the invention is shown in Fig. 2. With the method, at least one light device is controlled to render one or more light effects while a first portion of a book is being rendered, e.g. using speech. A step 101 comprises determining one or more words corresponding to a second portion of the book. The second portion is a later portion in the book than the first portion. A step 103 comprises determining the one or more light effects based on the one or more words. A step 105 comprises controlling the at least one light device to render the one or more light effects while the first portion is being rendered, e.g. using speech. A fixed time period or a fixed amount of text may separate the first portion and the second portion. Alternatively, a variable time period and a variable amount of text may separate the first portion and the second portion. When content is scanned with a variable look-ahead, this look-ahead can be based on delimiters in the audio stream (e.g. long pauses, for example at least one second, or at least two seconds, or at least three seconds, or at least four seconds, or at least five seconds between two consecutive words), on time (e.g. 5 minutes ahead), on interpreting keywords (for example keywords indicating a different scene setting or a different mood), on a (publisher provided) transcription of the content, on differences in intonation during speaking out, or on differences in sound level, for example. The transcription will make segmentation of the content straightforward, not only for chapters, but for paragraphs and even punctuation as well.
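The three steps of Fig. 2 can be sketched in a few lines. This is a hedged illustration only: the helper callables `effect_for_word` and `set_lights` are hypothetical stand-ins for the effect library and the light-device interface.

```python
def render_with_lookahead(transcript_words, current_index, lookahead_words,
                          effect_for_word, set_lights):
    """Sketch of steps 101, 103 and 105 of the method of Fig. 2."""
    # Step 101: determine words in the later, second portion.
    second_portion = transcript_words[
        current_index + 1 : current_index + 1 + lookahead_words]
    # Step 103: determine light effects based on those words.
    effects = [effect_for_word(w) for w in second_portion
               if effect_for_word(w) is not None]
    # Step 105: control the light device(s) while the first portion plays.
    for effect in effects:
        set_lights(effect)
    return effects
```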

Fig. 3 shows a few examples of a second portion of a book corresponding to a certain first portion of the book, i.e. a few examples of determining the look-ahead. Fig. 3 shows five pages 77-81. The first portion 61 on page 77 is currently being rendered. If the first portion 61 is being rendered using the audio output device 8 of the mobile device 1 of Fig. 1, then no additional steps need to be taken to identify the first portion 61. If the first portion 61 is being rendered using an audio output device of another system or by a human reading the first portion 61 out loud, then speech recognition may be used to identify the first portion 61.

If the first portion 61 is being rendered on the display 9 of the mobile device 1 of Fig. 1, then the mobile device 1 knows which page the user is reading. The mobile device 1 may identify the portion that the user is reading more precisely by using an estimation of the user’s reading speed and/or by using a camera, e.g. of the mobile device 1, to detect which part of the display 9 the user is looking at, e.g. using eye tracking. If a look-ahead of five pages is used, then the one or more words may be determined from the second portion 65 on page 81.
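The reading-speed estimation mentioned above can be sketched as a simple extrapolation; the function and its parameters are assumptions for illustration, not part of the application.

```python
def estimated_position(start_word_index, words_per_minute, elapsed_seconds):
    """Estimate which word the user has reached on the current page,
    extrapolating from an assumed reading speed and the time elapsed
    since the page was opened."""
    words_read = int(words_per_minute / 60.0 * elapsed_seconds)
    return start_word_index + words_read
```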

Since page 81 is part of another chapter than page 77, it is beneficial to use a step 101 which comprises identifying an end of a section in which the first portion is located and determining the one or more words from within the section. The end of a section may be identified based on a change of the context (based on keyword detection) or a start of the next segment/chapter, for example. For example, if the section (in this case a chapter) ends before the five-page look-ahead, the one or more words may be determined from the last paragraph of the same section, which is second portion 64 on page 79 in the example of Fig. 3. The content may be scanned for keywords conforming to predetermined lighting effects, such as lightning, fireplace, sunset, or alarm clock. Based on the distance to the keyword (distance e.g. expressed in time, determined by both the reading or delivering speed and the number of words), and the keyword itself, the effect will be triggered either ahead of time, or at precisely the time the keyword is narrated. If the first portion 61 does not comprise a relevant keyword and the next relevant keyword is found in portion 63 on page 78, this keyword may be used to determine one or more light effects and the scanning may pause until the next portion of the book is rendered.
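The distance-based trigger timing described above might be computed as follows; this is a sketch under the assumption that distance is expressed as a word count and delivery speed as words per minute.

```python
def trigger_delay(words_until_keyword, words_per_minute, lead_seconds=0.0):
    """Estimate when a keyword will be narrated, given the delivery
    speed, and return how many seconds to wait before triggering its
    effect. A positive lead_seconds fires the effect ahead of the
    narration; the delay is clamped at zero."""
    seconds_until_keyword = words_until_keyword / words_per_minute * 60.0
    return max(0.0, seconds_until_keyword - lead_seconds)
```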

The duration of a light effect of the one or more light effects may be determined based on a remaining duration of a section in which the first portion 61 is located and/or based on a type of the light effect, for example.
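As a minimal sketch of this duration determination, the effect duration could be capped both by the remaining duration of the section and by a per-type limit. The specific caps below are illustrative assumptions, not values from this description.

```python
def effect_duration(remaining_section_seconds, effect_type):
    """Determine a light effect's duration from the remaining duration of
    the section and the effect type. Event effects are kept short while a
    mood effect may persist for the rest of the section; the numeric caps
    are assumed for illustration."""
    caps = {"event": 5.0, "mood": float("inf")}
    return min(remaining_section_seconds, caps.get(effect_type, 10.0))
```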

The one or more words may describe an event as described above (e.g. lightning, fireplace, sunset, or alarm clock). Furthermore, an overall light ambiance may be determined based on the assessment of the mood of the content, where any kind of additional effects such as those mentioned above can be rendered on top of the “mood ambiance”. The mood of the content can be assessed by scanning for specific keywords that describe emotions (happy, sad, etc.) and actions (running, lying down, etc.). A light effect determined from an event-related word preferably has a shorter duration than a light effect determined from a mood-related word, especially if the light effects overlap in time.
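The mood assessment by keyword scanning could be sketched as below, purely for illustration. The mapping from emotion/action keywords to ambiance settings is an assumption; event effects determined elsewhere would be rendered on top of the returned ambiance.

```python
# assumed mapping from emotion/action keywords to an ambiance setting
MOOD_KEYWORDS = {
    "happy": "warm bright",
    "sad": "dim blue",
    "running": "dynamic",
    "lying": "calm",
}

def assess_mood(words):
    """Assess the overall mood of a passage by counting emotion/action
    keywords and returning the ambiance for the most frequent one, or None
    if no mood keyword occurs."""
    counts = {}
    for w in words:
        w = w.lower().strip('.,!?";:')
        if w in MOOD_KEYWORDS:
            counts[w] = counts.get(w, 0) + 1
    if not counts:
        return None
    best = max(counts, key=counts.get)
    return MOOD_KEYWORDS[best]
```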

It may not only be possible to use the method of Fig. 2 for books whose speech is rendered by the system that determines the light effects, but it may also be possible to apply the same for a text book that is being read out loud (e.g. to children), for example using a voice smart assistant: (1) before starting to read the book, the reader might give a command to the voice smart assistant (e.g. Amazon Echo) to “enhance the reading with light”; (2) the voice system, by listening to the first sentence(s) that are read out loud, could identify the book, download the transcript and on the fly create a light script/light effects; (3) based on the pace of reading it could then trigger the effects in the same way as described above. Additionally, the system might detect the intonation with which a specific sentence is being read and use it to alter or fine tune the light effect. For example, if, based on the transcript, the system is ready to activate a fire effect and the reader whispers, then the system will create a very quiet and dimmed fire effect, and if the reader then suddenly raises his voice, the system will increase the intensity of the same fire effect.
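The intonation-based fine tuning could be sketched as a mapping from the reader's measured voice level to the effect intensity. The RMS thresholds and scaling factors below are illustrative assumptions, not values from this description.

```python
def adjust_intensity(base_intensity, rms_level, whisper_rms=0.05, loud_rms=0.5):
    """Scale a light effect's intensity with the reader's voice level, so a
    whispered sentence yields a dimmed effect and a raised voice intensifies
    it. The RMS thresholds are assumed for illustration."""
    if rms_level <= whisper_rms:
        # whisper: very quiet, dimmed effect
        return base_intensity * 0.2
    if rms_level >= loud_rms:
        # raised voice: intensify, clamped to full brightness
        return min(1.0, base_intensity * 1.5)
    # in between: interpolate linearly between dimmed and nominal intensity
    frac = (rms_level - whisper_rms) / (loud_rms - whisper_rms)
    return base_intensity * (0.2 + 0.8 * frac)
```

Called continuously while the sentence is read, this would let an already active fire effect track the reader's voice, as in the whisper-then-raised-voice example above.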

Fig. 4 depicts a block diagram illustrating an exemplary data processing system 300. As shown in Fig. 4, the data processing system 300 may include at least one processor 302 coupled to memory elements 304 through a system bus 306. As such, the data processing system may store program code within memory elements 304. Further, the processor 302 may execute the program code accessed from the memory elements 304 via the system bus 306. In one aspect, the data processing system may be implemented as a computer that is suitable for storing and/or executing program code. It should be appreciated, however, that the data processing system 300 may be implemented in the form of any system including a processor and a memory that can perform the functions described within this specification.

The memory elements 304 may include one or more physical memory devices such as, for example, local memory 308 and one or more bulk storage devices 310. The local memory may refer to random access memory or other non-persistent memory device(s) generally used during actual execution of the program code. A bulk storage device may be implemented as a hard drive or other persistent data storage device. The processing system 300 may also include one or more cache memories (not shown) that provide temporary storage of at least some program code in order to reduce the number of times program code must be retrieved from the bulk storage device 310 during execution. The processing system 300 may also be able to use memory elements of another processing system, e.g. if the processing system 300 is part of a cloud-computing platform.

Input/output (I/O) devices depicted as an input device 312 and an output device 314 optionally can be coupled to the data processing system. Examples of input devices may include, but are not limited to, a keyboard, a pointing device such as a mouse, a microphone (e.g. for voice and/or speech recognition), or the like. Examples of output devices may include, but are not limited to, a monitor or a display, speakers, or the like. Input and/or output devices may be coupled to the data processing system either directly or through intervening I/O controllers.

In an embodiment, the input and the output devices may be implemented as a combined input/output device (illustrated in Fig. 4 with a dashed line surrounding the input device 312 and the output device 314). An example of such a combined device is a touch sensitive display, also sometimes referred to as a “touch screen display” or simply “touch screen”. In such an embodiment, input to the device may be provided by a movement of a physical object, such as e.g. a stylus or a finger of a user, on or near the touch screen display.

A network adapter 316 may also be coupled to the data processing system to enable it to become coupled to other systems, computer systems, remote network devices, and/or remote storage devices through intervening private or public networks. The network adapter may comprise a data receiver for receiving data that is transmitted by said systems, devices and/or networks to the data processing system 300, and a data transmitter for transmitting data from the data processing system 300 to said systems, devices and/or networks. Modems, cable modems, and Ethernet cards are examples of the different types of network adapter that may be used with the data processing system 300.

As pictured in Fig. 4, the memory elements 304 may store an application 318. In various embodiments, the application 318 may be stored in the local memory 308, the one or more bulk storage devices 310, or separate from the local memory and the bulk storage devices. It should be appreciated that the data processing system 300 may further execute an operating system (not shown in Fig. 4) that can facilitate execution of the application 318. The application 318, being implemented in the form of executable program code, can be executed by the data processing system 300, e.g., by the processor 302. Responsive to executing the application, the data processing system 300 may be configured to perform one or more operations or method steps described herein.

Various embodiments of the invention may be implemented as a program product for use with a computer system, where the program(s) of the program product define functions of the embodiments (including the methods described herein). In one embodiment, the program(s) can be contained on a variety of non-transitory computer-readable storage media, where, as used herein, the expression “non-transitory computer readable storage media” comprises all computer-readable media, with the sole exception being a transitory, propagating signal. In another embodiment, the program(s) can be contained on a variety of transitory computer-readable storage media. Illustrative computer-readable storage media include, but are not limited to: (i) non-writable storage media (e.g., read-only memory devices within a computer such as CD-ROM disks readable by a CD-ROM drive, ROM chips or any type of solid-state non-volatile semiconductor memory) on which information is permanently stored; and (ii) writable storage media (e.g., flash memory, floppy disks within a diskette drive or hard-disk drive or any type of solid-state random-access semiconductor memory) on which alterable information is stored. The computer program may be run on the processor 302 described herein.

The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.

The corresponding structures, materials, acts, and equivalents of all means or step plus function elements in the claims below are intended to include any structure, material, or act for performing the function in combination with other claimed elements as specifically claimed. The description of embodiments of the present invention has been presented for purposes of illustration, but is not intended to be exhaustive or limited to the implementations in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the present invention. The embodiments were chosen and described in order to best explain the principles and some practical applications of the present invention, and to enable others of ordinary skill in the art to understand the present invention for various embodiments with various modifications as are suited to the particular use contemplated.