

Title:
SYSTEMS, METHODS, AND DEVICES FOR GENERATION OF NOTIFICATION SOUNDS
Document Type and Number:
WIPO Patent Application WO/2017/045696
Kind Code:
A1
Abstract:
A computer-implemented method includes selecting a sound element from a sound group associated with an event type, said sound group comprising a plurality of sound elements, wherein each sound element corresponds to a sound that can be rendered by a sound rendering device; and playing the selected sound element. The method is operable to produce ringtones or alerts on computer devices, mobile phones, and smart phones.

Inventors:
DAY AARON (DE)
Application Number:
PCT/EP2015/070940
Publication Date:
March 23, 2017
Filing Date:
September 14, 2015
Assignee:
WIRE SWISS GMBH (CH)
International Classes:
H04M1/724; H04M1/72454; H04M19/04; H04M1/72442
Foreign References:
US20040204146A12004-10-14
US20050107075A12005-05-19
US20060060069A12006-03-23
US20060130636A12006-06-22
US201462039979P2014-08-21
Attorney, Agent or Firm:
BOXALL, Sarah (GB)
Claims:
WHAT IS CLAIMED:

1. A computer-implemented method, implemented by hardware in combination with software, the method comprising:

(A) selecting a sound element from a sound group associated with an event type, said sound group comprising a plurality of sound elements, wherein each sound element corresponds to a sound that can be rendered by a sound rendering device; and

(B) playing the selected sound element.

2. The method of claim 1 wherein said selecting in (A) is performed when an event of the event type occurs.

3. The method of claim 1 wherein the selecting in (A) is performed in advance of an event of the event type occurring.

4. The method of any one of the preceding claims wherein acts (A) and (B) are repeated.

5. The method of claim 4 wherein the selecting in (A) is associated with a current event, and wherein the selecting in (A) attempts to avoid selection of a previously selected sound element for the current event.

6. The method of any one of the preceding claims wherein the sound in or corresponding to at least one sound element comprises a pre-recorded sound.

7. The method of any one of the preceding claims wherein the sound in or corresponding to at least one sound element comprises a set of parameters that control a synthesis engine that outputs pulse coded modulation (PCM) data.

8. The method of claim 7 wherein the PCM data comprise MIDI (Musical Instrument Digital Interface) data for parametrically controlling a software synthesizer.

9. The method of any one of the preceding claims wherein the sound in or corresponding to at least one sound element comprises a loop group.

10. The method of any one of the preceding claims wherein the selecting of the sound element in (A) is based on a modulation function of one or more factors.

11. The method of any one of the preceding claims wherein the playing of the particular sound element in (B) is based on a modulation function of one or more factors.

12. The method of any one of claims 10-11 wherein the one or more factors comprise a static factor.

13. The method of any one of claims 10-11 wherein the one or more factors comprise a dynamic factor.

14. The method of any one of the preceding claims wherein the selecting of a sound element in (A) is based on a random function applied to a parameter within the sound group.

15. The method of any one of the preceding claims wherein the selecting in (A) is performed offline.

16. The method of any one of the preceding claims wherein the selecting in (A) is performed when a notification sound is needed.

17. The method of claim 13 wherein the selecting in (A) is performed in substantially real-time when said notification sound is needed.

18. The method of any one of the preceding claims wherein said method is implemented on a device.

19. The method of claim 18 wherein said device is selected from the group comprising: computer devices, mobile phones, and smart phones.

20. The method of claim 15 or 16 wherein said selected sound element corresponds to a ringtone or an alert.

21. The method as in any one of the preceding claims wherein there are multiple sound groups associated with corresponding multiple event types, each sound group comprising a corresponding plurality of sound elements.

22. A system comprising:

(a) hardware including memory and at least one processor, and

(b) one or more mechanisms running on said hardware, wherein said one or more mechanisms are configured to: perform the method of any one of claims 1-21.

23. A computer program product having computer readable instructions stored on non-transitory computer readable media, the computer readable instructions including instructions for implementing a computer-implemented method, said method operable on one or more devices comprising hardware including memory and at least one processor and running one or more services on said hardware, said method comprising: the method of any one of claims 1-21.

24. A device comprising:

(a) hardware including memory and at least one processor, and

(b) one or more mechanisms running on said hardware, wherein said one or more mechanisms are configured to: perform the method of any one of claims 1-21.

25. The device of claim 24 wherein said device is selected from the group comprising: computer devices, mobile phones, and smart phones.

Description:
SYSTEMS, METHODS, AND DEVICES FOR GENERATION OF NOTIFICATION SOUNDS

COPYRIGHT STATEMENT

[0001] This patent document contains material subject to copyright protection. The copyright owner has no objection to the reproduction of this patent document or any related materials in the files of the United States Patent and Trademark Office, but otherwise reserves all copyrights whatsoever.

RELATED APPLICATION

[0002] This application is related to U.S. Provisional Patent Application No. 62/039,979, filed August 21, 2014, titled "Systems, Methods, And Devices For Generation Of Notification Sounds," the entire contents of which are hereby fully incorporated herein by reference for all purposes, and which is attached as Appendix A hereto.

FIELD OF THE INVENTION

[0003] This invention relates to sound generation, and, more particularly, to automatic generation of notification sounds.

BACKGROUND & OVERVIEW

[0004] Notification sounds (e.g., ringing phones, computer alerts, incoming mail alerts, etc.) have become ubiquitous - to the point where they are often missed or ignored. Getting someone's attention with sound can be done in many ways. For example, the use of loud sounds with many harmonics and strong resonant frequencies (a car horn or bright bell) is an effective method.

However, unless the use-case is mission critical (e.g., a warning for a pilot about to land that the landing gear has not been deployed), such methods often serve primarily to annoy the user. This approach can ruin the user experience of high frequency use-cases such as, e.g., incoming email or message alerts, action confirmations, etc.

[0005] Lowering the volume of an audio notification, however, is not necessarily the solution. If a notification sound (e.g., an SMS or keypress or other sound) is played too quietly then it might blend into the background noise.

[0006] It is desirable, and an object of this invention, to provide systems, methods, and devices that produce audio notifications that are not repetitious.

BRIEF DESCRIPTION OF THE DRAWINGS

[0007] Other objects, features, and characteristics of the present invention as well as the methods of operation and functions of the related elements of structure, and the combination of parts and economies of manufacture, will become more apparent upon consideration of the following description and the appended claims with reference to the accompanying drawings, all of which form a part of this specification.

[0008] FIG. 1 depicts an overview of a device according to exemplary embodiments hereof;

[0009] FIG. 2(a) shows aspects of a data structure used by the device of FIG. 1, according to exemplary embodiments hereof;

[0010] FIG. 2(b) shows exemplary sound groups associated with particular events, according to exemplary embodiments hereof;

[0011] FIG. 3 shows exemplary processing in the system of FIG. 1, according to exemplary embodiments hereof;

[0012] FIG. 4 depicts an overview of a device according to exemplary embodiments hereof; and

[0013] FIG. 5 is a schematic diagram of a computer system.

DETAILED DESCRIPTION OF THE PRESENTLY PREFERRED EXEMPLARY EMBODIMENTS

GLOSSARY AND ABBREVIATIONS

[0014] As used herein, unless used otherwise, the following terms or abbreviations have the following meanings:

[0015] AIFF means Audio Interchange File Format

[0016] MIDI means Musical Instrument Digital Interface;

[0017] PCM means pulse coded modulation; and

[0018] a "mechanism" refers to any device(s), process(es), routine(s), service(s), or combination thereof. A mechanism may be implemented in hardware, software, firmware, using a special-purpose device, or any combination thereof. A mechanism may be integrated into a single device or it may be distributed over multiple devices. The various components of a mechanism may be co-located or distributed. The mechanism may be formed from other mechanisms. In general, as used herein, the term "mechanism" may thus be considered to be shorthand for the term device(s) and/or process(es) and/or service(s).

BACKGROUND

[0019] The inventor has realized that, because humans tend to ignore repetition and respond to change, changing the sound of an audio notification each time it plays (or while it plays) leverages our capacity to notice change while keeping the overall sound pressure level of a ringtone or other alert low.

[0020] As used herein an "audio notification" refers to any sound or combination of sounds that is/are/may be used to try to notify someone of the occurrence or non-occurrence of an event. The event may be any event and any kind of event, including, e.g., an incoming phone call, arrival of an email, a warning, a confirmation, or the like. The event may be an event caused or generated by the user receiving the notification (e.g., a key press, a camera sound, etc.) or it may be caused by an action taken by another (e.g., an incoming phone call, a text message, etc.). An audio notification may be used alone or in conjunction with other notifications to the person. For example, an audio notification may be combined with a visual notification and/or a vibration notification. An audio notification may be rendered by one or more devices using general or special hardware on the device(s). An audio notification may be stored on one or more devices and/or generated, in whole or in part, on the fly, in substantially real time.

[0021] In order to do this we need a system that plays back some number of sound files (e.g., PCM data) or other files that contain parametric control data (e.g., MIDI controlling software sound synthesis) simultaneously as audio tracks that are then mixed and played at a physical output or saved to a file for later use. Each of these files is organized into groups of files. An implementation of this might contain groups of files with shared qualities, e.g., different notes that are fragments of a given chord; other implementations might not. Another implementation might have each group of sound files start with the same pattern each time, followed by variations.

DESCRIPTION

[0022] As shown in FIG. 1, a device 100 according to exemplary embodiments hereof includes one or more sound rendering devices 102 that can render sounds 104 stored on the device. The term "play" is also used herein to refer to the process of rendering a sound. The device may be any kind of device, including, e.g., a computer device, a mobile phone such as a smart phone or the like, etc. The sounds 104 may comprise one or more sound files. Those of ordinary skill in the art will realize and appreciate, upon reading this description, that the format of a sound file will depend, at least in part, on the type(s) of sound rendering device(s) 102. For example, a sound file may be or comprise a set of parameters that control a synthesis engine that outputs PCM data, e.g., MIDI data parametrically controlling a software synthesizer. In some aspects, a sound may be considered to be a sound file or parametric control data controlling a software synthesis engine. A sound file preferably uses a known format, e.g., MP3, WAV, AIFF, or the like. Although the term "file" is used to describe a sound file, it should be appreciated that sound files need not be organized or structured according to any file system or underlying operating system.

[0023] The sounds 104 on a device are preferably organized into one or more sound groups. FIG. 1 shows the sound files organized into M sound groups for some number M (denoted "Sound Group 1," "Sound Group 2," ... "Sound Group M" in the drawing). Each sound group contains one or more sound files. As shown in FIG. 1, Sound Group 1 comprises P sound files, for some number P (denoted S1,1, S1,2 ... S1,P in the drawing). It should be appreciated that the sound groups do not all have to have the same number of sound files and the sound files need not all be of the same format or length.

[0024] FIG. 2(a) shows an exemplary organization of sounds 104 on a device 100 according to exemplary embodiments hereof. As shown in the drawing in FIG. 2(a), sounds 104 includes M sound groups, with sound group 1 having P sound files (denoted Sound1,1, Sound1,2 ... Sound1,P in the drawing), sound group 2 having Q sound files (denoted Sound2,1, Sound2,2 ... Sound2,Q in the drawing) ... and sound group M having N sound files (denoted SoundM,1, SoundM,2 ... SoundM,N in the drawing). As noted above, there is no requirement that P=Q=N, although some or all of the sound groups may have the same number of sound files in some cases.

[0025] Each sound file has a length (L) corresponding to the duration of the actual sound represented by the sound file. There is no requirement that the sound files have the same length, although in some implementations the sound files in some sound groups may have the same length.
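The organization described above (groups of sound files, each with its own length) might be sketched as a simple data structure. This is an illustrative sketch only; the `SoundElement` class, field names, and file names are hypothetical and do not appear in the specification.

```python
from dataclasses import dataclass

@dataclass
class SoundElement:
    """One sound file (or parametric control data) with its duration."""
    name: str        # identifier of the sound file (hypothetical naming)
    length_s: float  # the length L of the rendered sound, in seconds

# Groups need not share a size, and elements need not share a format or length.
sounds = {
    "new message": [SoundElement("new_message_1.wav", 0.40),
                    SoundElement("new_message_2.wav", 0.65)],
    "error":       [SoundElement("error_1.wav", 0.30)],
    "camera":      [SoundElement("camera_1.wav", 0.20),
                    SoundElement("camera_2.wav", 0.20),
                    SoundElement("camera_3.wav", 0.25)],
}
```

Note that, consistent with the text, the groups deliberately differ in both size and element lengths.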

[0026] Sound files (sounds) may be generated on the fly or made beforehand, e.g., with commercially available software (such as Ableton Live, Avid Pro-Tools, Apple Logic, etc.). There is no requirement that the sound files on any particular device or in any particular sound group be made or generated in the same way.

[0027] A presently preferred implementation of the sound rendering device(s) 102 uses playback and modulation of PCM data. In some implementations the sound rendering device(s) 102 may use MIDI control of a software synthesizer that responds to parametric input. Other variations may combine synthesis and playback of PCM data.

[0028] In exemplary embodiments hereof the sound groups may correspond to or be associated with events or types of events that may occur on or be associated with the devices. Thus the sound files in a sound group may be used for notifications associated with events or types of events. For example, if the device is a telephone, then one or more sound groups may be associated with ringtones or the like used by the telephone. It should be appreciated that there may be more than one sound group associated with each type of event. Example events for a smartphone or computer device include incoming phone calls, incoming or outgoing text messages, error messages, key presses, powering on/off, etc.

[0029] For example, FIG. 2(b) shows exemplary sound groups associated with particular events ("new message", "error", "confirmation", "camera"). In this example the sound group "new message" has P sound files, the sound group "error" has Q sound files, the sound group "confirmation" has J sound files, and the sound group "camera" has N sound files. As with all examples herein, the sounds shown in FIG. 2(b) are provided only by way of example and are not intended to limit the scope hereof in any way. In an exemplary operation of a system with the sounds shown in FIG. 2(b), with reference to the exemplary flow chart in FIG. 3, when a sound is needed for a "new message" event (as determined by "determine event type" at 302, FIG. 3), the "new message" sound group is selected (at 304). The device then selects a sound from the sound group (at 306), e.g., using a random function. Suppose, e.g., that the first sound selected (at 306) is the sound "new message 2". The device then renders the selected sound (in this case "new message 2") (at 308). The device then determines (at 310) whether or not to terminate the sound. The sound may be terminated, e.g., because a user of the device has responded to an event or has taken some other action to cause the sound to terminate. The device may receive an externally generated signal to terminate the sound. If the sound is not to terminate (as determined at 310), then processing continues with the selection of another sound from the sound group (at 306). Suppose, e.g., that the sound is not to terminate and that the next selected sound is the sound "new message j" (for some value j in the range 1 to P). The next sound is played or rendered (at 308), and processing continues (at 310) to determine whether or not to terminate. Processing may continue until some event occurs to cause termination. In some cases the sound associated with an event may play for a preset duration, a preset number of times, or indefinitely (until stopped by a signal or event occurring).
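The select/render/terminate flow of FIG. 3 (302 through 310) could be sketched as follows. The function and parameter names (`notify`, `should_terminate`, `render`) are illustrative assumptions, not names from the specification, and the renderer here is a stub.

```python
import random

def notify(event_type, sound_groups, should_terminate, render):
    """Play sounds for an event until termination is signaled (FIG. 3 flow)."""
    group = sound_groups[event_type]  # 302/304: determine event type, select group
    while True:
        sound = random.choice(group)  # 306: select a sound from the group
        render(sound)                 # 308: render (play) the selected sound
        if should_terminate():        # 310: stop on user action or external signal
            break

# Demo with a stub renderer: terminate once three sounds have played.
played = []
notify("new message", {"new message": ["new message 1", "new message 2"]},
       should_terminate=lambda: len(played) >= 3,
       render=played.append)
```

In a real device the `should_terminate` predicate would reflect, e.g., the user answering the phone or an external termination signal.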

[0030] It should be understood that if/when the sound selection function is random (or pseudorandom), then the same sound file may be selected multiple times in succession. If this outcome is undesirable then the system may maintain a history of recently played sound files in order to enforce non-repetition of sound files within certain limits. For example, a device may try to avoid repetition of sound files more than every k plays, for some number k. In other examples, the device may require, for certain sound groups, that certain sound files are repeated at least every j plays, for some number j. For example, for a "new message" event, the system may require that "new message 1" be played first and at least once every three plays. Different and/or other rules may be imposed on the sound selection within sound groups.
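One way to enforce the "no repetition more than every k plays" rule described above is a short history of recent selections. This is a hedged sketch under the assumption k = 3; the names are hypothetical.

```python
import random
from collections import deque

def select_nonrepeating(group, history):
    """Select at random while avoiding the sounds in `history`, so that no
    sound repeats within k consecutive plays when history holds k-1 entries."""
    candidates = [s for s in group if s not in history]
    choice = random.choice(candidates)
    history.append(choice)  # deque(maxlen=k-1) silently drops the oldest entry
    return choice

# k = 3: remember the two most recent plays and avoid them.
history = deque(maxlen=2)
group = ["msg 1", "msg 2", "msg 3", "msg 4"]
plays = [select_nonrepeating(group, history) for _ in range(20)]
```

Every window of three consecutive plays then contains three distinct sounds; the group must hold at least k sounds for candidates to remain available.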

Example

[0031] The following table summarizes various events and sounds played in real time on an exemplary device according to exemplary embodiments hereof.

Sound Loops

[0032] A sound file may be or comprise a sound loop as described in co-owned and copending U.S. patent application no. 62/039,979, filed August 21, 2014, the entire contents of which have been fully incorporated herein by reference for all purposes, and which is incorporated herein as Appendix A.

Sound Selector

[0033] The device 100 includes a sound selector (or sound selector mechanism) 106 that is constructed and adapted and operates to select one or more sound files from sounds 104 to be rendered by sound rendering device(s) 102. The sound selector 106 may be invoked by other mechanisms (not shown) on the device 100 when the device needs to render a sound.

[0034] Exemplary operation of embodiments of the sound selector 106 is described in greater detail with respect to the flowchart in FIG. 3.

[0035] As shown in FIG. 2(a), in some cases the selection of a sound group may depend, at least in part, on the type of event for which the sound is to be played. Accordingly, when a sound is required to be played on the device in connection or association with an event, the sound selector 106 determines the type of the event (at 302). Based at least in part on the event type (determined at 302), the sound selector 106 then selects a sound group (at 304). The sound selector then selects a sound from the selected sound group (at 306) and the selected sound is then played (at 308) by the sound rendering device(s) 102.

[0036] The selection of a sound from the selected sound group (at 306) may be based, e.g., on a function referred to herein as a selection function. The selection function may be a function that randomly selects a sound file from the list of files in the selected sound group. For example, for sound group 2 in FIG. 2(a), the selection function may randomly generate a number in the range 1 to Q (where Q is the number of sound files in the sound group). The selection function may use or comprise a pseudorandom number generator that outputs random values over time. The function's distribution may be implemented using a simple function (e.g., Gaussian) or a more complex implementation (e.g., a discrete-time Markov chain).

[0037] In some cases the selection function may sequentially select the sound files in the selected sound group, it being understood that in these cases the device will preferably maintain a record of the previous sound file played for each sound group.
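A minimal sketch of the sequential alternative, including the per-group record of the previously played sound file that the paragraph above calls for; the class and method names are hypothetical.

```python
class SequentialSelector:
    """Select sounds from each group in order, keeping a per-group record
    of the last sound index played (as the text above suggests)."""
    def __init__(self, sound_groups):
        self._groups = sound_groups
        self._last = {name: -1 for name in sound_groups}  # last index played

    def select(self, group_name):
        group = self._groups[group_name]
        i = (self._last[group_name] + 1) % len(group)  # wrap around the group
        self._last[group_name] = i
        return group[i]

sel = SequentialSelector({"error": ["error 1", "error 2"]})
```

Successive calls for the same group cycle through its sounds in order and wrap around.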

[0038] In general the selection function may make a selection of a sound file from the sound group based on one or more of: information about the user of the device, information about the device (e.g., the type of device, etc.), external information (e.g., time of day, temperature, location, etc.). Those of ordinary skill in the art will realize and appreciate, upon reading this description, that these inputs to the sound selection device are merely exemplary and are not intended to be limiting in any way.

[0039] In some cases the device 100 may need to continue playing a sound after a selected sound has completed playing. This may occur, e.g., when the sound is being played as a notification that requires acknowledgment (e.g., a notification of an incoming phone call and the like). In these cases, as shown in FIG. 3, after the sound has been played (at 308), the device determines if the sound should terminate (at 310). If the sound should not terminate then the device may select and render another sound from the selected sound group (at 306, 308). This process may be repeated a fixed number of times or until the device indicates that the sound should terminate (e.g., when the caller hangs up or when the phone is answered).

MODULATION FRAMEWORK

[0040] As described above, a sound group may be played using some function (e.g., a random function) that selects the sound elements to play. However, in some implementations the selection of sound groups and/or sound elements may be affected or modulated by other factors (e.g., static or dynamic factors).

[0041] In some exemplary embodiments hereof, as shown in FIG. 4, the sound selector mechanism 106' and/or the sound rendering 102' may be affected or influenced by one or more modulators 108.

[0042] Thus, e.g., the selection of a sound (or a sound group) by sound selector 106' may be affected by one or more factors (e.g., values) provided by modulator(s) 108. Similarly, the rendering of a selected sound by sound rendering device(s) 102' may be affected by one or more factors (e.g., values) provided by modulator(s) 108.

[0043] Modulator(s) 108 may be used to select sounds based on static and/or dynamic information, and the information may be determined or derived from information external to the device, information from another device, or any other source. For example, a modulator 108 may provide a value based on one or more of: the time of day, day or week, date, current temperature, current weather, identity of device user, identity of incoming caller, identity of incoming message sender.

[0044] In some aspects hereof, the concept of modulation uses a so-called "source" and a so-called "target," where a source may be or comprise any kind of information at any resolution, e.g., a static variable (e.g., a given day of the week) or a time-varying function (e.g., the temperature between two different times of the day). A source may also comprise a pseudorandom number generator.

[0045] A source may be mapped to a target using any kind of function, including a linear mapping and an exponential mapping. Thus, e.g., a source may map to a target in a manner such that any change in the source is associated with a change in the target, where the change may be linear, exponential, or any other function. A source may, e.g., be an evenly weighted random function applied to some parameter within a given sound group (e.g., volume, individual sound, order of sounds within a group, etc.).
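Linear and exponential source-to-target mappings of the kind described might look like the following sketch. It assumes a source normalized to [0, 1]; the function names and the curve constant `k` are illustrative choices, not from the specification.

```python
import math

def map_linear(source, lo, hi):
    """Linear mapping: equal source changes produce equal target changes."""
    return lo + source * (hi - lo)

def map_exponential(source, lo, hi, k=4.0):
    """Exponential mapping: changes near source = 1 move the target far more
    than changes near source = 0 (often useful for volume-like targets)."""
    curve = (math.exp(k * source) - 1.0) / (math.exp(k) - 1.0)
    return lo + curve * (hi - lo)
```

Both mappings agree at the endpoints of the source range but differ in between, so the same source can drive a target with very different perceived behavior.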

[0046] A target may be or comprise any parameter of a system, sound group, or sound (e.g., volume, sound group number, sound number, cutoff frequency of a low-pass filter applied to individual sound group output, summed output (a summed mix of each sound group), etc.).

[0047] A source may affect multiple targets and vice versa.

Example 1:

[0048] Some random function (source) determines which sound within a sound group (target) plays. As used herein, "some random function" may refer to a pseudorandom number generator that outputs random values over time. The function's distribution may be simple (e.g., Gaussian) or more complex (e.g., a discrete-time Markov chain), and the system is not limited by the function's distribution.

Example 2:

[0049] Some random function with persistence (sources) determines the amplitude of the next sound (target) to play. As used herein, the term "persistence" generally refers to a value or values sampled from the previous system's, sound group's, or sound's parametric state, e.g., the last value assigned to the amplitude of a given sound.
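A hedged sketch of Example 2: a random amplitude source with persistence, where each new value is blended with the last value assigned (the sampled parametric state). The class name, blend weight, and starting value are illustrative assumptions.

```python
import random

class PersistentAmplitude:
    """Random amplitude with persistence: each new value is blended with the
    previously assigned value, limiting how far the amplitude can jump."""
    def __init__(self, weight=0.7, start=0.5):
        self.weight = weight  # how strongly the previous value persists
        self.value = start    # last value assigned to the amplitude

    def next(self):
        # A convex blend keeps the value in [0, 1] and bounds each step
        # by (1 - weight), here 0.3.
        self.value = self.weight * self.value + (1.0 - self.weight) * random.random()
        return self.value
```

With this choice the amplitude varies randomly but never changes by more than 0.3 between consecutive sounds, giving variation without abrupt loudness jumps.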

Example 3:

[0050] Some random function with external modulation (sources) affects some aspect of (target), where, as used herein, "external modulation source" refers to a variable or time-varying function, besides a pseudorandom number generator, that supplies data that can be applied.

Example 4:

[0051] Some random function (source1) with external modulation source (source2) with persistence (source3) affects some aspect of (target1)(target2)(target3).

Example 5:

[0052] Using feedback, some or all of the parametric values at a given time from a system, sound group or sound may be applied to the parametric control of values (same or different) of targets of the consecutive (or parallel, thus providing cross-modulation) system, sound group, or sound.

Example 6:

[0053] A user taps the capacitive touch screen of a device such as a smartphone. A sound associated with the user's tapping is modulated (varied) based on one or more factors such as frequency, velocity, and pressure of the user's tapping. The sound may be rendered on the user's device or on another device (e.g., on a device associated with a different user).

[0054] Thus a user's interactions with one or more input mechanisms of a device may be treated as a modulation source applied to sounds associated with those interactions. Modulation may thus be applied based on a user's interaction (e.g., direct interaction) with a device (such as the frequency of touches and/or the velocity and/or pressure of those touches, e.g., applied to a capacitive touch screen). For example, the harder and faster a user "knocks" or "pings" on a device the more the sound changes. The device effectively becomes a near real-time transducer for interactions that may change the way sounds are represented. Thus, e.g., the user's interaction with a device serves as a modulation source that can be applied to the sounds themselves.

[0055] Thus, e.g., in some aspects the system may provide for automatic generation of and modulation by random or external real-time input such as frequency and/or velocity and/or pressure of touch events. End of Example 6.
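The touch-as-modulation-source idea of Example 6 could be sketched as follows. All names, the normalization constant `max_rate`, the `depth` factor, and the choice of pitch as the target are illustrative assumptions; the specification does not prescribe a particular mapping.

```python
def tap_modulation(taps_per_second, pressure, max_rate=10.0):
    """Map touch interaction to a modulation amount in [0, 1]:
    harder and faster tapping yields a larger change to the sound."""
    rate = min(taps_per_second / max_rate, 1.0)  # normalized tap frequency
    return rate * max(0.0, min(pressure, 1.0))   # scaled by clamped pressure

def modulated_pitch(base_hz, taps_per_second, pressure, depth=0.5):
    """Raise a sound's pitch by up to `depth` (here 50%) of its base frequency."""
    return base_hz * (1.0 + depth * tap_modulation(taps_per_second, pressure))
```

With these choices, idle touch leaves the sound unchanged, while fast, firm tapping shifts the pitch toward its maximum deviation.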

[0056] It should be appreciated that the above list of modulations is merely exemplary, and different and/or other (or no) modulations may be used in some cases.

COMPUTING

[0057] Various mechanisms including the sound rendering device(s) 102 and sound selector 106 may be implemented as specialized devices and/or as programs operating on a computer system, as described herein. Programs that implement such methods (as well as other types of data) may be stored and transmitted using a variety of media (e.g., computer readable media) in a number of manners. Hard-wired circuitry or custom hardware may be used in place of, or in combination with, some or all of the software instructions that can implement the processes of various embodiments. Thus, various combinations of hardware and software may be used instead of software only.

[0058] FIG. 5 is a schematic diagram of a computer system 500 upon which embodiments of the present disclosure may be implemented and carried out.

[0059] According to the present example, the computer system 500 includes a bus 502 (i.e., interconnect), one or more processors 504, one or more communications ports 514, a main memory 506, removable storage media 510, read-only memory 508, and a mass storage 512. Communication port(s) 514 may be connected to one or more networks by way of which the computer system 500 may receive and/or transmit data.

[0060] As used herein, a "processor" means one or more microprocessors, central processing units (CPUs), computing devices, microcontrollers, digital signal processors, or like devices or any combination thereof, regardless of their architecture. An apparatus that performs a process can include, e.g., a processor and those devices such as input devices and output devices that are appropriate to perform the process.

[0061] Processor(s) 504 can be (or include) any known processor, such as, but not limited to, an Intel® Itanium® or Itanium 2® processor(s), AMD® Opteron® or Athlon MP® processor(s), or Motorola® lines of processors, and the like. Processor(s) may include one or more graphical processing units (GPUs) which may be on graphic cards or stand-alone graphic processors.

[0062] Communications port(s) 514 can be any of an RS-232 port for use with a modem based dial-up connection, a 10/100 Ethernet port, a Gigabit port using copper or fiber, or a USB port, and the like. Communications port(s) 514 may be chosen depending on a network such as a Local Area Network (LAN), a Wide Area Network (WAN), a CDN, or any network to which the computer system 500 connects. The computer system 500 may be in communication with peripheral devices (e.g., display screen 516, input device(s) 518) via Input / Output (I/O) port 520. Some or all of the peripheral devices may be integrated into the computer system 500, and the input device(s) 518 may be integrated into the display screen 516 (e.g., in the case of a touch screen).

[0063] Main memory 506 can be Random Access Memory (RAM), or any other dynamic storage device(s) commonly known in the art. Read-only memory 508 can be any static storage device(s) such as Programmable Read-Only Memory (PROM) chips for storing static information such as instructions for processor(s) 504. Mass storage 512 can be used to store information and instructions. For example, hard disks such as the Adaptec® family of Small Computer Serial Interface (SCSI) drives, an optical disc, an array of disks such as Redundant Array of Independent Disks (RAID), such as the Adaptec® family of RAID drives, or any other mass storage devices may be used.

[0064] Bus 502 communicatively couples processor(s) 504 with the other memory, storage and communications blocks. Bus 502 can be a PCI / PCI-X, SCSI, a Universal Serial Bus (USB) based system bus (or other) depending on the storage devices used, and the like. Removable storage media 510 can be any kind of external hard-drives, floppy drives, IOMEGA® Zip Drives, Compact Disc - Read Only Memory (CD-ROM), Compact Disc - Re-Writable (CD-RW), Digital Versatile Disk - Read Only Memory (DVD-ROM), etc.

[0065] Embodiments herein may be provided as one or more computer program products, which may include a machine-readable medium having stored thereon instructions, which may be used to program a computer (or other electronic devices) to perform a process. As used herein, the term "machine -readable medium" refers to any medium, a plurality of the same, or a combination of different media, which participate in providing data (e.g., instructions, data structures) which may be read by a computer, a processor or a like device. Such a medium may take many forms, including but not limited to, non-volatile media, volatile media, and transmission media.

Non-volatile media include, for example, optical or magnetic disks and other persistent memory. Volatile media include dynamic random access memory, which typically constitutes the main memory of the computer. Transmission media include coaxial cables, copper wire, and fiber optics, including the wires that comprise a system bus coupled to the processor. Transmission media may include or convey acoustic waves, light waves, and electromagnetic emissions, such as those generated during radio frequency (RF) and infrared (IR) data communications.

[0066] The machine-readable medium may include, but is not limited to, floppy diskettes, optical discs, CD-ROMs, magneto-optical disks, ROMs, RAMs, erasable programmable read-only memories (EPROMs), electrically erasable programmable read-only memories (EEPROMs), magnetic or optical cards, flash memory, or other types of media/machine-readable media suitable for storing electronic instructions. Moreover, embodiments herein may also be downloaded as a computer program product, wherein the program may be transferred from a remote computer to a requesting computer by way of data signals embodied in a carrier wave or other propagation medium via a communication link (e.g., a modem or network connection).

[0067] Various forms of computer readable media may be involved in carrying data (e.g., sequences of instructions) to a processor. For example, data may be (i) delivered from RAM to a processor; (ii) carried over a wireless transmission medium; (iii) formatted and/or transmitted according to numerous formats, standards, or protocols; and/or (iv) encrypted in any of a variety of ways well known in the art.

[0068] A computer-readable medium can store (in any appropriate format) those program elements that are appropriate to perform the methods.

[0069] As shown, main memory 506 is encoded with application(s) 522 that support(s) the functionality as discussed herein (an application 522 may be an application that provides some or all of the functionality of one or more of the mechanisms described herein). Application(s) 522 (and/or other resources as described herein) can be embodied as software code such as data and/or logic instructions (e.g., code stored in the memory or on another computer readable medium such as a disk) that supports processing functionality according to different embodiments described herein.

[0070] During operation of one embodiment, processor(s) 504 accesses main memory 506 via the use of bus 502 in order to launch, run, execute, interpret or otherwise perform the logic instructions of the application(s) 522. Execution of application(s) 522 produces processing functionality of the service(s) or mechanism(s) related to the application(s). In other words, the process(es) 524 represents one or more portions of the application(s) 522 performing within or upon the processor(s) 504 in the computer system 500.

[0071] It should be noted that, in addition to the process(es) 524 that carry out operations as discussed herein, other embodiments herein include the application 522 itself (i.e., the un-executed or non-performing logic instructions and/or data). The application 522 may be stored on a computer readable medium (e.g., a repository) such as a disk or in an optical medium.

According to other embodiments, the application 522 can also be stored in a memory type system such as in firmware, read only memory (ROM), or, as in this example, as executable code within the main memory 506 (e.g., within Random Access Memory or RAM). For example, application 522 may also be stored in removable storage media 510, read-only memory 508, and/or mass storage device 512.

[0072] Those skilled in the art will understand that the computer system 500 can include other processes and/or software and hardware components, such as an operating system that controls allocation and use of hardware resources.

[0078] As discussed herein, embodiments of the present invention include various steps or operations. A variety of these steps may be performed by hardware components or may be embodied in machine-executable instructions, which may be used to cause a general-purpose or special-purpose processor programmed with the instructions to perform the operations.

Alternatively, the steps may be performed by a combination of hardware, software, and/or firmware. The term "module" refers to a self-contained functional component, which can include hardware, software, firmware or any combination thereof.

[0079] One of ordinary skill in the art will readily appreciate and understand, upon reading this description, that embodiments of an apparatus may include a computer/computing device operable to perform some (but not necessarily all) of the described process.

[0080] Embodiments of a computer-readable medium storing a program or data structure include a computer-readable medium storing a program that, when executed, can cause a processor to perform some (but not necessarily all) of the described process.

[0081] Where a process is described herein, those of skill in the art will appreciate that the process may operate without any user intervention. In another embodiment, the process includes some human intervention (e.g., a step is performed by or with the assistance of a human).

Real time

[0082] Those of ordinary skill in the art will realize and understand, upon reading this description, that, as used herein, the term "real time" means near real time or sufficiently real time. It should be appreciated that there are inherent delays in network-based and computer communication (e.g., based on network traffic and distances), and these delays may cause delays in data reaching various components. Inherent delays in the system do not change the real-time nature of the data. In some cases, the term "real-time data" may refer to data obtained in sufficient time to make the data useful for its intended purpose. Although the term "real time" may be used here, it should be appreciated that the system is not limited by this term or by how much time is actually taken to perform any particular process. In some cases, real-time computation may refer to an online computation, i.e., a computation that produces its answer(s) as data arrive and generally keeps up with continuously arriving data. An "online" computation contrasts with an "offline" or "batch" computation.
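By way of illustration only (and not as part of the claimed subject matter), the notion of an online computation described above can be sketched as a running mean that is updated as each value arrives, without storing or revisiting earlier values:

```python
class OnlineMean:
    """Maintains a running mean, updated incrementally as each value arrives."""

    def __init__(self):
        self.count = 0
        self.mean = 0.0

    def update(self, value):
        # Incremental update: the answer is produced as data arrive,
        # keeping up with a continuous stream rather than batching it.
        self.count += 1
        self.mean += (value - self.mean) / self.count
        return self.mean
```

An offline (batch) computation, by contrast, would collect all values first and compute the mean in a single pass afterward.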

[0083] Although many of the examples presented herein involve specific combinations of method acts or system elements, it should be understood that those acts and those elements may be combined in other ways to accomplish the same objectives. With regard to flowcharts, additional and fewer steps may be taken, and the steps as shown may be combined or further refined to achieve the methods described herein. Acts, elements and features discussed only in connection with one embodiment are not intended to be excluded from a similar role in other embodiments.

[0084] As used herein, whether in the written description or the claims, "plurality" means two or more.

[0085] As used herein, whether in the written description or the claims, the terms "comprising", "including", "having", "containing", "involving", and the like are to be understood to be open-ended, that is, to mean including but not limited to. Only the transitional phrases "consisting of" and "consisting essentially of", respectively, are closed or semi-closed transitional phrases with respect to claims.

[0086] As used herein, "and/or" means that the listed items are alternatives, but the alternatives also include any combination of the listed items.

[0087] As used in this description, the term "portion" means some or all. So, for example, "a portion of X" may include some of "X" or all of "X". In the context of a conversation, the term "portion" means some or all of the conversation.

[0088] As used herein, including in the claims, the phrase "at least some" means "one or more," and includes the case of only one. Thus, e.g., the phrase "at least some ABCs" means "one or more ABCs", and includes the case of only one ABC.

[0089] As used herein, including in the claims, the phrase "based on" means "based in part on" or "based, at least in part, on," and is not exclusive. Thus, e.g., the phrase "based on factor X" means "based in part on factor X" or "based, at least in part, on factor X." Unless specifically stated by use of the word "only", the phrase "based on X" does not mean "based only on X."

[0090] As used herein, including in the claims, the phrase "using" means "using at least," and is not exclusive. Thus, e.g., the phrase "using X" means "using at least X." Unless specifically stated by use of the word "only", the phrase "using X" does not mean "using only X."

[0091] In general, as used herein, including in the claims, unless the word "only" is specifically used in a phrase, it should not be read into that phrase.

[0092] As used herein, including in the claims, the phrase "distinct" means "at least partially distinct." Unless specifically stated, distinct does not mean fully distinct. Thus, e.g., the phrase, "X is distinct from Y" means that "X is at least partially distinct from Y," and does not mean that "X is fully distinct from Y." Thus, as used herein, including in the claims, the phrase "X is distinct from Y" means that X differs from Y in at least some way.

[0093] As used herein, including in the claims, a list may include only one item, and, unless otherwise stated, a list of multiple items need not be ordered in any particular manner. A list may include duplicate items. For example, as used herein, the phrase "a list of XYZs" may include one or more "XYZs".

[0094] It should be appreciated that the terms "first", "second", "third", and so on, if used in the claims, are used to distinguish or identify, and not to show a serial or numerical limitation. Similarly, letter or numerical labels (such as "(a)", "(b)", and the like) are used to help distinguish and/or identify, and not to show any serial or numerical limitation or ordering.

Specifically, the use of ordinal terms such as "first", "second", "third", etc., in the claims to modify a claim element does not by itself connote any priority, precedence, or order of one claim element over another, or the temporal order in which acts of a method are performed; such terms are used merely as labels to distinguish one claim element having a certain name from another element having the same name (but for use of the ordinal term).

[0095] The foregoing is merely illustrative and not limiting, having been presented by way of example only. Although examples have been shown and described, it will be apparent to those having ordinary skill in the art that changes, modifications, and/or alterations may be made.

[0096] Thus are described and provided systems, methods, and devices for producing audio notifications.
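By way of illustration only, the selection of a sound element from a sound group associated with an event type, while attempting to avoid the previously selected element for the current event (cf. claims 1 and 5), might be sketched as follows; the names used here are illustrative and not drawn from the application:

```python
import random


def select_sound(sound_group, previous=None):
    """Select a sound element from the group for an event of a given type,
    attempting to avoid the previously selected element."""
    candidates = [s for s in sound_group if s != previous]
    # Fall back to the full group when avoidance leaves no candidates
    # (e.g., a sound group containing a single element).
    return random.choice(candidates or list(sound_group))
```

In use, the caller would retain the returned element as `previous` for the next event of the same type, and the selected element would then be played on the sound rendering device.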

[0097] While the invention has been described in connection with what is presently considered to be the most practical and preferred embodiments, it is to be understood that the invention is not to be limited to the disclosed embodiment, but on the contrary, is intended to cover various modifications and equivalent arrangements included within the spirit and scope of the appended claims.