Title:
SYSTEM AND METHOD FOR MANAGEMENT AND PRESENTATION OF ALTERNATE MEDIA
Document Type and Number:
WIPO Patent Application WO/2021/222001
Kind Code:
A1
Abstract:
A system and method for the automatic management of the presentation of information from two or more media sources. This automatic management includes the selective viewing of video information on a prescribed screen, screen window or screen configuration, as well as the selective provision of audio information to a particular port or appliance. This management is performed in response to and as a function of consumer preferences, as well as the source, type and content of the video and audio information. The management system may be entirely located within the consumer's residence, or reside in whole or in part in a connected network or cloud. The system can initiate video/audio management in an entirely autonomous manner, or initiate only in response to user input (keypad, graphical user interface, voice, etc.).

Inventors:
DEL SORDO CHRISTOPHER (US)
ELCOCK ALBERT (US)
HARDT CHARLES (US)
Application Number:
PCT/US2021/028697
Publication Date:
November 04, 2021
Filing Date:
April 22, 2021
Assignee:
ARRIS ENTPR LLC (US)
DEL SORDO CHRISTOPHER S (US)
ELCOCK ALBERT FITZGERALD (US)
HARDT CHARLES R (US)
International Classes:
G06F13/00; G06F3/00; G11B27/10; G11B27/34; H04N5/272; H04N5/45; H04N5/781; H04N5/932
Foreign References:
US20140195675A12014-07-10
US20200089469A12020-03-19
US20170332125A12017-11-16
Attorney, Agent or Firm:
MARLEY, Robert P. et al. (US)
Claims:
CLAIMS

1. A media management system comprising: at least one vector adapted to present a primary digital content; a controller, comprising at least one processor and at least one memory, wherein: the memory stores information identifying: a plurality of alternate content suitable for presentation via the at least one vector; an identifier associated with the at least one vector; user preference information, specific to at least one user, indicative of content and vector preferences; and a syntax for interpreting user commands; wherein the at least one processor is adapted to: receive from the at least one user a representation of a command conforming to the stored syntax; identify, based upon the information stored in the memory, at least one alternate content to be presented and the vector for said presentation; and present the identified alternate content upon the identified vector.

2. The system of claim 1 wherein the received command is a voice command.

3. The system of claim 1 wherein the at least one memory is further adapted to store user identification information and the identifying of the at least one alternate content is based, at least in part, upon the stored user identification information.

4. The system of claim 1 wherein the user preference information comprises at least one of the following: content provider information; internet provider information; social media account information; video conference account information; and mobile device information.

5. The system of claim 1 further comprising a graphical user interface adapted for the entry of the command conforming to the stored syntax.

6. The system of claim 1 wherein at least one of the primary content and the alternate content is comprised of both video content and audio content and, wherein, based upon the stored syntax and the received command, the audio content is not presented via the same vector as the video content.

7. The system of claim 1 wherein the at least one processor is further adapted to execute at least one pre-configured routine when identifying, based upon the information stored in the memory, at least one alternate content to be presented and the vector for said presentation.

8. The system of claim 1 wherein, based upon the stored syntax and the received command, the alternate content is presented via the at least one vector concurrently with the primary content.

9. The system of claim 1 wherein the at least one memory is further adapted to store video and audio content.

10. The system of claim 9 wherein the primary content and the alternate content are each comprised of at least one of the following: streaming video; streaming audio; live video; stored digital images; stored digital video; stored digital audio; and a pre-configured routine.

11. A method for managing a media system comprising: at least one vector presenting a primary digital content; a controller, comprising at least one processor; and at least one memory, wherein the memory stores information identifying a plurality of alternate content suitable for presentation via the at least one vector; an identifier associated with the at least one vector; user preference information, specific to at least one user, indicative of content and vector preferences; and a syntax for interpreting user commands; the method comprising the steps of: receiving from the at least one user a representation of a command conforming to the stored syntax; identifying, based upon information stored in the memory, at least one alternate content to be presented and the vector for said presentation; and presenting the identified alternate content upon the identified vector.

12. The method of claim 11 wherein the received command is a voice command.

13. The method of claim 11 wherein the at least one memory is further adapted to store user identification information and the step of identifying the at least one alternate content is based, at least in part, upon the stored user identification information.

14. The method of claim 11 wherein the at least one memory is further adapted to store user identification information and the at least one alternate content is identified based, at least in part, upon the stored user identification information.

15. The method of claim 11 wherein the user preference information comprises at least one of the following: content provider information; internet provider information; social media account information; video conference account information; and mobile device information.

16. The method of claim 11 wherein at least one of the primary content and the alternate content is comprised of both video content and audio content and, wherein, based upon the stored syntax and the received command, the step of presenting comprises the audio content not being presented via the same vector as the video content.

17. The method of claim 11 wherein the step of identifying, based upon information stored in the memory, at least one alternate content to be presented and the vector for said presentation comprises the execution of at least one pre-configured routine.

18. The method of claim 11 wherein, based upon the stored syntax and the received command, the step of presenting the alternate content comprises presenting, via the at least one vector, the alternate content concurrently with the primary content.

19. The method of claim 11 wherein the at least one memory is further adapted to store video and audio content.

20. The method of claim 11 wherein the primary content and the alternate content are each comprised of at least one of the following: streaming video; streaming audio; live video; stored digital images; stored digital video; stored digital audio; and a pre-configured routine.

Description:
SYSTEM AND METHOD FOR MANAGEMENT AND PRESENTATION OF ALTERNATE MEDIA

CROSS-REFERENCE TO RELATED APPLICATION

[0001] This application claims the benefit of U.S. Provisional Patent Application No. 63/017,272, filed April 29, 2020.

BACKGROUND OF THE INVENTION

[0002] The availability of an ever-increasing number and type of video and visual media sources, the increasing size and definition of digital televisions ("DTVs"), and the almost ubiquitous availability of residential high-speed bidirectional data/video communication are combining to change the consumer media viewing experience. In particular, it has become more feasible than ever for consumers to simultaneously access multiple media presentations. Audio and video can be sourced from cable, optical fiber, smartphones, tablets, security devices, video doorbells, computers and digital assistants. Video can be streamed to one or more large format displays (such as DTVs or video monitors), as well as smaller format displays (tablets, smartphones, etc.). Audio can be streamed to headsets and smart speakers. For example, it is not uncommon for a consumer to be able to have independent audio streaming from a media controller to Bluetooth®-enabled headsets, while a different audio and/or video presentation is sent by that controller to an HDMI port. Consumers are also able to view multiple video windows on DTVs using picture-in-picture ("PIP") technology, split-screen or mosaic video presentation formats. All of these allow for more than one media presentation to be rendered for simultaneous viewing.

[0003] The types of visual media available to consumers are also not limited to broadcast or video-on-demand services that are received at or streamed into a residence. Consumers have a wide variety of alternate video and audio sources available to them, including the aforementioned smartphones, tablets, security devices, video doorbells, computers and digital assistants. Consequently, when a consumer is viewing a presentation from a given video source, there may be a host of reasons why that consumer would want, or need, to temporarily access an alternate media (audio and/or video) presentation. It would not be unusual for a consumer viewing broadcast content in a primary window on a DTV to temporarily want to view video from a video doorbell via a secondary PIP window upon the DTV, and to temporarily stream only audio from the video doorbell to the headset of the consumer viewing the DTV (while still rendering audio associated with the broadcast content via HDMI to the DTV). Once the consumer's interaction with the video doorbell has terminated (the individual(s) at the door having either entered the residence or left the vicinity), the consumer would close the PIP window and switch back to the broadcast audio in their headphones.

[0004] It is also a common practice for consumers to switch between two media viewing experiences over a limited time period. For example, when a consumer is watching a given broadcast or streaming program and a commercial break occurs, they often switch to another channel or video source so as to watch something of interest and then switch back to the initial broadcast/streaming program at a time at which they estimate or guess the commercial break has terminated. This usually results in a consumer switching several times back and forth until the commercial is actually over.

[0005] There exists a need for a system and method that provides a consumer with a convenient and automatic means of controlling the automated rendering of multiple sources of video and audio streams based upon the type of video and/or audio as well as predetermined consumer preferences.

[0006] BRIEF SUMMARY OF THE INVENTION

[0007] A system and method for the automatic management of the presentation of information from two or more media sources. This automatic management includes the selective viewing of video information on a prescribed screen, screen window or screen configuration, as well as the selective provision of audio information to a particular port or appliance. This management is performed in response to and as a function of consumer preferences, as well as the source, type and content of the video and audio information. The management system may be entirely located within the consumer's residence, or reside in whole or in part in a connected network or cloud. The system can initiate video/audio management in an entirely autonomous manner, or initiate only in response to user input (keypad, graphical user interface, voice, etc.).

[0008] BRIEF DESCRIPTION OF THE DRAWINGS

[0009] The aspects and advantages of the present invention will become better understood with regard to the following description, appended claims, and accompanying drawings in which:

[0010] FIG. 1 is a functional diagram of a system supporting a first preferred embodiment of an alternate media management and presentation.

[0011] FIG. 2 provides a table representative of media stored in the system of FIG. 1.

[0012] FIG. 3 provides a table representative of available vectors within the system of FIG. 1.

[0013] FIG. 4 provides a table representative of user-specific provider and account information associated with the system of FIG. 1.

[0014] DETAILED DESCRIPTION

[0015] FIG. 1 provides a functional diagram of a system (100) for the management and presentation of alternate media. As shown, the system includes digital media controller 102 which serves as the nexus for the system. Digital media controller 102 includes processor 104 (which includes at least one digital processor), and memory 106. This digital media controller may be implemented via a general-purpose personal computer, a dedicated appliance (such as a set-top box or other consumer premises equipment), or via an off-site device connected to the wireless interface and other peripherals via private or public networks.

[0016] Digital media controller 102 is shown to be interfaced with digital televisions 108a and 108b, Wi-Fi interface 110, cable/optical media provider 112, laptop computer 114, Internet providers 116a and 116b, Bluetooth® transceiver 118 and telecom provider 120. In addition, mobile devices (smartphones) 122a and 122b, and security video camera (124) are adapted to interface with digital media controller 102 via a bidirectional Wi-Fi connection supported by Wi-Fi interface 110, and wireless headphones 126a and 126b interface with digital media controller 102 via Bluetooth transceiver 118. Remote control 128 is shown to be communicatively linked to digital media controller 102 via a wireless connection. This wireless connection can be optical (infrared) or radio frequency ("RF"). If the digital media controller is located off-site, it will be understood that a network connection to an optical or RF transceiver could be utilized to relay remote control commands to the off-site digital media controller.

[0017] Processor 104 serves to control and direct incoming and outgoing media to and from digital media controller 102, including video, image and audio information stored in memory 106. In addition, processor 104 is responsive to user-specific information stored in memory 106, as well as user commands received by digital media controller 102. Such commands may be issued by users via laptop computer 114, mobile devices 122a and 122b, remote control 128, or digital assistant 130. As shown, remote control 128 includes a keypad that may be actuated by a user to generate a command, or a user may issue a voice command (132) which the remote control forwards to digital media controller 102 for voice recognition and processing. This received voice command can also be forwarded by processor 104, via media controller 102 and Internet provider 116a, to an off-site server (134) for recognition and processing. A voice command could also be received and forwarded in a similar manner by digital assistant 130. Memory 106 is capable of storing digital video and audio which may be viewed or listened to via the various peripherals (108a, 108b, 114, 122a, 122b, 126a, 126b) interfaced with digital media controller 102.
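
By way of illustration only, the command intake just described could be sketched as follows. The function names are not taken from the disclosure; they stand in for keypad/GUI entry, local voice recognition, and forwarding to off-site server 134 when local recognition is unavailable.

```python
# Hypothetical sketch of the command intake described in paragraph [0017].
# recognize_locally() and forward_to_offsite_server() are placeholders; the
# disclosure does not specify how voice recognition is implemented.

def recognize_locally(audio):
    """Stand-in for voice recognition within digital media controller 102."""
    return None  # assume local recognition is unavailable in this example

def forward_to_offsite_server(audio):
    """Stand-in for recognition by off-site server 134 via Internet provider 116a."""
    return "show the eagles game on beth's tv"

def normalize_command(raw):
    """Return a lower-case command string from keypad/GUI text or captured audio."""
    if isinstance(raw, str):                  # keypad or GUI entry
        return raw.strip().lower()
    text = recognize_locally(raw)             # voice command via remote 128 or assistant 130
    if text is None:
        text = forward_to_offsite_server(raw)
    return text.strip().lower()

print(normalize_command("Show the Eagles Game on Beth's TV"))
print(normalize_command(b"simulated audio capture"))
```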

[0018] As mentioned above, memory 106 stores video, image and audio information. The stored video information can consist of recorded programs received via cable/optical media provider 112 or Internet providers 116a and 116b (memory 106 functioning as a DVR), downloaded video and images from computers, mobile devices, residential video cameras, etc., as well as downloaded music files. A user can also identify or tag these stored files within memory 106 so as to designate a particular name, genre, event, or an association with a particular user. Processor 104 is adapted to receive and process user commands for such tagging via the graphical user interface ("GUI") provided by laptop computer 114 or mobile devices 122a and 122b, as well as via remote control 128.

[0019] For example, as shown in table 200 of FIG. 2, there are four identified users (Andrew, Beth, Charles and Donna). Particular photo, video, voice message and music files are associated with each of these users. A video file labeled "Band Gigs" is shown to be associated with the user named Andrew, as is a music file labeled "Classical Mix". To make such an association, a user could have utilized a GUI to a) enter the name "Andrew" in a field associated with the video and music files at issue, or b) used a mouse, stylus or touch-screen associated with the GUI to designate the files as associated with Andrew. A user (via remote control 128, digital assistant 130, or mobile devices 122a and 122b) may also tag files using a set of voice commands that system 100 is adapted to recognize and respond to. For example, a user could recite a command such as "Associate the music file Ramones with Beth" or "The image file Charles' School Photos belongs to Charles".
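
As a rough sketch of the kind of association Table 200 represents, the dictionary below stands in for the stored user/media index, and the regular expression is one hypothetical way of reading the spoken tagging command; the disclosure defines such commands only by example.

```python
import re

# Illustrative stand-in for the user/media associations of Table 200 (FIG. 2).
media_index = {
    "Andrew": {"video": ["Band Gigs"], "music": ["Classical Mix"]},
    "Beth": {}, "Charles": {}, "Donna": {},
}

# Hypothetical pattern for commands of the form
# "Associate the music file Ramones with Beth".
TAG_PATTERN = re.compile(
    r"associate the (?P<kind>photo|video|music|image) file (?P<name>.+) with (?P<user>\w+)",
    re.IGNORECASE,
)

def apply_tag_command(command):
    """Record a user/file association in the media index."""
    match = TAG_PATTERN.match(command.strip())
    if not match:
        raise ValueError(f"unrecognized tagging command: {command!r}")
    user, kind, name = match["user"].title(), match["kind"].lower(), match["name"]
    media_index.setdefault(user, {}).setdefault(kind, []).append(name)

apply_tag_command("Associate the music file Ramones with Beth")
print(media_index["Beth"])   # {'music': ['Ramones']}
```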

[0020] Memory 106 is also adapted to store information identifying the various peripherals within system 100 that are available as vectors for the presentation of streaming, live or stored media, as well as any association between particular users and those peripherals. For example, DTV 108a is designated as the primary video display for all users. Laptop computer 114 has been designated as Andrew's secondary video display and headset 126a as his Bluetooth headset. DTV 108b is designated as the secondary video display for users Beth and Donna. Information associating an additional headset (126b) as well as two mobile devices (122a and 122b) is also stored in memory 106 (as shown in Table 300). Bluetooth pairing of devices with the system is done in the normal manner via a user interface such as that provided by laptop computer 114, DTVs 108a and 108b, or mobile devices (smartphones) 122a and 122b. The association between various Bluetooth peripherals and users can be made using GUIs or voice commands in a manner similar to that described with respect to tagging media files.
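
A minimal sketch of the vector registry represented by Table 300 follows, assuming a simple mapping keyed by peripheral identifier; the field names, and the user assignments for headset 126b and mobile devices 122a and 122b, are placeholders not specified in the description.

```python
# Illustrative stand-in for the vector registry of Table 300 (FIG. 3). Field names
# are assumptions; entries marked "assumed" are not specified in the description.
vectors = {
    "DTV_108a":     {"role": "primary display",   "users": ["Andrew", "Beth", "Charles", "Donna"]},
    "DTV_108b":     {"role": "secondary display", "users": ["Beth", "Donna"]},
    "laptop_114":   {"role": "secondary display", "users": ["Andrew"]},
    "headset_126a": {"role": "Bluetooth headset", "users": ["Andrew"]},
    "headset_126b": {"role": "Bluetooth headset", "users": ["Beth"]},      # assumed
    "mobile_122a":  {"role": "mobile device",     "users": ["Andrew"]},    # assumed
    "mobile_122b":  {"role": "mobile device",     "users": ["Beth"]},      # assumed
}

def vector_for(user, role):
    """Return the identifier of the vector filling the given role for the given user."""
    for name, info in vectors.items():
        if info["role"] == role and user in info["users"]:
            return name
    return None

print(vector_for("Andrew", "Bluetooth headset"))   # headset_126a
```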

[0021] In addition, memory 106 also stores information that associates various media providers and account information with users. Such stored information is represented in Table 400 of FIG. 4. As shown, when applicable, the various users are associated with a cable/optical media provider, an Internet provider, a social media account, a video conferencing account, as well as streaming media accounts. Memory 106 also stores any required passwords associated with the providers and accounts, so as to enable system 100 to gain access.
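
The provider and account associations of Table 400 might be held in a structure like the one sketched below. The "@Andrew" handle, its password ("1112") and the association with Internet provider 116b are taken from the description of FIG. 4 later in this document; every other value, and the field names themselves, are placeholders.

```python
# Illustrative stand-in for the provider/account associations of Table 400 (FIG. 4).
# Apart from the "@Andrew"/"1112" entry and provider 116b, all values are placeholders.
accounts = {
    "Andrew": {
        "cable_optical_provider": "provider 112",
        "internet_provider": "provider 116b",
        "social_media": {"handle": "@Andrew", "password": "1112"},
        "video_conference": {"account": "andrew.vc", "password": "****"},     # placeholder
        "streaming": {"account": "Internet Media Account A", "password": "****"},
    },
}

def credentials_for(user, service):
    """Look up stored credentials so system 100 can sign in on the user's behalf."""
    entry = accounts.get(user, {}).get(service)
    if entry is None:
        raise KeyError(f"no {service} account stored for {user}")
    return entry

print(credentials_for("Andrew", "social_media"))
```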

[0022] The ability of the system to associate provider, account and password information with a user permits the execution of pre-configured routines that enable users to easily access personal media from a host of sources. Consider, for example, the command "Show Andrew's Internet Media Account A Holiday Photos". As evidenced by Table 200, there is nothing stored in memory 106 that has been indexed or labeled as "Holiday Photos" and associated with Andrew. Rather, as the command stated, these particular photos are associated with Andrew's Internet Media Account A. Accordingly, information would be stored in memory 106 so as to pre-configure the system's response when the phrase "Andrew's Internet Media Account A" is recognized. This pre-configuration routine, entered and stored by a user via a GUI, would instruct the system to access and retrieve the requisite ID and password from memory 106, and utilize Internet Provider 116a to access the requested photos (memory 106 contains information associating this provider with Andrew - see Table 400).
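
A hypothetical sketch of such a pre-configured routine appears below: a stored trigger phrase mapped to the credentials and provider needed to satisfy the request. The routine format and helper names are assumptions; the disclosure requires only that memory 106 hold enough information for the system to retrieve the ID and password and reach the provider when the phrase is recognized.

```python
# Hypothetical sketch of a pre-configured routine keyed on a recognized phrase,
# per paragraph [0022]. Credential values and helper names are placeholders.

stored_credentials = {
    ("Andrew", "Internet Media Account A"): {"user_id": "andrew01", "password": "****"},
}

def fetch_from_provider(provider, credentials, item):
    """Stand-in for retrieving media through an Internet provider interface."""
    return f"{item} retrieved via {provider} for {credentials['user_id']}"

# Routine stored (e.g., entered through a GUI) against the phrase it responds to.
routines = {
    "andrew's internet media account a": {
        "credential_key": ("Andrew", "Internet Media Account A"),
        "provider": "Internet Provider 116a",
    }
}

def run_routine(command):
    """Execute the first stored routine whose trigger phrase appears in the command."""
    for phrase, routine in routines.items():
        if phrase in command.lower():
            creds = stored_credentials[routine["credential_key"]]
            item = command.lower().split(phrase, 1)[1].strip() or "requested media"
            return fetch_from_provider(routine["provider"], creds, item)
    raise LookupError("no pre-configured routine matches this command")

print(run_routine("Show Andrew's Internet Media Account A Holiday Photos"))
```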

[0023] The voice command responsiveness of the system is enabled by processor 104 and media controller 102, which function to recognize, process and execute a predetermined set of commands based upon user input and the information stored in memory 106. These commands can be entered by a user via a GUI or as voice commands. Examples of syntax for such commands are shown below:

[0024] · Show [sporting event] with commercials on [user name]'s mobile device.

[0025] · Show [broadcast program] and [user name]'s [stored image files].

[0026] · Show [user name]'s [social media account] and [Internet media].

[0027] · Show [sporting event] and [security camera] and commercials.

[0028] · Show [stored video file] on [user name]'s mobile device and route audio to [user name]'s headphones.

[0029] · Show [event] on all displays.

[0030] · Show [social event] on all displays.

[0031] · Play [user name]'s [music file] on [user name]'s headphones.

[0032] · Play [music file] on [peripheral].

[0033] · Play [video file] on [peripheral].

[0034] · Switch to [user image file] for [interval].

[0035] · Switch to [music file].

[0036] · Switch to [alternate provider or media] during commercials.

[0037] · Switch to [sporting event] during commercials.

[0038] · Switch to [user name]'s text messages during commercials.

[0039] · Switch to [user name]'s video conference call with [name].

[0040] · Switch to [stored image files] from [start time] to [end time].

[0041] · Switch to [user name]'s primary display.

[0042] · Also show [event] with audio routed to [user name]'s headphones for the next [interval].

[0043] · Also show Eagles Game on [vector].

[0044] · Also show [device].

[0045] The syntax has the basic format of "action" (show, play, switch, also), "content" (video, image, account, Internet) and "vector" (primary/secondary display, headphones, mobile device). The initiating words of the command phrases ("Show", "Play", "Switch", "Also") serve to instruct system 100 as to the base action being requested.
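
One illustrative way to read that action/content/vector structure is with a small parser such as the sketch below; the disclosure defines the syntax only by example, so the token handling here (an initiating verb, an optional vector introduced by "on" or "to", an optional condition such as "during commercials", and the remaining text treated as content) is an assumption.

```python
import re

# Hypothetical parser for the action/content/vector syntax of paragraph [0045].
ACTIONS = ("also show", "show", "play", "switch to", "also")

def parse_command(command):
    """Split a command into action, content, vector and condition (illustrative only)."""
    text = command.strip().lower().rstrip(".")
    action = next((a for a in ACTIONS if text.startswith(a)), None)
    if action is None:
        raise ValueError(f"command does not start with a recognized action: {command!r}")
    rest = text[len(action):].strip()

    condition = None   # e.g. "during commercials", "from ... to ...", "for the next ..."
    cond_match = re.search(r"\bduring commercials\b|\bfrom .+ to .+$|\bfor the next .+$", rest)
    if cond_match:
        condition = cond_match.group(0)
        rest = (rest[:cond_match.start()] + rest[cond_match.end():]).strip()

    vector = None      # vector introduced by "on" or "to", when present
    vec_match = re.search(r"\bon (.+)$|\bto (.+)$", rest)
    if vec_match:
        vector = vec_match.group(1) or vec_match.group(2)
        rest = rest[:vec_match.start()].strip()

    return {"action": action, "content": rest, "vector": vector, "condition": condition}

print(parse_command("Show the Eagles Game on Beth's TV"))
print(parse_command("Switch to Andrew's Vacation Photos during commercials"))
```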

[0046] "Show" is indicative of the rendering of visual content. For example, the command "Show the Eagles Game on Beth's TV" would instruct the system to route a video stream of that sporting event to the DTV identified in the system as Beth's primary video display, whereas the command "Show the Eagles Game" would instruct the system to route a video stream of that sporting event to whatever display was active, as no particular vector was specified.

[0047] "Play" is indicative of audio content. For example, the command "Play the Eagles game on Andrew's Headphones" would cause system 100 to play the audio portion of the video stream of that sporting event on the headphones associated with Andrew in memory 106.

[0048] "Switch" indicates that the command is requesting a change in content or vector. For example, "Switch to the Eagles game during commercials" would instruct system 100 to show the video stream of that sporting event whenever a commercial break occurred in the program that was presently being displayed. The command "Switch to Andrew's mobile device" would instruct system 100 to cease showing/playing content on the presently active vector and show/play that content on Andrew's mobile device.

[0049] As shown above, the syntax also permits user names to be employed as modifiers within the commands. A user name can modify content ("Show Charles' video"; "Play Beth's voice messages") or a vector ("Beth's mobile device").
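
Purely as an illustration of paragraphs [0046]-[0048], the sketch below dispatches on the initiating verb; the routing functions and the notion of an "active vector" are assumptions, not elements of the disclosure.

```python
# Hypothetical dispatch on the initiating verb ("Show" -> video, "Play" -> audio,
# "Switch" -> change of content/vector). Routing functions are placeholders.

state = {"active_vector": "DTV_108a", "now_showing": "broadcast program"}

def route_video(content, vector):
    print(f"video of {content!r} -> {vector}")

def route_audio(content, vector):
    print(f"audio of {content!r} -> {vector}")

def dispatch(action, content, vector=None):
    """Route content according to the initiating verb of the command."""
    target = vector or state["active_vector"]    # default to whatever display is active
    if action == "show":
        route_video(content, target)
    elif action == "play":
        route_audio(content, target)
    elif action in ("switch", "switch to"):
        state["now_showing"] = content           # replace the content on the active vector
        route_video(content, target)
    else:
        raise ValueError(f"unsupported action: {action}")

dispatch("show", "Eagles game", "Beth's primary display (DTV 108a)")
dispatch("play", "Eagles game", "Andrew's headphones (126a)")
dispatch("switch to", "Eagles game")             # e.g. during a commercial break
```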

[0050] Time constraints, either specific (start time, stop time, fixed interval) or tied to a particular condition ("during commercials"), are also permitted within the syntax.

[0051] The conjunctions "also" and "with" serve to permit a user to command that more than a single content be simultaneously presented on a given vector. For example, in response to the command "Also show Eagles Game on DTV 108a", processor 104 can be pre-programmed to respond by causing media controller 102 to display the video stream of that sporting event in a picture-in-picture ("PIP") window overlaid upon whatever content was already being displayed on DTV 108a. In contrast, processor 104 can be pre-programmed to respond to the command "Show Eagles game and security camera on Andrew's mobile device" by instructing media controller 102 to display, upon Andrew's mobile device, the video stream of that sporting event on a split-screen side-by-side with a live feed from the security camera.

[0052] The above commands and syntax can be utilized to create tailored media experiences that can incorporate broadcast video, live streaming video, as well as stored media. For example, a user could recite the following commands:

"Show the Eagles game on DTV 108a and switch to Andrew's Vacation Photos during commercials."

"Also show live feed from Andrew's social media account."

The first command would result in the system routing a live feed of the Eagles game, sourced from cable/optical media provider 112, to DTV 108a and switching to a slideshow of the images stored in the file "Andrew's Vacation Photos", sourced from memory 106, when a commercial break occurs during that sporting event. The second command would cause the system to open a PIP window within the display of the Eagles game on DTV 108a, and display the video stream currently associated with Andrew's social media account therein. All of the requisite information and connectivity to establish this live feed is available to the system. As shown in FIG. 1, digital media controller 102 interfaces with Internet provider 116b, and Table 400 of FIG. 4 indicates that memory 106 contains information associating provider 116b with Andrew. In addition, Table 400 shows that memory 106 also contains information indicating that social media account "@Andrew" is also associated with Andrew and provides the required password ("1112") for accessing that account.
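
As an illustration of the concurrent presentation described in this example, the sketch below keeps a per-vector presentation state in which an "Also show" request adds an overlay (PIP) entry rather than replacing the primary feed, and a commercial-break substitution is recorded separately. The state layout is an assumption; the disclosure requires only that the system be able to render the feeds together.

```python
# Hypothetical per-vector presentation state for the example of paragraphs
# [0051]-[0052]. Field names and the overlay list are assumptions.

presentation = {
    "DTV_108a": {"primary": "Eagles game (cable/optical media provider 112)", "overlays": []},
}

def also_show(vector, content, layout="picture-in-picture"):
    """Add content alongside whatever is already being presented on the vector."""
    presentation[vector]["overlays"].append({"content": content, "layout": layout})

def on_commercial_break(vector, alternate):
    """Record the content to substitute for the primary feed during commercial breaks."""
    presentation[vector]["during_commercials"] = alternate

on_commercial_break("DTV_108a", "slideshow of 'Andrew's Vacation Photos' (memory 106)")
also_show("DTV_108a", "live feed from social media account @Andrew (Internet provider 116b)")
print(presentation["DTV_108a"])
```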

Although the invention herein has been described with reference to particular embodiments, it is to be understood that these embodiments are merely illustrative of the principles and applications of the present invention. For example, in addition to the various interfaces specifically mentioned as providing GUIs in the above-described system (laptop computer, smartphone), a dedicated appliance, a personal computer or a tablet could also serve this function. Similarly, although DTVs, laptop computers and smartphones were described as vectors for displaying or playing media, any type of visual or audio device capable of reasonably reproducing the particular type of media being accessed by a user would be a suitable vector (tablet, analog TV, projector, audio system, etc.). In addition, the particular syntax of the voice commands disclosed above is not intended to be limiting. Technology supporting the recognition of and response to such commands is well-known in the art and continually advancing. It will be understood that the principles of the disclosed embodiments can be applied to this advancing technology without departing from the scope of the invention.