

Title:
SYSTEMS AND METHODS OF PROVIDING A REAL-TIME AND INTERACTIVE ENGAGEMENT DATA CLASSIFICATION FOR A MULTIMEDIA CONTENT
Document Type and Number:
WIPO Patent Application WO/2023/183595
Kind Code:
A1
Abstract:
A computer-implemented method is provided herein. The method includes: transmitting a live stream of an event using a computer application; receiving viewer engagement data from viewers during the live stream; conducting a sentiment analysis of the viewer engagement data in real time; and relaying the sentiment analysis to a user.

Inventors:
WELTON JOSHUA (US)
Application Number:
PCT/US2023/016271
Publication Date:
September 28, 2023
Filing Date:
March 24, 2023
Assignee:
VURPLE INC (US)
International Classes:
G06Q30/0203; H04N21/45; G06F16/9535; G06F40/30; G06Q50/00
Foreign References:
US20150350730A1 2015-12-03
US20050237378A1 2005-10-27
US20140289226A1 2014-09-25
US20160180361A1 2016-06-23
Attorney, Agent or Firm:
GAMBINO, Darius et al. (US)
Claims:
CLAIMS

1. A computer-implemented method, comprising: transmitting a live stream of an event using a computer application operating on a personal computing device; receiving viewer engagement data from viewers during the live stream; conducting a sentiment analysis of the viewer engagement data in real time; and relaying the sentiment analysis to a user.

2. The method of claim 1, further comprising: receiving individual demographic information from each viewer of the computer application.

3. The method of claim 2, wherein the receiving individual demographic information is completed prior to the live stream.

4. The method of claim 1, wherein the viewer engagement data includes one or more selected from the group consisting of: chat data, comment reactions, and polling data.

5. The method of claim 1, wherein the sentiment analysis includes determining one or more selected from the group consisting of: moods, sentiments, and perceptions.

6. The method of claim 1, wherein the transmitting includes a latency adjustment such that a viewer is viewing a delayed transmission.

7. The method of claim 1, wherein the transmitting includes responding to voice commanded functions from the user.

8. The method of claim 7, wherein the voice commanded functions provide a displayed result instantaneously from the perspective of the viewer.

9. The method of claim 7, wherein the voice commanded functions include accessing a search engine from an internet service.

10. The method of claim 1, wherein voice commanded functions from the user are completed instantaneously from the perspective of the viewer.

11. A computer-implemented method of providing a real-time and interactive engagement data classification for a multimedia content comprising: identifying engagement data from users during live streaming of a multimedia content; analyzing the engagement data from the users during live streaming of the multimedia content; linking the engagement data to demographic information of the users; and generating a report depicting classification of the engagement analytics with respect to the demographic information.

12. A system configured to implement the method of claim 1, comprising: a personal computing device of the user including a mobile application; a plurality of servers in electronic communication with the personal computing device, the plurality of servers being configured and adapted to conduct the sentiment analysis in real time; and a plurality of personal computing devices of the viewers; wherein the personal computing device of the user is configured and adapted to transmit an audio and video stream to the plurality of personal computing devices of the viewers; wherein the personal computing device of the user is configured and adapted to receive viewer engagement data from the plurality of personal computing devices of the viewers.


Description:
SYSTEMS AND METHODS OF PROVIDING A REAL-TIME AND INTERACTIVE ENGAGEMENT DATA CLASSIFICATION FOR A MULTIMEDIA CONTENT

CROSS-REFERENCE TO RELATED APPLICATION

The present application claims priority to U.S. Provisional Patent Application No. 63/323,764, filed March 25, 2022, which is incorporated herein by reference in its entirety.

FIELD OF THE INVENTION

The present invention relates to systems and methods for identifying and analyzing engagement data (reactions such as, for example, comments, moods, sentiments, and perceptions) from users during live streaming of a multimedia content, linking the engagement data to demographic information of the users, and generating reports illustrating classification of the engagement data with respect to users’ demographic information.

BACKGROUND

Live streaming is an important way for politicians to reach their constituents. For a representative democracy to function effectively, a representative or politician should be able to understand the policy issues most important to their constituency. Presently available platforms fail in providing effective constituent feedback to representatives or politicians.

SUMMARY

One aspect of the invention provides a computer-implemented method. The method includes: transmitting a live stream of an event using a computer application operating on a personal computing device; receiving viewer engagement data from viewers during the live stream; conducting a sentiment analysis of the viewer engagement data in real time; and relaying the sentiment analysis to a user.

Another aspect of the invention provides a computer-implemented method of providing a real-time and interactive engagement data classification for a multimedia content. The method includes: identifying engagement data from users during live streaming of a multimedia content; analyzing the engagement data from users during live streaming of the multimedia content; linking the engagement data to demographic information of the users; and generating a report depicting classification of the engagement analytics with respect to the demographic information.

Another aspect of the invention provides a system configured to implement the methods described herein. The system includes a personal computing device of the user including a mobile application. The system includes a plurality of servers in electronic communication with the personal computing device, the plurality of servers being configured and adapted to conduct the sentiment analysis in real time. The system includes a plurality of personal computing devices of the viewers. The personal computing device of the user is configured and adapted to transmit an audio and video stream to the plurality of personal computing devices of the viewers. The personal computing device of the user is configured and adapted to receive viewer engagement data from the plurality of personal computing devices of the viewers.

DEFINITIONS

The instant invention is most clearly understood with reference to the following definitions.

As used herein, the singular form "a," "an," and "the" include plural references unless the context clearly dictates otherwise.

Unless specifically stated or obvious from context, as used herein, the term "about" is understood as within a range of normal tolerance in the art, for example within 2 standard deviations of the mean. "About" can be understood as within 10%, 9%, 8%, 7%, 6%, 5%, 4%, 3%, 2%, 1%, 0.5%, 0.1%, 0.05%, or 0.01% of the stated value. Unless otherwise clear from context, all numerical values provided herein are modified by the term about.

As used in the specification and claims, the terms "comprises," "comprising," "containing," "having," and the like can have the meaning ascribed to them in U.S. patent law and can mean "includes," "including," and the like.

Unless specifically stated or obvious from context, the term "or," as used herein, is understood to be inclusive.

Ranges provided herein are understood to be shorthand for all of the values within the range. For example, a range of 1 to 50 is understood to include any number, combination of numbers, or sub-range from the group consisting 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, or 50 (as well as fractions thereof unless the context clearly dictates otherwise).

BRIEF DESCRIPTION OF THE DRAWINGS

For a fuller understanding of the nature and desired objects of the present invention, reference is made to the following detailed description taken in conjunction with the accompanying drawing figures wherein like reference characters denote corresponding parts throughout the several views.

The following detailed description of specific embodiments of the invention will be better understood when read in conjunction with the appended drawings. For the purpose of illustrating the invention, there are shown in the drawings specific embodiments. It should be understood, however, that the invention is not limited to the precise arrangements and instrumentalities of the embodiments shown in the drawings.

FIG. 1 is a schematic summarizing systems and steps involved in identifying and analyzing engagement data from users during live streaming of a multimedia content, linking the engagement data to demographic information of the users, and generating a report depicting classification of the engagement analytics with respect to users’ demographic information according to embodiments of the present invention.

FIG. 2 illustrates a schematic outlining usefulness of the present invention to citizens, ambassadors and/or government officials according to embodiments of the present invention.

FIGS. 3A-3J illustrate images of screens showing details related to creating an account on the App (called “Vurple”), which is an application for live streaming of multimedia content and comprises an engine (called the “Vurplytics” engine) for identifying, analyzing and classifying the users’ engagement data based on their demographic information according to embodiments of the present invention.

FIGS. 4A-4C illustrate images of screens showing a user’s profile as it appears on the App.

FIGS. 5A-5C illustrate images of screens showing details related to password recovery processes for an exemplary account on the App.

FIGS. 6A-6E illustrate images of screens showing that a user can use the App for live streaming of a multimedia content even without providing their demographic information.

FIGS. 7A-7B illustrate images of screens showing users’ reactions (engagement data in the form of comments, questions, likes, upvotes, hearts, etc.) to the multimedia content being streamed.

FIGS. 8A-8D illustrate images of screens showing that a user can anonymously (in “Vhost” mode) post their reactions during live streaming of multimedia content on the App.

FIGS. 9-48 illustrate images of screens showing additional processes and features related to an application (i.e., an “app”) employing certain embodiments of the present disclosure.

FIG. 49 illustrates an exemplary flow diagram of data flow, in accordance with exemplary embodiments of the present disclosure.

FIG. 50 illustrates an exemplary flow diagram implementing a latency adjustment, in accordance with exemplary embodiments of the present disclosure.

DETAILED DESCRIPTION

The present disclosure provides a computer-implemented method of providing a real-time and interactive engagement data classification for a multimedia content, and systems implementing the same. In one aspect, the method includes: transmitting a live stream of an event using a computer application operating on a personal computing device; receiving viewer engagement data from viewers during the live stream; conducting a sentiment analysis of the viewer engagement data in real time; and relaying the sentiment analysis to a user. In another aspect, the method includes: identifying engagement data from users during live streaming of a multimedia content; analyzing the engagement data from users during live streaming of the multimedia content; linking the engagement data to demographic information of the users; and generating a report depicting classification of the engagement analytics with respect to the demographic information.

An application-based software program (an “App”) can be used to live stream an event, and comments made by users (e.g., viewers) during the live stream can be assessed in real time to identify the users’ reactions (such as, for example, moods, sentiments, and perceptions) with respect to the live stream. There can be a calculus/tabulation of the individual demographic information that a user inputs via a profile within the App, and that information (along with other information) can be used to display reports of crowd reactions to the live stream. In certain embodiments, the reports of crowd reaction to the live stream can be further used for developing predictive analytical systems and methods such as, for example, systems and methods that use artificial intelligence (AI) for predicting an outcome of an election.

In certain embodiments, a system for implementing certain embodiments of the present disclosure is provided. The system can include an analytical digital engine. An example of such an analytical digital engine is illustrated in FIG. 49. The engine can be configured to collect certain analytical data extrapolated from user generated info (UGI) during a mobile phone application sign up process as well as database (DB) information, coded by artificial intelligence (AI) and/or machine learning (ML). The cross-referenced UGI and DB smart data is processed to provide a real time, one button selection of smart info to the user during a live stream. Smart data can include data collected from UGI or DB that is processed with mathematical equations coded to reflect a medium of information rather than just raw data. Smart data can include calculative measurements on data sets or data that is already processed by the analytical engine (e.g., the “Vurplytics” engine). The live stream can consist of a video and/or audio stream. The DB information can be a combination code (AI/ML) of chat summarization, smart count data, and UGI.

In certain embodiments, a sentiment analysis can be conducted in real time. The real-time sentiment analysis can be an ML technique that automatically recognizes and extracts sentiment in a “chat” field (e.g., a live text chat), whenever it occurs. The real-time sentiment analysis can be used to analyze mentions, positive/negative comments, and word summarizations. This process can use several ML tasks such as natural language processing (NLP), text analysis, and/or semantic clustering to identify opinions/statements about the live chat and extract intelligence for an analytical process.
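
As a minimal sketch of this step (assuming AWS Comprehend as the NLP service, consistent with FIG. 1; the handler name, region, and message text are illustrative, not part of the disclosure):

```python
import boto3

# Hypothetical handler: invoked for each chat message as it arrives.
comprehend = boto3.client("comprehend", region_name="us-east-1")

def score_chat_message(message_text: str) -> dict:
    """Classify one chat message as POSITIVE/NEGATIVE/NEUTRAL/MIXED in real time."""
    response = comprehend.detect_sentiment(Text=message_text, LanguageCode="en")
    return {
        "sentiment": response["Sentiment"],    # e.g., "POSITIVE"
        "scores": response["SentimentScore"],  # per-class confidence values
    }

# e.g., score_chat_message("Great answer on the mask mandate!")
```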

Certain embodiments can include a smart process for trending analytics, including phrases and words that can be processed through ML to give a “smart” count instead of an average score. A smart count can be characterized using a plurality of different parameters. For example, a time-weighted score can be given, where a particular phrase, word, or symbol is weighted differently during a particular section of a speech. Trending phrases can also be processed using ML and similarity scores cross-referenced with relevancy to the topic. Both words and phrases can share matrix scores. Trending phrases can be targeted to accurately track phrases or words used in the chat. ML and smart filtering can be used to remove repetitive jargon.
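
One way such a time-weighted “smart” count might look (a sketch only; the segment weights and recency half-life are illustrative assumptions, as the disclosure does not specify them):

```python
import math
from collections import defaultdict

def smart_count(chat_events, segment_weights, half_life_s=120.0):
    """Time-weighted phrase count: phrases count more during weighted speech
    segments and decay with age, rather than contributing to a flat average.

    chat_events: iterable of (timestamp_s, phrase, segment_id) tuples
    segment_weights: dict of segment_id -> weight, e.g., {"closing": 2.0}
    """
    events = list(chat_events)
    now = max(t for t, _, _ in events)
    scores = defaultdict(float)
    for t, phrase, segment in events:
        recency = math.exp(-(now - t) * math.log(2) / half_life_s)  # decay by age
        scores[phrase] += segment_weights.get(segment, 1.0) * recency
    return sorted(scores.items(), key=lambda kv: -kv[1])

# e.g., smart_count([(0, "mask mandate", "opening"), (300, "mask mandate", "closing")],
#                   {"closing": 2.0})
```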

The general chat can be analyzed in a plurality of ways. For example, certain rules can be implemented, such as rules related to detection of certain words, phrases, or sentiments. Other rules can be implemented to detect and analyze the frequency of a word, phrase, or other symbol (e.g., an emoji). The general chat can be analyzed to understand the cosine similarity of words, phrases, and symbols (e.g., using a similarity matrix). An NLP chat for any live broadcast can include smart computation similarities. Rules can include removal of repetitive words and ML to understand relevancy of words assigned to the topic. In implementing such rules, accurate summarization and real-time sentiment heat mapping can be provided.
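
A sketch of the cosine-similarity step (scikit-learn is an assumed stand-in here, as the disclosure does not name a vectorization library):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def chat_similarity_matrix(phrases):
    """Build a pairwise cosine-similarity matrix over chat phrases so that
    near-duplicate wordings can be clustered and repetitive jargon filtered."""
    vectors = TfidfVectorizer().fit_transform(phrases)
    return cosine_similarity(vectors)  # N x N matrix with 1.0 on the diagonal

# e.g., chat_similarity_matrix(["end the mask mandate",
#                               "drop the mask mandate",
#                               "great speech tonight"])
# scores the first two phrases as similar and the third as dissimilar.
```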

During or after a live streaming event, a summarization can be provided to a user portal. The summarization can include data from emails and chat from the live stream. An incoming data stream of the entire chat, which may run to a few paragraphs (a block of characters set by certain parameters), can be pared down to a few sentences with video time stamping for comment recall. The summarization can include video time stamping, which can include word or audio recognition (e.g., of the broadcasting user). A voice command can be used to find comments and pull video “clips” related to the topic.
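
A sketch of paring timestamped chat down to a few recallable sentences (the reaction-count heuristic and the record shape are illustrative assumptions; the disclosure does not fix a ranking method):

```python
def summarize_chat(chat_block, top_n=3):
    """Pare a block of timestamped chat lines down to a few sentences, keeping
    each line's video timestamp so the related clip can be recalled later.

    chat_block: list of {"t": seconds_into_stream, "text": str, "reactions": int}
    """
    # Naive extractive heuristic: keep the lines with the most audience reactions.
    kept = sorted(chat_block, key=lambda line: -line["reactions"])[:top_n]
    return [(line["t"], line["text"]) for line in sorted(kept, key=lambda l: l["t"])]

# e.g., summarize_chat(chat) -> [(312.5, "What about the new mask mandate?"), ...]
```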

Referring now to the drawings, FIG. 1 illustrates a system 100 in accordance with an exemplary implementation of the present invention. System 100 is illustrated including a device 102 implementing an application 104. FIG. 1 illustrates “Event Stream Usage” 106 (e.g., using AWS Kinesis). Event streams can be used to capture user engagement activities such as users submitting questions, interacting with questions or comments (e.g., “reacting”, “liking”, “loving”, “upvoting”, “hearting”, etc.), commenting, etc. A single interaction or a plurality of interactions can be taken from any person in the application. In one exemplary embodiment, the event stream is a stream of a politician or government official giving a speech on a specific topic. In one example, AWS Kinesis can be used to take different events from different users on different platforms and feed them to the analytics engine 108 (e.g., AWS Kinesis Analytics Engine). In such an example, AWS Kinesis Analytics can separate out the information that is text-based (e.g., comments) and pass it on to natural-language processing (NLP) service 110 (e.g., AWS Comprehend) where it is analyzed for “sentiment.” At least part of that dataset can be passed back to analytics engine 108 (e.g., AWS Kinesis Analytics).
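
As a hedged illustration of the event-capture side (assuming boto3 for AWS Kinesis; the stream name and record shape are illustrative, not from the disclosure):

```python
import json
import time

import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")

def publish_engagement_event(user_id: str, action: str, payload: dict):
    """Push one user interaction (question, comment, upvote, heart, etc.) into
    the event stream that feeds the analytics engine."""
    record = {"user_id": user_id, "action": action,
              "payload": payload, "ts": time.time()}
    kinesis.put_record(
        StreamName="engagement-events",          # illustrative stream name
        Data=json.dumps(record).encode("utf-8"),
        PartitionKey=user_id,                    # keeps one user's events ordered
    )

# e.g., publish_engagement_event("user-123", "upvote", {"comment_id": "c-42"})
```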

FIG. 1 also illustrates “Event Analytics” engine 108 (e.g., using AWS Kinesis Analytics). Event analytics engine 108 can be used to compute information in real time based on incoming information. This analysis can be dumped back into the stream to make it available for downstream systems. Event analytics engine 108 (e.g., using AWS Kinesis Analytics Engine) can take raw information about direct user actions and calculate derived information from the raw information. Event analytics engine 108 combines such information with incoming information (e.g., from AWS Comprehend).

FIG. 1 also illustrates machine learning 114 (e.g., using AWS SageMaker), which can be used to analyze all of the data in the Columnar Datastore to look for additional intelligence or metrics that are useful to the user (e.g., politicians). Some of the information can be reformatted and displayed back into the App (e.g., mobile App), as well as a separate dashboard. Machine learning models can be used to take “sentiments” and turn them into “perceptions.” The perceptions can be used to identify engagement opportunities with citizens that are open to the right kind of conversation. These machine learning models will be regression-based auto-tuning models. This data and/or analysis can be sent to “Columnar Datastore” 112. Columnar Datastore 112 (e.g., using AWS Redshift) can be used by the system 100 to store all the information that comes in from the runtime system. This includes demographic data that is provided by the user or sourced from external systems. New demographic data can be fed directly into columnar datastore 112 (e.g., into AWS Redshift) such that the new demographic data can be used as part of the data models and present the information back to a user (e.g., an elected official) as part of an analytical summarization (illustrated on the rightmost side of FIG. 1). This information (demographic data, engagement data, sentiment analysis, perception analysis) is combined into a real time interaction data presentation widget that provides the most relevant data to the user (e.g., elected official) with a simple request (e.g., a single click).
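
As a hedged sketch of a regression-based, auto-tuned sentiment-to-perception model (scikit-learn stands in for AWS SageMaker here; the features, labels, and tuning grid are illustrative assumptions):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

# Illustrative features per viewer segment: [positive_score, negative_score,
# engagement_rate]; label 1 = perceived as "open to conversation".
X = np.array([[0.9, 0.05, 0.7], [0.2, 0.70, 0.1], [0.6, 0.20, 0.9],
              [0.1, 0.80, 0.2], [0.8, 0.10, 0.6], [0.3, 0.60, 0.3]])
y = np.array([1, 0, 1, 0, 1, 0])

# "Auto-tuning": a grid search selects the regularization strength automatically.
model = GridSearchCV(LogisticRegression(), {"C": [0.1, 1.0, 10.0]}, cv=3)
model.fit(X, y)

print(model.predict([[0.7, 0.1, 0.8]]))  # perception for a new segment -> [1]
```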

Referring now to FIG. 49, a system 4900 is illustrated implementing certain methods of the present disclosure. System 4900 includes a personal computing device 4902 (e.g., a smart phone) implementing an application 4904. Application 4904 is illustrated being displayed on a graphical user interface (GUI) 4906. Application 4904 is illustrated displaying a plurality of trending words 4908 and a plurality of trending phrases 4910. Personal computing device 4902 is illustrated in electronic communication with a server 4912 implementing Amazon Elastic Compute Cloud (EC2). Server 4912 is configured to receive information from personal computing device 4902 and provide a data set 4914 including User Generated Content (UGC). Once a stream is generated and sent to a server/system (e.g., Elemental (AWS)) for additional processing, the data set can be considered UGC by a transcoding server. User interaction data 4916 (e.g., user chat, user comments, user questions, etc.) is illustrated being extracted from data set 4914. System module 4918 is illustrated implementing an analytical process on the data. System module 4918 can implement AI/ML, smart processing NLP analysis, and clustering intelligence code (e.g., clustering Vurple Intelligence code, etc.). The analyzed data can be used to update the display on GUI 4906 of personal computing device 4902 (e.g., updating trending words, updating trending phrases, etc.).

Voice Command

A voice command can be used in connection with an application implementing certain methods of the present disclosure. For example, a phrase like “VIYAH” can be used to activate (e.g., wake up) an application such that the application is prepared and able to receive a voice command. This voice command can be used to request or command automated feature sets of the data or analytics (e.g., processed data accessible through a user dashboard). A voice command can be used in connection with an intelligent assistant (which can have voice activation optionality) and can give suggestions, information, and learned data points to aid in the success of user profiles. During a live stream, the voice command can be used by users (e.g., approved or verified live streaming profiles) to access systems and methods of the present disclosure. For example, a live broadcaster can use a voice command like “HEY VIYAH” followed by a series of commands and requests which can trigger macro code features within the application. For example, a live broadcaster can say: “Thanks everyone for asking that important question, I plan on touching upon that on Friday during my next LIVE (pause/beat) - HEY VIYAH, can you please send a reminder to my Vurple message box, to open Friday’s LIVE with addressing the new mask mandate? THANK YOU VIYAH!” In another example, a live broadcaster can say: “Thanks everyone for asking that important question, I plan on touching upon that on Friday during my next LIVE (pause/beat) - HEY VIYAH, can you please schedule the next Vurple LIVE for Friday October 1st at 10:00am Central time? - Thanks VIYAH.” In the previous example, the “HEY VIYAH” voice commands can trigger internal macro code features which can work in the background to give real time data to the broadcaster (e.g., on screen), place data or information into folders, and populate the user portal or dashboard with the requested data.
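
A minimal sketch of how transcribed speech might be routed to such macro features (the wake-phrase regex, macro keywords, and handlers are illustrative assumptions, not the disclosed implementation):

```python
import re

WAKE = re.compile(r"\bHEY VIYAH\b[,\s]*(.*)", re.IGNORECASE | re.DOTALL)

# Illustrative macro registry: keyword -> background action.
MACROS = {
    "reminder": lambda text: print(f"[macro] queue reminder: {text}"),
    "schedule": lambda text: print(f"[macro] schedule LIVE: {text}"),
}

def dispatch_voice_command(transcript: str) -> bool:
    """If the transcript contains the wake phrase, hand the trailing request to
    the first matching macro; return True when a macro fired."""
    match = WAKE.search(transcript)
    if not match:
        return False
    request = match.group(1)
    for keyword, handler in MACROS.items():
        if keyword in request.lower():
            handler(request)
            return True
    return False

# dispatch_voice_command("... HEY VIYAH, can you please send a reminder to my "
#                        "Vurple message box?")   # fires the "reminder" macro
```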

In certain embodiments, a voice command function can be implemented by a user (e.g., a broadcaster of a live stream) to access an internet connection during a live stream. The voice commanded function can include accessing a search engine from an internet service and/or synthesizing results to provide and display a singular answer. The voice command can implement a macro command and/or feature set to search the world wide web. For example, a user can access the internet to collect and/or display information to viewers of the live stream. Such information can be used in connection with additional analytical information (e.g., Vurplytics). For example, a broadcaster can say: “HEY VIYAH - What time does the state of the union air next week?” In another example, the broadcaster can say: “HEY VIYAH - Who was the 23rd President of the USA?” In another example, the broadcaster can say: “HEY VIYAH - what’s 70 x 7?” In each of these examples, the question can be answered by information collected from the internet and the requested information can be displayed to the viewers of the live stream (e.g., automatically).

Optimization of Live Stream Communications

Audio recognition capabilities can be implemented during a live broadcast. The audio of the live stream can be recorded, and the actual audio that the listener (e.g., viewing users) hears is an alternate track played with a slight purposeful latency integrated. This integrated latency can allow the original audio to be analyzed continuously in the audio mixer (i.e., using a Superpowered engine software instance mixer, such as a 4-channel virtual mixer). Thus, indicated key words (e.g., voice commands) can have macro features which push code to conduct certain analytical functions. Thus, live analysis of audio can be conducted during a live stream event. A continuum of live audio analysis can aid in predictive measures for live speech articulation (e.g., repetitive rhetoric can be sent via live notification to a live streaming host or user). After live streaming, the analysis can combine summarization, sentiment, and audio of the speech to gauge audience response (e.g., including a visualization graph showing the characteristics of recognized audio) to help correct future engagements. In certain embodiments, a signal (e.g., auditory, visual, etc.) can be provided to the live streaming host or user. For example, jingles of audio can play through an audio channel to the host signifying certain events or various cues (e.g., repetitive words by the host, lack of engagement by viewers, etc.). Certain viewers can be filtered out during live streaming such that the sentiment or response of a certain demographic can be determined. For example, viewer data (e.g., responses) can be filtered based on a specific single data feature, such as viewers aged 18-21.

Certain methods of the present disclosure can best be described in connection with FIG. 50. A live audio broadcast automation command can be used in connection with the voice command (e.g., “Hey VIYAH”). The live audio broadcast automation command can be used to allow for simultaneous audio to be processed for specific word recognition. Audio input for a mobile device 5000 can be used to record the audio of the broadcasting user. The audio of a video and audio stream 5002 can be processed into a 4-channel virtual mixer 5004, splitting the input (e.g., device microphone input) audio into “Channel 1” and “Channel 2.” Channel 1 can pass through as an output, stitched to a video, and transfer (e.g., in ALAC 24-bit 48 kHz) to an interactive video service (e.g., Amazon IVS Player) and/or a live video encoding application 5006 (e.g., AWS Elemental). Channel 2 audio can be used as a digital instance (e.g., a copy of audio from Channel 1). Channel 2 audio can be analyzed continuously by a player built into virtual mixer 5004 and can look for a multi-signal reference audio fingerprint with waveforms matching the voice command (e.g., “HEY VIYAH”). A database of fingerprinted algorithms (e.g., general sine curves) can be stored via low latency server 5010 (e.g., where data pull has only milliseconds of latency).

The output of Channel 1 audio can have a 3000-millisecond purposeful delay on the output side. Video can be matched for audio synchronization. Such synchronization can be done so that analysis (e.g., using code within the application) and processing of the voice command (e.g., the words “HEY VIYAH”) have ample time to respond, pull an audio fingerprint from the server (e.g., using sine matching), and respond with a set of given commands. Such synchronization and processing will ensure more of a real time experience for those watching and those broadcasting. Time spent waiting for the voice command can be cut from the live stream in real time (e.g., from the perspective of the viewer). From the perspective of the viewer, voice commanded functions by the broadcaster can be completed in real time and displayed nearly instantaneously on the viewer’s device 5008. Two additional audio channels can be used for additional live audio analysis enhancements such as repetitive rhetoric recognition, intensity of words, and volume increase during a live broadcast. Such analysis can ultimately be used as analytics in the user dashboard (e.g., by understanding and comparing speech between live broadcasts).
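
A sketch of the delayed Channel 1 output described above (the 3000-millisecond delay follows the disclosure; the frame size and the fingerprint-matching callback are illustrative assumptions):

```python
from collections import deque

FRAME_MS = 20                      # analysis frame size (assumption)
DELAY_FRAMES = 3000 // FRAME_MS    # the 3000 ms purposeful output delay

class DelayedChannel:
    """Channel 1 pass-through with a fixed delay, so the Channel 2 analyzer has
    time to flag wake-phrase audio before the frame reaches the broadcast."""

    def __init__(self, matches_wake_fingerprint):
        self.buffer = deque()
        self.matches = matches_wake_fingerprint  # Channel 2 fingerprint callback

    def push(self, frame: bytes):
        """Feed one input frame; return the frame due for output, or None.
        Frames flagged as voice-command audio are cut from the live output."""
        self.buffer.append((frame, self.matches(frame)))
        if len(self.buffer) < DELAY_FRAMES:
            return None                 # still filling the 3-second delay line
        frame_out, is_command = self.buffer.popleft()
        return None if is_command else frame_out
```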

User Dashboard (“V-Portal”)

A user dashboard (sometimes referred to as a “V-Portal”) can be displayed on a mobile device, a personal computing device, a web browser, or a similar electronic system. The user dashboard can display summarizations and related analytical data from a mobile application. Chat highlights, trending words, and trending phrases can be collected into a database. Multiple live streams can be analyzed and combined to get one data set of information from the live streams. Information from the DB can be processed for predictive measures and summarization, with parsing capabilities for future solo mode selection. During a live broadcast, “heat mapping” (e.g., of trending phrases and topics) and real time sentiment analysis (e.g., from ML/AI) can be pushed to the V-Portal and cross-referenced with prior live streams (e.g., to compare current audience emotions or sentiments with related prior speeches or performances). Smart analytics across all user features (e.g., chat posts, polls, trending words, trending phrases, trending symbols, etc.) can be collected into a database and processed through summarization and/or a predictive engine. Summarizations and “smart analyses” of all analytical data can be conducted using a “solo mode” (e.g., parsing features or filtering data based on a single criterion) for all databases and live broadcasts. The “solo mode” feature can allow specific single data features to be analyzed to the exclusion of other data sets in order to extract certain information based on the entirety of the chat. In effect, other data sets are muted to single out specifics based on the individual data set or demographic (e.g., filtering out data outside of a certain age group, racial group, ethnic group, professional group, geographic region, etc.). In certain embodiments, an application of the present disclosure can scrape data from social media (e.g., from linked social media accounts) with filter parameters using sourcing AI/ML and analyze all data.
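
“Solo mode” amounts to filtering the engagement data on one demographic feature; a minimal sketch (the record shape and the 18-21 age band from the example above are assumptions):

```python
def solo_mode(engagement_records, feature, predicate):
    """Mute every record whose demographic feature fails the predicate, so
    downstream analytics run on exactly one audience segment."""
    return [r for r in engagement_records
            if predicate(r["demographics"].get(feature))]

# e.g., sentiment of viewers aged 18-21 only:
# young = solo_mode(records, "age", lambda a: a is not None and 18 <= a <= 21)
```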

ENUMERATED EMBODIMENTS

The following enumerated embodiments are provided, the numbering of which is not to be construed as designating levels of importance.

Embodiment 1 provides a computer-implemented method, including: transmitting a live stream of an event using a computer application operating on a personal computing device; receiving viewer engagement data from viewers during the live stream; conducting a sentiment analysis of the viewer engagement data in real time; and relaying the sentiment analysis to a user.

Embodiment 2 provides the computer-implemented method of embodiment 1, further including: receiving individual demographic information from each viewer of the computer application.

Embodiment 3 provides the computer-implemented method of any one of embodiments 1-2, wherein the receiving individual demographic information is completed prior to the live stream.

Embodiment 4 provides the computer-implemented method of any one of embodiments 1-3, wherein the viewer engagement data includes one or more selected from the group consisting of: chat data, comment reactions, and polling data.

Embodiment 5 provides the computer-implemented method of any one of embodiments 1-4, wherein the sentiment analysis includes determining one or more selected from the group consisting of: moods, sentiments, and perceptions.

Embodiment 6 provides the computer-implemented method of any one of embodiments 1-5, wherein the transmitting includes a latency adjustment such that a viewer is viewing a delayed transmission.

Embodiment 7 provides the computer-implemented method of any one of embodiments 1-6, wherein the transmitting includes responding to voice commanded functions from the user.

Embodiment 8 provides the computer-implemented method of any one of embodiments 1-7, wherein the voice commanded functions provide a displayed result instantaneously from the perspective of the viewer.

Embodiment 9 provides the computer-implemented method of any one of embodiments 1-8, wherein the voice commanded functions include accessing a search engine from an internet service.

Embodiment 10 provides the computer-implemented method of any one of embodiments 1-9, wherein voice commanded functions from the user are completed instantaneously from the perspective of the viewer.

Embodiment 11 provides a computer-implemented method of providing a real-time and interactive engagement data classification for a multimedia content including: identifying engagement data from users during live streaming of a multimedia content; analyzing the engagement data from the users during live streaming of the multimedia content; linking the engagement data to demographic information of the users; and generating a report depicting classification of the engagement analytics with respect to the demographic information.

Embodiment 12 provides the computer-implemented method of any one of embodiments 1-10, further including: identifying engagement data from users during live streaming of a multimedia content; analyzing the engagement data from the users during live streaming of the multimedia content; linking the engagement data to demographic information of the users; and generating a report depicting classification of the engagement analytics with respect to the demographic information.

Embodiment 13 provides a system configured to implement the method of any one of embodiments 1-12, including: a personal computing device (e.g., a smart phone, a personal computer, etc.) of the user including a computer application (e.g., a mobile application, a web-based application, etc.); a plurality of servers in electronic communication with the personal computing device, the plurality of servers being configured and adapted to conduct the sentiment analysis in real time; and a plurality of personal computing devices (e.g., smart phones, personal computers, etc.) of the viewers; wherein the personal computing device of the user is configured and adapted to transmit an audio and video stream to the plurality of personal computing devices of the viewers; wherein the personal computing device of the user is configured and adapted to receive viewer engagement data from the plurality of personal computing devices of the viewers.

EQUIVALENTS

Although the invention has been described in terms of exemplary embodiments, it is not limited thereto. Rather, the appended claims should be construed broadly to include other variants and embodiments of the invention which may be made by those skilled in the art without departing from the scope and range of equivalents of the invention. This disclosure is intended to cover any adaptations or variations of the embodiments discussed herein.

INCORPORATION BY REFERENCE

The entire contents of all patents, published patent applications, and other references cited herein are hereby expressly incorporated herein in their entireties by reference.
