

Title:
CONSUMPTION OF A VIDEO FEED FROM A REMOTELY LOCATED CAMERA DEVICE
Document Type and Number:
WIPO Patent Application WO/2023/129561
Kind Code:
A1
Abstract:
Technologies are provided for consumption of a live video feed from a remotely located camera device. The live video feed can be consumed via a mobile application or a client application installed in a user device. Some of the technologies also permit interacting with a live video feed and analyzing video feeds (live video feeds or time-shifted video feeds, or both). Various types of analyses can be performed, resulting in generation of recommendations for reservations for space at a particular location, or recommendations for items that can be consumed at a location where a video feed originates. Some of the technologies also permit supplying directed content assets to user devices. A directed content asset can be provided as a push notification or can be presented with the mobile application.

Inventors:
WOODS JEREMIAH (US)
LOVELACE BRIAN (US)
Application Number:
PCT/US2022/054100
Publication Date:
July 06, 2023
Filing Date:
December 27, 2022
Assignee:
WOODS JEREMIAH (US)
LOVELACE BRIAN (US)
International Classes:
H04N7/18; H04N23/63; H04N23/661
Foreign References:
US10270959B12019-04-23
US20200183975A12020-06-11
US20090252383A12009-10-08
US20110007159A12011-01-13
US20140313341A12014-10-23
US20130286153A12013-10-31
Other References:
ALI GHULAM, ALI TARIQ, IRFAN MUHAMMAD, DRAZ UMAR, SOHAIL MUHAMMAD, GLOWACZ ADAM, SULOWICZ MACIEJ, MIELNIK RYSZARD, FAHEEM ZAID BIN: "IoT Based Smart Parking System Using Deep Long Short Memory Network", ELECTRONICS, vol. 9, no. 10, pages 1696, XP093078539, DOI: 10.3390/electronics9101696
RICHARDS PAUL: "Using PTZ Cameras for Remote Production", STREAMGEEKS, 6 December 2021 (2021-12-06), XP093078541, Retrieved from the Internet [retrieved on 20230904]
Attorney, Agent or Firm:
BROWN, Charley, F. et al. (US)
Claims:
CLAIMS

What is claimed is:

1. A computing system, comprising: at least one processor; and at least one memory device having computer-executable instructions stored thereon that, in response to execution by the at least one processor, cause the computing system to: receive, from a client device, selection data identifying a subscriber account corresponding to a location having installed therein one or more camera devices, the location being remotely located relative to the computing system; receive, from the client device, positioning data defining a vantage point of a first camera device of the one or more camera devices, the first camera device being placed in a defined area of the location; cause the first camera device to rotate about at least one axis according to the positioning data; and provision a user device to receive a stream of video data from the first camera device.

2. The computing system of claim 1, wherein the positioning data comprises one or more of first data defining a pan angle of the first camera device or second data defining a tilt angle of the first camera device.

3. The computing system of claim 1, the at least one memory device having further computer-executable instructions stored thereon that, in response to execution by the at least one processor, further cause the computing system to: select a first location from multiple locations comprising the location by applying a machine-learning model to respective video feeds from the multiple locations; and determine space availability at defined time intervals at the selected first location.

4. The computing system of claim 1, the at least one memory device having further computer-executable instructions stored thereon that, in response to execution by the at least one processor, further cause the computing system to adjust at least one of a keyword or a key-phrase in a subscriber profile according to a defined criterion for searchability of the subscriber profile, the subscriber profile contained in the subscriber account.

5. The computing system of claim 1, the at least one memory device having further computer-executable instructions stored thereon that, in response to execution by the at least one processor, further cause the computing system to: configure a directed content campaign using data from the client device, the data comprising first data defining a first attribute of the directed content campaign and second data defining a second attribute of the directed content campaign; and cause the user device to present directed content according to the directed content campaign during presentation of at least a portion of the stream of video data.

6. The computing system of claim 4, wherein configuring the directed content campaign comprises configuring a type of messages to be received by the user device at a communication address of the user device, a message including at least a portion of the directed content.

7. The computing system of claim 5, wherein the type of messages comprises a push notification, the at least one memory device having further computer-executable instructions stored thereon that, in response to execution by the at least one processor, further cause the computing system to: detect a defined person within the location by applying a face-recognition model to a portion of the stream of video data; and send a push notification conveying directed content to a second user device pertaining to the defined person.

8. The computing system of claim 1, the at least one memory device having further computer-executable instructions stored thereon that, in response to execution by the at least one processor, further cause the computing system to generate one or more datasets individually indicative of interaction of the user device with the directed content.

9. The computing system of claim 1, the at least one memory device having further computer-executable instructions stored thereon that, in response to execution by the at least one processor, further cause the computing system to access, via one or more application programming interfaces (APIs), a third-party computing subsystem remotely located relative to the computing system, the third-party computing subsystem being one of a third-party service subsystem or a third-party social network subsystem.

Description:
CONSUMPTION OF A VIDEO FEED FROM A REMOTELY LOCATED CAMERA DEVICE

CROSS-REFERENCE TO RELATED APPLICATIONS

[0001] This application claims the benefit of U.S. Provisional Application No. 63/294,244 filed December 28, 2021, which is incorporated herein by reference in its entirety.

BACKGROUND

[0002] A venue or location may be related to activities or services. The activities or services may be unavailable for viewing on a computing device. Issues may arise without available recordation of the activities or services.

SUMMARY

[0003] Technologies are provided for consumption of a live video feed from a remotely located camera device. The live video feed can be consumed via a mobile application or a client application installed in a user device. Some of the technologies also permit interacting with a live video feed and analyzing video feeds (live video feeds or time-shifted video feeds, or both). Various types of analyses can be performed, resulting in generation of recommendations for reservations for space at a particular location, or recommendations for items that can be consumed at a location where a video feed originates. Some of the technologies also permit supplying directed content assets to user devices. A directed content asset can be provided as a push notification or can be presented with the mobile application.

BRIEF DESCRIPTION OF THE DRAWINGS

[0004] FIG. 1 illustrates an example of an operational environment in accordance with one or more embodiments of this disclosure.

[0005] FIG. 2A illustrates examples of subsystems that constitute backend server devices in accordance with one or more embodiments of this disclosure.

[0006] FIG. 2B illustrates an example of components of a user device in accordance with one or more embodiments of this disclosure.

[0007] FIG. 3A illustrates an example of another operational environment, in accordance with one or more embodiments of the disclosure.

[0008] FIG. 3B illustrates an example of a machine-learning model (e.g., a deep learning neural network) for video analysis, in accordance with one or more embodiments of the disclosure.

[0009] FIG. 4A illustrates an example of a user interface, in accordance with one or more embodiments of the disclosure.

[0010] FIG. 4B illustrates another example of a user interface, in accordance with one or more embodiments of the disclosure.

[0011] FIG. 4C illustrates an example of a process flow for directed content placement, in accordance with one or more embodiments of the disclosure.

[0012] FIG. 5 illustrates another example of an operational environment, in accordance with one or more embodiments of the disclosure.

[0013] FIG. 5A illustrates an example of a machine-learning model, in accordance with one or more embodiments of the disclosure.

[0014] FIG. 5B illustrates an example of a model used to incrementally update a predictive model, in accordance with one or more embodiments of the disclosure.

[0015] FIG. 6 illustrates yet another operational environment, in accordance with one or more embodiments of this disclosure.

[0016] FIG. 7 illustrates an example of another operating environment to implement various aspects of this disclosure in connection with consumption and/or interaction with a video feed (live video feed or otherwise).

DETAILED DESCRIPTION

[0017] Technologies are provided for consumption of a live video feed from a remotely located camera device. The live video feed can be consumed via a mobile application or a client application installed in a user device. Some of the technologies also permit interacting with a live video feed and analyzing video feeds. The video feeds can be live video feeds or time-shifted video feeds, or both. Various types of analyses can be performed, resulting in generation of recommendations for reservations for space at a particular location, or recommendations for items that can be consumed at a location where a video feed originates. Some of the technologies also permit supplying directed content assets to user devices. A directed content asset can be provided as a push notification or can be presented with the mobile application.

[0018] The technologies described herein can be embodied in computing systems, computing devices, computer-implemented methods, computer program products, and the like. The various functionalities described herein can be accomplished by one or more technologies, individually or in combination.

[0019] FIG. 1 illustrates an example of an operational environment 100 in accordance with one or more embodiments of this disclosure. The operational environment 100 includes multiple camera devices placed at different locations. Specifically, a first group of camera devices 104 can be placed at a first location 102. The first group of cameras 104 can include a first camera device 104a, a second camera device 104b, and a third camera device 104c. In some cases, as is illustrated in FIG. 1, camera devices in the first group of camera devices 104 can be placed at respective sections of the first location 102. Thus, the first camera device 104a can be placed in a first area 106a, the second camera device 104b can be placed in a second area 106b, and the third camera device 104c can be placed in a third area 106c. Depending on the footprint of the first location 102, the respective areas can cover one or several indoor spaces or one or several outdoor spaces, or a combination of indoor space(s) and outdoor space(s). The first location 102 can be a restaurant, a dog kennel, a home-improvement store, or similar.

[0020] The operational environment 100 further includes a second group of camera devices 114 that can be placed at a second location 112. The second group of cameras 114 can include a first camera device 114a, a second camera device 114b, and a third camera device 114c. In some cases, as is illustrated in FIG. 1, camera devices in the second group of camera devices 114 can be placed at respective sections of the second location 112. Thus, the first camera device 114a can be placed in a first area 116a, the second camera device 114b can be placed in a second area 116b, and the third camera device 114c can be placed in a third area 116c. Depending on the footprint of the second location 112, the respective areas can cover one or several indoor spaces or one or several outdoor spaces, or a combination of indoor space(s) and outdoor space(s). The second location 112 can be a restaurant, a dog kennel, a home-improvement store, or similar.

[0021] Each camera device of the first group of camera devices 104 can send video data to one or more server devices 108 housed within premises or dedicated storage in the first location 102. To that end, at least one of the server device(s) 108 can be functionally coupled to the first group of cameras 104. The server device(s) 108 and the first group of cameras 104 can be functionally coupled in numerous configurations, such as a one-to-many configuration or a many-to-many configuration. Regardless of the coupling configuration between the server device(s) 108 and the first group of camera devices 104, the server device(s) 108 include a first server device, such as a server device 108a. In some embodiments, the first server device can be functionally coupled to each camera device of the first group of camera devices 104. For example, the server device 108a can be functionally coupled to each one of the first camera device 104a, the second camera device 104b, and the third camera device 104c. Accordingly, in such embodiments, a first camera device of the first group of camera devices 104 can send first video data to the first server device, and a second camera device of the first group of camera devices 104 can send second video data to the first server device.

[0022] Further, each camera device of the second group of camera devices 114 also can send video data to one or more server devices 118 housed within premises or dedicated storage in the second location 112. To that end, at least one of the server device(s) 118 can be functionally coupled to the second group of cameras 114. The server device(s) 118 and the second group of cameras 114 can be functionally coupled in numerous configurations, such as a one-to-many configuration or a many-to-many configuration. Regardless of the coupling configuration between the server device(s) 118 and the second group of camera devices 114, the server device(s) 118 include a first server device, such as a server device 118a. In some embodiments, the first server device can be functionally coupled to each camera device of the second group of camera devices 114. For example, the server device 118a can be functionally coupled to each one of the first camera device 114a, the second camera device 114b, and the third camera device 114c. Accordingly, in such embodiments, a first camera device of the second group of camera devices 114 can send first video data to the first server device, and a second camera device of the second group of camera devices 114 can send second video data to the first server device.

[0023] The server device(s) 108 can send video data to one or more first server devices of multiple backend server devices 130. To that end, the server device(s) 108 can be functionally coupled to the first server device(s) of the multiple backend server devices 130 by means of at least one network of one or more networks 120. The at least one network can be embodied in a wireless network or a wireline network, or a combination of both. The one or more first server devices can embody respective one or more first gateways of multiple gateways 134. The first gateway(s) can send received video data to at least one of multiple subsystems 138.

[0024] A first subsystem of the multiple subsystems 138 can relay video data 152 to a user device 160. In some embodiments, as is illustrated in FIG. 2A, the first subsystem can be embodied in a content distribution subsystem 210. The user device 160 is represented by a mobile smartphone simply for the sake of illustration. The user device 160 can be embodied in other types of user devices, such as a tablet computer, a laptop computer, a gaming console, a personal computer, or similar devices.

[0025] The user device 160 can execute a client application (such as a mobile application or a web browser) to consume the video data 152. The user device 160 can present the video stream of an area of a location (imaged by a camera device) in a user interface 170. A display device (not depicted in FIG. 1) can present the user interface 170. The user interface 170 can include a viewing pane 174 that presents the video stream. The user interface can be referred to as a streaming business profile. In some embodiments, as is illustrated in FIG. 2B, the user device 160 can execute a mobile application 274 to cause a display device 260 to present the user interface 170. The mobile application 274 can be retained in one or more memory devices 270. The disclosure is not limited to the mobile application 274. In some embodiments, the user device 160 can execute another type of client application, such as a web browser, to cause the display device 260 to present the user interface 170.

[0026] One or more of the subsystems 138 also can provide a rich audio streaming experience at the user device 160 in connection with a video stream from a particular location (e.g., the location 102 or the location 112). A broadcaster can stream high-quality original audio via mixer hardware that is physically present at the particular location of the stream. The broadcaster can be a business broadcaster or an individual broadcaster. To stream the high-quality original audio, the content distribution subsystem 210 (FIG. 2A) can receive an audio signal from the mixer hardware, and can stream the audio signal in real-time or essentially in real-time. The audio signal can be synchronized with a live video stream from the particular location. The mixer hardware can permit a broadcaster (or a subscriber account associated therewith) to control the levels of the audio output signal corresponding to the video stream. The mixer hardware also can permit controlling the audio output signal from individual instruments used to generate audio at the particular location during a live event. Controlling the audio output signal can shape that signal in a desired fashion. As a result, the audio delivered by the content distribution subsystem 210 to an end-user can, in some cases, be better than the audio experienced at the live event. The content distribution subsystem 210 can deliver the shaped audio output signal by embedding one or more audio channels within the video data 152.
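The disclosure does not prescribe a specific mechanism for embedding the mixer audio into the video stream. As a minimal sketch, assuming the mixer feed is captured as a separate audio input, the widely available ffmpeg tool can mux the mixer channel in place of a camera's on-board audio; the function name and file names below are hypothetical.

```python
import subprocess

def embed_mixer_audio(video_src: str, mixer_audio_src: str, output: str) -> None:
    """Mux a mixer-hardware audio feed into a camera's video stream.

    The video track is copied untouched; the mixer feed is encoded as an
    AAC audio channel embedded in the same container.
    """
    subprocess.run(
        [
            "ffmpeg",
            "-i", video_src,        # video from the camera device
            "-i", mixer_audio_src,  # high-quality feed from the mixer hardware
            "-map", "0:v:0",        # keep the video stream of the first input
            "-map", "1:a:0",        # take audio only from the mixer input
            "-c:v", "copy",         # do not re-encode video
            "-c:a", "aac",          # encode the mixer feed for streaming
            "-shortest",            # stop when the shorter input ends
            output,
        ],
        check=True,
    )

# Hypothetical usage:
# embed_mixer_audio("camera_feed.mp4", "mixer_feed.wav", "stream_segment.mp4")
```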

[0027] As an illustration, the particular location can be a club, and a band performing at the club can incorporate a source audio output signal into a video stream of the performance. A device (such as a webcam or a smartphone) that generates video data defining the video stream can have mixer hardware integrated thereon or functionally coupled thereto. Thus, instead of using a microphone integrated into the device, the band can use the mixer hardware to control volume levels and, in some cases, other waveform features, of each instrument used by the band in the performance. Such a live (or, in some cases, time-shifted) audio experience may not be achieved using existing streaming technologies.

[0028] The user device 160 also can receive digital content 154 that can augment the video stream. The user device 160 can present the digital content 154 in the viewing pane 174. In some cases, the digital content 154 can be presented as an overlay on video shown in the viewing pane 174. A second subsystem of the multiple subsystems 138 can provide the digital content 154. In some embodiments, as is shown in FIG. 2A, the second subsystem can be embodied in a directed content subsystem 220. To provide the digital content 154, the second subsystem can be functionally coupled to multiple backend storage devices 140 including a media repository. The media repository can be included within multiple data repositories 148. In some embodiments, the digital content 154 can include directed content. In this disclosure, directed content refers to digital media configured for a particular audience, or a particular outlet channel (such as a website, a streaming service, or a mobile application), or both. Directed content can include, for example, digital media of various types, such as advertisements; promotional content (e.g., a coupon or a discount); surveys or other types of questionnaires; motion pictures, animations, or other types of video segments; audio segments of defined durations (e.g., a product or service review); and similar media.

[0029] In some embodiments, instead of augmenting the video stream, the second subsystem can send digital content 154 as a push notification. As a result, directed content can be presented at the user device 160 even when the user interface 170 is not actively presented. An end-user can interact with such directed content, causing the user device 160 to present another user interface (e.g., web browser or the client application) in order to consume the directed content more fully.

[0030] Execution of the client application also can provide other functionality. At least some of that functionality can be accessed by means of selectable visual elements 178 included in the user interface 170 that is presented at the user device 160. Selection of one of the selectable visual elements 178 can cause the user device 160 to send control signaling 156 (represented by control 156 in FIG. 1). The control signaling 156 can be sent in response to execution of the client application. In some embodiments, a selection of a visual element of the selectable visual elements 178 can result in viewing, saving, or redeeming promotional content shown in the viewing pane 174.

[0031] In addition, or in other embodiments, selection of a visual element of the selectable visual elements 178 also can permit adding a video feed for a particular location. Selection of that visual element can cause the user device 160 to present a query composition interface. The query composition interface can be presented in a stand-alone fashion (e.g., as a new user interface) or as an overlay on the user interface 170. Regardless of the type of presentation, the query composition interface can permit generating a query for sources of video data according to one or more of geolocation (“Dog kennels near me”), city, state, ZIP Code, country, industry/category, business name, or keyword(s). Such video data (and the video feeds represented by such data) can be retained within the data repository 148. The video data can be organized by channel category, for example. Examples of channel categories include gaming; tutorials; how-to/DIY; education; music; band; dance; cooking; technology; podcasts; beauty; health and fitness; diet and nutrition; vlog; comedy; sports; news/journalism; lifestyle; travel; art; events; weddings; funerals; and resorts. In some embodiments, an end-user can enter a new channel category within the query composition interface. One or more of the subsystems 138 can utilize the new channel category as a suggestion for a channel category. The content distribution subsystem 210 can procure video content based on that suggestion. In some cases, the query composition interface can present a menu of categories in order to permit or otherwise facilitate the generation of the query.

[0032] In response to the query, the user device 160 can present a listing of locations having available video feeds. The listing of locations can be presented in a second user interface. The second user interface can be presented in a stand-alone fashion or as an overlay on the user interface 170.

[0033] To obtain data defining the listing of locations, the user device 160 can send control signaling 156 including the generated query. A source of video data can be an entity that subscribes to a service for delivery of video feeds, among other functionalities. A particular subsystem (e.g., the search subsystem 230) of the subsystems 138 can receive the query and, in response, can resolve the query using data within subscriber accounts 146 retained within one or more memory devices 144 (generically referred to as account repository 144). The account repository 144 can be retained within the multiple backend storage devices 140 functionally coupled to at least one of the backend service devices 130. In response to resolving the query, the particular subsystem can send the data defining the listing of locations to the user device 160.
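As a concrete illustration of the query-and-resolution flow of paragraphs [0031]-[0033], the following Python sketch matches a query against dictionary-backed subscriber accounts. The FeedQuery fields mirror the query criteria listed above; the class, function, and field names are illustrative assumptions rather than elements of the disclosure.

```python
from dataclasses import dataclass, field

@dataclass
class FeedQuery:
    """Query for sources of video data (field names are assumptions)."""
    city: str | None = None
    state: str | None = None
    zip_code: str | None = None
    category: str | None = None      # channel category, e.g., "cooking"
    keywords: list[str] = field(default_factory=list)

def resolve(query: FeedQuery, subscriber_accounts: list[dict]) -> list[dict]:
    """Return subscriber accounts whose profiles match every populated field."""
    def matches(account: dict) -> bool:
        profile = account.get("profile", {})
        for attr in ("city", "state", "zip_code", "category"):
            wanted = getattr(query, attr)
            if wanted and profile.get(attr) != wanted:
                return False
        text = " ".join(str(v) for v in profile.values()).lower()
        return all(kw.lower() in text for kw in query.keywords)
    return [account for account in subscriber_accounts if matches(account)]

# Example: find dog kennels in a given ZIP code.
accounts = [{"profile": {"name": "Happy Paws Kennel", "zip_code": "54321",
                         "category": "dog kennel"}}]
print(resolve(FeedQuery(zip_code="54321", keywords=["kennel"]), accounts))
```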

[0034] The second user interface that presents the listing of locations having corresponding video feeds can include, for each listed location, one or more selectable visual elements that permit accessing information pertaining to a subscriber (e.g., a dog kennel) corresponding to the listed location. A first selectable visual element of the selectable visual elements can permit accessing location information, such as distance from a reference location to a subscriber. To access such information, the user device 160 can send the control signaling 156 including a query for the location of the subscriber. A particular subsystem (e.g., the search subsystem 230) can resolve such a query using the account repository 144, and can send location information to the user device 160. The user device 160 can present a new user interface, either as a stand-alone user interface or an overlay, including the location information.

[0035] Further, or in yet other embodiments, a second selectable visual element can permit accessing a subscriber profile of the subscriber. The subscriber profile can include a description of the subscriber, services provided by the subscriber (e.g., daycare, grooming, and boarding), and/or hours of operation of the subscriber. To access the subscriber profile, the user device 160 can send the control signaling 156 including a query for such a profile. The particular subsystem (e.g., the search subsystem 230) can resolve that query using the account repository 144, and can send data defining the subscriber profile to the user device 160. The user device 160 can present a new user interface, either as a stand-alone user interface or an overlay, including the subscriber profile or one or more elements thereof (e.g., hours of operation).

[0036] Further, or in still other embodiments, the selectable visual element(s) presented in conjunction with the listing of locations can include a third selectable visual element that permits selecting one or more video feeds for the subscriber corresponding to a listed location. In response to that selectable visual element being selected, the user device 160 can send request data within the control signaling 156, the request data identifying a subscriber corresponding to a listed location. The subsystems 138 can include a particular subsystem that can receive the request data and can provision one or more camera devices pertaining to the subscriber as respective sources of video data.

[0037] In some embodiments, at least one of the selectable visual elements 178 can permit submitting conversation posts on a video feed presented in the viewing pane 174.

[0038] FIG. 3A illustrates an example of an operational environment 300 for configuration of functionality of a subscriber that provides video feed(s) and/or services, in accordance with one or more embodiments of the disclosure. The functionality can be configured using a client device 310 executing a client application 316 retained in one or more memory devices 314. The client device 310 can be a mobile computing device (such as a laptop computer or a tablet computer) or a tethered computing device (such as a PC or a blade server). In some cases, the client device 310 can be embodied in the server device 108a or the server device 118a.

[0039] Execution of the client application 316 can cause the client device 310 to present a user interface 320 that provides a management portal. The user interface 320 can include a title 322 and a menu of configuration options 324. In some cases, the user interface 320 can embody a landing page of a web-based portal. An example of the user interface 320 is shown in FIG. 4A. The menu of configuration options 324 can be embodied in multiple selectable visual elements, each providing access to a configuration option. Selection of a particular selectable visual element can cause the client device to present another user interface (an overlay, for example) that provides access to one or more of the subsystems 138 included in the backend server devices 130. To that end, the client device 310 can be functionally coupled to the backend server devices 130 via at least one of the network(s) 120.

[0040] The menu of configurations 324 can include a first configuration option that permits generating a subscriber account within the account repository 144. In some cases, the first configuration option can be presented as a selectable visual element 416 (FIG. 4A). Selection of the selectable visual element 416 can cause the client device 310 to present a user interface that permits entering subscriber information and submitting the subscriber information to an account creation subsystem 330. The subscriber information can include the location of the subscriber and associated profile information, for example. The account creation subsystem 330 can receive the subscriber information and can create a subscriber account 336 that can be retained with the extant subscriber accounts 146. The subscriber account 336 that has been created can include a subscriber profile 338 including various information characterizing the subscriber; e.g., description, hours of operation, services provided, and similar information. The subscriber profile 338 can be embodied in, or can constitute, a stand-alone business profile page with a search-engine friendly uniform resource locator (URL). For purposes of illustration, a search-engine friendly URL can include an entity name (such as a company name or another type of business name) and an address of the entity instead of a string of characters (numbers, letters, symbols, or a combination thereof) that defines an identification (ID) code for the entity. An example of a search-engine friendly URL is https://livlivestream.com/stream/getswolefitness-1234mainstreet-city-state-54321. By using a search-engine friendly URL, a business profile may receive a higher ranking in a search on a search platform (custom or commercially available).

[0041] The account creation subsystem 330, in some embodiments, can adjust the subscriber profile 338 to introduce keywords or other terms (such as keyphrases) that can provide a satisfactory (e.g., optimal or nearly optimal) search performance. The account creation subsystem 330 can introduce a keyword or a keyphrase by replacing term(s) and/or adding other term(s) in the received subscriber information.
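The following Python sketch reproduces the style of the search-engine friendly URL described in paragraph [0040]; the function name and the segmentation of the address are assumptions chosen to match the example URL.

```python
import re

def business_profile_url(base: str, business_name: str, address_parts: list[str]) -> str:
    """Compose a search-engine friendly profile URL from an entity name and
    address parts, rather than from an opaque ID code."""
    def slug(text: str) -> str:
        # Lowercase and strip everything except letters and digits.
        return re.sub(r"[^a-z0-9]", "", text.lower())

    segments = [slug(business_name)] + [slug(part) for part in address_parts]
    return f"{base.rstrip('/')}/stream/" + "-".join(s for s in segments if s)

print(business_profile_url(
    "https://livlivestream.com",
    "Get Swole Fitness",
    ["1234 Main Street", "City", "State", "54321"],
))
# -> https://livlivestream.com/stream/getswolefitness-1234mainstreet-city-state-54321
```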

[0042] The menu of configurations 324 can include a second configuration option that permits configuring access to a video feed. Such an option can permit defining periods during which a video feed is available for consumption. In some cases, the second configuration option can be presented as a selectable visual element 418 (FIG. 4A). Selection of the selectable visual element 418 can cause the client device 310 to present a user interface that permits entering one or more periods within a day and/or within a week, for example, during which video data generated by a camera device at a particular location can be available for consumption by a user device.

[0043] Such a user interface also can permit submitting data identifying such period(s) to an access control subsystem 340. The access control subsystem 340 can receive the data and can update access control data 344 corresponding to the subscriber account being managed. That subscriber account can be the subscriber account 336, for example.
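A minimal sketch of the access-control check, assuming the access control data 344 reduces to per-weekday viewing windows; the data layout and function name are assumptions, since the disclosure only requires that per-day and/or per-week periods be definable.

```python
from datetime import datetime, time

# Hypothetical access-control data: weekday (0 = Monday) -> viewing windows.
ACCESS_WINDOWS = {
    0: [(time(9, 0), time(17, 0))],                              # Monday
    5: [(time(8, 0), time(12, 0)), (time(14, 0), time(18, 0))],  # Saturday
}

def feed_available(now: datetime, windows=ACCESS_WINDOWS) -> bool:
    """Return True if the video feed may be consumed at the given instant."""
    for start, end in windows.get(now.weekday(), []):
        if start <= now.time() <= end:
            return True
    return False

print(feed_available(datetime(2023, 1, 2, 10, 30)))  # a Monday, 10:30 -> True
```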

[0044] The menu of configurations 324 also can include a third configuration option that permits configuring orientation and/or zoom functionality of a camera device within a defined location corresponding to a subscriber account. In some cases, the third configuration option can be presented as a selectable visual element 420 (FIG. 4A). Selection of the selectable visual element 420 can cause the client device 310 to control the camera device at the defined location. For instance, the camera device can be the camera device 104b at section 106b of the location 102. In particular, yet not exclusively, the selection of the selectable visual element 420 can permit changing pan and/or tilt of the camera device, thus changing the vantage point of the camera device. In addition, or in other cases, selection of the visual element can permit controlling zoom functionality of the camera device.

[0045] To that end, the client device 310 can present another user interface in response to selection of the camera option 420. That user interface can include a viewing pane presenting images generated by the camera device (e.g., the camera device 104b) and respective selectable UI elements for pan and tilt of the camera device. In some embodiments, the user interface can include a single selectable UI element that permits adjusting an orientation of the camera device. In some cases, the user interface also can include one or several other selectable UI elements that permit controlling presentation of the images. User interface 450 shown in FIG. 4B is an example of such a user interface. The user interface 450 includes a viewing pane 460 that presents a stream of images generated by the camera device (denoted as “Camera #1” in FIG. 4B). The user interface 450 includes a selectable UI element 470 that serves as a digital joystick that permits adjusting the orientation of the camera device. The user interface 450 also includes a selectable UI element 474 and a selectable UI element 478 that permit stopping and resuming the presentation of the stream of images, respectively. The user interface 450 also includes a selectable UI element 472 that, when selected, causes the stream of images to be presented continuously.

[0046] In response to selection of one or more of the selectable UI elements, the client device 310 can send control signaling to a camera control subsystem 350. Such a subsystem can provide one or several application programming interfaces (APIs) defining control instructions and other functionality to control a camera device. The control signaling can include a control instruction to configure orientation of the camera device and address data defining a network address of the camera device. The network address can be an IP address, for example. The camera control subsystem 350 can receive such control signaling via a particular gateway (not depicted in FIG. 3A for the sake of clarity) of the gateways 134 (FIG. 1). The camera control subsystem 350 can send the control instruction to the addressed camera device to cause a change in orientation and/or zoom configuration of the addressed camera device.
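The disclosure specifies that control signaling carries a control instruction plus the camera's network address, but not a wire format. The sketch below assumes a hypothetical JSON `/control` endpoint exposed at the camera's IP address; the payload schema is illustrative only.

```python
import json
import urllib.request

def send_camera_instruction(camera_ip: str, pan_deg: float, tilt_deg: float,
                            zoom: float | None = None) -> None:
    """Send an orientation/zoom control instruction to an addressed camera."""
    payload = {"pan": pan_deg, "tilt": tilt_deg}
    if zoom is not None:
        payload["zoom"] = zoom
    request = urllib.request.Request(
        f"http://{camera_ip}/control",        # hypothetical control endpoint
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request, timeout=5) as response:
        response.read()  # raises on transport errors; body is ignored here

# e.g., send_camera_instruction("192.0.2.7", pan_deg=15.0, tilt_deg=-5.0, zoom=2.0)
```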

[0047] The menu of configurations 324 also can include a fourth configuration option that permits configuring a directed content campaign for real-time directed content during delivery of video data from a camera device. In some cases, the fourth configuration option can be presented as a selectable visual element 422 (FIG. 4A). Selection of the selectable visual element 422 can cause the client device 310 to present another user interface that permits defining attributes of the directed content campaign and/or providing a directed content asset, e.g., an advertisement or other digital images, a jingle or other audio segment, or similar content. The attributes can include time availability (e.g., a period of time during which an impression is permitted); number of impressions during a defined time interval per user device; duration of an impression; name of the directed content campaign; value and number of incentives; a number of times that an incentive can be redeemed; a number of times that a particular combination of incentives can be redeemed; a combination thereof; or similar. In some embodiments, an attribute can define a threshold number of incentives that can be redeemed across two or more directed content campaigns. An attribute can be defined by creating the attribute, modifying the attribute, or deleting the attribute. Deleting the attribute can define the attribute as a void attribute that is no longer applicable to the directed content campaign, for example.
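A sketch of how the campaign attributes listed above might be represented, including the void-attribute semantics of deletion; the field names are assumptions, as the disclosure enumerates attribute types without fixing a data model.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CampaignAttributes:
    """Illustrative directed content campaign attributes (names assumed)."""
    name: str
    available_from: datetime             # time availability window
    available_until: datetime
    impressions_per_device: int          # per defined time interval
    impression_duration_s: float
    incentive_value: float | None = None
    incentive_count: int | None = None
    max_redemptions: int | None = None   # times an incentive can be redeemed

def delete_attribute(campaign: CampaignAttributes, attribute: str) -> None:
    """Deleting an attribute marks it void, i.e., no longer applicable."""
    setattr(campaign, attribute, None)
```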

[0048] The user interface that is presented in response to selection of the directed content option 422 also can permit sending data defining the attribute(s) and/or media asset(s) to the directed content subsystem 220. The directed content subsystem 220 can update campaign data 354 to retain received attribute(s) and media asset(s). Such an update can configure a current directed content campaign for a particular subscriber account.

[0049] Other functionality related to a directed content campaign also can be configured in some embodiments. For example, the user interface that is presented in response to selection of the directed content option 422 can permit configuring a type of directed content notification that the user device 160 can receive. That user interface can permit providing a communication address corresponding to the user device 160, such as a telephone number or an email address. The user interface also can permit providing a type of message to be sent to the communication address. Examples of types of messages include push notification, short message service (SMS) message, multimedia messaging service (MMS) message, iMessage, email, and the like.

[0050] In some configurations, selection of the directed content option 422 can permit creating and sharing an event at a particular location corresponding to a subscriber. To that end, selection of the directed content option 422 can cause the client device 310 to present another user interface. Again, the other user interface can be presented in a stand-alone format or as an overlay on the user interface 320. That other user interface can permit providing data that defines the event, and also can permit sending the data to the directed content subsystem 220. The directed content subsystem 220 can then send data defining the event to the user device 160 within the digital content 154. The directed content subsystem 220 also can send such data to other user devices that receive video data from the particular location.

Simply as an illustration, the particular location can be the location of a dog kennel that offers canine daycare service. An administrator device that embodies the client device 310 can present the user interface responsive to selection of the directed content option 422. The administrator device also can receive input data that defines an event at the dog kennel. An example of the event can be a group dog walk at a park located near the dog kennel. After receiving the input data, the directed content subsystem 220 can send data identifying the event to multiple user devices that receive one or more video feeds from the dog kennel.

[0051] The menu of configurations 324 also can include a fifth configuration option that permits viewing information on viewership of a video feed for a particular subscriber. In some cases, the fifth configuration option can be presented as a selectable visual element 424 (FIG. 4A). Selection of the selectable visual element 424 can cause the client device 310 to present another user interface including multiple visual elements (e.g., a dial diagram and a chart) presenting respective datasets identifying aspects of viewership. That other user interface can be embodied in a dashboard, for example. As an illustration, a first dataset can be indicative of historical viewership; a second dataset can be indicative of a number of impressions of directed content for the video feed; a third dataset can be indicative of coupons or other incentives redeemed during presentation of the video feed; and a fourth dataset can be indicative of a number of distinct user accounts that consume the video feed. Such a number can be colloquially referred to as the “number of fans” of the subscriber that provides the video feed. For example, a dog boarding facility can stream video feeds of areas in the facility, and customers of that facility can be fans of the facility.

[0052] Specifically, selection of the analytics option 424 can cause the client device 310 to send a request for viewership datasets to an analytics subsystem 360. In response, the analytics subsystem 360 can access such datasets from one or more of the data repositories 148 (FIG. 1) and can then send the datasets to the client device 310. Although in some embodiments the client application 316 can include formatting information to present a viewership dataset, the analytics subsystem 360 can send formatting information corresponding to a dataset sent to the client device 310.

[0053] The disclosure is not limited to viewership data. In some embodiments, the dashboard presented at the user device 160 also can present key performance indicators (KPIs) as a function of time. The dashboard also can present KPI trends. Other analytics metrics also can be presented. The user device 160 can obtain the datasets defining the KPIs, for example, from the analytics subsystem 360.

[0054] In addition, or in some cases, selection of the analytics option 424 can cause the client device 310 to send a request for aggregated viewership data and/or aggregated video data to the analytics subsystem 360. In response, the analytics subsystem 360 can access viewership data and/or video data, and can operate on such data to generate the aggregated viewership data or the aggregated video data, or both. The video data can be accessed from one or more repositories of the data repositories 148 (FIG. 1). Such one or more repositories can serve as a video feed cache. The analytics subsystem 360 can then send the aggregated viewership data and/or the aggregated video data to the client device 310.

[0055] There are several forms of aggregation of video data generated by one or several camera devices corresponding to a location. In one embodiment, the analytics subsystem 360 can train a machine-learning model to detect a human or a pet (a dog, a cat, or a pig, for example) within video feeds. In other embodiments, the analytics subsystem 360 can obtain a machine-learning model already trained for such detection. Regardless of how a trained machine-learning model is accessed, the analytics subsystem 360 can apply the trained machine-learning model to the video data generated by the camera device(s).

[0056] In addition, or in some embodiments, the analytics subsystem 360 can use detected individuals (human or otherwise) to generate an occupancy count. The analytics subsystem 360 also can generate “busy” metering using the occupancy count. That is, the analytics subsystem 360 can generate a metric identifying a fraction of the full occupancy of a location. Further, or in yet other embodiments, the analytics subsystem 360 also can use the detected individuals to generate a predictive model (such as a machine-learning model) to predict occupancy of an area of a location or the location as a whole.

[0057] In addition to determining an occupancy count, the analytics subsystem 360 also can perform image analysis to detect tables or other spaces that are open within a location (e.g., a restaurant). Over time, the predictive model can learn about busy times and slow times, and can generate recommendations for one or more locations (e.g., restaurants) based on historical data and occupancy trends. To that end, the predictive model can analyze video feeds corresponding to respective locations. The video feeds can include live video feeds or historical video feeds, or a combination of both. As part of the analysis, the predictive model can detect humans present in images that constitute the video feeds. The predictive model can yield count data identifying a count of humans and can store the count data as a data set. The count data is referenced against day and time. The output of such analysis, including human counting, is a predicted busy time based on the number of humans detected present at any given time.
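A minimal sketch of the “busy” metering and busy-time prediction described in paragraphs [0056] and [0057]: occupancy counts are referenced against day and time, and the busiest slots are those with the highest mean count. The tuple format and function names are assumptions; a deployed system would use a trained predictive model rather than a simple average.

```python
from collections import defaultdict
from statistics import mean

def busy_fraction(occupancy_count: int, full_occupancy: int) -> float:
    """'Busy' metering: fraction of a location's full occupancy."""
    return occupancy_count / full_occupancy

def predict_busy_times(samples, top_n: int = 3):
    """Rank (weekday, hour) slots by mean detected-human count.

    `samples` is an iterable of (weekday, hour, count) tuples, i.e. count
    data referenced against day and time.
    """
    by_slot = defaultdict(list)
    for weekday, hour, count in samples:
        by_slot[(weekday, hour)].append(count)
    ranked = sorted(by_slot, key=lambda slot: mean(by_slot[slot]), reverse=True)
    return ranked[:top_n]

history = [(4, 19, 42), (4, 19, 37), (4, 20, 35), (1, 14, 8), (1, 15, 6)]
print(predict_busy_times(history))  # -> [(4, 19), (4, 20), (1, 14)]
print(busy_fraction(42, 60))        # -> 0.7
```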

[0058] Other types of analysis can be applied to scenes defined by video data generated at a particular location. To that end, the analytics subsystem 360 can apply one or more models (such as machine-learning models) to determine various aspects of individuals within a scene and/or objects within the scene. In some embodiments, the analytics subsystem 360 can apply one or more first models to video data in order to evaluate sentiment and perform demographic analysis of scenes defined by the video data. In some embodiments, as is shown in FIG. 3B, at least one of the first model(s) can include two concatenated deep convolutional neural networks (CNNs), including a first three-dimensional (3D) deep CNN and a two-dimensional (2D) deep CNN. By applying the first model(s), the analytics subsystem 360 can interpret emotional expressions of a human detected within a scene. As such, the analytics subsystem 360 can determine, with a defined degree of confidence, that the detected human is happy, sad, or surprised. Further, by applying at least one of the first model(s), the analytics subsystem 360 can determine, within a defined degree of confidence, demographic attributes of the detected human, such as gender and/or ethnicity, from a facial image of the detected human. More specifically, in one example, at least one of the first model(s) can include a human facial recognition model that can be applied to the scenes defined by the video data. The application of the human facial recognition model can determine several attributes, such as mouth shape, skin color, and similar facial elements. Data identifying values of such attributes can be stored as a data set. Data sets are assigned a confidence class based on the analysis, and the prediction is the output.
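The following PyTorch sketch suggests one plausible shape for the concatenated 3D/2D CNN arrangement of FIG. 3B: a 3D CNN extracts spatio-temporal features from a short clip, the time axis is collapsed, and a 2D CNN feeds separate emotion and demographic heads. Layer sizes and class counts are assumptions, not the patented model; a softmax over the logits would give the defined degree of confidence.

```python
import torch
import torch.nn as nn

class SceneAnalysisNet(nn.Module):
    """Sketch of two concatenated deep CNNs (3D followed by 2D)."""

    def __init__(self, n_emotions: int = 3, n_demographics: int = 4):
        super().__init__()
        self.cnn3d = nn.Sequential(           # spatio-temporal features
            nn.Conv3d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.cnn2d = nn.Sequential(           # spatial refinement
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.emotion_head = nn.Linear(64, n_emotions)        # happy/sad/surprised
        self.demographic_head = nn.Linear(64, n_demographics)

    def forward(self, clip: torch.Tensor):
        # clip: (batch, channels, frames, height, width)
        x = self.cnn3d(clip)
        x = x.mean(dim=2)                     # collapse the time axis
        x = self.cnn2d(x).flatten(1)
        return self.emotion_head(x), self.demographic_head(x)

emotion_logits, demographic_logits = SceneAnalysisNet()(torch.randn(1, 3, 8, 64, 64))
```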

[0059] In addition, or in other embodiments, the analytics subsystem 360 can perform a facial search within one or more scenes defined by video data (e.g., images, stored videos, and streaming videos) for faces that match defined faces stored in a container known as a face collection. A face collection is an index of faces. A search that yields a defined face in the face collection can permit other subsystems to personalize the distribution of digital content (such as directed content) to a user device corresponding to an end-user identified in a user profile corresponding to the defined face.

[0060] Further, or in yet other embodiments, the analytics subsystem 360 can detect adult content and/or violent content in a stream of images and in stored videos. To that end, the analytics subsystem 360 can obtain metadata to filter inappropriate content based on business needs. An example of a business need is preservation of video content within a defined rating. Simply as an example, the defined rating can be one of the ratings established by the Motion Picture Association (MPA); e.g., the General Audiences (G) rating or the Parental Guidance Suggested (PG) rating. For that example business need, filtering inappropriate content can include blurring particular elements of a scene or multiple scenes. The disclosure is not limited to ratings from the MPA, and the defined rating can be one of many ratings corresponding to the TV Parental Guidelines, the Recording Industry Association of America (RIAA), and/or the Entertainment Software Rating Board (ESRB). Another example of a business need is compliance with privacy rules. In that example, filtering inappropriate content can include blurring customer faces.

[0061] The metadata defines at least one keyword and/or at least one keyphrase indicative of respective categories of unsafe content. Beyond flagging an image based on the presence of unsafe content, the analytics subsystem 360 can generate a hierarchical list of labels with confidence scores. In some embodiments, that list can be generated by executing one or multiple function calls based on a third-party API, for example. The keyword(s) and keyphrase(s) can indicate specific categories of unsafe content, which can enable granular filtering and management of large volumes of user-generated content (UGC). The granular filtering can include moderating video content using one or more confidence scores of respective keywords or keyphrases. In some cases, moderating the video content in such a fashion can include quarantining video content for further analysis when a confidence score exceeds a first threshold value. Such analysis can be performed by an autonomous agent or a human agent. In addition, or in other cases, moderating the video content can include rejecting the video content when the confidence score exceeds a second threshold value. Such threshold values can be referred to as moderation thresholds and can be configurable.
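A small sketch of the two-threshold moderation policy described above; the default threshold values are illustrative, as the disclosure only states that moderation thresholds are configurable.

```python
def moderate(labels: dict[str, float],
             quarantine_threshold: float = 0.6,
             reject_threshold: float = 0.9) -> str:
    """Apply configurable moderation thresholds to unsafe-content labels.

    `labels` maps a keyword/keyphrase category (e.g., "violence") to a
    confidence score produced by the analytics subsystem.
    """
    top_score = max(labels.values(), default=0.0)
    if top_score >= reject_threshold:
        return "reject"       # confidence too high: drop the content
    if top_score >= quarantine_threshold:
        return "quarantine"   # hold for an autonomous or human agent
    return "accept"

print(moderate({"violence": 0.72, "adult": 0.10}))  # -> quarantine
```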

[0062] Further, or in still other embodiments, the analytics subsystem 360 can customize the detection of humans to a particular cohort, such as celebrities within a defined category (e.g., politics, sports, business, entertainment, media, science, or other categories). In some cases, the analytics subsystem 360 can apply one or more third models to video data generated in a single location or in multiple locations to identify individual(s) within a particular cohort in a stream of images and in stored videos. After a particular individual (e.g., a celebrity chef or a rock star) in the particular cohort is identified, the analytics subsystem 360 can cause the content distribution subsystem 210 (not depicted in FIG. 3A) to add a visual tag in a video feed supplied to the user device 160.

[0063] As mentioned, besides detection of humans and pets, the analytics subsystem 360 also can detect objects and/or markings in an object. More specifically, the analytics subsystem 360 can identify and extract textual content from a stream of images and stored videos. To that end, the analytics subsystem 360 can include a recognition module that supports detection of many fonts, including highly stylized ones. In some cases, the recognition module can be a third-party module that can be accessed as a service from a third-party platform, for example. The analytics subsystem 360 can detect, using the recognition module, for example, text and numbers in different orientations, such as those commonly found in banners, posters, box artwork, or similar. In image sharing and social media applications, the text that has been extracted can be used to enable visual search based on an index of images that contain the same keywords. The search subsystem 230 can permit implementing such a visual search. In media and entertainment applications, such as the mobile application 274 (FIG. 2B), videos can be catalogued based on particular text on screen, such as advertisements, news, sport scores, captions, a combination thereof, or similar text. Additionally, or as an alternative, the analytics subsystem 360 can identify objects and scenes in images that are specific to business needs. For example, particular products can be identified on store shelves.
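To illustrate the visual search enabled by extracted text, the sketch below inverts per-image detected words into a keyword-to-images index; the input format (image id mapped to a list of detected words) is an assumption.

```python
from collections import defaultdict

def build_text_index(extracted: dict[str, list[str]]) -> dict[str, set[str]]:
    """Invert per-image extracted text into a keyword -> image-id index."""
    index: dict[str, set[str]] = defaultdict(set)
    for image_id, words in extracted.items():
        for word in words:
            index[word.lower()].add(image_id)
    return index

index = build_text_index({
    "frame_001": ["SALE", "Pasta"],
    "frame_002": ["pasta", "Aisle", "7"],
})
print(index["pasta"])  # images sharing the keyword -> {'frame_001', 'frame_002'}
```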

[0064] Thus, in some embodiments, detection of a particular human, an object, and/or markings (text or numbers) can cause the directed content subsystem 220 to send a media asset (an advertisement or a coupon, for example) to one or more user devices present at the location where the particular object is detected. As an example, a particular human can be detected at a supermarket that supplies video feeds of various aisles of the supermarket. The particular human can be a consumer of the video feeds. A particular brand of dry pasta also can be detected in a shopping cart used by the particular human. In response, the directed content subsystem 220 can send a coupon to a user device (e.g., the user device 160) of the particular human, where the coupon applies to the particular brand of dry pasta.

[0065] The analytics subsystem 360, for example, can obtain activity data from user devices that consume video feeds. The activity data can identify video feeds consumed by the user devices. For instance, the analytics subsystem 360 can receive first activity data from the user device 160, where the first activity data identifies video feeds consumed by the user device 160. The analytics subsystem 360 can categorize the activity data according to demographic data available for the user devices. As a result, the analytics subsystem 360 can group activity data in several categories, such as gender, age, income level, employment status, academic advancement, or similar categories. The analytics subsystem 360 can obtain activity data over time and also can categorize the activity data over time. Hence, the analytics subsystem 360 can identify changes to one or multiple categories corresponding to a particular group of video feeds. Each one of those changes constitutes a customer profile trend for a particular category (e.g., gender). The analytics subsystem 360 can generate reports that identify one or multiple trends, and can make the reports available to one or multiple subscriber accounts.

[0066] In addition to generating reports identifying one or several trends, the analytics subsystem 360 can generate a wide variety of custom reports for video content and related activity data corresponding to a subscriber account. Generation of custom reports can be controlled by means of the user interface 320 or another type of management portal, for example.

[0067] Data contained in a custom report and a report identifying a trend can be visualized in a viewer user interface, e.g., a new user interface or an overlay onto the user interface 320. The client device 310 can present such a viewer user interface. In one example, a subscriber account can generate, via the analytics subsystem 360, a report of all paid access sold to female users for a particular time interval. Results that constitute such a report can be visualized in the viewer user interface via plots of various types, table(s), or a combination thereof.

[0068] The analytics subsystem 360 can use user profiles to generate predictions of customer orders and cross-promotions. The customer order can include a food order, a beverage order, or a similar order. As an example, the food order can include a particular dish at a restaurant, and a beverage order can include a particular soft drink at a bar. A user profile contains various types of information identifying an end-user of the mobile application 274 and one or more user devices having the mobile application 274 stored thereon. The user device(s) are associated with that end-user. Information contained in the user profile also can include data defining various user attributes of the end-user, such as identification attributes (anonymized or otherwise), user preferences, user favorites, a combination thereof, or the like. An example of a user favorite can be a type of food, such as vegan dishes, or a particular dish (e.g., top sirloin steak with chimichurri sauce). Another example of a user favorite can be a type of beverage, such as non-alcoholic beverages, or a particular type of drink (e.g., a Martini or a Caipirinha). At least some of the user attributes can be specific to one or more locations (e.g., location 102 (FIG. 1) and location 112 (FIG. 1)). Accordingly, a first subset of the user attributes can correspond to a first location, and a second subset of the user attributes can correspond to a second location. The user profiles can be retained in the backend storage devices 140, within the account repository 144, for example.

[0069] The analytics subsystem 360 can generate a recommendation for a customer order by applying a predictive model to information retained in a user profile. The recommendation can be specific to a location, in some cases. The predictive model can be embodied in a machine-learning model that accesses input data (user profile favorites, for example), analyzes the data, assigns a confidence class, and generates a prediction output. In one example, a user profile can include a favorite attribute indicative of a Martini as a favorite drink. Based on such a favorite attribute, the predictive model can yield a recommendation for other drinks that fit that type of drink. In another example, a user profile can include a favorite attribute indicative of a particular food dish (e.g., a vegan dish). Based on that particular dish, the predictive model can yield a recommendation for other food dishes that fit that type of food dish. In some cases, the predictive model can yield a recommendation for both a beverage and a food dish, such as a wine and a pasta dish.
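A toy stand-in for the predictive model of paragraph [0069], scoring menu items by overlap with the user profile's favorite types; the data layout is an assumption, and a deployed system would apply a trained machine-learning model rather than this rule.

```python
def recommend_orders(profile: dict, menu: list[dict], top_n: int = 3) -> list[str]:
    """Recommend menu items whose tags overlap the profile's favorites."""
    favorites = {f.lower() for f in profile.get("favorites", [])}

    def score(item: dict) -> int:
        return len({t.lower() for t in item.get("tags", [])} & favorites)

    ranked = sorted(menu, key=score, reverse=True)
    return [item["name"] for item in ranked if score(item) > 0][:top_n]

menu = [
    {"name": "Dry Martini", "tags": ["martini", "cocktail"]},
    {"name": "Caipirinha", "tags": ["cocktail"]},
    {"name": "Vegan Lasagna", "tags": ["vegan", "pasta"]},
]
print(recommend_orders({"favorites": ["Martini", "vegan"]}, menu))
# -> ['Dry Martini', 'Vegan Lasagna']
```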

[0070] The menu of configuration options 324 also can include a sixth configuration option that permits administering a subscription to the platform that maintains the backend server devices 130. In some cases, the sixth configuration option can be presented as a selectable visual element 426 (FIG. 4A). Selection of the selectable visual element 426 can cause the client device 310 to present another user interface that can permit entering and submitting various types of administrative information. For example, the administrative information can define an account manager for the subscriber. Other administrative information can include streaming hours and business profile information (administrator account, directed content campaigns, notifications, and the like, for example). In some embodiments, the user interface that is presented in response to selection of the selectable visual element 426 can include UI elements from other user interfaces presented in response to selection of other selectable elements present in the user interface 320 (FIG. 4A). Accordingly, in one example, the user interface shown in FIG. 4B can include multiple UI elements 480 that permit configuring a streaming schedule.

[0071] Although not shown in FIG. 4A, the menu of configuration options 324 can include a configuration option that permits configuring one or multiple aspects of a subscriber account. In one example, a subscriber account can correspond to a restaurant. Selection of a selectable visual element corresponding to such a configuration option can cause the client device 310 to present a user interface (which can be referred to as a menu creator) that permits creating and uploading a culinary menu to a profile corresponding to the subscriber account. The culinary menu can be specific to a particular location identified in that profile. More specifically, the user interface can permit entering input data and selecting visual elements, aural elements, and/or other design elements that, individually or in combination, can define the culinary menu. The input data can be entered into one or more fillable fields or selectable pane(s). The client device 310 can then send such input data and/or design elements to the content provisioning subsystem 240 (FIG. 2A). The content provisioning subsystem 240 can retain the input data and/or data identifying the design elements in the data repository 148. After the culinary menu has been configured, the content distribution subsystem 210 can cause the user device 160 to present the culinary menu in the user interface 170, in conjunction with a video feed of the particular location or a section thereof. Subsequent access to the menu creator can permit modifying an extant culinary menu or creating another culinary menu.
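
A hypothetical payload for such a menu creator could look like the following; the keys and values are illustrative assumptions about what the client device 310 might send to the content provisioning subsystem 240, not a format defined by the disclosure.

```python
import json

# Hypothetical culinary-menu payload; structure is an assumption for this sketch.
culinary_menu = {
    "subscriber_account": "acct-restaurant-01",
    "location_id": "location-102",          # the particular location in the profile
    "sections": [
        {
            "title": "Mains",
            "items": [
                {"name": "Top sirloin steak", "note": "chimichurri sauce", "price": 32.00},
                {"name": "Vegan lasagna", "price": 21.00},
            ],
        }
    ],
    "design_elements": {"font": "serif", "accent_color": "#7a1f1f"},
}

# Serialized form that could be retained in the data repository 148.
print(json.dumps(culinary_menu, indent=2))
```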

[0072] Other types of foodstuff menus can be created in accordance with aspects described herein. For example, a cocktail menu or a wine menu can be created. The content distribution subsystem 210 can cause the user device 160 to present the cocktail menu or the wine menu, or both, in conjunction with a video feed of a bar area within the particular location. The content distribution subsystem 210 can cause the user device 160 to present the culinary menu in conjunction with a video feed of a dining room within the particular location.

[0073] The directed content subsystem 220 can provide other functionality in some embodiments. In some cases, the directed content subsystem 220 can permit subscribers to place directed content assets and/or monetize space for a directed content asset (e.g., an advertisement) via: paid sponsored streams (algorithm based, for example); paid placement via category (algorithm based, for example); paid placement via geolocation (algorithm based, for example); paid placement via city, state, or ZIP code (algorithm based, for example); paid placement via user profile criteria (algorithm based, for example); a combination thereof; or similar. An example of a process for placement of directed content assets is illustrated in FIG. 4C. As is described herein, the illustrated process uses user information, such as user profiles and/or user accounts retained in a data repository (referred to as application user database(s)). Here, “application” refers to the mobile application 274 (FIG. 2) or another client application that can be used to consume and/or interact with a video feed (live video or time-shifted video feed, for example) in accordance with aspects described herein.

[0074] Directed content assets can be placed within a section of a user interface (UI 170 (FIG. 1), for example) that presents a stream of images generated by a camera device remotely located relative to the user device. The directed content subsystem 220 can update a placed directed content asset in real-time or essentially real-time. Directed content assets also can be presented as part of a push notification to a user device that executes the mobile application 274 while such a stream of images is not being presented at the user device.

[0075] In some embodiments, as the user device 160 moves from one location to another location, the user device 160 can cross a geofence corresponding to a particular geographic region. The user device 160 can supply location data to the directed content subsystem 220 as the user device 160 moves. Accordingly, the directed content subsystem 220 can have access to real-time location data or essentially real-time location data of the user device 160. Thus, the directed content subsystem 220 can determine that the user device 160 is present in the particular geographic zone and, in response, can identify a restaurant or a bar, for example, that has a subscriber account included in the subscriber accounts 146. As a result, the directed content subsystem 220 can then send a push notification to the user device 160. The push notification includes a directed content asset and/or a marking indicative of the directed content asset, where the directed content asset corresponds to the identified restaurant or bar. In addition, or in some cases, the push notification can include text or other markings indicating that a promotion represented by the directed content asset is nearby, at the restaurant or bar. Therefore, in some cases, the directed content subsystem 220 can provide directed content assets to the user device 160 based on both the real-time location of the user device 160 and the presence of a business within a geofenced region identified using the real-time location.
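
One plausible way to implement the geofence test described above is a great-circle distance check against a circular fence around the subscribed business; the fence radius and the notification step below are assumptions made for this sketch.

```python
from math import radians, sin, cos, asin, sqrt

# Sketch of the geofence test implied in paragraph [0075]: decide whether a
# device's reported location falls inside a circular geofence around a
# subscribed business. The haversine formula is standard; the 150 m radius
# and the coordinates are illustrative assumptions.
EARTH_RADIUS_M = 6_371_000

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two (lat, lon) points."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * EARTH_RADIUS_M * asin(sqrt(a))

def inside_geofence(device_lat, device_lon, fence_lat, fence_lon, radius_m=150):
    return haversine_m(device_lat, device_lon, fence_lat, fence_lon) <= radius_m

# A crossing would trigger a push notification with the business's directed content.
if inside_geofence(40.7411, -73.9897, 40.7408, -73.9891):
    print("Send push notification: promotion nearby at the subscribed restaurant.")
```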

[0076] In addition, or in other embodiments, the directed content subsystem 220 also can implement a similar or the same technique to cause a user device to present directed content assets at a user interface while executing the mobile application 274. The user interface also can present a stream of images generated by a camera device remotely located relative to the user device. The user interface can be embodied in, or can include, the UI 170.

[0077] The subsystems 138 can comprise one or more other subsystems that can provide additional functionality to that described hereinbefore. FIG. 5 illustrates an example of an operational environment 500 for access to a service, in accordance with one or more embodiments of the disclosure. The user device 160 can execute a client application (such as the mobile application 274 (FIG. 2B) or a web browser) to consume video data from one or more camera devices placed at various locations. In some cases, those locations can correspond to respective service providers that can allocate space (e.g., a venue or a dining table) for a defined period of time. For example, a service provider can be a restaurant. In response to execution of the client application, the user device 160 can access several video feeds corresponding to respective locations and an end-user can peruse such video feeds. The video feeds can be consumed using a user interface 510 presented in response to execution of the client application. Using the video feeds, the user device 160 can select a particular location that satisfies one or more criteria, e.g., available space, conformity to one or more user preferences (vegan menu, sustainability of food sources, etc.), and the like. That particular location can be the location 102. In some embodiments, to select the particular location, the user device 160 can apply a machine-learning model to features of one or more video frames of several video feeds for respective locations in order to generate a score for each one of the respective locations.

[0078] Thus, the user device 160 can generate a first score for a first location, a second score for a second location, and so forth up until a defined number of the respective locations has been evaluated. Each one of the generated scores can represent a matching level between a location and end-user preferences and/or availability. The user device 160 can generate a ranking of the generated scores and can select a score having a defined placement within the ranking (e.g., top-ranked score). The user device can then select the location corresponding to the selected score.
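
A toy version of this scoring-and-ranking flow is sketched below; the frame-derived features and the weighting are stand-ins for the machine-learning model applied to video frames, chosen only to make the ranking step concrete.

```python
# Stand-in scorer for paragraphs [0077]-[0078]: a toy weighted match between
# frame-derived features and end-user preferences, not the actual model.
def score_location(frame_features: dict, preferences: dict) -> float:
    score = 0.0
    if frame_features.get("open_tables", 0) > 0:          # availability signal
        score += 0.5
    matches = set(frame_features.get("tags", [])) & set(preferences.get("tags", []))
    score += 0.1 * len(matches)                           # preference conformity
    return score

locations = {  # features assumed to come from video-frame analysis upstream
    "location-102": {"open_tables": 3, "tags": ["vegan menu", "patio"]},
    "location-112": {"open_tables": 0, "tags": ["live music"]},
}
prefs = {"tags": ["vegan menu", "sustainable sourcing"]}

# Rank the generated scores and select the top-ranked location.
ranking = sorted(locations, key=lambda loc: score_location(locations[loc], prefs), reverse=True)
selected = ranking[0]
print(selected)  # location-102
```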

[0079] After a location has been selected, the user device 160 can present a video feed from a camera device at the selected location (e.g., camera device 104c) and multiple selectable visual elements 518 that can permit making a reservation at the selected location. To that end, in response to selection of a first selectable visual element of the selectable visual elements 518, the user device 160 can send control signaling 156 including a query for available time slots for service at the selected location. As is illustrated in FIG. 5, the subsystems 138 can include a reservations subsystem 520 that can receive the query and, in response, can resolve the query using availability data for a subscriber account corresponding to the location. The subscriber account can correspond to a bar or restaurant, for example. The availability data can be retained in one of the data repositories 148. The reservations subsystem 520 can then send content data 154 including the availability data to the user device 160. In some cases, the reservations subsystem 520 also can send formatting information that the user device 160 can use to present the availability data.

[0080] The user device 160 can receive the availability data and can present another user interface (either as a stand-alone user interface or an overlay on the user interface 510) that permits providing selection data for configuring a reservation of a selected available time slot and, in some cases, a particular space (such as a table, a room at a pet boarding facility, or a venue) within the selected location. For example, that other user interface can include one or more UI elements identifying respective time slots available for a reservation. The user device 160 can send control signaling 156 including the selection data to the reservations subsystem 520. In response to receiving the selection data, the reservations subsystem 520 can generate reservation data identifying the selected available time slot and, in some cases, the particular space within the selected location. In addition, or in some embodiments, the reservations subsystem 520 can send a confirmation message as part of content data 154 sent to the user device 160. In other embodiments, the reservations subsystem 520 can send a message to a communication address corresponding to the user device 160. The communication address can be embodied in an email address, for example.
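
The query-and-selection exchange just described might be reduced to the following sketch, assuming an in-memory stand-in for the availability data retained in the data repositories 148; all structures and names here are illustrative.

```python
# Minimal sketch of the reservation exchange in paragraphs [0079]-[0080]:
# query availability, then turn selection data into reservation data plus a
# confirmation message. The data shapes are assumptions for this example.
AVAILABILITY = {  # stand-in for availability data in a data repository
    ("acct-restaurant-01", "2023-07-06"): [("18:00", "table-4"), ("20:30", "table-2")],
}

def query_time_slots(subscriber_account: str, date: str):
    return AVAILABILITY.get((subscriber_account, date), [])

def make_reservation(subscriber_account: str, date: str, selection: tuple):
    slots = query_time_slots(subscriber_account, date)
    if selection not in slots:
        raise ValueError("Selected time slot is no longer available.")
    slots.remove(selection)                      # mark the slot as taken
    time_slot, space = selection
    reservation = {"account": subscriber_account, "date": date,
                   "time": time_slot, "space": space}
    confirmation = f"Confirmed {space} at {time_slot} on {date}."
    return reservation, confirmation

slots = query_time_slots("acct-restaurant-01", "2023-07-06")
reservation, message = make_reservation("acct-restaurant-01", "2023-07-06", slots[0])
print(message)
```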

[0081] The reservations subsystem 520 also can generate recommendations for time slots for reservation of space within a location corresponding to a service provider that has a particular subscriber account of the subscriber accounts 146. A time slot can be defined in terms of a date and time interval. To generate a recommendation for a time slot, the reservations subsystem 520 can analyze video feeds originated from the location (location 102, for example). As part of the analysis, the reservations subsystem 520 can analyze images that constitute a video feed. The video feed can be a historical video feed or a live video feed. The images can be analyzed by applying a predictive model to the images. As part of the analysis, the predictive model can detect the presence of human(s) and particular objects within those images. The particular objects can include, for example, a table, a booth, a bar stool, or a combination thereof. Thus, the reservations subsystem 520 can determine tables or other spaces that are open within a location. The predictive model can yield count data identifying a count of humans and can store the count data as a data set. The count data is referenced against day and time. The output of such analysis, including human counting, is a predicted busy time based on the number of humans detected present at any given time. Over time, the predictive model can learn about busy times and slow times, and can generate recommendations for one or more available time slots based on predicted occupancy and available spaces within the location. In some embodiments, to generate recommendations for time slots, the reservations subsystem 520 can include the analytics subsystem 360 (FIG. 3A) described herein. For purposes of illustration, FIG. 5A presents an example of the predictive model embodied in a long short-term memory (LSTM) machine-learning model. FIG. 5B illustrates an example of an update model used to incrementally update the predictive model, in accordance with one or more embodiments of the disclosure. The update model can be based on an incremental learning time-frequency network (IL-TFNet) model.
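
As a sketch of the predictive model only, the following assumes per-hour human counts have already been produced upstream (e.g., by an object detector applied to the video frames) and feeds a week of counts, referenced against day and time, into a small LSTM that predicts the next hour's count; the layer sizes and the feature layout are assumptions made for this example.

```python
import torch
import torch.nn as nn

# Sketch of the occupancy predictor in paragraph [0081]: an LSTM that maps a
# week of hourly features to a predicted human count for the next hour.
class OccupancyLSTM(nn.Module):
    def __init__(self, hidden_size: int = 32):
        super().__init__()
        # Input features per hour: [human_count, hour_of_day, day_of_week]
        self.lstm = nn.LSTM(input_size=3, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        output, _ = self.lstm(x)           # x: (batch, hours, 3)
        return self.head(output[:, -1])    # predicted count for the next hour

model = OccupancyLSTM()
week_of_counts = torch.randn(1, 168, 3)    # one week of hourly features (dummy data)
predicted_count = model(week_of_counts)
# Low predicted counts at a given day/time suggest a slow period, which can
# be surfaced as a recommended time slot for a reservation.
print(predicted_count.shape)  # torch.Size([1, 1])
```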

[0082] The reservations subsystem 520 can send a push notification to a user device (e.g., user device 160) including text and/or other indicia indicative of one or more recommended time slots for a reservation. In addition, or in some cases, the reservations subsystem 520 can send a message to a communication address (e.g., an email address) corresponding to the user device. The message can include text and/or other indicia indicative of the recommended time slot(s).

[0083] The reservations subsystem 520 also can generate reservation data in several other ways. In some cases, the viewing pane 514 can present indicia, such as markings and/or images, conveying an event at the location 102 (e.g., a dog kennel or a restaurant). The selectable visual elements 518 can include a visual element that, in response to being selected, can permit generating a registration for the event at the location 102 (e.g., a dog pool party). The reservations subsystem 520 can generate the registration and can send confirmation data to the user device 160, via the client application, for example, or to a communication address corresponding to the user account associated with the user device 160. In some embodiments, for example when the location 102 is a restaurant or a party hall, instead of generating a registration, the reservations subsystem 520 can generate a booking for one or more areas of the location 102. The reservations subsystem 520 can process a purchase transaction for a registration, a booking, and/or event tickets/passes for a particular event. As such, in some embodiments, the reservations subsystem 520 can serve as a credit-card processor and/or can serve as an interface to a third-party credit-card processor.

[0084] As is shown in FIG. 5, the subsystems 138 also can include a digital concierge subsystem 530 that can connect an agent (an autonomous agent or a human agent) to the user device 160 by creating a chat session, for example, within the user interface 510. In some cases, the chat session can be presented in an overlay pane on the viewing pane 514. The connection with the agent can be effected during a live presentation of video content; that is, during a live video feed. The digital concierge subsystem 530 can be functionally coupled to the reservations subsystem 520. Thus, in some cases, the chat session can be used to complete a booking in real-time. The disclosure is not limited to the digital concierge subsystem 530 creating a chat session. One or more other subsystems of the subsystems 138 can provide a real-time on-screen chat to interact with a video feed supplied by the content distribution subsystem 210 (FIG. 2A).

[0085] Embodiments of this disclosure can be integrated with third-party service providers. FIG. 6 illustrates an operational environment 600 for integration with third-party subsystems 610, in accordance with one or more embodiments of this disclosure. Several types of integrations can be implemented. As is illustrated in FIG. 6, one example integration can include integration of the subsystems 138 with a third-party service provider subsystem 620. In one embodiment, the third-party service provider subsystem 620 can be embodied in a ride-share service subsystem. The gateways 134 can include a third-party gateway that permits exchanging data and/or signaling with the third-party service provider subsystem 620. In some cases, the user device 160 can present a user interface (e.g., UI 510; not depicted in FIG. 6) that includes one or more selectable visual elements that permit a one-click reservation from that user interface. As mentioned, such a user interface can be referred to as a streaming business profile.

[0086] In another embodiment, the third-party service provider subsystem 620 can be embodied in, or can include, a food-delivery service subsystem. In some cases, the user device 160 can present a user interface (e.g., the streaming business profile) that includes one or more selectable visual elements that permit viewing menus and/or placing one-click orders directly from the streaming business profile. Accordingly, the streaming business profile can permit requesting food delivery from a desired eatery (restaurant, diner, food truck, for example) via the food delivery service using the mobile application 274 (FIG. 2B) by means of API integration. Thus, in one example, the user interface that permits viewing menus and placing one-click orders can be accessed via a function call to an API. The mobile application 274 can provide access to the service provider API (e.g., food delivery service API).
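
Such an API integration might look like the sketch below; the endpoint, request fields, and token are placeholders, since the disclosure describes the food-delivery integration generically and no particular provider's API is assumed here.

```python
import requests

# Sketch of the one-click order flow in paragraph [0086]. Everything about
# the endpoint and payload is a hypothetical assumption for illustration.
DELIVERY_API = "https://delivery.example.com/v1"  # hypothetical base URL

def place_one_click_order(token: str, eatery_id: str, item_id: str, address: str):
    """Submit a single-item order on behalf of the streaming business profile."""
    response = requests.post(
        f"{DELIVERY_API}/orders",
        headers={"Authorization": f"Bearer {token}"},
        json={"eatery_id": eatery_id,
              "items": [{"id": item_id, "qty": 1}],
              "dropoff_address": address},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()  # e.g., order id and estimated delivery time

# Called from the streaming business profile when the user taps the order element.
```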

[0087] In addition, or in other embodiments, the third-party service provider subsystem 620 can be embodied in, or can include, a rideshare service subsystem. In some cases, the user device 160 can present a user interface (e.g., the streaming business profile) that includes one or more selectable visual elements that permit requesting a ride via the rideshare service subsystem using the mobile application 274 by means of API integration. To that end, the user interface can permit selecting the type of vehicle for the ride and receiving a cost estimate for the ride. The user interface also can permit the user device 160 to receive an estimated arrival time of the vehicle and an estimated drop-off time at a desired destination while remaining in the mobile application 274.

[0088] In some embodiments, the subsystems 138 also can include a payment processor subsystem (not depicted in FIG. 6). The payment processor subsystem can process payments for goods, services, and/or reservations via a digital wallet present in the user device 160. Reservations can include event tickets/passes. To that end, the mobile application 274 can natively access the digital wallet to obtain a payment method for completing transactions, such as reservations and appointment bookings, as required by the business affiliate. Accordingly, the mobile application 274 can access a payment integration API provided by an operating system (O/S) (e.g., iOS or Android) of the user device. Here, “natively” refers to utilizing device identifiers and O/S coupling to obtain the payment method via the API. More specifically, O/S coupling can be achieved by passing an authorization certificate associated with the mobile application 274 to the API. After the authorization certificate is authenticated, the mobile application 274 can pass user data, such as device ID, data defining products, data defining services, data defining amounts, and so forth, to complete a transaction.
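
The certificate-then-transaction sequence just described might be modeled as follows; every class and call here is a placeholder, since the actual payment integration API belongs to the device's operating system and is not specified by this sketch.

```python
# Hypothetical model of the O/S payment-API sequence in paragraph [0088]:
# authenticate the application's authorization certificate, then pass the
# transaction data. Real code would use the platform's payment framework.
class PaymentAPI:
    def __init__(self, trusted_certs: set[str]):
        self._trusted = trusted_certs

    def authenticate(self, app_certificate: str) -> bool:
        """Stand-in for O/S verification of the app's authorization certificate."""
        return app_certificate in self._trusted

    def complete_transaction(self, device_id: str, items: list[dict], amount: float) -> str:
        # The O/S would obtain the payment method from the digital wallet here.
        return f"txn-{device_id}-{int(amount * 100)}"

api = PaymentAPI(trusted_certs={"cert-mobile-app-274"})
if api.authenticate("cert-mobile-app-274"):
    receipt = api.complete_transaction(
        device_id="device-9f2",
        items=[{"product": "event pass", "qty": 1}],
        amount=25.00,
    )
    print(receipt)
```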

[0089] As is also illustrated in FIG. 6, another example integration can include integration of the subsystems 138 with a third-party social network subsystem 630. In some embodiments, the content distribution subsystem 210 (FIG. 2A) can permit sharing a video feed to the social network subsystem 630. To that end, the user device 160 can present a user interface (not depicted in FIG. 6) that includes a selectable visual element adjacent to, or within, a viewing pane. In response to selection of the selectable visual element, the user device can send an instruction and payload data (via control signaling 156 (FIG. 1), for example) to the content distribution subsystem 210. The instruction can dictate that the video feed be supplied to a particular social-media account indicated by the payload data.

[0090] Embodiments of this disclosure can provide other functionalities. In some embodiments, a subscriber that provides a video stream and/or a service in accordance with aspects of this disclosure can create a broadcaster channel. Within the broadcaster channel, the subscriber can supply a video stream and/or other digital content (e.g., directed content) to user devices from a mobile device or another type of client device. In some cases, the video stream supplied in the broadcaster channel can include video feeds from multiple camera devices (e.g., camera device 104a and camera device 114) generating images essentially concurrently. As an illustration, the subscriber can be a karaoke group that can stream video from 20 bars concurrently (e.g., the first location 102 (FIG. 1) can be one bar and the second location 112 (FIG. 1) can be another bar). The user device 160 can execute a client application (e.g., the mobile application 274) to present the user interface 170. As mentioned, the user interface 170 can include a viewing pane 174 that presents a single video stream. The user interface 170 also can include a menu of options (a carousel, an array of thumbnails, or a dropdown menu, for example), where each option corresponds to an available video feed from a respective bar. The user device 160 can receive a selection of an option on the menu of options to select a video feed to consume in the viewing pane 174.

[0091] In some embodiments, the subscriber can permit peer-to-peer consumption of video streams via the broadcaster channel. In those embodiments, multiple user devices can send a video feed to the broadcaster channel via respective streaming business profiles. To that end, each one of the business profiles can cause a respective user device to generate a video stream from imaging data generated by a camera device integrated into, or otherwise coupled to, the user device. The subsystems 138 can include an aggregator subsystem that can receive video streams from respective ones of the multiple user devices. The aggregator subsystem can route the video streams to the user device 160 when consuming the broadcaster channel. As an illustration, multiple smartphones can stream video datasets to a particular broadcaster channel by sending video streams to the aggregator subsystem. Each video dataset can correspond to video of a scene generated from a respective vantage point. For instance, a band can be playing at a venue and one smartphone can stream a first video feed generated from a side of the stage where the band is located, another smartphone can stream a second video feed generated from the back of the audience in the venue, and yet another smartphone can stream a third video feed generated from a second floor above the stage. The user device 160 can present the user interface 170 including the viewing pane 174 and the menu of options, where the menu of options includes the first, second, and third video feeds. The user device 160 can receive a selection from the menu of options and can present the video feed corresponding to the vantage point that has been selected. By providing several video feeds for respective vantage points of the same scene, the client application (e.g., the mobile application 274 (FIG. 2)) can provide a more immersive or otherwise enhanced user experience.
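
A minimal sketch of such an aggregator subsystem appears below, assuming stream handles are opaque objects (URLs here) and that vantage-point names populate the menu of options; the class and method names are illustrative assumptions.

```python
# Sketch of the aggregator subsystem in paragraph [0091]: collect peer video
# feeds into a named broadcaster channel and route the selected vantage point.
class AggregatorSubsystem:
    def __init__(self):
        self._channels: dict[str, dict[str, object]] = {}

    def ingest(self, channel: str, vantage_point: str, stream: object) -> None:
        """Receive a video stream from a peer device for a broadcaster channel."""
        self._channels.setdefault(channel, {})[vantage_point] = stream

    def vantage_points(self, channel: str) -> list[str]:
        """Options for the menu (carousel, thumbnails, or dropdown) in UI 170."""
        return list(self._channels.get(channel, {}))

    def route(self, channel: str, vantage_point: str) -> object:
        """Return the selected feed for presentation in the viewing pane 174."""
        return self._channels[channel][vantage_point]

aggregator = AggregatorSubsystem()
aggregator.ingest("band-at-venue", "stage-side", "rtmp://peer-1")
aggregator.ingest("band-at-venue", "back-of-audience", "rtmp://peer-2")
aggregator.ingest("band-at-venue", "second-floor", "rtmp://peer-3")
print(aggregator.vantage_points("band-at-venue"))
feed = aggregator.route("band-at-venue", "stage-side")
```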

[0092] The subscriber can monetize the video stream by charging for access to the video feed for a period during which the video stream is broadcast live. In one example, a wedding venue could charge a fee to stream a video of a wedding on the broadcaster channel of the wedding venue. The video stream can be free for viewers to watch, with the fee paid by the wedding party. In some cases, end-users consuming the video stream can purchase one-time access or subscriptions. A subscription can provide all-access viewing of the video content. In addition, or in other cases, end-users can provide donations to the broadcaster channel. In addition, or in some embodiments, the subscriber can store historical video streams in packaged repositories and sell access to them as well (e.g., a sales professional selling a 10-steps-to-success video series). Historical video streams can be stored within the data repository 148 and/or third-party storage devices.

[0093] In some embodiments, the analytics subsystem 360 (FIG. 3) can generate key performance indicators (KPIs), such as viewership, number of fans, and revenue generated for the streaming channel. A client device (e.g., client device 310) can execute the application 316 to present a dashboard to monitor one or more of the generated KPIs.
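
For illustration, these KPIs might be computed from channel activity data as follows; the event fields and the definition of a "fan" (a repeat viewer) are assumptions made for this sketch, not the analytics subsystem's actual metrics.

```python
from collections import Counter

# Toy KPI computation for paragraph [0093]; activity data shape is assumed.
view_events = [  # stand-in for activity data for a broadcaster channel
    {"viewer": "u-1", "paid": 5.00},
    {"viewer": "u-2", "paid": 0.00},
    {"viewer": "u-1", "paid": 5.00},
]

viewership = len(view_events)
visits = Counter(event["viewer"] for event in view_events)
fans = sum(1 for count in visits.values() if count > 1)   # repeat viewers
revenue = sum(event["paid"] for event in view_events)

# Values a dashboard in the application 316 could monitor.
print({"viewership": viewership, "fans": fans, "revenue": revenue})
```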

[0094] A broadcaster channel can be linked to a subscriber (or business affiliate). For example, if a band is playing at a bar (e.g., the first location 102), the bar can tag the broadcaster channel of the band and its followers on the bar's business page (e.g., UI 170 (FIG. 1) or UI 510 (FIG. 5)). By creating a relationship between an individual broadcaster (or peer broadcaster) and a business affiliate (or streaming subscriber), the streaming subscriber can request and allow the individual broadcaster to stream on behalf of the business affiliate (e.g., a celebrity chef at a restaurant). This can help promote the business via social influencers and their audiences. Users can find and view streams from individual broadcasters at specific businesses, or view the streams from the individual broadcaster's channel in the mobile application 274 (FIG. 2B), a web browser, or a similar application.

[0095] Additionally, or in some cases, a broadcaster channel can be accessed in response to receiving a fee. In one example, a DJ at a local club could stream their set for a fee. User devices, such as user device 160, can consume the video stream in response to effecting payment via an in-app purchase by natively accessing a payment API available to the user device 160, in accordance with aspects described hereinbefore.

[0096] In order to provide some context, the computer-implemented methods, computer-program products, and systems of this disclosure can be implemented on the computing environment 700 illustrated in FIG. 7 and described below. Similarly, the computer-implemented methods and systems disclosed herein can utilize one or more computing devices to perform one or more functions in one or more locations. FIG. 7 is a block diagram illustrating an example of a computing environment 700 for performing the disclosed methods and/or implementing the disclosed systems. The computing environment 700 shown in FIG. 7 is only an example of an operating environment and is not intended to suggest any limitation as to the scope of use or functionality of operating environment architecture. Neither should the operating environment be interpreted as having any dependency or requirement relating to any one or combination of components illustrated in the exemplary operating environment.

[0097] The computer-implemented methods and systems in accordance with this disclosure can be operational with numerous other general purpose or special purpose computing system environments or configurations. Examples of well-known computing systems, environments, and/or configurations that can be suitable for use with the systems and methods comprise, but are not limited to, personal computers, server computers, laptop devices, and multiprocessor systems. Additional examples comprise set-top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that comprise any of the above systems or devices, and the like.

[0098] The processing of the disclosed computer-implemented methods and systems can be performed by software components. The disclosed systems and computer-implemented methods can be described in the general context of computer-executable instructions, such as program modules, being executed by one or more computers or other devices. Generally, program modules comprise computer code, routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The disclosed methods can also be practiced in grid-based and distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules can be located in both local and remote computer storage media including memory storage devices.

[0099] Further, the systems and computer-implemented methods disclosed herein can be implemented via a general-purpose computing device in the form of a computing device 701. The components of the computing device 701 can comprise, but are not limited to, one or more processors 703, a system memory 712, and a system bus 713 that couples various system components including the one or more processors 703 to the system memory 712. The system can utilize parallel computing.

[00100] The system bus 713 represents one or more of several possible types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, or a local bus using any of a variety of bus architectures. The bus 713, and all buses specified in this description, can also be implemented over a wired or wireless network connection, and each of the subsystems, including the one or more processors 703, a mass storage device 704, an operating system 705, software 706, data 707, a network adapter 708, the system memory 712, an Input/Output Interface 710, a display adapter 709, a display device 711, and a human-machine interface 702, can be contained within one or more remote computing devices 714a,b,c at physically separate locations, connected through buses of this form, in effect implementing a fully distributed system. The computing device 701 can embody a user device (e.g., user device 160) in accordance with aspects described herein. Thus, in some embodiments, the software 706 can include the mobile application 274 (FIG. 2).

[00101] The computing device 701 typically comprises a variety of computer-readable media. Such readable media can be any available media that is accessible by the computing device 701 and comprise, for example and not meant to be limiting, both volatile and non-volatile media, and removable and non-removable media. The system memory 712 comprises computer-readable media in the form of volatile memory, such as random access memory (RAM), and/or non-volatile memory, such as read only memory (ROM). The system memory 712 typically contains data such as the data 707 and/or program modules such as the operating system 705 and the software 706 that are immediately accessible to and/or are presently operated on by the one or more processors 703.

[00102] In another aspect, the computing device 701 can also comprise other removable/non-removable, volatile/non-volatile computer storage media. By way of example, FIG. 7 illustrates the mass storage device 704, which can provide non-volatile storage of computer code, computer-readable instructions, data structures, program modules, and other data for the computing device 701. For example and not meant to be limiting, the mass storage device 704 can be a hard disk, a removable magnetic disk, a removable optical disk, magnetic cassettes or other magnetic storage devices, flash memory cards, CD-ROM, digital versatile disks (DVD) or other optical storage, random access memories (RAM), read only memories (ROM), electrically erasable programmable read-only memory (EEPROM), and the like.

[00103] Optionally, any number of program modules can be stored on the mass storage device 704, including, by way of example, the operating system 705 and the software 706. Each of the operating system 705 and the software 706 (or some combination thereof) can comprise elements of the programming and the software 706. The data 707 can also be stored on the mass storage device 704. The data 707 can be stored in any of one or more databases known in the art. Examples of such databases comprise DB2®, Microsoft® Access, Microsoft® SQL Server, Oracle®, MySQL, PostgreSQL, and the like. The databases can be centralized or distributed across multiple systems.

[00104] In another aspect, the user can enter commands and information into the computing device 701 via an input device (not shown). Examples of such input devices comprise, but are not limited to, a keyboard, a pointing device (e.g., a “mouse”), a microphone, a joystick, a scanner, tactile input devices such as gloves and other body coverings, and the like. These and other input devices can be connected to the one or more processors 703 via the human-machine interface 702 that is coupled to the system bus 713, but can be connected by other interface and bus structures, such as a parallel port, a game port, an IEEE 1394 port (also known as a FireWire port), a serial port, or a universal serial bus (USB).

[00105] In yet another aspect, the display device 711 can also be connected to the system bus 713 via an interface, such as the display adapter 709. It is contemplated that the computing device 701 can have more than one display adapter 709 and the computing device 701 can have more than one display device 711. For example, the display device 711 can be a monitor, an LCD (Liquid Crystal Display), or a projector. In addition to the display device 711, other output peripheral devices can comprise components such as speakers (not shown) and a printer (not shown) which can be connected to the computing device 701 via the Input/Output Interface 710. Any operation and/or result of the methods can be output in any form to an output device. Such output can be any form of visual representation, including, but not limited to, textual, graphical, animation, audio, tactile, and the like. The display device 711 and computing device 701 can be part of one device, or separate devices.

[00106] The computing device 701 can operate in a networked environment using logical connections to one or more remote computing devices 714a,b,c. By way of example, a remote computing device can be a personal computer, a portable computer, a smartphone, a server, a router, a network computer, a peer device, or another common network node, and so on. Logical connections between the computing device 701 and a remote computing device 714a,b,c can be made via a network 715, such as a local area network (LAN) and/or a general wide area network (WAN). Such network connections can be through the network adapter 708. The network adapter 708 can be implemented in both wired and wireless environments. In an aspect, one or more of the remote computing devices 714a,b,c can comprise an external engine and/or an interface to the external engine.

[00107] For purposes of illustration, application programs and other executable program components such as the operating system 705 are illustrated herein as discrete blocks, although it is recognized that such programs and components reside at various times in different storage components of the computing device 701, and are executed by the one or more processors 703 of the computer. An implementation of the software 706 can be stored on or transmitted across some form of computer-readable media. Any of the disclosed methods can be performed by computer-readable instructions embodied on computer-readable media. Computer-readable media can be any available media that can be accessed by a computer. By way of example and not meant to be limiting, computer-readable media can comprise “computer storage media” and “communications media.” “Computer storage media” comprise volatile and non-volatile, removable and non-removable media implemented in any methods or technology for storage of information such as computer-readable instructions, data structures, program modules, or other data. Exemplary computer storage media comprise, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a computer.

[00108] It is to be understood that the methods and systems described here are not limited to specific operations, processes, components, or structure described, or to the order or particular combination of such operations or components as described. It is also to be understood that the terminology used herein is for the purpose of describing exemplary embodiments only and is not intended to be restrictive or limiting.

[00109] As used herein, the singular forms “a,” “an,” and “the” include both singular and plural referents unless the context clearly dictates otherwise. Values expressed as approximations, by use of antecedents such as “about” or “approximately,” shall include reasonable variations from the referenced values. If such approximate values are included with ranges, not only are the endpoints considered approximations, the magnitude of the range shall also be considered an approximation. Lists are to be considered exemplary and not restricted or limited to the elements comprising the list or to the order in which the elements have been listed unless the context clearly dictates otherwise.

[00110] Throughout the specification and claims of this disclosure, the following words have the meaning that is set forth: “comprise” and variations of the word, such as “comprising” and “comprises,” mean including but not limited to, and are not intended to exclude, for example, other additives, components, integers, or operations. “Include” and variations of the word, such as “including” are not intended to mean something that is restricted or limited to what is indicated as being included, or to exclude what is not indicated. “May” means something that is permissive but not restrictive or limiting. “Optional” or “optionally” means something that may or may not be included without changing the result or what is being described. “Prefer” and variations of the word such as “preferred” or “preferably” mean something that is exemplary and more ideal, but not required. “Such as” means something that serves simply as an example.

[00111] Operations and components described herein as being used to perform the disclosed methods and construct the disclosed systems are illustrative unless the context clearly dictates otherwise. It is to be understood that when combinations, subsets, interactions, groups, etc. of these operations and components are disclosed, that while specific reference of each various individual and collective combinations and permutation of these may not be explicitly disclosed, each is specifically contemplated and described herein, for all methods and systems. This applies to all aspects of this application including, but not limited to, operations in disclosed methods and/or the components disclosed in the systems. Thus, if there are a variety of additional operations that can be performed or components that can be added, it is understood that each of these additional operations can be performed and components added with any specific embodiment or combination of embodiments of the disclosed systems and methods.

[00112] Embodiments of this disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the methods and systems may take the form of a computer program product on a computer-readable storage medium having computer-readable program instructions (e.g., computer software) embodied in the storage medium. More particularly, the present methods and systems may take the form of web-implemented computer software. Any suitable computer-readable storage medium may be utilized, including hard disks, CD-ROMs, optical storage devices, or magnetic storage devices, whether internal, networked, or cloud-based.

[00113] Embodiments of this disclosure have been described with reference to diagrams, flowcharts, and other illustrations of computer-implemented methods, systems, apparatuses, and computer program products. Each block of the block diagrams and flowchart illustrations, and combinations of blocks in the block diagrams and flowchart illustrations, respectively, can be implemented by processor-accessible instructions. Such instructions can include, for example, computer program instructions (e.g., processor-readable and/or processor-executable instructions). The processor-accessible instructions can be built (e.g., linked and compiled) and retained in processor-executable form in one or multiple memory devices or one or many other processor-accessible non-transitory storage media. These computer program instructions (built or otherwise) may be loaded onto a general-purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine. The loaded computer program instructions can be accessed and executed by one or multiple processors or other types of processing circuitry. In response to execution, the loaded computer program instructions provide the functionality described in connection with flowchart blocks (individually or in a particular combination) or blocks in block diagrams (individually or in a particular combination). Thus, such instructions which execute on the computer or other programmable data processing apparatus create a means for implementing the functions specified in the flowchart blocks (individually or in a particular combination) or blocks in block diagrams (individually or in a particular combination).

[00114] These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including processor-accessible instructions (e.g., processor-readable instructions and/or processor-executable instructions) to implement the function specified in the flowchart blocks (individually or in a particular combination) or blocks in block diagrams (individually or in a particular combination). The computer program instructions (built or otherwise) may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operations to be performed on the computer or other programmable apparatus to produce a computer-implemented process. The series of operations can be performed in response to execution by one or more processors or other types of processing circuitry. Thus, such instructions that execute on the computer or other programmable apparatus provide operations for implementing the functions specified in the flowchart blocks (individually or in a particular combination) or blocks in block diagrams (individually or in a particular combination).

[00115] Accordingly, blocks of the block diagrams and flowchart illustrations support combinations of means for performing the specified functions in connection with such diagrams and/or flowchart illustrations, combinations of operations for performing the specified functions and program instruction means for performing the specified functions. Each block of the block diagrams and flowchart illustrations, and combinations of blocks in the block diagrams and flowchart illustrations, can be implemented by special purpose hardware-based computer systems that perform the specified functions or operations, or combinations of special purpose hardware and computer instructions.

[00116] The methods and systems can employ artificial intelligence techniques such as machine learning and iterative learning. Examples of such techniques include, but are not limited to, expert systems, case-based reasoning, Bayesian networks, behavior-based AI, neural networks, fuzzy systems, evolutionary computation (e.g., genetic algorithms), swarm intelligence (e.g., ant algorithms), and hybrid intelligent systems (e.g., expert inference rules generated through a neural network or production rules from statistical learning).

[00117] While the computer-implemented methods, apparatuses, devices, and systems have been described in connection with preferred embodiments and specific examples, it is not intended that the scope be limited to the particular embodiments set forth, as the embodiments herein are intended in all respects to be illustrative rather than restrictive.

[00118] Unless otherwise expressly stated, it is in no way intended that any method set forth herein be construed as requiring that its operations be performed in a specific order. Accordingly, where a method claim does not actually recite an order to be followed by its operations, or it is not otherwise specifically stated in the claims or descriptions that the operations are to be limited to a specific order, it is in no way intended that an order be inferred, in any respect. This holds for any possible non-express basis for interpretation, including: matters of logic with respect to arrangement of operations or operational flow; plain meaning derived from grammatical organization or punctuation; and the number or type of embodiments described in the specification.

[00119] It will be apparent to those skilled in the art that various modifications and variations can be made without departing from the scope or spirit. Other embodiments will be apparent to those skilled in the art from consideration of the specification and practice disclosed herein. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit being indicated by the following claims.