U.S. Pat. No. 12,109,498

SYSTEMS AND METHODS FOR TRANSMITTING INTERACTIVE GAME FEATURES TO MULTIPLE DEVICES OF A SHARED STREAMING MEDIA SESSION

Assignee: Discovery.com LLC

Issue Date: March 20, 2023

Illustrative Figure

Abstract

Systems and methods for implementing one or more game features across devices associated with a synchronous multimedia stream are disclosed. One method may include: establishing, on the server, a virtual media streaming session; connecting two or more user profiles of a media streaming platform to the virtual media streaming session; providing, in the virtual media streaming session, an article of multimedia content, wherein the article of multimedia content is simultaneously viewable on user devices associated with each of the two or more user profiles; determining, using a processor associated with the server, whether one or more interactive game features associated with the article of multimedia content are activated; and implementing, responsive to determining that the one or more interactive game features are activated, aspects associated with the one or more interactive game features in the virtual media streaming session.

Description

DETAILED DESCRIPTION OF EMBODIMENTS

The terminology used below may be interpreted in its broadest reasonable manner, even though it is being used in conjunction with a detailed description of certain specific examples of the present disclosure. Indeed, certain terms may even be emphasized below; however, any terminology intended to be interpreted in any restricted manner will be overtly and specifically defined as such in this Detailed Description section. Both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the features, as claimed.

In this disclosure, the term “based on” means “based at least in part on.” The singular forms “a,” “an,” and “the” include plural referents unless the context dictates otherwise. The term “exemplary” is used in the sense of “example” rather than “ideal.” The terms “comprises,” “comprising,” “includes,” “including,” or other variations thereof, are intended to cover a non-exclusive inclusion such that a process, method, or product that comprises a list of elements does not necessarily include only those elements, but may include other elements not expressly listed or inherent to such a process, method, article, or apparatus. Relative terms, such as, “substantially” and “generally,” are used to indicate a possible variation of ±10% of a stated or understood value.

The terms “user,” “subscriber,” and the like generally encompass consumers who are subscribed to a streaming service (e.g., streaming platform) associated with the system described herein. The term “streaming service” (e.g., streaming platform) may refer to subscription-based video-on-demand (SVoD) services such as television shows, films, documentaries, and the like. The term “user” may be used interchangeably with “user profile,” “profile,” and the like throughout this application. The phrase “linking users” may be used interchangeably with “linking user profiles” and the like throughout this application. The phrase “registered with” may be used interchangeably with “subscribed to” and the like throughout this application. The phrase “multimedia content” or “media content” may be used interchangeably with “multimedia content item” and the like throughout this application.

In the following description, embodiments will be described with reference to the accompanying drawings. As will be discussed in more detail below, in various embodiments, systems and methods for synchronous group consumption of streaming media are described.

Application platforms exist that attempt to duplicate the real-life group-based viewing experience. More particularly, many popular media-streaming services provide users with an option to view multimedia content with one or more other individuals in a virtual setting (e.g., a virtual room). Participants involved in this shared experience may each view the same article of media content substantially simultaneously (i.e., using their own electronic devices). Such a feature may ultimately improve an individual's content consumption experience by providing a social component that they otherwise would not have by streaming media using conventional means.

Some individuals, however, may want a more immersive experience with their virtual group. Specifically, these individuals desire virtual room functionality that is able to more closely duplicate a real-life group viewing setting. Accordingly, a variety of application platforms exist today that provide individuals the option to provide input (e.g., comments, emotive reactions, etc.) that other participants involved in the shared viewing experience may see. Although such features improve the social nature of the virtual group, they still do not enable the virtual group participants to engage in various activities that they otherwise might in real life.

To address the above-noted problems, the present disclosure describes a system that can expand the nature of interactions that participants in a virtual “watch party” may have with one another. In addition to improving upon the way that virtual group participants provide input to the virtual group (e.g., by enabling voice-to-text input functionality, presenting comments/reactions in more intuitive ways, etc.), the system may also enable participants to activate certain game features that are based on content occurring within the synchronous article of multimedia content. Additionally, the system may further augment the shared viewing experience by leveraging sensors and/or components on each user's device to enact haptic effects that may mirror events occurring in the synchronous article of multimedia content.

In an exemplary use case, a server associated with the system may establish a virtual media streaming session. The virtual media streaming session may be a digital space in which groups of individuals may virtually gather (e.g., as part of a “watch party”) to synchronously view multimedia content (e.g., video series, movies, documentaries, live event broadcasts, etc.). In the context of this application, “synchronous viewing” refers to a situation in which an article of multimedia content is viewed simultaneously by each member in the viewing party. Specifically, a chosen multimedia stream may be simultaneously presented on devices associated with each user profile connected to the virtual media streaming session. A first user profile (a “host”) may facilitate creation of the virtual media streaming session by inviting one or more other individuals to partake in a shared multimedia viewing experience. Responsive to identifying the invitees, and user profiles associated therewith, the server may connect all relevant user profiles to the virtual media streaming session.
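Purely for illustration, the invite-and-connect flow described above might be sketched as follows. This is a minimal sketch under assumed names (the `VirtualMediaSession` class and its methods are hypothetical and not drawn from the disclosure):

```python
# Hypothetical sketch of establishing a virtual media streaming session and
# connecting invited user profiles. All names are illustrative assumptions.
import itertools

_session_ids = itertools.count(1)  # simple stand-in for server-side session IDs

class VirtualMediaSession:
    def __init__(self, host_profile):
        self.session_id = next(_session_ids)
        self.host = host_profile
        self.connected_profiles = {host_profile}  # host joins on creation
        self.pending_invites = set()

    def invite(self, profile):
        """Host invites another user profile to the shared viewing session."""
        self.pending_invites.add(profile)

    def accept_invite(self, profile):
        """Connect an invitee once the invite request is accepted."""
        if profile in self.pending_invites:
            self.pending_invites.remove(profile)
            self.connected_profiles.add(profile)

# The host ("user A") creates the session and invites "user B".
session = VirtualMediaSession(host_profile="user_a")
session.invite("user_b")
session.accept_invite("user_b")
```

Once both profiles appear in `connected_profiles`, the server could begin presenting the chosen multimedia article to every connected device.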

The participants to the shared multimedia stream may provide input (e.g., text-based comments, emoticons, like or dislike reactions, etc.) using their own devices that may then be presented to some or all of the other participants. In an embodiment, participants may view the synchronous multimedia article and provide input using a single device. Alternatively, in another embodiment, participants may utilize a combination of devices to optimize their shared viewing experience (e.g., a participant may view the article of multimedia content on a first device, such as their television, and utilize a second device, such as their smart phone, to provide input). In an embodiment, the input may be presented chronologically (i.e., in the order that it is received by the server) and/or may be co-displayed with the synchronous article of multimedia content (e.g., in an unobtrusive portion of the screen). In an embodiment, participants may provide a comment using various means. For example, participants can utilize a conventional input device such as a remote control, stylus, and/or digital keyboard to type the comment. Additionally or alternatively, participants can provide audible input to their device that can thereafter be processed by the server and be converted to text-based input. In an embodiment, participants may be enabled to select and send reactions by, for example, long pressing a “select” icon, scrolling through an available list of reactive emoticons, and then submitting the reaction by pressing the “select” icon again.
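The chronological presentation of participant input could be sketched as below. This is an assumed illustration (the `InputFeed` class and its timestamping scheme are hypothetical, not the patent's implementation):

```python
# Illustrative sketch: collecting participant input (comments, reactions) and
# replaying it chronologically, i.e., in the order the server received it.
import heapq

class InputFeed:
    def __init__(self):
        self._heap = []
        self._counter = 0  # tie-breaker preserving arrival order

    def submit(self, user, received_at, payload):
        # received_at: server-side receipt time in seconds (an assumption)
        heapq.heappush(self._heap, (received_at, self._counter, user, payload))
        self._counter += 1

    def chronological(self):
        # Sort by receipt time, then arrival order for identical timestamps.
        return [(u, p) for _, _, u, p in sorted(self._heap)]

feed = InputFeed()
feed.submit("user_b", 12.5, "😂")
feed.submit("user_a", 3.0, "Great opening scene!")
# chronological() yields user_a's comment first, since it arrived earlier.
```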

The virtual media streaming session may contain a “Games” option that, when activated, may enable participants to select a game that they can all play. More particularly, the system may transmit one or more interactive game features associated with a selected game type to the devices associated with the virtual media streaming session. Each game type may be designed to complement the synchronous multimedia article and features of the game may be based on aspects associated with the multimedia article. Non-limiting examples of possible game types include a task performance game, an item identification game, a quiz-based game, a poll-based game, and a sing-a-long game.

The subject matter of the present disclosure will now be described more fully hereinafter with reference to the accompanying drawings, which form a part hereof, and which show, by way of illustration, specific exemplary embodiments. An embodiment or implementation described herein as “exemplary” is not to be construed as preferred or advantageous, for example, over other embodiments or implementations; rather, it is intended to reflect or indicate that the embodiment(s) is/are “example” embodiment(s). Subject matter may be embodied in a variety of different forms and, therefore, covered or claimed subject matter is intended to be construed as not being limited to any exemplary embodiments set forth herein; exemplary embodiments are provided merely to be illustrative. Likewise, a reasonably broad scope for claimed or covered subject matter is intended. Among other things, for example, subject matter may be embodied as methods, devices, components, or systems. Accordingly, embodiments may, for example, take the form of hardware, software, firmware, or any combination thereof. The following detailed description is, therefore, not intended to be taken in a limiting sense.

Throughout the specification and claims, terms may have nuanced meanings suggested or implied in context beyond an explicitly stated meaning. Likewise, the phrase “in one embodiment” or “in some embodiments” as used herein does not necessarily refer to the same embodiment and the phrase “in another embodiment” as used herein does not necessarily refer to a different embodiment. It is intended, for example, that claimed subject matter include combinations of exemplary embodiments in whole or in part.

FIG. 1 is a diagram depicting an exemplary block diagram of a system environment 100 for enabling synchronous group consumption of multimedia content between at least two users, according to one or more embodiments of the present disclosure. The system environment 100 may include two or more user computing devices 105 operated by users 102, an electronic network 110, a computer server 115, an interaction database 120, and one or more content delivery databases 125. One of skill in the art would recognize that the server 115 may configure the one or more user computing devices 105 so as to experience different functionalities and/or have access to different information (e.g., determined by credentials such as user ID/password).

The user computing devices 105, the server 115, the interaction database 120, and/or the content delivery databases 125 may be connected via the network 110, using one or more standard communication protocols. The server 115 may be configured to receive data over the network 110 from the user computing devices 105, including, but not limited to, requests to invite others to a shared viewing experience in a virtual media streaming session (e.g., invite requests) and responses to the invite requests. “Interaction data” includes data received from the user computing devices 105 as well as data generated and analyzed by the computer server 115. Interaction data may be stored in the interaction database 120, and may include information as to which users/user profiles are present in a virtual media streaming session, the multimedia content (e.g., movies, shows, documentaries) presented in the virtual media streaming session, communications (e.g., reactions, comments, favorites) shared between users in the virtual media streaming session while watching the shared content, and notifications generated and transmitted by the computer server 115 to users. The server 115 may store the interaction data received over the network 110 in the interaction database 120.
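For illustration only, an interaction-data record of the kind described above might be modeled as follows; the field names are assumptions chosen to mirror the categories listed in this paragraph, not the disclosed schema:

```python
# Minimal sketch of an interaction-data record that a server like server 115
# might persist in an interaction database like database 120. Hypothetical.
from dataclasses import dataclass, field

@dataclass
class InteractionRecord:
    session_id: int
    participants: list                                   # user profiles present
    content_id: str                                      # multimedia article shown
    communications: list = field(default_factory=list)   # reactions, comments, favorites
    notifications: list = field(default_factory=list)    # server-generated notices

record = InteractionRecord(
    session_id=42,
    participants=["user_a", "user_b"],
    content_id="documentary_001",
)
# A comment shared during the synchronous stream gets appended as it arrives.
record.communications.append(("user_a", "comment", "This part is great"))
```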

In one or more embodiments, the computer server 115 and the interaction database 120 may be one server computer device and a single database, respectively. Alternatively, in one or more embodiments, the server 115 may be a server cluster, or any other collection or network of a plurality of computer servers. The interaction database 120 also may be a collection of a plurality of interconnected databases. The server 115 and the interaction database 120 may be components of one server system. Additionally, or alternatively, the server 115 and the interaction database 120 may be components of different server systems, with the network 110 serving as the communication channel between them. The computer server 115, the interaction database 120, and/or the content delivery databases 125 may be associated with an entity 130, such as a subscription-based streaming service provider (not shown). In some embodiments, the computer server 115, the interaction database 120, and/or the content delivery databases 125 may collectively be referred to as an entity system.

As shown in FIG. 1, the computer server 115 may be in communication with the user devices 105, transmitting and receiving messages across the network 110. The user devices 105 may be associated with users who are subscribed to a streaming service platform provided by the computer server 115. The network 110 may comprise one or more networks that connect devices and/or components of environment 100 to allow communication between the devices and/or components. For example, the network 110 may be implemented as the Internet, a wireless network, a wired network (e.g., Ethernet), a local area network (LAN), a wide area network (WAN), Bluetooth, Near Field Communication (NFC), or any other type of network that provides communications between one or more components of environment 100. In some embodiments, the network 110 may be implemented using cell and/or pager networks, satellite, licensed radio, or a combination of licensed and unlicensed radio. The network 110 may be associated with a cloud platform that stores data and information related to methods disclosed herein.

The user computing device 105 may include a display/user interface (UI) 105A, a processor 105B, a memory 105C, and/or a network interface 105D. The user computing device 105 may be a personal computer (PC), a tablet PC, a set-top box (STB), a streaming device (e.g., Apple TV®, Amazon Fire®, Roku® player, Google Chromecast®), a television (TV), a smart TV, a gaming console, a personal digital assistant (PDA), a mobile device, a palmtop computer, a laptop computer, a desktop computer, etc. The user computing device 105 may execute, by the processor 105B, an operating system (O/S) and at least one application (each stored in memory 105C). The application may be a browser program or a mobile application program (which may also be a browser program in a mobile O/S). The application may generate one or more interactive graphic user interfaces (GUIs), such as, for example, the exemplary GUIs shown in FIGS. 3-8, based on instructions/information received from the server 115. In some embodiments, the application may generate one or more interactive GUIs based on instructions/information stored in the memory 105C. The interactive GUIs may be application GUIs for the application executed based on XML and Android programming languages or Objective-C/Swift, but one skilled in the art would recognize that this may be accomplished by other methods, such as webpages executed based on HTML, CSS, and/or scripts, such as JavaScript. The display/UI 105A may be a touch screen or a display with other input systems (e.g., mouse, keyboard, etc.). The network interface 105D may be a TCP/IP network interface for, e.g., Ethernet or wireless communications with the network 110. The processor 105B, while executing the application, may receive user inputs from the display/UI 105A, and perform actions or functions in accordance with the application.

The computer server 115 may include a display/UI 115A, a processor 115B, a memory 115C, and/or a network interface 115D. The server 115 may be a computer, system of computers (e.g., rack server(s)), and/or a cloud service computer system. The server 115 may execute, by the processor 115B, an operating system (O/S) and at least one instance of a server program (each stored in memory 115C). The server 115 may store or have access to information from the interaction database 120 and the content delivery databases 125. The display/UI 115A may be a touch screen or a display with other input systems (e.g., mouse, keyboard, etc.) for an operator of the server 115 to control the functions of the server 115 (e.g., update the server program and/or the server information). The network interface 115D may be a TCP/IP network interface for, e.g., Ethernet or wireless communications with the network 110. The server program, executed by the processor 115B on the server 115, may be configured to identify interaction partners that align with a user's mission(s) of interest, as will be described in further detail below.

As described above, the interaction database 120 may store data associated with users subscribed to a streaming service platform associated with the computer server 115. In addition to the interaction data, the interaction database 120 may also store additional data received from the user computing devices 105, including real time and/or near real time location data (e.g., GPS data) of the user computing devices 105. The interaction database 120 may also store recommendations generated by the computer server 115, such as recommendations for a user to watch a certain show, movie, and/or other multimedia content based on the past viewing activity (e.g., viewing history) of other user(s) to which the user is linked (e.g., that the user is friends with on the streaming platform).

The interaction database 120 may also store user profiles generated by the computer server 115 for the users 102. In some embodiments, the information described above, including the interaction data and any additional data received from the user computing devices 105, may be stored in a plurality of user profiles within the interaction database 120. Each user profile may correspond to a respective user 102 in communication with the server 115. A user 102 may have multiple user computing devices 105 registered with the server 115 based on the user's viewing habits and/or preferences. For example, one user may register a personal laptop, a smart TV, and a mobile device with the server 115. Information associated with the registered user computing device(s) 105 and the user's viewing habits and/or viewing preferences may all be stored within the user's user profile.

In some embodiments, in addition to the interaction data, each user profile may also include information associated with a respective user and their user computing device(s) 105 (e.g., a device identifier, device type), the user's name, a username or a nickname provided by the user for the user profile, content viewing preferences provided by the user, recommendations provided to the user by the server 115, the user's past viewing activity, IP addresses and locations (as well as accompanying dates and times) from which the user has logged in, and/or user analytics generated by the server 115. In these embodiments, the user profiles may be updated to reflect a corresponding user's recent activities with another user, including sending or accepting an invite request.

In the system environment 100, the computer server 115 may retrieve multimedia content requested by users from the content delivery databases 125 to enable requested content to be streamed by users on the user computing devices 105. The content delivery databases 125 may store volumes of multimedia content items, including, for example, video series, movies, documentaries, and additional programming content that may be streamed by users 102 on the user computing devices 105. In some embodiments, the content delivery databases 125 may be maintained by third party content providers. In other embodiments, the content delivery databases 125 may be maintained by the server 115 and/or additional servers associated with the server 115.

FIG. 2 depicts an exemplary flowchart 200 for implementing one or more game features in a virtual media streaming session. The exemplary process flow 200 may be implemented by system environment 100, which includes user computing devices 105 associated with users 102, computer server 115, interaction database 120, and content delivery databases 125 (all shown in FIG. 1).

At step 205, an embodiment may establish a virtual media streaming session. Establishment of the virtual media streaming session may be originally facilitated in response to the transmission of an invite request (e.g., an invitation) from a first user (e.g., “user A”) to a second user (e.g., “user B”). User B may be a designated “friend” or “contact” of user A on the streaming service platform whose profile may be easily accessible to user A via navigation through a “friends list”. This invite request may be an invitation from user A to user B to join a virtual media streaming session, and more specifically, to be connected in a shared multimedia viewing experience via computer server 115. The invite request may be transmitted from user A's user computing device 105, and may identify a user (e.g., “user B”) whom user A wishes to invite to the virtual media streaming session. It is important to note that a first user can invite more than one other individual to the virtual media streaming session. The invitation of a single individual to the virtual media streaming session, as described here, is not intended to be limiting and is described as such purely for exemplary purposes.

At step 210, upon detecting acceptance of the invite request, an embodiment may connect, using computer server 115, the user profiles associated with user A and user B to the virtual media streaming session. Once connected, the computer server 115 may present, at step 215, an article of multimedia content (“multimedia article”) in the virtual media streaming session. The multimedia article may be selected by one or more of the participants present in the virtual media streaming session and may be retrieved from content delivery database(s) 125. This multimedia article may be synchronously viewed by each of the connected user profiles in a shared viewing experience, wherein each user watches the multimedia article on their own user computing device 105. Additionally, computer server 115 may enable input (e.g., comments, reactions, etc.) provided by any participant in the virtual media streaming session to be viewable by other participants.

At step 220, an embodiment may determine whether one or more game features associated with the multimedia article are activated. In this regard, an embodiment may determine whether the participants have selected to play one or more different game types while watching the multimedia article. Non-limiting examples of potential game types include a task performance game, an item spotting game, a quiz-based game, a poll-based game, a sing-a-long game, and the like. In an embodiment, activation of the game features may be facilitated by a single individual (e.g., the host of the virtual media streaming session) or, alternatively, may be facilitated by a group of individuals (e.g., via a majority vote in a group poll).

Responsive to determining, at step 220, that no game features are activated, an embodiment may, at step 225, take no additional action. More particularly, an embodiment may proceed to play the multimedia article in the virtual media streaming session without implementing any game features. Conversely, responsive to determining, at step 220, that one or more game features are activated, an embodiment may, at step 230, implement rules associated with the activated game features in the shared streaming experience. More particularly, the system may enable one or more game features associated with the selected game to be active on each user device connected to the virtual media streaming session.
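The branch just described, play the multimedia article as-is unless game features are activated, can be sketched as below. The function and feature names are hypothetical stand-ins, not the disclosed implementation:

```python
# Hedged sketch of the flowchart branch: when no game features are activated,
# only the multimedia article plays; otherwise the selected game's features
# are enabled on every connected device. Names are illustrative assumptions.
def run_session(game_features_activated, active_features=()):
    actions = ["play_multimedia_article"]
    if game_features_activated:
        # Corresponds to implementing game rules in the shared stream:
        # push each activated feature to the connected devices.
        for feature in active_features:
            actions.append(f"enable:{feature}")
    return actions

# No game selected: the article simply plays.
plain = run_session(False)
# Quiz and item-spotting games activated (e.g., by the host or a group poll).
gamed = run_session(True, ["quiz", "item_spotting"])
```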

FIGS. 3-8 depict exemplary interactive graphical user interfaces (GUIs) 300-800 for enhancing the shared experience for participants (e.g., users 102, shown in FIG. 1) in a virtual media streaming session according to one or more embodiments of the present disclosure. Exemplary GUIs 300-800 may be generated by the computer server 115 for display at each of the participants' user computing devices 105 (shown in FIG. 1), such as their TV or personal laptop.

In an embodiment, the participants may choose to play a task performance game in which, upon being presented with a notification in the virtual media streaming session, they each need to perform a predetermined task. For example, a task performance notification may intermittently appear in the virtual media streaming session prompting each user to take a sip of their drink (e.g., an alcoholic beverage). In an embodiment, the task performance notification may be configured to appear at a predetermined time interval (e.g., every minute, every 5 minutes, etc.) or may be configured to appear in response to events occurring in the multimedia article. More particularly, with respect to the latter, the computer server 115 may be able to glean data associated with objects presented and/or events occurring in the multimedia article (e.g., by accessing metadata associated with the article of multimedia content, by leveraging one or more audio and/or video processing techniques known in the art, etc.) and then utilize this data as a basis for providing the task performance notification. For example, a task performance notification may be configured to appear each time: a particular character is present in the scene, a specific word is uttered by any of the characters, a particular character performs a specific action, a specific object appears in the scene, and the like.

With continued reference to the task performance game, in an embodiment, the participants may either explicitly select the aspect(s) of the multimedia article that the task performance notification will be based on or may let the computer server 115 randomly choose. In an embodiment, the decision on which aspect(s) of the multimedia article to base the task performance notification on may be reliant upon a desired frequency of notification appearance (e.g., as designated by the participants). For example, in a situation where the participants want the task performance notification to occur at a low frequency, the computer server 115 may choose to base the notification on an infrequently occurring event (e.g., on the appearance of a secondary or tertiary character, on the utterance of an uncommon word or phrase, etc.). Conversely, in a situation where the participants want the task performance notification to occur at a higher frequency, the computer server 115 may choose to base the notification on a frequently occurring event (e.g., the appearance of a main character, the appearance of a primary object, the utterance of a popular word, etc.).
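One way to picture this frequency-matching choice is the sketch below, which selects the trigger event whose observed rate is closest to the participants' desired notification frequency. The event names, rates, and selection rule are all hypothetical illustrations:

```python
# Illustrative sketch: choose a trigger event whose occurrence rate best
# matches the desired notification frequency. Events and rates are assumed
# examples of data gleaned from metadata or audio/video analysis.
def pick_trigger(events, desired_per_hour):
    """events: mapping of event name -> estimated occurrences per hour."""
    return min(events, key=lambda e: abs(events[e] - desired_per_hour))

scene_events = {
    "main_character_on_screen": 40,    # frequently occurring event
    "catchphrase_uttered": 12,
    "secondary_character_appears": 3,  # infrequently occurring event
}

# Low desired frequency maps to a rare event; high maps to a common one.
low = pick_trigger(scene_events, desired_per_hour=4)
high = pick_trigger(scene_events, desired_per_hour=30)
```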

Turning now to FIG. 3, an exemplary GUI 300 depicts a non-limiting example implementation of the task performance game. As shown in exemplary GUI 300, a food travel show has been selected as the multimedia article that the participants in the virtual media streaming session have chosen to synchronously watch. In this iteration of the task performance game, a notification 305 is configured to appear each time the computer server 115 detects that a show host 310 takes a bite of food. In this instance, the notification 305 may instruct each participant to take a drink of their beverage.

In an embodiment, the participants may choose to play an item recognition game in which they are tasked with identifying when a specific item appears in the multimedia article. More particularly, one or more notifications may appear during a duration of the multimedia article that inform the participants that a designated item will appear within a predetermined time frame (e.g., the next 30 seconds, the next minute, etc.) and request that they identify the designated item as soon as they see it (e.g., by selecting an “item spotted” icon presented on the screen, by circling the designated item using their finger or other input device, etc.). In an embodiment, the computer server 115 may be able to glean data associated with items presented in the multimedia article using previously described techniques (e.g., by utilizing one or more audio and/or video analysis techniques, etc.) and may dynamically choose which of these identified items should be the subject of each “round” of item recognition.

With continued reference to the item recognition game, in an embodiment, the item recognition game may contain a scoring component in which participants are awarded points for spotting the designated items. All awarded points may then be aggregated at the conclusion of the multimedia article and a winner (i.e., the participant with the most awarded points) may be announced. In some embodiments, the scoring may be dependent upon how quickly a participant spots the designated item (e.g., a participant that identifies the designated item first may be awarded more points than a participant that identifies the designated item more slowly). Additionally, participants that misidentify the designated object (e.g., by circling the wrong object) or attempt to identify the object too early (e.g., by selecting an “item spotted” icon before the designated object appears) may be awarded 0 points for that round.
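A scoring round along these lines might be sketched as follows. The specific point values and the rank-based decay are assumptions for illustration; the disclosure only requires that faster correct spots score higher and that wrong or premature spots score 0:

```python
# Hedged sketch of item-recognition scoring: earlier correct spots earn more
# points; misidentifications or too-early presses earn 0. Point values assumed.
def score_round(responses, item_appears_at):
    """responses: iterable of (user, press_timestamp, identified_correctly)."""
    # Keep only correct spots made at or after the item actually appears,
    # ordered by how quickly each participant pressed.
    valid = sorted(
        (t, u) for u, t, correct in responses
        if correct and t >= item_appears_at
    )
    points = {u: 0 for u, _, _ in responses}  # wrong/early spots stay at 0
    for rank, (_, user) in enumerate(valid):
        points[user] = max(10 - 2 * rank, 1)  # fastest spotter scores highest
    return points

# user_b pressed before the item appeared (too early); user_c circled the
# wrong object; user_a spotted the item correctly.
round_points = score_round(
    [("user_a", 31.0, True), ("user_b", 29.0, True), ("user_c", 33.0, False)],
    item_appears_at=30.0,
)
```

Aggregating `round_points` across every round of the multimedia article would then yield the winner announced at its conclusion.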

Turning now to FIGS. 4A-4B, an exemplary GUI 400 depicts a non-limiting example implementation of the item recognition game. As shown in the exemplary GUI 400 of FIG. 4A, a home renovation show has been selected as the multimedia article that the participants in the virtual media streaming session have chosen to synchronously watch. During the duration of the show, a notification 405 appears that instructs each participant to keep a lookout for a vase with flowers. Additionally, the notification 405 may specify that this object will appear in a specific upcoming time frame (i.e., in the next 30 seconds). Turning now to FIG. 4B, exemplary GUI 400 now depicts a later point in the show (i.e., when the vase with flowers 410 is visible). In this iteration of the item recognition game, subsequent to provision of the notification 405 in FIG. 4A, an “item spotted” icon 415 may appear on each participant's screen. When a participant sees the designated object (i.e., the vase with flowers 410), they can select the “item spotted” icon 415 to provide an indication to the computer server 115 that they have spotted the object.

In an embodiment, the participants may choose to play a quiz-based game in which a plurality of quizzes may be periodically issued to the participants during the duration of the multimedia article. Each quiz may contain a question associated with content that occurred in the multimedia article. The question may be manually created (e.g., by a user, by a programmer of the media-streaming platform, etc.) or, alternatively, may be dynamically generated by the computer server 115 (e.g., based upon information gleaned from the multimedia article utilizing one or more video analysis techniques). In response to the quiz question, the participants may select a pre-configured answer from a popup list (e.g., a multiple-choice question list, etc.). In an embodiment, the participants may adjust the frequency at which the quiz questions appear. For example, the participants may desire for quiz questions to occur at predetermined intervals (e.g., every 5 minutes, etc.) or, alternatively, for the quiz questions to occur after each scene. In an embodiment, the multimedia article may be automatically paused during provision of the quiz question or, alternatively, may be configured to continue playing while participants answer the quiz question. In an embodiment, each quiz question may be required to be answered within a predetermined time period (e.g., within 10 seconds). For example, a countdown timer may be provided in a corner of the screen that indicates how long each participant has left to provide an answer to the quiz. In an embodiment, once all participants have selected an answer to the quiz question or once the predetermined time period to answer the quiz question has expired, whichever of the two happens first, the results of the quiz may be presented to each participant.
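The "all participants answered or the timer expired, whichever happens first" condition for revealing quiz results can be expressed as a simple predicate. This is an illustrative sketch; the function name and the 10-second default are assumptions, not from the patent.

```python
def quiz_results_ready(answers, participants, elapsed_seconds, time_limit=10):
    """Returns True once the quiz results should be revealed: either every
    participant has submitted an answer, or the predetermined answer window
    has expired, whichever occurs first.

    `answers` maps participant names to their selected choices; `participants`
    lists everyone connected to the virtual media streaming session."""
    everyone_answered = set(participants) <= set(answers)
    return everyone_answered or elapsed_seconds >= time_limit
```

The server could evaluate this predicate each time an answer arrives and on each tick of the on-screen countdown timer.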

With continued reference to the quiz-based game, in an embodiment, the participants may specify the difficulty level of the quiz questions (e.g., easy, medium, hard, a mix of all of the foregoing, etc.). The computer server 115 may rely on this difficulty designation in situations where it dynamically constructs the quiz questions. For instance, lower difficulty questions may involve actions taken or phrases spoken by primary characters, the locations of the primary characters, characteristics of readily apparent/popular objects, and the like. Conversely, higher difficulty questions may involve actions or phrases spoken by secondary or tertiary characters, characteristics of objects in the background, obscure names of people or places, and the like. In an embodiment, a scoring component may be associated with the quiz-based game in which participants are awarded points for each correct quiz answer. In an embodiment, the awarded points may be weighted based on difficulty (e.g., 1 point for getting an easy question correct, 2 points for getting a hard question correct, etc.). All awarded points may then be aggregated at the conclusion of the multimedia article and a winner (i.e., the participant with the most awarded points) may be announced.
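The difficulty-weighted aggregation described above (easy answers worth 1 point, hard answers worth 2, totals summed at the end of the article) can be sketched as follows. The weights match the example in the text; the `tally_quiz` name and the round data shape are illustrative assumptions.

```python
# Illustrative weights matching the example in the text.
POINTS = {"easy": 1, "hard": 2}

def tally_quiz(rounds):
    """Aggregates weighted quiz points across all rounds and returns the
    per-participant totals plus the winner.

    Each round maps a participant name to a (difficulty, answered_correctly)
    pair."""
    totals = {}
    for round_results in rounds:
        for name, (difficulty, correct) in round_results.items():
            totals[name] = totals.get(name, 0) + (POINTS[difficulty] if correct else 0)
    winner = max(totals, key=totals.get)  # participant with the most points
    return totals, winner
```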

Turning now to FIG. 5, an exemplary GUI 500 depicts a non-limiting example implementation of a quiz-based game. As shown in the exemplary GUI 500, a home renovation show has been selected as the multimedia article that the participants in the virtual media streaming session have chosen to synchronously watch. In this embodiment, the show is automatically paused at the end of a scene and a quiz question 505 is presented to each participant directed to a piece of information that was discussed in the previous scene (i.e., the cost of the kitchen remodel). The depicted quiz question 505 contains four answer choices (i.e., A, B, C, and D) from which a user must select one before the countdown timer 510 reaches zero. In this situation, a user has selected answer choice B, which happens to be the correct answer. The selections of the user, as well as the other participants, may then be presented to all participants in a results window 515. Once a predetermined time period has elapsed after the results window 515 is presented (e.g., 5 seconds, 10 seconds, etc.), all windows and icons (i.e., the quiz question 505, the countdown timer 510, and the results window 515) may be removed from each participant's screen and the show may automatically resume.

In an embodiment, the participants may choose to play a poll-based game in which a plurality of polls may be periodically provided to each of the participants during the duration of the multimedia article. Each poll may contain a question associated with content occurring in the multimedia article. The question may be manually created (e.g., by a user, by a programmer of the media-streaming platform, etc.) or, alternatively, may be dynamically generated by the computer server 115 (e.g., based upon information gleaned from the multimedia article utilizing one or more video analysis techniques). In response to the poll question, the participants may select a pre-configured answer from a popup list (e.g., a “yes” or “no” answer, an answer from a multiple-choice question list, etc.) within a predetermined period of time (e.g., before the time in a presented countdown timer expires, etc.). In an embodiment, once all participants have selected an answer to the poll question or once the predetermined time limit to answer the poll question has expired, whichever of the two happens first, the results of the poll may be presented to each participant.

With continued reference to the poll-based game, in an embodiment, the participants may adjust the frequency at which the poll questions appear. For example, the participants may desire for poll questions to occur at predetermined intervals (e.g., every 5 minutes, etc.) or, alternatively, may desire for the poll questions to occur after each scene. In an embodiment, the multimedia article may be automatically paused during provision of the poll question or, alternatively, may be configured to continue playing while participants answer the poll question. Non-limiting examples of poll questions may include: “Do you think he committed the crime?” after a scene in a crime investigation show; “What do you think is the right path for him to take?” after a scene in an adventure film; and “How do you think the date went?” after a date in a reality dating show.
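The results window for a poll like those above would summarize how the group voted. A minimal sketch of that tallying step follows; the `poll_summary` name and the count/percentage output shape are assumptions for illustration.

```python
from collections import Counter

def poll_summary(votes):
    """Builds the data behind a poll results window: per-option vote counts
    and rounded percentages.

    `votes` maps each participant name to the answer they selected."""
    counts = Counter(votes.values())
    total = len(votes)
    return {option: {"count": n, "percent": round(100 * n / total)}
            for option, n in counts.items()}
```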

Turning now to FIG. 6, an exemplary GUI 600 depicts a non-limiting example implementation of a poll-based game. As shown in the exemplary GUI 600, a home renovation show has been selected as the multimedia article that the participants in the virtual media streaming session have chosen to synchronously watch. In this embodiment, the show is automatically paused at the end of a scene and a poll question 605 is presented that requests each participant to indicate whether or not they liked the kitchen renovations that occurred in the home renovation show. The depicted poll question 605 contains two answer choices (i.e., yes and no) from which a user must choose one before the countdown timer 610 reaches zero. In this situation, a user has selected “yes”, indicating that they do like the home renovations. The selections of the user, as well as the other participants, may then be presented to all participants in a results window 615. Once a predetermined time period has elapsed after the results window 615 is presented (e.g., 5 seconds, 10 seconds, etc.), all windows and icons (i.e., the poll question 605, the countdown timer 610, and the results window 615) may be removed from each participant's screen and the show may automatically resume.

In an embodiment, the participants may choose to play a sing-a-long game in which they may be asked to sing along with a song occurring in the multimedia article. The sing-a-long game may be optimized for those articles of multimedia content that have one or more musical portions contained within (e.g., a musical show or film, a concert recording, a music video, etc.). In an embodiment, the computer server 115 may identify the song-based portions of the multimedia article using one or more previously described audio and/or video analysis techniques. In an embodiment, a notification may be provided to the participants when the song-based portion occurs in the multimedia article. In an embodiment, the notification may request that each participant sing or, alternatively, that only a singular designated participant should sing. In an embodiment, the notification may further specify how the designated participants should sing the song (e.g., using their normal voice, using a higher pitched voice, using a lower pitched voice, matching the pitch and/or tone of the original character(s) singing the song, etc.).

With continued reference to the sing-a-long game, in an embodiment, the designated participants may record their singing audio (e.g., using one or more microphones integrally or operatively coupled to their device). Upon receiving the audio recording(s), the computer server 115 may present the recorded audio to the other participants in the virtual media streaming session in one or more different ways. For example, the computer server 115 may present the recorded audio substantially as it is being recorded (e.g., the participant audio may be presented concurrently with the normal music, the normal music may be dynamically muted and replaced with the participant audio, etc.). As another example, the computer server 115 may dynamically pause the article of multimedia content (e.g., after the song is complete), play the recorded participant audio, and then automatically resume the article of multimedia content. In yet another example, an embodiment may present indications of each participant's recorded audio clip to the watch party (e.g., on a corner of the screen). Upon participant selection of a recorded audio clip, the selected audio clip may be played back to the selecting participant.

Turning now to FIG. 7, an exemplary GUI 700 depicts a non-limiting example implementation of a sing-a-long game. As shown in the exemplary GUI 700, a musical has been selected as the multimedia article that the participants in the virtual media streaming session have chosen to synchronously watch. In this embodiment, upon detection of a singing portion, a notification 705 is presented to a single participant, i.e., “Brian”, instructing them to sing. Upon receipt of this notification 705, Brian may select the “record audio” icon 710 to record their singing input. Once Brian's recording is complete, it may be accessible to the other participants in the virtual media streaming session. In this embodiment, Brian's recording may manifest as a play bar 715 that is presented in the corner of each participant's screen. Upon clicking the “play” icon on the play bar 715, the recording of Brian singing may be played back to the selecting participant.

In an embodiment, any combination of the foregoing game types may be implemented simultaneously (i.e., during viewing of a single article of multimedia content). For example, a singular participant may be designated to sing a song and then the remainder of the participants may be polled about that participant's performance. As another example, during viewing of a multimedia article, the participants may be presented with both quiz questions and item-spotting challenges related to the content occurring in the multimedia article.

In an embodiment, the shared viewing experience of the participants in the watch party may be further augmented by leveraging one or more sensors on the participant devices. For instance, the computer server 115 may provide instructions to the connected devices to provide haptic feedback (e.g., by leveraging one or more actuators integrated within these devices, etc.) that is synced with certain events occurring in the multimedia article. As a non-limiting example, and with reference to the exemplary GUI 800 in FIG. 8, an earthquake may be occurring in a natural disaster film 805. To simulate the rumbling sensation of the earthquake, each connected participant device may be configured to vibrate 810 (e.g., via actuating one or more integrated actuators, etc.) during some or all scenes involving the earthquake.
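Syncing haptic feedback with events in the multimedia article amounts to mapping the current playback position onto tagged event windows. The sketch below is illustrative; the window format and the instruction message shape are assumptions, not from the patent.

```python
def haptic_command(event_windows, playback_seconds):
    """Returns the instruction the server might push to each participant
    device at the current playback position: vibrate while inside any tagged
    event window (such as an earthquake scene), otherwise stop.

    `event_windows` is a list of (start_seconds, end_seconds, effect) tuples."""
    for start, end, effect in event_windows:
        if start <= playback_seconds < end:
            return {"command": "vibrate", "effect": effect}
    return {"command": "stop"}
```

Because all participants view the stream synchronously, a single playback position suffices to drive the same haptic effect on every connected device.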

In an embodiment, the watch party may include additional features that may improve the shared viewing experience. For example, the virtual media streaming session may contain a continuous play feature (not illustrated) that, when activated, may play another multimedia article after the conclusion of the current multimedia article. As another example, a democratic voting feature may be enabled that may allow participants to vote on the next article of multimedia content to watch, with a majority vote dictating the next article of multimedia content that may be presented in the virtual media streaming session.

FIG. 9 is a flowchart illustrating an exemplary method 900 for transmitting one or more game features to devices associated with the synchronous multimedia stream, according to one or more embodiments of the present disclosure. The method 900 may be performed by computer server 115 (shown in FIG. 1). At step 905, a virtual media streaming session may be established by the computer server 115. The virtual media streaming session may be a digital space where the user profiles of one or more individuals may be connected together to synchronously watch a multimedia article. At step 910, the computer server 115 may detect selection of one or more game features. These game features may complement the shared viewing experience by allowing the participants in the virtual media streaming session to engage in games that are based on the multimedia article, which may optimize the social component in the shared viewing experience. At step 915, responsive to detecting selection of one or more game features, the computer server 115 may implement rules associated with the selected games in the virtual media streaming session.
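The three steps of method 900 can be sketched as a minimal server class. All names here are illustrative assumptions for exposition, not an implementation disclosed by the patent.

```python
class WatchPartyServer:
    """Minimal sketch of method 900: establish a session (step 905), detect
    selection of game features (step 910), and implement the associated
    rules (step 915)."""

    def __init__(self):
        self.sessions = {}

    def establish_session(self, session_id, user_profiles):
        # Step 905: create the shared digital space where the connected
        # user profiles synchronously watch a multimedia article.
        self.sessions[session_id] = {"users": list(user_profiles), "games": []}

    def select_game_features(self, session_id, features):
        # Step 910: record which interactive game features were selected.
        self.sessions[session_id]["games"].extend(features)

    def implement_rules(self, session_id):
        # Step 915: apply the rules associated with each selected game.
        return [f"rules for {game}" for game in self.sessions[session_id]["games"]]
```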

Further aspects of the disclosure are discussed in the additional embodiments below. It should be understood that embodiments in this disclosure are exemplary only, and that other embodiments may include various combinations of features from other embodiments, as well as additional or fewer features.

In general, any process discussed in this disclosure that is understood to be computer-implementable, such as the process illustrated in FIG. 9, may be performed by one or more processors of a computer server, such as computer server 115, as described above. A process or process step performed by one or more processors may also be referred to as an operation. The one or more processors may be configured to perform such processes by having access to instructions (e.g., software or computer-readable code) that, when executed by the one or more processors, cause the one or more processors to perform the processes. The instructions may be stored in a memory of the computer server. A processor may be a central processing unit (CPU), a graphics processing unit (GPU), or any other suitable type of processing unit.

A computer system, such as computer server 115, may include one or more computing devices. If the one or more processors of the computer system are implemented as a plurality of processors, the plurality of processors may be included in a single computing device or distributed among a plurality of computing devices. If the computer server 115 comprises a plurality of computing devices, the memory of the computer server 115 may include the respective memory of each computing device of the plurality of computing devices.

FIG. 10 is a simplified functional block diagram of a computer system 1000 that may be configured as a computing device (e.g., the computer server 115) for executing the process illustrated in FIG. 9, according to exemplary embodiments of the present disclosure. In various embodiments, any of the systems herein may be an assembly of hardware including, for example, a data communication interface 1020 for packet data communication. The platform also may include a central processing unit (“CPU”) 1002, in the form of one or more processors, for executing program instructions. The platform may include an internal communication bus 1008, and a storage unit 1006 (such as ROM, HDD, SSD, etc.) that may store data on a computer readable medium 1022, although the system 1000 may receive programming and data via network communications. The system 1000 may also have a memory 1004 (such as RAM) storing instructions 1024 for executing techniques presented herein, although the instructions 1024 may be stored temporarily or permanently within other modules of system 1000 (e.g., processor 1002 and/or computer readable medium 1022). The system 1000 also may include input and output ports 1012 and/or a display 1010 to connect with input and output devices such as keyboards, mice, touchscreens, monitors, displays, etc. The various system functions may be implemented in a distributed fashion on a number of similar platforms, to distribute the processing load. Alternatively, the systems may be implemented by appropriate programming of one computer hardware platform.

Program aspects of the technology may be thought of as “products” or “articles of manufacture” typically in the form of executable code and/or associated data that is carried on or embodied in a type of machine-readable medium. “Storage” type media include any or all of the tangible memory of the computers, processors or the like, or associated modules thereof, such as various semiconductor memories, tape drives, disk drives and the like, which may provide non-transitory storage at any time for the software programming. All or portions of the software may at times be communicated through the Internet or various other telecommunication networks. Such communications, for example, may enable loading of the software from one computer or processor into another, for example, from a management server or host computer of the mobile communication network into the computer platform of a server and/or from a server to the mobile device. Thus, another type of media that may bear the software elements includes optical, electrical and electromagnetic waves, such as used across physical interfaces between local devices, through wired and optical landline networks and over various air-links. The physical elements that carry such waves, such as wired or wireless links, optical links, or the like, also may be considered as media bearing the software. As used herein, unless restricted to non-transitory, tangible “storage” media, terms such as computer or machine “readable medium” refer to any medium that participates in providing instructions to a processor for execution.

While the presently disclosed methods, devices, and systems are described with exemplary reference to synchronous consumption of streaming media, it should be appreciated that the presently disclosed embodiments may be applicable to transmitting data and may be applicable to any environment, such as a desktop or laptop computer, a gaming console environment, and any CTV (connected TV) environment (e.g., an internet-connected device used to watch multimedia content items), etc. Also, the presently disclosed embodiments may be applicable to any type of Internet protocol.

Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the embodiments disclosed herein. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.

In general, any process discussed in this disclosure that is understood to be performable by a computer may be performed by one or more processors. Such processes include, but are not limited to: the process shown in FIG. 9, and the associated language of the specification. The one or more processors may be configured to perform such processes by having access to instructions (computer-readable code) that, when executed by the one or more processors, cause the one or more processors to perform the processes. The one or more processors may be part of a computer system (e.g., one of the computer systems discussed above) that further includes a memory storing the instructions. The instructions also may be stored on a non-transitory computer-readable medium. The non-transitory computer-readable medium may be separate from any processor. Examples of non-transitory computer-readable media include solid-state memories, optical media, and magnetic media.

It should be appreciated that in the above description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. This method of disclosure, however, is not to be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the Detailed Description are hereby expressly incorporated into this Detailed Description, with each claim standing on its own as a separate embodiment of this invention.

Furthermore, while some embodiments described herein include some but not other features included in other embodiments, combinations of features of different embodiments are meant to be within the scope of the invention, and form different embodiments, as would be understood by those skilled in the art. For example, in the following claims, any of the claimed embodiments can be used in any combination.

Thus, while certain embodiments have been described, those skilled in the art will recognize that other and further modifications may be made thereto without departing from the spirit of the invention, and it is intended to claim all such changes and modifications as falling within the scope of the invention. For example, functionality may be added or deleted from the block diagrams and operations may be interchanged among functional blocks. Steps may be added or deleted to methods described within the scope of the present invention.

The above disclosed subject matter is to be considered illustrative, and not restrictive, and the appended claims are intended to cover all such modifications, enhancements, and other implementations, which fall within the true spirit and scope of the present disclosure. Thus, to the maximum extent allowed by law, the scope of the present disclosure is to be determined by the broadest permissible interpretation of the following claims and their equivalents, and shall not be restricted or limited by the foregoing detailed description. While various implementations of the disclosure have been described, it will be apparent to those of ordinary skill in the art that many more implementations are possible within the scope of the disclosure. Accordingly, the disclosure is not to be restricted except in light of the attached claims and their equivalents.

Claims

  1. A method for transmitting interactive game features to multiple devices associated with a synchronous multimedia stream using a server in network communication with at least one database, the method comprising operations including: establishing, on the server, a virtual media streaming session; connecting, using a processor associated with the server, two or more user profiles of a media streaming platform to the virtual media streaming session; initiating streaming, using the processor and in the virtual media streaming session, of an article of multimedia content, wherein the article of multimedia content is simultaneously viewable on user devices associated with each of the two or more user profiles and wherein the article of multimedia content is prerecorded; implementing, using the processor, aspects associated with one or more interactive game features in the virtual media streaming session; and transmitting, using the processor and responsive to the implementing, a request for user input from at least one of the two or more user profiles in furtherance of the one or more interactive game features, wherein the request is based on content occurring in the article of multimedia content.
  1. The method of claim 1, wherein the implementing comprises: initiating, in the virtual media streaming session, a poll;receiving, from each of the two or more user profiles, a vote;determining, based on the receiving, whether a majority vote exists to activate the one or more interactive game features;and activating, responsive to determining that the majority vote exists, the one or more interactive game features.
  2. The method of claim 1, wherein the request to provide user input corresponds to a challenge to perform a task and further comprising: identifying a difficulty associated with the challenge, wherein the difficulty is based on context of the content in the article of multimedia content and wherein the difficulty is one of a lower difficulty or a higher difficulty.
  3. The method of claim 3, wherein: the context of the content for the lower difficulty corresponds to one of: an action taken by a primary character, a phrase spoken by the primary character, and a popular object characteristic;the context of the content for the higher difficulty corresponds to one of: an action taken by a secondary character;a phrase spoken by the secondary character, and a background object characteristic.
  4. The method of claim 1, further comprising: identifying, in the article of multimedia content, a predetermined event;and transmitting, responsive to the identifying, instructions to each of the user devices to implement a haptic effect corresponding to the predetermined event.
  5. A system for transmitting interactive game features to multiple devices associated with a synchronous multimedia stream, the system comprising: a processor;at least one database;a server in network communication with the at least one database and the processor, the server storing instructions that are executable by the processor to: establish a virtual media streaming session;connect two or more user profiles of a media streaming platform to the virtual media streaming session;initiate streaming, in the virtual media streaming session, of an article of multimedia content, wherein the article of multimedia content is simultaneously viewable on user devices associated with each of the two or more user profiles and wherein the article of multimedia content is prerecorded;implement aspects associated with one or more interactive game features in the virtual media streaming session;and transmit, responsive to the implementing, a request for user input from to at least one of the two or more user profiles in furtherance of the one or more interactive game features, wherein the request is based on content occurring in the article of multimedia content.
  6. The system of claim 6, wherein the instructions executable by the processor to implement are further configured to cause the processor to: initiate, in the virtual media streaming session, a poll;receive, from each of the two or more user profiles, a vote;determine, based on the receiving, whether a majority vote exists to activate the one or more interactive game features;and activate, responsive to determining that the majority vote exists, the one or more interactive game features.
  7. The system of claim 6, wherein the request to provide user input corresponds to a challenge to perform a task and wherein the instructions are further executable to cause the processor to: identify a difficulty associated with the challenge, wherein the difficulty is based on context of the content in the article of multimedia content and wherein the difficulty is one of a lower difficulty or a higher difficulty.
  8. The system of claim 8, wherein: the context of the content for the lower difficulty corresponds to one of: an action taken by a primary character, a phrase spoken by the primary character, and a popular object characteristic;the context of the content for the higher difficulty corresponds to one of: an action taken by a secondary character;a phrase spoken by the secondary character, and a background object characteristic.
  9. The system of claim 6, wherein the instructions are further executable to cause the processor to: identify, in the article of multimedia content, a predetermined event;and transmit, responsive to the identifying, instructions to each of the user devices to implement a haptic effect corresponding to the predetermined event.
  10. A non-transitory computer-readable medium storing computer-executable instructions which, when executed by a server in network communication with at least one database, cause the server to perform operations comprising: establishing a virtual media streaming session;connecting two or more user profiles of a media streaming platform to the virtual media streaming session;initiating streaming, in the virtual media streaming session, of an article of multimedia content, wherein the article of multimedia content is simultaneously viewable on user devices associated with each of the two or more user profiles and wherein the article of multimedia content is prerecorded;implementing, using the processor, aspects associated with one or more interactive game features in the virtual media streaming session;and transmitting, responsive to the implementing, a request for user input from to at least one of the two or more user profiles in furtherance of the one or more interactive game features to perform a task, wherein the request is based on content occurring in the article of multimedia content.
  11. The non-transitory computer readable medium of claim 11, wherein the implementing determining comprises: initiating, in the virtual media streaming session, a poll;receiving, from each of the two or more user profiles, a vote;determining, based on the receiving, whether a majority vote exists to activate the one or more interactive game features;and activating, responsive to determining that the majority vote exists, the one or more interactive game features.
  12. The non-transitory computer readable medium of claim 11, wherein the request to provide user input corresponds to a challenge to perform a task and further comprising: identifying a difficulty associated with the challenge, wherein the difficulty is based on context of the content in the article of multimedia content and wherein the difficulty is one of a lower difficulty or a higher difficulty;wherein the context of the content for the lower difficulty corresponds to one of: an action taken by a primary character, a phrase spoken by the primary character, and a popular object characteristic;wherein the context of the content for the higher difficulty corresponds to one of: an action taken by a secondary character;a phrase spoken by the secondary character, and a background object characteristic.
  13. The non-transitory computer-readable medium of claim 10, wherein the operations further comprise: identifying, in the article of multimedia content, a predetermined event; and transmitting, responsive to the identifying, instructions to each of the user devices to implement a haptic effect corresponding to the predetermined event.
  14. The method of claim 1, wherein the implementing comprises: identifying, in the article of multimedia content, occurrence of one or more events; causing, subsequent to the identifying, the article of multimedia content to be paused; presenting, to users associated with the two or more user profiles, the request for user input, wherein the request for user input corresponds to a question directed to the one or more events; receiving, in response to the question, user input directed to at least two selectable answer options associated with the question; presenting a summary of the received user input from the users in the virtual media streaming session; and causing, subsequent to presenting the received user input for a predetermined period of time, the article of multimedia content to become unpaused.
  15. The method of claim 14, wherein the presenting comprises concurrently displaying a timer indicating a remaining time in which the user input to the question will be accepted.
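The pause / question / summary / unpause sequence recited in claims 14 and 15 can be sketched as below. This is an assumption-laden illustration: the function and field names are invented, and no real streaming API is referenced.

```python
# Illustrative sketch of claims 14-15: pause playback at an event, pose a
# question with a countdown timer, tally answers, show a summary for a
# predetermined period, then resume. All names are hypothetical.
import time
from collections import Counter

def run_question_event(stream: dict, question: str, options: list[str],
                       answers: dict[str, str], display_secs: float = 2.0) -> None:
    stream["paused"] = True                       # pause at the identified event
    stream["prompt"] = {"question": question,     # question plus visible timer
                        "options": options,
                        "timer_secs": display_secs}
    # Tally only answers that match a selectable option.
    tally = Counter(a for a in answers.values() if a in options)
    stream["summary"] = dict(tally)               # present the vote summary
    time.sleep(display_secs)                      # keep summary up briefly
    stream["paused"] = False                      # then unpause playback

stream = {"paused": False}
run_question_event(stream, "Who entered the room?", ["Alice", "Bob"],
                   {"u1": "Alice", "u2": "Alice", "u3": "Bob"}, display_secs=0)
print(stream["summary"])  # {'Alice': 2, 'Bob': 1}
```

In a real service the "timer" would be enforced server-side by closing the answer window, not by a blocking `sleep`; the blocking call here just stands in for the claimed predetermined display period.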
  16. The method of claim 1, wherein the implementing comprises implementing aspects of a first game type to a first subset of the two or more user profiles and implementing aspects of a second game type to a second subset of the two or more user profiles.
  17. The system of claim 6, wherein the instructions executable by the processor to implement are further configured to cause the processor to: identify, in the article of multimedia content, occurrence of one or more events; cause, subsequent to the identifying, the article of multimedia content to be paused; present, to users associated with the two or more user profiles, the request for user input, wherein the request for user input corresponds to a question directed to the one or more events; receive, in response to the question, user input directed to at least two selectable answer options associated with the question; present a summary of the received user input from the users in the virtual media streaming session; and cause, subsequent to presenting the received user input for a predetermined period of time, the article of multimedia content to become unpaused.
  18. The system of claim 17, wherein the instructions executable by the processor to present are further configured to cause the processor to: concurrently display a timer indicating a remaining time in which the user input to the question will be accepted.
  19. The system of claim 6, wherein the instructions executable by the processor to implement are further configured to cause the processor to: implement aspects of a first game type to a first subset of the two or more user profiles and implement aspects of a second game type to a second subset of the two or more user profiles.
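Claims 16 and 19 recite running different game types for different subsets of the connected profiles. A minimal sketch of that partitioning, with hypothetical names and an assumed even split (the patent does not say how subsets are chosen):

```python
# Illustrative sketch of claims 16/19: assign a first game type to one subset
# of profiles and a second game type to the rest. Names and the split rule
# are hypothetical assumptions, not taken from the patent.

def assign_game_types(profiles: list[str],
                      first_type: str = "trivia",
                      second_type: str = "scavenger_hunt") -> dict[str, str]:
    """Map the first half of the profiles to one game type, the rest to another."""
    half = len(profiles) // 2
    assignments = {p: first_type for p in profiles[:half]}
    assignments.update({p: second_type for p in profiles[half:]})
    return assignments

print(assign_game_types(["u1", "u2", "u3", "u4"]))
# {'u1': 'trivia', 'u2': 'trivia', 'u3': 'scavenger_hunt', 'u4': 'scavenger_hunt'}
```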

Disclaimer: Data collected from the USPTO may be malformed, incomplete, and/or otherwise inaccurate.