U.S. Pat. No. 11,185,786
METHODS AND APPARATUS FOR MONITORING ACTIONS DURING GAMEPLAY
Assignee: GN Audio AS
Issue Date: March 27, 2019
Illustrative Figure
Abstract
A system that incorporates the subject disclosure may include, for example, a processor that facilitates a performance of operations. The operations may include obtaining an identification of an action to monitor during a gameplay associated with a game, storing a representation of a sliding window of the gameplay in a first storage medium, monitoring, by the system, for the identification of the action during the gameplay, detecting the identification of the action during the gameplay responsive to the monitoring, and responsive to the detecting, storing the representation of the sliding window of the gameplay in a second storage medium that is different from the first storage medium. Additional embodiments are disclosed.
Description
DETAILED DESCRIPTION
The subject disclosure describes, among other things, illustrative embodiments for generating and sharing content associated with a gameplay of a game. Other embodiments are described in the subject disclosure.
One embodiment of the subject disclosure includes obtaining an identification of an action to monitor during a gameplay associated with a game. A representation of a sliding window of the gameplay may be stored in a first storage medium. A monitoring for the identification of the action may be performed during the gameplay. The identification of the action during the gameplay may be detected responsive to the monitoring. Responsive to the detecting, the representation of the sliding window of the gameplay may be stored in a second storage medium. The second storage medium may be the same as, or may be different from, the first storage medium.
An embodiment of the subject disclosure includes a capture and storage of memorable/noteworthy moments during a gameplay associated with a game. In some embodiments, a tag may be applied to a representation of a portion of the gameplay. The tag may include a watermark, metadata, or a combination thereof. The tag may include an identification of an action that occurred during the gameplay.
In some embodiments, a memorable/noteworthy moment may be identified based on one or more user inputs. The user input(s) may include an input obtained/received from a gamer. The user input(s) may be specified in accordance with a sound volume level, a count/number of messages (e.g., instant messages, email messages, etc.), a content of the messages (e.g., particular expressions/statements, punctuation (e.g., exclamation points or other characters or numbers)), emoticons/emojis, etc.
In some embodiments, a bookmarking technique may be applied to identify a memorable/noteworthy moment during a gameplay associated with a game. In some embodiments, a determination may be made when a given action/event has occurred during a gameplay. When such action has occurred, a representation of the gameplay may be stored in one or more storage mediums. In some embodiments, a user prompt may be presented to request confirmation that the representation of the gameplay should be stored and/or shared, e.g., with a contact of the user, on a social media platform, etc. Over time, and with experience, insight may be acquired/obtained as to what actions are perceived to be memorable/noteworthy to a particular user, such that in the future prompts might not be presented. In this respect, aspects of the disclosure may leverage/include machine learning and/or artificial intelligence technologies to enhance accuracy associated with an identification of memorable/noteworthy moments.
FIG. 1 depicts an illustrative embodiment of a Graphical User Interface (GUI) generated by an Accessory Management Software (AMS) application according to the present disclosure. The AMS application can be executed by a computing device such as a desktop computer, a laptop computer, a tablet, a server, a mainframe computer, a gaming console, a gaming accessory, or any combination or portions thereof. The AMS application can also be executed by portable computing devices such as a cellular phone, a personal digital assistant, or a media player. The AMS application can be executed by any device with suitable computing and communication resources.
FIG. 2 illustrates a number of embodiments for utilizing a gaming controller 115 with a computing device 206 in the form of a gaming console. In the illustration of FIG. 2, the gaming controller 115 can be communicatively coupled to the gaming console 206 with a tethered cable interface 202 such as a USB or proprietary cable, or a wireless interface 204 such as WiFi, Bluetooth, ZigBee, or a proprietary wireless communications protocol. The cable interface 202 provides a means for communication that may be less susceptible to electromagnetic interference. It will be appreciated that the gaming controller 115 may further include a headset 114 (with or without a microphone not shown) utilized by a gamer to communicate with teammates and/or to listen to game sounds in high fidelity. In the illustration of FIG. 2, the AMS application can in whole or in part be executed by the gaming controller 115, the gaming console 206, or a combination thereof.
FIG. 3 illustrates a number of other embodiments for utilizing a gaming controller 115 with a computing device 206. In this embodiment, the gaming controller 115 comprises a mouse and the computing device 206 comprises a computer. The gaming controller 115 can be tethered to the computing device 206 by a cable interface 202 (e.g., USB cable or proprietary cable) or a wireless interface 204. The cable interface 202 provides a means for communication that may be less susceptible to electromagnetic interference. It will be appreciated that the gaming controller 115 may further include a headset (with or without a microphone not shown) utilized by a gamer to communicate with teammates and/or to listen to game sounds in high fidelity. In the illustration of FIG. 3, the AMS application can in whole or in part be executed by the gaming controller 115, the gaming console 206, or a combination thereof.
For illustration purposes, the terms gaming console 206 and computer 206 will be used henceforth interchangeably with the term computing device 206 with an understanding that a computing device 206 may represent a number of other devices such as a server, a tablet, a smart phone, and so on. Accordingly, a computing device 206 can represent any device with suitable computing resources to perform the methods described in the subject disclosure.
FIG. 4 depicts an illustrative embodiment of a communication device 400. Communication device 400 can serve in whole or in part as an illustrative embodiment of devices described in the subject disclosure. The communication device 400 can comprise a wireline and/or wireless transceiver 402 (herein transceiver 402), a user interface (UI) 404, a power supply 414, a proximity sensor 416, a motion sensor 418, an orientation sensor 420, and a controller 406 for managing operations thereof. The transceiver 402 can support short-range or long-range wireless access technologies such as Bluetooth, WiFi, Digital Enhanced Cordless Telecommunications (DECT), or cellular communication technologies, just to mention a few. Cellular technologies can include, for example, CDMA-1X, UMTS/HSDPA, GSM/GPRS, TDMA/EDGE, EV/DO, WiMAX, software defined radio (SDR), Long Term Evolution (LTE), as well as other next generation wireless communication technologies as they arise. The transceiver 402 can also be adapted to support circuit-switched wireline access technologies (such as PSTN), packet-switched wireline access technologies (such as TCP/IP, VoIP, etc.), and combinations thereof.
The UI 404 can include a depressible or touch-sensitive keypad 408 coupled to a navigation mechanism such as a roller ball, a joystick, a mouse, or a navigation disk for manipulating operations of the communication device 400. The keypad 408 can be an integral part of a housing assembly of the communication device 400 or an independent device operably coupled thereto by a tethered wireline interface (such as a USB cable) or a wireless interface supporting for example Bluetooth. The keypad 408 can represent a numeric keypad, and/or a QWERTY keypad with alphanumeric keys. The UI 404 can further include a display 410 such as monochrome or color LCD (Liquid Crystal Display), OLED (Organic Light Emitting Diode) or other suitable display technology for conveying images to an end user of the communication device 400.
In an embodiment where the display 410 utilizes touch-sensitive technology, a portion or all of the keypad 408 can be presented by way of the display 410 with navigation features. As a touch screen display, the communication device 400 can be adapted to present a user interface with graphical user interface (GUI) elements that can be selected by a user with a touch of a finger. The touch screen display 410 can be equipped with capacitive, resistive or other forms of sensing technology to detect how much surface area of a user's finger has been placed on a portion of the touch screen display. This sensing information can be used to control the manipulation of the GUI elements.
The UI 404 can also include an audio system 412 that utilizes common audio technology for conveying low volume audio (such as audio heard only in the proximity of a human ear) and high volume audio (such as speakerphone for hands free operation, stereo or surround sound system). The audio system 412 can further include a microphone for receiving audible signals of an end user. The audio system 412 can also be used for voice recognition applications. The UI 404 can further include an image sensor 413 such as a charged coupled device (CCD) camera for capturing still or moving images and performing image recognition therefrom.
The power supply 414 can utilize common power management technologies such as replaceable or rechargeable batteries, supply regulation technologies, and charging system technologies for supplying energy to the components of the communication device 400 to facilitate long-range or short-range portable applications. Alternatively, the charging system can utilize external power sources such as DC power supplied over a physical interface such as a USB port or by way of a power cord attached to a transformer that converts AC to DC power.
The proximity sensor 416 can utilize proximity sensing technology such as an electromagnetic sensor, a capacitive sensor, an inductive sensor, an image sensor or combinations thereof. The motion sensor 418 can utilize motion sensing technology such as an accelerometer, a gyroscope, or other suitable motion sensing technology to detect movement of the communication device 400 in three-dimensional space. The orientation sensor 420 can utilize orientation sensing technology such as a magnetometer to detect the orientation of the communication device 400 (North, South, West, East, combined orientations thereof in degrees, minutes, or other suitable orientation metrics).
The communication device 400 can use the transceiver 402 to also determine a proximity to cellular, WiFi, Bluetooth, or other wireless access points by common sensing techniques such as utilizing a received signal strength indicator (RSSI) and/or a signal time of arrival (TOA) or time of flight (TOF). The controller 406 can utilize computing technologies such as a microprocessor, a digital signal processor (DSP), and/or a video processor with associated storage memory such as Flash, ROM, RAM, SRAM, DRAM or other storage technologies.
The communication device 400 as described herein can operate with more or fewer components than described in FIG. 4 to accommodate the implementation of devices described by the subject disclosure. These variant embodiments are contemplated by the subject disclosure.
FIGS. 5-7A depict methods 500-700 describing illustrative embodiments of the AMS application. Method 500 can begin with step 502 in which the AMS application is invoked in a computing device. The computing device can be a remote server (not shown), the gaming console 206 or computer 206 of FIGS. 2-3, or any other computing device with suitable computing resources. The invocation step can result from a user selection of the AMS application from a menu or iconic symbol presented by the computing device 206, or when a user communicatively couples a gaming controller 115 or other form of accessory device with the computing device 206. In step 504, the AMS application can detect, by way of software drivers in an operating system (OS) of the computing device 206, a plurality of operationally distinct accessories communicatively coupled to the computing device 206. The accessories can be coupled to the computing device 206 by a tethered interface (e.g., USB cable), a wireless interface (e.g., Bluetooth or Wireless Fidelity—WiFi), or combinations thereof.
In the present context, an accessory can represent any type of device which can be communicatively coupled to the computing device 206 (or which can be an integral part of the computing device) and which can control aspects of the OS and/or a software application operating from the computing device 206. An accessory can represent for example a keyboard, a touch screen display, a gaming pad, a gaming controller, a mouse, a joystick, a microphone, or a headset with a microphone—just to mention a few.
In step 506, the AMS application presents a GUI 101 such as depicted in FIG. 1, depicting operationally distinct accessories such as a keyboard 108 and a gaming controller 115. The GUI 101 presents the accessories 108-116 in a scrollable section 117. One or more accessories can be selected by a user with a mouse pointer. In this illustration, the keyboard 108 and the gaming controller 115 were selected for customization. Upon selecting the keyboard 108 and the gaming controller 115 from the scrollable window of section 117, the AMS application presents the keyboard 108 and the gaming controller 115 in split windows 118, 120, respectively, to assist the user during the customization process.
In step 508, the AMS application can be programmed to detect a user-selection of a particular software application such as a video game. This step can be the result of the user entering in a Quick Search field 160 the name of a gaming application (e.g., World of Warcraft™ or WoW). Upon identifying a gaming application, the AMS application can retrieve in step 510, from a remote or local database, gaming application actions which can be presented in a scrollable section 139 of the GUI represented as “Actions” 130. The actions can be tactical actions 132, communication actions 134, menu actions 136, and movement actions 138 which can be used to invoke and manage features of the gaming application.
The actions presented descriptively in section 130 of the GUI can represent a sequence of accessory input functions which a user can stimulate by button depressions, navigation or speech. For example, depressing the left button on the mouse 110 can represent the tactical action “Reload”, while the simultaneous keyboard depressions “Ctrl A” can represent the tactical action “Melee Attack”. For ease of use, the “Actions” 130 section of the GUI is presented descriptively rather than by a description of the input function(s) of a particular accessory.
Any one of the Actions 130 can be associated with one or more input functions of the accessories being customized in windows 118 and 120 by way of a drag and drop action or other customization options. For instance, a user can select a “Melee Attack” by placing a mouse pointer 133 over an iconic symbol associated with this action. Upon doing so, the symbol can be highlighted to indicate to the user that the icon is selectable. At this point, the user can select the icon by holding the left mouse button and dragging the symbol to any of the input functions (e.g., buttons) of the keyboard 108 or selectable options of the gaming controller 115 to make an association with an input function of one of these accessories. Actions of one accessory can also be associated with another accessory that is of a different category. For example, key depressions “Ctrl A” of the keyboard 108 can be associated with one of the buttons of the gaming controller 115 (e.g., the left button 119).
In one embodiment, a Melee Attack action can be associated by dragging this action to either the left button 119 or right button 120 of the gaming controller 115. Thus, when the selected button is depressed, the stimulus signal that is generated by the selected button of the gaming controller 115 can be substituted by the AMS application with the Melee Attack action. In another embodiment, the AMS application can be configured so that the Melee Action can be associated with a combination of key button presses (e.g., simultaneous depression of the left and right buttons 119, 121, or a sequence of button depressions: two rapid left button depressions followed by a right button depression).
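As a concrete illustration of the substitution mechanism just described, the following is a minimal Python sketch of a profile table that maps accessory stimuli, including button combinations, to substitute actions. All class and field names are illustrative assumptions, not the disclosure's own implementation.

```python
# Hypothetical sketch of a stimulus-substitution table; names are assumed.
class Profile:
    def __init__(self):
        # Keys are stimuli (single inputs or combinations); values are actions.
        self._associations = {}

    def associate(self, stimulus, action):
        """Record an association, e.g., from a drag-and-drop in the GUI."""
        self._associations[stimulus] = action

    def substitute(self, stimulus):
        """Return the substitute action, or None when no association exists."""
        return self._associations.get(stimulus)

profile = Profile()
profile.associate(("controller", "left_button"), "Melee Attack")
profile.associate(("keyboard", "Ctrl", "A"), "Melee Attack")  # key combination
assert profile.substitute(("controller", "left_button")) == "Melee Attack"
```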
In yet another embodiment, the AMS application can be configured so that the Melee Action can be associated with movement of the gaming controller 115 such as, for example, rapid movement or shaking of the gaming controller 115. In a further embodiment, the AMS application can be adapted to make associations with two dimensional or three dimensional movements of the gaming controller 115 according to a gaming venue state. For example, suppose the player's avatar enters a fighter jet. In this gaming venue state, moving the left navigation knob forward can be associated by the AMS application with controlling the throttle of the jet engines. Rapidly moving the gaming controller 115 downward can represent release of munitions such as a bomb.
In a gaming venue state where the gamer's avatar has entered a building, lifting of the gaming controller 115 above a first displacement threshold can be associated with a rapid movement of the avatar up one floor. A second displacement threshold can be associated with a rapid movement of the avatar down one floor—the opposite of the first displacement threshold. Alternatively, the second displacement threshold could be associated with a different action such as jumping between buildings when the avatar is on the roof of a building.
The AMS application can monitor gaming venue states by analyzing captured images produced by the gaming application (e.g., one or more still images of a tank, or a video of an avatar entering a tank), and/or by receiving messages from the gaming application by way of an application programming interface (API) thereby enabling the AMS application to identify the occurrence of a particular gaming venue state.
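A hedged sketch of the venue-state tracking just described might look as follows; the message format and field names are assumptions for illustration only, since the disclosure does not specify the API's message shape.

```python
# Assumed message shape: the gaming application reports venue changes via
# its API, and an AMS-style monitor records the current venue state.
def handle_api_message(message, venue_state):
    """Update the tracked gaming venue state from an API message."""
    if message.get("type") == "venue_change":
        venue_state["current"] = message["venue"]  # e.g., "tank", "on_foot"

venue_state = {"current": None}
handle_api_message({"type": "venue_change", "venue": "fighter_jet"}, venue_state)
print(venue_state["current"])  # -> fighter_jet
```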
At step 512 the AMS application can also respond to a user selection of a profile. A profile can be a device profile or master profile invoked by selecting GUI button 156 or 158, each of which can identify the association of gaming actions with input functions of one or more accessories. If a profile selection is detected in step 512, the AMS application can retrieve in step 514 macro(s) and/or prior associations defined by the profile. The actions and/or macros defined in the profile can also be presented in step 516 by the AMS application in the actions column 130 of the GUI 101 to modify existing profile associations or create new associations.
In step 518, the AMS application can also respond to a user selection to create a macro. A macro in the present context can mean any actionable command which can be recorded by the AMS application. An actionable command can represent a sequence of stimuli generated by manipulating input functions of an accessory, a combination of actions in the Action section 130, an identification of a software application to be initiated by the OS of the computing device 206, or any other recordable stimulus to initiate, control or manipulate software applications. For instance, a macro can represent a user entering the identity of a software application (e.g., instant messaging tool) to be initiated by the OS upon the AMS application detecting a speech command using speech recognition technology.
A macro can also represent recordable speech delivered by a microphone singly or in combination with a headset for detection by another software application through speech recognition or for delivery of the recorded speech to other parties. In yet another embodiment a macro can represent recordable navigation of an accessory such as a joystick of the gaming controller 115, recordable selections of buttons of the gaming controller 115, and so on. Macros can also be combinations of the above illustrations with selected actions from the Actions 130 menu. Macros can be created from the GUI 101 by selecting a “Record Macro” button 148. The macro can be given a name and category in user-defined fields 140 and 142.
Upon selecting the Record Macro button 148, a macro can be generated by selection of input functions on an accessory (e.g., Ctrl A, speech, navigation knob movements of the gaming controller 115, etc.) and/or by manual entry in field 144 (e.g., typing the name and location of a software application to be initiated by an OS, such as an instant messaging application, keyboard entries such as Ctrl A, etc.). Once the macro is created, it can be tested by selecting button 150 which can repeat the sequence specified in field 144. The clone button 152 can be selected to replicate the macro sequence if desired. Fields 152 can also present timing characteristics of the stimulation sequence in the macro with the ability to modify and thereby customize the timing of one or more stimulations in the stimulation sequence. Once the macro has been fully defined, selection of button 154 records the macro in step 520. The recording step can be combined with a step for adding the macro to the associable items of the Actions column 130, thereby providing the user the means to associate the macro with input functions of the accessories (e.g., one or more keys of the keyboard 108, buttons of the gaming controller 115, etc.).
In step 522, the AMS application can respond to drag and drop associations of actions with input functions of the keyboard 108 or the gaming controller 115. Associations can also be made based on the two or three dimensional movements of the gaming controller 115. If user input indicates that a user is performing an association, the AMS application can proceed to step 524 where it can determine if a profile has been identified in step 512 to record the association(s) detected. If a profile has been identified, the associations are recorded/stored in the profile in step 526. If a profile has not been identified in step 512, the AMS application can create a profile in step 528 for recording the detected associations. In the same step, the user can name the newly created profile as desired. The newly created profile can also be associated with one or more gaming software applications in step 530 for future reference. The AMS application can also record in a profile in step 526 associations based on gaming venue states. In this embodiment the same stimuli generated by the gaming controller 115 can result in different substitutions based on the gaming venue state detected by the AMS application.
Referring back to step 526, once the associations have been recorded in a profile, the AMS application can determine in step 532 which of the accessories shown illustratively in FIGS. 1-3 are programmable and available for programming. If the AMS application detects that an accessory (e.g., keyboard 108, gaming controller 115) is communicatively coupled to the computing device 206 and determines that the accessory is capable of performing stimulus substitutions locally, the AMS application can proceed to step 534 of FIG. 5 where it submits the profile and its contents for storage in the accessory (e.g., the gaming controller 115 in FIGS. 2-3). Once the accessory (e.g., the gaming controller 115) is programmed with the profile, the accessory can perform stimuli substitutions according to the associations recorded by the AMS application in the profile. Alternatively, the AMS application can store the profile in the computing device 206 of FIGS. 2-3 and perform substitutions of stimuli supplied by the gaming controller 115 according to associations recorded in the profile by the AMS application.
The GUI 101 of FIG. 1 presented by the AMS application can have other functions. For example, the GUI 101 can present a layout of the accessory (button 122), how the accessory is illuminated when associations between input functions and actions are made (button 124), and configuration options for the accessory (button 126). The AMS application can adapt the GUI 101 to present more than one functional GUI page. For instance, by selecting button 102, the AMS application can adapt the GUI 101 to present a means to create macros and associate actions to accessory input functions as depicted in FIG. 1. Selecting button 104 can cause the AMS application to adapt the GUI 101 to present statistics from stimulation information and/or gaming action results captured by the AMS application as described in the subject disclosure. Selecting button 106 can also cause the AMS application to adapt the GUI 101 to present promotional offers and software updates.
The steps of method 500 in whole or in part can be repeated until a desirable pattern of associations between stimulus signals generated by accessories and substitute stimuli is achieved. It would be apparent to an artisan with ordinary skill in the art that there can be numerous other approaches to accomplish the embodiments described by method 500 or variants thereof. These undisclosed approaches are contemplated by the subject disclosure.
FIG. 6 depicts a method 600 for illustrating additional operations of the AMS application. In the configurations of FIGS. 2-3, the AMS application can be operating in whole or in part from the gaming controller 115, a gaming console 206, a computer 206, or a remote server (not shown). For illustration purposes, it is assumed the AMS application operates from the gaming console 206. Method 600 can begin with the AMS application establishing communications in steps 602 and 604 between the gaming console 206 and a gaming accessory such as the gaming controller 115, and a headset 114 such as shown in FIG. 1. These steps can represent, for example, a user starting the AMS application from the gaming console 206 and/or the user inserting at a USB port of the gaming console 206 a connector of a USB cable tethered to the gaming controller 115, which invokes the AMS application. In step 606, the gaming controller 115 and/or headset 114 can in turn provide the AMS application one or more accessory IDs, or the user can provide a user identification by way of a keyboard or the gaming controller 115. With the accessory IDs or user input, the AMS application can identify in step 608 a user account associated with the gaming controller 115 and/or headset 114. In step 610, the AMS application can retrieve one or more profiles associated with the user account.
In step 612, the user can be presented, by way of a display coupled to the gaming console 206, with profiles available to the user to choose from. If the user makes a selection, the AMS application proceeds to step 614 where it retrieves from the selected profiles the association(s) stored therein. If a selection is not made, the AMS application can proceed to step 616 where it can determine whether a software gaming application (e.g., video game) is operating from the gaming console 206 or whether the gaming console 206 is communicating with the software gaming application by way of a remote system communicatively coupled to the gaming console 206 (e.g., on-line gaming server(s) presenting, for example, World of Warcraft™). If a gaming software application is detected, the AMS application proceeds to step 617 where it retrieves a profile that matches the gaming application detected and the association(s) contained in the profile. As noted earlier, association(s) can represent accessory stimulations, navigation, speech, the invocation of other software applications, macros or other suitable associations that result in substitute stimulations. The accessory stimulations can be stimulations that are generated by the gaming controller 115, as well as stimulations from other accessories (e.g., headset 114), or combinations thereof.
Once a profile and its contents have been retrieved in either of steps 614 or 617, the AMS application can proceed to step 719 of FIG. 7 where it monitors for a change in a gaming venue state based on the presentations made by the gaming application, or API messages supplied by the gaming application. At the start of a game, for example, the gaming venue state can be determined immediately depending on the gaming options chosen by the gamer. The AMS application can determine the gaming venue state by tracking the gaming options chosen by a gamer, receiving an API instruction from the gaming application, or by performing image processing on the video presentation generated by the gaming application. For example, the AMS application can detect that the gamer has directed an avatar to enter a tank. The AMS application can retrieve in step 719 associations for the gaming controller 115 for controlling the tank.
The AMS application can process movements of the gaming controller 115 forwards, backwards, or sideways in two or three dimensions to control the tank's movement. Similarly, rotating the gaming controller 115 or tilting the gaming controller 115 forward can cause an accelerometer, gyro or magnetometer of the gaming controller 115 to provide navigational data to the AMS application, which can be substituted with an action to cause the tank to turn and/or move forward. The profile retrieved by the AMS application can indicate that the greater the forward tilt of the gaming controller 115, the greater the speed at which the tank should move forward. Similarly, a rear tilt can generate navigation data that is substituted with a reverse motion and/or deceleration of the forward motion to stop or slow down the tank. A three dimensional lift of the mouse can cause the tank to steer according to the three dimensional navigation data provided by the gaming controller 115. For example, navigation data associated with a combination of a forward tilt and right bank of the gaming controller 115 can be substituted by the AMS application to cause an increase in forward speed of the tank with a turn to the right determined by the AMS application according to a degree of banking of the gaming controller 115 to the right. In the above embodiment, the three dimensional navigation data allows a gamer to control any directional vector of the tank including speed, direction, acceleration and deceleration.
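The tilt-to-speed relationship described above ("the greater the forward tilt, the greater the speed") could be realized with a simple proportional mapping. The linear form, limits, and units below are illustrative assumptions, not the patented implementation.

```python
def tilt_to_speed(tilt_deg, max_tilt=45.0, max_speed=10.0):
    """Map controller tilt to tank speed: positive tilt drives forward,
    negative tilt reverses or decelerates; tilt is clamped to max_tilt."""
    tilt = max(-max_tilt, min(max_tilt, tilt_deg))
    return (tilt / max_tilt) * max_speed

print(tilt_to_speed(22.5))  # half of maximum forward speed: 5.0
print(tilt_to_speed(-9.0))  # gentle reverse/deceleration: -2.0
```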
In another illustration, the AMS application can detect a new gaming venue state as a result of the gamer directing the avatar to leave the tank and travel on foot. Once again the AMS application retrieves in step 719 associations related to the gaming venue state. In this embodiment, selection of buttons of the gaming controller 115 can be associated by the AMS application with weaponry selection, firing, reloading and so on. The movement of the gaming controller 115 in two or three dimensions can control the direction of the avatar and/or selection or use of weaponry. Once the gaming venue state is detected in step 719, the AMS application retrieves the associations related to the venue state, and can perform substitutions of stimuli generated by the gaming controller 115, and/or speech commands received by a microphone of the headset 114.
In one embodiment, the AMS application can be configured in step 719 to retrieve a profile that provides substitute stimuli for replacing certain stimuli generated by accessories. The associations recorded in the profile can be venue independent. In another embodiment, the AMS application can retrieve a combination of profiles, where one or more profiles provide substitute stimuli that are venue dependent and one or more other profiles provide substitute stimuli that are venue independent.
The AMS application can monitor in step 720 stimulations generated by the accessories coupled to the gaming console 206. The stimulations can be generated by the gamer by manipulating the gaming controller 115, and/or by generating speech commands detected by a microphone of the headset 114. If a stimulation is detected at step 720, the AMS application can determine in step 722 whether to forward the detected stimulation(s) to an Operating System (OS) of the gaming console 206 or the gaming application directly without substitutions. This determination can be made by comparing the detected stimulation(s) to corresponding associations in one or more profiles retrieved by the AMS application. If the detected stimulation(s) match the associations, then the AMS application proceeds to step 740 where it retrieves substitute stimulation(s) in the profile(s). In step 742, the AMS application can substitute the detected stimulation(s) with the substitute stimulations in the profile(s).
In one embodiment, the AMS application can track in step 744 the substitute stimulations by updating the stimulations with a unique identifier such as a globally unique identifier (GUID). In this embodiment, the AMS application can also add a time stamp to each substitute stimulation to track when the substitution was performed. In another embodiment, the AMS application can track each substitute stimulation according to its order of submission to the gaming application. For instance, sequence numbers can be generated for the substitute stimulations to track the order in which they were submitted to the gaming application. In this embodiment, the substitute stimulations do not need to be updated with sequence numbers or identifiers so long as the order of gaming action results submitted by the gaming application to the AMS application remains in the same order as the substitute stimulations were originally submitted.
For example, if a first stimulation sent to the gaming application by the AMS application is a command to shoot, and a second stimulation sent to the gaming application is a command to shoot again, then so long as the gaming application provides a game action result for the first shot, followed by a game action result for the second shot, the substitute stimulations will not require updating with sequence numbers since the game action results are reported in the order that the stimulations were sent. If, on the other hand, the game action results can be submitted out of order, then updating the stimulations with sequence numbers or another suitable identifier would be required to enable the AMS application to properly track and correlate stimulations and corresponding gaming action results.
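The following is a minimal sketch of the GUID-and-timestamp tracking scheme described in the two preceding paragraphs; the class and method names are assumptions, not the patent's own code.

```python
import time
import uuid

class StimulationTracker:
    """Tracks substitute stimulations with a GUID and timestamp so that
    out-of-order game action results can still be correlated."""

    def __init__(self):
        self._pending = {}

    def track(self, substitute):
        """Tag a substitute stimulation before submitting it to the game."""
        guid = str(uuid.uuid4())
        self._pending[guid] = {"substitute": substitute, "sent_at": time.time()}
        return guid

    def correlate(self, guid, result):
        """Match a game action result to the stimulation with the same GUID."""
        record = self._pending.pop(guid)
        record["result"] = result
        return record

tracker = StimulationTracker()
guid = tracker.track("keyboard F")  # e.g., a remapped fire command
print(tracker.correlate(guid, "Hit"))
```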
Referring back to step 722, if the detected stimulation(s) do not match an association in the profile(s), then the AMS application proceeds to one of steps 744 or 746 in order to track the stimulations of the accessory as described above. In another embodiment, tracking of original stimulations or substitute stimulations can be bypassed by skipping steps 744 or 746 and proceeding to step 770 of FIG. 7B.
Once the stimulations received in step 720 have been substituted with other stimulations at step 742 responsive to a detected association, or maintained unchanged responsive to detecting no association with substitute stimuli, and (optionally) the AMS application has chosen a proper tracking methodology for correlating gaming action results with stimulations, the AMS application can proceed to step 770 of FIG. 7B.
Referring to FIG. 7B, at step 770, the AMS application can obtain an identification of an action to monitor during a gameplay associated with a game. The identification of the action may include a specification of a sound volume level associated with a user (e.g., a gamer). The identification of the action may include a specification of a number of user inputs exceeding a threshold. The number of user inputs may include a number of messages that are submitted, an identification of a content of the messages, an identification of an emoji, or a combination thereof. The identification of an action may include a gaming action provided by the game (see FIGS. 8-9 and accompanying descriptions).
At step 772, the AMS application can store a representation of a sliding window of the gameplay in a first storage medium (e.g., first storage medium 1272 of FIG. 12). The storage of step 772 may occur in real-time during the gameplay. The representation of the sliding window of the gameplay may include a video, an image, an audio track, or a combination thereof. The first storage medium may include a buffer of a graphics card, a random access memory, or a combination thereof.
The sliding window may be of a substantially fixed duration, such that the sliding window progresses as the user/gamer continues to play a game. For example, and briefly referring to FIG. 11, a sliding window 1100 (as a function of time t) is shown. As gameplay progresses, a new/supplemental representation of the gameplay may be added as shown via reference character/dashed portion 1102. In order to accommodate storage of the portion 1102, another portion 1104 may be deleted/overwritten. In the embodiment shown in FIG. 11, the portion 1104 to be deleted/overwritten corresponds to the oldest/earliest-in-time portion of the window 1100. In some embodiments, a portion other than, or in addition to, the oldest portion may be identified for being deleted/overwritten. Still further, in some embodiments the sliding window 1100 may be of a variable duration. For example, the duration/length of the sliding window may be a function of network traffic, a capability of a device (e.g., storage capacity), user/gamer inputs, etc.
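A minimal sketch of the fixed-duration sliding window of FIG. 11 is a bounded buffer in which appending a new portion automatically overwrites the oldest one; the frame granularity and capacity below are assumptions for illustration.

```python
from collections import deque

WINDOW_FRAMES = 30 * 60  # assumed capacity: 60 seconds of 30 fps gameplay

sliding_window = deque(maxlen=WINDOW_FRAMES)

def capture_frame(frame):
    """Append the newest frame; once the deque is full, it silently drops
    the earliest-in-time frame, matching portion 1104 being overwritten."""
    sliding_window.append(frame)

for i in range(WINDOW_FRAMES + 5):
    capture_frame(f"frame-{i}")
print(sliding_window[0])  # -> frame-5: the five oldest frames were dropped
```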
Referring back to FIG. 7B, at step 774, the AMS application can monitor for the identification of the action during the gameplay.
At step 776, the AMS application can detect the identification during the gameplay responsive to the monitoring. In some embodiments, whether an event has occurred or not, as reflected by the detection of step 776, may be based on a comparison of game action with one or more thresholds. Such thresholds may be specified by users/gamers (e.g., in accordance with user inputs/preferences), may be predetermined based on one or more rules/configurations associated with a game, etc.
At step 778, the AMS application can store at least a portion of the representation of the sliding window of the gameplay in a second storage medium (e.g., second storage medium 1278 of FIG. 12). The second storage medium may be the same as, or different from, the first storage medium. The second storage medium may include a server associated with a social media platform, a server associated with a virtual machine, a memory contained within a common housing with the first storage medium, a network element (e.g., a router, a gateway, a switch, etc.), or a combination thereof.
The storing of step 778 may include storing a video of a gamer, an image of the gamer (e.g., a thumbnail or icon representation of the gamer), an audio track of the gamer, or a combination thereof.
The storing of step 778 may include presenting a prompt (potentially responsive to the monitoring of step 774), placing a copy of the representation of the sliding window of the gameplay in a third storage medium (e.g., third storage medium 1280 of FIG. 12, which may be different from the first storage medium 1272 and/or the second storage medium 1278), receiving a user input in response to the prompt, and storing the copy in the second storage medium responsive to the user input.
The placement of the representation/copy of the sliding window of the gameplay in the third storage medium may free/alleviate the first storage medium, such that the first storage medium can continue capturing gameplay/action as the gameplay continues subsequent to the detection of step 776. Also, placement in the third storage medium may free the user/gamer from having to commit to placing the representation/copy of the sliding window of the gameplay into more permanent storage (e.g., the second storage medium). For example, placement in the third storage medium may facilitate editing or review operations on the representation/copy of the sliding window prior to uploading the same to the second storage medium.
In some embodiments, the placing of the copy of the representation of the sliding window of the gameplay in the third storage medium may include initiating a timer to store a second sliding window of the representation after detecting the action, thereby resulting in an updated representation of the sliding window of the gameplay. Responsive to detecting an expiration of the timer, the updated representation may be stored in the third storage medium. A length of the timer may be based on a user input.
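A sketch of the timer-based capture just described: on detection, the pre-event sliding window is snapshotted and recording continues until a user-configurable timer expires, after which the combined clip is handed off (e.g., to the third storage medium). All names below are illustrative assumptions.

```python
import threading

class PostEventRecorder:
    """Keeps recording for `post_seconds` after an action is detected,
    then hands the pre-event plus post-event frames to `store_clip`."""

    def __init__(self, pre_event_window, post_seconds, store_clip):
        self._frames = list(pre_event_window)  # snapshot of the sliding window
        self._store_clip = store_clip
        self._done = False
        self._timer = threading.Timer(post_seconds, self._finish)
        self._timer.start()

    def add_frame(self, frame):
        if not self._done:
            self._frames.append(frame)

    def _finish(self):
        # Timer expired: persist the updated representation of the window.
        self._done = True
        self._store_clip(self._frames)

recorder = PostEventRecorder(["f1", "f2"], post_seconds=0.1, store_clip=print)
recorder.add_frame("f3")  # prints ['f1', 'f2', 'f3'] when the timer fires
```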
In some embodiments, the storing of step 778 may include storing a new representation of the sliding window of the gameplay in the first storage medium during the gameplay after placing the copy in the third storage medium; in some embodiments, the storage of the new representation may coincide with a step that is separate from step 778.
At step 780, the AMS application may present (e.g., simultaneously present) the representation of the sliding window and/or the video, image, and/or audio track of the gamer, or a combination thereof. In some embodiments, a user/gamer may generate media that may be shared on one or more platforms (e.g., social media platforms) as a game is occurring, where the media may include the representation of the sliding window and/or the video, image, and/or audio track of the gamer, or a combination thereof. Alternatively, the user/gamer may generate the media following the conclusion of the game in order to avoid distractions during the game.
One or more of the steps shown in conjunction with FIG. 7B may be executed more than once. For example, subsequent to storing the representation of the sliding window of the gameplay in the second storage medium as part of step 778, a second representation of the sliding window of the gameplay may be stored in the first storage medium (as part of a second execution of step 772). The storing of the second representation of the sliding window of the gameplay may overwrite at least a portion of the representation of the sliding window of the gameplay in the first storage medium as described above.
FIG. 7C illustrates another embodiment of a method that may be executed in conjunction with the flow shown in FIG. 7A. As shown in FIG. 7C, in step 770′ the AMS application can obtain an identification of an action to monitor during a gameplay associated with a game. The identification of the action may include a specification of a number of actions per unit time.
In step 772′, the AMS application can store a representation of a portion of the gameplay in a storage medium.
In step 774′, the AMS application can monitor the gameplay for the identification of the action.
In step 776′, the AMS application can apply a tag to the representation of the portion of the gameplay in the storage medium responsive to the monitoring.
The representation of the portion of the gameplay may include a first video clip that occurs prior to an occurrence of the action and a second video clip that occurs subsequent to the action. A first time duration of the first video clip, a first resolution of the first video clip, a second time duration of the second video clip, and a second resolution of the second video clip may be based on one or more user preferences, network traffic, a device capability, etc.
The representation of the portion of the gameplay may include a video clip. The tag may include a watermark that is applied to the video clip. The watermark may include the identification of the action. The tag may include metadata that is associated with the video clip. The metadata may be searchable via a search engine. The metadata may include a selectable link that, when selected, causes a client device to obtain the video clip.
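A hedged sketch of such a tag as a small data structure follows; the field names, watermark text, and link scheme are hypothetical and shown only to make the metadata concrete.

```python
from dataclasses import dataclass

@dataclass
class ClipTag:
    action: str          # identification of the action, e.g., "Headshot"
    watermark_text: str  # text overlaid on the video clip
    link: str            # selectable link letting a client fetch the clip

tag = ClipTag(action="Headshot",
              watermark_text="Headshot by Gamer123",
              link="https://example.com/clips/abc123")  # hypothetical URL
print(tag.link)
```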
FIG. 7D illustrates another embodiment of a method that may be executed in conjunction with the flow shown in FIG. 7A. As shown in FIG. 7D, in step 770″ the AMS application can monitor for an identification of an action during a gameplay.
In step 772″, the AMS application can detect the identification of the action during the gameplay responsive to the monitoring.
In step 774″, the AMS application can present a prompt responsive to the detecting.
In step 776″, the AMS application can store a representation of a portion of the gameplay, a representation of a gamer controlling the gameplay, or a combination thereof, in a storage medium according to a user-generated input associated with the prompt.
In some embodiments, machine-learning/artificial intelligence may be applied to identify portions of a gameplay that are memorable or are of interest to a user (e.g., a gamer). For example, responsive to the user-generated input associated with the prompt in step 776″, the AMS application can monitor for a second identification of the action (or an alternative action) in step 778″.
In step 780″, the AMS application can detect the second identification of the action (or the alternative action) during the gameplay responsive to the monitoring for the second identification.
In step 782″, the AMS application can store a second representation of a second portion of the gameplay, a second representation of the gamer, or a combination thereof, in the storage medium without presenting a second prompt.
Once the AMS application at step 748 supplies to the OS of the computing device 206 a gaming action (i.e., one or more stimulations) from the method of FIG. 7B, the method of FIG. 7C, the method of FIG. 7D, or a combination thereof, the AMS application can proceed to step 734. The gaming action supplied to the OS at step 748 can be the unadulterated “original” gaming action of step 720, or an alternative gaming action generated by steps 744 or 746. At step 734, the OS determines whether to invoke in step 736 a software application identified in the stimulation(s) (e.g., gamer says “turn on team chat”, which invokes a chat application), whether to forward the received stimulation(s) to the gaming software application in step 738, or combinations thereof.
Contemporaneous to the embodiments described above, the AMS application can monitor in step 750 for game action results supplied by the gaming application via the API messages previously described. For instance, suppose the stimulation sent to the gaming application in step 738 is a command to shoot a pistol. The gaming application can determine that the shot fired resulted in a miss of a target or a hit. The gaming application can respond with a message which is submitted by way of the API to the AMS application that indicates the shot fired resulted in a miss or a hit. If IDs such as GUIDs were sent with each stimulation, the gaming application can submit game action results with their corresponding GUID to enable the AMS application to correlate the gaming action results with stimulations having the same GUID.
For example, if the command to shoot included the ID “1234”, then the game action result indicating a miss will include the ID “1234”, enabling the AMS application in step 752 to correlate the game action result to the stimulation having the same ID. If, on the other hand, the order of game action results can be maintained consistent with the order of the stimulations, then the AMS application can correlate in step 754 stimulations with game action results by the order in which stimulations were submitted and the order in which game action results are received. In step 756, the AMS application can catalogue stimulations and game action results. In another embodiment, the AMS application can be adapted to catalogue the stimulations in step 760. In this embodiment, step 760 can be performed as an alternative to steps 750 through 756. In another embodiment, step 760 can be performed in combination with steps 750 through 756 in order to generate a catalogue of stimulations, and a catalogue of gaming action results correlated to the stimulations.
FIG. 7E illustrates an interface that may be used to present at least a portion of a gameplay associated with a game. Various controls/commands, such as, for example, VCR-style controls/commands, may be presented as a part of the interface to facilitate a recording or capture of one or more portions of the gameplay.
FIG. 7F illustrates an interface that may provide control over a recording or sharing of one or more representations (e.g., clips) of a gameplay associated with a game. Various controls, such as for example a “share” button or the like, may be provided to enable a user (e.g., a gamer) to post or otherwise share the representation(s). In some embodiments, editing controls may be provided to allow the user to customize the representation prior to, or subsequent to, sharing the representation.
In some embodiments, a user/gamer may have an ability to supplement the representation of the gameplay with commentary that describes, for example, what the user's thought process was during the captured/represented portion of the gameplay. In this respect, and assuming that the user/gamer is viewed or otherwise characterized as an expert in the game, a sharing of the representation of the gameplay may serve as a tutorial for novice users.
FIG. 7G illustrates an interface that may present a tag 702g (e.g., a watermark and/or metadata) associated with a representation of a gameplay. The tag 702g may include data acquired/obtained during the gameplay, such as for example a statement or other indication of results obtained by the gamer during the gameplay. Such a statement or other indication may be received via, e.g., a microphone, a keyboard, a mobile device, a computing/gaming console, etc.
The methods described herein (e.g., the methods described above in conjunction with FIGS. 7A-7D) may incorporate additional aspects. For example, in some embodiments a clip may be generated based on a user-defined keybind (on a keyboard, mouse, or controller). Keybinds to trigger the clipping of a buffer to save to a local file system may be customized (e.g., may be based on user preferences). The gamer will be able to choose: the actual key to bind to the action, and the time slice to save (N seconds before and N′ seconds after).
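The user-defined keybind and time slice could be captured in a small configuration structure like the following; the field names and default values are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class ClipKeybind:
    key: str                      # the bound key on keyboard, mouse, or controller
    seconds_before: float = 30.0  # N: how much of the buffer to save
    seconds_after: float = 10.0   # N': how long to keep recording afterward

bind = ClipKeybind(key="F9", seconds_before=45.0, seconds_after=15.0)
print(bind)
```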
In some embodiments, clips may be auto-generated based on some event, such as for example a detected event, an audible input (e.g., screaming), messages associated with a chat client, etc. In some embodiments, default settings may be provided, and those settings may be at least partially overridden/replaced based on affirmative user inputs and/or based on artificial intelligence/machine-learned user preferences.
In some embodiments, one or more filtering techniques may be applied to remove content from a representation of a gameplay that is not of interest. Such filtering may be based on one or more user inputs/preferences, may be learned over time via machine learning/artificial intelligence, etc. If multiple events/actions that are being monitored for happen within a threshold amount of time (which may coincide with a buffer time), an event/action endpoint may be extended to create one long time slice/representation of the gameplay. Alternatively, separate representations may be generated in some embodiments.
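The endpoint-extension behavior described above can be illustrated with a small interval-merging routine; the threshold semantics and the numbers in the example are assumptions for the sketch.

```python
def merge_events(event_times, threshold, pre, post):
    """Turn event timestamps into (start, end) clip intervals, extending an
    interval's endpoint when the next event falls within `threshold` seconds."""
    intervals = []
    for t in sorted(event_times):
        start, end = t - pre, t + post
        if intervals and start - intervals[-1][1] <= threshold:
            intervals[-1] = (intervals[-1][0], end)  # one long time slice
        else:
            intervals.append((start, end))           # separate representation
    return intervals

print(merge_events([100.0, 104.0, 300.0], threshold=5.0, pre=10.0, post=5.0))
# -> [(90.0, 109.0), (290.0, 305.0)]
```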
In some embodiments, tagging (e.g., watermarking) may be overlaid on a representation (e.g., a video) of a gameplay. A watermark may have a given level of transparency associated with it to avoid obscuring/blocking the representation of the gameplay. One or more logos may be applied as part of the tagging. In some embodiments, a watermark may pulsate or otherwise fade in and out. In this respect, dynamic watermarks may be used. The use of a dynamic watermark may serve to draw additional/incremental attention to the watermark, which may be useful for promotional/marketing/branding purposes.
Aspects of sharing the representation of the gameplay may be controlled via one or more control parameters. Such control parameters may condition the sharing on a size of the representation (e.g., a video length), the content of the representation (e.g., controls may be present to limit a dissemination of the representation in view of intellectual property rights or other rights), etc. In some embodiments, a sharing of the representation of the gameplay may be limited to users that the gamer (or other entity) authorizes. For example, the sharing may be based on identifying a contact (e.g., a friend) of the gamer in one or more applications (e.g., a phone application, an email application, a text message application, a social media application, etc.).
FIGS. 8-9 illustrate embodiments of a system with a corresponding communication flow diagram for correlating stimulations and gaming action results. In this illustration a user clicks the left button 119 of the gaming controller 115. The gaming controller 115 can include firmware (or circuitry), which creates an event as depicted by event 2 in FIG. 8. The button depression and the event creation are depicted in FIG. 9 as steps 902 and 904. In step 904, the firmware of the gaming controller 115 can, for example, generate an event type “left button #3”, and a unique GUID with a time stamp which is submitted to the AMS application. Referring back to FIG. 8, the AMS application catalogues event 3, and if a substitute stimulation has been predefined, remaps the event according to the substitution. The remapped event is then transmitted to the gaming application at event 4. Event 3 of FIG. 8 is depicted as step 906 in FIG. 9. In this illustration, the AMS application substitutes the left button #3 depression stimulus with a “keyboard ‘F’” depression which can be interpreted by the gaming application as a fire command. The AMS application in this illustration continues to use the same GUID, but substitutes the time stamp for another time stamp to identify when the substitution took place.
Referring back to event 4, the gaming application processes the event and sends back at event 5 a game action result to the AMS application, which is processed by the AMS application at event 6. The AMS application then submits the results to the accessory at event 7. Events 4 and 5 are depicted as step 908 in FIG. 9. In this step, the gaming application processes “F” as an action to fire the gamer's gun, and then determines from the action the result from logistical gaming results generated by the gaming application. In the present illustration, the action of firing resulted in a hit. The gaming application submits to the AMS application the result type “Hit” with a new time stamp, while utilizing the same GUID for tracking purposes. At step 910, the AMS application correlates the stimulation “left button #3” (and/or the substitute stimulation keyboard “F”) to the game result “Hit” and catalogues them in memory. The AMS application then submits to the accessory (e.g., gaming controller 115) in step 910 the game action result “Hit” with the same GUID, and a new time stamp indicating when the result was received. Upon receiving the message from the AMS application, the accessory in step 912 processes the “Hit” by asserting a red LED on the accessory (e.g., left button 119 illuminates in red or another LED of the gaming controller 115 illuminates in red) to indicate a hit. Other notification notices can be used, such as another color for the LED to indicate misses, a specific sound for a hit or kill, a vibration, or another suitable technique for notifying the gamer of the game action result.
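The accessory-side notification at step 912 might be sketched as follows; the function name, result format, and the color chosen for misses are assumptions, not details from the disclosure.

```python
def notify_accessory(result, set_led):
    """Assert a red LED for a hit; use a different color for a miss,
    as the text suggests for other notification notices."""
    set_led("red" if result.get("type") == "Hit" else "blue")  # "blue" assumed

notify_accessory({"type": "Hit", "guid": "1234"}, set_led=print)  # prints: red
```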
Upon reviewing the aforementioned embodiments, it would be evident to an artisan with ordinary skill in the art that the embodiments of the subject disclosure can be modified, reduced, or enhanced without departing from the scope of the claims described below. For example, the AMS application can be executed from an accessory 115 or computing device 206 to perform the embodiments described in the subject disclosure. The AMS application can also be operated from a remote server (“cloud services”). In yet another embodiment, functions of the AMS application can be distributed between devices. In yet another embodiment, the AMS application can be configured to track the performance of a gamer and adapt a threshold as the gamer improves or declines in performance.
For instance, as a gamer's performance improves with a particular gaming action, the threshold associated with the gaming action can be adapted to be less sensitive in detecting an over usage state. Similarly, the sensitivity of the threshold can be increased to promptly identify an over usage state of a gaming action if the gamer's performance declines as a result of an over usage of the gaming action. Additionally, the AMS application can be adapted to add a gaming action to an exclusion table when the gamer's performance substantially improves as a result of using that gaming action. The AMS application can likewise remove a gaming action from the exclusion table responsive to its excessive use causing a decline in the gamer's performance.
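The adaptive threshold and exclusion table described above might be sketched as follows; the adjustment factors and data structures are illustrative assumptions, not prescribed by the subject disclosure.

```python
# Hedged sketch of threshold adaptation and the exclusion table; the
# scaling factors (1.1, 0.9) and the 0.5 cutoff are arbitrary choices.
class UsageMonitor:
    def __init__(self, threshold: int = 20):
        self.threshold = threshold   # action count per interval deemed "over usage"
        self.exclusions = set()      # gaming actions exempt from monitoring

    def adapt(self, action: str, performance_delta: float) -> None:
        """Relax the threshold as performance improves; tighten it, and
        resume monitoring an excluded action, when performance declines."""
        if performance_delta > 0:
            self.threshold = int(self.threshold * 1.1) + 1      # less sensitive
            if performance_delta > 0.5:                         # substantial improvement
                self.exclusions.add(action)                     # stop flagging this action
        else:
            self.threshold = max(1, int(self.threshold * 0.9))  # more sensitive
            self.exclusions.discard(action)                     # monitor it again
```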
Other embodiments can be applied to the subject disclosure.
It should be understood that devices described in the exemplary embodiments can be in communication with each other via various wireless and/or wired methodologies. These methodologies can include links described as coupled, connected, and so forth, which can provide unidirectional and/or bidirectional communication over wireless and/or wired paths utilizing one or more of various protocols, where the coupling and/or connection can be direct (e.g., no intervening processing device) and/or indirect (e.g., via an intermediary processing device such as a router).
FIG. 10 depicts an exemplary diagrammatic representation of a machine in the form of a computer system 1000 within which a set of instructions, when executed, may cause the machine to perform any one or more of the methods described above. One or more instances of the machine can operate, for example, as an accessory, computing device or combinations thereof. In some embodiments, the machine may be connected (e.g., using a network 1026) to other machines. In a networked deployment, the machine may operate in the capacity of a server or a client user machine in a server-client user network environment, or as a peer machine in a peer-to-peer (or distributed) network environment.
The machine may comprise a server computer, a client user computer, a personal computer (PC), a tablet, a smart phone, a laptop computer, a desktop computer, a control system, a network router, switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. It will be understood that a communication device of the subject disclosure includes broadly any electronic device that provides voice, video or data communication. Further, while a single machine is illustrated, the term “machine” shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methods discussed herein.
The computer system 1000 may include a processor (or controller) 1002 (e.g., a central processing unit (CPU), a graphics processing unit (GPU), or both), a main memory 1004 and a static memory 1006, which communicate with each other via a bus 1008. The computer system 1000 may further include a display unit 1010 (e.g., a liquid crystal display (LCD), a flat panel, or a solid state display). The computer system 1000 may include an input device 1012 (e.g., a keyboard), a cursor control device 1014 (e.g., a mouse), a disk drive unit 1016, a signal generation device 1018 (e.g., a speaker or remote control) and a network interface device 1020. In distributed environments, the embodiments described in the subject disclosure can be adapted to utilize multiple display units 1010 controlled by two or more computer systems 1000. In this configuration, presentations described by the subject disclosure may in part be shown in a first of the display units 1010, while the remaining portion is presented in a second of the display units 1010.
The disk drive unit 1016 may include a tangible computer-readable storage medium 1022 on which is stored one or more sets of instructions (e.g., software 1024) embodying any one or more of the methods or functions described herein, including those methods illustrated above. The instructions 1024 may also reside, completely or at least partially, within the main memory 1004, the static memory 1006, and/or within the processor 1002 during execution thereof by the computer system 1000. The main memory 1004 and the processor 1002 also may constitute tangible computer-readable storage media.
Dedicated hardware implementations including, but not limited to, application specific integrated circuits, programmable logic arrays and other hardware devices can likewise be constructed to implement the methods described herein. Application specific integrated circuits and programmable logic arrays can use downloadable instructions for executing state machines and/or circuit configurations to implement embodiments of the subject disclosure. Applications that may include the apparatus and systems of various embodiments broadly include a variety of electronic and computer systems. Some embodiments implement functions in two or more specific interconnected hardware modules or devices with related control and data signals communicated between and through the modules, or as portions of an application-specific integrated circuit. Thus, the example system is applicable to software, firmware, and hardware implementations.
In accordance with various embodiments of the subject disclosure, the operations or methods described herein are intended for operation as software programs or instructions running on or executed by a computer processor or other computing device, and which may include other forms of instructions manifested as a state machine implemented with logic components in an application specific integrated circuit or field programmable gate array. Furthermore, software implementations (e.g., software programs, instructions, etc.) including, but not limited to, distributed processing or component/object distributed processing, parallel processing, or virtual machine processing can also be constructed to implement the methods described herein. It is further noted that a computing device such as a processor, a controller, a state machine or other suitable device for executing instructions to perform operations or methods may perform such operations directly or indirectly by way of one or more intermediate devices directed by the computing device.
While the tangible computer-readable storage medium 1022 is shown in an example embodiment to be a single medium, the term “tangible computer-readable storage medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions. The term “tangible computer-readable storage medium” shall also be taken to include any non-transitory medium that is capable of storing or encoding a set of instructions for execution by the machine and that cause the machine to perform any one or more of the methods of the subject disclosure. The term “non-transitory” as in a non-transitory computer-readable storage includes without limitation memories, drives, devices and anything tangible but not a signal per se.
The term “tangible computer-readable storage medium” shall accordingly be taken to include, but not be limited to: solid-state memories, such as a memory card or other package that houses one or more read-only (non-volatile) memories, random access memories, or other re-writable (volatile) memories; a magneto-optical or optical medium, such as a disk or tape; or other tangible media which can be used to store information. Accordingly, the disclosure is considered to include any one or more of a tangible computer-readable storage medium, as listed herein and including art-recognized equivalents and successor media, in which the software implementations herein are stored.
Although the present specification describes components and functions implemented in the embodiments with reference to particular standards and protocols, the disclosure is not limited to such standards and protocols. Each of the standards for Internet and other packet-switched network transmission (e.g., TCP/IP, UDP/IP, HTML, HTTP) represents an example of the state of the art. Such standards are from time to time superseded by faster or more efficient equivalents having essentially the same functions. Wireless standards for device detection (e.g., RFID), short-range communications (e.g., Bluetooth®, WiFi, Zigbee®), and long-range communications (e.g., WiMAX, GSM, CDMA, LTE) can be used by the computer system 1000.
The illustrations of embodiments described herein are intended to provide a general understanding of the structure of various embodiments, and they are not intended to serve as a complete description of all the elements and features of apparatus and systems that might make use of the structures described herein. Many other embodiments will be apparent to those of skill in the art upon reviewing the above description. The exemplary embodiments can include combinations of features and/or steps from multiple embodiments. Other embodiments may be utilized and derived therefrom, such that structural and logical substitutions and changes may be made without departing from the scope of this disclosure. Figures are also merely representational and may not be drawn to scale. Certain proportions thereof may be exaggerated, while others may be minimized. Accordingly, the specification and drawings are to be regarded in an illustrative rather than a restrictive sense.
Although specific embodiments have been illustrated and described herein, it should be appreciated that any arrangement calculated to achieve the same purpose may be substituted for the specific embodiments shown. This disclosure is intended to cover any and all adaptations or variations of various embodiments. Combinations of the above embodiments, and other embodiments not specifically described herein, can be used in the subject disclosure. In one or more embodiments, features that are positively recited can also be excluded from the embodiment with or without replacement by another component or step. The steps or functions described with respect to the exemplary processes or methods can be performed in any order. The steps or functions described with respect to the exemplary processes or methods can be performed alone or in combination with other steps or functions (from other embodiments or from other steps that have not been described).
Less than all of the steps or functions described with respect to the exemplary processes or methods can also be performed in one or more of the exemplary embodiments. Further, the use of numerical terms to describe a device, component, step or function, such as first, second, third, and so forth, is not intended to describe an order or function unless expressly stated so. The use of the terms first, second, third and so forth, is generally to distinguish between devices, components, steps or functions unless expressly stated otherwise. Additionally, one or more devices or components described with respect to the exemplary embodiments can facilitate one or more functions, where the facilitating (e.g., facilitating access or facilitating establishing a connection) can include less than every step needed to perform the function or can include all of the steps needed to perform the function.
In one or more embodiments, a processor (which can include a controller or circuit) has been described that performs various functions. It should be understood that the processor can be multiple processors, which can include distributed processors or parallel processors in a single machine or multiple machines. The processor can be used in supporting a virtual processing environment. The virtual processing environment may support one or more virtual machines representing computers, servers, or other computing devices. In such virtual machines, components such as microprocessors and storage devices may be virtualized or logically represented. The processor can include a state machine, an application specific integrated circuit, and/or a programmable gate array (PGA), including a field-programmable gate array (FPGA). In one or more embodiments, when a processor executes instructions to perform “operations”, this can include the processor performing the operations directly and/or facilitating, directing, or cooperating with another device or component to perform the operations.
The Abstract of the Disclosure is provided with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. In addition, in the foregoing Detailed Description, it can be seen that various features are grouped together in a single embodiment for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed embodiments require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separately claimed subject matter.
Claims
- A method, comprising: obtaining, by a system comprising a processor, an identification of an action to monitor during a gameplay associated with a game, wherein the identification of the action includes a specification of a number of user inputs exceeding a threshold, and wherein the specification of the number of user inputs includes a number of instant messages and a number of email messages that are submitted; storing, by the system, a representation of a sliding window of the gameplay in a first storage medium; monitoring, by the system, for the identification of the action during the gameplay; detecting, by the system, the identification of the action during the gameplay responsive to the monitoring; and responsive to the detecting, storing, by the system, the representation of the sliding window of the gameplay in a second storage medium that is different from the first storage medium.
- The method of claim 1, wherein the representation of the sliding window of the gameplay includes a video, an image, an audio track, or a combination thereof.
- The method of claim 1, wherein the first storage medium includes a buffer of a graphics card, a random access memory, or a combination thereof.
- The method of claim 1, wherein the second storage medium includes a server associated with a social media platform, a server associated with a virtual machine, a memory contained within a common housing as the first storage medium, a network element, or a combination thereof.
- The method of claim 1, further comprising: subsequent to the storing of the representation of the sliding window of the gameplay in the second storage medium, storing, by the system, a second representation of the sliding window of the gameplay in the first storage medium, wherein the storing of the second representation of the sliding window of the gameplay overwrites the representation of the sliding window of the gameplay in the first storage medium.
- The method of claim 1, wherein the storing, by the system, of the representation of the sliding window of the gameplay in the first storage medium is done in real time during the gameplay.
- The method of claim 1, wherein the system comprises a camera, and wherein the storing of the representation of the sliding window of the gameplay in the second storage medium further comprises: recording, by the system, a video of a gamer, an image of the gamer, an audio track of the gamer, or a combination thereof; and storing, by the system, the video of the gamer, the image of the gamer, the audio track of the gamer, or the combination thereof in the second storage medium.
- The method of claim 7, further comprising: simultaneously presenting, by the system, the representation of the sliding window of the gameplay and the video of the gamer, the image of the gamer, the audio track of the gamer, or the combination thereof.
- The method of claim 1, wherein the storing of the representation of the sliding window of the gameplay in the second storage medium further comprises: presenting, by the system, a prompt responsive to the monitoring; placing, by the system, a copy of the representation of the sliding window of the gameplay in a third storage medium; receiving, by the system, a user input in response to the prompt; and storing the copy in the second storage medium responsive to the user input.
- The method of claim 9, wherein the placing of the copy of the representation of the sliding window of the gameplay in the third storage medium further comprises: initiating a timer to store a second sliding window of the representation after detecting the action, thereby resulting in an updated representation of the sliding window of the gameplay; and responsive to detecting an expiration of the timer, storing the updated representation in the third storage medium.
- The method of claim 9, further comprising: storing a new representation of the sliding window of the gameplay in the first storage medium during the gameplay after placing the copy in the third storage medium.
- A device, comprising: a memory that stores instructions; and a processor coupled to the memory, wherein responsive to executing the instructions, the processor facilitates a performance of operations, the operations comprising: obtaining an identification of an action to monitor during a gameplay, wherein the identification of the action includes a specification of a number of user inputs exceeding a threshold, and wherein the specification of the number of user inputs includes a number of instant messages and a number of email messages that are submitted; storing a representation of a sliding window of the gameplay in a first storage medium; detecting the identification of the action during the gameplay; and responsive to the detecting, storing the representation of the sliding window of the gameplay in a second storage medium that is different from the first storage medium.
- The device of claim 12, wherein the representation of the sliding window includes a first video clip that occurs prior to an occurrence of the action and a second video clip that occurs subsequent to the action, and wherein a first time duration of the first video clip, a first resolution of the first video clip, a second time duration of the second video clip, and a second resolution of the second video clip are based on one or more user preferences.
- The device of claim 12, wherein the identification of the action includes a specification of a sound volume level associated with a gamer.
- The device of claim 12, wherein the specification of the number of user inputs includes an identification of a content of the instant messages, the email messages, or the combination thereof.
- The device of claim 15, wherein the identification of the content includes an identification of particular statements.
- The device of claim 15, wherein the identification of the content includes an identification of particular punctuation, the particular punctuation including an exclamation point.
- The device of claim 12, wherein the specification of the number of user inputs includes an identification of an emoji.
- A non-transitory machine-readable storage medium comprising instructions, wherein responsive to executing the instructions, a processing system comprising a processor performs operations, the operations comprising: storing a representation of a sliding window of a gameplay associated with a game in a first storage medium; detecting an identification of an action during the gameplay based on a monitoring of the gameplay, wherein the identification of the action includes a specification of a number of user inputs exceeding a threshold, and wherein the specification of the number of user inputs includes a number of instant messages and a number of email messages that are submitted; and responsive to the detecting, storing the representation of the sliding window of the gameplay in a second storage medium that is different from the first storage medium.
- The non-transitory machine-readable storage medium of claim 19, wherein the operations further comprise: responsive to the detecting, storing a representation of a gamer controlling the gameplay in the second storage medium.
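By way of illustration only, the sliding-window capture recited in claim 1 might be sketched as follows, assuming an in-memory ring buffer as the first storage medium and a file write standing in for the second; the class, method names, and parameters are hypothetical.

```python
# Minimal sketch of the sliding-window capture of claim 1, under the
# stated assumptions; frame counts, file path, and names are illustrative.
from collections import deque

class SlidingWindowRecorder:
    def __init__(self, window_frames: int = 300):
        # First storage medium: a fixed-size buffer (e.g., RAM or a
        # graphics-card buffer) that silently overwrites the oldest frames.
        self.window = deque(maxlen=window_frames)

    def on_frame(self, frame: bytes, action_detected: bool) -> None:
        """Store the sliding window in real time; persist it when the
        monitored action is detected."""
        self.window.append(frame)
        if action_detected:
            self.persist()

    def persist(self, path: str = "clip.bin") -> None:
        # Second storage medium: durable storage distinct from the buffer;
        # after this, the buffer keeps filling and overwriting as before.
        with open(path, "wb") as f:
            for frame in self.window:
                f.write(frame)
```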