U.S. Pat. No. 8,194,862
VIDEO GAME SYSTEM WITH MIXING OF INDEPENDENT PRE-ENCODED DIGITAL AUDIO BITSTREAMS
Assignee: ActiveVideo Networks, Inc.
Issue Date: July 31, 2009
Abstract
A computer-implemented method of encoding audio includes accessing a plurality of independent audio source streams, each of which includes a sequence of source frames. Respective source frames of each sequence include respective pluralities of pulse-code modulated audio samples. Each of the plurality of independent audio source streams is separately encoded to generate a plurality of independent encoded streams, each of which corresponds to a respective independent audio source stream. The encoding includes, for respective source frames, converting respective pluralities of pulse-code modulated audio samples to respective pluralities of floating-point frequency samples that are divided into a plurality of frequency bands. An instruction to mix the plurality of independent encoded streams is received; in response, respective floating-point frequency samples of the independent encoded streams are combined. An output bitstream is generated that includes the combined respective floating-point frequency samples.
Description
Like reference numerals refer to corresponding parts throughout the drawings.
DETAILED DESCRIPTION OF EMBODIMENTS
Reference will now be made in detail to embodiments, examples of which are illustrated in the accompanying drawings. In the following detailed description, numerous specific details are set forth in order to provide a thorough understanding of the present invention. However, it will be apparent to one of ordinary skill in the art that the present invention may be practiced without these specific details. In other instances, well-known methods, procedures, components, and circuits have not been described in detail so as not to unnecessarily obscure aspects of the embodiments.
FIG. 1 is a block diagram illustrating an embodiment of a cable television system 100 for receiving orders for and providing content, such as one or more video games, to one or more users (including multi-user video games). Several content data streams may be transmitted to respective subscribers and respective subscribers may, in turn, order services or transmit user actions in a video game. Satellite signals, such as analog television signals, may be received using satellite antennas 144. Analog signals may be processed in analog headend 146, coupled to radio frequency (RF) combiner 134 and transmitted to a set-top box (STB) 140 via a network 136. In addition, signals may be processed in satellite receiver 148, coupled to multiplexer (MUX) 150, converted to a digital format using a quadrature amplitude modulator (QAM) 132-2 (such as 256-level QAM), coupled to the radio frequency (RF) combiner 134 and transmitted to the STB 140 via the network 136. Video on demand (VOD) server 118 may provide signals corresponding to an ordered movie to switch 126-2, which couples the signals to QAM 132-1 for conversion into the digital format. These digital signals are coupled to the radio frequency (RF) combiner 134 and transmitted to the STB 140 via the network 136.
The STB 140 may display one or more video signals, including those corresponding to video-game content discussed below, on television or other display device 138 and may play one or more audio signals, including those corresponding to video-game content discussed below, on speakers 139. Speakers 139 may be integrated into television 138 or may be separate from television 138. While FIG. 1 illustrates one subscriber STB 140, television or other display device 138, and speakers 139, in other embodiments there may be additional subscribers, each having one or more STBs, televisions or other display devices, and/or speakers.
The cable television system 100 may also include an application server 114 and a plurality of game servers 116. The application server 114 and the plurality of game servers 116 may be located at a cable television system headend. While a single instance or grouping of the application server 114 and the plurality of game servers 116 is illustrated in FIG. 1, other embodiments may include additional instances in one or more headends. The servers and/or other computers at the one or more headends may run an operating system such as Windows, Linux, Unix, or Solaris.
The application server 114 and one or more of the game servers 116 may provide video-game content corresponding to one or more video games ordered by one or more users. In the cable television system 100 there may be a many-to-one correspondence between respective users and an executed copy of one of the video games. The application server 114 may access and/or log game-related information in a database. The application server 114 may also be used for reporting and pricing. One or more game engines (also called game engine modules) 248 (FIG. 2) in the game servers 116 are designed to dynamically generate video-game content using pre-encoded video and/or audio data. In an exemplary embodiment, the game servers 116 use video encoding that is compatible with an MPEG compression standard and use audio encoding that is compatible with the MPEG-1 Layer II compression standard.
The video-game content is coupled to the switch 126-2 and converted to the digital format in the QAM 132-1. In an exemplary embodiment with 256-level QAM, a narrowcast sub-channel (having a bandwidth of approximately 6 MHz, which corresponds to approximately 38 Mbps of digital data) may be used to transmit 10 to 30 video-game data streams for a video game that utilizes between 1 and 4 Mbps.
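The sub-channel arithmetic above can be sketched as a short calculation. This is a hedged illustration of the quoted figures, not part of the patent; the constant and function names are ours.

```python
# Payload of one ~6 MHz, 256-level QAM narrowcast sub-channel, per the text.
SUBCHANNEL_MBPS = 38.0

def max_streams(per_stream_mbps: float) -> int:
    """Whole video-game data streams that fit in one sub-channel."""
    return int(SUBCHANNEL_MBPS // per_stream_mbps)

# A 1 Mbps game fits ~38 streams and a 4 Mbps game fits ~9, bracketing the
# "10 to 30" range quoted above for intermediate bit rates.
```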
These digital signals are coupled to the radio frequency (RF) combiner 134 and transmitted to STB 140 via the network 136. The application server 114 may also access, via Internet 110, persistent player or user data in a database stored in multi-player server 112. The application server 114 and the plurality of game servers 116 are further described below with reference to FIG. 2.
The STB 140 may optionally include a client application, such as games 142, that receives information corresponding to one or more user actions and transmits the information to one or more of the game servers 116. The game applications 142 may also store video-game content prior to updating a frame of video on the television 138 and playing an accompanying frame of audio on the speakers 139. The television 138 may be compatible with an NTSC format or a different format, such as PAL or SECAM. The STB 140 is described further below with reference to FIG. 3.
The cable television system 100 may also include STB control 120, operations support system 122 and billing system 124. The STB control 120 may process one or more user actions, such as those associated with a respective video game, that are received using an out-of-band (OOB) sub-channel using return pulse amplitude (PAM) demodulator 130 and switch 126-1. There may be more than one OOB sub-channel. While the bandwidth of the OOB sub-channel(s) may vary from one embodiment to another, in one embodiment, the bandwidth of each OOB sub-channel corresponds to a bit rate or data rate of approximately 1 Mbps. The operations support system 122 may process a subscriber's order for a respective service, such as the respective video game, and update the billing system 124. The STB control 120, the operations support system 122 and/or the billing system 124 may also communicate with the subscriber using the OOB sub-channel via the switch 126-1 and the OOB module 128, which converts signals to a format suitable for the OOB sub-channel. Alternatively, the operations support system 122 and/or the billing system 124 may communicate with the subscriber via another communications link such as an Internet connection or a communications link provided by a telephone system.
The various signals transmitted and received in the cable television system 100 may be communicated using packet-based data streams. In an exemplary embodiment, some of the packets may utilize an Internet protocol, such as User Datagram Protocol (UDP). In some embodiments, networks, such as the network 136, and coupling between components in the cable television system 100 may include one or more instances of a wireless area network, a local area network, a transmission line (such as a coaxial cable), a land line and/or an optical fiber. Some signals may be communicated using plain-old-telephone service (POTS) and/or digital telephone networks such as an Integrated Services Digital Network (ISDN). Wireless communication may include cellular telephone networks using an Advanced Mobile Phone System (AMPS), Global System for Mobile Communication (GSM), Code Division Multiple Access (CDMA) and/or Time Division Multiple Access (TDMA), as well as networks using an IEEE 802.11 communications protocol, also known as WiFi, and/or a Bluetooth communications protocol.
While FIG. 1 illustrates a cable television system, the system and methods described may be implemented in a satellite-based system, the Internet, a telephone system and/or a terrestrial television broadcast system. The cable television system 100 may include additional elements and/or omit one or more elements. In addition, two or more elements may be combined into a single element and/or a position of one or more elements in the cable television system 100 may be changed. In some embodiments, for example, the application server 114 and its functions may be merged into the game servers 116.
FIG. 2 is a block diagram illustrating an embodiment of a video-game system 200. The video-game system 200 may include one or more data processors, video processors, and/or central processing units (CPUs) 210, one or more optional user interfaces 214, a communications or network interface 220 for communicating with other computers, servers and/or one or more STBs (such as the STB 140 in FIG. 1), memory 222 and one or more signal lines 212 for coupling these components to one another. The one or more data processors, video processors, and/or central processing units (CPUs) 210 may be configured or configurable for multi-threaded or parallel processing. The user interface 214 may have one or more keyboards 216 and/or displays 218. The one or more signal lines 212 may constitute one or more communications busses.
Memory 222 may include high-speed random access memory and/or non-volatile memory, including ROM, RAM, EPROM, EEPROM, one or more flash disc drives, one or more optical disc drives, one or more magnetic disk storage devices, and/or other solid state storage devices. Memory 222 may optionally include one or more storage devices remotely located from the CPU(s) 210. Memory 222, or alternately non-volatile memory device(s) within memory 222, comprises a computer readable storage medium. Memory 222 may store an operating system 224 (e.g., LINUX, UNIX, Windows, or Solaris) that includes procedures for handling basic system services and for performing hardware dependent tasks. Memory 222 may also store communication procedures in a network communication module 226. The communication procedures are used for communicating with one or more STBs, such as the STB 140 (FIG. 1), and with other servers and computers in the video-game system 200.
Memory 222 may also include the following elements, or a subset or superset of such elements, including an applications server module 228, a game asset management system module 230, a session resource management module 234, a player management system module 236, a session gateway module 242, a multi-player server module 244, one or more game server modules 246, an audio signal pre-encoder 264, and a bank 256 for storing macro-blocks and pre-encoded audio signals. The game asset management system module 230 may include a game database 232, including pre-encoded macro-blocks, pre-encoded audio signals, and executable code corresponding to one or more video games. The player management system module 236 may include a player information database 240 including information such as a user's name, account information, transaction information, preferences for customizing display of video games on the user's STB(s) 140 (FIG. 1), high scores for the video games played, rankings and other skill level information for video games played, and/or a persistent saved game state for video games that have been paused and may resume later. Each instance of the game server module 246 may include one or more game engine modules 248. Game engine module 248 may include game states 250 corresponding to one or more sets of users playing one or more video games, synthesizer module 252, one or more compression engine modules 254, and one or more audio frame mergers (also referred to as audio frame stitchers) 255. The bank 256 may include pre-encoded audio signals 257 corresponding to one or more video games, pre-encoded macro-blocks 258 corresponding to one or more video games, and/or dynamically generated or encoded macro-blocks 260 corresponding to one or more video games.
The game server modules 246 may run a browser application, such as Internet Explorer, Netscape Navigator or Firefox from Mozilla, to execute instructions corresponding to a respective video game. The browser application, however, may be configured not to render the video-game content in the game server modules 246. Rendering the video-game content may be unnecessary, since the content is not displayed by the game servers, and avoiding such rendering enables each game server to maintain many more game states than would otherwise be possible. The game server modules 246 may be executed by one or multiple processors. Video games may be executed in parallel by multiple processors. Games may also be implemented in parallel threads of a multi-threaded operating system.
Although FIG. 2 shows the video-game system 200 as a number of discrete items, FIG. 2 is intended more as a functional description of the various features which may be present in a video-game system rather than as a structural schematic of the embodiments described herein. In practice, and as recognized by those of ordinary skill in the art, the functions of the video-game system 200 may be distributed over a large number of servers or computers, with various groups of the servers performing particular subsets of those functions. Items shown separately in FIG. 2 could be combined and some items could be separated. For example, some items shown separately in FIG. 2 could be implemented on single servers and single items could be implemented by one or more servers. The actual number of servers in a video-game system and how features, such as the game server modules 246 and the game engine modules 248, are allocated among them will vary from one implementation to another, and may depend in part on the amount of information stored by the system and/or the amount of data traffic that the system must handle during peak usage periods as well as during average usage periods. In some embodiments, audio signal pre-encoder 264 is implemented on a computer system (which may be called a pre-encoding system) separate from the video game system(s) 200.
Furthermore, each of the above identified elements in memory 222 may be stored in one or more of the previously mentioned memory devices. Each of the above identified modules corresponds to a set of instructions for performing a function described above. The above identified modules or programs (i.e., sets of instructions) need not be implemented as separate software programs, procedures or modules, and thus various subsets of these modules may be combined or otherwise re-arranged in various embodiments. In some embodiments, memory 222 may store a subset of the modules and data structures identified above. Memory 222 also may store additional modules and data structures not described above.
FIG. 3 is a block diagram illustrating an embodiment of a set top box (STB) 300, such as STB 140 (FIG. 1). STB 300 may include one or more data processors, video processors, and/or central processing units (CPUs) 310, a communications or network interface 314 for communicating with other computers and/or servers such as video game system 200 (FIG. 2), a tuner 316, an audio decoder 318, an audio driver 320 coupled to one or more speakers 322, a video decoder 324, and a video driver 326 coupled to a display 328. STB 300 also may include one or more device interfaces 330, one or more IR interfaces 334, memory 340 and one or more signal lines 312 for coupling components to one another. The one or more data processors, video processors, and/or central processing units (CPUs) 310 may be configured or configurable for multi-threaded or parallel processing. The one or more signal lines 312 may constitute one or more communications busses. The one or more device interfaces 330 may be coupled to one or more game controllers 332. The one or more IR interfaces 334 may use IR signals to communicate wirelessly with one or more remote controls 336.
Memory 340 may include high-speed random access memory and/or non-volatile memory, including ROM, RAM, EPROM, EEPROM, one or more flash disc drives, one or more optical disc drives, one or more magnetic disk storage devices, and/or other solid state storage devices. Memory 340 may optionally include one or more storage devices remotely located from the CPU(s) 310. Memory 340, or alternately non-volatile memory device(s) within memory 340, comprises a computer readable storage medium. Memory 340 may store an operating system 342 that includes procedures (or a set of instructions) for handling basic system services and for performing hardware dependent tasks. The operating system 342 may be an embedded operating system (e.g., Linux, OS9 or Windows) or a real-time operating system suitable for use on industrial or commercial devices (e.g., VxWorks by Wind River Systems, Inc.). Memory 340 may store communication procedures in a network communication module 344. The communication procedures are used for communicating with computers and/or servers such as video game system 200 (FIG. 2). Memory 340 may also include control programs 346, which may include an audio driver program 348 and a video driver program 350.
STB 300 transmits order information and information corresponding to user actions and receives video-game content via the network 136. Received signals are processed using network interface 314 to remove headers and other information in the data stream containing the video-game content. Tuner 316 selects frequencies corresponding to one or more sub-channels. The resulting audio signals are processed in audio decoder 318. In some embodiments, audio decoder 318 is an MPEG-1 Layer II decoder, also referred to as an MP2 decoder, implemented in accordance with the MPEG-1 Layer II standard as defined in ISO/IEC standard 11172-3 (including the original 1993 version and the “Cor1:1996” revision), which is incorporated by reference herein in its entirety. The resulting video signals are processed in video decoder 324. In some embodiments, video decoder 324 is an MPEG-1 decoder, MPEG-2 decoder, H.264 decoder, or WMV decoder. In general, audio and video standards can be mixed arbitrarily, such that the video decoder 324 need not correspond to the same standard as the audio decoder 318. The video content output from the video decoder 324 is converted to an appropriate format for driving display 328 using video driver 326. Similarly, the audio content output from the audio decoder 318 is converted to an appropriate format for driving speakers 322 using audio driver 320. User commands or actions input to the game controller 332 and/or the remote control 336 are received by device interface 330 and/or by IR interface 334 and are forwarded to the network interface 314 for transmission.
The game controller 332 may be a dedicated video-game console, such as those provided by Sony PlayStation®, Nintendo®, Sega® and Microsoft Xbox®, or a personal computer. The game controller 332 may receive information corresponding to one or more user actions from a game pad, keyboard, joystick, microphone, mouse, one or more remote controls, one or more additional game controllers or other user interface such as one including voice recognition technology. The display 328 may be a cathode ray tube, a liquid crystal display, or any other suitable display device in a television, a computer or a portable device, such as a video game controller 332 or a cellular telephone. In some embodiments, speakers 322 are embedded in the display 328. In some embodiments, speakers 322 include left and right speakers (e.g., respectively positioned to the left and right of the display 328).
In some embodiments, the STB 300 may perform a smoothing operation on the received video-game content prior to displaying the video-game content. In some embodiments, received video-game content is decoded, displayed on the display 328, and played on the speakers 322 in real time as it is received. In other embodiments, the STB 300 stores the received video-game content until a full frame of video is received. The full frame of video is then decoded and displayed on the display 328 while accompanying audio is decoded and played on speakers 322.
Although FIG. 3 shows the STB 300 as a number of discrete items, FIG. 3 is intended more as a functional description of the various features which may be present in a set top box rather than as a structural schematic of the embodiments described herein. In practice, and as recognized by those of ordinary skill in the art, items shown separately in FIG. 3 could be combined and some items could be separated. Furthermore, each of the above identified elements in memory 340 may be stored in one or more of the previously mentioned memory devices. Each of the above-identified modules corresponds to a set of instructions for performing a function described above. The above identified modules or programs (i.e., sets of instructions) need not be implemented as separate software programs, procedures or modules, and thus various subsets of these modules may be combined or otherwise re-arranged in various embodiments. In some embodiments, memory 340 may store a subset of the modules and data structures identified above. Memory 340 also may store additional modules and data structures not described above.
FIG. 4A is a block diagram of a system 400 for performing MPEG-1 Layer II encoding of frames of audio data in an audio source stream in accordance with some embodiments. The system 400 produces an encoded bitstream 434 that includes compressed frames corresponding to respective frames in the audio source stream.
In the system 400, a Pseudo-Quadrature Mirror Filtering (PQMF) filter bank 402 receives 1152 Pulse-Code Modulated (PCM) audio samples 420 for a respective channel of a respective frame in the audio source stream. If the audio source stream is monaural (i.e., mono), there is only one channel; if the audio source stream is stereo, there are two channels (e.g., left (L) and right (R)). The PQMF filter bank 402 performs time-to-frequency domain conversion of the 1152 PCM samples 420 per channel to a maximum of 1152 floating point (FP) frequency samples 422 per channel, arranged in 3 blocks of 12 samples for each of a maximum of 32 bands, sometimes referred to as sub-bands. (As used herein, the term “floating point frequency sample” includes samples that are shifted into an integer range. For example, FP frequency samples may be shifted from an original floating point range of [−1.0, 1.0] to a 16-bit integer range by multiplying by 32,768.) The time-to-frequency domain conversion performed by the PQMF filter bank 402 is computationally expensive and time consuming.
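The sample layout described above can be sketched as follows. This is an illustrative sketch of the 32 bands × 3 blocks × 12 samples arrangement and of the integer-range shift the text mentions; the names are ours, and the PQMF filtering itself is omitted.

```python
import numpy as np

# One frame per channel: up to 32 bands x 3 blocks x 12 FP frequency samples.
N_BANDS, N_BLOCKS, BLOCK_LEN = 32, 3, 12

def to_int_range(fp: np.ndarray) -> np.ndarray:
    """Shift FP frequency samples from [-1.0, 1.0] into 16-bit integer
    range by multiplying by 32,768, as described in the text."""
    return fp * 32768.0

frame = np.zeros((N_BANDS, N_BLOCKS, BLOCK_LEN))  # one channel, one frame
assert frame.size == 1152  # matches the 1152 PCM samples converted per channel
```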
A block-wide scale factor calculation module 404 receives the FP frequency samples 422 from the PQMF filter bank 402 and calculates scale factors used to store the FP frequency samples 422. To reduce the number of bits required to store the FP frequency samples 422 in the compressed frame produced by the system 400, the module 404 determines a block-wide maximum scale factor 424 for each of the three blocks of 12 samples of a particular frequency band. The 12 samples of a respective block for a particular band, as scaled by the block-wide scale factor, can be stored using the block-wide scale factor, which functions as a single common exponent. The module 404 determines block-wide scale factors 424 independently for each of the up to 32 bands, resulting in a maximum of 96 scale factors 424 per frame. The scale factors 424 are one of the parameters used by the scaling and quantization module 412, described below, to quantize the mantissas of the FP frequency samples 422 in the compressed frame. (FP frequency samples as stored in a compressed frame in an encoded bitstream are represented by a mantissa and a scale factor.)
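The per-block maximum described above can be sketched in a few lines. This is a hedged illustration: the real encoder maps each maximum to the nearest entry in a scale-factor table, a lookup omitted here.

```python
import numpy as np

def block_scale_factors(band: np.ndarray) -> np.ndarray:
    """For one frequency band's 3 blocks of 12 FP samples, return each
    block's maximum magnitude, which serves as the block-wide scale
    factor (common exponent).  With up to 32 bands this yields the
    maximum of 96 scale factors per frame noted in the text."""
    assert band.shape == (3, 12)
    return np.abs(band).max(axis=1)  # one value per block
```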
A scale factor compression module 408, which receives the block-wide scale factors 424 from the module 404, further saves bits in the compressed frame by determining the difference of the three scale factors 424 for a particular frequency band in a frame and classifying the difference into one of 8 transmission patterns. Transmission patterns are referred to as scale factor select information (scfsi 428) and are used to compress the three scale factors 424 for respective frequency bands. For some patterns, depending on the relative difference between the three scale factors for a particular band, the value of one or two of the three scale factors is set equal to that of a third scale factor. Thus the quantization performed by the scaling and quantization module 412 is influenced by the selected transmission pattern 428.
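In the spirit of this step, a toy classifier might look like the following. The thresholds and pattern labels here are illustrative assumptions, not the actual ISO/IEC 11172-3 classification tables, which are considerably larger.

```python
def select_scfsi(sf):
    """Toy version of the step above: the spread of a band's three
    block scale factors picks a transmission pattern, and for some
    patterns two or three scale factors are collapsed into one."""
    a, b, c = sf
    if a == b == c:
        return "send one", (a, a, a)            # all equal: transmit once
    if max(a, b, c) - min(a, b, c) <= 2:        # nearly equal: collapse
        return "send one (collapsed)", (b, b, b)
    return "send all three", (a, b, c)          # transmit all three
```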
A Psycho-Acoustic Model (PAM) module 406 receives the FP frequency samples 422 from the PQMF filter bank 402 as well as the PCM samples 420 and determines a Signal-to-Mask Ratio (SMR) 426 according to a model of the human hearing system. In some embodiments, the PAM module 406 performs a fast Fourier transform (FFT) of the source PCM samples 420 as part of the determination of the SMR 426. Accordingly, depending on the method used, application of the PAM is highly computationally expensive. The resulting SMR 426 is provided to the bit allocation module 410 and bitstream formatting module 414, described below, and is used in the bit allocation process to determine which frequency bands require more bits in comparison to others to avoid artifacts.
A bit allocation module 410 receives the transmission pattern 428 from the scale factor compression module 408 and the SMR 426 from the PAM module 406 and produces bit allocation information 430. The module 410 performs an iterative bit allocation process, operating across frequency bands and channels, to assign bits to frequency bands depending on a Mask-to-Noise Ratio (MNR) defined as MNR[band] = SNR[band] − SMR[band], where SNR is provided by a fixed table determining the importance of each band, and SMR 426 is the result of the psycho-acoustic model calculation performed by the PAM module 406. Bands with the current minimum MNR receive more bits first, by relaxing the quantization for the band (initially, the quantization is set to “maximum” for all bands, which corresponds to no information being stored at all). When a band is selected to receive bits, the scale factor select information 428 is used to determine the fixed number of bits required to store the scale factors for this band. The bit allocation process can require a significant number of iterations to complete; it ends when no more bits are available in the compressed target frame of the encoded bitstream 434. In general, the number of bits available for allocation depends on the selected target bit rate at which the encoded bitstream 434 is to be transmitted.
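The greedy loop described above can be sketched as follows. All tables here are illustrative placeholders, not the ISO/IEC 11172-3 allocation tables; the sketch only shows the "lowest MNR first, until the budget runs out" control flow.

```python
def allocate_bits(smr, snr_gain_per_step, bits_per_step, frame_bits):
    """Greedy sketch of the iterative process above.  Each band starts at
    'no information'; the band with the current minimum
    MNR[band] = SNR[band] - SMR[band] has its quantization relaxed one
    step (costing bits_per_step[band] bits) until the budget is spent."""
    n = len(smr)
    steps = [0] * n        # quantization steps granted per band
    snr = [0.0] * n        # SNR achieved so far per band
    budget = frame_bits
    while True:
        band = min(range(n), key=lambda b: snr[b] - smr[b])  # lowest MNR
        cost = bits_per_step[band]
        if cost > budget:
            break          # no more bits available in the target frame
        budget -= cost
        steps[band] += 1
        snr[band] += snr_gain_per_step[band]
    return steps
```

With two bands, a high-SMR band is served repeatedly before a low-SMR band sees any bits, mirroring how perceptually demanding bands win the allocation.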
A scaling and quantization module 412 receives the FP frequency samples 422 from the module 402, the block-wide scale factors 424 from the module 404, and the bit allocation information 430 from the module 410. The scaling and quantization module 412 scales the mantissas of the FP frequency samples 422 of each frequency band according to the block-wide scale factors 424 and quantizes the mantissas according to the bit allocation information 430.
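The scale-then-quantize step above can be sketched per block. Uniform quantization levels are an assumption made for illustration; Layer II actually uses specific per-band step counts from its allocation tables.

```python
import numpy as np

def scale_and_quantize(block, scale_factor, n_bits):
    """Divide a block's FP samples by the block-wide scale factor so the
    mantissas fall in [-1, 1], then quantize the mantissas to
    2**n_bits - 1 uniform levels (an illustrative simplification)."""
    mantissas = np.asarray(block, dtype=float) / scale_factor
    levels = (1 << n_bits) - 1
    q = np.round((mantissas + 1.0) / 2.0 * levels)
    return np.clip(q, 0, levels).astype(int)
```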
Quantized mantissas 432 from the scaling and quantization module 412 are provided to a bitstream formatting module 414 along with the SMR 426 from the PAM module 406, based on which the module 414 generates compressed target frames of the encoded bitstream 434. Generating a target frame includes storing a frame header, storing the bit allocation information 430, storing scale factors 424, storing the quantized mantissas 432 for the FP frequency samples 422 as scaled by the scale factors 424, and adding stuffing bits. To store the frame header, 32 frame header bits, plus optionally an additional 16 bits for cyclic redundancy check (CRC), are written to the compressed target frame. To store the bit allocation information, the numbers of bits required for the mantissas of the FP frequency samples 422 are stored as indices into a table, to save bits. Scale factors 424 are stored according to the transmission pattern (scfsi 428) determined by the module 408. Depending on the selected scfsi 428 for a frequency band, either three, two, or just one scale factor(s) are stored for the band. The scale factor(s) are stored as indices into a table of scale factors. Stuffing bits are added if the bit allocation cannot completely fill the target frame.
In the case of a stereo source with two channels, the encoding process performed by the system 400 is executed independently for each channel, and the bitstream formatting module 414 combines the data for both channels and writes the data to respective channels of the encoded bitstream 434. In the case of a mono source with a single channel, the encoding process encodes the data for the single channel and writes the encoded data to the encoded bitstream 434. In the case of “joint stereo mode,” the encoding process creates two channels of encoded FP frequency samples for frequency bands below or equal to a specified (e.g., predefined) limit, but only one channel of encoded FP frequency samples for all frequency bands above the specified limit. In joint stereo mode, the encoder thus effectively operates as a single-channel (i.e., mono) encoder for bands above the specified limit, and as a stereo encoder for bands below or equal to the specified limit.
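The joint stereo band split described above can be illustrated as follows. Averaging is assumed here as the way the single high-band channel is formed; the text does not specify the merge, so treat that choice as ours.

```python
import numpy as np

def split_joint_stereo(left, right, band_limit):
    """Bands with index <= band_limit keep separate L and R samples
    (stereo); higher bands are merged into a single channel (here by
    averaging, an assumption).  Inputs are one value per band for
    simplicity; real frames carry 3 blocks x 12 samples per band."""
    left = np.asarray(left, dtype=float)
    right = np.asarray(right, dtype=float)
    lo = band_limit + 1
    stereo_part = np.stack([left[:lo], right[:lo]])   # 2 x lo
    mono_part = (left[lo:] + right[lo:]) / 2.0        # single channel
    return stereo_part, mono_part
```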
AlthoughFIG. 4Ashows the encoding system400as a number of discrete modules,FIG. 4Ais intended more as a functional description of the various features which may be present in an encoder rather than as a structural schematic of an encoder. In practice, and as recognized by those of ordinary skill in the art, modules shown separately inFIG. 4Acould be combined and some modules could be separated into multiple modules. In some embodiments, each of the above-identified modules402,404,406,408,410,412, and414corresponds to a set of instructions for performing a function described above. These sets of instructions need not be implemented as separate software programs, procedures, or modules, and thus various subsets of these modules may be combined or otherwise re-arranged in various embodiments. Alternatively, one or more of the above-identified modules402,404,406,408,410,412, and414may be implemented in hardware.
In the video game system200, it is desirable to be able to mix multiple audio source streams in real time. For example, continuous (e.g., present over an extended period of time) background music may be mixed with one or more discrete sound effects generated based on a current state of a video game (e.g., in response to a user input), such that the background music will continue to play while the one or more sound effects are played. Combining PCM samples for the multiple audio source streams and then using the system400to encode the combined PCM samples is computationally inefficient because the encoding performed by the system400is computationally intensive. In particular, PQMF filtering, scale factor calculation, application of a PAM, and bit allocation are all computationally intensive operations. Accordingly, it is desirable to encode audio source streams such that the encoded streams can be mixed in real time without performing one or more of these operations.
In some embodiments, independent audio source streams are mixed by performing PQMF filtering off-line and then adding respective FP frequency samples of respective sources in real-time and dividing the results by a constant value, or adjusting the scale factors accordingly, to avoid clipping. For example, two sources of audio (e.g., two stereo sources with two channels (L+R) each) may be mixed by performing PQMF filtering of each source (e.g., by PQMF-filtering each of the two channels of each source) offline and then adding respective FP frequency samples of the two sources in real time. Specifically, each of the twelve FP frequency samples in each of the 3 blocks for a particular frequency band in a frame of the first source is added to a corresponding FP frequency sample at a corresponding location in a corresponding block for the particular frequency band in a corresponding frame of the second source. To avoid clipping, the resulting combined FP frequency samples are divided by a constant value (e.g., 2 or √2) or their scale factors are adjusted accordingly. Real-time mixing is then performed by executing the other steps of the encoding process (e.g., as performed by the modules404,406,408,410,412, and414,FIG. 4A) for the combined FP frequency samples. In some embodiments, because division of the combined FP frequency samples by the constant value leads to the volume level of the mixed audio being lower than that of unmixed audio, unmixed audio is scaled down by the same amount to achieve an even volume level.
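A rough sketch of this mixing step (function and variable names are hypothetical; the 3-blocks-of-12 layout per band is simplified to a flat list of 36 samples):

```python
import math

def mix_subband_frames(frame_a, frame_b, divisor=2.0):
    """Mix two pre-PQMF-filtered frames of FP frequency samples.

    Each frame is a list of frequency bands; each band holds 36 FP
    frequency samples (3 blocks of 12). Corresponding samples are
    added and divided by a constant (e.g. 2 or sqrt(2)) to avoid
    clipping, as described in the text.
    """
    mixed = []
    for band_a, band_b in zip(frame_a, frame_b):
        mixed.append([(a + b) / divisor for a, b in zip(band_a, band_b)])
    return mixed

# Two hypothetical one-band frames (36 samples each).
src1 = [[0.5] * 36]
src2 = [[0.25] * 36]
out = mix_subband_frames(src1, src2)                   # divide by 2
out_sqrt = mix_subband_frames(src1, src2, math.sqrt(2))  # divide by sqrt(2)
```

The remaining encoding steps (scale factors, quantization, formatting) would then run on the mixed samples.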
In some embodiments, in addition to performing PQMF filtering off-line, the audio source streams are further encoded off-line by applying a fixed PAM to the FP frequency samples produced by the PQMF filtering and by precalculating scale factors. Furthermore, in some embodiments the scale factors are calculated such that each of the three blocks for a particular frequency band in a frame has the same scale factor (i.e., the difference between the scale factors of the three blocks of a frequency band is zero), resulting in a constant transmission pattern (0x111) for each frequency band in each frame. The scale factors thus are frame-wide scale factors, as opposed to the block-wide scale factors424generated in the system400(FIG. 4A). The combination of a fixed PAM and frame-wide scale factors results in a constant bit allocation.
The fixed PAM corresponds to a table of SMR values (i.e., an SMR table) to be applied to FP frequency samples of respective frequency bands. Use of a fixed PAM eliminates the need to re-apply a full PAM to each frame in a stream. The SMR values may be determined empirically by performing multiple runs of an SMR detection algorithm (e.g., implemented in accordance with the MPEG-1 Layer II audio specification) using different kinds of audio material (e.g., various audio materials resembling the audio material in a video game) and averaging the results. For example, the following SMR table was found to provide acceptable results, with barely noticeable artifacts in the higher frequency bands: {30, 17, 16, 10, 3, 12, 8, 2.5, 5, 5, 6, 6, 5, 6, 10, 6, −4, −10, −21, −30, −42, −55, −68, −75, −75, −75, −75, −75, −91, −107, −110, −108}
The SMR values in this table correspond to respective frequency bands, sorted by increasing frequency, and are used for each of the two channels in a stereo source stream. Thus, in this example, the frequencies in the lower half of the spectrum get more weight, against which the weights for the upper frequencies are traded off.
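Because the PAM is fixed, it reduces to a per-band table lookup; a minimal sketch using the SMR values listed above (the function name is illustrative):

```python
# Empirically determined SMR values (dB) per frequency band, sorted by
# increasing frequency, as listed in the text (32 bands).
FIXED_SMR_TABLE = [30, 17, 16, 10, 3, 12, 8, 2.5, 5, 5, 6, 6, 5, 6, 10, 6,
                   -4, -10, -21, -30, -42, -55, -68, -75, -75, -75, -75, -75,
                   -91, -107, -110, -108]

def smr_for_band(band_index):
    """Return the fixed signal-to-mask ratio for a frequency band.

    Using a fixed table replaces per-frame application of a full
    psycho-acoustic model; the same SMR values are used for each
    channel of a stereo source.
    """
    return FIXED_SMR_TABLE[band_index]
```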
FIG. 4Bis a block diagram of a system440for performing offline encoding of frames of audio data in an audio source stream using a fixed PAM and frame-wide scale factors in accordance with some embodiments. A frame-wide scale factor calculation module442receives FP frequency samples422from the PQMF filter bank402, which operates as described with regard toFIG. 4A. The frame-wide scale factor calculation module442determines a frame-wide maximum scale factor444for the 36 FP frequency samples422in a particular frequency band of a frame. Because all three blocks for each frequency band have the same scale factor, the transmission pattern is a constant, known value (e.g., pattern 0x111). Accordingly, the scale factor compression module408of the system400(FIG. 4A) is omitted from the system440.
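A frame-wide scale factor selection of this kind might be sketched as follows (hypothetical names; the 4-entry table is purely illustrative, not the MPEG-1 Layer II table, which is assumed sorted in decreasing order):

```python
def frame_wide_scale_factor_index(band_samples, table):
    """Pick one scale factor index covering all 36 FP frequency samples
    (3 blocks x 12) of a frequency band in a frame.

    The smallest table entry that still covers the largest sample
    magnitude is chosen, so a single scale factor (and hence a
    constant scfsi transmission pattern) serves all three blocks.
    """
    peak = max(abs(s) for s in band_samples)
    # Walk from the smallest scale factor upward; first entry >= peak wins.
    for i in range(len(table) - 1, -1, -1):
        if table[i] >= peak:
            return i
    return 0  # peak exceeds every entry; fall back to the largest factor

# Illustrative 4-entry table in decreasing order.
table = [2.0, 1.0, 0.5, 0.25]
idx = frame_wide_scale_factor_index([0.1, -0.6, 0.3] * 12, table)
```

Here the peak magnitude is 0.6, so the entry 1.0 (index 1) is selected for the whole frame.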
Because the transmission pattern is constant and the SMR provided by the fixed PAM is constant, the bit allocation information446is also constant, allowing the bit allocation module410of the system400(FIG. 4A) to be omitted from the system440. The constant bit allocation information446, frame-wide scale factors444, and FP frequency samples422are provided to the scaling and quantization module412, which produces quantized mantissas448. The quantized mantissas448are provided to the bitstream formatting module414along with the constant transmission pattern450and constant SMR452. The bitstream formatting module414produces an encoded bitstream454, which is stored for subsequent real-time mixing with other encoded bitstreams454generated from other audio source streams. In some embodiments, encoded bitstreams454are stored as pre-encoded audio signals257in the memory222of a video game system200(FIG. 2).
In some embodiments, scale factors (e.g., block-wide scale factors424,FIG. 4A, or frame-wide scale factors444,FIG. 4B) are stored as indices into a table of scale factors. For example, the MPEG-1 Layer II standard uses 6-bit binary indices to reference 64 distinct possible scale factors. Thus, in some embodiments the block-wide scale factors424(FIG. 4A) and/or frame-wide scale factors444(FIG. 4B) are stored as 6-bit indices into a table of 64 distinct scale values (e.g., as specified by the MPEG-1 Layer II standard). 6-bit indices provide 2 dB resolution, with one step in the scale factor corresponding to 2 dB. In some embodiments, however, additional bits beyond the specified 6 bits are used to store higher-resolution scale factors for encoded bitstreams. This use of higher-resolution scale factors improves the sound quality resulting from mixing encoded bitstreams.
FIG. 4Cis a block diagram of a system460for performing offline encoding of frames of audio data in accordance with some embodiments. Like the system440(FIG. 4B), the system460uses a fixed PAM and frame-wide scale factors. However, the system460uses high-precision frame-wide scale factors470, as determined by the frame-wide scale factor calculation module462. In this context, “high-precision” refers to higher than 6-bit resolution for the scale factor indices. The system460also separates the scaling and quantization operations performed by the module412in the system440(FIG. 4B). In the system460, a high-precision scaling module464generates scaled mantissas472, which then are quantized by the quantization module466. This separation allows the scaled mantissas472to be stored before quantization. The quantization module466provides quantized mantissas474to the bitstream formatting module414, which generates an encoded bitstream476.
In some embodiments, 8-bit binary indices are used to store the high-precision frame-wide scale factors470. 8-bit indices provide 0.5 dB resolution, with one step in the scale factor corresponding to 0.5 dB. For example, the available high-precision frame-wide scale factors470may have values determined by the formula
HighprecScaleFactor[i]=2^(1−i/12), for i=0 to 255,  (1)
where i is an integer that serves as an index. The scale factors as determined by this formula may be stored in a look-up table indexed by i. Use of 8-bit indices allows mantissas to be virtually shifted by 1/12 of a bit, as opposed to ⅓ of a bit for 6-bit indices.
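The table defined by Equation (1), and its 0.5 dB step size, can be checked with a short sketch:

```python
import math

# High-precision scale factor look-up table per Equation (1): 2^(1 - i/12).
HIGHPREC_SCALE_FACTOR = [2.0 ** (1.0 - i / 12.0) for i in range(256)]

# Consecutive entries differ by a factor of 2^(1/12), i.e.
# 20 * log10(2^(1/12)) ~= 0.5 dB per index step.
step_db = 20.0 * math.log10(HIGHPREC_SCALE_FACTOR[0] / HIGHPREC_SCALE_FACTOR[1])
```

Note that index 12 corresponds to a scale factor of exactly 1.0, and each block of 12 indices spans one full factor of 2 (one bit of virtual shift).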
In some embodiments, scaled mantissas (e.g.,472) are stored using a single byte each. In some embodiments, scaled mantissas (e.g.,472) are stored using 16 bits each.
In some embodiments, encoded bitstreams476are stored as pre-encoded audio signals257in the memory222of a video game system200(FIG. 2).
FIGS. 4B and 4C, likeFIG. 4A, are intended more as functional descriptions of the various features which may be present in encoders (e.g., in an audio signal pre-encoder264,FIG. 2) rather than as structural schematics of encoders. In practice, and as recognized by those of ordinary skill in the art, modules shown separately inFIGS. 4B and 4Ccould be combined and some modules could be separated into multiple modules. In some embodiments, each of the above-identified modules402,442,412, and414(FIG. 4B) or402,462,464,466, and414(FIG. 4C) corresponds to a set of instructions for performing a function described above. These sets of instructions need not be implemented as separate software programs, procedures, or modules, and thus various subsets of these modules may be combined or otherwise re-arranged in various embodiments. Alternatively, one or more of the above-identified modules402,442,412, and414(FIG. 4B) or402,462,464,466, and414(FIG. 4C) may be implemented in hardware.
To mix multiple encoded bitstreams (e.g., multiple encoded bitstreams454(FIG. 4B) or476(FIG. 4C)) in real time, respective FP frequency samples in the encoded bitstreams are combined. For example, to mix first and second encoded bitstreams, each of the 36 FP frequency samples of a particular frequency band in a frame of the first encoded bitstream is combined with a respective FP frequency sample of the same frequency band in a corresponding frame of the second encoded bitstream. In some embodiments, combining the FP frequency samples includes calculating an adjusted scale factor to scale FP frequency samples in a particular frequency band of respective frames of the first and second encoded bitstreams. In some embodiments, the adjusted scale factor is calculated as a function of the difference between the frame-wide scale factors of the respective frames of the first and second encoded bitstreams for a particular frequency band. For example, the adjusted scale factor may be calculated by subtracting the larger of the two scale factors from the smaller of the two scale factors and, based on the difference, adding an offset to the larger of the two scale factors, where the offset is a monotonically decreasing (i.e., never increasing) function of the difference between the larger and smaller of the two scale factors.
As discussed above, the scale factors may be represented by indices into a table of scale factors. As can be seen in Equation (1), lower indices i correspond to larger scale factors, and vice versa (i.e., the higher the index i, the smaller the scale factor). Thus, to calculate the index for the adjusted scale factor, the difference between the scale factors of the respective frames of the first and second encoded bitstreams for a particular frequency band is determined. Based on the difference, an offset is subtracted from the lower of the two indices, wherein the offset is a monotonically decreasing (i.e., never increasing) function of the difference.
FIG. 5is a flow diagram of a process500of mixing high-precision frame-wide scale factors470of respective frames of first and second encoded bitstreams for a particular frequency band by determining an adjusted scale factor index based on indices for the high-precision frame-wide scale factors470of the first and second encoded bitstreams476in accordance with some embodiments. In some embodiments, the process500is performed by an audio frame mixer (e.g., mixer255,FIG. 2). In the process500, the upper and lower (i.e., larger and smaller) indices for the high-precision frame-wide scale factors470of respective frames of the first and second encoded bitstreams for a particular frequency band are identified (502) and the difference between the upper and lower indices is determined (504). If the difference between the two indices is less than 12 (506-Yes), then the adjusted scale factor index is set equal to the lower index minus 12 (508). If not (506-No), and if the difference between the two indices is less than 24 (510-Yes), then the adjusted scale factor index is set equal to the lower index minus 8 (512). If not (510-No), and if the difference between the two indices is less than 36 (514-Yes), then the adjusted scale factor index is set equal to the lower index minus 4 (516). Otherwise, the adjusted scale factor index is set equal to the lower index (518). The offsets in the process500are thus seen to be a monotonically decreasing (i.e., never increasing) function of the difference between the upper and lower indices: as the difference increases, the offsets decrease monotonically from 12 (508) to 8 (512) to 4 (516) to zero (518). These offset values and their corresponding ranges of differences are merely examples of possible offsets; other values may be used if they are empirically determined to provide acceptable sound quality. A similar process to the process500may be implemented using 6-bit resolution scale factor indices.
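The branching of the process500can be sketched as a small function (the function name is illustrative):

```python
def adjusted_scale_factor_index(idx1, idx2):
    """Compute the adjusted scale factor index for mixing two frames'
    scale factors in one frequency band, per the process500.

    The offset subtracted from the lower index (which names the larger
    scale factor) is a monotonically decreasing function of the
    difference between the indices: 12, then 8, then 4, then 0.
    """
    lower, upper = min(idx1, idx2), max(idx1, idx2)
    diff = upper - lower
    if diff < 12:          # 506-Yes
        return lower - 12  # 508
    if diff < 24:          # 510-Yes
        return lower - 8   # 512
    if diff < 36:          # 514-Yes
        return lower - 4   # 516
    return lower           # 518
```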
Once the adjusted scale factor has been determined, respective FP frequency samples in corresponding frames and frequency bands of the first and second encoded bitstreams (e.g., bitstreams454(FIG. 4B) or476(FIG. 4C)) are scaled by the adjusted scale factor and then added together according to the following formula:
Combined FP Freq. Sample=(FP1*SF1)/Adj.SF+(FP2*SF2)/Adj.SF (2)
where FP1 and FP2 are respective unscaled FP frequency samples422reconstructed from the first and second encoded bitstreams, SF1 and SF2 are their original scale factors (e.g.,444(FIG. 4B) or470(FIG.4C)), and Adj.SF is the adjusted scale factor (e.g., calculated according to the process500,FIG. 5). Where the scale factors SF1, SF2, and Adj.SF are stored as indices into a table of scale factors HighprecScaleFactor[i], respective FP frequency samples are combined according to the following formula, which is equivalent to Equation (2):
Combined FP Freq. Sample=FP1*HighprecScaleFactor[Adj.idx−SF1.idx]+FP2*HighprecScaleFactor[Adj.idx−SF2.idx] (3)
where Adj.idx is the index corresponding to Adj.SF, SF1.idx is the index corresponding to SF1, and SF2.idx is the index corresponding to SF2.
In some embodiments, if the absolute value of “Combined FP Freq. Sample” exceeds a predefined limit, it is adjusted to prevent clipping. For example, if “Combined FP Freq. Sample” is greater than a predefined limit (e.g., 32,767), it is set equal to the limit (e.g., 32,767). Similarly, if “Combined FP Freq. Sample” is less than a predefined limit (e.g., −32,768), it is set equal to the limit (e.g., −32,768). The boundaries [−32,768, 32,767] result from shifting the FP frequency samples from an original floating point range of [−1.0, 1.0] by multiplying by 32,768. Shifting the FP frequency samples into the 16-bit integer range uses less storage for the pre-encoded data and allows for faster integer operations during real time stream merging.
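A sketch combining Equation (2) with the clipping limits described above (the function name is hypothetical, and scale factors are passed as values rather than table indices for clarity):

```python
def combine_fp_samples(fp1, sf1, fp2, sf2, adj_sf, lo=-32768, hi=32767):
    """Combine two FP frequency samples per Equation (2), then clamp.

    fp1/fp2 are unscaled FP frequency samples (already shifted into the
    16-bit integer range), sf1/sf2 are their original scale factors, and
    adj_sf is the adjusted scale factor. The result is clamped to
    [-32768, 32767] to prevent clipping.
    """
    combined = (fp1 * sf1) / adj_sf + (fp2 * sf2) / adj_sf
    return max(lo, min(hi, combined))

mixed = combine_fp_samples(10000, 1.0, 20000, 1.0, 2.0)
clamped = combine_fp_samples(30000, 2.0, 30000, 2.0, 1.0)  # exceeds limit
```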
The Combined FP Freq. Samples are written to an output bitstream, which is provided to an appropriate system for playback. For example, the output bitstream may be transmitted to a STB300where it is decoded and provided to speakers for playback.
An output bitstream may include mixed audio data from multiple sources at some times and audio data from only a single source at other times. In some embodiments, encoded bitstreams include real-time-mixable data as well as standard MPEG-1 Layer II data that may be provided to the output bitstream when mixing is not being performed.
FIG. 6is a block diagram of a system600that combines elements of the systems400(FIG. 4A) and 460(FIG. 4C) to generate mixable frames606that include both real-time mixable audio data as generated by the system460and standard MPEG-1 Layer II audio data in accordance with some embodiments. The real-time mixer (e.g., audio frame merger255,FIG. 2) selects the standard MPEG-1 Layer II audio data when only a single audio source (e.g., background music in a video game) is specified for playback and selects the real-time mixable audio data when multiple audio sources (e.g., background music and a sound effect) are specified to be mixed for playback. In the system600, the scaled mantissas472generated by the high-precision scaling module464are stored as pre-encoded mixable data by the module602. A combine data module604combines the pre-encoded mixable data with the standard MPEG-1 Layer II frame generated by the bitstream formatting module414to produce a mixable frame606that includes both the real-time mixable audio data and the standard MPEG-1 Layer II audio data.
For stereo mode, the system600processes each channel separately, resulting in two sets of data that are stored in separate channels of the mixable frames606. For joint stereo mode, the system600produces three sets of data that are stored separately in the mixable frames606.
In some embodiments, mixable frames606are stored as audio frame sets.FIG. 7illustrates a data structure of an audio frame set700generated by the system600in accordance with some embodiments. In the example ofFIG. 7, the frame set700is generated from a stereo source stream and thus has two channels. The frame set700includes a header702, constant bit allocation information704-1and704-2(e.g., corresponding to constant bit allocation information446,FIG. 6) for each of the two channels, and frames706-1through706-n, where n is an integer corresponding to the number of frames in the set700. The frames706each include a standard MPEG-1 Layer II frame708(e.g., corresponding to frame608,FIG. 6) with two channels, high precision frame-wide scale factors710-1and710-2(e.g., corresponding to scale factors470) for each of the two channels, and scaled mantissas712-1and712-2(e.g., corresponding to scaled mantissas472) for each of the two channels. The high precision scale factors710are stored as scale factor table indices714-0through714-31(for the example of 32 frequency bands, in which case sblimit=31), each of which corresponds to a particular frequency band. The scaled mantissas712include scaled mantissas716-0through716-31(for the example of 32 frequency bands, in which case sblimit=31), each corresponding to a particular frequency band.
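One hypothetical rendering of the frame set700layout as data structures (field names and types are assumptions for illustration, not from the specification):

```python
from dataclasses import dataclass
from typing import List

@dataclass
class MixableFrame:
    """One frame 706: standard Layer II data plus real-time-mixable data."""
    layer2_frame: bytes                    # standard MPEG-1 Layer II frame (708)
    scale_factor_indices: List[List[int]]  # per channel, one index per band (710)
    scaled_mantissas: List[List[float]]    # per channel, per band (712)

@dataclass
class AudioFrameSet:
    """Audio frame set 700 for a stereo source (two channels)."""
    header: bytes
    bit_allocation: List[List[int]]        # constant bit allocation per channel (704)
    frames: List[MixableFrame]

# Illustrative frame set with 32 bands, two channels, one frame.
example = AudioFrameSet(
    header=b"\x00" * 4,
    bit_allocation=[[4] * 32, [4] * 32],
    frames=[MixableFrame(b"", [[0] * 32, [0] * 32], [[0.0] * 32, [0.0] * 32])],
)
```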
FIG. 8is a flow diagram illustrating a process800of real-time audio frame mixing, also referred to as audio frame stitching, in accordance with some embodiments. The process800is performed by an audio frame merger (e.g., audio frame merger255,FIG. 2) and generates an output bitstream for transmission to a client device (e.g., to STB300,FIG. 3) for playback.
In the process800, a fast copy of the constant header and bit allocation information to the target frame in the output bitstream is performed (802). Because the bits of the frame header do not change (i.e., are constant from frame to frame) once they have been set at the beginning of the real-time mixing, and because the constant bit allocation immediately follows the frame header, in some embodiments both the frame header bits and the constant bit allocation are stored in a constant bit array and copied to the beginning of each frame in the output bitstream in operation802.
For each channel in the target frame of the output bitstream, respective scale factors in the corresponding frames of the encoded bitstreams are mixed (804). For example, an adjusted scale factor is calculated in accordance with the process500(FIG. 5).
For each channel in the target frame of the output bitstream, respective scaled mantissas in the corresponding frames in the encoded bitstreams being mixed are combined (806). The mantissas are combined, for example, in accordance with Equations (2) and (3). The combined mantissas are quantized (808) according to the constant bit allocation. The combined mantissas and corresponding adjusted scale factors are written (810) to the target frame of the output bitstream.
The operations804and806may be repeated an arbitrary number of times to mix in additional encoded bitstreams corresponding to additional sources.
The process800may include calculation of a CRC. Alternatively, the CRC is omitted to save CPU time.
If two stereo encoded bitstreams corresponding to two independent stereo sources are mixed, their left channels are mixed into the left channel of the output bitstream and their right channels are mixed into the right channel of the output bitstream. If a stereo encoded bitstream corresponding to a stereo source (e.g., to background music) is mixed with a mono encoded bitstream corresponding to a mono source (e.g., to a sound effect), a pseudo-center channel may be simulated by mixing the mono encoded bitstream with both the left and right channels of the stereo encoded bitstream, such that the left channel of the output bitstream is a mix of the mono encoded bitstream and the left channel of the stereo encoded bitstream, and the right channel of the output bitstream is a mix of the mono encoded bitstream and the right channel of the stereo encoded bitstream. Alternatively, a mono encoded bitstream may be mixed with only one channel of a stereo encoded bitstream, such that one channel of the output bitstream is a mix of the mono encoded bitstream and one channel of the stereo encoded bitstream and the other channel of the output bitstream only includes audio data from the other channel of the stereo encoded bitstream.
Attention is now directed to operation of the audio frame merger255(FIG. 2) in different scenarios.
If no sources are to be played, the audio frame merger255copies a standard MPEG-1 Layer II frame containing silence to the data location of the target frame in the output bitstream.
If a single source is to be played, the audio frame merger255copies the standard MPEG-1 Layer II frame608/708(FIGS. 6 and 7) for the source to the data location of the target frame in the output bitstream. The copied frame608/708may be in mono, stereo, or joint stereo mode.
If two or more sources are to be mixed, the scaled mantissas and corresponding scale factors (e.g., frame-wide scale factors444,FIG. 4B, or high-precision frame-wide scale factors470,FIG. 4C) from the encoded bitstream for one of the sources are copied to separate intermediate stores for each channel. The values in the intermediate stores are then mixed with respective values from the encoded bitstream of a second source (e.g., in accordance with the process800,FIG. 8) and the results are written back to the intermediate stores. This process may be repeated to mix in data from additional sources.
In some embodiments, if the target frame has two channels but there is only source data for one channel, the mixer automatically copies scale factors and scaled mantissas comprising silence to the corresponding intermediate store of the other channel.
Once the mixing is complete, the target frame of the output bitstream is constructed based on the pre-computed frame header, the constant bit allocation, and the data in the intermediate stores. Where high-precision frame-wide scale factors are used, the scale factor indices are divided down to the standard 6-bit indices, which are written to the target frame. For example, if 8-bit high-precision frame-wide scale factor indices are used for the scale factors470, the adjusted scale factor indices in the intermediate stores are divided by four before being written to the output bitstream. The mixed, scaled mantissas in the intermediate stores are quantized (e.g., in accordance with the MPEG-1 Layer II standard quantization algorithm) and written to the output bitstream.
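The index down-conversion described above, from 8-bit high-precision indices (0.5 dB steps) to standard 6-bit indices (2 dB steps), amounts to an integer division by four (the function name is illustrative):

```python
def to_standard_index(highprec_index):
    """Convert an 8-bit high-precision scale factor index (0.5 dB per
    step) to a standard 6-bit MPEG-1 Layer II index (2 dB per step)
    by dividing by four, as described in the text.
    """
    return highprec_index // 4
```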
FIG. 9illustrates a data structure of an audio frame900in an output bitstream generated by the process800in accordance with some embodiments. The frame header902, bit allocation information904, and transmission pattern906are constant in value. The frame900also includes scale factors908stored as indices (e.g., 6-bit indices) into a table of scale factors, and blocks910-1,910-2, and910-3. Each block910includes frequency sample mantissas912-1through912-12for each frequency band being used. One or more values906,908, and/or912may be absent. For example, a particular frequency band may be unused. In some embodiments, three consecutive mantissas912are compressed into a single code word in accordance with the MPEG-1 Layer II standard.
FIG. 10Ais a flow diagram illustrating a process1000of encoding audio in accordance with some embodiments.
In the process1000, a plurality of independent audio source streams is accessed (1002). Each source stream includes a sequence of source frames. Respective source frames of each sequence include respective pluralities of pulse-code modulated audio samples (e.g., PCM samples420,FIGS. 4B-4Cand6).
Each of the plurality of independent audio source streams is separately encoded (1004) to generate a plurality of independent encoded streams (e.g., encoded bitstreams454,FIG. 4B, or476,FIG. 4C). Each independent encoded stream corresponds to a respective independent audio source stream. The encoding includes, for respective source frames, converting respective pluralities of pulse-code modulated audio samples (e.g., PCM samples420,FIGS. 4B-4C) to respective pluralities of floating-point frequency samples (e.g., FP frequency samples422,FIGS. 4B-4Cand6) that are divided into a plurality of frequency bands.
In some embodiments, a respective encoded stream generated from a respective source stream includes a sequence of encoded frames (e.g., frames706,FIG. 7) that correspond to respective source frames in the respective source stream.
In some embodiments, converting the respective pluralities of pulse-code modulated audio samples to respective pluralities of floating-point frequency samples includes performing (1006) Pseudo-Quadrature Mirror Filtering (PQMF) of the respective pluralities of pulse-code modulated audio samples (e.g., using the PQMF filter bank402,FIGS. 4B-4C).
In some embodiments, the encoding includes applying (1008) a fixed psycho-acoustic model (PAM) to successive respective pluralities of floating-point frequency samples. In some embodiments, the fixed PAM is implemented as a predefined table having a plurality of entries, wherein each entry corresponds to a signal-to-mask ratio (SMR) for a respective frequency band of the plurality of frequency bands.
In some embodiments, the encoding includes, for each respective frequency band of a respective frame, calculating (1010) a single respective scale factor (e.g., a frame-wide scale factor444,FIG. 4B, or high-precision frame-wide scale factor470,FIGS. 4C and 6) to scale mantissas of each floating-point frequency sample. The floating-point frequency samples in the respective frequency band of the respective frame, as scaled by the single respective scale factor, thus share a single exponent corresponding to the single respective scale factor.
In some embodiments, successive encoded frames of the respective encoded stream each comprise three blocks. Each block stores twelve floating-point frequency samples per frequency band. For each of the successive encoded frames, the single respective scale factor in each respective frequency band scales each of the twelve floating-point frequency samples in each of the three blocks. In some embodiments, the encoding operation1004includes selecting a transmission pattern to indicate, for each respective frequency band of each of the successive encoded frames, that the single scale factor scales the mantissas in the three blocks.
An instruction is received (1012) to mix the plurality of independent encoded streams. For example, the instruction could specify the mixing of one or more sound effects with background music in a video game or the mixing of multiple sound effects in a video game.
In response to the instruction to mix the plurality of independent encoded streams, respective floating-point frequency samples of the independent encoded streams are combined (1014).
In some embodiments, combining respective floating-point frequency samples includes mixing scale factors by calculating (1016) an adjusted scale factor (e.g., in accordance with operation804of the process800,FIG. 8). The adjusted scale factor is used to scale the floating-point frequency samples of a respective frequency band and respective frame of first and second independent encoded bitstreams.
An output bitstream is generated (1018) that includes the combined respective floating-point frequency samples. In some embodiments, the output bitstream is generated in accordance with the process800(FIG. 8). The output bitstream is transmitted (1020) to a client device (e.g., STB300,FIG. 3) for decoding and playback.
In some embodiments, respective frames of an independent audio source stream of the plurality of independent audio source streams are also encoded in accordance with the MPEG-1 Layer II standard (e.g., as described for the system600,FIG. 6). An instruction is received to play audio associated only with the independent audio source stream. In response, an output bitstream is generated that includes the respective frames of the independent audio source stream as encoded in accordance with the MPEG-1 Layer II standard (e.g., frames708,FIG. 7).
In some embodiments, first and second independent audio source streams of the plurality of independent audio source streams and corresponding first and second independent encoded streams of the plurality of independent encoded streams each include a left channel and a right channel. The combining operation1014includes mixing the left channels of the first and second independent encoded streams to generate a left channel of the output bitstream and mixing the right channels of first and second independent encoded streams to generate a right channel of the output bitstream.
In some embodiments, a first independent audio source stream and corresponding first independent encoded stream of the plurality of independent encoded streams each include a left channel and a right channel. A second independent audio source stream of the plurality of independent audio source streams and corresponding second independent encoded stream of the plurality of independent encoded streams each include a mono channel. The combining operation1014includes mixing the right channel of the first independent encoded stream with the mono channel of the second independent encoded stream to generate a right channel of the output bitstream and mixing the left channel of the first independent encoded stream with the mono channel of the second independent encoded stream to generate a left channel of the output bitstream. Alternatively, the combining operation includes mixing one channel (either left or right) of the first independent encoded stream with the mono channel of the second independent encoded stream to generate one channel of the output bitstream and copying the other channel (either right or left) of the first independent encoded stream to the other channel of the output bitstream.
In some embodiments, first and second independent encoded streams each comprise first and second stereo channels for frequency bands below a predefined limit and a mono channel for frequency bands above the predefined limit (e.g., the streams are in joint stereo mode). The combining operation 1014 includes separately mixing the first stereo channels, second stereo channels, and mono channels of the first and second independent encoded streams to generate the output bitstream.
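The channel-mixing cases described above can be sketched as follows. The dict-based stream layout, the channel key names, and the function name are illustrative assumptions for this sketch, not data structures prescribed by the patent:

```python
def mix_channels(first, second):
    """Per-channel mixing of two encoded streams (hypothetical layout).

    `first` and `second` map channel names to lists of floating-point
    frequency samples. A stereo stream has "left" and "right" keys; a
    mono stream has a single "mono" key that feeds both output channels.
    """
    out = {}
    if "mono" in second:
        # Stereo + mono: the mono channel is mixed into both sides.
        out["left"] = [a + b for a, b in zip(first["left"], second["mono"])]
        out["right"] = [a + b for a, b in zip(first["right"], second["mono"])]
    else:
        # Stereo + stereo: left mixes with left, right with right.
        out["left"] = [a + b for a, b in zip(first["left"], second["left"])]
        out["right"] = [a + b for a, b in zip(first["right"], second["right"])]
    return out
```

In a full implementation the per-sample addition would also involve the scale factor adjustment described below; the sketch shows only the channel routing.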
In some embodiments, a first independent audio source stream of the plurality of independent audio source streams comprises a continuous source of non-silent audio data (e.g., background music for a video game) and a second independent audio source stream of the plurality of independent audio source streams comprises an episodic source of non-silent audio data (e.g., a non-continuous sound effect for a video game). In some embodiments, a first independent audio source stream of the plurality of independent audio source streams comprises a first episodic source of non-silent audio data (e.g., a first non-continuous sound effect for a video game) and a second independent audio source stream of the plurality of independent audio source streams comprises a second episodic source of non-silent audio data (e.g., a second non-continuous sound effect for a video game).
FIG. 10B is a flow diagram illustrating a process 1030 for use as part of the encoding operation 1004 (FIG. 10A). In the method 1030, a first scale factor is calculated (1032) to scale floating-point frequency samples in a respective frequency band of a respective frame of a first independent encoded stream. A second scale factor is calculated (1032) to scale floating-point frequency samples in a respective frequency band of a respective frame of a second independent encoded stream. In some embodiments, the scale factor calculations are performed by the frame-wide scale factor calculation module 442 (FIG. 4B) or 462 (FIGS. 4C and 6).
For the first independent encoded bitstream, the floating-point frequency samples of the respective frequency band of the respective frame are scaled (1034) by the first scale factor. For the second independent encoded bitstream, the floating-point frequency samples of the respective frequency band of the respective frame are scaled (1034) by the second scale factor. In some embodiments, the scaling is performed by the scaling and quantization module 412 (FIG. 4B) or the high-precision scaling module 464 (FIGS. 4C and 6).
For the first independent encoded bitstream, the floating-point frequency samples of the respective frequency band of the respective frame are stored (1036) as scaled by the first scale factor. For the second independent encoded bitstream, the floating-point frequency samples of the respective frequency band of the respective frame are stored (1036) as scaled by the second scale factor. The first and second scale factors thus function as common exponents for storing respective floating-point frequency samples of respective frequency bands and frames in respective encoded bitstreams.
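The common-exponent storage described above amounts to a block floating-point representation per band. The sketch below illustrates the idea; the geometric scale factor table is an illustrative assumption loosely modeled on the MPEG-1 Layer II table, not the patent's Equation (1):

```python
# Illustrative descending scale factor table: larger index, smaller value.
SCALE_TABLE = [2.0 ** (1 - i / 3.0) for i in range(63)]

def encode_band(samples):
    """Scale a band's samples by a single common scale factor.

    Picks the smallest table value that still covers the band's peak
    magnitude, then divides every sample by it, so all scaled
    magnitudes are <= 1. The chosen index acts as a shared exponent
    for the whole band.
    """
    peak = max(abs(s) for s in samples)
    idx = 0
    for i, value in enumerate(SCALE_TABLE):
        if value >= peak:
            idx = i          # keep walking toward smaller table values
        else:
            break
    scaled = [s / SCALE_TABLE[idx] for s in samples]
    return idx, scaled
```

Storing one scale factor index per band, rather than a full exponent per sample, is what keeps the per-sample mantissas compact.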
FIG. 10C is a flow diagram illustrating a process 1040 for use as part of the combining operation 1014 (FIG. 10A). In the method 1040, an adjusted scale factor is calculated (1042) to scale the floating-point frequency samples of the respective frequency band and respective frame of the first independent encoded bitstream and the floating-point frequency samples of the respective frequency band and respective frame of the second independent encoded bitstream.
In some embodiments, the adjusted scale factor is calculated (1044) as a first function of a difference between the first and second scale factors (e.g., in accordance with the process 500, FIG. 5). In some embodiments, the first function includes addition of an offset to the first or second scale factor, the offset being a monotonic second function of the magnitude of the difference between the first and second scale factors. In some embodiments, the first, second, and adjusted scale factors are encoded as indices referencing scale factor values stored in a table (e.g., in accordance with Equation (1)) and the difference between the first and second scale factors is calculated by subtracting the smaller of the indices corresponding to the first and second scale factors from the larger of the indices corresponding to the first and second scale factors (e.g., in accordance with operation 504, FIG. 5). In some embodiments, the first function comprises subtraction of an offset from the lower of the indices encoding the first or second scale factor, the offset being a monotonic second function of the magnitude of the difference between the indices encoding the first and second scale factors.
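One way to sketch the index-based adjusted scale factor calculation is shown below. The specific offset values are a hypothetical schedule chosen only to satisfy the stated monotonicity; the patent does not disclose these particular numbers:

```python
def adjusted_index(idx1, idx2):
    """Pick an adjusted scale factor index for mixing two bands.

    A lower index corresponds to a larger scale factor value, so
    subtracting an offset from the lower of the two indices buys
    headroom for the sum. The offset is a non-increasing (monotonic)
    function of the index difference: when the two bands are at
    comparable levels their sum can nearly double, while a dominant
    band needs little or no extra headroom.
    """
    diff = abs(idx1 - idx2)
    if diff == 0:
        offset = 3   # roughly one factor of 2 in a 2**(-i/3)-style table
    elif diff <= 6:
        offset = 2
    else:
        offset = 0   # one stream dominates; no extra headroom needed
    return min(idx1, idx2) - offset
```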
The floating-point frequency samples of the respective frequency band and respective frame of the first independent encoded bitstream are scaled (1046) by a first ratio of the first scale factor to the adjusted scale factor. The floating-point frequency samples of the respective frequency band and respective frame of the second independent encoded bitstream are scaled (1046) by a second ratio of the second scale factor to the adjusted scale factor. In some embodiments, the scaling is performed by the scaling and quantization module 412 (FIG. 4B) or the high-precision scaling module 464 (FIGS. 4C and 6).
Respective floating-point frequency samples of the first independent encoded bitstream, as scaled by the first ratio, are added (1048) to respective floating-point frequency samples of the second independent encoded bitstream, as scaled by the second ratio (e.g., in accordance with operations 804 and 806 of the process 800, FIG. 8). In some embodiments, respective mantissas of combined floating-point frequency samples, generated by adding respective floating-point frequency samples of the first and second encoded bitstreams, are stored (1050) in respective single bytes. In other embodiments (e.g., if mantissas of floating-point frequency samples are stored using 16 bits), respective mantissas of combined floating-point frequency samples are stored using more than one byte (e.g., using 16 bits).
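The ratio-based rescale-and-add described above can be sketched as follows, working directly with scale factor values rather than table indices; the function and parameter names are illustrative:

```python
def mix_band(samples1, sf1, samples2, sf2, sf_adj):
    """Mix two bands whose samples are stored under different scale factors.

    Each stream's samples are multiplied by the ratio of its own scale
    factor to the adjusted scale factor, bringing both bands into the
    same (adjusted) scale before the sample-by-sample addition.
    """
    ratio1 = sf1 / sf_adj
    ratio2 = sf2 / sf_adj
    return [a * ratio1 + b * ratio2 for a, b in zip(samples1, samples2)]
```

Because both inputs are already frequency-domain samples, no decode to PCM and re-encode is needed; the mix stays in the compressed domain.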
In some embodiments, a determination is made that a combined floating-point frequency sample, generated by adding respective floating-point frequency samples of the first and second encoded bitstreams, exceeds a predefined limit (or, for negative numbers, is less than a predefined limit). In response to the determination, the combined floating-point frequency sample is assigned to equal the predefined limit, to prevent clipping.
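The limiting step just described is a simple saturation. A minimal sketch, assuming a full-scale limit of 1.0 (the patent leaves the limit as a predefined value):

```python
def saturate(sample, limit=1.0):
    """Clamp a combined frequency sample into [-limit, limit].

    Saturating at the predefined limit prevents the audible artifacts
    that overflow or wrap-around of an out-of-range sum would cause.
    """
    if sample > limit:
        return limit
    if sample < -limit:
        return -limit
    return sample
```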
FIG. 10D is a flow diagram illustrating a process 1060 for use as part of the encoding operation 1004 and combining operation 1014 (FIG. 10A). In the method 1060, the first, second, and adjusted scale factors are encoded (1062) as indices referencing scale factor values stored in a table (e.g., in accordance with Equation (1)). In some embodiments, each of the indices encoding the first, second, and adjusted scale factors is stored (1064) in a single respective byte.
The floating-point frequency samples of the respective frequency band and respective frame of the first independent encoded bitstream are scaled (1066) by a scale factor value having an index corresponding to a difference between indices encoding the adjusted and first scale factors. The floating-point frequency samples of the respective frequency band and respective frame of the second independent encoded bitstream are scaled (1068) by a scale factor value having an index corresponding to a difference between indices encoding the adjusted and second scale factors.
Respective floating-point frequency samples, as scaled, of the first and second independent encoded bitstreams are added (1070) (e.g., in accordance with operations 804 and 806 of the process 800, FIG. 8).
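When the scale factor table is geometric, the ratio of two scale factor values depends only on the difference of their indices, so the rescaling in operations 1066-1070 reduces to a single table lookup per band. The normalized table below is an illustrative assumption:

```python
# Geometric table normalized so TABLE[0] == 1.0 (illustrative values).
TABLE = [2.0 ** (-i / 3.0) for i in range(64)]

def scale_by_index_diff(samples, idx, idx_adj):
    """Rescale a band into the adjusted scale using an index difference.

    For a geometric table, TABLE[idx] / TABLE[idx_adj] equals
    TABLE[idx - idx_adj], so one lookup replaces a floating-point
    division. Assumes idx >= idx_adj, i.e. the adjusted scale factor
    never has less headroom than the original.
    """
    ratio = TABLE[idx - idx_adj]
    return [s * ratio for s in samples]
```

Avoiding the division per band is one reason the compressed-domain mix can run in real time on modest server hardware.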
The process 1000 (FIG. 10A), including the processes 1030 (FIG. 10B), 1040 (FIG. 10C), and 1060 (FIG. 10D), enables fast, computationally efficient real-time mixing of encoded (or, in other words, compressed-domain) audio data. While the process 1000 includes a number of operations that appear to occur in a specific order, it should be apparent that the process 1000 can include more or fewer operations, which can be executed serially or in parallel (e.g., using parallel processors or a multi-threading environment); an order of two or more operations may be changed, and/or two or more operations may be combined into a single operation.
In some embodiments, the operations 1002 and 1004 (including, for example, operations 1006, 1008, and/or 1010) of the process 1000 are performed prior to execution of a video game, while the operations 1012-1020 of the process 1000 are performed during execution of the video game. The operations 1002 and 1004 thus are performed off-line, while the operations 1012-1020 are performed on-line in real time. Furthermore, in some embodiments various operations of the process 1000 are performed at different systems. For example, the operations 1002 and 1004 are performed at an off-line system such as a game developer workstation. The resulting plurality of independent encoded streams is then provided to and stored in computer memory (i.e., in a computer-readable storage medium) in a video game system 200 (FIG. 2), such as one or more game servers 116 (FIG. 1) in the cable TV system 100, and the operations 1012-1020 are performed at the video game system 200 during execution of a video game. Alternatively, the entire process 1000 is performed at a video game system 200 (FIG. 2), which may be implemented as part of the cable TV system 100 (FIG. 1).
The foregoing description, for purposes of explanation, has been presented with reference to specific embodiments. However, the illustrative discussions above are not intended to be exhaustive or to limit the invention to the precise forms disclosed. Many modifications and variations are possible in view of the above teachings. The embodiments were chosen and described in order to best explain the principles of the invention and its practical applications, to thereby enable others skilled in the art to best utilize the invention and various embodiments with various modifications as are suited to the particular use contemplated.
Claims
- A method of encoding audio, comprising: at an audio encoding system including one or more processors and memory, during execution of a video game by a computer system: receiving an instruction to mix a first independent encoded audio stream with a second independent encoded audio stream, the first and second independent encoded audio streams each comprising a sequence of frames, wherein respective frames of each sequence comprise floating-point frequency samples divided into a plurality of frequency bands, the floating-point frequency samples of a respective frequency band of a respective frame of the first independent encoded audio stream being scaled by a first scale factor, the floating-point frequency samples of a respective frequency band of a respective frame of the second independent encoded audio stream being scaled by a second scale factor; in response to the instruction to mix the first independent encoded audio stream with the second independent encoded audio stream, combining respective floating-point frequency samples of the first and second independent encoded audio streams, the combining comprising: calculating an adjusted scale factor as a first function of a difference between the first and second scale factors; scaling the floating-point frequency samples of the respective frequency band of the respective frame of the first independent encoded audio stream by a first ratio of the first scale factor to the adjusted scale factor; scaling the floating-point frequency samples of the respective frequency band of the respective frame of the second independent encoded audio stream by a second ratio of the second scale factor to the adjusted scale factor; and adding respective floating-point frequency samples of the first independent encoded audio stream, as scaled by the first ratio, to respective floating-point frequency samples of the second independent encoded audio stream, as scaled by the second ratio; and generating an output bitstream comprising the combined respective floating-point frequency samples.
- The method of claim 1, further comprising transmitting the output bitstream to a client device for decoding and playback.
- The method of claim 1, wherein the combining further comprises: determining that a combined floating-point frequency sample, generated by adding respective floating-point frequency samples of the first and second encoded bitstreams, exceeds a predefined limit; and in response to the determination, assigning the combined floating-point frequency sample to equal the predefined limit.
- The method of claim 1, wherein respective mantissas of combined floating-point frequency samples, generated by adding respective floating-point frequency samples of the first and second encoded bitstreams, are stored in respective single bytes.
- The method of claim 1, wherein the first, second, and adjusted scale factors are encoded as indices referencing scale factor values stored in a table, the indices each being represented with more than six bits.
- The method of claim 1, wherein the first function comprises addition of an offset to the first or second scale factor, the offset being a monotonic second function of the magnitude of the difference between the first and second scale factors.
- The method of claim 1, wherein: the first, second, and adjusted scale factors are encoded as indices referencing scale factor values stored in a table; and the difference between the first and second scale factors is calculated by subtracting the lower of the indices corresponding to the first and second scale factors from the larger of the indices corresponding to the first and second scale factors.
- The method of claim 7, wherein the first function comprises subtraction of an offset from the lower of the indices encoding the first or second scale factor, the offset being a monotonic second function of the magnitude of the difference between the indices encoding the first and second scale factors.
- The method of claim 7, wherein each of the indices encoding the first, second, and adjusted scale factors is stored in a single byte.
- The method of claim 1, wherein the first, second, and adjusted scale factors are encoded as indices referencing scale factor values stored in a table, the combining further comprising: scaling the floating-point frequency samples of the respective frequency band and respective frame of the first independent encoded bitstream by a scale factor value having an index corresponding to a difference between indices encoding the adjusted and first scale factors; scaling the floating-point frequency samples of the respective frequency band and respective frame of the second independent encoded bitstream by a scale factor value having an index corresponding to a difference between indices encoding the adjusted and second scale factors; and adding respective floating-point frequency samples, as scaled, of the first and second independent encoded bitstreams.
- The method of claim 10, wherein the first, second, and adjusted scale factors are encoded as indices referencing scale factor values stored in a table, the indices each being represented with more than six bits, the combining further comprising: dividing the index encoding the adjusted scale factor to produce a divided scale factor index being represented by six bits; and writing the divided scale factor index to the encoded bitstream.
- The method of claim 1, wherein the combining comprises calculating respective sums of respective floating-point frequency samples and dividing the respective sums by a constant value.
- The method of claim 12, wherein the constant value equals 2 or √2.
- The method of claim 1, wherein: the first and second independent encoded streams of the plurality of independent encoded streams each comprises a left channel and a right channel; and the combining comprises: mixing the left channels of the first and second independent encoded streams to generate a left channel of the output bitstream; and mixing the right channels of the first and second independent encoded streams to generate a right channel of the output bitstream.
- The method of claim 1, wherein: the first independent encoded stream comprises a left channel and a right channel; the second independent encoded stream comprises a mono channel; and the combining comprises: mixing the left channel of the first independent encoded stream with the mono channel of the second independent encoded stream to generate a left channel of the output bitstream; and mixing the right channel of the first independent encoded stream with the mono channel of the second independent encoded stream to generate a right channel of the output bitstream.
- The method of claim 1, wherein: the first and second independent encoded streams each comprises first and second stereo channels for frequency bands below a predefined limit and a mono channel for frequency bands above the predefined limit; and the combining comprises separately mixing the first stereo channels, second stereo channels, and mono channels of the first and second independent encoded streams.
- The method of claim 1, wherein: the first independent encoded audio stream is generated from a first independent audio source stream that comprises a continuous source of non-silent audio data; and the second independent encoded audio stream is generated from a second independent audio source stream that comprises an episodic source of non-silent audio data.
- The method of claim 1, wherein: the first independent encoded audio stream is generated from a first independent audio source stream that comprises a first episodic source of non-silent audio data; and the second independent encoded audio stream is generated from a second independent audio source stream that comprises a second episodic source of non-silent audio data.
- A system for encoding audio, comprising: memory; one or more processors; and one or more programs stored in the memory and configured for execution by the one or more processors, the one or more programs including instructions for: receiving an instruction to mix a first independent encoded audio stream with a second independent encoded audio stream, the first and second independent encoded audio streams each comprising a sequence of frames, wherein respective frames of each sequence comprise floating-point frequency samples divided into a plurality of frequency bands, the floating-point frequency samples of a respective frequency band of a respective frame of the first independent encoded audio stream being scaled by a first scale factor, the floating-point frequency samples of a respective frequency band of a respective frame of the second independent encoded audio stream being scaled by a second scale factor; in response to the instruction to mix the first independent encoded audio stream with the second independent encoded audio stream, combining the respective floating-point frequency samples of the first and second independent encoded audio streams, the combining comprising: calculating an adjusted scale factor as a first function of a difference between the first and second scale factors; scaling the floating-point frequency samples of the respective frequency band of the respective frame of the first independent encoded audio stream by a first ratio of the first scale factor to the adjusted scale factor; scaling the floating-point frequency samples of the respective frequency band of the respective frame of the second independent encoded audio stream by a second ratio of the second scale factor to the adjusted scale factor; and adding respective floating-point frequency samples of the first independent encoded audio stream, as scaled by the first ratio, to respective floating-point frequency samples of the second independent encoded audio stream, as scaled by the second ratio; and generating an output bitstream comprising the combined respective floating-point frequency samples.
- The system of claim 19, wherein the instructions for combining further comprise instructions for: determining that a combined floating-point frequency sample, generated by adding respective floating-point frequency samples of the first and second encoded bitstreams, exceeds a predefined limit; and in response to the determination, assigning the combined floating-point frequency sample to equal the predefined limit.
- The system of claim 19, wherein respective mantissas of combined floating-point frequency samples, generated by adding respective floating-point frequency samples of the first and second encoded bitstreams, are stored in respective single bytes.
- The system of claim 19, wherein the first, second, and adjusted scale factors are encoded as indices referencing scale factor values stored in a table, the indices each being represented with more than six bits.
- The system of claim 19, wherein the first function comprises addition of an offset to the first or second scale factor, the offset being a monotonic second function of the magnitude of the difference between the first and second scale factors.
- The system of claim 19, wherein: the first, second, and adjusted scale factors are encoded as indices referencing scale factor values stored in a table; and the difference between the first and second scale factors is calculated by subtracting the lower of the indices corresponding to the first and second scale factors from the larger of the indices corresponding to the first and second scale factors.
- The system of claim 24, wherein the first function comprises subtraction of an offset from the lower of the indices encoding the first or second scale factor, the offset being a monotonic second function of the magnitude of the difference between the indices encoding the first and second scale factors.
- The system of claim 24, wherein each of the indices encoding the first, second, and adjusted scale factors is stored in a single byte.
- The system of claim 19, wherein the one or more programs further comprise instructions for transmitting the output bitstream to a client device for decoding and playback.
- The system of claim 19, wherein the first, second, and adjusted scale factors are encoded as indices referencing scale factor values stored in a table, and the instructions for combining further comprise instructions for: scaling the floating-point frequency samples of the respective frequency band and respective frame of the first independent encoded bitstream by a scale factor value having an index corresponding to a difference between indices encoding the adjusted and first scale factors; scaling the floating-point frequency samples of the respective frequency band and respective frame of the second independent encoded bitstream by a scale factor value having an index corresponding to a difference between indices encoding the adjusted and second scale factors; and adding respective floating-point frequency samples, as scaled, of the first and second independent encoded bitstreams.
- The system of claim 28, wherein the first, second, and adjusted scale factors are encoded as indices referencing scale factor values stored in a table, the indices each being represented with more than six bits, and the instructions for combining further comprise instructions for: dividing the index encoding the adjusted scale factor to produce a divided scale factor index being represented by six bits; and writing the divided scale factor index to the encoded bitstream.
- The system of claim 19, wherein the instructions for combining further comprise instructions for calculating respective sums of respective floating-point frequency samples and dividing the respective sums by a constant value.
- The system of claim 30, wherein the constant value equals 2 or √2.
- The system of claim 19, wherein: the first and second independent encoded streams of the plurality of independent encoded streams each comprises a left channel and a right channel; and the instructions for combining further comprise instructions for: mixing the left channels of the first and second independent encoded streams to generate a left channel of the output bitstream; and mixing the right channels of the first and second independent encoded streams to generate a right channel of the output bitstream.
- The system of claim 19, wherein: the first independent encoded stream comprises a left channel and a right channel; the second independent encoded stream comprises a mono channel; and the instructions for combining further comprise instructions for: mixing the left channel of the first independent encoded stream with the mono channel of the second independent encoded stream to generate a left channel of the output bitstream; and mixing the right channel of the first independent encoded stream with the mono channel of the second independent encoded stream to generate a right channel of the output bitstream.
- The system of claim 19, wherein: the first and second independent encoded streams each comprises first and second stereo channels for frequency bands below a predefined limit and a mono channel for frequency bands above the predefined limit; and the instructions for combining further comprise instructions for separately mixing the first stereo channels, second stereo channels, and mono channels of the first and second independent encoded streams.
- The system of claim 19, wherein: the first independent encoded audio stream is generated from a first independent audio source stream that comprises a continuous source of non-silent audio data; and the second independent encoded audio stream is generated from a second independent audio source stream that comprises an episodic source of non-silent audio data.
- The system of claim 19, wherein: the first independent encoded audio stream is generated from a first independent audio source stream that comprises a first episodic source of non-silent audio data; and the second independent encoded audio stream is generated from a second independent audio source stream that comprises a second episodic source of non-silent audio data.
- A non-transitory computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by a computer system, cause the computer system to: receive an instruction to mix a first independent encoded audio stream with a second independent encoded audio stream, the first and second independent encoded audio streams each comprising a sequence of frames, wherein respective frames of each sequence comprise floating-point frequency samples divided into a plurality of frequency bands, the floating-point frequency samples of a respective frequency band of a respective frame of the first independent encoded audio stream being scaled by a first scale factor, the floating-point frequency samples of a respective frequency band of a respective frame of the second independent encoded audio stream being scaled by a second scale factor; in response to the instruction to mix the first independent encoded audio stream with the second independent encoded audio stream, combine the respective floating-point frequency samples of the first and second independent encoded audio streams, the combining comprising: calculating an adjusted scale factor as a first function of a difference between the first and second scale factors; scaling the floating-point frequency samples of the respective frequency band of the respective frame of the first independent encoded audio stream by a first ratio of the first scale factor to the adjusted scale factor; scaling the floating-point frequency samples of the respective frequency band of the respective frame of the second independent encoded audio stream by a second ratio of the second scale factor to the adjusted scale factor; and adding respective floating-point frequency samples of the first independent encoded audio stream, as scaled by the first ratio, to respective floating-point frequency samples of the second independent encoded audio stream, as scaled by the second ratio; and generate an output bitstream comprising the combined respective floating-point frequency samples.
- The non-transitory computer readable storage medium of claim 37 , wherein the one or more programs further comprise instructions which, when executed by the computer system, cause the computer system to: determine that a combined floating-point frequency sample, generated by adding respective floating-point frequency samples of the first and second encoded bitstreams, exceeds a predefined limit;and in response to the determination, assign the combined floating-point frequency sample to equal the predefined limit.
- The non-transitory computer readable storage medium of claim 37 , wherein respective mantissas of combined floating-point frequency samples, generated by adding respective floating-point frequency samples of the first and second encoded bitstreams, are stored in respective single bytes.
- The non-transitory computer readable storage medium of claim 37 , wherein the first, second, and adjusted scale factors are encoded as indices referencing scale factor values stored in a table, the indices each being represented with more than six bits.
- The non-transitory computer readable storage medium of claim 37 , wherein the first function comprises addition of an offset to the first or second scale factor, the offset being a monotonic second function of the magnitude of the difference between the first and second scale factors.
- The non-transitory computer readable storage medium of claim 37 , wherein: the first, second, and adjusted scale factors are encoded as indices referencing scale factor values stored in a table;and the difference between the first and second scale factors is calculated by subtracting the lower of the indices corresponding to the first and second scale factors from the larger of the indices corresponding to the first and second scale factors.
- The non-transitory computer readable storage medium of claim 42 , wherein the first function comprises subtraction of an offset from the lower of the indices encoding the first or second scale factor, the offset being a monotonic second function of the magnitude of the difference between the indices encoding the first and second scale factors.
- The non-transitory computer readable storage medium of claim 42 , wherein each of the indices encoding the first, second, and adjusted scale factors is stored in a single byte.
- The non-transitory computer readable storage medium of claim 37 , wherein the one or more programs further comprise instructions which, when executed by the computer system, cause the computer system to transmit the output bitstream to a client device for decoding and playback.
- The non-transitory computer readable storage medium of claim 37 , wherein the first, second, and adjusted scale factors are encoded as indices referencing scale factor values stored in a table, and the instructions to combine further comprise instructions which, when executed by the computer system, cause the computer system to: scale the floating-point frequency samples of the respective frequency band and respective frame of the first independent encoded bitstream by a scale factor value having an index corresponding to a difference between indices encoding the adjusted and first scale factors;scale the floating-point frequency samples of the respective frequency band and respective frame of the second independent encoded bitstream by a scale factor value having an index corresponding to a difference between indices encoding the adjusted and second scale factors;and add respective floating-point frequency samples, as scaled, of the first and second independent encoded bitstreams.
- The non-transitory computer readable storage medium of claim 46, wherein the first, second, and adjusted scale factors are encoded as indices referencing scale factor values stored in a table, the indices each being represented with more than six bits, and the instructions to combine further comprise instructions which, when executed by the computer system, cause the computer system to: divide the index encoding the adjusted scale factor to produce a divided scale factor index being represented by six bits; and write the divided scale factor index to the encoded bitstream.
- The non-transitory computer readable storage medium of claim 37, wherein the instructions to combine further comprise instructions which, when executed by the computer system, cause the computer system to calculate respective sums of respective floating-point frequency samples and divide the respective sums by a constant value.
- The non-transitory computer readable storage medium of claim 48, wherein the constant value equals 2 or √2.
- The non-transitory computer readable storage medium of claim 37, wherein: the first and second independent encoded streams of the plurality of independent encoded streams each comprises a left channel and a right channel; and the instructions to combine further comprise instructions which, when executed by the computer system, cause the computer system to: mix the left channels of the first and second independent encoded streams to generate a left channel of the output bitstream; and mix the right channels of the first and second independent encoded streams to generate a right channel of the output bitstream.
- The non-transitory computer readable storage medium of claim 37, wherein: the first independent encoded stream comprises a left channel and a right channel; the second independent encoded stream comprises a mono channel; and the instructions to combine further comprise instructions which, when executed by the computer system, cause the computer system to: mix the left channel of the first independent encoded stream with the mono channel of the second independent encoded stream to generate a left channel of the output bitstream; and mix the right channel of the first independent encoded stream with the mono channel of the second independent encoded stream to generate a right channel of the output bitstream.
- The non-transitory computer readable storage medium of claim 37, wherein: the first and second independent encoded streams each comprises first and second stereo channels for frequency bands below a predefined limit and a mono channel for frequency bands above the predefined limit; and the instructions to combine further comprise instructions which, when executed by the computer system, cause the computer system to separately mix the first stereo channels, second stereo channels, and mono channels of the first and second independent encoded streams.
- The non-transitory computer readable storage medium of claim 37, wherein: the first independent encoded audio stream is generated from a first independent audio source stream that comprises a continuous source of non-silent audio data; and the second independent encoded audio stream is generated from a second independent audio source stream that comprises an episodic source of non-silent audio data.
- The non-transitory computer readable storage medium of claim 37, wherein: the first independent encoded audio stream is generated from a first independent audio source stream that comprises a first episodic source of non-silent audio data; and the second independent encoded audio stream is generated from a second independent audio source stream that comprises a second episodic source of non-silent audio data.
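The scale-factor arithmetic recited in the claims above (indices into a gain table, an adjusted index derived from the lower index minus an offset that is a monotonic function of the index difference, per-band rescaling before addition, and narrowing an oversized index to six bits) can be sketched as follows. This is an illustrative reading only: the table spacing of 2^(i/4), the specific offset function, and the divisor used to narrow the index are assumptions for the sketch, not values given by the patent.

```python
# Hypothetical gain table: indices reference exponentially spaced scale factor
# values; the 2**(i/4) spacing is an assumption, as in MPEG-style codecs.
SF_TABLE = [2.0 ** (i / 4.0) for i in range(64)]

def adjusted_scale_factor(idx_a, idx_b):
    """Adjusted index = lower index minus an offset that is a monotonic
    function of the index difference (extra headroom when levels are close).
    The difference is the larger index minus the lower index, per the claim."""
    lo, hi = min(idx_a, idx_b), max(idx_a, idx_b)
    diff = hi - lo
    offset = max(0, 4 - diff)  # hypothetical monotonic offset function
    return max(0, lo - offset)

def mix_bands(samples_a, idx_a, samples_b, idx_b):
    """Rescale each band to the adjusted scale factor, then add sample-wise.

    Each stream's samples are scaled by the table value whose index is the
    difference between that stream's index and the adjusted index."""
    idx_mix = adjusted_scale_factor(idx_a, idx_b)
    gain_a = SF_TABLE[idx_a - idx_mix]
    gain_b = SF_TABLE[idx_b - idx_mix]
    mixed = [a * gain_a + b * gain_b for a, b in zip(samples_a, samples_b)]
    return idx_mix, mixed

def narrow_index(idx, divisor=2):
    """Divide a wider-than-six-bit index so it fits in six bits (0-63) before
    writing it to the bitstream; the divisor of 2 is an assumption."""
    return idx // divisor
```

Because both bands are re-expressed at the same (lower) adjusted index before the addition, a mixed sample reconstructed as `mixed[k] * SF_TABLE[idx_mix]` equals the sum of the two bands' independently reconstructed samples, which is what lets the mix happen without decoding back to PCM.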
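The channel-mixing claims (stereo mixed with stereo, stereo mixed with mono, and sum-then-divide normalization by a constant of 2 or √2) can be sketched as below. The dict-based stream layout is an assumed representation chosen for illustration; the claims do not prescribe a data structure.

```python
import math

def mix_channel(a, b, norm=math.sqrt(2)):
    """Sum corresponding samples, then divide by a constant value
    (the claims name 2 or the square root of 2 as the constant)."""
    return [(x + y) / norm for x, y in zip(a, b)]

def mix_streams(s1, s2):
    """Mix two streams channel-by-channel: left with left, right with right.

    Each stream is a dict, e.g. {'left': [...], 'right': [...]} for stereo or
    {'mono': [...]} for mono. A mono stream contributes its single channel to
    both output channels, as in the stereo-plus-mono claim."""
    out = {}
    for side in ('left', 'right'):
        c1 = s1.get(side, s1.get('mono'))
        c2 = s2.get(side, s2.get('mono'))
        out[side] = mix_channel(c1, c2)
    return out
```

The √2 divisor keeps the expected power of the mix roughly equal to that of a single source when the two sources are uncorrelated, whereas dividing by 2 bounds the peak amplitude at the cost of a quieter mix.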