U.S. Pat. No. 8,924,560
OPTIMIZED GAME SERVER RELOCATION ENVIRONMENT
Assignee: AT&T Intellectual Property I, L.P.; The University of Illinois
Issue Date: November 29, 2010
Illustrative Figure
Abstract
A system is provided for migrating a VM over a WAN. A first server has a VM. The first and second servers are operatively connected over the WAN by a virtual private local area network service. The first server migrates the VM to the second server by copying files and state of the VM to the second server without interrupting the interactive software on the VM. During a last round of migrating the VM, for packets intended for the VM on the first server, the first server buffers the packets in a buffer as buffered packets. Instead of delivering the buffered packets to the VM, the first server transmits the buffered packets to the second server. The second server plays the buffered packets to the VM migrated to and operating on the second server, such that the buffered packets are played before current packets received from the clients are played.
Description
DETAILED DESCRIPTION OF EXEMPLARY EMBODIMENTS
There is a particular class of applications that is more difficult to support in cloud environments: highly interactive network applications, such as online games. These interactive applications face several unique challenges. Online games have stringent quality of service (QoS) demands where even small increases in latency can interfere with gameplay, and hence come with strong requirements on where servers can be placed geographically. At the same time, the unpredictability of network QoS, coupled with the difficulty in predicting where, i.e., at which geographic location of a game server, a game will become popular, makes it hard to statically place these services. Even further, unlike web users, game players often have strong affinity to a particular server, e.g., a game server containing their friends, which is a particular server that friends log onto at a particular geographic location. This can make it difficult to use traditional load balancing and dynamic service provisioning mechanisms to service new clients. While application-specific schemes have been proposed for network games to deal with some of these issues, e.g., by partitioning state or relaxing consistency models, these schemes require substantial changes in the way game developers architect/create their games.
These challenges are ill-timed, as the online gaming market is a rapidly growing multi-billion dollar industry with hundreds of millions of players. Similar challenges prevent other interactive applications, such as collaborative document editing and digital whiteboards, from reaping the full benefits of cloud computing.
In developing exemplary embodiments, we explored and developed how dynamic migration of cloud computing services can simplify issues related to highly interactive network applications, such as online games. We propose an Optimized Game Server Relocation Environment (OGRE), which dynamically optimizes geographic server placement and transparently exploits application-specific properties of, e.g., online games, to minimize observable effects of live server migration. Exemplary embodiments take the relatively unexplored approach of integrating cloud services with the network. Along these lines, exemplary embodiments break out into three areas for ease of understanding but not limitation.
First, to reduce outages during migration of gaming servers, we develop a network-based “accelerator” for virtual machine migration according to exemplary embodiments. Exemplary embodiments provide an example implementation to realize this substrate with traditional protocols such as virtual private LAN service (VPLS) in Internet service provider (ISP) networks. VPLS is a way to provide Ethernet-based multipoint-to-multipoint communication over Internet protocol (IP)/Multiprotocol Label Switching (MPLS) networks. VPLS allows geographically dispersed sites to share an Ethernet broadcast domain by connecting sites through pseudo-wires. VPLS is a virtual private network (VPN) technology. In a VPLS, the local area network (LAN) at each site is extended to the edge of the provider network. The provider network then emulates a switch or bridge to connect all of the customer LANs to create a single bridged LAN. As shown in FIGS. 2-5 below, the LAN of hosting points 105A and 105B is extended to provider edge routers 1 and 2 of the provider network, and the hosting points 105A and 105B are connected by a VPLS 225.
Second, to strategically place servers in the face of changing network characteristics and service workloads, exemplary embodiments provide a dynamic placement scheme to automatically trigger and re-locate gaming servers in the wide area.
Third, to ensure high quality paths and to take into account discovered network characteristics, e.g., of online gamers, exemplary embodiments provide a framework for integration of route control and server placement. By performing judicious modifications of route advertisements, OGRE can migrate a VM in the wide area in a manner transparent to game servers and players.
In developing exemplary embodiments, we conducted a measurement study of online games to characterize migration according to exemplary embodiments. In alignment with the online game measurement study, we discovered that (1) games should have dynamic, not static, allocation of resources such as game server resources, (2) games should have migration, not replication, of resources, and (3) games should have seamless, not paused, migration.
Now turning to FIG. 1, FIG. 1 illustrates a block diagram of a system architecture 100 in accordance with exemplary embodiments.
The architecture 100 includes various components. The service operator deploys a set of hosting points 105 in the wide area network (WAN) 120. These hosting points 105, such as 105A, 105B, 105C, and 105D, act as facilities such as datacenters where processes may be run. Each hosting point 105 may include multiple servers, storage systems, and connections for operating as a datacenter. Hosting points 105 may also include redundant or backup power supplies, redundant data communications connections, environmental controls including, e.g., air conditioning and fire suppression, and security devices. For example, an Internet service provider (ISP) network service operator may integrate cloud computing functionality into its backbone by deploying hosting points 105 in its points of presence. Internally, the hosting point 105 may be realized as one or more servers running virtualization software to support migration of the interactive processes. In one implementation of exemplary embodiments, each process such as an online game runs within its own virtual machine (VM), with migration performed by the hypervisor of the hosting point 105, which is discussed further herein.
Next, to support the migration, the network exposes router interfaces 110, 115 to monitor and manipulate network state. Router interfaces 110, 115 may also be known as tees. Router interfaces 110, 115 may be used to manipulate routes, redirect traffic, and observe traffic patterns and routing tables within the network 120, to support migration of processes, e.g., to redirect traffic to the process's new location such as from hosting point 105A in New York to hosting point 105B in San Francisco. Exemplary embodiments may utilize existing router interfaces 110, 115 already in service. For example, router interfaces 110, 115 may be implemented through routing protocol sessions and/or vty scripts to manipulate state of the server having the VM, and through Netflow analyzer, Syslog, and/or simple network management protocol (SNMP) for monitoring. The process running within the VM can be interactive online gaming software.
To coordinate migration of the VM in one hosting point 105 to another hosting point 105, e.g., from current hosting point 105A to future hosting point 105B, the service operator runs a service controller 125. The service controller 125 is a software application in memory running on a processor of a computer system such as a server. The migration coordination may also be performed by the hypervisor of the current hosting point 105 and/or by the hypervisor of the future hosting point 105, and the migration coordination may be shared between the hypervisors of the current hosting point 105 and the future hosting point 105. A hypervisor is also called a virtual machine monitor (VMM), and the hypervisor allows multiple operating systems to run concurrently on a host computer or server in the hosting point 105, which is called hardware virtualization. The hypervisor presents the guest operating systems with a virtual operating platform and monitors the execution of the guest operating systems on the virtualized hardware resources of, e.g., a server in the hosting point 105. The guest operating system may be the interactive online game software.
The service controller 125 and/or hypervisor monitors performance of processes running at hosting points 105, including the processes of the VM of the online game. The service controller 125 and/or hypervisor then uses this information to make decisions regarding when VM processes should be migrated, and which new hosting point 105 the VM processes should be migrated to. VM processes, the processes of the VM, and the VM may be used interchangeably. The service controller 125 and/or hypervisor is configured to migrate the VM from one hosting point 105 to another hosting point 105 when a predefined number of gamers (referred to as game clients in FIGS. 2-5) have logged onto the game server, when a predefined number of gamers are attempting to log onto the game server, and/or when a predefined number of gamers are playing on the game server of the current hosting point 105. Also, the service controller 125 is configured to migrate the VM based on a time of day such as the morning, midday, evening, and/or night. For example, since measurements of exemplary embodiments determine that online gaming has peaks in the evening and night and that online gamers prefer certain game servers, the service controller 125 may be configured to migrate the VM of the game server nearing and/or at peak times, such as in the evening, e.g., 5:00 PM, and/or at night, e.g., 8:00 PM. Also, the service controller 125 and/or hypervisor may be configured to migrate the VM based on the particular day of the week such as a weekend, week day, holiday, and/or special occasion. For example, online gamers may play all day on weekends and holidays, so the service controller 125 and/or hypervisor is configured to migrate the VM from a popular game server at hosting point 105A to a different game server at hosting point 105B because the different game server has more resources such as processor speed, random access memory, hard drive, faster busses, etc., and/or because the different game server has less communication traffic.
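As a minimal sketch of the trigger policy above, the decision can be modeled as a function of client load and time of day. The threshold, peak hours, and names below are illustrative assumptions, not values taken from this description; an actual service controller 125 would use the operator's predefined numbers.

```python
from dataclasses import dataclass
from datetime import time

# Illustrative values: the "predefined number" of gamers and the
# evening/night peak window are assumptions for this sketch.
MAX_CLIENTS = 64
PEAK_START, PEAK_END = time(17, 0), time(23, 0)

@dataclass
class HostingPointStats:
    connected_clients: int   # gamers logged onto the game server
    pending_logins: int      # gamers attempting to log on

def should_migrate(stats: HostingPointStats, now: time) -> bool:
    """Trigger migration on client-load thresholds or time-of-day peaks."""
    over_capacity = stats.connected_clients + stats.pending_logins >= MAX_CLIENTS
    near_peak = PEAK_START <= now <= PEAK_END
    return over_capacity or near_peak
```

A day-of-week condition (weekend, holiday) could be added to the return expression in the same way.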
Also, the service controller 125 and/or hypervisor may be configured to migrate the VM of the game server to a different game server when the service controller 125 and/or hypervisor determines that, for the instant game operating on the VM, the number of gamers causes the instant game to run slowly, to reach a predefined delay in user response on the game that is recognizable by the user, and/or to potentially reach the predefined delay in user response that is recognizable by the user if gamers continue logging into the game server. As mentioned above, the hypervisor of the current game server and/or the hypervisor of the game server at the future location for the VM can function and operate as the service controller 125 for determining when to migrate the VM from one hosting point 105 to another hosting point 105.
To begin the VM migration from one hosting point 105 to a different hosting point 105 at a different geographical location, the service controller 125 instructs the hypervisor (i.e., virtualization software running) at the hosting points 105 when and where the process should be migrated to. To coordinate this change with the network 120, the service controller 125 also sends instructions to router interfaces 110, 115 (i.e., network tees), to indicate any changes to routing state. Routing state is the forwarding information for routing packets over the network 120. Also, in exemplary embodiments, the hypervisor may determine that VM migration should occur.
Now, with reference to FIGS. 2, 3, 4, and 5, FIGS. 2-5 are meant to incorporate all the elements of FIG. 1, although some elements of FIG. 1 are not shown for conciseness. FIGS. 2-5 illustrate the migration of the VM 110a, which comprises, e.g., interactive online game software as the process for an online game, from one game server 230A in hosting point 105A to another game server 230B in hosting point 105B according to exemplary embodiments. For example, the current hosting point 105A is in New York while the future hosting point 105B is in San Francisco.
To migrate virtual machines such as VM 110a over long distances and with minimal service disruption, one may attempt to use conventional virtual machine migration techniques. However, conventional migration cannot be directly used in wide area networks (WAN) to migrate a VM of an interactive online game in which multiple game clients 215, which are users of the VM's interactive online game, have established IP socket connections to the VM.
Exemplary embodiments are configured to reduce packet loss while performing the migration of the interactive online game of the VM 110a while multiple game clients 215, i.e., gamers, are actively playing the interactive game of the VM 110a. Without the game clients 215 experiencing a pause and/or delay in their online gaming experience of the VM 110a, the service controller 125 and/or hypervisors 240A and 240B seamlessly migrate the VM 110a and continue the interactive online game of the VM 110a.
In this example, consider the migration of process P from the hypervisor 240A running at the server 230A to the hypervisor 240B running at server 230B, where the process is the interactive online game. With reference to FIG. 2, during migration, which occurs while the game clients 215 are playing the online game of the VM 110a, the hypervisor 240A copies the files and state of the virtual machine VM 110a in multiple rounds to the hypervisor 240B. With reference to FIG. 3, during the last round of migration, hypervisor 240A begins to buffer all packets in buffer 250 from the game clients 215, instead of delivering them to VM 110a of the server 230A for processing; thus, for this brief instance, the VM 110a is not processing incoming packets of game clients 215. After VM 110a finishes migrating to hypervisor 240B, the hypervisor 240B establishes processing of the VM 110a now residing in game server 230B in San Francisco. At the end of the last round of migration of copying files and state of the VM 110a from server 230A to server 230B, the processing of VM 110a on server 230B is simultaneously and/or near simultaneously started.
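The multi-round copy above can be sketched as an iterative pre-copy loop: after a full first copy, only pages dirtied since the previous round are re-sent, until the dirty set is small enough for a brief final round. The function names and thresholds here are illustrative assumptions, not interfaces from this description.

```python
def precopy_migrate(all_pages, get_dirty_pages, send, pause_vm,
                    final_threshold=16, max_rounds=10):
    """Iteratively copy VM memory pages; pause only for a small final round."""
    send(list(all_pages))              # round 1: copy every page
    for _ in range(max_rounds):
        dirty = get_dirty_pages()      # pages written since the last send
        if len(dirty) <= final_threshold:
            break                      # dirty set small enough to finish
        send(dirty)                    # re-copy only what changed
    pause_vm()                         # brief stop-and-copy for the last round
    send(get_dirty_pages())            # residual state; destination then starts
```

The last `send` corresponds to the final round during which hypervisor 240A buffers client packets instead of delivering them.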
To mask the appearance of any outage that would normally be caused in conventional systems when processing of the VM 110a is paused in the hypervisor 240A of the server 230A, the hypervisor 240A begins simultaneously sending the buffered packets 255 in its buffer 250 to the hypervisor 240B, which then plays the buffered packets 255 back to process P running in VM 110a of the server 230B. For example, at the point when the VM 110a at the server 230A is no longer processing, the hypervisor 240A sends the buffered packets 255 to the server 230B for processing by the VM 110a. Each of the buffered packets 255 is marked by the hypervisor 240A so that they can be identified as marked buffered packets 255 by the hypervisor 240B.
To provide a seamless migration of the VM 110a to the server 230B for the game clients 215 without interruption and without requiring the game clients 215 to connect to a different IP address, the hypervisor 240A also switches incoming network traffic from the game clients 215 to the new location of the VM 110a at the hypervisor 240B of the server 230B. As such, the unmarked packets 260 intended for the server 230A are re-routed on route 270 by router 205a and/or PE 1 as instructed, e.g., by the server 230A.
To mask and/or prevent any loss during this switchover of unmarked packets 260 from being received at the server 230A in hosting point 105A to now being received at VM 110a in the server 230B in hosting point 105B, the buffered packets 255 from the last round of migration in the hypervisor 240A's buffer 250 are marked; the hypervisor 240B plays back marked buffered packets 255 to VM 110a in server 230B before any unmarked packets 260 received directly from the network 120 are played.
To cause hypervisor 240B to switch over to processing packets 260 directly received from the network instead of processing the marked packets 255, hypervisor 240A sends a special end-of-buffer message 265 when its buffer 250 is empty, as shown in FIG. 4. As seen in FIG. 4, the hypervisor 240A of the server 230A instructs the router 205a to route/redirect the packets 260 to the router 205b to send to the server 230B for processing by VM 110a in the hosting point 105B.
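The buffering, marking, and end-of-buffer handshake described above can be sketched as a simplified in-memory model. The class and method names are our assumptions; real hypervisors would operate on raw network packets rather than Python objects.

```python
from collections import deque

END_OF_BUFFER = "END_OF_BUFFER"  # stand-in for the special message 265

class SourceHypervisor:
    """Models hypervisor 240A during the last round of migration."""
    def __init__(self, link_to_destination):
        self.buffer = deque()
        self.link = link_to_destination

    def on_client_packet(self, pkt):
        # Buffer and mark instead of delivering to the local VM 110a.
        self.buffer.append(("marked", pkt))

    def flush(self):
        # Forward all marked buffered packets, then signal end-of-buffer.
        while self.buffer:
            self.link(self.buffer.popleft())
        self.link(END_OF_BUFFER)

class DestinationHypervisor:
    """Models hypervisor 240B playing packets to the migrated VM."""
    def __init__(self, vm_play):
        self.vm_play = vm_play
        self.pending_live = deque()   # unmarked packets held back
        self.draining = True

    def receive_from_source(self, item):
        if item == END_OF_BUFFER:
            self.draining = False
            while self.pending_live:  # now switch over to direct traffic
                self.vm_play(self.pending_live.popleft())
        else:
            self.vm_play(item[1])     # marked packets are played first

    def on_network_packet(self, pkt):
        if self.draining:
            self.pending_live.append(pkt)
        else:
            self.vm_play(pkt)
```

This preserves the ordering property of the abstract: all marked buffered packets reach the VM before any unmarked packets received directly from the network.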
According to exemplary embodiments, shifting traffic with seamless route control is further discussed below. In addition to migrating the process of the VM 110a, exemplary embodiments provide a way for incoming traffic to reach the VM 110a process's new location at the hosting point 105B. One approach would be to migrate the VM process to a new server, which has a new IP address. The issue with this approach is that it would break existing client connections of the game clients. To address this, some virtualization techniques have been developed for local area networks (LAN) to fail over traffic from one server to another, in a manner that appears seamless to external clients. However, these techniques target migration within a single Ethernet-based IP subnet, as opposed to wide area networks that span large geographic regions, multiple routing protocols, and multiple IP subnets.
Exemplary embodiments are configured to address this issue related to wide area networks 120 that span large geographical regions, such as the Internet, and other issues by coordinating VM 110a migration with network route control. Here, each VM process, including VM 110a and other VMs on the servers 230A and 230B in the network, is allocated a small IP subnet. Internal border gateway protocol (iBGP) sessions are run between the physical locations where the servers 230A and 230B are hosted, such as the New York hosting point 105A and the San Francisco hosting point 105B, and routers 205a and 205b within the network. Respective server software 405A and 405B runs the iBGP session on the servers 230A and 230B. Upon migration of the VM 110a process to the new hosting point 105B, the new hosting point 105B, e.g., the hypervisor 240B, is instructed to begin advertising the VM 110a process's IP subnet at the new hosting point 105B. In particular, when the VM 110a process is migrated from hosting point 105A to hosting point 105B, three steps may be performed. First, the iBGP session via server software 405B at server 230B starts to announce a more specific prefix to reach the VM 110a process now migrated onto the server 230B. By the iBGP session advertising the more specific prefix, the new route 275 in FIG. 5 will be more preferred than the old route 270 in FIG. 4, causing all traffic 260 to shift away from the old route 270. Second, after a suitable delay to let the network converge, the iBGP session by server software 405A at server 230A withdraws the less-specific prefix which corresponds to the route 270. Further, to enable the VM 110a on the server 230B to be migrated again later on, the iBGP session at server 230B may begin advertising the less-specific prefix after a suitable delay (by the server software 405B).
When this advertising process by the iBGP is complete, the original IP prefix has been migrated over the wide area network 120 to the new location at hosting point 105B, so traffic 260 within the network is sent to the new VM 110a location at the server 230B via the route 275 as shown in FIG. 5.
In a conventional system, there may be a challenge that this network control using iBGP can result in an outage while iBGP is converging. To address this, exemplary embodiments use VPLS 225 to create a virtual LAN that spans the hosting points 105A and 105B over the wide area network 120. This ensures the correct delivery of packets 260 arriving at the previous hosting point 105A during the convergence time, so that the router 205a can re-route packets 260 from game clients 215 to the VM 110a in hosting point 105B without interruption to the online interactive game, although the route 270 used for re-routing might not be optimal for the transient period. Upon the completion of VM 110a migration, the new hosting point 105B can immediately announce the new location of the IP address assigned to the VM 110a, e.g., using an unsolicited ARP reply. Moreover, performing route control with these techniques of exemplary embodiments ensures that every game client 215 is always able to reach the VM 110a in the server 230B of the new hosting point 105B without interruption to the online game being played and without delay. Also, as long as the different physical locations are within a single Autonomous System (AS), no inter-AS advertisements are caused by the route control of the exemplary embodiments.
Further, each of the game clients 215, which represent a plurality of individual gamers using processor-based computing devices having gaming capabilities and network connection capabilities, is able to maintain its IP connection via the same IP address with which the game clients 215 originally connected to server 230A in hosting point 105A. Exemplary embodiments utilize VPLS 225 to connect distinct hosting points 105 across states, i.e., different geographical locations, by using subnetting.
For example, the hosting points 105A and 105B may have the same IP address, in that an IP (Internet Protocol) address is a unique identifier for a node or host connection on an IP network, and every IP address consists of two parts, one identifying the host network and one identifying the node. The node part may be referred to by the subnet “/— —” part of the IP address. Within the same IP address, hosting points 105A and 105B are identified by the same host network address part. For example, hosting points 105A and 105B may have the same IP address host network part 99.1.73.7. To specifically reach the server 230A in the hosting point 105A, the IP address including subnet is 99.1.73.7/29, and to reach the server 230B in the hosting point 105B, the IP address including subnet is 99.1.73.7/32. In exemplary embodiments, VM 110a has IP address 99.1.73.7 for this example. Before migration, traffic to and from VM 110a goes through provider edge 1 (PE 1) to VM 110a in the hosting point 105A. After the VM 110a is migrated to hosting point 105B, traffic can still reach the VM 110a in hosting point 105B because of layer 2 VPLS via re-routing by router 205a on route 270. In FIG. 4, the hypervisor 240A and/or hypervisor 240B changes the default gateway on the VM 110a to reach the Internet WAN 120 via route 280, such that the VM 110a transmits its packets to the individual game clients 215 through router 205b and PE 2 via the route 280. As such, traffic from the VM 110a in server 230B is routed correctly. However, in FIG. 4, traffic to the VM 110a from game clients 215 is still on route 270, which may not be optimal. The server software 405B via an iBGP session advertises IP address 99.1.73.7/32 for the VM 110a on server 230B as the more specific route 275, preferred over 99.1.73.7/29 for hosting point 105A. Nevertheless, at this point both paths 270 and 275 are valid via PE 1 and PE 2 as shown in FIGS. 4 and 5, without interruption to the playing of the interactive online game software by the VM 110a, and the game clients 215 can benefit from having more resources on the server 230B.
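The preference for the /32 over the /29 in the example above is ordinary longest-prefix matching, which can be checked with Python's standard `ipaddress` module. The prefixes mirror the example (the /29 is written in its network form, 99.1.73.0/29, which covers 99.1.73.7); the `best_route` helper is our illustrative stand-in for a router's forwarding lookup.

```python
import ipaddress

OLD_PREFIX = ipaddress.ip_network("99.1.73.0/29")  # old site: covers .0-.7
NEW_PREFIX = ipaddress.ip_network("99.1.73.7/32")  # new site: exactly the VM
VM_ADDR = ipaddress.ip_address("99.1.73.7")

def best_route(addr, prefixes):
    """Longest-prefix match: the most specific matching prefix wins."""
    matches = [p for p in prefixes if addr in p]
    return max(matches, key=lambda p: p.prefixlen)

# Before migration only the /29 exists, so traffic goes to the old site;
# once the /32 is advertised, it is preferred over the /29.
print(best_route(VM_ADDR, [OLD_PREFIX]))               # 99.1.73.0/29
print(best_route(VM_ADDR, [OLD_PREFIX, NEW_PREFIX]))   # 99.1.73.7/32
```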
Moreover, the rerouting of packets from the first server 230A to the second server 230B is accomplished as follows: The router 205b in front of the second server 230B advertises a more specific IP prefix via BGP than the router 205a in front of the first server 230A, and this prefix encompasses the virtual machine's IP address. After a short delay, the router 205a in front of the first server withdraws its less specific prefix via BGP. Finally, the router 205b in front of the second server 230B advertises the same less specific prefix via BGP as the first router 205a did and withdraws the more specific prefix. The BGP routing instructions for router 205a may be from server software 405A, and BGP routing instructions for router 205b may be from server software 405B. After this process, migration can occur in the reverse direction with the roles of the two servers reversed.
The hypervisors 240A and 240B and/or the service controller 125 are configured to have application-aware migration software. As discussed herein, the VM 110a may be running an interactive software game in which game clients 215 are operatively connected to and playing on the server 230A. The following example uses hypervisor 240A but applies to hypervisor 240B and the service controller 125. The hypervisor 240A may monitor the connections by game clients 215 to the VM 110a on the server 230A, and when the number of connections by game clients 215 reaches a predefined number that is related to the resources allocated for the VM 110a on the server 230A, the hypervisor 240A then monitors the particular game being played on the VM 110a to determine the round/stage of the game. For example, the hypervisor 240A can determine and/or receive an indication of nearing the end of the round and/or the end of the round. When the round ends, the hypervisor 240A can start VM 110a migration to the hosting point 105B by sending files and state as discussed herein. Accordingly, VM migration can occur at the end of a game round/stage as determined by the hypervisor 240A. In certain games running on the VM 110a, after every level, players return to their initial positions, and the game map and statistics are reset; during this time player movements are static, so performing migration at this time may be helpful to make any transient packet loss less directly perceptible to players, i.e., individual game clients 215. That is, the hypervisor 240A can be designed to be game software application aware such that the hypervisor 240A can identify such periods during which the game server, i.e., VM 110a, can be relocated. Note that VM 110a may be considered a virtualized server and game server.
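A minimal sketch of this application-aware logic, assuming a hypothetical interface through which the game reports round boundaries (the callable names and threshold are ours, not from this description):

```python
def maybe_start_migration(num_clients, round_over, start_migration,
                          client_threshold=64):
    """Start migration only at a round boundary, once load warrants it.

    `round_over` is a callable reporting whether the game just finished a
    round/level (player positions reset, movements static); the threshold
    is an illustrative stand-in for the predefined connection count.
    """
    if num_clients < client_threshold:
        return False                 # load does not yet warrant migration
    if round_over():                 # wait for the quiet period between rounds
        start_migration()
        return True
    return False
```

In practice the hypervisor would poll this check as connections and round indications arrive, deferring migration until both conditions hold.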
Exemplary embodiments are backwards compatible with existing/legacy servers, routers, access points, and switches. For example, exemplary embodiments do not require existing/legacy switches, routers, and access points to implement new specialized software and/or hardware components for the VM migration discussed herein. For example, the routers 205a, 205b, PE 1, and PE 2 are not required to have special software running on them such that the original route 290 in FIG. 1 can be changed to route 270 and/or 275. Additionally, exemplary embodiments do not require a special controller to cause the routes to be changed from route 290 to routes 270 and/or 275 during VM migration. For example, exemplary embodiments do not require specialized FlowVisor™ and/or OpenFlow™ software to be implemented in routers 205a, 205b, PE 1, PE 2, service controller 125, and switches (not shown) to cause seamless migration of VM 110a from server 230A to 230B. Further, exemplary embodiments do not require specialized routing-type software to sit between underlying physical hardware and software that controls a router, switch, and/or computer to control the flow of traffic entering and exiting the same.
Also, exemplary embodiments provide primitives for live VM (server) migration and do not cause outages and/or downtimes to the game clients 215 when migrating the VM 110a over the WAN 120. When transferring the files of the VM 110a, the hypervisor 240A also transfers the primitives to the hypervisor 240B during migration. Also, implementations of exemplary embodiments do not require modification of the particular game software running on the VM 110a for live VM migration to occur over the WAN 120.
FIG. 6 illustrates an example of a computer 600 having capabilities, which may be included in exemplary embodiments. Various methods, procedures, modules, flow diagrams, tools, applications, architectures, and techniques discussed herein may also incorporate and/or utilize the capabilities of the computer 600. Moreover, capabilities of the computer 600 may be utilized to implement features of exemplary embodiments discussed herein. One or more of the capabilities of the computer 600 may implement any element discussed herein, such as in FIGS. 1-5.
Generally, in terms of hardware architecture, the computer 600 may include one or more processors 610, computer readable storage memory 620, and one or more input and/or output (I/O) devices 670 that are communicatively coupled via a local interface (not shown). The local interface can be, for example but not limited to, one or more buses or other wired or wireless connections, as is known in the art. The local interface may have additional elements, such as controllers, buffers (caches), drivers, repeaters, and receivers, to enable communications. Further, the local interface may include address, control, and/or data connections to enable appropriate communications among the aforementioned components.
The processor 610 is a hardware device for executing software that can be stored in the memory 620. The processor 610 can be virtually any custom made or commercially available processor, a central processing unit (CPU), a digital signal processor (DSP), or an auxiliary processor among several processors associated with the computer 600, and the processor 610 may be a semiconductor based microprocessor (in the form of a microchip) or a macroprocessor.
The computer readable memory 620 can include any one or combination of volatile memory elements (e.g., random access memory (RAM), such as dynamic random access memory (DRAM), static random access memory (SRAM), etc.) and nonvolatile memory elements (e.g., ROM, erasable programmable read only memory (EPROM), electronically erasable programmable read only memory (EEPROM), programmable read only memory (PROM), tape, compact disc read only memory (CD-ROM), disk, diskette, cartridge, cassette or the like, etc.). Moreover, the memory 620 may incorporate electronic, magnetic, optical, and/or other types of storage media. Note that the memory 620 can have a distributed architecture, where various components are situated remote from one another, but can be accessed by the processor 610.
The software in the computer readable memory 620 may include one or more separate programs, each of which comprises an ordered listing of executable instructions for implementing logical functions. The software in the memory 620 includes a suitable operating system (O/S) 650, compiler 640, source code 630, and one or more applications 660 of the exemplary embodiments. As illustrated, the application 660 comprises numerous functional components for implementing the features, processes, methods, functions, and operations of the exemplary embodiments. The application 660 of the computer 600 may represent numerous applications, agents, software components, modules, interfaces, controllers, etc., as discussed herein, but the application 660 is not meant to be a limitation.
The operating system 650 may control the execution of other computer programs and provide scheduling, input-output control, file and data management, memory management, and communication control and related services.
The application(s) 660 may employ a service-oriented architecture, which may be a collection of services that communicate with each other. Also, the service-oriented architecture allows two or more services to coordinate and/or perform activities (e.g., on behalf of one another). Each interaction between services can be self-contained and loosely coupled, so that each interaction is independent of any other interaction.
Further, the application 660 may be a source program, executable program (object code), script, or any other entity comprising a set of instructions to be performed. When the application 660 is a source program, the program is usually translated via a compiler (such as the compiler 640), assembler, interpreter, or the like, which may or may not be included within the memory 620, so as to operate properly in connection with the O/S 650. Furthermore, the application 660 can be written in (a) an object-oriented programming language, which has classes of data and methods, or (b) a procedural programming language, which has routines, subroutines, and/or functions.
The I/O devices 670 may include input devices (or peripherals) such as, for example but not limited to, a mouse, keyboard, scanner, microphone, camera, etc. Furthermore, the I/O devices 670 may also include output devices (or peripherals), for example but not limited to, a printer, display, etc. Finally, the I/O devices 670 may further include devices that communicate both inputs and outputs, for instance but not limited to, a NIC or modulator/demodulator (for accessing remote devices, other files, devices, systems, or a network), a radio frequency (RF) or other transceiver, a telephonic interface, a bridge, a router, etc. The I/O devices 670 also include components for communicating over various networks, such as the Internet or an intranet. The I/O devices 670 may be connected to and/or communicate with the processor 610 utilizing Bluetooth connections and cables (via, e.g., Universal Serial Bus (USB) ports, serial ports, parallel ports, FireWire, HDMI (High-Definition Multimedia Interface), etc.).
When the computer 600 is in operation, the processor 610 is configured to execute software stored within the memory 620, to communicate data to and from the memory 620, and to generally control operations of the computer 600 pursuant to the software. The application 660 and the O/S 650 are read, in whole or in part, by the processor 610, perhaps buffered within the processor 610, and then executed.
When the application 660 is implemented in software, it should be noted that the application 660 can be stored on virtually any computer readable storage medium for use by or in connection with any computer related system or method. In the context of this document, a computer readable storage medium may be an electronic, magnetic, optical, or other physical device or means that can contain or store a computer program for use by or in connection with a computer related system or method.
The application 660 can be embodied in any computer-readable medium 620 for use by or in connection with an instruction execution system, apparatus, server, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. In the context of this document, a “computer-readable storage medium” can be any means that can store, read, write, communicate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. The computer readable medium can be, for example but not limited to, an electronic, magnetic, optical, or semiconductor system, apparatus, or device.
More specific examples (a nonexhaustive list) of the computer-readable medium 620 would include the following: an electrical connection (electronic) having one or more wires, a portable computer diskette (magnetic or optical), a random access memory (RAM) (electronic), a read-only memory (ROM) (electronic), an erasable programmable read-only memory (EPROM, EEPROM, or Flash memory) (electronic), an optical fiber (optical), and a portable compact disc memory (CDROM, CD R/W) (optical). Note that the computer-readable medium could even be paper or another suitable medium, upon which the program is printed or punched, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
In exemplary embodiments, where the application 660 is implemented in hardware, the application 660 can be implemented with any one or a combination of the following technologies, which are each well known in the art: a discrete logic circuit(s) having logic gates for implementing logic functions upon data signals, an application specific integrated circuit (ASIC) having appropriate combinational logic gates, a programmable gate array(s) (PGA), a field programmable gate array (FPGA), etc.
It is understood that the computer 600 includes non-limiting examples of software and hardware components that may be included in various devices, servers, routers, switches, and systems discussed herein, and it is understood that additional software and hardware components may be included in the various devices and systems discussed in exemplary embodiments.
As will be appreciated by one skilled in the art, aspects of the present invention may be embodied as a system, method or computer program product. Accordingly, aspects of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “circuit,” “module” or “system.” Furthermore, aspects of the present invention may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied thereon.
Any combination of one or more computer readable medium(s) may be utilized. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
The corresponding structures, materials, acts, and equivalents of all means or step plus function elements in the claims below are intended to include any structure, material, or act for performing the function in combination with other claimed elements as specifically claimed. The description of the present invention has been presented for purposes of illustration and description, but is not intended to be exhaustive or limited to the invention in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the invention. The embodiment was chosen and described in order to best explain the principles of the invention and the practical application, and to enable others of ordinary skill in the art to understand the invention for various embodiments with various modifications as are suited to the particular use contemplated.
The flow diagrams depicted herein are just one example. There may be many variations to this diagram or the steps (or operations) described therein without departing from the spirit of the invention. For instance, the steps may be performed in a differing order or steps may be added, deleted or modified. All of these variations are considered a part of the claimed invention.
While the exemplary embodiments of the invention have been described, it will be understood that those skilled in the art, both now and in the future, may make various improvements and enhancements which fall within the scope of the claims which follow. These claims should be construed to maintain the proper protection for the invention first described.
Claims
- A system for migrating a virtual machine over a wide area network, comprising: a first server at a first hosting point, the first server comprising the virtual machine; a second server at a second hosting point, the second hosting point being in a different geographical location than the first hosting point, the first server and the second server comprising hardware processors, wherein the second hosting point is operatively connected to the first hosting point over the wide area network; wherein within the wide area network the first hosting point is operatively connected to the second hosting point by a virtual private local area network service; wherein the virtual machine runs interactive online gaming software on the first server; wherein the virtual machine on the first server communicates packets to and from clients operatively connected to the virtual machine on the first server via an internet protocol address assigned to the virtual machine; wherein in response to instructions, the first server migrates the virtual machine to the second server on the second hosting point by incrementally copying files and state of the virtual machine to the second server without interrupting the interactive software running on the virtual machine; wherein the first server comprises application-aware software that detects an end of a game stage in the interactive online gaming software running on the virtual machine when online players in a game return to initial positions after a level; wherein the first server starts migration of the virtual machine for the interactive online gaming software running on the virtual machine in response to detecting the end of the game stage when players in the game return to initial positions after the level; wherein during a last round of migrating the virtual machine to the second server when the virtual machine temporarily stops operating, for packets intended for the virtual machine on the first server, the first server buffers the packets in a buffer as buffered packets, and instead of discarding the packets, the first server transmits the buffered packets to the second server at the second hosting point; wherein the second server sends the buffered packets to the virtual machine now migrated to and operating on the second server at the second hosting point, such that the buffered packets are played before current packets currently received from the clients are played; wherein the second server causes a second router, in front of the second server, to advertise a more specific internet protocol prefix via a border gateway protocol than a first router in front of the first server, the more specific internet protocol prefix encompassing the internet protocol address assigned to the virtual machine; wherein in response, the first server causes the first router in front of the first server to withdraw a less specific internet protocol prefix of the first router via the border gateway protocol; wherein the second server receives and distinguishes current packets received by the second server from the buffered packets received by the second server, in which the current packets are received from the clients and the buffered packets are received from the first server; wherein in response to the last round of migrating the virtual machine to the second server, the first server marks the buffered packets which were originally intended for the virtual machine on the first server, such that the second server can identify a mark of the buffered packets when the second server receives the buffered packets; and wherein at an end of the buffered packets being transmitted to the second server, the first server sends an end of buffered packets message to the second server, such that the second server knows to start playing the current packets to the virtual machine on the second server upon receipt of the end of buffered packets message in place of playing the buffered packets having the mark.
- The system of claim 1, wherein in response to the virtual machine being migrated to the second server, the current packets intended for the first server and received at the first router of the first hosting point are re-routed to the second router at the second hosting point for transmission to the second server; and wherein the current packets are re-routed over a first path which is from the clients to the first router of the first hosting point and then to the second server.
- The system of claim 2, wherein the virtual machine on the second server transmits packets through the second router of the second hosting point directly to the clients without first routing to the first router of the first hosting point.
- The system of claim 2, wherein in response to the virtual machine being migrated to the second server at the second hosting point, the second server advertises a more specific internet protocol address subnet of the virtual machine on the second server; and wherein the more specific internet protocol address subnet of the virtual machine on the second server is different from a previous subnet of the virtual machine when the virtual machine was on the first server.
- The system of claim 4, wherein by switching from the previous subnet of the virtual machine when the virtual machine was on the first server to the more specific internet protocol address subnet of the virtual machine on the second server, the current traffic switches from the first route to a second route.
- The system of claim 5, wherein the second route does not include the first router of the first hosting point.
- The system of claim 1 , wherein when the virtual machine is migrated from the first sever to the second server, the internet protocol address identifying a host network part remains the same, such that internet protocol connections previously made by the clients to the virtual machine at the first server using the host network part of the internet protocol address do not change and are not broken during and after the virtual machine is migrated from the first server to the second server.
- The system of claim 7, wherein the interactive software running on the virtual machine is interactive online gaming software; wherein the clients are online game players operatively connected to the virtual machine by their internet protocol connections to the internet protocol address, the internet protocol address identifying the host network part and a subnet part; wherein the host network part is a conjunction of the first hosting point and second hosting point by the virtual private local area network service; wherein during the last round of migrating the virtual machine to the second server, the buffered packets are received by the first server when the virtual machine is no longer processing the buffered packets to play the interactive online gaming software; wherein the buffered packets are played first by the virtual machine migrated to the second server such that the online game players do not recognize that the virtual machine is migrated to the second server and such that the online game players continue to connect to the internet protocol address; and wherein the subnet of the internet protocol address is changed from a first subnet of the first server to a second subnet of the second server, both the first and second servers being in the virtual private local area network service which is within the larger wide area network and both the first and second servers being at different geographical locations.
- A method for migrating a virtual machine over a wide area network, comprising: providing a first hosting point having a first server, the first server comprising the virtual machine; providing a second hosting point having a second server, the second hosting point being in a different geographical location than the first hosting point, wherein the second hosting point is operatively connected to the first hosting point over the wide area network, and wherein within the wide area network the first hosting point is operatively connected to the second hosting point by a virtual private local area network service; wherein the virtual machine runs interactive online gaming software on the first server; wherein the virtual machine on the first server communicates packets to and from clients operatively connected to the virtual machine on the first server via an internet protocol address assigned to the virtual machine; in response to instructions, migrating by the first server the virtual machine to the second server on the second hosting point by incrementally copying files and state of the virtual machine to the second server without interrupting the interactive software running on the virtual machine; wherein the first server comprises application-aware software that detects an end of a game stage in the interactive online gaming software running on the virtual machine when online players in a game return to initial positions after a level; wherein the first server starts migration of the virtual machine for the interactive online gaming software running on the virtual machine in response to detecting the end of the game stage when players in the game return to initial positions after the level; during a last round of migrating the virtual machine to the second server when the virtual machine temporarily stops operating, for packets intended for the virtual machine on the first server, buffering by the first server the packets in a buffer as buffered packets instead of discarding the packets, transmitting by the first server the buffered packets to the second server at the second hosting point; and sending by the second server the buffered packets to the virtual machine now migrated to and operating on the second server at the second hosting point, such that the buffered packets are played before current packets currently received from the clients are played; wherein the second server causes a second router, in front of the second server, to advertise a more specific internet protocol prefix via a border gateway protocol than a first router in front of the first server, the more specific internet protocol prefix encompassing the internet protocol address assigned to the virtual machine; wherein in response, the first server causes the first router in front of the first server to withdraw a less specific internet protocol prefix of the first router via the border gateway protocol; wherein the second server receives and distinguishes current packets received by the second server from the buffered packets received by the second server, in which the current packets are received from the clients and the buffered packets are received from the first server; wherein in response to the last round of migrating the virtual machine to the second server, the first server marks the buffered packets which were originally intended for the virtual machine on the first server, such that the second server can identify a mark of the buffered packets when the second server receives the buffered packets; and wherein at an end of the buffered packets being transmitted to the second server, the first server sends an end of buffered packets message to the second server, such that the second server knows to start playing the current packets to the virtual machine on the second server upon receipt of the end of buffered packets message in place of playing the buffered packets having the mark.
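The buffered-packet handoff recited in the claims above — mark packets buffered during the final migration round, forward them to the destination, then signal completion with an end-of-buffered-packets message so the destination plays marked packets before any current client packets — can be sketched as follows. This is a minimal illustration, not the patented implementation; the class name, queue layout, and `END_OF_BUFFER` sentinel are assumptions for the sketch.

```python
from collections import deque

# Hypothetical sentinel standing in for the end-of-buffered-packets message.
END_OF_BUFFER = object()

class DestinationServer:
    """Illustrative model of the second server's packet ordering."""

    def __init__(self):
        self.buffered = deque()   # marked packets forwarded by the first server
        self.current = deque()    # packets arriving directly from clients
        self.buffer_done = False  # set once END_OF_BUFFER is received
        self.played = []          # order in which packets reach the VM

    def receive(self, packet, marked=False):
        if packet is END_OF_BUFFER:
            self.buffer_done = True
        elif marked:
            self.buffered.append(packet)
        else:
            self.current.append(packet)
        self._play()

    def _play(self):
        # Marked (buffered) packets are always played first; current client
        # packets are held back until the end-of-buffer message arrives.
        while self.buffered:
            self.played.append(self.buffered.popleft())
        if self.buffer_done:
            while self.current:
                self.played.append(self.current.popleft())
```

For example, a current packet that arrives before the buffered packets have finished replaying is held, so the VM still sees the in-flight packets from the first server in order before any newly arriving client traffic:

```python
dst = DestinationServer()
dst.receive("c1")                 # current packet arrives early: held
dst.receive("b1", marked=True)    # buffered packets: played immediately
dst.receive("b2", marked=True)
dst.receive(END_OF_BUFFER)        # now current packets may be played
# dst.played is ["b1", "b2", "c1"]
```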