U.S. Pat. No. 9,261,968

Methods and Systems for Dynamic Calibration of Movable Game Controllers

Assignee: AiLive, Inc.

Issue Date: October 16, 2013


Abstract

A video gaming system includes a wireless controller that senses linear and angular acceleration to calculate paths of controller movement over a broad range of controller motion. The system also includes an electromagnetic alignment element, such as a set of LEDs. The controller includes an additional sensor to sense light from the LEDs over a relatively restricted range of controller motion, and uses this sensed light to dynamically calibrate the controller when the controller passes through the restricted range of motion over which the sensor senses the light.

Description


DETAILED DESCRIPTION OF THE INVENTION

1. Definitions

In the description of the invention herein, above and hereinafter, the following definitions are offered to clarify the terminology used:

Self-contained inertial sensor: a device that requires no external signal sources to be placed in the environment for measuring acceleration of a moving body along one or more axes of the six possible linear and angular axes. Unless stated otherwise, the word sensor is understood to refer to a self-contained inertial sensor. For illustrative purposes, in this document, we describe instantiations using accelerometers and gyroscopes. However, those skilled in the art would immediately recognize that other devices could be used as self-contained inertial sensors. For example, a camera that compares images over time (such as the camera used in an optical mouse) could be used as a self-contained inertial sensor. But an infrared camera that is designed to work by tracking infrared sources or markers that have been deliberately placed in the environment is not an example of a self-contained inertial sensor.

Accelerometer: a device for measuring acceleration along one or more axes at a point on a moving body. An accelerometer is an example of a self-contained inertial sensor. The device can be from single- to tri-axial, depending upon the number of axes it measures at a given location. For example, a tri-axial accelerometer measures acceleration along three axes at the point where the accelerometer is located. A rigid body can move independently in any of six possible degrees of freedom, three linear and three rotational. Therefore, without additional assumptions about constraints on the motion path, a single accelerometer can never be sufficient to determine the linear and angular motion of a rigid body to which it is attached. Moreover, without making additional assumptions about constraints on the motion path, a single (even tri-axial) accelerometer cannot even determine the motion of the rigid body it is attached to along a single degree of freedom. That is because, without additional information, there is no way to know whether the source of the accelerations it is experiencing is linear or angular motion of the rigid body to which it is attached. However, readings from a set of accelerometers placed at different points on a rigid body in some suitable configuration can be processed to determine the linear and angular motion of the rigid body along all six degrees of freedom. Note that, even at rest, an accelerometer is responsive to the Earth's, or any other large enough object's, gravitational field.

Gyroscope: a device for measuring angular velocity around one or more axes at a point on a rotating object. A gyroscope is an example of a self-contained inertial sensor. The device can be from single- to tri-axial, depending upon the number of axes it measures at a given location. For example, a tri-axial gyroscope measures angular velocity around three axes at the point where the gyroscope is located. While a tri-axial gyroscope is sufficient to track a rigid body's orientation over time, it provides no information with respect to linear movements of the body in space.

Controller: a movable game controller, preferably but not necessarily wireless and hand-held, that includes one or more self-contained motion sensors and provides output data to control an associated interactive application such as a computer game.

Basic Controller: A controller, as defined above, lacking sufficient self-contained inertial sensors to track linear and angular motion in all six degrees of freedom.

Composite Controller: A controller, in accordance with this invention, in which another controller or device containing self-contained inertial sensors has been attached to a basic controller to enhance the motion sensing capability of the basic controller.

Self-tracking Object: A self-tracking object is an object that contains self-contained inertial sensors that produce a time series that is sufficient to track changes in position and orientation of that object. A composite controller is one example of an object that could be a self-tracking object.

2. Description

Several embodiments of the invention will be described in detail. FIGS. 1-3 each depict one of several possible embodiments of a composite game controller in which a unitary output from the composite game controller representative of the linear and angular motion of the moving composite controller must be provided.

With reference to FIG. 1, a composite controller is constructed using a dongle connection. In particular, the ancillary controller or housing containing the ancillary self-contained inertial sensors 103 is received in the port 102 of the basic game controller 101. In such a linkage, the outputs from the self-contained inertial sensors in the dongle 103 may be combined with the output of the self-contained inertial sensor in controller 101 by the data processing unit in controller 101 to provide the combined output of the whole composite controller. For best results, the combined sensing in all of the self-contained inertial sensors in the composite controller should provide for motion sensing along six axes: three linearly independent axes for sensing the linear motion of the moving composite controller, and three linearly independent axes for sensing the three-dimensional angular movement. Thus, if the basic controller 101 has a tri-axial accelerometer, the dongle 103 should include self-contained inertial sensors that, when combined with the tri-axial accelerometer in the basic controller, provide for the sensing of motion in all six degrees of freedom.

With respect to FIG. 2, a component 202 may be banded in a fixed position and orientation to a basic controller 201 by strap 203. In the rudimentary illustration of the embodiment, two game controllers are banded together. However, component 202 may also be an appropriate self-contained inertial sensor mounted in a housing supporting the self-contained inertial sensor. Where the basic controller 201 has a conventional tri-axial accelerometer, component 202 should include enough additional self-contained inertial sensors so that the combined output of the tri-axial accelerometer and the additional sensors is sufficient to track all six degrees of freedom. Here again, irrespective of the sensor housing, there must be an implementation in either hardware or software for combining the outputs of the plurality of self-contained inertial sensors forming the composite game controller so as to provide a unitary output from the composite game controller representative of the linear and angular motion of the moving composite controller.

With reference to FIG. 3, another rudimentary illustration similar to FIG. 2 is shown. A basic controller 301 is mounted on a rigid planar supporting substrate 305. For the purpose of this illustrative embodiment, two additional components 302 and 303 are also mounted on the substrate 305. Each of the components provides at least one self-contained inertial sensor. It should be noted that, although for convenience in illustration of the rudimentary embodiment of FIG. 3 three game controllers which include self-contained inertial sensors are banded together by band 304 in fixed positions and orientation with respect to one another, the two self-contained-inertial-sensor-containing components 302 and 303 banded to controller 301 need not be controllers. Components 302 and 303 may be appropriate self-contained inertial sensors mounted in housings supporting the self-contained inertial sensors. Whether the ancillary components 302 and 303 are controllers or just the necessary self-contained inertial sensors, there must be an implementation for combining the outputs of the plurality of self-contained inertial sensors forming the composite game controller so as to provide a unitary output from the composite game controller representative of the linear and angular motion of the moving composite controller.

In each of the composite controllers depicted in FIGS. 1-3, there may be provided some form of data transmission channel between the attached self-contained inertial sensor component and the basic controller. A conventional short-range wireless RF transmission, e.g., a Bluetooth™ system, may be used, for example, to transmit data from attached component 202 to the basic controller 201, wherein the outputs could be correlated. Alternatively, the data representative of the angular and/or linear motion of each of components 201 and 202 could be wirelessly transmitted to the computer controlled game display, and correlated by the game display computer to provide the angular and linear motion of the composite controller required for playing the computer game.

In accordance with a broad aspect of the present invention, as illustrated in FIG. 4, all of the self-contained inertial sensors needed to provide sensing along the six axes required to practice the present invention may be built into one game controller.

As set forth hereinabove, the combined self-contained inertial sensors must provide for sensing the total of three linear and three angular axes in order to track unconstrained motion by the user. This requirement may be satisfied in a variety of ways. In particular, any combination of accelerometers and gyroscopes providing readings on at least six distinct axes, with at most three gyroscopes, will be sufficient if positioned appropriately. When using one tri-axial accelerometer and one tri-axial gyroscope, the sensors may be placed in any known relation to one another. When fewer than three readings of angular velocity are present, the location and orientation of the combined self-contained inertial sensors with respect to each other is important in order to provide a feasible operative embodiment. Although many combinations of such locations and orientations would be feasible for any given set of self-contained inertial sensors, reference may be made to the above-referenced publication, Design and Error Analysis of Accelerometer-Based Inertial Navigation Systems, Chin-Woo Tan et al., published in June 2002 by the University of California at Berkeley for the State of California PATH Transit and Highway System, for determining such feasible combinations when using accelerometers.

Considering now the correlation of the mounted self-contained inertial sensors in any of FIGS. 1-3, wherein each of the three embodiments may be considered to be a wireless composite game controller, the result is a system that includes self-contained inertial sensors and programming routines that can convert the acceleration and angular velocity data recorded by the device at each time t into information sufficient to compute a new location and orientation of the device in the world frame at time t. For appropriate conversions, reference is made to the above-described Design and Error Analysis of Accelerometer-Based Inertial Navigation Systems publication by Chin-Woo Tan et al., particularly Equation 2.7 on page 6, wherein it is set forth that accelerometer outputs are a function of linear acceleration in the world frame and angular acceleration in the body frame. The application in the above-referenced paper assumes the use of six single-axis accelerometers, but Equation 2.7 can be easily modified to handle one or more gyroscopes instead of accelerometers by directly substituting the observed angular velocities and taking derivatives to calculate angular accelerations. Solving these equations then allows the system to track the position and orientation of a fixed point within the controller over time. All of the self-contained inertial sensors in any of the components in each of the embodiments must be positioned and oriented so as to provide a combination of self-contained inertial sensors feasible to provide outputs sufficient to compute the linear and angular motion of the moving controller as described above. While not all such sensor configurations are feasible, it has surprisingly been found that almost all of the configurations using six accelerometers turn out to be feasible. Thus, the configurations of accelerometer positions and orientations need only be grossly rather than finely adjusted in the formation of the composite game controller.
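
The feasibility observation above can be made concrete with a small numerical check. The sketch below is an illustration, not the patent's method: it models each single-axis accelerometer reading as the sensing direction dotted with the linear acceleration plus the tangential acceleration from angular acceleration, so a six-accelerometer configuration with positions r_i and sensing directions d_i is feasible exactly when the 6x6 matrix whose rows are [d_i, r_i x d_i] is invertible. All positions and directions shown are assumed values.

```python
# Hypothetical feasibility check for a six-accelerometer configuration.
# Row i of the 6x6 matrix is [d_i, r_i x d_i]: a single-axis reading is
# d_i . (linear_acc + angular_acc x r_i) plus terms that depend only on
# the (known) angular velocity, so the configuration determines linear
# and angular acceleration exactly when this matrix is invertible.

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def det(m):
    """Determinant by Gaussian elimination with partial pivoting."""
    m = [list(row) for row in m]
    n, d = len(m), 1.0
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(m[r][i]))
        if abs(m[p][i]) < 1e-12:
            return 0.0
        if p != i:
            m[i], m[p] = m[p], m[i]
            d = -d
        d *= m[i][i]
        for r in range(i + 1, n):
            f = m[r][i] / m[i][i]
            for c in range(i, n):
                m[r][c] -= f * m[i][c]
    return d

def feasible(positions, directions):
    rows = [list(d) + list(cross(r, d))
            for r, d in zip(positions, directions)]
    return abs(det(rows)) > 1e-9

# A tri-axial accelerometer at the origin plus three single-axis
# accelerometers displaced 10 cm along the body axes (feasible):
positions = [(0, 0, 0)] * 3 + [(0.1, 0, 0), (0, 0.1, 0), (0, 0, 0.1)]
directions = [(1, 0, 0), (0, 1, 0), (0, 0, 1),
              (0, 1, 0), (0, 0, 1), (1, 0, 0)]
print(feasible(positions, directions))             # True
print(feasible([(0, 0, 0)] * 6, [(1, 0, 0)] * 6))  # False: co-located, parallel
```

Consistent with the observation that almost all six-accelerometer configurations are feasible, the matrix is singular only for degenerate layouts such as co-located or parallel sensors.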

With respect to the above-referenced Tan et al. California PATH Program publication, it should also be noted that the purpose of the present invention is the tracking of motion with centimeter-scale accuracy on a time scale of seconds, rather than on the vehicular scale of the publication: accuracy on a scale of tens of meters over a time scale of tens of minutes.

More specifically with respect to the structures shown in the figures, the advantage of the embodiment of FIG. 1 is that the dongle plug-in represents a practical embodiment of the present invention. As set forth above, a motion sensing dongle 103 is plugged into the expansion port 102 of an existing motion sensing game controller 101. For example, if the motion sensing game controller is a Nintendo Wii Remote™, then the dongle plugs in to its expansion port. This embodiment is commercially feasible since it relies only on one motion sensing game controller, which is available to every consumer of a motion-sensing-enabled games console (e.g. the Nintendo Wii system). A new dongle containing a plurality of self-contained inertial sensors may then be produced at reasonable cost. No special effort is required to extract desirable motion data out of this configuration because of the interconnection into the basic controller, e.g. the Wii Remote controller. As will be subsequently described with respect to FIG. 5, the game controller signal that is directed from the controller to the game display will contain the output from the tri-axial accelerometer in the Wii Remote and the correlated output from the plurality of self-contained motion sensors in the dongle.

In accordance with another aspect of the embodiment of FIG. 1, the attached dongle contains enough additional self-contained inertial sensors that, when combined with the motion sensing game controller's sensors (e.g., a tri-axial accelerometer), the resulting combination of sensor readings is sufficient to estimate the position and orientation of the controller over time. The dongle alone may not necessarily be enough to fully specify all six variables, but by selecting the number of self-contained inertial sensors and their approximate positioning in the dongle, it becomes possible, through the attachment of such a dongle to a basic game controller such as a Wii Remote, to create a composite controller with the ability to track motion and orientation in all six dimensions even though each device has insufficient data individually. Assuming the basic game controller possesses a tri-axial accelerometer, possible embodiments of the dongle would include gyroscopes covering three linearly independent axes, or fewer gyroscopes with additional accelerometers to estimate angular acceleration. If a gyroscope is not available to measure angular velocity around the axis running from the tri-axial accelerometer in the basic controller to the inertial sensors in the dongle, it may be necessary to constrain the user's motion in order to get accurate state estimation, since the accelerometers will be unable to directly detect angular acceleration around this axis. The system passes this constraint through to inform any user of the system that they need to limit this movement of their wrist as much as they can.

An alternate embodiment is shown in FIG. 2, where multiple game controllers are combined in order to form a composite controller whose joint sensors provide for the sensing of the total of six linear/angular axes. Additional advantages may be found by including more than two controllers or using more sensors than needed to measure along the three linearly independent linear axes and three linearly independent angular axes. Note that, as described in the above embodiment, it may still be necessary to restrict the user's motion in some manner if the configuration of the composite controller disallows measurements along one or more of the angular axes. However, if the readings of the sensors are linearly independent, the methods described in the Tan et al. publication will be sufficient to solve for all six axes even if only accelerometers are available for use.

One advantage of the embodiment of FIG. 2 is that it may allow construction of the composite controller by the user from existing basic controllers. However, this method would likely then require a per-device calibration process to render the configuration known to the system. This can be implemented by having the user place the composite controller in a series of simple static poses (rather than moving the controller along precise arcs). On a flat surface, the controller is permitted to rest for a few moments with the manufacturer's faceplate of each motion sensing game controller resting on the flat surface (i.e. the y axis of each is aligned with gravity). This simple static process allows tuning of the configuration of the above-referenced algorithm so that it aligns more closely with what the user has actually produced. As set forth hereinabove, in combining accelerometers to provide for the tracking of linear and angular motion, even gross positioning of the accelerometers with respect to one another will provide some level of tracking for these motion attributes. Accordingly, relatively gross levels of accuracy of accelerometer alignment may be enhanced by domain-level feedback into the system, which helps dampen the errors in positioning that may eventually accumulate. Accordingly, it becomes possible to extrapolate acceleration reading accuracy to compensate for only educated guesses as to the optimum positioning of the controllers.
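
The static-pose step above can be sketched as follows. This is an illustrative fragment, not the patent's code: at rest the accelerometer senses only gravity, so with the faceplate up (the y axis aligned with gravity, as in the text) the deviation of the reading from the +y axis gives the controller's tilt. The axis convention and function name are assumptions of this sketch.

```python
import math

def tilt_from_rest(ax, ay, az):
    """Roll and pitch (radians) of a resting controller, assuming the
    manufacturer's faceplate-up pose reads approximately (0, +g, 0).
    Yaw about the gravity axis is unobservable from gravity alone."""
    roll = math.atan2(ax, ay)                   # lean about the z axis
    pitch = math.atan2(az, math.hypot(ax, ay))  # lean toward/away from the user
    return roll, pitch

print(tilt_from_rest(0.0, 9.8, 0.0))  # (0.0, 0.0): faceplate flat on the surface
```

Comparing the tilt measured by each banded controller during the same static pose is one way the system could deduce their relative orientations without precise user effort.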

The above general algorithm may be extended to report results only at the end of one or more repeated motions, wherein each motion starts with identical initial constraints and follows essentially the same track in time and space, with final velocities and accelerations being zero. Let m≥1 be the number of those repeated motions. Final motion track estimation may then take as input all m solutions over time, as well as optionally all m sets of time series sensor readings for linear and angular accelerations for the controller, and output one final solution which is computed as a function of the m inputs.
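
As one hedged illustration of such a final estimator (the text leaves the combining function open), the m per-motion solutions could simply be averaged pointwise:

```python
def aggregate_tracks(tracks):
    """Combine m solutions over time (lists of equal length, one value
    per time step) into one final solution. The combining function here
    is a pointwise mean, one simple choice among many."""
    return [sum(values) / len(values) for values in zip(*tracks)]

# Three repeated motions, each tracked over four time steps:
print(aggregate_tracks([[0, 1, 2, 3], [0, 1, 2, 5], [0, 1, 2, 4]]))
# [0.0, 1.0, 2.0, 4.0]
```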

Further, the algorithm may be extended to use accelerometer-based motion recognition to constrain which repeated motions are acceptable as inputs to this final motion track estimator. Since each controller of this invention provides a motion signal, gestures may be classified, through appropriate training or calibration sessions with the proposed user, as to whether they provide acceptable motion signals. Then, motions that are significantly different from the original can be identified and removed from the aggregation process described above.

The algorithm may be extended to inform the system when the controller has accumulated so much error that it is no longer providing reasonable tracking information. The algorithm may also be extended with additional assumptions such as that the computed velocities are not permitted to exceed human limitations at any point in time t, and along any axis.

FIG. 5 is a simple diagrammatic illustration of what has been described with respect to the apparatus of FIGS. 1-4. Computer controlled interactive game display 500 has game action 501 which is controlled by a game controller 503, which may preferably be a composite controller in accordance with the present invention, carried along some path 506, which has a linear component 504 and an angular component 505, by a player's moving hand 502.

The programming in the computer controlled display 500 and in the handheld controller 503 assumes that the player holds the controller in some starting position; then, as the player moves the controller, the programming is able to estimate the relative position and orientation of the controller 503 reliably for several seconds. During that time, a game 500 is able to draw 501 a representation of the controller's state. Known techniques, such as inverse kinematics, allow the state of the controller to drive an animation in a game. For example, a game character could swing a virtual sword in a manner that is similar to the way the player swung the physical controller.

The location of the boundary of the game, i.e. the limits of the controller 503 movement with respect to the game display 500, is arbitrary and domain-dependent. Preferably there is a radius around the initial location of the game display which is about the operational range of most game controllers.

Referring now to FIG. 6, there will be described a generalized flowchart of the programming in the operation of the invention using a game controller for a computer controlled game display, as has been described with respect to FIG. 5.

An initial determination is made as to whether the user has started the controller motion, step 601. In regard to the initial state of the controller, the following constraints are suggested: initial velocities and accelerations are zero. If the initial determination of motion is “Yes”, then the readings from all the sensors in the controller must be obtained. In the case of a composite controller, this includes all sensor readings from the basic controller as well as all readings from any sensors associated with other components that comprise the composite controller. Typically, the sensor values are read at some suitably high frequency and, at an appropriate point consistent with the computer game being played, the data from the sensor readings is output to the computer controlled game display via the previously described short-range RF transmission, step 602. Note that transmission of the sensor readings data typically occurs hundreds of times a second whenever the controller and computer controlled game display are turned on. So step 602 merely implies that the computer controlled game display will start to process those readings in a manner consistent with the invention. Next, the processor associated with the computer controlled game display executes step 603, in which the angular motion is extracted from the sensor readings. This step will depend on the particular configuration of sensors used. For example, if three gyroscopes are used, then the gyroscopes will provide readings of angular velocity which can be integrated once to obtain the relative angular motion, i.e. the change in orientation. If accelerometers are used instead, then the readings will provide angular acceleration which can be integrated twice to obtain the relative angular motion. Of course, gyroscopes could be used for some angular axes and accelerometers for others, in which case step 603 will perform the appropriate action of integrating once for readings from gyroscopes and twice for readings from accelerometers. The change in orientation calculated in step 603 is then used in step 604 to update the previous orientation estimate by adding in the change in orientation. The sensor readings not used in calculating the angular motion are then extracted from the sensor readings data, step 605. Typically, the remaining sensor readings will be from accelerometers, and the estimate of the angular motion from step 603 can be factored out of those accelerometer readings, step 606, to leave the accelerations due to linear motion along all three linear axes, i.e. the change in position. Then the position of the controller can be updated, step 607, using the estimated change in position calculated in step 606. As the controller continues to be moved, a determination is made as to whether the movement is being continued, step 608. If Yes, the process is returned to step 602, and movement tracking is continued. If No, a further determination is made as to whether the game is over, step 609. If Yes, the game is exited. If No, the process is branched back to step 601, wherein the player's next controller movement is awaited.
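
The integration pattern of the flowchart can be sketched in miniature. The loop below is an assumed single-axis simplification (one gyroscope axis, one linear axis, gravity already removed), not the patent's implementation; it shows the step 603-607 pattern of integrating angular velocity once and linear acceleration twice.

```python
def track(readings, dt):
    """Single-axis sketch of the FIG. 6 loop, steps 602-607. Each
    reading is a (gyro, acc) pair: angular velocity in rad/s and
    gravity-free linear acceleration in m/s^2."""
    angle = velocity = position = 0.0   # suggested initial constraints: at rest
    history = []
    for gyro, acc in readings:          # step 602: obtain sensor readings
        angle += gyro * dt              # steps 603-604: integrate rate once
        velocity += acc * dt            # step 606: accumulate linear velocity
        position += velocity * dt       # step 607: integrate again for position
        history.append((angle, position))
    return history

# Constant 1 rad/s rotation, no linear motion, two 0.5 s steps:
print(track([(1.0, 0.0), (1.0, 0.0)], 0.5))  # [(0.5, 0.0), (1.0, 0.0)]
```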

Referring now to FIG. 7, there will be described an overview of one way of using the software elements.

The left hand side shows the situation at a first time 706. The right hand side shows the situation at a second time 707.

At a first time period 706 (shown in FIG. 7 as the past), a game player 702 is holding a self-tracking object 703. The output from the self-tracking object 703 is being communicated to a game console 704, wirelessly or by some other technique. The game console 704 operates in conjunction with a game. The game is presenting a depiction of a game world (such as a made-up, fictional world) on a display or other presentation device 720.

At a second time period 707 (shown in FIG. 7 as the present), the player performs a gesture or other motion 711 with the self-tracking object 703. The motion 711 includes a translational component 712 and a rotational component 713, as a result of which the device is moved from the first configuration 708 at the first time period 706 to a second configuration 714 at the second time period 707. In response to the motion 711, the self-tracking object 703 generates one or more time series of data, those one or more time series of data being descriptive of the motion 711.

Software elements 705 being executed on the game console 704, or being executed on another device and accessible by the game console 704, interpret at least some of the time series of data generated by the self-tracking object 703 in response to the motion 711, and cause the presentation device 720 to display a corresponding animation of an object corresponding to the self-tracking object 703 (such as some fictional character in the game world) moving from a first configuration 715 to a second configuration 716. In one embodiment, those software elements 705 use methods as described herein to create a more faithful corresponding animation of that motion than would otherwise be possible.

Referring now to FIG. 8, there will be described a backfitting algorithm, as used in one embodiment, applied to a self-tracking object.

In the most general case, the motion takes place in three dimensions, with three degrees of translational freedom and three additional degrees of rotational freedom. For expository purposes, and for ease of representation on a 2-dimensional page, the description below relates to a 2-dimensional motion 811. However, the description of the 2-dimensional case is more than adequate to illustrate how the method is applied to a 3-dimensional motion 811. Accordingly, those skilled in the art would easily understand, from a description of the method with respect to a 2-dimensional motion 811, how to apply the same method with respect to a 3-dimensional motion 811.

A motion 811 starts with a self-tracking object, such as the motion sensitive device 703 shown in FIG. 7, in some first configuration in which the position 801 and the orientation 802 are known, or at the least, assumed to be known. Methods for inferring the initial configuration of the object are described in greater detail below. In one embodiment, the orientation 802 of the self-tracking object is inferred from accelerometer readings that, during momentary periods of quiescence, indicate the direction of gravity relative to the self-tracking object. The software elements 705 determine, in response to sensor readings from accelerometers and gyroscopes, whether a period of quiescence is taking place. In one embodiment, the origin is set to the location of the self-tracking object.
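
A minimal sketch of that quiescence test follows; the thresholds and function names are illustrative assumptions, not values from the patent. The object is treated as momentarily at rest when the gyroscope rates are near zero and the accelerometer magnitude is near 1 g, at which point the accelerometer reading can be taken as the gravity direction in the object frame.

```python
import math

G = 9.8  # m/s^2, gravitational acceleration near Earth's surface

def is_quiescent(gyro, acc, rate_tol=0.05, mag_tol=0.3):
    """True when gyro rates are near zero and the accelerometer senses
    approximately 1 g, i.e. the object is momentarily at rest."""
    rate = math.sqrt(sum(w * w for w in gyro))
    mag = math.sqrt(sum(a * a for a in acc))
    return rate < rate_tol and abs(mag - G) < mag_tol

def gravity_direction(acc):
    """Unit vector toward gravity in the object frame; meaningful only
    while is_quiescent() holds."""
    mag = math.sqrt(sum(a * a for a in acc))
    return tuple(a / mag for a in acc)

print(is_quiescent((0.0, 0.0, 0.0), (0.0, 9.8, 0.0)))  # True
print(gravity_direction((0.0, 9.8, 0.0)))              # (0.0, 1.0, 0.0)
```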

In alternative embodiments, information from a direct pointing device may be used to infer information about the initial configuration. For example, the self-tracking object might include a laser pointer which the player might orient by directing that laser pointer at the presentation device 820, or some other device whose location is known to the software elements 705. Those skilled in the art would recognize that a variety of other and further possible sensors, assumptions, or both, may be used to obtain information regarding a starting configuration of the self-tracking object.

As the self-tracking object moves, the software elements 705 integrate and combine gyroscope and accelerometer readings to provide estimates of changes in the self-tracking object's time-varying configuration. The following equations show simplified example computations:
orientation(t+dt) = orientation(t) + Gyro(t)*dt  (1)
velocity(t+dt) = velocity(t) + (orientation(t)*(Acc(t) − (centripetal accelerations from rotation at time t)) − Gravity)*dt  (2)
position(t+dt) = position(t) + velocity(t+dt)*dt  (3)

In equation (1) above, Gyro(t) includes three orthogonal readings of angular velocity at time t. Multiplying by dt, the time elapsed since the previous readings, gives the angular change around each axis since the previous readings. This change can be applied to the previous estimate of orientation. Embodiments for making these computations depend on the form in which the orientation information is stored. In the games industry, quaternions are commonly used for this purpose, in which case the angular change from the Gyro(t)*dt term can be converted to a quaternion rotation and added using quaternion arithmetic.
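
The quaternion form of equation (1) described above can be sketched as follows; the small-angle rotation-quaternion construction is standard quaternion arithmetic, and nothing here beyond equation (1) itself is taken from the patent.

```python
import math

def quat_mul(q, r):
    """Hamilton product of quaternions stored as (w, x, y, z)."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def integrate_gyro(orientation, gyro, dt):
    """Equation (1) in quaternion form: convert the angular change
    Gyro(t)*dt to a rotation quaternion and compose it with the
    previous orientation estimate."""
    wx, wy, wz = (w * dt for w in gyro)          # angular change per axis
    angle = math.sqrt(wx*wx + wy*wy + wz*wz)
    if angle < 1e-12:
        return orientation                        # no appreciable rotation
    s = math.sin(angle / 2.0) / angle
    dq = (math.cos(angle / 2.0), wx * s, wy * s, wz * s)
    return quat_mul(orientation, dq)

# A half-turn about z applied to the identity orientation:
q = integrate_gyro((1.0, 0.0, 0.0, 0.0), (0.0, 0.0, math.pi), 1.0)
```

For the half-turn example, q comes out numerically equal to (0, 0, 0, 1), the quaternion for a 180° rotation about z.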

In equation (2) above, Acc(t) includes three orthogonal readings of acceleration at time t in the frame of reference of the object. If the accelerometers are not physically co-located with the gyroscopes, the computation first subtracts any accelerations resulting from the accelerometers rotating around the location of the gyroscopes. For example, if the accelerometers are displaced along the z-axis of the object, the following adjustments would need to be made to the accelerometer readings. Since Acc(t) and Gyro(t) are vectors, [0], [1], and [2] are used to refer to their individual scalar components.
Increase Acc(t+dt)[0] by AA[1]*zOffset − (Gyro(t+dt)[0]*Gyro(t+dt)[2])*zOffset  (4)
Increase Acc(t+dt)[1] by −AA[0]*zOffset − (Gyro(t+dt)[1]*Gyro(t+dt)[2])*zOffset  (5)
Increase Acc(t+dt)[2] by (Gyro(t+dt)[0]^2 + Gyro(t+dt)[1]^2)*zOffset  (6)
where
AA[0] = (Gyro(t+dt)[0] − Gyro(t)[0])/dt  (7)
AA[1] = (Gyro(t+dt)[1] − Gyro(t)[1])/dt  (8)
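The lever-arm adjustment of equations (4)-(8) might be sketched as follows. The function name and argument names are illustrative, and the index placement follows the reconstruction of the equations above (the extraction lost the original subscripts), so treat the exact signs as an assumption:

```python
import numpy as np

def lever_arm_adjust(acc, gyro_now, gyro_prev, dt, z_offset):
    """Remove accelerations caused by the accelerometer being displaced
    z_offset along the body z-axis from the gyroscopes, per (4)-(8)."""
    # Equations (7)-(8): angular acceleration estimated from successive
    # gyroscope readings.
    aa = (gyro_now - gyro_prev) / dt
    adj = acc.astype(float).copy()
    # Equation (4): tangential term from angular acceleration about y,
    # plus a centripetal cross term.
    adj[0] += aa[1] * z_offset - gyro_now[0] * gyro_now[2] * z_offset
    # Equation (5): tangential term from angular acceleration about x.
    adj[1] += -aa[0] * z_offset - gyro_now[1] * gyro_now[2] * z_offset
    # Equation (6): centripetal acceleration along the offset axis.
    adj[2] += (gyro_now[0] ** 2 + gyro_now[1] ** 2) * z_offset
    return adj
```

When the object is not rotating (all gyro terms zero), the adjustment vanishes and the accelerometer readings pass through unchanged.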

The adjusted accelerometer readings are translated from the object frame to the world frame using the current orientation of the object. Acceleration due to gravity (approximately 9.8 m/s/s on planet Earth's surface) is subtracted. The changes in each of the three dimensions of the object position can be found by multiplying by dt*dt.

With these or equivalent computations, the software elements 705 can generate estimates of position and orientation, as indicated by the dotted line 803. Due to the accumulation of errors in sensor readings, e.g., caused by noise, limited precision, or other factors, or possibly due to errors in transmission of the time series data, the software elements 705 are likely to generate a set of estimates of position and orientation 805 that differ at least somewhat from the actual position and orientation of the self-tracking object. Over time the difference can become large enough to be relevant to operation of the game console 704 and the game. For example, the difference might become large enough that an animation driven by the inferred position and orientation estimates appears more and more unrealistic to the player as time progresses.

From time to time, the software elements 705 receive additional information regarding position and orientation of the motion sensitive object 703 that becomes available at an identifiable time, with the effect that the software elements 705 are able to determine a new instantaneous position 806a and orientation 806b. For a first example, this can happen if the player stops moving the motion sensitive device 703, with the effect that an identifiable period of quiescence is entered. For a second example, the software elements 705 might receive readings from other sensors, such as a pointing device as described above, with the effect that at least some subset of the new instantaneous position 806a and orientation 806b can be more precisely inferred at that time. Some examples of computing these new estimates of configuration are described below.

When more precise information, or other corrective information, becomes available, the inventors have discovered that the information can be used for more than just obtaining more reliable estimates at that moment in time. In particular, the information can be used to infer something about the errors over at least some portion of the recent history of sensor readings. By taking those error estimates into account, a new trajectory can be calculated, shown as the solid line 804 in FIG. 8. The new trajectory 804 may still not be a perfect reflection of the actual trajectory of the self-tracking object, but the inventors have found it to be more accurate and useful than the original estimate. In particular, it can be used to drive an animation that, although delayed, appears to the player as a more accurate rendition of the motion 811 just performed.

In one embodiment, computation of the re-estimated trajectory includes the following elements and steps:

Computation of the re-estimated trajectory draws primary attention to two categories of errors. A first category includes errors that are essentially random in their effect on the sensors at different times. Such errors might be in response to noise in reporting of the sensor readings, truncation errors in reporting of the sensor readings due to limited precision, and the like. For one example, if gyroscope readings are reported as 8-bit values, this would have the effect of essentially random errors—the differences between the true values and the values that are rounded to this limited precision. A second category includes errors that are systematic, i.e., when a particular sensor has its data affected in a substantially consistent way over time. Such errors might be in response to miscalibration of that sensor (e.g., the sensor data reported for a true value of zero might be miscalibrated to a finite nonzero value).

Computation of the re-estimated trajectory first addresses errors in orientation. Labeling the starting time for this segment of the motion as t0 and assuming k updates at times t1 to tk inclusive, the predicted orientation at time tk will be
orientation(t0) + Σ(i=1 to k) Gyro(ti)*(ti − ti−1)  (9)

The predicted orientation can be forced to match the target orientation by adjusting each Gyro(ti)*(ti − ti−1) term by (tgtOrient − orientation(tk))/(tk − t0)*(ti − ti−1), i.e., by adjusting each Gyro(ti) by (tgtOrient − orientation(tk))/(tk − t0).
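The orientation backfit of equation (9) and the adjustment just described might be sketched as follows. As a simplification, orientation is represented here as a per-axis angle triple rather than a quaternion, and all names are illustrative:

```python
import numpy as np

def backfit_gyro(gyro_readings, times, orient0, target_orient):
    """Distribute the orientation discrepancy at t_k uniformly back
    across the gyro readings, per equation (9) and its adjustment.

    gyro_readings: k rows of 3-axis angular velocity, one per update.
    times: t_0 .. t_k (k+1 timestamps).
    Returns (adjusted_readings, predicted_orientation_at_tk)."""
    times = np.asarray(times, dtype=float)
    gyro = np.array(gyro_readings, dtype=float)
    dts = np.diff(times)  # (t_i - t_{i-1}) for i = 1..k
    # Equation (9): predicted orientation at t_k.
    predicted = orient0 + np.sum(gyro * dts[:, None], axis=0)
    # Shift each Gyro(t_i) by residual / (t_k - t_0), so the adjusted
    # integral lands exactly on the target orientation.
    rate_correction = (target_orient - predicted) / (times[-1] - times[0])
    adjusted = gyro + rate_correction
    return adjusted, predicted
```

Re-integrating the adjusted readings reproduces the target orientation exactly, since the per-reading shift times the total elapsed time equals the residual.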

The computation allocates these adjustments to the two categories of error described above. Viewing the category of random errors as a random walk of k steps, there should be a typical deviation of sqrt(k)*err, where err is the typical error for that sensor on an individual reading. This value can be determined by experimental analysis. The remaining error, if any, can be assumed to be an offset error on the gyroscopes and applied to future readings. In one embodiment, it might be desirable to limit the maximum corrections that are applied, attributing any residual corrections to further unknown factors.
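One way the allocation above might be sketched, for a single axis: attribute up to sqrt(k)*err of the total correction to random-walk noise, treat the remainder as a per-reading offset, and optionally cap that offset. The split rule and all thresholds here are illustrative assumptions, not values from the patent:

```python
import math

def split_correction(total_correction, k, per_reading_err, max_offset=None):
    """Split a total correction over k readings into a random-walk
    (noise) part and a systematic per-reading offset part."""
    # Typical deviation of a k-step random walk with per-step error err.
    noise_budget = math.sqrt(k) * per_reading_err
    if abs(total_correction) <= noise_budget:
        # The whole correction is plausibly noise; no offset inferred.
        noise, offset = total_correction, 0.0
    else:
        noise = math.copysign(noise_budget, total_correction)
        # Remaining error treated as a constant offset on each reading.
        offset = (total_correction - noise) / k
    # Optionally cap the offset, attributing the rest to unknown factors.
    if max_offset is not None:
        offset = max(-max_offset, min(max_offset, offset))
    return noise, offset
```

The returned offset would then be subtracted from future readings of that sensor, as described above.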

The computation applies a similar procedure to adjust the accelerometer readings using the new estimates of orientation during the updating of position. The procedure for position is somewhat more complicated since the adjustments applied to accelerometer readings will have different effects on the final position depending on the orientation of the sensor at the time. First, the computation assumes that the new adjusted orientations and centripetal accelerations are correct. The computation can then calculate the effect of each of the three accelerometer readings on each of the three components of position for each step, using equations (2) and (3) above. This has the effect that, for time tk
position(tk)[j] = position(t0)[j] + velocity(t0)[j]*(tk − t0) + Σ(i=1 to k) Ø*Acc(ti)*(ti − ti−1)*(tk − ti)  (10)

for each of the three components j of position

where

Ø is a vector indicating the effect that each component of Acc(ti) has on component j of velocity, given the orientation(ti).

This equation governs how changes in the Acc readings will affect the final position. The computation solves for the minimum adjustments to make to Acc in order to match the target position. The computation can then divide these adjustments between noise and offset errors, using the method described above.
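One way to find the minimum adjustments described above is a minimum-norm least-squares solve: stack the per-reading influence terms of equation (10) into a matrix and apply the pseudoinverse. This is a sketch under the assumption that the role of Ø is played by the rotation matrix mapping each body-frame Acc(ti) into the world frame; all names are illustrative:

```python
import numpy as np

def backfit_acc(acc, rotations, times, pos_error):
    """Minimum-norm accelerometer adjustments closing the position gap
    pos_error at t_k, per equation (10).

    acc: k body-frame accelerometer readings (k x 3).
    rotations: k world-from-body rotation matrices (the role of Ø).
    times: t_0 .. t_k (k+1 timestamps)."""
    times = np.asarray(times, dtype=float)
    k = len(acc)
    # An adjustment d_i to Acc(t_i) moves the final position by
    # R_i d_i * (t_i - t_{i-1}) * (t_k - t_i); collect these columns.
    cols = []
    for i in range(k):
        w = (times[i + 1] - times[i]) * (times[-1] - times[i + 1])
        cols.append(rotations[i] * w)
    A = np.hstack(cols)  # shape (3, 3k)
    # Pseudoinverse gives the minimum-norm solution of A d = pos_error.
    d = np.linalg.pinv(A) @ pos_error
    return np.array(acc, dtype=float) + d.reshape(k, 3)
```

The pseudoinverse choice is what makes the adjustments "minimum": among all adjustment sets that reach the target position, it picks the one of smallest total magnitude.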

The process of re-estimation, or backfitting, is not restricted to occurring just once at the end of a motion 811. Whenever extra information becomes available during the course of a motion 811, however short or long, that extra information can be incorporated as a resynchronization and re-estimation step for the current motion 811 and any past portions thereof. It can also be used going forward to more reliably estimate any errors that might otherwise be introduced by the motion sensing device 703. Line 808 shows a first position and orientation path generated by the software elements 705 using a first set of estimation parameters. Line 807 shows a second position and orientation path generated by the software elements 705 using a second set of estimation parameters, after incorporating new information as described above and re-estimating the position and orientation path shown by line 808. This process can be applied repeatedly and iteratively, with the effect that the software elements 705 might accurately determine relatively longer sequences of faithful tracking of the motion 811.

The computation described above uses information about the target configuration to adjust the estimated trajectory and sensor errors. In alternative embodiments, the computation may use information about target velocities. In such cases, the computation uses a symmetric procedure obtained by substituting equation (11) below for equation (10) above.
velocity(tk)[j] = velocity(t0)[j] + Σ(i=1 to k) Ø*Acc(ti)*(ti − ti−1)  (11)

Referring now to FIG. 9, there follows a description of a data flow diagram 900 of information flow in a game control system.

A self-tracking object 917 provides a set of raw sensor readings 902, which are received from the self-tracking object 917 by a device driver 901. The device driver 901 applies hardware calibration steps to produce a set of calibrated sensor readings 908. Techniques for hardware calibration are known to those skilled in the art and include, for example, (a) modifying the raw sensor readings 902 according to known or calculated temperature variations, and (b) compensating for errors that may have been introduced in the manufacturing process of the self-tracking object 917. Manufacturing errors might be detected in a calibration step performed in the factory when the self-tracking object 917 is manufactured.

In one embodiment, the game 911 will formulate assumptions 910 about the initial configuration of the self-tracking object, i.e., assumptions that the software elements 705 should make about that initial configuration. For example, the game 911 might supply one or more components of the initial position or orientation of the object.

A configuration initiator 909 receives those assumptions 910 supplied by the game 911. From those assumptions 910, the configuration initiator 909 determines an initial configuration that will be used by the tracker 918.

In one embodiment, the game 911 provides an initial position for the object and an assumed value for the rotation of the object around the axis corresponding to gravity, here labeled the z-axis. The other two components of the orientation can be computed by the configuration initiator 909 in response to readings from the inertial sensors. This computation can be performed when the object is at rest.

In one embodiment, the configuration initiator 909 can use information from the sensors to infer whether the device is currently in motion. For example, if the self-tracking object 917 can be assumed or detected to be still, gravity readings can be used to infer orientation information. When the device is relatively motionless, the gyroscope readings will all be close to zero and the acceleration reported by the sensors should be due almost entirely to gravity. In such cases, the accelerometer readings should be consistent over time and have a norm approximating an acceleration of one gravity (approximately 9.8 m/s/s). When these conditions are substantially met, the configuration initiator 909 determines that the object is substantially at rest.

The configuration initiator 909 can determine two components of orientation by finding the rotations in the world frame needed to align the accelerometer readings entirely along the z-axis. In one embodiment, the configuration initiator 909 determines a set of rotations to align the axis with the largest accelerometer reading. This computation might be performed as shown in the following pseudo-code:

if largestAxis is Positive X  (12)
    largeRot[Z] = M_PI/2; largeRot[X] = M_PI/2;
else if largestAxis is Negative X
    largeRot[Z] = −M_PI/2; largeRot[X] = M_PI/2;
if largestAxis is Positive Y  (13)
    largeRot[Z] = M_PI; largeRot[X] = M_PI/2;
else if largestAxis is Negative Y
    largeRot[Z] = 0; largeRot[X] = M_PI/2;
if largestAxis is Positive Z  (14)
    largeRot[Z] = 0; largeRot[X] = M_PI;
else if largestAxis is Negative Z
    largeRot[Z] = 0; largeRot[X] = 0;
set initialOrientation using largeRot;  (15)
gravReading = initialOrientation*Acc;  (16)
rotX = −atan(gravReading(Y)/gravReading(Z));
adjust initialOrientation by rotating an additional rotX around the X axis  (17)
gravReading = initialOrientation*Acc;  (18)
rotY = atan(gravReading(X)/gravReading(Z));
adjust initialOrientation by rotating an additional rotY around the Y axis  (19)
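An equivalent of steps (12)-(19) can be written compactly with the quadrant-correct arctangent, which removes the need for the largest-axis pre-rotation. This is a sketch, not the patent's code; it recovers only the two gravity-constrained components (roll and pitch), leaving the rotation around the z-axis to be supplied by the game as described below:

```python
import numpy as np

def orientation_from_gravity(acc):
    """Recover roll and pitch from a rest-time accelerometer reading
    (body-frame), leaving yaw undetermined."""
    ax, ay, az = acc
    # Rotation about the x-axis that moves gravity into the y=0 plane.
    roll = np.arctan2(ay, az)
    # Rotation about the y-axis that then aligns gravity with the z-axis.
    pitch = np.arctan2(-ax, np.hypot(ay, az))
    return roll, pitch
```

With the object level and at rest (gravity entirely along z), both angles come out zero, matching the Negative/Positive Z branches of the pseudo-code.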

The configuration initiator 909 sets the remaining component using input from the game. In one embodiment, the configuration initiator 909 presumes the rotation around the z-axis is zero.

If other sensor readings are available, for example from a pointing device as described above, the configuration initiator 909 might use these other sensor readings to determine information about the initial configuration.

Initially, the tracker 918 assumes that the current configuration is the initial configuration 912. As time passes, the tracker 918 determines the current configuration by applying changes to the initial configuration in response to calibrated sensor readings 908.

In one embodiment, the sensors in the self-tracking object 917 include gyroscopes and accelerometers sufficient to track changes in position and orientation of the self-tracking object 917. As described above, the software elements 705 integrate and combine gyroscope and accelerometer readings according to known principles.

Depending on the accuracy and resolution of the sensors in the self-tracking object 917, known techniques alone are unlikely to be sufficient to produce configuration estimates reliable enough for use in computer games. The inventors have therefore discovered techniques that significantly improve the reliability of the estimates and thereby enable a new class of applications.

In one embodiment, the tracker 918 applies constraints to the calibrated sensor readings 908. These constraints include clamping the readings to allowable ranges, clamping values calculated from the readings to known ranges, introducing a drag term, or requiring a minimum impulse to act as a threshold to avoid misinterpreting hand tremors as significant motion.
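The constraints just listed might be sketched, for a single reading, as follows. Every numeric limit here (acceleration range, velocity range, drag coefficient, minimum impulse) is an illustrative assumption:

```python
import numpy as np

def constrain_reading(acc, velocity, dt,
                      acc_limit=40.0, vel_limit=10.0,
                      drag=0.5, min_impulse=0.15):
    """Apply range clamps, a minimum-impulse threshold, and a drag
    term to one calibrated accelerometer reading; returns the
    constrained velocity."""
    # Clamp the reading to its allowable range.
    acc = np.clip(acc, -acc_limit, acc_limit)
    # Minimum impulse: treat tremor-level accelerations as zero.
    if np.linalg.norm(acc) < min_impulse:
        acc = np.zeros(3)
    velocity = velocity + acc * dt
    # Drag term pulls velocity toward zero, damping accumulated error.
    velocity = velocity * max(0.0, 1.0 - drag * dt)
    # Clamp the derived value (velocity) to its known range.
    return np.clip(velocity, -vel_limit, vel_limit)
```

The drag term is what keeps small uncorrected errors from accumulating into a slow, spurious drift of the estimated configuration.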

Once the tracker 918 has constrained the calibrated readings, it uses known techniques, from the art of inertial guidance and related fields, to generate a configuration estimate 905. The configuration estimate is sent to a corrector 907 that adjusts the estimate to produce a corrected estimate 916. In one embodiment, the corrections are dynamically calculated using the backfitting algorithm described above, stored 903, and periodically updated 906. Examples, not intended to be limiting in any way, of corrections include:

The tracker 918 might partially determine constraints on motion of the self-tracking object 917 by presuming that the self-tracking object 917 follows motions restricted by a mechanical model of a human figure. For example, if it is known that a position estimate would violate an assumption about limb length, such as a motion that would only occur if the game player's arm were to bend unrealistically, the estimate 905 can be corrected accordingly.

In one embodiment, the tracker 918 might make estimates for position and orientation of a model of the figure of the human being as that person is moving the self-tracking object 917. For example, the tracker 918 might determine an estimate of the position and orientation of the arm, shoulder, and hand holding the controller of that human figure, at each time step in a periodic (or otherwise defined) time sequence. This would include estimating angles for each relevant joint of the human body as well as, possibly, extensions of musculature, collectively referred to herein as "poses", using, e.g., known techniques for inverse kinematics such as those described in R. FEATHERSTONE, ROBOT DYNAMICS ALGORITHMS.

In such embodiments, the tracker 918 would examine each such estimated pose and adjust its estimates for the likelihood of the estimated pose (in addition to or in lieu of adjusting its estimates for the likelihood of the estimated position and orientation of the self-tracking object 917).
Likelihood of any particular estimated pose might be determined using (a) information about human physiology, e.g., how elbows, shoulders, wrists, and the like are able to rotate, and (b) information about the particular application in which the self-tracking object 917 is being used as a controller.

For example, in embodiments in which the self-tracking object 917 is being used to simulate a baseball bat, e.g., in a sports game, the tracker 918 can evaluate the likelihood of particular baseball-bat-swinging motions, and assign poses relating to those swinging motions in response to their likelihood if performed by a baseball-bat-swinging player. This would have the effect that poses in which the baseball-bat-swinging player contacts an (imaginary) baseball have greater likelihood than otherwise. Moreover, the tracker 918 can take advantage of one or more simplifying assumptions, e.g., that the baseball-bat-swinging player is standing relatively still and upright while swinging the self-tracking object 917 with a two-handed grip.

In such embodiments, when the tracker 918 encounters an estimated pose (or sequence of poses) that it deems unlikely, either given human physiology or the nature of the information about the particular application in which the self-tracking object 917 is being used as a controller, the tracker 918 can (a) adjust that estimated pose or sequence of poses to one that is more likely, and (b) re-estimate the motion of the self-tracking object 917, and consequently the pose or sequence of poses, to conform with that adjustment.

In alternative embodiments, it might occur that an explicit model of poses for the figure of the human being, as that person is moving the self-tracking object 917, might not be necessary. In such cases, the tracker 918 may use logical assumptions about motions of that human being to determine whether any particular pose, or sequence of poses, is likely or unlikely. For example, if the human being is, by the nature of the application, assumed likely to be standing in a relatively fixed location, any reported or estimated position of the self-tracking object 917 too far from that relatively fixed location may be adjusted in response to that distance. This has the effect that any reported or estimated position of the self-tracking object 917 would be substantially constrained to remain within a box or sphere surrounding the human being's initial position and limited by that human being's typical physical reach.
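The sphere-of-reach constraint just described might be sketched as follows; the default reach of 1.2 meters is an illustrative assumption, as are the names:

```python
import numpy as np

def clamp_to_reach(position, anchor, reach=1.2):
    """Constrain an estimated controller position to a sphere of
    typical arm reach around the player's assumed standing location."""
    offset = position - anchor
    dist = np.linalg.norm(offset)
    if dist <= reach:
        return position
    # Project back onto the sphere surface along the same direction,
    # preserving the estimated direction while limiting the distance.
    return anchor + offset * (reach / dist)
```

A box constraint would be the same idea with a per-axis clip instead of a radial projection.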

The tracker 918 might, from time to time, detect the angular orientation of the self-tracking object 917 with respect to gravity, i.e., at what angle from "up" or "down" the self-tracking object 917 is instantaneously pointing. For example, if the self-tracking object 917 enters a quiescent state, the angular orientation of the self-tracking object 917 can be adjusted accordingly.

The tracker 918 might, from time to time, assume that the self-tracking object 917 is in a period of likely quiescence, such as for example when the game indicates that there is nothing for the game player to do, and the game player is thus likely not moving the self-tracking object 917. If the tracker 918 is able to detect a period of likely quiescence, the relative velocity and angular velocity of the self-tracking object 917 can be determined, and parameters describing the position and orientation of the self-tracking object 917 can be adjusted accordingly.

The tracker 918 might, from time to time, receive data from the user, such as a game player, indicating information the user supplies that can be used to aid in determining the position and orientation of the self-tracking object 917.

For a first example, if the user pushes a button on the self-tracking object 917 in an application in which that button is used to simulate a gun (e.g., a "first-person shooter" game), the tracker 918 might use the timing of that button-press to restrict the set of possible positions or orientations of the self-tracking object 917, e.g., to those in which the self-tracking object 917 is oriented so that the simulated gun is actually pointed toward a target. For a second example, if the user enters text data using a console associated with the self-tracking object 917, the tracker 918 might use that text data (or the fact that such text data is being entered) to restrict the set of possible positions or orientations of the self-tracking object 917, with the effect that the tracker 918 might adjust its determination of position and orientation of the self-tracking object 917 accordingly.

The tracker 918 might, from time to time, receive input values from additional sensors, such as for example a light pen, infrared remote sensor, or other indicator of the orientation of the self-tracking object 917. The tracker 918 might use those values from additional sensors to restrict the set of possible positions or orientations of the self-tracking object 917, with the effect that the tracker 918 might adjust its determination of position and orientation of the self-tracking object 917 accordingly.

The corrected estimate 915 can then be further corrected based on in-game constraints and assumptions. Examples, not intended to be limiting in any way, of in-game corrections include:

The tracker 918 might determine constraints on the position and orientation of the self-tracking object 917 according to restrictions it believes apply to the set of possible final configurations of the self-tracking object 917 at the end of a motion. For example, if the end of a motion (or any part of a motion) would place the self-tracking object 917 in the same spatial location as the user's head or body, or in the same spatial location as a wall or the game controller itself, the tracker 918 might restrict the set of possible positions and orientations of the self-tracking object 917 to exclude that possibility, and adjust its determination of the position and orientation of the self-tracking object 917 accordingly.

The tracker 918 might apply game constraints to the set of possible motions of the self-tracking device 917, such as if the self-tracking device 917 is being used by the game player to emulate a particular type of object (e.g., to use a sword in a fantasy game, or to use a golf club in a sports game). The tracker 918 might therefore restrict the set of possible motions, and therefore changes in relative position and orientation of the self-tracking device 917, accordingly.

For a first example, in an application in which the self-tracking object 917 were used to simulate a sword (e.g., a fantasy role-playing game), the tracker 918 would be able to restrict the possible motions of that simulated sword so that it cannot pass through, or terminate its motion in, certain types of objects in the game world, e.g., solid walls or other swords.

For a second example, in an application in which the self-tracking object 917 were used to simulate a baseball bat in a baseball game, the tracker 918 would be able to restrict the possible motions of that simulated baseball bat so that it remains in or near a strike zone in the baseball game. This would have the effects of (a) limiting the scope of possible animation, thus simplifying the task of performing that animation, (b) detecting relatively larger errors in tracking of the self-tracking object 917, and (c) detecting anomalous behavior by the human being, such as if that human being decides to walk away from the simulated batting region.

An application using the self-tracking device 917 might involve use of motion recognition signals, such as for example as described in detail in U.S. application Ser. No. 11/486,997, "Generating Motion Recognizers for Arbitrary Motions". In such cases, the motion recognition signal classifies the movement of the self-tracking device 917 into one (or possibly more than one) of a set of preselected classes of motions.

For one example, in an application in which the self-tracking device 917 is used to simulate one or more kitchen utensils (e.g., a game, simulation, or teaching environment relating to cooking), the tracker 918 might use a motion recognizer that classifies motions by the self-tracking device 917 into known gestures used in those environments, e.g., frying, flipping, chopping, pounding, and the like. An arbitrary motion by the human being holding the self-tracking device 917 would be classified into one or more of these known gestures, with the effect of providing a motion recognition signal assigning the motion to one or more of those known gestures.

In various embodiments, the motion recognition signal might (a) uniquely classify the motion as a particular gesture, (b) classify the motion as one of a set of possible gestures, (c) associate the motion with a probability or other score of being each one of those possible gestures, and the like.

The tracker 918 can use the knowledge it obtains from the motion recognition signal, i.e., assignment of the motion to a particular class of gestures, to restrict the set of possible estimates of position and orientation of the self-tracking device 917 to those consistent with the motion recognition signal.

For example, not intended to be limiting in any way, if the motion recognition signal indicates that the self-tracking device 917 (simulating a frying pan) has just been used to flip an omelet, any sensor readings or time series data received from the self-tracking device 917 inconsistent with that gesture (e.g., motions more likely to be associated with chopping vegetables or pounding meat) might be deemed more likely to be erroneous, unintended, or insignificant. The tracker 918 might then dampen use of those sensor readings or time series data, with the effect of improved, or at least more consistent, estimation of position and orientation of the self-tracking device 917. Moreover, if the inconsistent motions resulted from unconscious or unintended movement by the human being holding the self-tracking device 917, the tracker 918 would be able to provide that human being with a perception of improved tracking.

In various embodiments, the motion recognition signal might provide additional information relating the motion to particular gestures, such as possibly (a) an evaluation of a measure of distance between that motion and each classified gesture, or (b) an evaluation of a measure of distance between that motion and particular prototypes within particular classes of gesture.

For example, not intended to be limiting in any way, if the motion recognition signal indicates that the self-tracking device 917 (simulating a frying pan) has just been used to flip an omelet, but that there is a reasonable alternative interpretation that the self-tracking device 917 (simulating a sharp knife) has just been used to chop vegetables, the tracker 918 might use the ambiguity between these possibilities to choose to be less aggressive about dampening use of those sensor readings or time series data that are ambiguous.

In one embodiment, a canonical animation might be associated with each particular gesture, with the effect that the animation actually presented in response to a particular motion might be a blend of the estimated actual motion of the self-tracking device 917 and of the canonical animation assigned to the detected-and-classified gesture. In some applications, this would allow the tracker 918 to perform a "snap to fit" function, i.e., to present the actual motion in (one of) the way(s) the gesture should ideally be performed, rather than the approximation actually performed by the human being. In alternative embodiments, the presented animation may be a weighted blend of the canonical animations associated with the more than one classes of gesture the motion was detected-and-classified to be. Relative weights of that blend might be responsive to the measures of distance to each class, to the probabilities associated with each class, and the like.

Similarly, a canonical animation might be associated with each particular prototype within a particular gesture class. In such cases, the animation actually presented in response to a particular motion might be (a) a blend of the canonical animations associated with those prototype gestures, (b) snapped to fit a selected one of the canonical animations associated with those prototype gestures, or (c) a blend of the canonical animations associated with one or more of those prototype gestures and the actual motion of the self-tracking device 917. In each such case, weights associated with each possibility for the blend might be responsive to measures as described above.
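The weighted blending described above might be sketched as follows, under the simplifying assumption that each candidate animation pose is represented as a flat parameter vector (e.g., joint angles at one time step) and that the weights come from class probabilities or distance measures; all names are illustrative:

```python
import numpy as np

def blend_poses(candidates, weights):
    """Weighted blend of candidate canonical poses, one per gesture
    class or prototype. Weights are normalized so the result is a
    convex combination of the candidates."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    poses = np.asarray(candidates, dtype=float)
    # Weighted sum across candidates, per pose parameter.
    return (w[:, None] * poses).sum(axis=0)
```

The estimated actual motion can be included as one more candidate with its own weight, which covers variant (c) above.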

The tracker 918 might, from time to time, receive DPD (direct pointing device) readings, such as for example determining that the self-tracking object 917 is aligned in a known orientation from alignment of the self-tracking object 917 with a set of LEDs or other electromagnetic or sonic alignment elements. The tracker 918 might use those DPD readings to restrict the set of possible positions or orientations of the self-tracking object 917, with the effect that the tracker 918 might adjust its determination of position and orientation of the self-tracking object 917 accordingly.

The tracker 918 might, from time to time, presume that the self-tracking object 917 is no longer moving, such as for example when the game controller indicates that there is no action for the user to take. The tracker 918 might use that assumption to restrict the set of possible positions or orientations of the self-tracking object 917, with the effect that the tracker 918 might adjust its determination of position and orientation of the self-tracking object 917 accordingly.

The game corrected estimate 912 is communicated back to the game, where it is typically used to drive an animation that is intended to correspond to the motion of a game element corresponding to the self-tracking object 917. For example, if the game element is a sword (in a fantasy game) or a golf club (in a sports game), a presentation would be made of that game element moving in accordance with the way the user moved the self-tracking device 917.

In one embodiment, the tracker 918 assumes a one-to-one mapping of the motion of the self-tracking object 917 to motion of a simulated object in a virtual environment, the latter being presented to a user using animation. In alternative embodiments, other and further mappings are possible, as described below.

In one set of alternative embodiments, the virtual environment might determine a force or other motive power in that virtual environment, in response to changes in position or orientation of the self-tracking object 917. For example, a sharp change in position or orientation of the self-tracking object 917 might be interpreted as a directive to impart a throwing or maneuvering force on an object in that virtual environment. In such examples, the virtual environment would (a) determine an amount of force to apply, (b) in response thereto, determine a set of changes in position or orientation of that object in that virtual environment, and (c) in response thereto, determine an animation of that virtual environment including that object.

Examples of such action in response to changes in position or orientation of the self-tracking object 917 include:

the self-tracking object 917 might be used to simulate a throwable object, e.g., a baseball, in the virtual environment, with changes in position or orientation of the self-tracking object 917 being used to determine how hard and in what direction that object is thrown (e.g., the user might pretend to throw a baseball using the self-tracking object 917 to simulate the baseball, being careful of course not to actually throw the self-tracking object 917 at the game controller, unless that self-tracking object 917 is padded or otherwise secured to allow for actual throwing);

the self-tracking object 917 might be used to simulate a tool for striking or throwing, e.g., a baseball bat, in the virtual environment, with changes in position or orientation of the self-tracking object 917 being used to determine a force or other motive power in that virtual environment, with that force being used to determine how hard and in what direction an object is struck (e.g., the user might pretend to hit a baseball with a baseball bat, using the self-tracking object 917 to simulate the baseball bat);

similarly, the self-tracking object 917 might be used to simulate a tool in the virtual environment, such as a light switch or a trigger of a gun, with the effect that changes in position or orientation of the self-tracking object 917 would be interpreted by that virtual environment to indicate that sufficient force had been applied in that virtual environment to switch on a light or fire a gun.
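The force-from-motion mapping described above can be sketched in code. The following Python is a minimal illustration, not taken from the patent: it integrates a short window of linear-acceleration samples captured during the gesture to obtain a release velocity for the simulated thrown object. The function name, sample format, and time step are all hypothetical.

```python
import math

def throw_velocity(accel_samples, dt):
    """Integrate 3-axis linear-acceleration samples (m/s^2), one tuple
    per time step of length dt, over a throwing gesture.  The resulting
    velocity vector sets the direction of the virtual throw; its
    magnitude sets how hard the object is thrown."""
    v = [0.0, 0.0, 0.0]
    for a in accel_samples:
        for i in range(3):
            v[i] += a[i] * dt  # accumulate acceleration into velocity
    speed = math.sqrt(sum(c * c for c in v))
    return v, speed
```

A game would then hand `v` and `speed` to its physics step as the initial velocity of the simulated baseball.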

In one set of alternative embodiments, a viewpoint (such as a viewpoint of a user or of a camera) in the virtual environment might be responsive to changes in position or orientation of the self-tracking object 917. In such embodiments, the user may be allowed to indicate a direction or to select objects in a virtual environment (such as a virtual reality depiction of a possibly-fictional 3D environment). The user might be provided with this capability even in circumstances where the user's view is otherwise obstructed in the virtual reality depiction. For one example, the user might be physically holding the self-tracking object 917, with the virtual environment simulating an object corresponding to that object. The simulated object, or "CVD" (corresponding virtual device), could be virtually any object, such as a gun, a whip, or a pointing device (e.g., a laser pointer). In the virtual environment, a line segment (or another path) is computed, coupled to the CVD and extending into a region of the virtual environment near the CVD. The computation is based on changes in one or more of position or orientation of the self-tracking object 917, possibly combined with additional information from the virtual environment. This could include setting the direction of the line segment or other path in direct correspondence to the change in orientation of the self-tracking object 917. The line segment might be straight, such as for a laser pointer, or nearly so, such as for a gun (taking into account gravity and windage), or might be deliberately curved, such as for a whip. The computed line segment thus represents a line segment desired by the user.
If the computed line segment intersects an object or construct (e.g., a non-physical "object" such as a surface or region) in the virtual environment, e.g., touching a virtual object, plane, or character, the virtual environment determines that the user is deliberately selecting or otherwise indicating that intersected object or construct.
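The intersection test described above amounts to casting a ray against the objects of the virtual environment. A minimal Python sketch, under the assumption that each selectable object is approximated by a bounding sphere (the representation and all names are illustrative, not from the patent):

```python
import math

def pick_object(origin, direction, objects):
    """Return the name of the nearest object whose bounding sphere the
    ray hits, or None.  `objects` is a list of (name, center, radius);
    `origin` and `direction` are 3-vectors (direction need not be
    normalized)."""
    norm = math.sqrt(sum(d * d for d in direction))
    d = [c / norm for c in direction]
    best, best_t = None, float("inf")
    for name, center, radius in objects:
        oc = [center[i] - origin[i] for i in range(3)]
        t = sum(oc[i] * d[i] for i in range(3))  # projection onto the ray
        if t < 0:
            continue  # sphere lies behind the ray origin
        closest2 = sum(c * c for c in oc) - t * t  # squared miss distance
        if closest2 <= radius * radius and t < best_t:
            best, best_t = name, t
    return best
```

The ray origin and direction would come from the CVD's tracked position and orientation; the nearest hit is the object the user is indicating.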

For example, FIG. 10 shows a three-dimensional virtual environment 1000 including various objects 1001 and 1002. A CVD 1005 is a representation of the self-tracking object 917 in the virtual environment 1000. As described above, objects and constructs can be selected using some line segment (or "ray") 1006. As the self-tracking object 917 is moved, thus changing its position or orientation, the CVD at its new location 1004 and the line segment at its new location 1003 allow the user to select a new object 1002. This allows game players in virtual worlds to select objects that are otherwise obscured along their normal viewing plane.

Examples of using this method could include a game player holding a self-tracking object 917 as a controller that corresponds, in a virtual environment, to a gun, flashlight, or whip. The player can then move the controller around to indicate the direction of the gun, flashlight, or flexible cable. Changes in the position or orientation of the self-tracking object 917 could be applied to the corresponding CVD.

In examples in which the self-tracking object 917 corresponds in the virtual world to a gun (or another projectile device, such as a bow), a game player might be taking cover in a virtual environment behind a low wall, as shown on the left-hand side of FIG. 11. Using the technique described above, the game player would lift the self-tracking object 917, causing the CVD gun to move within the virtual environment, and point the CVD gun at an angle down into the area behind the wall, without the game player changing their vantage point, as shown on the right-hand side of FIG. 11.

Similarly, in examples in which the self-tracking object 917 corresponds in the virtual world to a flashlight (or another pointing device, such as a laser pointer), the game player would be provided with the capability for indicating any angle for the CVD flashlight into the screen, and would not be restricted to only those origin points above the viewing plane.

FIG. 12, on the left-hand side, shows an object in the virtual environment obscuring the game player's view deeper into the room. Using the technique described above, the game player would move the self-tracking object 917 to cause the CVD flashlight to illuminate regions behind the obstructing object without requiring any change in vantage point within the virtual environment, as shown on the right-hand side of FIG. 12.

Similarly, in examples in which the self-tracking object 917 corresponds in the virtual world to a whip (or another flexible cable, such as a rope), the game player would be provided with the capability of "reaching around" an obstructing object in the virtual environment, e.g., to thread the cable through a pipe that would otherwise be unseen by the game player. The technique described above allows a game player to indicate direction within a virtual environment using all six degrees of freedom of position and orientation of the self-tracking object 917. This has substantial advantages over known methods of indicating direction.

The process will typically repeat in a game as the player moves the controller around in response to stimuli and instructions from the game.

Referring now to FIG. 13, there follows a description of the control flow in one embodiment.

The game starts and there is some initial period of setup. This setup may include memory allocation and any other well-known steps.

The game then waits for a signal that a button has been pressed on the self-tracking object. Pressing a button is only one example of a starting criterion for tracking to begin. Alternatively, the game may signal the start based on internal state and communicate suitable instructions to the player.

Once the game has indicated that it wants to start tracking, the self-tracking object may still not yet be ready to begin tracking. For example, in one embodiment, a brief period of quiescence might be required before tracking can begin, so the player still needs to hold still for a while after pressing the button. Alternatively, additional sensor readings may be required in order to determine an initial configuration. For example, the player might initially point at a direct pointing device.
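The quiescence requirement mentioned above can be as simple as checking that recent accelerometer readings stay within a small band on every axis. A hypothetical sketch (the window contents, tolerance, and function name are illustrative, not from the patent):

```python
def is_quiescent(accel_window, tol=0.05):
    """Return True when the recent 3-axis accelerometer readings in
    `accel_window` (tuples in units of g) vary by less than `tol` on
    every axis, i.e. the player is holding the controller still enough
    for tracking to begin."""
    for axis in range(3):
        vals = [sample[axis] for sample in accel_window]
        if max(vals) - min(vals) > tol:
            return False  # too much movement on this axis
    return True
```

The game would poll this over a sliding window after the button press and only then set up the initial configuration.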

Once tracking can proceed, the initial configuration is set up from any information available to the tracker 918.

The tracker 918 then iterates the process described above, in which the tracker 918 performs the following: the tracker 918 receives time series data from the self-tracking device 917; makes estimates of the position and orientation of the self-tracking device 917 in response to those time series data; receives information allowing it to make a better estimate of the position and orientation of the self-tracking device 917; and updates its estimates for the most recent set of historical positions and orientations of the self-tracking device 917.

In one embodiment, this process is repeatedly iterated until the tracker 918 receives a signal that tracking can cease. Termination signals could be time based, game event based, or button based.

The tracker 918 continues to make estimates of the position and orientation of the self-tracking device 917, until the game finishes.
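The estimate-and-correct loop iterated above can be sketched as follows. This is an illustrative dead-reckoning skeleton, not the patent's algorithm: it naively double-integrates linear acceleration into position, single-integrates angular rate into a small-angle orientation, and lets a caller-supplied `correct` callback fold in any better estimate (for example, one derived from an optical alignment element). All names are hypothetical.

```python
def run_tracker(batches, correct, dt=0.01):
    """Each batch is (linear_accel, angular_rate), both 3-vectors in
    body units.  Returns the history of (position, orientation)
    estimates, one entry per batch."""
    pos = [0.0, 0.0, 0.0]
    vel = [0.0, 0.0, 0.0]
    orient = [0.0, 0.0, 0.0]  # small-angle Euler sketch, radians
    history = []
    for accel, gyro in batches:
        for i in range(3):
            vel[i] += accel[i] * dt     # acceleration -> velocity
            pos[i] += vel[i] * dt       # velocity -> position
            orient[i] += gyro[i] * dt   # angular rate -> angle
        pos, orient = correct(pos, orient)  # refine with extra information
        history.append((list(pos), list(orient)))
    return history
```

A real tracker would also subtract gravity, compose rotations properly, and revise recent history when a correction arrives; those steps are omitted here for brevity.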

Alternative Embodiments

While the preferred embodiments of this invention have been described above, there are many variations which can be understood and derived from the concept and principles set forth.

Such potential variations and embodiments include the following.

In the case of some game controller configurations, clever game design can be used to take advantage of some set of assumptions to give an illusion of enhanced motion tracking. For example, a player may be instructed to hold the controller in a certain way and move along a certain axis. Analysis of the sensor data can then allow a corresponding animation to be rendered. However, this approach has its limitations. If the player violates any of the assumptions, the animation produced will typically not correspond to the player's actual motion.

In some cases, sensor data provided by the controllers of this invention may be analyzed and compared to a provided standard data output that corresponds to specific animations. The animation to which the sensor data is the best match is then selected and played. It is also possible to modify the selected animation based on the degree of correspondence between the sensor data and the best match. For example, if the sensor data indicates that the motion is a faster version of some provided animation, then the animation can be played at a correspondingly faster speed.
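The best-match selection described above can be sketched as a nearest-neighbor search over reference sensor traces, with a playback-speed factor derived from the relative energy of the observed and reference traces. Both the distance metric and the speed heuristic below are illustrative assumptions, not details from the patent.

```python
def pick_animation(sensor_seq, library):
    """Return (name, speed) where `name` is the library animation whose
    reference sensor trace is closest (squared error) to the observed
    `sensor_seq`, and `speed` scales playback by the square root of the
    energy ratio between the observed and reference traces."""
    def energy(seq):
        return sum(x * x for x in seq)

    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    best = min(library, key=lambda name: distance(sensor_seq, library[name]))
    ref_energy = energy(library[best])
    speed = (energy(sensor_seq) / ref_energy) ** 0.5 if ref_energy else 1.0
    return best, speed
```

A faster-than-reference swing thus yields a speed factor above 1.0, so the chosen animation plays correspondingly faster.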

Most currently available game controllers do not contain the required six-axis accelerometer configuration to fully determine the player's actual motion in a gyroscope-free controller. For example, in some modern game controllers there are only three approximately co-located accelerometers or a single tri-axial accelerometer. Using such controllers to render an animation on the screen that corresponds to a player's motion requires strong assumptions to be made about the player's intended motion. In some cases, this requirement can be mitigated with known techniques. For example, some modern game controllers contain an infrared sensor that, when pointed at some direct point device (DPD), provides additional information that can be used to determine more about player movement. However, the player's movement has to be restricted to a narrow range of motions that keep the DPD within range of the infrared sensor.

The concepts of the present invention may be extended to add more sensors into the system. The above described general algorithm may be extended to such configurations. For example, three motion sensing game controllers could together have nine accelerometer sensing axes, not just six. The sensing of the three additional axes could provide feedback to be applied to the general algorithm.

Similarly, the general algorithm could be applied over shorter time scales. There may be many potential competing errors in the system. The samples per second may be reduced to limit sensitivity over time, while trading off against integration errors. This is based in part on the time scale on which a human movement occurs. Based on the concept of the present invention, a cube with accelerometers placed in a certain configuration on each face can reliably track position and orientation of the controller for longer periods of time. Such a cube could be mounted on a controller, e.g., via an appropriate dongle connection.

In configuring the composite structure of the self-contained inertial sensors, whether in or out of controllers, so as to select the best position and orientation of those sensors to provide a feasible composite controller, additional parameters that describe each sensor and the physical relationship of the different sensors within a sufficiently rigid body must be taken into account. For example, the configuration estimate for the composite controllers in FIGS. 1-3 could include estimates of: the self-contained inertial sensor reading when the device is at rest; the upper and lower range of the self-contained inertial sensor; the sensitivity of the self-contained inertial sensor; how the sensitivity varies with time, temperature, and other conditions; the relative positions of the individual self-contained inertial sensors within the controller; the physical size of the controller; the distance between each controller; the relative orientation of each controller from one another; the relative position of the center of mass of each controller.
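The per-sensor configuration parameters enumerated above can be collected in a small record. The field names, default values, and the clamping helper below are illustrative assumptions, not from the patent:

```python
from dataclasses import dataclass

@dataclass
class SensorCalibration:
    """One self-contained inertial sensor's configuration estimate."""
    rest_reading: float = 0.0           # output when the device is at rest
    range_low: float = -16.0            # lower bound of measurable range (g)
    range_high: float = 16.0            # upper bound of measurable range (g)
    sensitivity: float = 1.0            # output units per g
    temp_drift: float = 0.0             # sensitivity change per degree C
    position: tuple = (0.0, 0.0, 0.0)   # location within the rigid body (m)
    orientation: tuple = (0.0, 0.0, 0.0)  # mounting orientation (rad)

def clamp_reading(cal: SensorCalibration, raw: float) -> float:
    """Clamp a raw reading to the sensor's physical range, remove the
    at-rest offset, and convert to g using the sensitivity estimate."""
    bounded = max(cal.range_low, min(cal.range_high, raw))
    return (bounded - cal.rest_reading) / cal.sensitivity
```

A composite controller would carry one such record per sensor, plus the inter-controller distances and relative orientations listed in the text.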

Claims

  1. A method for controlling a video game using a wireless game controller, the method comprising: sensing linear and angular accelerations of the controller over a broad range of controller motion; sensing an electromagnetic alignment element over a narrow range of controller motion, less than the broad range of controller motion, the narrow range limited to an alignment between the electromagnetic alignment element and the controller; tracking the sensed linear and angular accelerations to calculate a path of the controller as the controller passes through the broad range of controller motion outside of and including the narrow range of controller motion; computing, from the calculated path, an animation path of the controller and driving an animation of an object from the animation path; dynamically correcting the animation path responsive to the sensed electromagnetic alignment element as the controller passes through the narrow range of motion; and restricting the calculated path based upon a mechanical model of a human figure.
  2. The method of claim 1, wherein the linear and angular accelerations include three linear accelerations.
  3. The method of claim 2, wherein the linear and angular accelerations include three angular accelerations.
  4. The method of claim 1, wherein the calculated path represents the controller motion with centimeter scale accuracy on a time scale of seconds.
  5. The method of claim 1, further comprising clamping the calculated path within allowable ranges.
  6. The method of claim 1, further comprising correlating the sensed linear and angular accelerations so that both three dimensional linear translation and angular orientation of the moving game controller are tracked with centimeter-scale accuracy on a time scale of seconds.
  7. The method of claim 6, further comprising providing a model of a fictional world, the model being responsive to the motion of the game controller.
  8. A game system comprising: an electromagnetic alignment element to emit light; a motion sensing wireless game controller having: a sensor to sense the light from the alignment element over a narrow range of controller motion in which the alignment element is within range of the sensor, the sensor to generate first output responsive to the light over the narrow range of motion; an accelerometer to generate second output responsive to linear motion of the controller over a broad range of motion of the controller greater than the narrow range of motion; and a gyroscope to generate third output responsive to angular motion of the controller over the broad range of motion; wherein the first, second, and third outputs collectively represent controller movement over the broad range of motion; and a tracker to receive the first output over the narrow range of motion and the second and third outputs over the broad range of motion, the tracker to produce from the second and third outputs game-controller signals that represent position and orientation of the controller over the broad range of motion and outside the narrow range of motion over time; wherein the tracker, responsive to a controller motion describing a path through the broad range of motion and intersecting the narrow range of motion, dynamically corrects the game-controller signals responsive to the first output while passing through the narrow range of motion.
  9. The game system of claim 8, wherein the tracker is part of a game display computer wirelessly coupled to the game controller, the computer to compute an animation path and orientation of the controller and to drive an animation of an object from the animation path and the orientation.
  10. The game system of claim 9, the game display computer to provide a model of a fictional world, the model being responsive to the movement of the game controller.
  11. The game system of claim 9, the computer to compute data representative of the motion in six degrees of freedom from the game-controller signals.
  12. The game system of claim 8, the controller to combine the first, second, and third outputs into data representative of the motion in six degrees of freedom.
  13. The game system of claim 8, wherein the tracker resides in the controller.
  14. The game system of claim 8, wherein the game-controller signals represent motion of the controller with centimeter scale accuracy on a time scale of seconds.
  15. The game system of claim 8, wherein the tracker clamps the game-controller signals within allowable ranges.
  16. The game system of claim 8, wherein the tracker restricts the game-controller signals based upon a mechanical model of a human figure.
  17. The game system of claim 8, the tracker to correlate the second and third outputs to track both three dimensional linear translation and angular orientation of the game controller with centimeter-scale accuracy on a time scale of seconds.
