Future Acts

We have completed animating the first act: the comedian Frank Bush. To watch that animation, click here. The next act will be the strongman Sandow the Magnificent. Our work on that act is well underway. You can see sample images and animations here.

We have completed research and developed scripts for two additional Vaudeville acts: the Irish singer Maggie Cline and the sketch comedy of the Four Cohans, whose youngest member, George M. Cohan, went on to become one of the great stars of early 20th century Broadway. We are currently seeking funding to model and animate these acts.

Live Performance Simulation System

The Live Performance Simulation System (LPSS) is a flexible set of techniques and technologies that scholars and theatre practitioners can use to simulate a wide range of performance traditions, from classical Greek theater to Japanese Noh. “Virtual Vaudeville,” a simulation of 19th century American vaudeville theatre, is the first prototype application for this system.

The basic premise of the LPSS is to allow users to fly through the theatre space to observe the performance, the theatre architecture and any of the spectators from any vantage point. This goal pushes the limits of what even the fastest PCs today are capable of achieving in real-time. In the case of Virtual Vaudeville, for example, the system needs to animate up to four photo-realistic performers on stage simultaneously (10,000 – 20,000 polygons each), in addition to 800 spectators in the audience (500-800 polygons each; click here for more information on solutions we devised for animating the audience efficiently).
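A quick worst-case tally, using the figures above, shows why this scene strains real-time rendering (the arithmetic is ours; the polygon counts are from the description above):

```python
# Back-of-envelope polygon count for the full Virtual Vaudeville scene,
# taking the upper end of each range quoted above.
performers = 4 * 20_000   # four performers at up to 20,000 polygons each
audience = 800 * 800      # 800 spectators at up to 800 polygons each
total = performers + audience
print(total)  # 720000 polygons per frame, before any culling or level-of-detail
```

At a target of even 30 frames per second, that is over 21 million polygons to animate and draw every second, which is why efficient audience-animation techniques were essential.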

To achieve the real-time performance required, we licensed a commercial game engine, Gamebryo. Gamebryo is a set of C++ subroutines that programmers can use to drive complex 3D animations in real-time. We are using the Miles Sound System to deliver 3D sound in the virtual environment, so that the sound of the performance and spectators varies as you move through the theater. Gamebryo is compatible with Windows as well as with most stand-alone gaming platforms (e.g. Xbox and PlayStation); at this time, however, we have licensed it for Windows only. Unfortunately, Gamebryo is not compatible with Macintosh. Macintosh users, however, can see pre-rendered versions of the Virtual Vaudeville animations and read the hypermedia notes using our Shockwave-based Performance Viewer.

In addition to letting users navigate freely through a theater, the Live Performance Simulation System incorporates what we call avatar mode, currently under development. In avatar mode, the user takes control of a specific spectator to respond to the performance and to interact with surrounding spectators. Click here to learn more.

We have licensed Gamebryo, not merely for Virtual Vaudeville, but for the Live Performance Simulation System in general, so that the software foundation we are building can be used to create any number of performance simulations. Our team of programmers at the Georgia Institute of Technology is creating a general-purpose interface applicable to any performance simulation. This interface provides navigational tools for the 3D environment and allows content developers to create hypermedia notes linked either to temporal cues in the performance or to 3D objects (either static or animated) in the virtual theatre. Scholars, theater practitioners and 3D animators will be able to create performance simulations using the LPSS without doing any actual programming; they will simply run a configuration application to enter the names of the 3D models, animations and sound files they have created, along with the cues and URLs for hypermedia notes. Of course, creating the models and animations themselves is a major undertaking that requires a wide variety of specialized theatrical and CG skills. If you are interested in exploring the possibility of creating your own performance simulations, contact the Virtual Vaudeville Principal Investigator, David Saltz.
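To make the configuration step concrete, here is a hypothetical sketch of the kind of information such a configuration application might collect. All of the class names, field names and file names below are invented for illustration; none of them come from the actual LPSS software.

```python
# Illustrative sketch only: the kind of asset and hypermedia-note data
# an LPSS-style configuration application might gather. All names here
# are hypothetical, not taken from the real system.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class HypermediaNote:
    url: str
    cue_time: Optional[float] = None    # temporal cue in the performance (seconds)
    object_name: Optional[str] = None   # or a static/animated 3D object in the theatre

@dataclass
class SimulationConfig:
    models: List[str] = field(default_factory=list)
    animations: List[str] = field(default_factory=list)
    sound_files: List[str] = field(default_factory=list)
    notes: List[HypermediaNote] = field(default_factory=list)

# A content developer would supply entries like these without programming:
config = SimulationConfig(
    models=["stage.nif", "performer.nif"],
    animations=["act_one.kf"],
    sound_files=["act_one.wav"],
    notes=[
        HypermediaNote(url="notes/performer_bio.html", cue_time=12.5),
        HypermediaNote(url="notes/proscenium.html", object_name="proscenium_arch"),
    ],
)
```

The key design point this sketch captures is that each hypermedia note is anchored either to a moment in the performance or to an object in the virtual theatre, matching the two linking options described above.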


Sandow lifting his fellow vaudeville performers.


Maggie Cline


Four Cohans (1896)

Avatar Mode

The first phase of the Live Performance Simulation System gives users the ability to fly through the theatre to watch the performance from any vantage point and to observe the reaction of any spectator at any time. We call this “fly-through” mode. While this mode is interactive, the user’s interactions are restricted to navigating through the space and accessing hypermedia notes.

In the second phase of the Live Performance Simulation System, currently in development, the user will be able to adopt an embodied perspective, watching the performance through the eyes of a particular spectator. This virtual spectator becomes the user’s avatar. The user can move the avatar's head to focus on different areas of the stage or auditorium, and can trigger a limited set of avatar responses, for example, applauding or laughing. Some of these responses are verbal, such as cracking a joke or heckling the performance. In these cases, the viewer selects only the generic response type, and the system produces a specific response appropriate to what is happening onstage and off. The surrounding spectators respond interactively to the viewer's avatar, giving the user a feeling of total immersion in the historical scene. Avatar mode gives each user a unique experience of the virtual performance event – just like a real performance.


 

For Virtual Vaudeville, we are designing four “avatar pairs,” consisting of an avatar spectator and a companion who is the focus of most of the avatar’s interactions. Each avatar pair represents a different socioeconomic group in 19th century America: (1) Mrs. Dorothy Shopper, a wealthy socialite attending the performance with her young daughter; (2) Mr. Luigi Calzilaio, an Italian immigrant fresh off the boat, attending the performance with his more Americanized brother; (3) Mr. Jake Spender, a young "sport" sitting next to a Chorus Girl (he may or may not strike up a relationship, depending on the viewer's choices); and (4) Miss Lucy Teacher, an African-American schoolteacher watching the performance with her boyfriend from the second balcony, where she is confined by the theatre's segregation policy. (Note that in each of these pairs, the avatar will not be the named character, but the companion.)

An “agent-based” response engine that determines how surrounding spectators react to the avatar’s actions was developed at the Moves Institute of the Naval Postgraduate School by Mike Bailey, under the direction of Dr. Michael Zyda. Scott Robertson of the Georgia Institute of Technology is creating a simple tabular interface to the response engine that allows performance scholars to define avatar and spectator interactions for performance simulations without needing to program.
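The idea of a tabular interface can be sketched as a simple table-driven lookup: a scholar fills in rows pairing avatar actions with spectator types, and the engine consults the table at run time. The rules and names below are invented for illustration and are not taken from the actual Moves Institute engine.

```python
# Illustrative sketch only: a minimal table-driven mapping of the kind
# a tabular interface to a spectator-response engine might produce.
# All actions, spectator types and reactions here are hypothetical.

RESPONSE_TABLE = {
    # (avatar action, neighbouring spectator type) -> spectator reaction
    ("heckle", "rowdy_gallery"): "cheer",
    ("heckle", "genteel_box"): "glare",
    ("applaud", "rowdy_gallery"): "applaud",
    ("applaud", "genteel_box"): "applaud_politely",
}

def spectator_reaction(action: str, spectator_type: str) -> str:
    """Look up a reaction for a neighbouring spectator,
    defaulting to ignoring the avatar when no rule matches."""
    return RESPONSE_TABLE.get((action, spectator_type), "ignore")
```

Because the rules live in a table rather than in code, a performance scholar could add or edit rows for a new simulation without touching the engine itself, which is the point of the tabular interface described above.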