Aaron Lieb
02.05.09

Thesis Statement

Concept

ProZeuxis (Live Visual Effects Table) – a compositing system, controlled by a tangible user interface and extended reality techniques, that manipulates projected graphics in support of a live performance.

Statement

The ProZeuxis system will be a compositing system that generates visual effects to enhance a live performance of any type. The system will be flexible, easily allowing the user to create expressive visuals that incorporate aspects of the performance itself, including live beat detection and motion tracking, to drive effects. Using these features, my concentration will be to implement responsiveness to improvisation in the actions of both the user and the performers.

Synopsis

ProZeuxis is a system that will be designed to control projected graphics for live performances. Much as in existing systems, a VJ using ProZeuxis will need to prepare effects and select video clips before the performance. However, unlike existing software, the user will have at their disposal a full tool set for combining their video loops and still images with procedural effects that are controlled by audio and video stimuli coming from live performers on a stage. This will allow interesting interactive effects to be created improvisationally. The nature of a table interface will also permit multiple people to use the system at once, adding a depth of complexity and uniqueness to each performance that is uncommon in other VJ software.

This technology would work best in medium-sized clubs, somewhat larger venues, and theaters. Setting up the system would require that there be a space for the VJ performer or performers to stand and view the entire stage. Some hardware, including the stage projector, camera, and microphones, will also need to be installed in the venue before the performance. These are important components that will need to communicate with a laptop running ProZeuxis in order to incorporate live audio and motion detection.

The system will be able to support different configurations of audio and video input feeds. This flexibility is an added feature, as different performances may have different setup requirements. For example, for a trio of instrumentalists performing at a smaller venue, it might make sense to have one camera, with each person's instrument separately miked and running to a mixer. This would allow the system to track each band member's position on stage from the camera while also having access to a custom-mixed signal of what each is playing. Imagine, if you will, that the trio is a drummer, an upright bassist, and a sax player. When the system detects that each is playing at a relatively similar volume, at an even mix, one visual background or environment will be shown, which the VJ can manipulate by changing the position of ReacTable markers. Now the drummer launches into a maddening drum solo while the other players remain silent. The system could detect this audio activity and morph the visual into a particle field of sparks exploding in relative time with the drummer, while also pinning the particle emitter to his camera-tracked position.
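A minimal sketch of this trio scenario, written here in Python, might look like the following. It assumes per-channel RMS levels from the mixer and a camera-tracked stage position; the names (select_state, the solo ratio, the emitter position) are hypothetical illustrations rather than the actual ProZeuxis design.

```python
import numpy as np

def rms(samples):
    """Root-mean-square level of one channel's audio buffer."""
    return float(np.sqrt(np.mean(samples ** 2)))

def select_state(levels, solo_ratio=3.0):
    """Return 'ensemble' for an even mix, or '<channel>_solo' when one
    channel is much louder than all of the others."""
    loudest = max(levels, key=levels.get)
    others = [v for name, v in levels.items() if name != loudest]
    if others and levels[loudest] > solo_ratio * max(others):
        return loudest + "_solo"
    return "ensemble"

# One frame of the trio scenario: the drummer plays, the others are silent.
buffers = {
    "drums": np.random.randn(1024) * 0.5,  # stand-in for the drum channel
    "bass": np.zeros(1024),
    "sax": np.zeros(1024),
}
levels = {name: rms(buf) for name, buf in buffers.items()}
state = select_state(levels)               # -> "drums_solo"
if state == "drums_solo":
    emitter_position = (0.7, 0.4)          # stand-in for the camera-tracked drummer
```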

Another scenario might involve a prerecorded music piece with twelve dancers on a larger stage. With this setup, you would not need a mixer between your audio input and the system, but you would need two cameras to cover the wider stage for proper tracking. Many interesting combinations of visual effects could be accomplished with such a setup. For example, a background layer of visuals might be controlled purely by the prerecorded score, to establish a reactive environment. For foreground animations, you could coordinate a color-tracking effect to analyze the movement of the dancers. Perhaps the dancers represent two rival forces, and half are clad in red while the others are in white. As the movement of the red dancers outweighs the movement of the white, a projected red avatar could be set to react with a set of animated movements. When the white dancers respond, an avatar acting as their character could retort in kind.
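The dancer scenario could be sketched along these lines, assuming OpenCV frames from one of the stage cameras. The HSV color ranges and the brightness-difference measure of "movement" are illustrative guesses that would need calibration under real stage lighting.

```python
import cv2
import numpy as np

def group_motion(prev_v, curr_hsv, lo, hi):
    """Total frame-to-frame brightness change within one costume color."""
    mask = cv2.inRange(curr_hsv, lo, hi)   # pixels matching the costume color
    diff = np.abs(curr_hsv[..., 2].astype(np.int16) - prev_v.astype(np.int16))
    return float(np.sum(diff[mask > 0]))

cap = cv2.VideoCapture(0)                  # one of the stage cameras
ok, frame = cap.read()
prev_v = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)[..., 2]
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    red = group_motion(prev_v, hsv, (0, 120, 70), (10, 255, 255))
    white = group_motion(prev_v, hsv, (0, 0, 200), (180, 40, 255))
    active = "red" if red > white else "white"  # which avatar reacts this frame
    prev_v = hsv[..., 2]
```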

Essentially, the system would be able to support many conceivable stage and performance configurations, taking into account the size of the space, the number of performers, and the types of performances being given. It would also encourage experimentation on the part of the user in preparing custom visualizations that can withstand improvisational situations. This aspect is of great importance to the concept of this project, as many of the existing ways of performing projected stage shows make it quite difficult to achieve this level of expressiveness.

Setting aside laser light shows and considering projected concert visuals strictly, there is currently a lack of expressiveness that can be viewed as a spectrum. At one end, you have the visuals for pop concerts. Every video clip is laboriously timed out, with some extra frames at the head and tail to account for timing slippage. As the song progresses, video clips are triggered to sync up with the various parts of the song, section by section. Transitions, such as fades between the looped clips, are used to smooth out sections that do not warrant a jarring cut. At the other end are the "eye candy" visuals more common in techno performances such as raves. These visuals are typically linked to the music's tempo but usually provide very little in terms of storytelling, as they tend to consist of quickly flashing colors and shapes. This can be seen as an updated and automated version of the wet-show traditions of psychedelic music, in which overhead projectors were used to display a dish of swirling colored oil paints. Both of these techniques certainly allow a performer to improvise and enhance a performance, but visually, not much more is being said than "look at the colors." In the center of this spectrum are VJs who combine both, manually transitioning between manipulated video clips and reactive, eye-candy-type visuals using a diverse lineup of equipment. Performers who are capable of striking this balance between the two most successfully produce expressive and entertaining performances.

All of these tried-and-true approaches to visual performance allow for only limited improvisation on the part of the performers involved. In the case of a sudden change in the song, a VJ will be forced to repeat clips, perhaps with a controllable effect applied to help them make sense in the context of what is now being played or otherwise performed on stage. For instance, the frame rate of a clip can be manually adjusted with a MIDI control wheel, or through software, to match an unexpected tempo. This situation is still manageable for a VJ. However, it would be better if you could apply this timing effect less manually, to an improvisationally selected clip, freeing your hands to manipulate something else simultaneously. To do this, you would need to plug the drummer's detected bpm from your audio mix channel directly into the newly selected clip's frame rate. That sounds like too many steps to perform quickly enough not to miss a beat.

The ProZeuxis implementation uses tangible markers for the purpose of addressing issues such as these. Suppose one marker node is configured to emit a proximal connection carrying the audio-detected bpm. "Proximal" means that other nodes coming closer to this audio node will have one or more of their properties affected, with a strength determined by the distance between them (see the sketch at the end of this section). For a simple looped video clip node, it would be easy to place the marker on the table and slide it closer to the audio node. The only extra step would be telling the system which property of the clip to affect with the bpm. A standard looped, full-screen video would have only four properties to choose from: scale-x, scale-y, transparency, and frame rate. Each applicable, unconfigured property would appear as a fingertip-sized bubble with a text caption, floating around the physical node on the table's backlit UI projection. As the user slides the video node toward the audio node, they could simultaneously choose to affect the "frame rate" property by extending a finger toward its floating property bubble.

This example, and a variety of other problem-solution investigations, can be worked out in this implementation of the ProZeuxis system. By using this relatively simple tangible user interface approach, the system will provide better tools for performing in the center of the visual performance spectrum. Combined with existing live video compositing techniques, this can provide a more organic and expressive user experience while also being a powerfully capable system.
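To make the proximal connection described above concrete, here is a minimal sketch of the distance-to-strength mechanic. The node model, the interaction radius, and the frames-per-beat mapping are all hypothetical illustrations, not the real ProZeuxis internals.

```python
from dataclasses import dataclass, field
import math

@dataclass
class Node:
    x: float                               # marker position in table units
    y: float
    properties: dict = field(default_factory=dict)

def proximity_strength(a, b, radius=0.3):
    """1.0 when the two markers touch, falling to 0.0 at `radius` apart."""
    d = math.hypot(a.x - b.x, a.y - b.y)
    return max(0.0, 1.0 - d / radius)

audio = Node(0.5, 0.5, {"bpm": 128.0})     # node emitting the detected bpm
clip = Node(0.6, 0.5, {"frame_rate": 24.0})  # looped video clip node

# The user has picked the clip's "frame rate" bubble; blend its native rate
# toward a bpm-derived rate as the marker slides closer to the audio node.
s = proximity_strength(audio, clip)
bpm_rate = audio.properties["bpm"] / 60.0 * 12.0  # 12 frames per beat (illustrative)
clip.properties["frame_rate"] = (1 - s) * 24.0 + s * bpm_rate
```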

Components

Laptop – A powerful laptop, with an external unit for splitting video-out to two separate projectors, would be required to run the ProZeuxis software properly. One projector is needed for the ReacTable user interface, while the other displays the final performance composition on stage.

ReacTable – The system's main interface will be a flat, table-like surface on which small plastic tracking symbols are placed and manipulated to change effects. The tracking markers, called fiducials, are detected by a camera mounted beneath the semi-transparent table, while a rear projection of the user interface is aimed upward onto this surface. As the user moves the fiducials, the projected user interface animates and reacts to these movements. The graphics projected onto this table are for the eyes of the VJ only, while the "composition" being created is a different set of graphics, separately projected toward the stage area.

MIDI Controller – The ReacTable interface's control of the system will be supplemented by a MIDI keyboard controller. Various effect strengths and thresholds are better suited to being linked to the keys, knobs, and sliders of this type of device. If the many visual node parameters had to be controlled solely through the touch interface, it would take too much time to drill down and find the one you wish to change. It is a more fluid setup to manipulate some aspects of the visuals by moving and rotating markers with one hand, while the other hand "triggers" groups of visuals on a keyboard located directly in front of the ReacTable.

Audio Mixer – Real-time analysis of audio signals is known to be a costly operation in terms of computer processing. The ProZeuxis system will therefore most likely only be able to support one master audio line-in feasibly. This audio input can be analyzed once, with the results reused by effects looking for different qualities of sound in the data. Performances with multiple instrumentalists would benefit from the ProZeuxis VJ having the separate audio input of each performer fed into its own mixer channel before being fed into the software. This would give them the ability to isolate one or more of these signals before they are mixed and sent to the computer's line-in. By changing or muting selected mixer channels, the visuals would appear to change according to which instrumentalist is controlling them.
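The analyze-once design mentioned for the master line-in might look like the following sketch. The feature names and frequency bands are illustrative assumptions; the point is that one FFT per audio block feeds every effect, so adding effects does not add analysis cost.

```python
import numpy as np

def analyze(block, sample_rate=44100):
    """Run the costly spectral analysis exactly once per audio block."""
    windowed = block * np.hanning(len(block))
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(block), 1.0 / sample_rate)
    return {
        "rms": float(np.sqrt(np.mean(block ** 2))),         # overall loudness
        "bass": float(spectrum[freqs < 250].sum()),         # low-end energy
        "brightness": float(spectrum[freqs > 4000].sum()),  # high-end energy
    }

# Every effect reads from the same cached result instead of re-running the FFT.
block = np.random.randn(1024)          # stand-in for one line-in buffer
features = analyze(block)
strobe_intensity = features["rms"]     # one effect keys off loudness
particle_rate = features["bass"]       # another keys off low-end energy
```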

Microphones – In many situations where an audio mixer is necessary, you may also need microphones. For most spaces, this kind of setup could easily be accomplished by capturing the audio with a wireless microphone system. It would be necessary to use a system that supports multiple wireless signals, with a receiver capable of outputting each signal separately to the mixer.

Stage Camera – Gestural input and general tracking of the performers are also key components of the system. One or more stage cameras would need to be mounted in the venue and aimed at the stage. These video signals would then be fed into the computer via a capture card and used to accomplish real-time tracking of events on the stage. A tracker in ProZeuxis could feasibly be configured to track a particular color, luminance values, or a specific image.
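A configurable tracker of this kind could be sketched as below, again assuming OpenCV. The same centroid routine follows either a color or a luminance target depending on which mask function is configured; the specific color range and threshold are placeholders.

```python
import cv2

def color_mask(frame, lo=(100, 120, 70), hi=(130, 255, 255)):
    """Pixels matching a configured costume color (blue, as a placeholder)."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    return cv2.inRange(hsv, lo, hi)

def luminance_mask(frame, threshold=220):
    """Pixels brighter than a configured threshold (e.g. a spotlight)."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return cv2.inRange(gray, threshold, 255)

def track(mask):
    """Centroid of the masked pixels in normalized stage coordinates."""
    m = cv2.moments(mask, binaryImage=True)
    if m["m00"] == 0:
        return None                    # target not visible this frame
    h, w = mask.shape
    return (m["m10"] / m["m00"] / w, m["m01"] / m["m00"] / h)
```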

Actors

VJ – The individual who directly accesses the ProZeuxis system and its associated hardware. They create interesting video loops, textures, generative effects, and other visual components to add to the system's visual library. They are also responsible for properly setting up the system in order to put on the performance.

Musician – A musician's actions are used as a type of audiovisual data source that is sensed by cameras and recording devices. These streams of stage data are fed into the system to aid in the manipulation of visual effects.

Dancer – A dancer's actions and movements can also be used in this way. Multiple dancers may be required to wear distinctly colored clothing so that the system can disambiguate them from one another to achieve a specific effect.

MC / Speaker – MCs, speakers, poets, narrators, and other performers of this type can also be motion tracked and have their voices processed by the system. For these kinds of performances, the content of what is being said and the dynamic range of the voice would probably be of the most interest for creating effects.

Audience – Possibly outside the scope of this project, but the system could also conceivably generate effects largely controlled by audience participation by aiming a camera at the audience as well. Hands waving in the air or some other group activity could change a tracking parameter that could be fed into any desired effect.
