Research in the AlloSphere
The AlloSphere Provides a Collaborative Research Environment
The AlloSphere Research Facility consists of a number of digital media laboratories and the three-story-high AlloSphere - a spherical space in which fully immersive, interactive, stereoscopic/pluriphonic virtual environments can be experienced. For more information about the associated media laboratories, see the Facilities page.
Unlike the single-user virtual reality "cubes" of the 1990s, the AlloSphere accommodates 20 to 30 people at once, fulfilling the promise of a communal immersive experience in art, science, entertainment, and education.
The AlloSphere is physically located in the recently completed California NanoSystems Institute (CNSI). CNSI and AlloSphere researchers regularly collaborate to showcase important nanoscale research findings in the AlloSphere.
In proximity to the AlloSphere are our associated research laboratories, run by other Media Arts and Technology professors: Four Eyes Lab, Vision Research Lab, Systemics Lab, Experimental Visualization Lab, TransLab, Simulation and Animation Lab, Professional Artists Lab, and the Center for Research in Electronic Art Technology (CREATE).
UCSB also has numerous other research units that may be of interest. For a complete listing, visit the Research Centers and Units page of the UCSB website.
The AlloSphere Supports Diverse Research Applications
For convenience, we divide the AlloSphere's research applications into two broad categories: activities that use the instrument as a research framework for immersive, multimodal environments ("inherent" research) and activities that use the AlloSphere as a functional tool for scientific exploration ("functional" research).
Application areas to pursue in the AlloSphere are diverse. Representative options include:
Arts and Entertainment
Clean "Green" Technology
Computing and Networking
Industry and Retail Business
AlloSphere Research Groups
A number of AlloSphere research groups already exist, focusing on areas such as neurobiology, new materials, fluid dynamics and turbulence, human-computer interaction (HCI), high-performance computing (HPC), panoramic high-definition video, 3D audio, panoramic and 3D visualization, and geospatial data. New groups form regularly; please contact us if you have an idea for a focus area.
AlloSphere Research Demonstrations
We have selected a few research efforts (described below) as research demonstrations; they illustrate work being conducted at both the atomic and the macroscopic level. The best way to view these demos is, of course, inside the AlloSphere in 3D.
1. Artistic Patterning and Structural Growth in New Atomic Bonding: The Multi-Center Hydrogen Bond. An Interactive Visualization and Multi-modal Representation of Unique Atomic Bonds for Alternative Fuel Sources
In this multi-center hydrogen bond demo, the host is a zinc compound in which hydrogen, the source of conductivity, replaces oxygen and forms a highly unusual multi-center bond. Simulations will allow calculations at a higher level of complexity, leading to the investigation of how bonding strength changes as hydrogen is gradually drawn out of a hydride compound - a technique for using hydrogen as an alternative energy source, as it would function in a real-world hydrogen car. The research focuses on substances that hold hydrogen like a sponge, with the hydrogen atoms bonded weakly to the crystal structure of the host material so that they can be released with a small amount of heat. Visualizations and interactive simulations are leading to new discoveries about how these materials bind hydrogen and how it can be released.
The work is both an artistic and a scientific representation, created as an interactive multi-modal installation in which one flies through the 2,000-atom lattice, navigating by the sonification of the atomic emission spectra of oxygen and zinc. The unique hydrogen bond has its own "musical voice". All sonic information comes from precise mathematical calculations that transpose the atomic emission spectra into the audio domain.
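As an illustration of the spectra-to-audio transposition described above, here is a minimal sketch that drops an optical emission line into the audible band by repeated octave division. The hydrogen Balmer wavelengths are standard values; the octave-division mapping is one plausible choice, not necessarily the installation's exact mapping.

```python
import numpy as np

C = 2.998e8  # speed of light, m/s

def transpose_to_audio(wavelength_nm, hi=20_000.0):
    """Transpose an optical emission line into the audible band by
    repeated octave division, which preserves the pitch class of the
    line (one simple mapping; the installation's is not specified here)."""
    f = C / (wavelength_nm * 1e-9)  # optical frequency in Hz
    while f > hi:
        f /= 2.0  # drop an octave
    return f

# Hydrogen Balmer lines (nm); each becomes a "voice" in the lattice.
for lam in (656.3, 486.1, 434.0, 410.2):
    print(f"{lam} nm -> {transpose_to_audio(lam):8.1f} Hz")
```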
Key faculty, postdoctoral, and graduate student researchers associated with the project: Professor Chris Van de Walle, Dr. Anderson Janotti, Professor JoAnn Kuchera-Morin, Lance Putnam, Basak Alper.
A video of this project is located on the Media page.
2. Multimodal Representation of Quantum Mechanics: The Hydrogen Atom
As the sciences increasingly rely on mathematical constructs to describe the invisible processes of nature, it is important to remain mindful of how effective empirical observation is at producing new insights. Digital systems provide not only a means of simulating models, but also a medium for communicating them through image and sound.
This work interactively visualizes and sonifies the wavefunction of an electron of a single hydrogen atom. The atomic orbitals are modeled as solutions to the time-dependent Schrödinger equation with a spherically symmetric potential given by Coulomb's law of electrostatic force. Different orbitals of the electron can be combined in superposition to observe dynamic behaviors such as photon emission and absorption.
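To make the photon-emission behavior concrete: a superposition of two stationary states oscillates at their transition frequency. Below is a minimal sketch using the standard Bohr energy levels; it is an illustration of the underlying quantum mechanics, not code from the project.

```python
import numpy as np

HBAR = 1.0545718e-34  # reduced Planck constant, J*s
EV = 1.602176634e-19  # J per eV

def hydrogen_energy(n):
    """Bohr energy levels: E_n = -13.6 eV / n^2."""
    return -13.6 * EV / n**2

# A superposition of two stationary states evolves as
#   psi(t) = c1*psi1*exp(-i*E1*t/hbar) + c2*psi2*exp(-i*E2*t/hbar),
# so observables oscillate at the transition frequency
#   f = (E2 - E1) / (2*pi*hbar),
# the frequency of the photon emitted or absorbed between the states.
E1, E2 = hydrogen_energy(1), hydrogen_energy(2)
f_photon = (E2 - E1) / (2 * np.pi * HBAR)
print(f"n=1 to n=2 transition: {f_photon:.3e} Hz")  # ~2.47e15 Hz (Lyman-alpha)
```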
The interactive component of the simulation allows one to fly through the atom with a probe that emits "stream particles", which follow the largest changes in the probability current and gradient of the electron's wavefunction. The electron probability amplitude is sonified by scanning through groups of stream particles in the space; the pitch is set by the rate at which a particular set of stream particles is scanned. This lends the sonification procedure a certain musicality, since specific pitches can be assigned to different features of the wavefunction.
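The scanning step can be illustrated with a small wavetable-style sketch in which, as described above, the scan rate sets the pitch. The amplitude data and the linear-interpolation scheme here are assumptions for illustration.

```python
import numpy as np

def scan_sonify(values, scan_rate_hz, sr=44_100, dur=1.0):
    """Scanned sonification of one group of stream particles: treat the
    sampled probability amplitudes along the group as one wavetable
    period and read it scan_rate_hz times per second, so the scan rate
    directly sets the perceived pitch."""
    n = len(values)
    t = np.arange(int(sr * dur))
    phase = (t * scan_rate_hz * n / sr) % n  # fractional read position
    idx = phase.astype(int)
    frac = phase - idx
    nxt = (idx + 1) % n
    return (1 - frac) * values[idx] + frac * values[nxt]  # linear interp

# Hypothetical amplitudes sampled along a stream-particle path:
amps = np.sin(np.linspace(0, 2 * np.pi, 64)) ** 3
audio = scan_sonify(amps, scan_rate_hz=220.0)  # scanned at an A3-ish pitch
```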
This investigation is just the beginning of an effort to multimodally represent mathematical models used in physical and theoretical sciences. By finding a common meeting ground, artists and scientists can share insights and pursue similar fundamental questions about symmetry, pattern formation, and emergence.
Key faculty and graduate student researchers associated with the project: Professor JoAnn Kuchera-Morin, Professor Luca Peliti, Lance Putnam.
3. Nano-Scaled Devices
In this project, we are developing an interactive software simulation of nano-scaled devices and structures, with atom-level visualization of those structures implemented on the 30-foot-diameter, 360-degree projection dome of the AlloSphere. When completed, this will allow a user wearing 3D stereoscopic glasses to stand in the middle of a simulation of a nano-scaled device and interact with the atoms and physical variables of that device through novel and intuitive user interfaces. It will be the world's first fully immersive simulator of nanostructured matter.
We will implement computational materials science algorithms such as molecular dynamics and density functional theory on a novel hardware accelerator that transforms a single PC workstation into a 4-teraflop supercomputer. This allows us to run nanoscale simulations two to three orders of magnitude (100x to 1000x) faster than current implementations: complex physics calculations for nano-structured devices that previously took days will take minutes. We will also be able to use this extra computational power to solve for the physical properties of much larger structures and devices than were previously possible, allowing nano-engineers to design and simulate devices composed of millions of atoms.
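For reference, the molecular-dynamics inner loop that such an accelerator would parallelize looks roughly like the sketch below: a generic Lennard-Jones force evaluation inside a velocity-Verlet integrator. This is a textbook reference implementation, not the project's accelerated kernels.

```python
import numpy as np

def lj_forces(pos, eps=1.0, sigma=1.0):
    """Pairwise Lennard-Jones forces for a small cluster of atoms.
    The O(N^2) pair loop is the hotspot a hardware accelerator targets."""
    n = len(pos)
    f = np.zeros_like(pos)
    for i in range(n):
        for j in range(i + 1, n):
            r = pos[i] - pos[j]
            d2 = r @ r
            s6 = (sigma**2 / d2) ** 3
            mag = 24 * eps * (2 * s6**2 - s6) / d2  # (-dV/dr)/r
            f[i] += mag * r
            f[j] -= mag * r
    return f

def velocity_verlet(pos, vel, dt=1e-3, steps=1000, mass=1.0):
    """Integrate Newton's equations with the velocity-Verlet scheme."""
    f = lj_forces(pos)
    for _ in range(steps):
        vel += 0.5 * dt * f / mass   # half-kick
        pos += dt * vel              # drift
        f = lj_forces(pos)           # new forces
        vel += 0.5 * dt * f / mass   # half-kick
    return pos, vel

# Tiny illustrative system: a perturbed 4-atom cluster.
rng = np.random.default_rng(0)
p, v = velocity_verlet(rng.random((4, 3)) * 2.0, np.zeros((4, 3)))
```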
Key faculty, postdoctoral, and graduate student researchers associated with the project: Professor Tobias Hollerer, Brent Oster, Professor JoAnn Kuchera-Morin.
4. Generating Audible Tones from Coherent Electron Spin Precession in a Quantum Dot
An audio synthesis model of electronic measurements on a quantum dot is the subject of another research group. Quantum dots, sometimes called artificial atoms, show promise for new sources of clean energy. The model is a literal interpretation of the electron spin precession experiments presented in the publication referenced below.

The mathematical model of the experiment was mapped directly, using wavelength as the basis for transposing optical frequencies into the audio domain. The frequency of electron spin precession is transposed from gigahertz to the audible range and thereby auralized for a three-dimensional acoustic environment. Visualizations may be derived directly and literally from the audio output and represented with animations of the Bloch-sphere diagram or intuitive graphical renderings.

The model is intended to be incorporated as a functional component in higher-level musical, compositional, and generative systems, and for that reason is built with an open architecture. Conceptually, the project follows the evolution of sound generation - from electronic pickups on acoustic instruments, through analog signal generation and digital synthesis, to now mapping the resonant qualities of a quantum structure.
The physical experiment from which the model is derived is a pump-probe measurement of coherent electron spin in a quantum dot. The sinusoidal precession of the superposition of quantum spin states is established by a laser pump pulse incident on the quantum dot device. The phase is arbitrarily perturbed by a tipping pulse that interacts with the spin precession through the optical Stark effect. The measurement establishes the feasibility of a quantum spin computing device by setting, and subsequently reading out, a single coherent quantum spin state at a rate sufficient for multiple read/write interactions within the time envelope of the coherent event. In the audible model, the wave-shaping effect of the coherent interaction of the tipping laser pulse is used to synthesize a sound grain when the frequency of spin precession is modeled within the audible range. Individual grains from the audio model are assembled into a continuous audible waveform through granular synthesis.
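A minimal sketch of the grain-construction idea follows, assuming a simple exponential decay as a stand-in for loss of coherence and whole-octave transposition from gigahertz into the audible band; the project's actual wave-shaping model is more detailed.

```python
import numpy as np

SR = 44_100  # audio sample rate, Hz

def precession_grain(f_ghz, decay_ms=30.0, transpose_octaves=25):
    """One audio grain from a decaying spin-precession oscillation.
    The GHz precession frequency is dropped by whole octaves into the
    audible band; the exponential envelope models decoherence."""
    f = f_ghz * 1e9 / 2**transpose_octaves  # e.g. 10 GHz -> ~298 Hz
    n = int(SR * 4 * decay_ms / 1000)
    t = np.arange(n) / SR
    return np.cos(2 * np.pi * f * t) * np.exp(-t / (decay_ms / 1000))

def granulate(grain, n_grains=200, hop=256):
    """Overlap-add copies of the grain into a continuous waveform."""
    out = np.zeros(hop * n_grains + len(grain))
    for k in range(n_grains):
        out[k * hop : k * hop + len(grain)] += grain
    return out / np.max(np.abs(out))

audio = granulate(precession_grain(f_ghz=10.0))
```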
Future work should focus on developing an audio model for entanglement between qubits. Intermodulations between qubits may result in novel synthetic audio processes in a 3D environment and, moreover, may provide intuitive insight into quantum interactions. As contemporary technology now taps quantum-level physical phenomena, this implementation may serve to realize new musical potential.
Reference: J. Berezovsky, M. H. Mikkelsen, N. G. Stoltz, L. A. Coldren, and D. D. Awschalom, "Ultrafast Coherent Optical Manipulation of a Single Electron Spin in a Quantum Dot," Center for Spintronics and Quantum Computation, University of California, Santa Barbara, CA 93106.
Key faculty, postdoctoral, and graduate student researchers associated with the project: Professor David Awschalom, Dennis Adderton, Professor JoAnn Kuchera-Morin, and Lance Putnam.
5. Center for Nanomedicine Research
In the Center for Nanomedicine research collaboration, we are building an interactive simulator to facilitate virtual experiments in the delivery of chemotherapy to cancerous tumors in the pancreas and liver via nanoscale particles. In the past year we reconstructed an anatomically correct human body from MRI data, with all of the vasculature connecting the pancreas and liver intact. We are currently working on the fluid-dynamics equations for blood flow through the vasculature, which will allow scientists to study flow through arteries and veins of different sizes and observe the blood in the bifurcations of the vasculature. We are also building a particle system for studying nanoscale particle flow within the fluid-dynamics simulations. As we receive more data from the experiments of the materials scientists who are building the nanoscale particles, we can simulate the precise geometries of the particles, trying various scenarios to discern which shapes bind best to the vessel walls and leak through to the organs where the tumors occur. Once we receive the binding equations from the nano researchers, we can build out the simulator for the complete experiment. This research unites an interdisciplinary group of nanoscientists spanning physics, biochemistry, chemical engineering, mechanical engineering, and fluid dynamics.
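As a simplified illustration of the governing relations, the sketch below uses steady laminar (Hagen-Poiseuille) flow and Murray's law for bifurcation radii. The team's actual fluid-dynamics simulation is far more complete; the parameter values here are illustrative only.

```python
import numpy as np

def poiseuille_flow(radius_m, length_m, dp_pa, mu=3.5e-3):
    """Volumetric flow rate Q = pi * r^4 * dP / (8 * mu * L)
    (Hagen-Poiseuille), with mu ~ 3.5 mPa*s as a typical blood viscosity."""
    return np.pi * radius_m**4 * dp_pa / (8 * mu * length_m)

def murray_daughter_radius(parent_r, n_daughters=2):
    """Murray's law at a bifurcation: r_parent^3 = sum of r_daughter^3
    (symmetric split assumed here)."""
    return (parent_r**3 / n_daughters) ** (1 / 3)

r0 = 2e-3  # 2 mm parent artery (illustrative)
r1 = murray_daughter_radius(r0)
print(f"daughter radius: {r1 * 1e3:.2f} mm")
print(f"parent flow:     {poiseuille_flow(r0, 0.05, 400):.2e} m^3/s")
```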
Key faculty, postdoctoral, and graduate student researchers associated with the project: Pablo Colapinto, John Delaney, Haru Ji, Qian Liu, Gustavo Rincon, Graham Wakefield, Dr. Matthew Wright, Professor JoAnn Kuchera-Morin, and Professor Jamey Marth.
6. Artificial Nature/Biogenerative Art
One may recall childhood experiences of playing in the flow of a river or watching the paths of marching insects: encounters with fascinating natural patterns that provoke deep insights, lucid investigations in an infinite game. We approach this trans-disciplinary subject through an audiovisual evolutionary art installation and multi-agent system entitled "Artificial Nature". The system comprises a complex, dynamic, and dissipative virtual world interweaving physico-chemical, biological, and symbolic strata, with visual and spatial sound projection and physical interfaces. Spectators can witness, control, and discover generative, abstract spatio-temporal patterns evolving from the behaviors of artificial-life agents, exploring beauty and creativity in nature and culture.
Key faculty, postdoctoral and graduate student researchers associated with the project: Haru Ji, Graham Wakefield and Professor JoAnn Kuchera-Morin.
7. The AlloBrain
The AlloBrain reconstructs an interactive 3D model of a human brain from macroscopic, organic fMRI data sets. The current model contains several layers of tissue blood flow, in which 12 "intelligent" agents interactively mine the data set for blood density levels and deliver the gathered information back to the researchers. 3D electroencephalogram (EEG) data will be superimposed on the model, with the ultimate goal of superimposing computational models of synaptic nerve response, moving the project toward the nano-scaled organic level.

The simulation contains several generative audio-visual systems, stereo-optically displayed and controlled by two wireless (Bluetooth) input devices with custom electronics integrating several MEMS sensor technologies. The first controller allows one to navigate the space with six degrees of freedom. The second contains twelve buttons that control the twelve agents; it also moves the ambient sounds spatially around the sphere. Its shape is based on the hyper-dodecahedron, a four-dimensional polytope, with its shadow projected onto three dimensions; it was developed using procedural modeling techniques and constructed on a 3D printer capable of building solid objects.

These controls, together with the immersive qualities of the AlloSphere, have allowed associated neuroscientists to explain the structure of the brain to varied audiences. This virtual interactive prototype also illustrates some of the key research topics undertaken in the AlloSphere: multimedia/multimodal computing, interactive immersive environments, and scientific data representation through art.
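A hypothetical sketch of the agent-mining idea: an agent wanders a blood-density volume and reports the densest voxel it finds. The random-walk policy and the synthetic data are assumptions for illustration, not the installation's actual agent logic.

```python
import numpy as np

def agent_report(volume, start, steps=100, rng=None):
    """One hypothetical agent: random-walk through a 3D fMRI-like volume
    of blood-density values and report the densest voxel encountered."""
    rng = rng if rng is not None else np.random.default_rng()
    pos = np.array(start)
    best_val, best_pos = volume[tuple(pos)], pos.copy()
    for _ in range(steps):
        step = rng.integers(-1, 2, size=3)  # move to a neighboring voxel
        pos = np.clip(pos + step, 0, np.array(volume.shape) - 1)
        if volume[tuple(pos)] > best_val:
            best_val, best_pos = volume[tuple(pos)], pos.copy()
    return best_pos, best_val

# Synthetic stand-in for a blood-density volume, mined by 12 agents:
vol = np.random.default_rng(0).random((64, 64, 64))
reports = [agent_report(vol, (32, 32, 32)) for _ in range(12)]
```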
Key faculty, postdoctoral, and graduate student researchers associated with the project: Professor Marcos Novak, Professor JoAnn Kuchera-Morin, Dr. Xavier Amatriain, Dr. Dan Overholt, Lance Putnam, Wesley Smith, John Thompson and Graham Wakefield.
A video of this project is located on the Media page.
8. Sonifying the Cosmic Microwave Background
We present a new technique to sonify the power spectrum of the map of temperature anisotropy in the Cosmic Microwave Background (CMB), the oldest observable light in the universe. According to the Standard Cosmological Model, the universe began in a hot, dense state, and the first 380,000 years of its existence were dominated by a tightly coupled plasma of baryons and photons, permeated by gravity-driven pressure oscillations - sound waves. The imprint of these primordial sound waves remains as light echoes in the CMB, which we measure as small-amplitude red and blue shifts in the black-body radiation of the universe, with a typical angular scale of one degree. With our software, users can observe how the temperature map and power spectrum of the CMB change in response to different compositions of baryonic matter, dark matter, and dark energy, and explore these different universes in "sound space". Our simulation is designed to enhance understanding of how the properties of the universe can be inferred from the power spectrum of CMB temperature anisotropies. We discuss the theory, the software, and potential applications in education. The work was supported by NASA and the Planck Mission.
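One illustrative way to sonify a power spectrum - not necessarily the project's exact mapping - is additive synthesis: each multipole becomes a sine partial whose loudness follows its power, so acoustic peaks land where the spectrum peaks. The toy spectrum below places a first acoustic peak near ell = 220, the scale of the CMB's one-degree feature.

```python
import numpy as np

SR = 44_100  # audio sample rate, Hz

def sonify_power_spectrum(ell, c_ell, dur=5.0, f_lo=55.0, f_hi=3520.0):
    """Additive sonification: log-map each multipole ell into [f_lo, f_hi]
    and weight its sine partial by the (normalized) power C_ell."""
    t = np.arange(int(SR * dur)) / SR
    ratio = np.log(ell / ell.min()) / np.log(ell.max() / ell.min())
    f = f_lo * (f_hi / f_lo) ** ratio
    w = c_ell / c_ell.max()
    out = sum(a * np.sin(2 * np.pi * fk * t) for a, fk in zip(w, f))
    return out / np.max(np.abs(out))

# Toy spectrum with acoustic peaks near ell ~ 220 and ell ~ 540:
ell = np.arange(2, 1200, 8).astype(float)
c_ell = (np.exp(-0.5 * ((ell - 220) / 80) ** 2)
         + 0.3 * np.exp(-0.5 * ((ell - 540) / 90) ** 2))
audio = sonify_power_spectrum(ell, c_ell)
```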
Key faculty, postdoctoral, and graduate student researchers associated with the project: Ryan McGee, Dr. Jatila van der Veen, Dr. Matthew Wright, Professor JoAnn Kuchera-Morin, Basak Alper, and Professor Philip Lubin.
9. Stereoscopic Highlighting: 2D Graph Visualization on Stereographic Displays
(Figure: two stereo pairs, top and bottom. By crossing the eyes, the reader can perceive the third dimension of stereoscopic highlighting. The bottom pair adds static visual highlighting to the stereoscopic highlighting: green nodes sit at a depth closer than the other nodes, and yellow nodes at an even closer depth plane.)
Recent scientific visualization techniques take advantage of advances in VR and 3D display technologies. For information visualization tasks, however, utilizing the third dimension is a more complicated problem. Data sets in information visualization typically have no inherent spatial encoding. Mapping data into 3D can complicate perception: occlusion, perspective, and the ensuing need for viewpoint navigation can make it harder to glean relationships among data elements. Yet 2D representations have their own limitations, such as poor scalability with data size and complexity. A prime example of a challenging problem for 2D visualization techniques is the design and layout of node-link representations of dense graphs.
Node-link diagrams aim to convey the overall structure of a graph while enabling users to identify individual links. Node-link renderings of small-world network structures, which are common in social networks, often show so many edge crossings that adjacency is hard to discern without an on-demand highlighting mechanism. 3D node-link visualizations can eliminate many edge crossings and make it easier to identify the adjacency of specific elements, so certain tasks become easier, such as enumerating the nodes reachable from a particular starting node. However, due to perspective, some nodes in a 3D rendering are drawn much closer to the virtual camera than others. With the addition of stereoscopic cues, much of the visual emphasis falls on the nodes closest to the user's viewpoint - an emphasis that is an artifact of the graph layout algorithm and the observer's viewpoint, independent of the data.

When a graph is rendered as a 2D layout, the viewpoint remains roughly equidistant from every point on the graph. 3D layouts, by contrast, often demand more viewpoint navigation: when a node of interest lies at a greater depth in virtual space, the user may have to rotate, zoom, and adjust the view angle to get an optimal view. While the user rotates a 3D node-link diagram, its 2D projection on the screen is constantly changing, so the graph appears different from different view angles, further complicating the task of building a mental map of it.
Our motivation for this project is to investigate alternative ways of making use of stereo displays for graph visualization. We propose a technique called stereoscopic highlighting that utilizes the visual emphasis provided by virtual depth to highlight points of interest on a 2D node and link diagram. Our technique utilizes stereoscopic depth to highlight regions of interest in a 2D graph by projecting these parts onto a plane closer to the viewpoint of the user. This technique aims to isolate and magnify specific portions of the graph that need to be explored in detail without resorting to other highlighting techniques like color or motion, which can then be reserved to encode other data attributes. This mechanism of stereoscopic highlighting also enables focus+context views by juxtaposing a detailed image of a region of interest with the overall graph, which is visualized at a further depth with correspondingly less detail.
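The geometry behind the technique is compact enough to sketch. Assuming a simple stereo-projection model with illustrative eye separation and screen distance, nodes on the screen plane get zero parallax, while highlighted nodes, assigned a nearer depth plane, get crossed parallax and pop forward toward the viewer.

```python
def screen_parallax(depth_m, eye_sep_m=0.065, screen_dist_m=2.0):
    """Horizontal screen parallax p = e * (z - d) / z for a point at
    viewing depth z, eye separation e, screen distance d. Points nearer
    than the screen (z < d) get negative, crossed parallax."""
    return eye_sep_m * (depth_m - screen_dist_m) / depth_m

def stereo_positions(x, y, depth_m, **kw):
    """Left/right-eye screen positions for one graph node; a 2D layout
    keeps (x, y) fixed and varies only the node's depth plane."""
    p = screen_parallax(depth_m, **kw)
    return (x - p / 2, y), (x + p / 2, y)

# Ordinary nodes sit on the screen plane; highlighted nodes are pulled
# to a nearer depth plane and so receive a stereoscopic emphasis.
normal = stereo_positions(0.1, 0.2, depth_m=2.0)     # zero parallax
highlight = stereo_positions(0.1, 0.2, depth_m=1.5)  # pops forward
```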
Key faculty, postdoctoral, and graduate student researchers associated with the project: Basak Alper, Professor JoAnn Kuchera-Morin, and Professor Tobias Hollerer.
10. Ensemble Interaction In Virtual Reality Environments Using Mobile Devices
In typical multi-user Virtual Reality Environments (VREs), one user actively manipulates the environment via interactive controls while other users observe passively. In contrast, the AlloSphere Research Group proposes a model in which users adopt roles and perform the associated tasks concurrently. Early results of this research enabled multiple scientists to probe changes in the probability current and gradient of an electron's wavefunction (see project 2); in another project, scientists can selectively call agents to report blood density levels from different parts of a human brain (see project 7).
In addition to providing interactive controls to multiple users, our current research gives users individual viewports into data visualizations. In one example (pictured at right), users perform data-mining tasks on graph visualizations: using tablet devices, they select graph nodes on the AlloSphere screen and then read the data associated with those nodes on their tablets. Since users have their own personal displays for consuming text, no text needs to be presented on the AlloSphere itself, leaving the shared screen devoted solely to the visualization. In another recent example, multiple users concurrently filter and consume data from the Seattle Public Library system on their tablets while viewing an overview of the data on the surface of the AlloSphere.
Mobile devices interact with AlloSphere applications using the app Control, available for free from both the Apple App Store and the Android Market. Control is an open-source application that allows users to define custom interfaces for controlling virtual reality, art and music software. It has been downloaded over 50,000 times since its introduction last year.
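Control speaks OpenSound Control (OSC) over the network. Below is a minimal sketch of the receiving side using the python-osc package; the /nav and /agent addresses are hypothetical stand-ins for an AlloSphere application's actual address space.

```python
from pythonosc import dispatcher, osc_server

def on_nav(addr, *args):
    """Handle a hypothetical /nav message: joystick-style x, y values."""
    x, y = args
    print(f"navigate: dx={x:+.2f} dy={y:+.2f}")

def on_agent(addr, *args):
    """Handle a hypothetical /agent message: agent id and button state."""
    agent_id, state = args
    print(f"agent {int(agent_id)} -> {'on' if state else 'off'}")

d = dispatcher.Dispatcher()
d.map("/nav", on_nav)      # mapped to a Control joystick/slider widget
d.map("/agent", on_agent)  # mapped to Control button widgets

server = osc_server.ThreadingOSCUDPServer(("0.0.0.0", 8000), d)
server.serve_forever()  # listen for OSC from tablets running Control
```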